This is just an idea; it may not be a major problem in practice, but it can at times be an inconvenience. I thought I had posted this in the "python -m" thread a little while back, but checking my history it doesn't appear I did, so I apologize if this is a duplicate.

When running a Python script directly, the directory of that script gets added to sys.path. When running as a module with "python -m", an empty string gets added to sys.path instead, which resolves to the current working directory. But, depending on the directory structure, the modules imported may not be the expected modules.

For instance, suppose you are developing a package 'mypackage' with some OS-specific code organized into a submodule or subpackage "mypackage.os". Running something like "python -m pytest" or "pylint ..." from inside the package's directory would cause an "import os" to treat the mypackage/os(.py) module or package as the top-level import and produce errors. (I've actually had this happen, which is what prompted this idea.)
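To make the shadowing concrete, here is a minimal sketch that simulates it in-process. It stands in for the mypackage/os.py case by shadowing the stdlib json module from a throwaway directory placed first on sys.path, which is roughly what "python -m" does with the working directory (the directory and module contents are invented for illustration):

```python
import pathlib
import sys
import tempfile

# Hypothetical layout: a directory containing a module named after a
# stdlib module (here "json", standing in for "mypackage/os.py").
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "json.py").write_text("SHADOW = True\n")

# Simulate "python -m": the working directory ends up first on sys.path.
sys.path.insert(0, str(tmp))
sys.modules.pop("json", None)   # force a fresh import instead of the cached one
import json                     # resolves to tmp/json.py, not the stdlib

print(hasattr(json, "SHADOW"))  # True: the stdlib module was shadowed

# Clean up so the rest of the process sees the real stdlib json again.
sys.path.remove(str(tmp))
sys.modules.pop("json", None)
```

The same resolution order is why a mypackage/os.py in the current directory wins over the stdlib os when the working directory is on sys.path.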
This is the intended and expected behaviour: when you run with "python -m", the current working directory is added to sys.path.
If you have a hierarchy such as a.b.c.d (corresponding to an a/b/c/d filesystem tree) and you run the module c from inside the sub-directory c, you cannot expect Python to guess that c is a sub-package of a and run the script as if you had invoked it from the directory containing a.
You must run c from the directory containing a, referring to it by its full dotted path: python -m a.b.c
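A minimal sketch of that layout (the directory and file names are invented for illustration; c is given a `__main__.py` so the package can be run with -m):

```shell
# Hypothetical package tree: a/b/c, where c is to be run as a module.
mkdir -p a/b/c
touch a/__init__.py a/b/__init__.py a/b/c/__init__.py
printf 'print("running a.b.c")\n' > a/b/c/__main__.py

# Run from the directory *containing* a, using the full dotted path:
python3 -m a.b.c    # prints: running a.b.c
```

Run from inside a/b/c instead, `python3 -m c` fails, because the working directory there contains no top-level package named c.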