
Hi,

Going through my email backlog.

On 11/08/2011 20:30, P.J. Eby wrote:
> At 04:39 PM 8/11/2011 +0200, Éric Araujo wrote:
>>> (By the way, both of these additions to the import protocol (i.e. the
>>> dynamically-added ``__path__``, and dynamically-created modules) apply
>>> recursively to child packages, using the parent package's ``__path__``
>>> in place of ``sys.path`` as a basis for generating a child ``__path__``.
>>> This means that self-contained and virtual packages can contain each
>>> other without limitation, with the caveat that if you put a virtual
>>> package inside a self-contained one, it's gonna have a really short
>>> ``__path__``!)
>> I don't understand the caveat or its implications.
> Since each package's __path__ is the same length or shorter than its
> parent's by default, then if you put a virtual package inside a
> self-contained one, it will be functionally speaking no different than a
> self-contained one, in that it will have only one path entry.  So, it's
> not really useful to put a virtual package inside a self-contained one,
> even though you can do it.  (Apart from it letting you avoid a
> superfluous __init__ module, assuming it's indeed superfluous.)
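
If I follow the mechanics, the situation is something like this (a sketch
with invented names, assuming the PEP's semantics as I read them):

    # Invented layout: a self-contained package containing a virtual one.
    #
    #   site-packages/
    #       spam/              <- self-contained (has an __init__ module)
    #           __init__.py
    #           ext/           <- virtual subpackage (no __init__ module)
    #               plugin.py

    import spam, spam.ext

    # The self-contained parent's __path__ has a single entry...
    print(spam.__path__)      # ['.../site-packages/spam']
    # ...and the virtual child's __path__ is derived from it entry by
    # entry, so it can never be longer -- hence the "really short
    # __path__" caveat.
    print(spam.ext.__path__)  # ['.../site-packages/spam/ext']
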
I still don’t understand why this matters or what negative effects it could have on code, but I’m fine with not understanding. I’ll trust that people writing or maintaining import-related tools will agree or complain about that item.

>> I'll just regret that it's not possible to provide a module docstring
>> to inform that this is a namespace package used for X and Y.
> It *is* possible - you'd just have to put it in a "zc.py" file.  IOW,
> this PEP still allows "namespace-defining packages" to exist, as was
> requested by early commenters on PEP 382.  It just doesn't *require*
> them to exist in order for the namespace contents to be importable.

That’s quite cool. I guess such a namespace-defining module (zc.py here) would be importable, right? Also, would it cause worse performance for other zc.* packages than if there were no zc.py?
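
For concreteness, I imagine such a zc.py would be little more than this
(my own sketch, not taken from the PEP):

    # zc.py -- sits on sys.path next to the zc/ directories that hold the
    # actual zc.* projects.  It exists only to carry a docstring; as I
    # read the PEP, a __path__ would still be generated dynamically for it
    # when a zc.* submodule is imported, so the namespace contents stay
    # importable.
    """Namespace package for zc.* distributions (used for X and Y).

    This module intentionally contains no code.
    """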

>> This was probably said on import-sig, but here I go: yet another import
>> artifact in the sys module!  I hope we get ImportEngine in 3.3 to clean
>> up all this.
> Well, I rather *like* having them there, personally, vs. having to learn
> yet another API, but oh well, whatever.

Agreed with “whatever” :) I just like to grunt sometimes.

> AFAIK, ImportEngine isn't going to do away with the need for the global
> ones to live somewhere,

Yep, but as Nick replied, at least we’ll gain one structure to rule them all.

>> Let's imagine my application Spam has a namespace spam.ext for plugins.
>> To use a custom directory where plugins are stored, or a zip file with
>> plugins (I don't use eggs, so let me talk about zip files here), I'd
>> have to call sys.path.append *and* pkgutil.extend_virtual_paths?
> As written in the current proposal, yes.  There was some discussion on
> Python-Dev about having this happen automatically, and I proposed that
> it could be done by making virtual packages' __path__ attributes an
> iterable proxy object, rather than a list:

That sounds a bit too complicated. What about just having pkgutil.extend_virtual_paths call sys.path.append? For maximum flexibility, extend_virtual_paths could have an argument to avoid calling sys.path.append.
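
Something like the following is what I have in mind; the wrapper name,
the extend_sys_path flag and the plugin paths are all invented, and I'm
assuming the PEP's pkgutil.extend_virtual_paths takes the new path entry
as its argument:

    import sys
    import pkgutil

    def add_plugin_entry(path_entry, extend_sys_path=True):
        # This is what I'd like pkgutil.extend_virtual_paths to do by
        # itself: make the new entry importable and update the
        # already-imported virtual packages in a single call.
        if extend_sys_path:
            sys.path.append(path_entry)
        # The helper proposed by the PEP (not in today's pkgutil).
        pkgutil.extend_virtual_paths(path_entry)

    add_plugin_entry('/home/user/.spam/plugins')     # a plugin directory
    add_plugin_entry('/home/user/.spam/extras.zip')  # or a zip of plugins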

>> Besides, putting data files in a Python package is held very poorly by
>> some (mostly people following the Filesystem Hierarchy Standard),
> ISTM that anybody who thinks that is being inconsistent in considering
> the Python code itself to not be a "data file" by that same criterion...
> especially since one of the more common uses for such "data" files are
> for e.g. HTML templates (which usually contain some sort of code) or GUI
> resources (which are pretty tightly bound to the code).

A good example is documentation: Having a unique location (/usr/share/doc) for all installed software makes my life easier. Another example is JavaScript files used with HTML documents, such as jQuery: Debian recently split the jQuery file out of their Sphinx package, so that there is only one library installed that all packages can use and that can be updated and fixed once and for all. (I'm simplifying; there can be multiple versions of libraries, but not multiple copies. I'll stop here; I'm not one of the authors of the Filesystem Hierarchy Standard, and I'll rant against package_data on the distutils mailing lists :)

>> A pure virtual package having no source file, I think it should have no
>> __file__ at all.

Antoine and someone else thought likewise (I can find the link if you want); do you consider it consensus enough to update the PEP?

Regards