[Distutils] Standardizing distribution of "plugins" for extensible apps

Phillip J. Eby pje at telecommunity.com
Wed Dec 8 15:11:38 CET 2004


At 01:51 AM 12/8/04 -0500, Bob Ippolito wrote:

>I don't think that's really a big deal.  The real problem, from my 
>perspective, is the interpreter-global sys.modules.  Sure, you could get 
>around it, but only if you replace Python's import machinery entirely 
>(which you can do, but that is also interpreter-global).
>
>For example, let's say I have barPackage that needs foo 1.0 and bazPackage 
>that needs foo 2.0.  How does that work?  foo 1.0 and foo 2.0 can't both 
>be sys.modules['foo'].

[Disclaimer: the rest of this post discusses possible solutions for the 
versioning problem, but please let's not get into solving that problem 
right now; it's not really a distutils issue, except insofar as the 
distutils needs to provide the metadata to allow the problem to be 
solved.  I really don't want to get into detailed design of an actual 
implementation, and am just sharing these ideas to show that solutions are 
*possible*.]

One possible solution is to place each plugin under a version-prefixed name, 
e.g. making the protocols package in PyProtocols show up as 
"PyProtocols_0_9_3.protocols".  The rest can then be accomplished with 
standard PEP 302 import hooks.

Let's say that another plugin, "PEAK_0_5a4", wants to import the protocols 
module.  The way that imports currently work, the local package is tried 
before the global non-package namespace.  So, if the module 
'PEAK_0_5a4.peak.core' tries to import 'protocols', the import machinery 
first looks for 'PEAK_0_5a4.peak.protocols' -- which the PEP 302 import 
hook plugin will be called for, since it's the loader that would be 
responsible if that module or package really existed.  However, when it 
sees that the package doesn't exist, it simply consults its dependency 
resolution info, sees that it should import 'PyProtocols_0_9_3.protocols', 
and sticks an extra reference to it in sys.modules under 
'PEAK_0_5a4.peak.protocols'.
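To make the mechanics concrete, here is a minimal sketch of that aliasing 
finder using today's importlib spelling of PEP 302 (not the machinery as it 
existed in 2004).  The module objects, the DEPENDENCIES mapping, and the 
versioned names are stand-ins for real installed plugins and installer-written 
metadata:

```python
import sys
import types
import importlib
import importlib.abc
import importlib.util

# Simulate an installed versioned package (assumed names from the example).
versioned = types.ModuleType("PyProtocols_0_9_3.protocols")
sys.modules["PyProtocols_0_9_3.protocols"] = versioned

# Stub out the requesting plugin's package hierarchy.
for name in ("PEAK_0_5a4", "PEAK_0_5a4.peak"):
    pkg = types.ModuleType(name)
    pkg.__path__ = []  # mark it as a package
    sys.modules[name] = pkg

# Hypothetical dependency-resolution info an installer might record:
# for each plugin, which versioned module satisfies each bare name.
DEPENDENCIES = {
    "PEAK_0_5a4": {"protocols": "PyProtocols_0_9_3.protocols"},
}

class _AliasLoader(importlib.abc.Loader):
    """Loader that hands back an already-imported module object."""
    def __init__(self, module):
        self._module = module
    def create_module(self, spec):
        return self._module  # the import system caches it in sys.modules
    def exec_module(self, module):
        pass                 # already executed; nothing to do

class VersionAliasFinder(importlib.abc.MetaPathFinder):
    """When a plugin's package asks for a bare dependency name, resolve
    it to the versioned module recorded for that plugin."""
    def find_spec(self, fullname, path=None, target=None):
        root = fullname.partition(".")[0]
        leaf = fullname.rpartition(".")[2]
        real = DEPENDENCIES.get(root, {}).get(leaf)
        if real is None or real not in sys.modules:
            return None
        return importlib.util.spec_from_loader(
            fullname, _AliasLoader(sys.modules[real]))

sys.meta_path.insert(0, VersionAliasFinder())

# 'PEAK_0_5a4.peak.protocols' now resolves to the versioned module,
# and an extra reference to it lands in sys.modules under that name.
mod = importlib.import_module("PEAK_0_5a4.peak.protocols")
assert mod is versioned
```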

It's messy, but it would work.  There are a few sticky points, however:

* "Relative-then-absolute" importing is considered bad and is ultimately 
supposed to go away in some future Python version

* Absolute imports done dynamically will not be trapped (e.g. PEAK's "lazy 
import" facility) unless there's an __import__ hook also used

* Modules that do things with '__name__' may act strangely

(Of course, a more sophisticated version of this technique could prefix only 
those packages that have more than one version installed and required, which 
would reduce the reach of these issues quite a bit.)

Another possible solution is to use the Python multi-interpreter API, 
wrapped via ctypes or Pyrex or some such, and using an interpreter per 
plugin.  Each interpreter has its own builtins, sys.modules and sys.path, 
so each plugin sees the universe exactly as it wants to.

And, there are probably other solutions as well.  I bring these up only to 
point out that it's possible to get quite close to a solution with only PEP 
302 import hooks, without even trapping __import__.  IMO, as long as we 
provide adequate metadata to allow a conflict resolver to know who wants 
what and who's got what, we have what we need for the first round of design 
and implementation: the binary format.

Once a binary format exists, it becomes possible for lots of people to 
experiment with creating installers, registries, conflict resolvers, 
signature checkers, autoupdaters, etc.  But until we have a binary format, 
it's all just hot air.
