[Distutils] Standardizing distribution of "plugins" for
Phillip J. Eby
pje at telecommunity.com
Wed Dec 8 14:35:34 CET 2004
At 11:50 PM 12/7/04 -0500, Bob Ippolito wrote:
>On Dec 7, 2004, at 10:40 PM, Phillip J. Eby wrote:
>>Many applications need or want to be able to support extension via
>>dynamic installation of "plugin" code, such as Zope Products, Chandler
>>Parcels, and WSGI servers' "application" objects. Frequently, these
>>plugins may require access to other plugins or to libraries of other
>>Python code. Currently, application platforms must create their own
>>binary formats to address their specific requirements, but these formats
>>are of course specific to the platform and not portable, so it's not
>>possible to package a third-party module just once, and deploy it in any
>>Python application platform.
>>Although each platform may have its own additional requirements for the
>>contents of such a "plugin", the minimum basis for such plugins is that
>>they include Python modules and other files, and import and export
>>selected modules. Although the standard distutils pattern is to use a
>>platform's own packaging system, this really only makes sense if you are
>>dealing with Python as a language, and not as an application
>>platform. Platform packaging doesn't make sense for applications that
>>are end-user programmable, because even if the core application can be
>>installed in one location, each instance of use of that application (e.g.
>>per user on a multi-user system) may have its own plugins installed.
>Is anything you're proposing NOT applicable to Python packages also?
Not at "level 1", no. The other levels are definitely plug-in specific,
though, apart from simple import/export resolution.
However, I'm focusing on plugins because 1) that's my use case, and 2) I'd
like to keep it focused on deliverables, and not let the effort wander off
into some kind of vague "we need a CPAN clone" sort of discussion. :)
In particular, I want to keep PEP 262 and any download or installation
tools completely out of scope at level 1.
> A lot of the issues you're proposing to solve are problems we have for
> more than just plugins. Though, I suppose it's a lot easier to propose a
> new plugin mechanism than a major overhaul to the module system ;)
>>* A PEP 302 importer that supports running of code directly from plugin
>>distributions, and which can be configured to install or extract C
>>extension modules. (This will serve as a base for platform developers to
>>create their platform-specific plugin registries, installation
>>mechanisms, etc., although there will be opportunity for
>>standardization/code sharing at this level, too.)
>This importer also needs to grow methods so that these plugins can
>introspect themselves and ask for things (metadata that may be in the zip,
>etc.). Something smarter than zipimport's get_data would be nice too.
Yes, but I consider that part of level 2. For level 1, I only want to
enable normal PEP 302 get_data() support, which already puts us roughly on
par with OSGi's level 1.
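To make the level 1 scope concrete: the plain PEP 302 finder/loader protocol, plus its optional get_data() hook, is already enough to serve both modules and resources. Here is a minimal sketch; the class name and the in-memory "archive" are illustrative stand-ins for a real plugin file, not part of any proposal:

```python
import sys
import types

class DictImporter:
    """Toy PEP 302 finder/loader serving modules and resources from an
    in-memory dict, standing in for a real plugin archive."""

    def __init__(self, modules, data):
        self.modules = modules  # module name -> source text
        self.data = data        # resource path -> bytes

    def find_module(self, fullname, path=None):
        # PEP 302 finder protocol: claim only modules we contain
        return self if fullname in self.modules else None

    def load_module(self, fullname):
        # PEP 302 loader protocol
        mod = sys.modules.setdefault(fullname, types.ModuleType(fullname))
        mod.__loader__ = self
        exec(self.modules[fullname], mod.__dict__)
        return mod

    def get_data(self, path):
        # PEP 302's optional resource hook -- all level 1 needs
        try:
            return self.data[path]
        except KeyError:
            raise IOError("no such resource: %r" % path)

importer = DictImporter(
    modules={"demo_plugin": "ANSWER = 42\n"},
    data={"demo_plugin/metadata.txt": b"Name: demo\n"},
)
# Registering on sys.meta_path would let `import demo_plugin` find it
# under the original PEP 302 protocol; here we drive the loader directly.
mod = importer.load_module("demo_plugin")
print(mod.ANSWER)  # 42
```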
>Also, if you allow extraction of C extension modules, you'll probably also
>have to allow extraction of dependent dlls and whatnot.. which is a real
>mess. For dependent dynamic libraries on Darwin, this is a *real* mess,
>because the runtime linker is only affected by environment variables at
>process startup time. I can only think of two solutions to this for Darwin:
>(a) build an executable bundle and execve it at process startup, dropping
>the dependent libraries inside that executable bundle
>(b) have some drop location for dependent libraries and C extensions and
>rewrite the load commands in them before loading (which may fail if there
>isn't enough wiggle room in the header)
With regard to 'b', I'm not quite sure I understand the part about rewriting
load commands. Are you saying that on Darwin, you have no
LD_LIBRARY_PATH? Because, wouldn't it suffice for the application to have
that defined when it starts, and install the libraries on that path? What
am I missing, here?
IOW, if you have a directory set up on LD_LIBRARY_PATH or its equivalent,
can't you just dump the libraries and C extensions there?
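For what it's worth, the per-platform wrinkle could be isolated in one small helper. This is only a sketch of the idea under discussion -- the function name and the re-exec strategy are invented for illustration; on Darwin the variable is DYLD_LIBRARY_PATH, and the linker reads it only at process startup, which is why the application would have to relaunch itself:

```python
import os
import sys

def env_with_lib_path(libdir, env=None, platform=sys.platform):
    """Return a copy of the environment with a plugin's extracted
    native-library directory prepended to the dynamic linker's search
    path.  Because Darwin's linker reads this only at startup, an
    application would need to re-exec itself for it to take effect."""
    env = dict(os.environ if env is None else env)
    var = "DYLD_LIBRARY_PATH" if platform == "darwin" else "LD_LIBRARY_PATH"
    old = env.get(var)
    env[var] = libdir if not old else libdir + ":" + old
    return env

# A platform could then relaunch itself so the linker sees the path:
#   os.execve(sys.executable, [sys.executable] + sys.argv,
#             env_with_lib_path("/var/myapp/plugin-libs"))
```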
>>* additions to setup() metadata to support declaration of:
>> * modules required/provided, with version information
>Does this mean sys.modules keys will also have a version in them? How do
>you import cross-plugin or cross-package in this scenario?
For level 1, I'm only concerned with ensuring that for some plugin X, it's
possible to construct a sys.path that allows its dependencies to be
imported. Simultaneous resolution of conflicting dependencies from
multiple plugins is a level 2 goal, and I have several possible solutions
in mind for it, but I don't want to bring in distractions at level 1. As
long as the plugin format contains the necessary metadata for level 2 tools
to handle it, I think we should go ahead and get level 1 implemented.
In any case, there aren't any platforms that cleanly support such
cross-plugin version skew today, so it's not like we'll be going
backwards. We'll just be going forwards, once the base format exists.
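The level 1 goal above -- construct a sys.path that lets plugin X's dependencies be imported -- amounts to a transitive walk over the plugin metadata. A sketch, where the metadata mapping layout is invented purely for illustration:

```python
def build_plugin_path(plugin, plugins):
    """Given hypothetical metadata mapping each plugin name to
    (archive_path, requires_list), return a sys.path prefix that makes
    `plugin` and its transitive dependencies importable."""
    seen, order = set(), []

    def visit(name):
        if name in seen:        # also guards against dependency cycles
            return
        seen.add(name)
        archive, requires = plugins[name]
        for dep in requires:
            visit(dep)
        order.append(archive)   # dependencies land before their users

    visit(plugin)
    return list(reversed(order))  # plugin first, deepest deps last

plugins = {
    "editor":  ("/plugins/editor-1.0.zip",  ["textlib"]),
    "textlib": ("/plugins/textlib-2.1.zip", []),
}
print(build_plugin_path("editor", plugins))
# ['/plugins/editor-1.0.zip', '/plugins/textlib-2.1.zip']
```

A level 2 resolver could later replace this naive walk with one that arbitrates between conflicting version requirements, without changing the on-disk format.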
>> * platform metadata for C extensions contained in the plugin distribution
>How do you build one of these that has C extensions for multiple
>platforms? Multiple variants of the same platform? Multiple Python
>runtime versions? It's not always possible to cross-compile from platform
>A to B for arbitrary A and B.
I don't want to try. I'd be fine with saying that a plugin file with C
extensions is always specific to a single platform, where "platform" is a
specified processor and set of operating system versions. It might be nice
to have a tool that can take a distutils-packaged source distro and build a
plugin from it, but I think that's a separate issue.
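A per-file platform tag along those lines could simply be derived from the build environment. The tag format below is made up for illustration, not a proposed standard:

```python
import sys
import sysconfig

def plugin_platform_tag():
    """Combine the OS/processor platform string with the Python
    version, so that a binary plugin file declares exactly one
    target, e.g. 'linux-x86_64-py3.11'."""
    return "%s-py%d.%d" % (sysconfig.get_platform(),
                           sys.version_info[0], sys.version_info[1])

print(plugin_platform_tag())
```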
>>* Entry point for dynamic integration (technically a level 2 feature, but
>>it'd be nice to reserve the header and define its syntax)
>What do you mean by "entry point" and "dynamic integration"?
In OSGi, there's a "Bundle-Activator" field that carries the name of a
class in the package that will be instantiated and have its 'start()' and
'stop()' methods called when the plugin is started or stopped by the
application platform. The methods get passed a BundleContext object that
provides them with lots of goodies for accessing plugin metadata, and
registering or discovering services between plugins. So, the "entry point"
would probably be a class, and "dynamic integration" means offering
services, starting threads or servers, etc.
Detailed design of these facilities should be left to levels 2 and 3, so as
not to hold up the base system.
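In Python terms, an OSGi-style activator might look like the sketch below. PluginContext, Activator, and the service-registry methods are all hypothetical stand-ins for whatever design levels 2 and 3 eventually settle on:

```python
class PluginContext:
    """Hypothetical analogue of OSGi's BundleContext: gives an
    activator access to plugin metadata and a service registry."""

    def __init__(self, metadata):
        self.metadata = metadata
        self._services = {}

    def register_service(self, name, obj):
        self._services[name] = obj

    def get_service(self, name):
        return self._services.get(name)

class Activator:
    """Hypothetical entry-point class named in a plugin's metadata;
    start()/stop() bracket the plugin's dynamic integration."""

    def start(self, context):
        # offer a service to other plugins
        context.register_service(
            "greeter",
            lambda: "hello from %s" % context.metadata["name"])

    def stop(self, context):
        pass  # withdraw services, stop threads, etc.

ctx = PluginContext({"name": "demo"})
Activator().start(ctx)
print(ctx.get_service("greeter")())  # hello from demo
```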
>>* Update URL (for obtaining the "latest" version of the plugin)
>How about some kind of public key as well, so that if you visit the update
>URL you will know if the new package was provided by the same author or not?
Sounds like a cool idea, although I'm not sure how you could sign a zipfile
from inside the zipfile. Unless the idea is to stick some extra data on
the front or back?
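One way to stick extra data on the back without disturbing the zip contents is a fixed-size trailer. A toy sketch, using an HMAC purely as a stand-in -- a real scheme would use the author's public key, so that verifiers need no shared secret:

```python
import hashlib
import hmac

SIG_LEN = 32  # SHA-256 digest size

def sign_archive(payload, key):
    """Append a 32-byte MAC to the archive bytes."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_archive(signed, key):
    """Return the original bytes if the trailer checks out, else None."""
    payload, sig = signed[:-SIG_LEN], signed[-SIG_LEN:]
    good = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(sig, good) else None
```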
>If we require that modules specify a __version__ that is a "constant", it
>would be easy to parse... When you "link" this bundle, it could
>automatically say that it requires a version >= to the version it saw when
>it was scanned for dependencies (unless explicitly specified to be more or
Interesting idea. Of course, __version__ often doesn't exist, and when it
does, it often consists of CVS tag spew.
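The "constant __version__" suggestion is easy to check statically: accept only literal assignments and reject computed ones. A sketch using the ast module (names here are illustrative):

```python
import ast

def scan_version(source):
    """Extract a module-level __version__ only if it is a literal
    constant, per the suggestion that versions be statically
    parseable at 'link' time."""
    for node in ast.parse(source).body:
        if (isinstance(node, ast.Assign)
                and any(isinstance(t, ast.Name) and t.id == "__version__"
                        for t in node.targets)
                and isinstance(node.value, ast.Constant)):
            return node.value.value
    return None  # absent, or not a constant

print(scan_version("__version__ = '1.2'\n"))          # 1.2
print(scan_version("__version__ = get_version()\n"))  # None
```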
>In any case, I think versioning is a really hard problem, especially in
>Python due to sys.modules and the import mechanism, so I think that this
>task should be deferred.
We need to distinguish between being able to resolve dependencies and being
able to resolve conflicts. We can focus on resolving dependencies now, and
conflicts later. There are several possible conflict-resolution mechanisms,
each suitable for different scenarios.