Re: [Python-Dev] PEP 376 proposed changes for basic plugins support
At 02:03 AM 8/2/2010 +0200, Tarek Ziadé wrote:
but then we would be back to the problem mentioned about entry points: installing projects can implicitly add a plugin and activate it, and break existing applications that iterate over entry points without further configuration. So being able to disable plugins from the beginning seems important to me.
So which are these apps that don't allow configuration, and which are the plugins that break them? Have the issues been reported so that the authors can fix them?

ISTM that the issue can only arise in cases where you are installing plugins to a *global* environment, rather than to an environment specific to the application. In the case of setuptools, for example, it's expected that a project will use 'setup_requires' to identify the plugins it wishes to use, apart from any that were intentionally installed globally. (The requested plugins are then added to sys.path only for the duration of the setup script execution.)

Other applications have plugin directories where their plugins are to be installed, and still others have explicit configuration to enable named plugins. Even in the worst-case scenario, where an app has no plugin configuration and no private plugin directory, you can still control plugin availability by installing plugins to the directory where the application's main script is located, or by pointing PYTHONPATH at a directory you've chosen to hold the plugins of your choice.

So without specific examples of why this is a problem, it's hard to see why a special Python-specific set of configuration files is needed to resolve it, vs., say, encouraging application authors to use the available alternatives for doing plugin directories, config files, etc.
On Mon, Aug 2, 2010 at 3:06 AM, P.J. Eby wrote:
So without specific examples of why this is a problem, it's hard to see why a special Python-specific set of configuration files is needed to resolve it, vs. say, encouraging application authors to use the available alternatives for doing plugin directories, config files, etc.
I don't have a specific example in mind, and I must admit that if an application does the right thing (provides the right configuration file), this activate feature is not useful at all. So it seems to be a bad idea.

I propose that we drop the PLUGINS file idea and add a new metadata field called Provides-Plugin in PEP 345, which will contain the info I've described minus the state field. This will allow us to expose plugins at PyPI.

IOW, have entry points like setuptools provides, but in a metadata field instead of an entry_points.txt file.

Tarek

--
Tarek Ziadé | http://ziade.org
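[For concreteness, a Provides-Plugin field in PKG-INFO might have looked something like this. This is a purely hypothetical sketch: the thread never pinned down the value syntax, so the entry-point-style "name = module:attribute" form shown here is an assumption borrowed from setuptools.]

```
Metadata-Version: 1.2
Name: coolplugin
Version: 0.1
Provides-Plugin: reporter = coolplugin.report:HTMLReporter
```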
Tarek Ziadé wrote:
I don't have a specific example in mind, and I must admit that if an application does the right thing (provide the right configuration file), this activate feature is not useful at all. So it seems to be a bad idea.
I propose that we drop the PLUGINS file idea and we add a new metadata field called Provides-Plugin in PEP 345, which will contain the info I've described minus the state field. This will allow us to expose plugins at PyPI.
IOW, have entry points like setuptools provides, but in a metadata field instead of an entry_points.txt file.
Do we really need to make Python packaging even more complicated by adding support for application-specific plugin mechanisms?

Packages can already work as application plugins by simply defining a plugins namespace package and then placing the plugin packages into that namespace.

See Zope for an example of how well this simple mechanism works out in practice: it simply scans the "Products" namespace for sub-packages and then loads each sub-package it finds to have it register itself with Zope.

--
Marc-Andre Lemburg
eGenix.com Professional Python Services directly from the Source (#1, Aug 02 2010)
Python/Zope Consulting and Support ... http://www.egenix.com/ mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/
On 12:21 pm, mal@egenix.com wrote:
Do we really need to make Python packaging even more complicated by adding support for application-specific plugin mechanisms?

Packages can already work as application plugins by simply defining a plugins namespace package and then placing the plugin packages into that namespace.

See Zope for an example of how well this simple mechanism works out in practice: it simply scans the "Products" namespace for sub-packages and then loads each sub-package it finds to have it register itself with Zope.
This is also roughly how Twisted's plugin system works. One drawback, though, is that it means potentially executing a large amount of Python in order to load plugins. This can build up to a significant performance issue as more and more plugins are installed.

Jean-Paul
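[Jean-Paul's cost concern is easy to see in a minimal sketch of the namespace-scan approach. The names here are assumed for illustration; this is not Zope's or Twisted's actual code. The key property is that discovery and loading are the same step, so every installed plugin is imported at startup.]

```python
import importlib
import pkgutil

def load_plugins(namespace_pkg):
    """Import every module or sub-package found under a plugins namespace.

    Each import executes arbitrary module-level Python, which is where
    the per-plugin startup cost comes from: the more plugins installed,
    the slower this loop gets.
    """
    loaded = []
    prefix = namespace_pkg.__name__ + "."
    for info in pkgutil.iter_modules(namespace_pkg.__path__, prefix):
        loaded.append(importlib.import_module(info.name))
    return loaded
```

[A metadata-based registry, by contrast, would let an application list the available plugins without importing any of them.]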
On 02/08/2010 13:31, exarkun@twistedmatrix.com wrote:
This is also roughly how Twisted's plugin system works. One drawback, though, is that it means potentially executing a large amount of Python in order to load plugins. This can build up to a significant performance issue as more and more plugins are installed.
unittest will solve this problem by having plugins explicitly enabled in its own configuration system, and possibly managed through a separate tool like a plugins subcommand. The full package list will *only* need to be scanned when managing plugins, not during normal execution.

Having this distutils2-supported "plugin declaration and discovery" will be extremely useful for the unittest plugin system. Given that plugins may need configuring after installation, and tools that handle both activation and configuration can be provided, it doesn't seem a heavy cost.

The downside is that installing and activating plugins are two separate steps. Given that each project can have a different set of plugins enabled, I don't see a way round it.

Michael
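[To make "explicitly enabled in its own configuration system" concrete, a per-project file might look something like the following. File name, section names, and options are all hypothetical; unittest's actual configuration format was still being designed at the time.]

```ini
# unittest.cfg - hypothetical per-project plugin configuration
[unittest]
plugins =
    coverage_reporter
    growl_notifier

[coverage_reporter]
# plugin-specific settings live alongside the activation switch
report = html
```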
-- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (”BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
Michael Foord wrote:
unittest will solve this problem by having plugins explicitly enabled in its own configuration system, and possibly managed through a separate tool like a plugins subcommand. The full package list will *only* need to be scanned when managing plugins, not during normal execution.
Having this distutils2 supported "plugin declaration and discovery" will be extremely useful for the unittest plugin system. Given that plugins may need configuring after installation, and tools that handle both activation and configuration can be provided, it doesn't seem a heavy cost.
The downside to this is that installing and activating plugins are two separate steps. Given that each project can have a different set of plugins enabled I don't see a way round it.
You might want to take a look at the Trac plugin system which works in more or less the same way: http://trac.edgewall.org/wiki/TracPlugins

Since applications tend to have a rather diverse set of needs for plugins, I don't think we should add plugins support to PEP 376. Users of applications will not want to edit a single configuration file to maintain plugins of many different applications (they might break some other application doing so), and sys admins will have trouble with such a setup as well (they usually want to have control over which plugins get used, for various reasons).

In the end, you'd have a system-wide plugin configuration (maintained by the sys admin), a per-user one (with local customizations) and a per-application one (providing application-specific defaults) - which only increases complexity and doesn't really solve anything.

Instead, I'd suggest letting each application do its own little thing to manage plugins, in a complex or simple way, with or without configuration, and have them all live happily side by side. The stdlib should really only provide tools to applications and make useful suggestions, not try to enforce application design choices. I think that's simply out of scope for the stdlib.

Tarek: what you might want to do is add new type fields to PEP 345, making it easier to identify and list packages that work as plugins for applications, e.g.

    Type: Plugin for MyCoolApp

The MyCoolApp could then use the Type field to identify all installed plugins, get their installation directories, etc., and work on from there.

Whether or not to use an installed plugin is really not within the scope of Python's packaging system. This is something the application must provide in its config file, together with possible additional sections to configure a particular plugin.
On 02/08/2010 20:36, M.-A. Lemburg wrote:
You might want to take a look at the Trac plugin system which works in more or less the same way:
Ouch. I really don't want to emulate that system. For installing a plugin for a single project the recommended technique is:

  * Unpack the source. It should provide a setup.py.
  * Run:

        $ python setup.py bdist_egg

    Then you will have a *.egg file. Examine the output of running python to find where this was created.

  * Once you have the plugin archive, you need to copy it into the plugins directory of the project environment.

For global plugins it just uses entry points, which is similar to the functionality we are suggesting adding... However note:

    Unlike plugins installed per-environment, you'll have to explicitly enable globally installed plugins via trac.ini.

Really this sounds *astonishingly* like the system we are proposing. :-) (Global discovery with per-application choice about whether or not installed plugins are actually used.)
Since applications tend to have a rather diverse set of needs for plugins, I don't think we should add plugins support to PEP 376.
We are really just suggesting adding entry points.
Users of applications will not want to edit a single configuration file to maintain plugins of many different applications
This we are not proposing. Nor were we ever proposing it. The single file that was proposed (and in my understanding is no longer proposed) was to be maintained by distutils2 *anyway*.
(they might break some other application doing so) and sys admins will have trouble with such a setup as well (they usually want to have control over which plugins get used for various reasons).
In the end, you'd have a system wide plugin configuration (maintained by the sys admin), a per user one (with local customizations) and a per application one (providing application-specific defaults) - which only increases complexity and doesn't really solve anything.
We simply provide information about the availability of plugins. System administrators or users can control the use of this information (and the plugins) as per their own policies.
Instead, I'd suggest to let each application do its own little thing to manage plugins, in a complex or simple way, with or without configuration, and have them all live happily side-by-side.
The stdlib should really only provide tools to applications and make useful suggestions, not try to enforce application design choices. I think that's simply out of scope for the stdlib
Well, a tool for application developers is pretty much all that is being proposed.

All the best,

Michael Foord
Michael Foord wrote:
Really this sounds *astonishingly* like the system we are proposing. :-) (Global discovery with per-application choice about whether or not installed plugins are actually used).
The difference being that Trac is usually hosted using a separate Python installation, so the per-application choice is really a per-Trac-instance choice. But yes, the system you are proposing does sound a lot like what Trac uses, and it works well - for that one application.
Since applications tend to have a rather diverse set of needs for plugins, I don't think we should add plugins support to PEP 376.
We are really just suggesting adding entry points.
Tarek's email sounded a lot like the attempt to come up with a universal plugin system, both in terms of managing installed plugins and their configuration. Perhaps I've just missed some twist in the thread :-)
Users of applications will not want to edit a single configuration file to maintain plugins of many different applications
This we are not proposing. Nor were we ever proposing it. The single file that was proposed (and in my understanding is no longer proposed) was to be maintained by distutils2 *anyway*.
Sorry, I was referring to the plugins.cfg file used for enabling the plugins, not the PLUGINS file used by installers: http://mail.python.org/pipermail/python-dev/2010-August/102627.html
Well, a tool for application developers is pretty much all that is being proposed.
Right, but one which has consequences for users of applications relying on the feature. setuptools was also "just" a tool for application developers, but one which had some serious side-effects for users. Let's please not play the same trick again, and be more careful about the user side of things; e.g. plugin configuration should not be part of a Python packaging system.

Plugin discovery is useful, but doesn't really require yet another lookup file. The few bits of extra information could easily be placed into the distribution meta-data of PEP 345.

Perhaps the main motivation behind adding a new PLUGINS file is to reduce the overhead of having to scan dozens of meta-data .dist-info directories? If that's the case, then it would be better to come up with an idea of how to make access to that meta-data available in a less I/O-intense way, e.g. by having pip or other package managers update a central SQLite database cache of the data found on disk.
At 10:37 PM 8/2/2010 +0200, M.-A. Lemburg wrote:
If that's the case, then it would be better to come up with an idea of how to make access to that meta-data available in a less I/O intense way, e.g. by having pip or other package managers update a central SQLite database cache of the data found on disk.
Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.

Btw, while adding PLUGINS to PEP 376 is a new proposal, it's essentially another spelling of the existing entry_points.txt used by eggs; it changes the format to CSV instead of .ini, and adds "description" and "type" fields, but drops requirements information, and I'm not sure if it can point to arbitrary objects the way entry_points.txt can.

Anyway, entry_points.txt has been around enough years in the field that the concept itself can't really be called "new" - it's actually quite proven. Checking http://nullege.com/codes/search/pkg_resources.iter_entry_points/call , I find 187 modules using just that one entry points API. Some projects do have more than one module loading plugins, but the majority of those 187 appear to be different projects.

Note that that's modules *loading plugins*, not plugins being provided... so the total number of PyPI projects using entry points in some way is likely much higher, once you add in the plugins that these 187 lookups are, well, looking up.
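[The consuming side that those 187 modules implement is typically just a few lines against the pkg_resources API. A sketch, with the group name "myapp.plugins" assumed for illustration; each application picks its own group. Here each entry point is loaded eagerly for simplicity; a real application might defer .load() until the plugin is actually enabled.]

```python
import pkg_resources

def iter_plugins(group="myapp.plugins"):
    """Yield (name, object) pairs for every entry point advertised
    under `group` by any installed distribution.

    Discovery itself reads only metadata; the import happens when
    .load() is called on an individual entry point, as done below.
    """
    for entry_point in pkg_resources.iter_entry_points(group):
        yield entry_point.name, entry_point.load()
```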
P.J. Eby wrote:
At 10:37 PM 8/2/2010 +0200, M.-A. Lemburg wrote:
If that's the case, then it would be better to come up with an idea of how to make access to that meta-data available in a less I/O intense way, e.g. by having pip or other package managers update a central SQLite database cache of the data found on disk.
Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.

Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
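[A minimal sketch of the cache idea; all names here are assumptions, since nothing like this was specified in the thread. The point is that the database is purely derived data, rebuilt from the .dist-info directories found on disk, so deleting it is always safe.]

```python
import sqlite3

def build_cache(db_path, distributions):
    """Rebuild a metadata cache from scratch.

    `distributions` is an iterable of (name, version, metadata_dir)
    tuples as found by scanning site-packages; the cache only ever
    mirrors the filesystem, never replaces it.
    """
    conn = sqlite3.connect(db_path)
    with conn:
        conn.execute("DROP TABLE IF EXISTS dist")
        conn.execute(
            "CREATE TABLE dist (name TEXT PRIMARY KEY, "
            "version TEXT, metadata_dir TEXT)"
        )
        conn.executemany("INSERT INTO dist VALUES (?, ?, ?)", distributions)
    return conn

def lookup(conn, name):
    """Return (version, metadata_dir) for one distribution, or None."""
    return conn.execute(
        "SELECT version, metadata_dir FROM dist WHERE name = ?", (name,)
    ).fetchone()
```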
Btw, while adding PLUGINS to PEP 376 is a new proposal, it's essentially another spelling of the existing entry_points.txt used by eggs; it changes the format to csv instead of .ini, and adds "description" and "type" fields, but drops requirements information and I'm not sure if it can point to arbitrary objects the way entry_points.txt can.
Anyway, entry_points.txt has been around enough years in the field that the concept itself can't really be called "new" - it's actually quite proven. Checking http://nullege.com/codes/search/pkg_resources.iter_entry_points/call , I find 187 modules using just that one entry points API.
Some projects do have more than one module loading plugins, but the majority of those 187 appear to be different projects.
Note that that's modules *loading plugins*, not plugins being provided... so the total number of PyPI projects using entry points in some way is likely much higher, once you add in the plugins that these 187 lookups are, well, looking up.
setuptools entry points are just one way of doing plugins. There are other such systems that work well and which do not require any special administration or setup, simply because the application using the plugins defines the plugin protocol.

Since you are into comparing numbers, you might want to count the number of Zope plugins that are available on PyPI - and its plugin system has been around much longer than setuptools has. I don't think that proves anything, though.

I simply don't see a good reason to complicate the Python packaging system by trying to add a particular plugin support to it. Plugins are application-scope features and should be treated as such.
Python/Zope Consulting and Support ... http://www.egenix.com/ mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/
On 03/08/2010 09:28, M.-A. Lemburg wrote:
P.J. Eby wrote:
At 10:37 PM 8/2/2010 +0200, M.-A. Lemburg wrote:
If that's the case, then it would be better to come up with an idea of how to make access to that meta-data available in a less I/O intense way, e.g. by having pip or other package managers update a central SQLite database cache of the data found on disk.
Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
Sounds good as an "optional extra" (i.e. it should be safe to completely delete the cache file and still have everything work) to me. If the API for using the feature is worked out well first this could be done as a behind the scenes optimisation...
Btw, while adding PLUGINS to PEP 376 is a new proposal, it's essentially another spelling of the existing entry_points.txt used by eggs; it changes the format to csv instead of .ini, and adds "description" and "type" fields, but drops requirements information and I'm not sure if it can point to arbitrary objects the way entry_points.txt can.
Anyway, entry_points.txt has been around enough years in the field that the concept itself can't really be called "new" - it's actually quite proven. Checking http://nullege.com/codes/search/pkg_resources.iter_entry_points/call , I find 187 modules using just that one entry points API.
Some projects do have more than one module loading plugins, but the majority of those 187 appear to be different projects.
Note that that's modules *loading plugins*, not plugins being provided... so the total number of PyPI projects using entry points in some way is likely much higher, once you add in the plugins that these 187 lookups are, well, looking up.
setuptools entry points are just one way of doing plugins. There are other such systems that work well and which do not require any special administration or setup, simply because the application using the plugins defines the plugin protocol.
Right, and those won't magically stop working if this proposal is implemented.
Since you are into comparing numbers, you might want to count the number of Zope plugins available on PyPI; its plugin system has been around much longer than setuptools has. I don't think that proves anything, though.
I simply don't see a good reason to complicate the Python packaging system by trying to add support for one particular plugin mechanism.
Plugins are application-scope features and should be treated as such.
The fact is that entry points *are* widely used and solve a problem that you *can't* solve without some feature like this. The success of entry points demonstrates their utility (and you talk vaguely about 'problems' setuptools caused without any concrete examples - do you know of any *specific* difficulties with entry points?).

I doubt I will change your mind, but the bottom line is that if you don't like this feature you don't have to use it. For applications that want it (like the unittest plugin system) it will be *enormously* useful and *reduce* complexity for the user (by allowing simpler plugin management tools).

All the best,

Michael

-- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog READ CAREFULLY. By accepting and reading this email you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
Michael Foord wrote:
On 03/08/2010 09:28, M.-A. Lemburg wrote:
P.J. Eby wrote:
At 10:37 PM 8/2/2010 +0200, M.-A. Lemburg wrote:
If that's the case, then it would be better to come up with an idea of how to make access to that meta-data available in a less I/O intense way, e.g. by having pip or other package managers update a central SQLite database cache of the data found on disk.
Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
Sounds good as an "optional extra" (i.e. it should be safe to completely delete the cache file and still have everything work) to me. If the API for using the feature is worked out well first this could be done as a behind the scenes optimisation...
True, but it also allows more freedom in using existing resources: the data put into the PLUGINS file is essentially not needed, since it can be had from the meta-data (provided the meta-data gets some extra fields describing the plugin nature).
Btw, while adding PLUGINS to PEP 376 is a new proposal, it's essentially another spelling of the existing entry_points.txt used by eggs; it changes the format to csv instead of .ini, and adds "description" and "type" fields, but drops requirements information and I'm not sure if it can point to arbitrary objects the way entry_points.txt can.
Anyway, entry_points.txt has been around enough years in the field that the concept itself can't really be called "new" - it's actually quite proven. Checking http://nullege.com/codes/search/pkg_resources.iter_entry_points/call , I find 187 modules using just that one entry points API.
Some projects do have more than one module loading plugins, but the majority of those 187 appear to be different projects.
Note that that's modules *loading plugins*, not plugins being provided... so the total number of PyPI projects using entry points in some way is likely much higher, once you add in the plugins that these 187 lookups are, well, looking up.
setuptools entry points are just one way of doing plugins. There are other such systems that work well and which do not require any special administration or setup, simply because the application using the plugins defines the plugin protocol.
Right, and those won't magically stop working if this proposal is implemented.
Right, but the proposal does add an extra burden on the package manager tools and it codifies one specific way of implementing plugins.
Since you are into comparing numbers, you might want to count the number of Zope plugins available on PyPI; its plugin system has been around much longer than setuptools has. I don't think that proves anything, though.
I simply don't see a good reason to complicate the Python packaging system by trying to add support for one particular plugin mechanism.
Plugins are application-scope features and should be treated as such.
The fact is that entry points *are* widely used and solve a problem that you *can't* solve without some feature like this. The success of entry points demonstrates their utility (and you talk vaguely about 'problems' setuptools caused without any concrete examples - do you know of any *specific* difficulties with entry points?).
Not specific to entry points, but I do know a lot about the problems that setuptools has introduced (and I didn't want to start yet another flame war based on those ;-).
I doubt I will change your mind, but the bottom line is that if you don't like this feature you don't have to use it. For applications that want it (like the unittest plugin system) it will be *enormously* useful and *reduce* complexity for the user (by allowing simpler plugin management tools).
Sure, and those tools can use such a system. No question there :-)

Maybe I just have to spell things out more clearly: I support the idea of adding better query tools to the installed package meta-data to make it easier to write plugin systems or simplify existing ones. I don't support the idea of using central configuration files for plugins that span multiple applications (this reminds me a lot of the early Windows win.ini days and all the problems those caused back then). It's better to have per-application configurations, implemented in a way that is suitable for the application (e.g. some applications might want to store plugin data in a database, provide GUI tools to enable/disable plugins, etc.).

I also don't think it's necessary to complicate things to get this extra functionality: if you look at the proposal, it is really just about adding a new data store to manage a certain package type called "plugins". Next time around, someone will want to see support for "skins" or "themes". Then perhaps identify "script" packages, or "application" packages, or "namespace" packages, or "stubs", etc. All this can be had by providing this kind of extra meta-information in the already existing format.

If we add a new extra file to be managed by the package managers every time someone comes up with a new use case, we'd just clutter up the disk with more and more CSV file extracts and make PEP 376 more and more complex.
At 01:40 PM 8/3/2010 +0200, M.-A. Lemburg wrote:
If you look at the proposal, it is really just about adding a new data store to manage a certain package type called "plugins". Next time around, someone will want to see support for "skins" or "themes". Then perhaps identify "script" packages, or "application" packages, or "namespace" packages, or "stubs", etc. All this can be had by providing this kind of extra meta-information in the already existing format.
If by "existing format", you mean "entry points", then yes, that is true. ;-) They are used today for most of the things you listed; anything that's an importable Python object (module, class, function, package, constant, global) can be listed as an entry point belonging to a named group. Heck, the first code sample on Nullege for iter_entry_points is some package called Apydia loading an entry point group called "apydia.themes"! Seriously, though, PEP 376 is just setuptools' egg-info under a different name with uninstall support added. And egg-info was designed to be able to hold all those things you're talking about. The EggTranslations project, for example, defines i18n-support files that can be placed under egg-info, and provides its own APIs for looking those things up. Applications using EggTranslations can not only have their own translations shipped as plugins, but plugins can provide translations for other plugins of the same application. (I believe it also supports providing other i18n resources such as icons as well.) So, it isn't actually necessary for the stdlib to provide any particular support for specific kinds of metadata within PEP 376, as long as the PEP 376 API supports finding packages with metadata files of a particular name. (EggTranslations uses similar APIs provided by pkg_resources.) However, since Tarek proposed adding a stdlib-supported plugins feature, I am suggesting it adopt the entry_points.txt file name and format, to avoid unnecessary API fragmentation.
If we add a new extra file to be managed by the package managers every time someone comes up with a new use case, we'd just clutter up the disk with more and more CSV file extracts and make PEP 376 more and more complex.
The setuptools egg-info convention is not to create files that don't contain any useful content, so that their presence or absence conveys information. If that convention is continued in PEP 376, features that aren't used won't take up any disk space. As for cluttering the PEP, IMO any metadata files that aren't part of the "installation database" feature should probably have their own PEP.
Hello

On 03/08/2010 13:09, Michael Foord wrote:
On 03/08/2010 09:28, M.-A. Lemburg wrote:
P.J. Eby wrote:
At 10:37 PM 8/2/2010 +0200, M.-A. Lemburg wrote:
[idea about sqlite3 db for caching] [distros won’t like it, the filesystem is the db] [the db is a cache, it does not replace the files] [advantages of sqlite3 db] Sounds good as an "optional extra" (i.e. it should be safe to completely delete the cache file and still have everything work) to me. If the API for using the feature is worked out well first this could be done as a behind the scenes optimisation...
FYI, the current implementation in the distutils2-augmented pkgutil uses a cache (a function call memo) for functions that e.g. iterate over dist-info directories (and optionally egg-info directories too) or get a Distribution object representing an installed project. Tools that modify the state of the installation can call a function to clear this cache. A sqlite db would certainly speed things up; it could replace the existing caching in distutils2 or be left to other tools. Regards
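The shape of such a call memo can be sketched as follows; this is a simplification of what is described above, not the actual distutils2 code, and the names are illustrative.

```python
import functools

_cache = {}

def cached(func):
    """Memoize func's results until clear_cache() is called, in the spirit
    of the distutils2 pkgutil cache described above."""
    @functools.wraps(func)
    def wrapper(*args):
        key = (func.__name__, args)
        if key not in _cache:
            _cache[key] = func(*args)
        return _cache[key]
    return wrapper

def clear_cache():
    """Called by tools that install or remove distributions,
    so the next query rescans the filesystem."""
    _cache.clear()
```

The important property is the explicit invalidation hook: ordinary queries pay no repeated I/O, and installers take responsibility for clearing the memo when they change the installation state.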
On Tue, 03 Aug 2010 10:28:07 +0200
"M.-A. Lemburg"
Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
If the cache can become stale because of system package management tools, how do you avoid I/O calls while checking that the database is fresh enough at startup? Regards Antoine.
On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou
On Tue, 03 Aug 2010 10:28:07 +0200 "M.-A. Lemburg"
wrote: Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
If the cache can become stale because of system package management tools, how do you avoid I/O calls while checking that the database is fresh enough at startup?
There is a tension between the two approaches: either you want "auto-discovery", or you want a system with explicit registration and only the registered plugins would be visible to the system. System-wise, I much prefer the latter, and auto-discovery should be left to the application's discretion IMO. A library to deal with this at the *app* level may be fine. But the current system of loading packages and co is already complex enough in Python that anything that adds complexity at the system (interpreter) level sounds like a bad idea.

David
There is a tension between the two approaches: either you want "auto-discovery", or you want a system with explicit registration and only the registered plugins would be visible to the system.
I think both are necessary. A discovery API should be available, but the library or application should be free to do whatever it wants with the discovered plugins - enable them automatically, present them to the user, or nothing. (a GUI application, for example, needs to be able to display a list of available plugins, with checkboxes to enable or disable each of them. It is not reasonable to expect the user to type the plugin name in a textbox) Regards Antoine.
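The two needs can coexist, as in this sketch: discovery is one callable, activation is plain application state, and a GUI could render available() as a checkbox list backed by the enabled set. All names here are hypothetical.

```python
class PluginRegistry:
    """Discovery is shared; activation stays with the application."""

    def __init__(self, discover):
        self._discover = discover   # callable returning {name: plugin object}
        self.enabled = set()        # per-application (or per-user) configuration

    def available(self):
        """Every installed plugin, e.g. for a GUI list with checkboxes."""
        return sorted(self._discover())

    def active(self):
        """Only the plugins the application or its user switched on."""
        return {name: plugin for name, plugin in self._discover().items()
                if name in self.enabled}
```

Nothing is loaded or enabled merely by being installed; installation only makes a plugin *discoverable*, which is the distinction argued for in this thread.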
On 03/08/2010 15:19, David Cournapeau wrote:
On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou
wrote: On Tue, 03 Aug 2010 10:28:07 +0200 "M.-A. Lemburg"
wrote: Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
If the cache can become stale because of system package management tools, how do you avoid I/O calls while checking that the database is fresh enough at startup?
There is a tension between the two approaches: either you want "auto-discovery", or you want a system with explicit registration and only the registered plugins would be visible to the system.
Not true. Auto-discovery provides an API for applications to tell users which plugins are *available* whilst still allowing the app to decide which are active / enabled. It still leaves full control in the hands of the application. It also allows the user / sysadmin to use their standard tools, whether that be distutils2 or package managers, to install the plugins instead of requiring ad-hoc approaches like "drop this file in this location".

All the best,

Michael Foord
System-wise, I much prefer the latter, and auto-discovery should be left to the application's discretion IMO. A library to deal with this at the *app* level may be fine. But the current system of loading packages and co is already complex enough in Python that anything that adds complexity at the system (interpreter) level sounds like a bad idea.
David _______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.u...
On Tue, Aug 3, 2010 at 11:35 PM, Michael Foord
On 03/08/2010 15:19, David Cournapeau wrote:
On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou
wrote: On Tue, 03 Aug 2010 10:28:07 +0200 "M.-A. Lemburg"
wrote: Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
If the cache can become stale because of system package management tools, how do you avoid I/O calls while checking that the database is fresh enough at startup?
There is a tension between the two approaches: either you want "auto-discovery", or you want a system with explicit registration and only the registered plugins would be visible to the system.
Not true. Auto-discovery provides an API for applications to tell users which plugins are *available* whilst still allowing the app to decide which are active / enabled. It still leaves full control in the hands of the application.
Maybe I was not clear, but I don't understand how your statement contradicts mine. The issue is how to determine which plugins are available: if you don't have explicit registration, you need to constantly re-stat every potential location (short of using OS-specific facilities to get notifications of fs changes). The current Python solutions that I am familiar with are prohibitively compute-intensive for this reason (think about what happens when you stat locations on NFS shares).

David
On 03/08/2010 16:24, David Cournapeau wrote:
On Tue, Aug 3, 2010 at 11:35 PM, Michael Foord
wrote: On 03/08/2010 15:19, David Cournapeau wrote:
On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou
wrote: On Tue, 03 Aug 2010 10:28:07 +0200 "M.-A. Lemburg"
wrote: Don't forget system packaging tools like .deb, .rpm, etc., which do not generally take kindly to updating such things. For better or worse, the filesystem *is* our "central database" these days.
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
If the cache can become stale because of system package management tools, how do you avoid I/O calls while checking that the database is fresh enough at startup?
There is a tension between the two approaches: either you want "auto-discovery", or you want a system with explicit registration and only the registered plugins would be visible to the system.
Not true. Auto-discovery provides an API for applications to tell users which plugins are *available* whilst still allowing the app to decide which are active / enabled. It still leaves full control in the hands of the application.
Maybe I was not clear, but I don't understand how your statement contradicts mine. The issue is how to determine which plugins are available: if you don't have explicit registration, you need to constantly re-stat every potential location (short of using OS-specific facilities to get notifications of fs changes). The current Python solutions that I am familiar with are prohibitively compute-intensive for this reason (think about what happens when you stat locations on NFS shares).
Ah, I thought you were arguing against the plugins proposal altogether. If you are merely saying that you prefer the proposal to maintain the list of plugins via an explicit registration process (i.e. a central file somewhere) rather than "stat-ing around", then I don't *particularly* have an opinion on the matter. I want to use the API and the implementation details are up to others to work out. :-) Sorry for the confusion.

Michael
David
At 10:28 AM 8/3/2010 +0200, M.-A. Lemburg wrote:
Since you are into comparing numbers, you might want to count the number of Zope plugins that are available on PyPI and its plugin system has been around much longer than setuptools has been. I don't think that proves anything, though.
Actually, some of the ones I found in the search using entry points *were* Zope, which, as I mentioned before, is increasingly moving away from the old approach in favor of entry points. In any case, I am not advocating *setuptools* -- I'm advocating that if PEP 376 expands to add plugin support, that it do so with a file format and associated API based on that of entry points, so as to make migration of those ~187 modules and their associated plugins to distutils2 a little easier. In other words, I'm trying to make it easier for people to move OFF of setuptools. Crazy, I know, but there you go. ;-)
On Aug 3, 2010, at 4:28 AM, M.-A. Lemburg wrote:
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
This is exactly what Twisted already does with its plugin cache, and the previously-cited ticket in this thread should expand the types of metadata which can be obtained about plugins. Packaging systems are perfectly capable of generating and updating such metadata caches, but various packagers of Twisted (Debian's especially) didn't read our documentation and kept moving around the place where Python source files were installed, which routinely broke the post-installation hooks and caused all kinds of problems. I would strongly recommend looping in the Python packaging teams from various distros *before* adding another such cache, unless you want to be fielding bugs from Launchpad.net for five years :).
On 8/3/2010 12:33 PM, Glyph Lefkowitz wrote:
On Aug 3, 2010, at 4:28 AM, M.-A. Lemburg wrote:
I don't think that's a problem: the SQLite database would be a cache like e.g. a font cache or TCSH command cache, not a replacement of the meta files stored in directories.
Such a database would solve many things at once: faster access to the meta-data of installed packages, fewer I/O calls during startup, more flexible ways of doing queries on the meta-data, needed for introspection and discovery, etc.
This is exactly what Twisted already does with its plugin cache, and the previously-cited ticket in this thread should expand the types of metadata which can be obtained about plugins.

+1

Packaging systems are perfectly capable of generating and updating such metadata caches, but various packagers of Twisted (Debian's especially) didn't read our documentation and kept moving around the place where Python source files were installed, which routinely broke the post-installation hooks and caused all kinds of problems.
I would strongly recommend looping in the Python packaging teams from various distros *before* adding another such cache, unless you want to be fielding bugs from Launchpad.net for five years :).
+1 -- Steve Holden +1 571 484 6266 +1 800 494 3119 DjangoCon US September 7-9, 2010 http://djangocon.us/ See Python Video! http://python.mirocommunity.org/ Holden Web LLC http://www.holdenweb.com/
At 09:03 PM 8/2/2010 +0100, Michael Foord wrote:
Ouch. I really don't want to emulate that system. For installing a plugin for a single project the recommended technique is:
* Unpack the source. It should provide a setup.py. * Run:
$ python setup.py bdist_egg
Then you will have a *.egg file. Examine the output of running python to find where this was created.
Once you have the plugin archive, you need to copy it into the plugins directory of the project environment
Those instructions are apparently out-of-date; you can actually just "easy_install -m" or "pip" the plugin directly to the plugins directory, without any additional intervening steps. (The only reason to create an .egg file for Trac is if you intend to distribute to non-developer users who will be told to just drop it in the plugins directory.)
For global plugins it just uses entry points, which is similar to the functionality we are suggesting adding...
I believe it's using entry points for both, actually. It just has an (application-specific) filtering mechanism to restrict which entry points get loaded.
Really this sounds *astonishingly* like the system we are proposing. :-)
Which is why I keep pointing out that the code for doing most of it is already available in setuptools, distribute, pip, buildout, etc., and so (IMO) ought to just get copied into distutils2, the way easy_install's package index code was. ;-) (Of course, adding some filtering utilities to make it easier for apps to do explicit configuration would still be nice.)
What you might want to do is add new type fields to PEP 345, making it easier to identify and list packages that work as plugins for applications, e.g.
Type: Plugin for MyCoolApp
The MyCoolApp could then use the Type-field to identify all installed plugins, get their installation directories, etc. and work on from there.
Classifiers seem a good way to do that. They’re already defined in accepted PEPs, extensible on demand, used by Web framework components/applications/middlewares/things and other projects, and queryable on PyPI. We could use “Environment :: Plugins” and “Framework :: Something” or define a new classifier (not all applications are frameworks, and “plugins” seems a very strange value for “environment”). It would be the first time that a classifier triggers specific processing from distutils, though, so we may prefer to define a new Provides-Plugin field for consistency and explicitness. Regards
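A sketch of how an application could query by classifier, assuming a PEP 376-style API that exposes each installed distribution's classifier list. "Environment :: Plugins" is an existing trove classifier; "Framework :: MyCoolApp" and the sample records below are invented for illustration.

```python
# Hypothetical records, shaped like what a PEP 376-style query API might return.
INSTALLED = [
    {"name": "mycoolapp-spellcheck",
     "classifiers": ["Environment :: Plugins", "Framework :: MyCoolApp"]},
    {"name": "requests",
     "classifiers": ["Environment :: Web Environment"]},
]

def plugins_for(framework, installed=INSTALLED):
    """List installed distributions that declare themselves plugins
    for *framework* via their classifiers."""
    wanted = "Framework :: " + framework
    return [dist["name"] for dist in installed
            if "Environment :: Plugins" in dist["classifiers"]
            and wanted in dist["classifiers"]]
```

The same query shape would work with a dedicated Provides-Plugin field; the only difference is which metadata key is matched.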
Éric Araujo wrote:
What you might want to do is add new type fields to PEP 345, making it easier to identify and list packages that work as plugins for applications, e.g.
Type: Plugin for MyCoolApp
The MyCoolApp could then use the Type-field to identify all installed plugins, get their installation directories, etc. and work on from there.
Classifiers seem a good way to do that. They’re already defined in accepted PEPs, extensible on demand, used by Web framework components/applications/middlewares/things and other projects, and queryable on PyPI.
We could use “Environment :: Plugins” and “Framework :: Something” or define a new classifier (not all applications are frameworks, and “plugins” seems a very strange value for “environment”).
This would work to mark plugins as such, but we'd still have to have a field naming the app they belong to, and I don't think the process of adding new classifiers is flexible enough to handle those names. A Type field would be under application control and would allow easy discovery of installed plugins for a specific app.
It would be the first time that a classifier triggers specific processing from distutils though, so we may prefer to define a new Provides-Plugin field for consistency and explicitness.
exarkun@twistedmatrix.com wrote:
On 12:21 pm, mal@egenix.com wrote:
Tarek Ziadé wrote:
On Mon, Aug 2, 2010 at 3:06 AM, P.J. Eby
wrote: .. So without specific examples of why this is a problem, it's hard to see why a special Python-specific set of configuration files is needed to resolve it, vs. say, encouraging application authors to use the available alternatives for doing plugin directories, config files, etc.
I don't have a specific example in mind, and I must admit that if an application does the right thing (provides the right configuration file), this activation feature is not useful at all. So it seems to be a bad idea.
I propose that we drop the PLUGINS file idea and we add a new metadata field called Provides-Plugin in PEP 345, which will contain the info I've described minus the state field. This will allow us to expose plugins at PyPI.
IOW, have entry points like setuptools provides, but in a metadata field instead of an entry_points.txt file.
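Resolving such a field could look roughly like setuptools' own entry-point resolution. The sketch below assumes the setuptools `name = module:attr` syntax for the proposed Provides-Plugin value; the actual field name and syntax were still under discussion in this thread:

```python
# Sketch: resolve one hypothetical Provides-Plugin value of the form
# "name = module:attr" (setuptools entry-point syntax) to a Python object.
import importlib


def resolve_plugin(spec):
    """Return (plugin_name, object) for a 'name = module:attr' spec.

    If no ':attr' part is given, the module itself is returned."""
    name, _, target = (part.strip() for part in spec.partition("="))
    module_name, _, attr = target.partition(":")
    module = importlib.import_module(module_name)
    obj = getattr(module, attr) if attr else module
    return name, obj
```

Note that resolving still imports the named module; the gain over entry_points.txt is only that the declaration lives in the standard metadata and is therefore visible on PyPI without installing anything.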
Do we really need to make Python packaging even more complicated by adding support for application-specific plugin mechanisms?
Packages can already work as application plugins by simply defining a plugins namespace package and then placing the plugin packages into that namespace.
See Zope for an example of how well this simple mechanism works out in practice: it simply scans the "Products" namespace for sub-packages and then loads each sub-package it finds to have it register itself with Zope.
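The scan-and-import mechanism described here fits in a few lines of stdlib code. In this sketch, `myapp.plugins` is a made-up namespace, and "registration" is whatever side effect each plugin performs at import time:

```python
# Sketch of the Zope-style scan: import every sub-module/sub-package found
# under a plugins namespace package, letting each register itself on import.
import importlib
import pkgutil


def load_all_plugins(namespace="myapp.plugins"):
    """Import everything directly under the given namespace package and
    return the loaded modules."""
    ns = importlib.import_module(namespace)
    loaded = []
    # iter_modules only lists entries; import_module actually executes them.
    for _finder, name, _ispkg in pkgutil.iter_modules(ns.__path__, namespace + "."):
        loaded.append(importlib.import_module(name))
    return loaded
```

This makes the drawback raised in the next message concrete: every installed plugin is executed during the scan, whether or not the application ends up using it.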
This is also roughly how Twisted's plugin system works. One drawback, though, is that it means potentially executing a large amount of Python in order to load plugins. This can build up to a significant performance issue as more and more plugins are installed.
I'd say that it's up to the application to deal with this problem. An application which requires lots and lots of plugins could define a registration protocol that does not require loading all plugins at scanning time.
On 01:27 pm, mal@egenix.com wrote:
exarkun@twistedmatrix.com wrote:
On 12:21 pm, mal@egenix.com wrote:
See Zope for an example of how well this simple mechanism works out in practice: it simply scans the "Products" namespace for sub-packages and then loads each sub-package it finds to have it register itself with Zope.
This is also roughly how Twisted's plugin system works. One drawback, though, is that it means potentially executing a large amount of Python in order to load plugins. This can build up to a significant performance issue as more and more plugins are installed.
I'd say that it's up to the application to deal with this problem.
An application which requires lots and lots of plugins could define a registration protocol that does not require loading all plugins at scanning time.
It's not fixable at the application level, at least in Twisted's plugin system. It sounds like Zope's system has the same problem, but all I know of that system is what you wrote above. The cost increases with the number of plugins installed on the system, not the number of plugins the application wants to load. Jean-Paul
On Aug 2, 2010, at 9:53 AM, exarkun@twistedmatrix.com wrote:
On 01:27 pm, mal@egenix.com wrote:
exarkun@twistedmatrix.com wrote:
On 12:21 pm, mal@egenix.com wrote:
See Zope for an example of how well this simple mechanism works out in practice: it simply scans the "Products" namespace for sub-packages and then loads each sub-package it finds to have it register itself with Zope.
This is also roughly how Twisted's plugin system works. One drawback, though, is that it means potentially executing a large amount of Python in order to load plugins. This can build up to a significant performance issue as more and more plugins are installed.
I'd say that it's up to the application to deal with this problem.
An application which requires lots and lots of plugins could define a registration protocol that does not require loading all plugins at scanning time.
It's not fixable at the application level, at least in Twisted's plugin system. It sounds like Zope's system has the same problem, but all I know of that system is what you wrote above. The cost increases with the number of plugins installed on the system, not the number of plugins the application wants to load.
We do have a plan to address this in Twisted's plugin system (eventually): http://twistedmatrix.com/trac/ticket/3773, although I'm not sure if that's relevant to the issue at hand.
On 02/08/2010 14:31, exarkun@twistedmatrix.com wrote:
On 12:21 pm, mal@egenix.com wrote:
Do we really need to make Python packaging even more complicated by adding support for application-specific plugin mechanisms?
Packages can already work as application plugins by simply defining a plugins namespace package and then placing the plugin packages into that namespace. [...]
This is also roughly how Twisted's plugin system works. One drawback, though, is that it means potentially executing a large amount of Python in order to load plugins. This can build up to a significant performance issue as more and more plugins are installed.
If namespace packages make it into Python, they would indeed solve a part of the problem in a nice, generic way. Regarding the performance issue, I wonder if functions in pkgutil or importlib could allow one to iterate over the plugins (i.e. submodules and subpackages of the namespace package) without actually loading them. We would get only their names though, not their description or any other information useful for deciding whether to activate them. Maybe importing is the way to go, with a doc recommendation that people make their plugins subpackages with an __init__ module containing only a docstring. Regards
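What Éric describes is in fact possible with `pkgutil.iter_modules` today: listing the entries under a namespace package executes only that package's own `__init__`, not the plugins. A minimal sketch, again using a hypothetical `myapp.plugins` namespace:

```python
# Sketch: enumerate plugin names under a namespace package WITHOUT
# importing the plugins themselves. Only the namespace package's own
# __init__ is executed; the listed modules stay out of sys.modules.
import importlib
import pkgutil


def list_plugin_names(namespace="myapp.plugins"):
    """Return the bare names of modules/packages directly under namespace."""
    ns = importlib.import_module(namespace)
    return [name for _finder, name, _ispkg in pkgutil.iter_modules(ns.__path__)]
```

As the message says, this yields only names; any richer description would have to come from importing, or from out-of-band metadata such as the Type/Provides-Plugin fields discussed earlier in the thread.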
On 03:08 pm, merwok@netwok.org wrote:
On 02/08/2010 14:31, exarkun@twistedmatrix.com wrote:
On 12:21 pm, mal@egenix.com wrote:
Do we really need to make Python packaging even more complicated by adding support for application-specific plugin mechanisms?
Packages can already work as application plugins by simply defining a plugins namespace package and then placing the plugin packages into that namespace. [...]
This is also roughly how Twisted's plugin system works. One drawback, though, is that it means potentially executing a large amount of Python in order to load plugins. This can build up to a significant performance issue as more and more plugins are installed.
If namespace packages make it into Python, they would indeed solve a part of the problem in a nice, generic way.
I don't think this solves the problem. Twisted's plugin system already uses namespace packages. It helps slightly, by spreading out your plugins, but you can still end up with lots of plugins in a particular namespace.
Regarding the performance issue, I wonder if functions in pkgutil or importlib could allow one to iterate over the plugins (i.e. submodules and subpackages of the namespace package) without actually loading them. We would get only their names though, not their description or any other information useful for deciding whether to activate them.
The trick is to continue to provide enough information so that the code iterating over the data can make a correct decision. It's not clear that names are enough.
Maybe importing is the way to go, with a doc recommendation that people make their plugins subpackages with an __init__ module containing only a docstring.
Regards
Jean-Paul
participants (10)
- Antoine Pitrou
- David Cournapeau
- exarkun@twistedmatrix.com
- Glyph Lefkowitz
- M.-A. Lemburg
- Michael Foord
- P.J. Eby
- Steve Holden
- Tarek Ziadé
- Éric Araujo