distlib updated with resources API
I've updated distlib[1] with a resources API - functionality for accessing data files co-located with code in Python packages. This is missing from the stdlib and currently people use pkg_resources to achieve this. The design and implementation allows for accessing resources from packages imported from the file system or from .zip files, and is intended to be extensible to other PEP 302-compliant import systems. The update includes: A short tutorial showing how to use the API. [2] A discussion of the design of the API. [3] Reference documentation for the API. [4] I'd be interested in any and all feedback on this - be it on design, code, docs. Please regard it as very much a first draft - it would appear to cover pretty much the same functionality as the corresponding part of pkg_resources [5]. Regards, Vinay Sajip [1] https://bitbucket.org/vinay.sajip/distlib/ [2] http://distlib.readthedocs.org/en/latest/tutorial.html#using-the-resource-ap... [3] http://distlib.readthedocs.org/en/latest/internals.html#the-resources-api [4] http://distlib.readthedocs.org/en/latest/reference.html#the-distlib-resource... [5] http://packages.python.org/distribute/pkg_resources.html#basic-resource-acce...
On 9/23/12 5:57 PM, Vinay Sajip wrote:
I've updated distlib[1] with a resources API - functionality for accessing data files co-located with code in Python packages. This is missing from the stdlib and currently people use pkg_resources to achieve this.
The design and implementation allows for accessing resources from packages imported from the file system or from .zip files, and is intended to be extensible to other PEP 302-compliant import systems.
The update includes:
A short tutorial showing how to use the API. [2]
A discussion of the design of the API. [3]
Reference documentation for the API. [4]
I'd be interested in any and all feedback on this - be it on design, code, or docs. Please regard it as very much a first draft - it would appear to cover pretty much the same functionality as the corresponding part of pkg_resources [5].

Nice work!
On a side note: since these are the original modules that were taken out of Python's packaging implementation, I don't think you can copyright them under your name, as I have seen in setup.py.

Unless your plan is not to have distlib incorporated into Python but to roll a separate project, I'd recommend having it in hg.python.org under the PSF umbrella. If it's a separate project, I think the original licensing must remain.

Cheers

Tarek
Tarek Ziadé
On a side note: since these are the original modules that were taken out of Python's packaging implementation, I don't think you can copyright them under your name, as I have seen in setup.py
AFAIK I've only added my copyright to the individual files I've created, but not to any of the files I've copied over from packaging. The plan is to move the project over to hg.python.org at some point, but Antoine suggested (on python-dev) leaving it on BitBucket until it gets a little more mature. I'm fine with that too; there's still a lot to do on it.
Unless your plan is not to have distlib incorporated into Python but to roll a separate project, I'd recommend having it in hg.python.org under the PSF umbrella.
I think the suggestion was to keep it out of the stdlib for now, and add all or part of it later, as and when there's a general consensus on python-dev about its fitness for purpose. I can certainly update the files which I copied from packaging with a PSF copyright (or other copyright, like the Fellowship of the Packaging), if that would make you more comfortable, and/or state "Portions Copyright XXX" in setup.py. I certainly don't intend for copyright issues to be contentious :-)

If distlib doesn't make it into the stdlib, it'll be dead in the water anyway, as its purpose is to provide a common underpinning in the stdlib that higher-level tools can use, to achieve interoperability and consistency across different packaging tools. I will be adding more features as time permits - e.g. I have started adding scripts functionality, including binary launchers for Windows, and I will soon be adding a "plugins" package to handle the "entry point" functionality.

At the moment the distlib stuff is really in limbo awaiting feedback from distutils-sig and python-dev members; I'll be adding bits and pieces, but without some kind of endorsement/suggestions/contributions from others, no real progress can be made.

Regards,

Vinay Sajip
If distlib doesn't make it into the stdlib, it'll be dead in the water anyway, as its purpose is to provide a common underpinning in the stdlib that higher-level tools can use, to achieve interoperability and consistency across different packaging tools. I will be adding more features as time permits - e.g. I have started adding scripts functionality, including binary launchers for Windows, and I will soon be adding a "plugins" package to handle the "entry point" functionality.

At the moment the distlib stuff is really in limbo awaiting feedback from distutils-sig and python-dev members; I'll be adding bits and pieces, but without some kind of endorsement/suggestions/contributions from others, no real progress can be made.
I feel it is necessary to implement pkg_resources.py on top of any new API. For example, just monkey-patch pkg_resources with the parts you've implemented, or implement the new API by calling pkg_resources. It would be a very healthy exercise to see whether you've forgotten something, and avoids having to port 2/3rds of the software on PyPI to use the new API.

Does pkgutil's resource API (get_data) fit in in any way? I notice it includes the "list zip contents" implementation but it doesn't expose it in any way. It would be a chore to have to implement 3 importer adapters to make all 3 APIs work.
Daniel Holth
I feel it is necessary to implement pkg_resources.py on top of any new API. For example, just monkey-patch pkg_resources with the parts you've implemented, or implement the new API by calling pkg_resources. It would be a very healthy exercise to see whether you've forgotten something, and avoids having to port 2/3rds of the software on PyPI to use the new API.
That can be looked at in due course. However, weaning projects off distutils/setuptools APIs and onto anything else will not be pain-free. ISTM one can't promise to fulfil the contract of the pkg_resources resource functions without essentially duplicating large chunks of pkg_resources; for example, the resource functions take a Requirement object as well as a package name, so any usage of the APIs with Requirements would not work with a simple wrapper.

There are specific functions in pkg_resources which I have deliberately left out: for example, there's no "get a real filename for this resource, extracting it from the zip to a cache folder if you need to". This type of functionality doesn't seem to be a core requirement, and could be provided by a specialised finder if really needed, but I see no case for it in the base classes which might go in the stdlib.

The resources API is IMO small and simple enough to evaluate on its own (in terms of whether it meets the stated requirements, and whether the stated requirements are complete). PEP 365, proposing inclusion of pkg_resources in the stdlib, was rejected. It doesn't seem right to have something which might potentially be in the stdlib monkey-patching external projects. There might be other approaches to ease porting to new APIs, e.g. 2to3-style fixers.
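To illustrate the Requirement point: pkg_resources' resource functions accept either a package name or a Requirement object, which is what makes a thin wrapper over a package-based API awkward ('mypkg' below is hypothetical):

    import pkg_resources
    from pkg_resources import Requirement

    # Both call styles are part of pkg_resources' contract; a wrapper
    # built only around package names cannot honour the second one.
    data1 = pkg_resources.resource_string('mypkg', 'data/config.txt')
    data2 = pkg_resources.resource_string(Requirement.parse('mypkg>=1.0'),
                                          'data/config.txt')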
Does pkgutil's resource API (get_data) fit in in any way? I notice it includes the "list zip contents" implementation but it doesn't expose it in any way. It would be a chore to have to implement 3 importer adapters to make all 3 APIs work.
I'm not sure I quite understand what you mean, as I couldn't see any reference to listing zip contents in the pkgutil docs; but pkgutil's get_data wraps the PEP 302 loader's get_data API, as does distlib.resources. However, pkgutil.get_data doesn't allow you to customise its behaviour, whereas distlib.resources does (by allowing you to implement and register your own finder).

One of the reasons why Python packaging is in its current state is that distutils is hard to extend. Any replacement APIs we introduce should not repeat that mistake. While pkg_resources is not quite so bad in that department, since it allows for providers for different loaders, its provider hierarchy seems to be such that eggs are an integral part of the mix, rather than an option you can do without:

    IResourceProvider
        NullProvider
            EmptyProvider
            EggProvider
                DefaultProvider
                ZipProvider

In distlib.resources, there are only three equivalents:

    Resource          -> IResourceProvider
    ResourceFinder    -> DefaultProvider
    ZipResourceFinder -> ZipProvider

Part of the feedback I'm looking for is whether these provide a suitable basis for extending by packaging tool providers where needed.

Regards,

Vinay Sajip
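As a sketch of that extension point (the override shown and MyLoader are illustrative; the reference docs are authoritative for the actual method names):

    from distlib.resources import ResourceFinder, register_finder

    class MyResourceFinder(ResourceFinder):
        # Hypothetical finder for a custom PEP 302 loader; a real one
        # would override the loader-facing methods as documented.
        def get_bytes(self, resource):
            return self.loader.get_data(resource.path)

    # Once MyLoader (hypothetical) exists, registering it means that
    # finder('some_package') returns a MyResourceFinder for packages
    # imported via MyLoader:
    # register_finder(MyLoader, MyResourceFinder)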
What you can do with pkgutil is call pkgutil.iter_importer_modules.register(importer, function) so that your new importer works with iter_modules. Unfortunately, even though it implements most of what you would need to listdir() a zip file, using zipimport._zip_directory_cache[], it does not expose the functionality.

https://bitbucket.org/dholth/cpython/src/b534ce119c3c/Lib/pkgutil.py#cl-357

For this particular resources feature you could probably rename Egg to Dist as far as the pkg_resources code goes; there is barely anything in the EggProvider class.

Make sure you have a test where the package is not in the root of the .zip file:

    foo.zip/site-packages/bar.py

Speaking of old code, does anyone feel like replacing

    # @decorator
    def fn():
        ...
    fn = decorator(fn)

with the probably-OK-to-use-by-now decorator syntax?

Daniel
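For reference, the registration hook Daniel mentions looks roughly like this (undocumented, but present in Lib/pkgutil.py; MyImporter and the enumeration logic are hypothetical):

    import pkgutil

    class MyImporter(object):
        """Hypothetical PEP 302 importer for some custom storage."""
        pass

    def iter_my_modules(importer, prefix=''):
        # Real code would enumerate the modules visible to 'importer',
        # yielding (module_name, is_package) pairs.
        for name, ispkg in []:
            yield prefix + name, ispkg

    # After this, pkgutil.iter_modules() and walk_packages() will work
    # for path entries handled by MyImporter.
    pkgutil.iter_importer_modules.register(MyImporter, iter_my_modules)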
Daniel Holth
What you can do with pkgutil is call pkgutil.iter_importer_modules.register(importer, function) so that your new importer works with iter_modules. Unfortunately even though it implements most of what you would need to listdir() a zip file, using zipimport._zip_directory_cache[], it does not expose the functionality.
I haven't defined any new importers. In terms of listing a whole zip's contents, that's not part of the idea of a resource: you need to know what resource you want before you can get it, at least at the top level. Of course, you can iterate over a resource tree once you have a top-level resource. Possibly an iterator could be provided for convenience, but I'm not sure what the use case is. For example, should particular files like __pycache__ and .pyc be excluded when operating on file system resources?

I don't use _zip_directory_cache directly - it's exposed as the zipimport loader's "_files" attribute, and I use that.
Make sure you have a test where the package is not in the root of the .zip file
foo.zip/site-packages/bar.py
Thanks for pointing that out, I've raised an issue to remind myself.
Speaking of old code, does anyone feel like replacing
    # @decorator
    def fn():
        ...
    fn = decorator(fn)
with the probably-ok-to-use-by-now decorator syntax?
Where's the anachronistic code you're referring to?

Regards,

Vinay Sajip
On Thu, Sep 27, 2012 at 9:14 AM, Vinay Sajip
I haven't defined any new importers. In terms of listing a whole zip's contents, that's not part of the idea of a resource: you need to know what resource you want, before you can get it, at least at the top level. Of course, you can iterate over a resource tree once you have a top-level resource. Possibly an iterator could be provided for convenience, but I'm not sure what the use case is. For example, should particular files like __pycache__ and .pyc be excluded when operating on file system resources?
The pkgutil "walk modules/packages" API uses listdir / equivalent to find the packages. So does the code that finds .egg-info / .dist-info.
I don't use _zip_directory_cache directly - it's exposed as the zipimport loader's "_files" attribute, and I use that.
Really the importer API is deficient, but that is a different problem.
Speaking of old code, does anyone feel like replacing
    # @decorator
    def fn():
        ...
    fn = decorator(fn)
with the probably-ok-to-use-by-now decorator syntax?
Where's the anachronistic code you're referring to?
pkgutil and pkg_resources
On 9/26/12 11:58 PM, Vinay Sajip wrote:
Tarek Ziadé writes:

On a side note: since these are the original modules that were taken out of Python's packaging implementation, I don't think you can copyright them under your name, as I have seen in setup.py.

AFAIK I've only added my copyright to the individual files I've created, but not to any of the files I've copied over from packaging. The plan is to move the project over to hg.python.org at some point, but Antoine suggested (on python-dev) leaving it on BitBucket until it gets a little more mature. I'm fine with that too; there's still a lot to do on it.
I think it's perfectly fine to have hg.python.org/distlib and do what you described. It can mature there - it does not bother cpython or other repositories. Plus, it makes it easier to avoid any licensing headache in a few months/years, since any contributor just has to sign the Python contributor agreement.
Unless your plan is not to have distlib incorporated into Python but to roll a separate project, I'd recommend having it in hg.python.org under the PSF umbrella.

I think the suggestion was to keep it out of the stdlib for now, and add all or part of it later, as and when there's a general consensus on python-dev about its fitness for purpose. I can certainly update the files which I copied from packaging with a PSF copyright (or other copyright, like the Fellowship of the Packaging), if that would make you more comfortable, and/or state "Portions Copyright XXX" in setup.py. I certainly don't intend for copyright issues to be contentious :-)
I would just leave it in hg.python.org, drop any single-author header, and have the project driven by the community under the PSF governance. If we want to thank contributors like you or me or anyone that helped with this code base, we can always maintain a CONTRIBUTORS.txt file.

What I'd like to avoid is distlib becoming a project that's owned/driven by a single person -- even if, in the current effort, you are the person that is contributing and driving things. I hope this does not sound harsh or anything - I am thankful you are doing this. I just think it's important that this project stays under the python-dev umbrella as the "official" subproject of packaging/distutils2, to have a smoother transition later - and to have you as its de-facto maintainer.

Cheers

Tarek
On 27 September 2012 15:01, Tarek Ziadé
I think it's perfectly fine to have hg.python.org/distlib and do what you described.
It can mature there - it does not bother cpython or other repositories.
One advantage of distlib being on bitbucket is that anyone can fork it and create pull requests. I've been doing that recently on a number of projects, and it's a very efficient workflow. I don't know how the workflow would go if hg.python.org was used, but I'd be much less likely to contribute code if I had to create a patch, submit it via the tracker, and manually maintain it in a local copy of the code until it gets accepted.

Maybe it's possible to integrate hg.python.org with bitbucket - I see bitbucket allows you to create a repo by forking an externally hosted Mercurial repository, but I've never tried it, and I don't know if pull requests would work.

Paul.
On 9/27/12 4:22 PM, Paul Moore wrote:
On 27 September 2012 15:01, Tarek Ziadé wrote:

I think it's perfectly fine to have hg.python.org/distlib and do what you described. It can mature there - it does not bother cpython or other repositories.

One advantage of distlib being on bitbucket is that anyone can fork it and create pull requests. I've been doing that recently on a number of projects, and it's a very efficient workflow. I don't know how the workflow would go if hg.python.org was used, but I'd be much less likely to contribute code if I had to create a patch, submit it via the tracker, and manually maintain it in a local copy of the code until it gets accepted.
But then, unless I am mistaken, you have to ask each contributor to sign the agreement, to avoid any issue the day it goes into Python. I think you can mirror hg.python.org on bitbucket - see below.
Maybe it's possible to integrate hg.python.org with bitbucket - I see bitbucket allows you to create a repo by forking an externally hosted Mercurial repository, but I've never tried it, and I don't know if pull requests would work.
Bitbucket has hg.python.org mirrors IIRC - there's one for cpython, and I guess we could ask for one for another repo?
Paul.
(Sorry, meant to post to the list)
On 27 September 2012 15:26, Tarek Ziadé
But then, unless I am mistaken, you have to ask each contributor to sign the agreement to avoid any issue the day it goes into Python.
TBH, I'm not sure. I've contributed some patches to Python via the tracker, and I honestly don't recall if I ever signed an agreement (I don't mind doing so, just don't know if I did). How would a pull request on bitbucket be any different?

Paul
On 9/27/12 4:40 PM, Paul Moore wrote:
(Sorry, meant to post to the list)
On 27 September 2012 15:26, Tarek Ziadé wrote:

But then, unless I am mistaken, you have to ask each contributor to sign the agreement to avoid any issue the day it goes into Python.

TBH, I'm not sure. I've contributed some patches to Python via the tracker, and I honestly don't recall if I ever signed an agreement (I don't mind doing so, just don't know if I did). How would a pull request on bitbucket be any different?
I think it's a borderline case, where you are under the 'authority' of the person that does the merge. I do recall I had to make 10+ people sign the agreement at the last packaging sprint, even if some of them just did pull requests... But I am not a lawyer - cc'ing Van!
Paul
On Thursday 27 September 2012 at 16:01 +0200, Tarek Ziadé wrote:
On 9/26/12 11:58 PM, Vinay Sajip wrote:
Tarek Ziadé writes:

On a side note: since these are the original modules that were taken out of Python's packaging implementation, I don't think you can copyright them under your name, as I have seen in setup.py.

AFAIK I've only added my copyright to the individual files I've created, but not to any of the files I've copied over from packaging. The plan is to move the project over to hg.python.org at some point, but Antoine suggested (on python-dev) leaving it on BitBucket until it gets a little more mature. I'm fine with that too; there's still a lot to do on it.
I think it's perfectly fine to have hg.python.org/distlib and do what you described.
It can mature there - it does not bother cpython or other repositories.
Plus, it makes it easier to avoid any licensing headache in a few months/years, since any contributor just has to sign the Python contributor agreement.
There are two of us maintaining hg.python.org: Georg and I. So, I don't know about Georg, but I don't want to maintain repositories for every third-party library that might one day become part of Python. OTOH, if Georg wants to handle it, then fine :-)

Regards

Antoine.

--
Software development and contracting: http://pro.pitrou.net
On 9/27/12 4:28 PM, Antoine Pitrou wrote:
On Thursday 27 September 2012 at 16:01 +0200, Tarek Ziadé wrote:

On 9/26/12 11:58 PM, Vinay Sajip wrote:

Tarek Ziadé writes:

On a side note: since these are the original modules that were taken out of Python's packaging implementation, I don't think you can copyright them under your name, as I have seen in setup.py.

AFAIK I've only added my copyright to the individual files I've created, but not to any of the files I've copied over from packaging. The plan is to move the project over to hg.python.org at some point, but Antoine suggested (on python-dev) leaving it on BitBucket until it gets a little more mature. I'm fine with that too; there's still a lot to do on it.

I think it's perfectly fine to have hg.python.org/distlib and do what you described. It can mature there - it does not bother cpython or other repositories. Plus, it makes it easier to avoid any licensing headache in a few months/years, since any contributor just has to sign the Python contributor agreement.

There are two of us maintaining hg.python.org: Georg and I. So, I don't know about Georg, but I don't want to maintain repositories for every third-party library that might one day become part of Python. OTOH, if Georg wants to handle it, then fine :-)
Since I see distutils2, unittest2, stackless and many users' repos in there, please define the exact rules here - rather than your willingness to do the benevolent work - and what you mean by maintaining an extra repo, exactly.
Tarek Ziadé
There are two of us maintaining hg.python.org: Georg and I. So, I don't know about Georg, but I don't want to maintain repositories for every third-party library that might one day become part of Python. OTOH, if Georg wants to handle it, then fine
Since I see distutils2, unittest2, stackless and many users' repos in there, please define the exact rules here - rather than your willingness to do the benevolent work - and what you mean by maintaining an extra repo, exactly.
Maintaining a repo means setting it up and possibly changing the configuration (e.g. commit hooks) when required. This is all done by hand by one of us.

My main point is not that hg.p.o is closed to new repositories, but that it is not meant to become a "forge" or an incubator where anyone can create new Python repos. Is distlib important enough yet?

Regards

Antoine.
On 9/27/12 5:40 PM, Antoine Pitrou wrote:
Tarek Ziadé writes:

There are two of us maintaining hg.python.org: Georg and I. So, I don't know about Georg, but I don't want to maintain repositories for every third-party library that might one day become part of Python. OTOH, if Georg wants to handle it, then fine

Since I see distutils2, unittest2, stackless and many users' repos in there, please define the exact rules here - rather than your willingness to do the benevolent work - and what you mean by maintaining an extra repo, exactly.

Maintaining a repo means setting it up and possibly changing the configuration (e.g. commit hooks) when required. This is all done by hand by one of us.

My main point is not that hg.p.o is closed to new repositories, but that it is not meant to become a "forge" or an incubator where anyone can create new Python repos. Is distlib important enough yet?
I don't know what you mean by 'important'.

We removed packaging from Python and said we would work on a smaller set for inclusion. I've also said that I believed it's simpler to include it back, w.r.t. licensing, if it's a code base that's under the contributor agreement. Last but not least, distlib is the plan forward endorsed by python-dev, so having it in hg.python.org makes that plan more legitimate, no?

I frankly don't follow your reluctance here, and I find your definition of what can be in hg.python.org to be very vague. When I asked for a distutils2 repository, it was done in 2 minutes, and I did not have to argue for hours and potentially push the discussion into yet another packaging drama.

This is really annoying :/
On 09/27/2012 05:49 PM, Tarek Ziadé wrote:
On 9/27/12 5:40 PM, Antoine Pitrou wrote:
Tarek Ziadé writes:

There are two of us maintaining hg.python.org: Georg and I. So, I don't know about Georg, but I don't want to maintain repositories for every third-party library that might one day become part of Python. OTOH, if Georg wants to handle it, then fine

Since I see distutils2, unittest2, stackless and many users' repos in there, please define the exact rules here - rather than your willingness to do the benevolent work - and what you mean by maintaining an extra repo, exactly.

Maintaining a repo means setting it up and possibly changing the configuration (e.g. commit hooks) when required. This is all done by hand by one of us.
My main point is not that hg.p.o is closed to new repositories, but that it is not meant to become a "forge" or an incubator where anyone can create new Python repos. Is distlib important enough yet?
I don't know what you mean by 'important'.

We removed packaging from Python and said we would work on a smaller set for inclusion.

I've also said that I believed it's simpler to include it back, w.r.t. licensing, if it's a code base that's under the contributor agreement.

Last but not least, distlib is the plan forward endorsed by python-dev, so having it in hg.python.org makes that plan more legitimate, no?
No, but I can see why you want it there, and I for one have no problems creating and maintaining that repository.
I frankly don't follow your reluctance here, and I find your definition of what can be in hg.python.org to be very vague.
And we'd like to keep it that way.
When I asked for a distutils2 repository, it was done in 2 minutes, and I did not have to argue for hours and potentially push the discussion into yet another packaging drama.
This is really annoying :/
Are you sure you're not overreacting? All that Antoine is saying is that hg.p.o is not a professional hosting service, and the more repositories we have, the more (potential) requests we may get that have to be processed. If you can wait a few days for a new hook to be installed -- then everything is fine.

Georg
Georg Brandl
No, but I can see why you want it there, and I for one have no problems creating and maintaining that repository.
Great. Thanks!
And we'd like to keep it that way.
:-)
Are you sure you're not overreacting?
All that Antoine is saying is that hg.p.o is not a professional hosting service, and the more repositories we have, the more (potential) requests we may get that have to be processed. If you can wait a few days for a new hook to be installed -- then everything is fine.
I assume you'll just clone the distlib repo from BitBucket. I'm happy to wait. Thanks again!

Regards,

Vinay Sajip
On 09/27/2012 06:48 PM, Vinay Sajip wrote:
Georg Brandl writes:

No, but I can see why you want it there, and I for one have no problems creating and maintaining that repository.
Great. Thanks!
And we'd like to keep it that way.
:-)
Are you sure you're not overreacting?
All that Antoine is saying is that hg.p.o is not a professional hosting service, and the more repositories we have, the more (potential) requests we may get that have to be processed. If you can wait a few days for a new hook to be installed -- then everything is fine.
I assume you'll just clone the distlib repo on BitBucket. I'm happy to wait.
I've cloned the bitbucket.org/vinay.sajip/distlib repo to hg.python.org/distlib. At the moment no hooks are set up; let me know if you want e.g. email notification, CIA or Roundup integration.

Cheers,

Georg
Georg Brandl
I've cloned the bitbucket.org/vinay.sajip/distlib repo to hg.python.org/distlib.
Thank you very much.
At the moment no hooks are set up; let me know if you want e.g. email notification, CIA or roundup integration.
Are the coding style (whitespace) hooks set up? They would be useful to have.

Re. the integration with CIA, Roundup and email, I'm not sure exactly which features are easy to provide - how much work would they mean for you? I don't see any urgent need to implement these right away - perhaps we can add them when some more work has been done, and there are more contributors who can benefit.

Regards,

Vinay Sajip
On 09/28/2012 10:17 AM, Vinay Sajip wrote:
Georg Brandl writes:

I've cloned the bitbucket.org/vinay.sajip/distlib repo to hg.python.org/distlib.
Thank you very much.
At the moment no hooks are set up; let me know if you want e.g. email notification, CIA or roundup integration.
Are the coding style (whitespace) hooks set up? They would be useful to have.
I've now set up the checkwhitespace hook. Let me know if it works as intended.
Re. the integration with CIA, Roundup and email, I'm not sure exactly which features are easy to provide - how much work would they mean for you? I don't see any urgent need to implement these right away - perhaps we can add them when some more work has been done, and there are more contributors who can benefit.
If you can tell me an address to send the notifications to, email is easy to set up. For CIA, I need the user and project names to submit for. The Roundup hook works by sending email to the Roundup email gateway.

Georg
On 9/27/12 6:37 PM, Georg Brandl wrote:
On 09/27/2012 05:49 PM, Tarek Ziadé wrote:
On 9/27/12 5:40 PM, Antoine Pitrou wrote:
Tarek Ziadé writes:

There are two of us maintaining hg.python.org: Georg and I. So, I don't know about Georg, but I don't want to maintain repositories for every third-party library that might one day become part of Python. OTOH, if Georg wants to handle it, then fine

Since I see distutils2, unittest2, stackless and many users' repos in there, please define the exact rules here - rather than your willingness to do the benevolent work - and what you mean by maintaining an extra repo, exactly.

Maintaining a repo means setting it up and possibly changing the configuration (e.g. commit hooks) when required. This is all done by hand by one of us.

My main point is not that hg.p.o is closed to new repositories, but that it is not meant to become a "forge" or an incubator where anyone can create new Python repos. Is distlib important enough yet?
I don't know what you mean by 'important'.

We removed packaging from Python and said we would work on a smaller set for inclusion.

I've also said that I believed it's simpler to include it back, w.r.t. licensing, if it's a code base that's under the contributor agreement.

Last but not least, distlib is the plan forward endorsed by python-dev, so having it in hg.python.org makes that plan more legitimate, no?
No, but I can see why you want it there, and I for one have no problems creating and maintaining that repository.
Ok thanks
I frankly don't follow your reluctance here, and I find your definition of what can be in hg.python.org to be very vague.
And we'd like to keep it that way.
When I asked for a distutils2 repository, it was done in 2 minutes, and I did not have to argue for hours and potentially push the discussion into yet another packaging drama.
This is really annoying :/
Are you sure you're not overreacting?
Of course I am
Tarek Ziadé
I've also said that I believed it's simpler to include it back, w.r.t. licensing, if it's a code base that's under the contributor agreement.
What does that have to do with hg.p.o? You can ask for contributor agreements even if your code is hosted on bitbucket. There's no magical relationship between hg.p.o and contributor agreements.
Last but not least, distlib is the plan forward endorsed by python-dev,
Is it? I haven't seen a PEP or an official decision about that. Just because someone proposed it on a mailing-list doesn't mean it is "endorsed by python-dev".

By the way, you can already create repository clones from the Web interface. They just won't have any fancy configuration (hooks).
This is really annoying :/
Why is it so? Why does being on hg.p.o have anything to do with you being able to work on distlib? As Georg said, this is not a professional service that happens to be offered by a company. This is run by volunteers, and I doubt any of us enjoys sysadmin-related tasks.

Regards

Antoine.
On 9/28/12 12:55 AM, Antoine Pitrou wrote:
Last but not least, distlib is the plan forward endorsed by python-dev,

Is it? I haven't seen a PEP or an official decision about that. Just because someone proposed it on a mailing-list doesn't mean it is "endorsed by python-dev".
We discussed this with Vinay, Nick et al. on python-dev, based on Nick's document that describes what 'distlib' is. The document has changed since then: http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/core_packagin...

But the idea was to create a subset of 4 or 5 modules that implement the various PEPs. Vinay started to work on this and made progress. When I said "endorsed", I mean that most of the people on python-dev that care about packaging agreed or did not disagree.

Now, if you disagree, please say it. Or, if you need an official decision, maybe we need to first declare who the current packaging BDFL is? And since you seem interested in the topic, maybe you could take that role?

Cheers

Tarek
Tarek Ziadé
On 9/28/12 12:55 AM, Antoine Pitrou wrote:
Last but not least, distlib is the plan forward endorsed by python-dev,

Is it? I haven't seen a PEP or an official decision about that. Just because someone proposed it on a mailing-list doesn't mean it is "endorsed by python-dev".
We discussed this with Vinay, Nick et al. on python-dev, based on Nick's document that describes what 'distlib' is.
The document has changed since then: http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/core_packaging_api.html
Yep, so it's still a draft, even though it may be promising.
Now, if you disagree please say it. Or if you need an official decision, we need to first declare who is the current packaging BDFL maybe ?
And since you seem interested in the topic maybe you could take that role ?
I have no problem with distlib in principle. However, if I were the packaging BDFL, my decision would be "integrate it all in distutils" :-)

By the way, if you want to help with hg.python.org and manage the distlib repo there, you can send an email to the infrastructure list (http://mail.python.org/mailman/listinfo/infrastructure) and ask for ssh access to the virtual machine.

Regards

Antoine.
On Fri, Sep 28, 2012 at 9:37 AM, Tarek Ziadé
On 9/28/12 12:55 AM, Antoine Pitrou wrote:
Last but not least, distlib is the plan forward endorsed by python-dev,
Is it? I haven't seen a PEP or an official decision about that. Just because someone proposed it on a mailing-list doesn't mean it is "endorsed by python-dev".
We discussed this with Vinay, Nick et al. on python-dev, based on Nick's document that describes what 'distlib' is.
The document has changed since then, http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/core_packagin...
Yeah, don't read too much into the current state of that - it will eventually become a proposal for a standardised *in-memory* data structure to better support metadata interoperability between packaging tools, but it isn't there yet (although scrubbing every reference to "JSON file" and replacing it with "API data structure" would get you close - think of the overall idea as "like dictConfig, but for distribution metadata rather than logging configurations"; we need something like that in order to allow import hooks to correctly supply distribution metadata).

The original email thread from the removal of packaging from 3.3 is probably a better point of reference, with a concrete "distlib" PEP still on the to-do list.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
The wheel strategy has just been to implement the PEPs in setuptools and in pkg_resources. It is nearly effortless to do so, since the PEPs are so similar to the existing system of eggs. Plus, you get to play with it using 900 * 1.6 ** (year - 2005) packages.

Afterwards, with binary packages, you can deprecate the distutils build system. You still have to build packages with distutils forever, but it doesn't matter as much, because you don't have to do it on your production machine. When a particular package has trouble with distutils, you tell them to choose among a healthy ecosystem of superior build systems rather than trying to add features to distutils.

Are we trying to kill setuptools? I'm not entirely sure, but we should stop trying to do that. The migration should take essentially forever, as soon as it makes sense for each PyPI publisher.
On Fri, Sep 28, 2012 at 10:07 AM, Daniel Holth
Are we trying to kill setuptools? I'm not entirely sure, but we should stop trying to do that. The migration should take essentially forever as soon as it makes sense for each pypi publisher.
I'd certainly like to kill easy_install, and see any popular elements of setuptools metadata become officially defined *independently* of any given implementation.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
I'd certainly like to kill easy_install, and see any popular elements of setuptools metadata become officially defined *independently* of any given implementation.
I would like to kill distutils without killing setuptools, if that makes any sense. I think the most important thing to do is to clear up the confusion and doubt about packaging, so you can just relax and make a package rather than wondering if the new packaging stuff is working yet, or if what you are doing is going to suddenly stop working. So my message is: relax, make your package how you please, it will continue to be useful, packaging is under control.

I think we have the documenting-the-egg-metadata bit pretty well covered, at least in e-mail and in the existing PEAK documentation.

The next step for me is probably to give setup.cfg's [metadata] section another shot. It would be easy to implement in setuptools and pip, and we would solve the "what do I have to install to run setup.py?" problem and the "where do I put the environment markers?" problem (because install_requires=[] can't parse them).

setup.cfg:

    [metadata]
    setup_requires_dist =
        one
        two ; python_version < '4.0'
    requires_dist = ...
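One reason the [metadata] approach is cheap to adopt is that consumers need nothing beyond the stdlib. A sketch (field names as in the example above):

    try:
        import configparser                    # Python 3
    except ImportError:
        import ConfigParser as configparser    # Python 2

    cp = configparser.ConfigParser()
    cp.read('setup.cfg')
    raw = cp.get('metadata', 'setup_requires_dist')
    # Multi-line values arrive newline-separated; environment markers
    # stay attached to their requirement ("two ; python_version < '4.0'").
    requires = [line.strip() for line in raw.splitlines() if line.strip()]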
On 28 September 2012 18:05, Nick Coghlan
On Fri, Sep 28, 2012 at 10:07 AM, Daniel Holth
wrote: Are we trying to kill setuptools? I'm not entirely sure, but we should stop trying to do that. The migration should take essentially forever as soon as it makes sense for each pypi publisher.
I'd certainly like to kill easy_install, and see any popular elements of setuptools metadata become officially defined *independently* of any given implementation.
What I would like to see is:

1. Every packaging tool creating standards-based (dist-info format) metadata files. That includes distutils (the "feature freeze" notwithstanding), as that would catch many existing packages. I'd also like to see distribute switch to using dist-info rather than egg-info, although I'm not sure if that's a separate thing or if distribute just reuses distutils here. New tools like Bento should be using the dist-info format. Tools that consume metadata (e.g., pip install/pip freeze) could then focus on the dist-info format, retaining any other support as legacy-only.

2. Some level of standardised functionality for building and installing. By "standardised", I mean that given *any* sdist, no matter what build tool it uses under the hood, there is a way of saying "build this, and put the output into the following directories". For distutils, this is --install-XXX. Distribute complicates this by changing the meaning of these options (you need --single-version-externally-managed and --record) but otherwise follows the standard. Add an API to wrap this (including autodetection of setuptools/distribute, and hooks for other tools to integrate in), and you're done. A wrapper like this might be a good thing to have in distlib (a rough sketch of the underlying interface follows below).

2a. A common, introspectable format for describing a distribution and its build. Not necessary for the above, but useful separately, to allow tools to check whether they need to install dependencies before building, for example. I don't see a way of having this affect existing packages, though, short of some form of setup.py converter. So it'll only be viable to depend on this when a new build tool has established itself.

3. A standard layout for installed files. This gets harder, because OS conventions come into play. But basically, the sysconfig locations are the way to encapsulate this. Oh, and kill the egg format :-) (Seriously, does the egg format offer any benefit other than the old multiversion support that no-one uses any more? If not, it probably should be allowed to die off.)

4. A standard binary install format (wheel! :-))

5. Conversion tools to build things like RPMs or MSIs from a wheel would likely be the best way to integrate platform-format installers.

The other aspect of easy_install is the package location bit. I believe Vinay has added something like this (the PyPI search and web scraping code) to distlib.

Paul.
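To make point 2 concrete, the distutils-level interface described there amounts to something like the following sketch; the target directories are made up, and a real wrapper would derive them from sysconfig and detect setuptools/distribute before adding the last two options:

    import subprocess
    import sys

    cmd = [
        sys.executable, 'setup.py', 'install',
        '--install-purelib=/tmp/target/purelib',
        '--install-platlib=/tmp/target/platlib',
        '--install-scripts=/tmp/target/scripts',
        '--install-data=/tmp/target/data',
        # Only needed when setuptools/distribute has patched install:
        '--single-version-externally-managed',
        '--record=/tmp/target/RECORD',
    ]
    # Run in the unpacked sdist directory (path is illustrative).
    subprocess.check_call(cmd, cwd='/path/to/unpacked/sdist')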
On Thu, Oct 4, 2012 at 8:55 AM, Paul Moore
On 28 September 2012 18:05, Nick Coghlan
wrote: On Fri, Sep 28, 2012 at 10:07 AM, Daniel Holth
wrote: Are we trying to kill setuptools? I'm not entirely sure, but we should stop trying to do that. The migration should take essentially forever as soon as it makes sense for each pypi publisher.
I'd certainly like to kill easy_install, and see any popular elements of setuptools metadata become officially defined *independently* of any given implementation.
3. A standard layout for installed files. This gets harder, because OS conventions come into play. But basically, the sysconfig locations are the way to encapsulate this. Oh, and kill the egg format :-) (Seriously, does the egg format offer any benefit other than the old multiversion support that no-one uses any more? If not, it probably should be allowed to die off).
4. A standard binary install format (wheel! :-))
I envision a system that uses a (PEP-specified) site-packages-directory-with-one-distribution as the interface between build and install. The build system always installs into an empty directory with a predictable layout:

    module.py
    package/__init__.py
    distname-1.0.data/scripts/a-script.py
    distname-1.0.dist-info/METADATA

The build system does not have to worry about annoying details like counting the number of packages that use a particular __init__.py, or rename-installed-configuration-file-to-.orig-only-if-it-was-modified-from-the-previous-install. You don't have to give the build system elevated privileges, because it does not install. Once the build system is done, the installer knows how to copy the dist to its destination paths (potentially running as root, or on another machine that does not have a compiler) instead of necessarily putting it all under a single path. The installer worries about not overwriting config files and so forth.

When the interface between build and install is simply "build a standard thing" -> "install a standard thing", it's not a problem to implement features like "index the metadata with sqlite" that you could never put in all the build systems. The other huge advantage, to all the setuptools and distutils haters out there, is that you no longer need to have the build system installed in the target environment, so you can use setuptools to build without having to install it to run.

The egg format is technically equivalent to a site-packages directory with only one thing in it. The only things we change are to rename EGG-INFO to distname-1.0.dist-info, rename the searchable name PKG-INFO to the un-googleable name METADATA, and put more of the metadata into the one file, METADATA. Nothing is lost, so people who need to use lots of site-packages directories can still do so. Maybe those people will name them distname-1.0.dist/, or, like gem, put them in a folder where every subfolder is a dist. I don't see a reason to violently kill eggs as long as it is very easy to avoid using them.

Instead of a RESOURCES file with all the files in a dist listed line-by-line (slow), the installer should be able to write the list of installation prefixes to a file distribution/_install_locations.py (the name is configurable, and it is only written on request). This idea comes from Bento:

    purelib = "../relative/path/to-purelib"
    platlib = "../relative/path/to-platlib"

IIRC, distutils in the standard library only knows how to generate PKG-INFO at the root of an sdist, and it never does anything with the metadata. It would not be a big deal to copy that into a .dist-info directory. I would support a subset of the pysetup setup.cfg [metadata] for setup- and install- requirements, instead of adding them to setup.py.
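A sketch of the install half of that split: given the staged one-distribution directory shown above, the installer mostly just relocates files (the mapping and paths here are illustrative, not a real installer):

    import os
    import shutil

    def install_staged(staged_dir, purelib_dest):
        # Copy modules, packages and the .dist-info metadata into the
        # purelib destination; a real installer would also map the
        # *.data categories (scripts, headers, ...) to their own
        # destinations and record every installed file.
        for name in os.listdir(staged_dir):
            if name.endswith('.data'):
                continue
            src = os.path.join(staged_dir, name)
            dest = os.path.join(purelib_dest, name)
            if os.path.isdir(src):
                shutil.copytree(src, dest)
            else:
                shutil.copy2(src, dest)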
On Thu, Oct 4, 2012 at 6:25 PM, Paul Moore
2. Some level of standardised functionality for building and installing. By "standardised", I mean that given *any* sdist, no matter what build tool it uses under the scenes, there is a way of saying "build this, and put the output into the following directories". For distutils, this is --install-XXX. Distribute complicates this by changing the meaning of these options (you need --single-version-externally-managed and --record) but otherwise follows the standard. Add an API to wrap this (include autodetection of setuptools/distribute, and hooks for other tools to integrate in, and you're done). A wrapper like this might be a good thing to have in distlib.
Note that one of the core goals of wheel is to finally break the distutils conflation of "build" and "install". They're different things, but the lack of a platform-neutral binary format has meant that they've largely been conflated in practice for Python distribution. Any new build hook should be as hands off as possible and have a wheel (or a wheel-format directory) as the result, which is then installed according to the normal rules.
5. Conversion tools to build things like RPMs or MSIs from a wheel would likely be the best way to integrate platform-format installers.
I actually need to be able to generate an SRPM from an sdist, as SRPM -> RPM is the only build system Koji understands. However, the advantage of wheel is that it should be possible to automatically generate a SPEC file which is actually half decent (by running the build hook and introspecting the resulting wheel).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Thu, Oct 4, 2012 at 5:47 PM, Nick Coghlan
On Thu, Oct 4, 2012 at 6:25 PM, Paul Moore
wrote: 2. Some level of standardised functionality for building and installing. By "standardised", I mean that given *any* sdist, no matter what build tool it uses under the scenes, there is a way of saying "build this, and put the output into the following directories". For distutils, this is --install-XXX. Distribute complicates this by changing the meaning of these options (you need --single-version-externally-managed and --record) but otherwise follows the standard. Add an API to wrap this (include autodetection of setuptools/distribute, and hooks for other tools to integrate in, and you're done). A wrapper like this might be a good thing to have in distlib.
Note that one of the core goals of wheel is to finally break the distutils conflation of "build" and "install". They're different things, but the lack of a platform-neutral binary format has meant that they've largely been conflated in practice for Python distribution. Any new build hook should be as hands off as possible and have a wheel (or a wheel-format directory) as the result, which is then installed according to the normal rules.
5. Conversion tools to build things like RPMs or MSIs from a wheel would likely be the best way to integrate platform-format installers.
I actually need to be able to generate an SRPM from an sdist, as SRPM -> RPM is the only build system Koji understands. However, the advantage of wheel is that it should be possible to automatically generate a SPEC file which is actually half decent (by running the build hook and introspecting the resulting wheel).
Note that the build manifest concept as used in bento (and used by wheel as well, I believe) is designed specifically to allow conversion between binary formats (where it makes sense, of course). It is still in flux, but I could convert eggs to Windows installers.

David
Nick Coghlan
The document has changed since then,
http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/core_packagin...
I read from your page there that Donald Stufft is working on a JSON-based metadata format. I've been looking at the same thing - a more flexible metadata format which directly maps to dicts - but I used YAML, as looking at too much JSON gives me eye-strain from all the extraneous quotes and braces.

I believe that JSON is the right format to use at the moment, because PyYAML still has some bugs which I've run into while doing this work (also, of course, it's not in the stdlib). As the formats are readily interchangeable, there might be interest here in looking at the package.yaml that I've come up with.

Since the metadata needs to support both the existing metadata and the additional things that e.g. setuptools supports via additional kwargs to setup(), I put together an ugly hack where I essentially mocked parts of distutils and setuptools, including the setup() call. This allows me to generate the YAML format automatically from most distributions on PyPI, using their setup.py.

Here's a gist with sample package.yaml files automatically generated from PyPI downloads of SQLAlchemy 0.7.8, Jinja2 2.6, Flask 0.9 and wheel 0.9.4:

https://gist.github.com/3803556

The JSON format of the metadata is actually appended as a comment on the last line of the YAML metadata (I use that to report YAML bugs).

I've not yet documented the schema for the metadata, as I'm still thinking about the details.

I ran my hack on around 18,000 PyPI releases (basically, all the latest releases which are hosted on PyPI). For all but around 1300, I was able to generate package.yaml files. The ones which failed are those where no setup.py is present, or where it's present but can't be imported because it assumes that some third-party package is available.

Comments welcome.

Regards,

Vinay Sajip
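The mocking hack can be pictured like this - purely illustrative, not the actual code: replace setup() before executing a project's setup.py, and capture the keyword arguments instead of building anything:

    import json

    captured = {}

    def fake_setup(**kwargs):
        captured.update(kwargs)

    import distutils.core
    import setuptools
    distutils.core.setup = fake_setup
    setuptools.setup = fake_setup

    # Execute the project's setup.py with the patched setup() in place;
    # 'from distutils.core import setup' inside it now binds fake_setup.
    with open('setup.py') as f:
        code = compile(f.read(), 'setup.py', 'exec')
    exec(code, {'__name__': '__main__'})

    # 'captured' now holds the metadata kwargs, ready to serialise.
    print(json.dumps(captured, default=str, indent=2))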
I like this kind of study. Fixing 1300 packages sounds a lot more manageable than fixing 18,000. (I took a similar look at setup.py, but with the ast module instead of actually running the things. Your method is probably more accurate.) It would be very cool to know how many packages use if: statements to affect install_requires...

I have tried to include the vital setuptools metadata in Metadata 1.3 without the JSON. It maps to an ordered dict. A few files like entry_points.txt stay in their own files, and are better off (more performant) that way, since you may be able to avoid parsing METADATA at all if you just want to know whether a package has entry points (os.path.exists('entry_points.txt')).

Did you look at the bento ipkg (internal package metadata) format? A barebones one is at https://gist.github.com/3715068

Is there a good "download the latest versions of everything hosted on PyPI" script? Mine was pretty terrible, as it could not resume after a crash or after the data got stale.
On Sun, Sep 30, 2012 at 4:55 AM, Daniel Holth
I like this kind of study. Fixing 1300 packages sounds a lot more manageable than fixing 18,000. (I took a similar look at setup.py but with the ast module instead of actually running the things. Your method is probably more accurate.) It would be very cool to know how many packages use if: statements to affect install_requires...
Note that being able to convert a package does not mean the conversion is working. You need to make sure that installing something from this new format gives the same thing as installing from the setup.py. That's harder to test, obviously.
Is there a good "download the latest versions of everything hosted on pypi" script? Mine was pretty terrible as it could not resume after a crash or after the data got stale.
I would be interested in that as well, I wanted to do the same kind of analysis for Bento's convert command. David
David Cournapeau
Note that being able to convert a package does not mean the conversion is working. You need to make sure that installing something from this new format gives the same thing as installing from the setup.py. That's harder to test, obviously.
Right, and I've considered this. The most basic additional test I've done is to do the equivalent of "python setup.py sdist" using *only* the package.yaml, actually running "python setup.py sdist", and then comparing the two archives. While I don't have the stats to hand, most of the 18,000 PyPI packages are such that the comparison of archives shows no meaningful differences. Many of the failures are due to custom code in setup.py, such as command classes. My work so far also doesn't faithfully mock e.g. Cython or numpy, so calls to their customisations of distutils from setup.py aren't captured, and in those cases the resulting package.yaml is incomplete. However, that's perhaps just a matter of spending some more time on the mocking approach.

Obviously checking sdist is just a first step, but it's disappointing to see how many packages on PyPI just fail the "download from PyPI, then run python setup.py sdist" test because of e.g. importing packages which don't exist. To my mind, any source package on PyPI should be downloadable and be able to have an sdist run on it to regenerate the archive, without needing any other packages to be present: it's a source package already, right? But perhaps that's what you get by allowing arbitrary code in setup.py, and eliminating setup.py is definitely a step in the right direction.
Is there a good "download the latest versions of everything hosted on pypi" script? Mine was pretty terrible as it could not resume after a crash or after the data got stale.
I would be interested in that as well, I wanted to do the same kind of analysis for Bento's convert command.
Not as a standalone script that anyone can use, unfortunately. I don't have the space to store all those downloads, and stuff on PyPI keeps getting updated anyway: so what I did was to run a first pass using the XML-RPC API to get a list of package, version and archive URLs into a text file; my scripts then pick up individual packages, download them and do the mocking/capture to package.yaml, followed by the sdist comparison. I just use grep to filter the archive list to determine which packages to process.

Every now and again I update that text file of archives and versions, see what changed, and arrange to run my code over updated and new packages. I just keep the package.yaml files and the listings of the archives produced by the distutils/setuptools sdist operation and my package.yaml-using version.

Regards,

Vinay Sajip
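The first pass could look something like this sketch against PyPI's XML-RPC interface (the output file name is arbitrary):

    try:
        import xmlrpc.client as xmlrpclib   # Python 3
    except ImportError:
        import xmlrpclib                    # Python 2

    client = xmlrpclib.ServerProxy('http://pypi.python.org/pypi')

    with open('archives.txt', 'w') as out:
        for name in client.list_packages():
            # Non-hidden releases only; pass show_hidden=True for all.
            for version in client.package_releases(name):
                for info in client.release_urls(name, version):
                    out.write('%s %s %s\n' % (name, version, info['url']))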
I have some unpublished work I should publish.

Part of the point of what I'm trying to do is to define a standard for what is inside a package, but not really for how you take a particular set of files and turn it into that. So, if you want to edit YAML, you can have a yaml file and a package creation tool that turns that YAML into the standard JSON that gets added to the package. The same can be said for a python file, or a setup.cfg, or whatever. Ideally, the roles of package creation, building, and installation should be able to be completely separate. So my goal is to facilitate that by creating a standard way to describe all the data about a distribution, including extensible data, in a way that any tool can serialize or deserialize it losslessly.

On Saturday, September 29, 2012 at 5:54 AM, Vinay Sajip wrote:
Nick Coghlan writes:

The document has changed since then, http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/core_packagin...
I read from your page there that Donald Stufft is working on a JSON-based metadata format. I've been looking at the same thing - a more flexible metadata format which directly maps to dicts - but I used YAML, as looking at too much JSON gives me eye-strain from all the extraneous quotes and braces.
I believe that JSON is the right format to use at the moment, because PyYAML still has some bugs which I've run into while doing this work (also, of course, it's not in the stdlib). As the formats are readily interchangeable, there might be interest here in looking at the package.yaml that I've come up with.
Since the metadata needs to support both the existing metadata and the additional things that e.g. setuptools supports via additional kwargs to setup(), I put together an ugly hack where I essentially mocked parts of distutils and setuptools, including the setup() call. This allows me to generate the YAML format automatically from most distributions on PyPI, using their setup.py.
Here's a GIST with sample package.yaml files automatically generated from PyPI downloads of SQLAlchemy 0.7.8, Jinja2 2.6, Flask 0.9 and wheel 0.9.4.
https://gist.github.com/3803556
The JSON format of the metadata is actually appended as a comment on the last line of the YAML metadata (I use that to report YAML bugs).
I've not yet documented the schema for the metadata, as I'm still thinking about the details.
I ran my hack on around 18,000 PyPI releases (basically, all the latest releases which are hosted on PyPI). For all but around 1300, I was able to generate package.yaml files. The ones which failed are those where no setup.py is present, or it's present, but can't be imported because it assumes that some third-party package is available.
Comments welcome.
Regards,
Vinay Sajip
Why not just let any line in PKG-INFO that starts with "json/tag name:" decode as JSON?
On Sep 30, 2012, at 1:03 AM, Donald Stufft wrote:
I have some unpublished work I should publish.
Part of the point of what I'm trying to do is to define a standard for what is inside a package, but not really for how you take a particular set of files and turn it into that. So if you want to edit yaml, you can have a yaml file and a package creation tool that turns that yaml into the standard json that gets added to the package. The same can be said for a Python file or a setup.cfg or whatever. Ideally the roles of package creation, building, and installation should be completely separable. So my goal is to facilitate that by creating a standard way to describe all the data about a distribution, including extensible data, in a way that any tool can serialize or deserialize losslessly.
On Saturday, September 29, 2012 at 5:54 AM, Vinay Sajip wrote:
Nick Coghlan writes:
The document has changed since then, http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/core_packagin...
I read from your page there that Donald Stufft is working on a JSON-based metadata format. I've been looking at the same thing - a more flexible metadata format which maps directly to dicts - but I used YAML, as looking at too much JSON gives me eye-strain from all the extraneous quotes and braces.
I believe that JSON is the right format to use at the moment, because PyYAML still has some bugs which I've run into while doing this work (also, of course, it's not in the stdlib). As the formats are readily interchangeable, there might be interest here in looking at the package.yaml that I've come up with.
Since the metadata needs to support both the existing metadata and the additional things that e.g. setuptools supports via additional kwargs to setup(), I put together an ugly hack where I essentially mocked parts of distutils and setuptools, including the setup() call. This allows me to generate the YAML format automatically from most distributions on PyPI, using their setup.py.
Here's a Gist with sample package.yaml files automatically generated from PyPI downloads of SQLAlchemy 0.7.8, Jinja2 2.6, Flask 0.9 and wheel 0.9.4.
https://gist.github.com/3803556
The JSON format of the metadata is actually appended as a comment on the last line of the YAML metadata (I use that to report YAML bugs).
I've not yet documented the schema for the metadata, as I'm still thinking about the details.
I ran my hack on around 18,000 PyPI releases (basically, all the latest releases which are hosted on PyPI). For all but around 1300, I was able to generate package.yaml files. The ones which failed are those where no setup.py is present, or it's present, but can't be imported because it assumes that some third-party package is available.
Comments welcome.
Regards,
Vinay Sajip
On Sunday, September 30, 2012 at 1:33 AM, Daniel Holth wrote:
Why not just let any line in PKG-INFO that starts with "json/tag name:" decode as JSON?
Why use a format that requires custom parsing knowledge to create an internal Python representation? It cannot even accurately represent all of the Metadata 1.2 data. The Project-URL field is a good example: the parser has to know that in order to take "Docs, http://docs.com/" and turn it into {"Docs": "http://docs.com/"}, it has to split on the "," and strip the whitespace. Now suppose in the future we add another item that needs a namespace, but this one needs to have "," in its value, so for this field we decide to use ":" instead of ",". That leads to inconsistent syntax that needs to be reimplemented by anyone who wants to parse the file. The lack of power of the current encoding already causes issues that need custom bits layered on top, so why not just use a standard serialization format that can properly serialize all of the values?
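To spell out the kind of format-specific knowledge I mean, here is essentially the whole "parser" for that one field - a sketch, not anyone's official code:

    def parse_project_url(value):
        # Ad-hoc rule for Project-URL: split on the first comma, strip.
        label, _, url = value.partition(',')
        return {label.strip(): url.strip()}

    print(parse_project_url('Docs, http://docs.com/'))
    # {'Docs': 'http://docs.com/'}

Every structured field needs its own little rule like this, whereas a standard serialization format gives you all of them for free.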
On Sep 30, 2012, at 2:07 AM, Donald Stufft wrote:
On Sunday, September 30, 2012 at 1:33 AM, Daniel Holth wrote:
Why not just let any line in PKG-INFO that starts with "json/tag name:" decode as JSON?
Why use a format that requires custom parsing knowledge to create an internal Python representation? It cannot even accurately represent all of the Metadata 1.2 data. The Project-URL field is a good example: the parser has to know that in order to take "Docs, http://docs.com/" and turn it into {"Docs": "http://docs.com/"}, it has to split on the "," and strip the whitespace. Now suppose in the future we add another item that needs a namespace, but this one needs to have "," in its value, so for this field we decide to use ":" instead of ",". That leads to inconsistent syntax that needs to be reimplemented by anyone who wants to parse the file.
The lack of power of the current encoding already causes issues that need custom bits layered on top, so why not just use a standard serialization format that can properly serialize all of the values?
Ok
On Sun, Sep 30, 2012 at 6:03 AM, Donald Stufft wrote:
I have some unpublished work I should publish.
Part of the point of what I'm trying to do is to define a standard for what is inside a package, but not really for how you take a particular set of files and turn it into that. So if you want to edit yaml, you can have a yaml file and a package creation tool that turns that yaml into the standard json that gets added to the package. The same can be said for a Python file or a setup.cfg or whatever. Ideally the roles of package creation, building, and installation should be completely separable. So my goal is to facilitate that by creating a standard way to describe all the data about a distribution, including extensible data, in a way that any tool can serialize or deserialize losslessly.
Note that all this work has already been done in Bento. I understand the appeal of using an existing format like yaml, but it is not clear to me how one can handle conditionals with it, and I think you want to handle conditionals in there (for platform-dependent dependencies). Bento also has a command to convert setup.py-based projects to bento's internal format; adapting it to use another format should not be too difficult.
David
On Sun, Sep 30, 2012 at 8:59 AM, David Cournapeau wrote:
On Sun, Sep 30, 2012 at 6:03 AM, Donald Stufft wrote:
I have some unpublished work I should publish.
Part of the point of what I'm trying to do is to define a standard for what is inside a package, but not really for how you take a particular set of files and turn it into that. So if you want to edit yaml, you can have a yaml file and a package creation tool that turns that yaml into the standard json that gets added to the package. The same can be said for a Python file or a setup.cfg or whatever. Ideally the roles of package creation, building, and installation should be completely separable. So my goal is to facilitate that by creating a standard way to describe all the data about a distribution, including extensible data, in a way that any tool can serialize or deserialize losslessly.
Note that all this work has already been done in Bento.
I am a huge fan of Bento. I will be converting my packages to use it as soon as is practical. I would like to add the "where are things installed.py" generation to the wheel standard somehow.
On Sunday, September 30, 2012 at 8:59 AM, David Cournapeau wrote:
Note that all this work has already been done in Bento.
I understand the appeal of using an existing format like yaml, but it is not clear to me how one can handle conditionals with it, and I think you want to handle conditionals in there (for platform-dependent dependencies).
Bento also has a command to convert setup.py-based projects to bento's internal format; adapting it to use another format should not be too difficult.
David
Instead of conditionals, the existing ideas use an environment marker, so instead of (pseudo format, I just woke up):
    if python_version < 2.6:
        require: simplejson

you would do:

    require: simplejson; python_version < 2.6

This gives you the same sort of ability; however, instead of using if statements, it encodes the condition into the requirement string. I'm not completely in love with either system, but I prefer the ";" solution over conditionals because it makes the metadata static no matter what system you run it on, and it makes it easy to use an existing format.
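For what it's worth, consuming the marker form needs no special parser at all - a rough sketch:

    def split_requirement(req):
        # "simplejson; python_version < '2.6'" -> (name, marker or None)
        name, _, marker = req.partition(';')
        return name.strip(), marker.strip() or None

    print(split_requirement("simplejson; python_version < '2.6'"))
    # ('simplejson', "python_version < '2.6'")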
It's the same. Someone will write a bento-conditionals-to-PEP-markers compiler and it will be trivial.
On Sep 30, 2012 12:07 PM, "Donald Stufft" wrote:
On Sunday, September 30, 2012 at 8:59 AM, David Cournapeau wrote:
Note that all this work has already been done in Bento.
I understand the appeal of using an existing format like yaml, but it is not clear to me how one can handle conditionals with it, and I think you want to handle conditionals in there (for platform-dependent dependencies).
Bento also has a command to convert setup.py-based projects to bento's internal format; adapting it to use another format should not be too difficult.
David
Instead of conditionals, the existing ideas use an environment marker, so instead of (pseudo format, I just woke up):
    if python_version < 2.6:
        require: simplejson
You would do:

    require: simplejson; python_version < 2.6
This gives you the same sort of ability; however, instead of using if statements, it encodes the condition into the requirement string. I'm not completely in love with either system, but I prefer the ";" solution over conditionals because it makes the metadata static no matter what system you run it on, and it makes it easy to use an existing format.
On Sunday, September 30, 2012 at 12:38 PM, Daniel Holth wrote:
It's the same. Someone will write a bento-conditionals-to-PEP-markers compiler and it will be trivial.
Right, same concept; the difference being that one requires a specialized parser and the other uses a standard parser available in every language.
On Sun, Sep 30, 2012 at 5:07 PM, Donald Stufft wrote:
On Sunday, September 30, 2012 at 8:59 AM, David Cournapeau wrote:
Note that all this work has already been done in Bento.
I understand the appeal of using an existing format like yaml, but it is not clear to me how one can handle conditionals with it, and I think you want to handle conditionals in there (for platform-dependent dependencies).
Bento also has a command to convert setup.py-based projects to bento's internal format; adapting it to use another format should not be too difficult.
David
Instead of conditionals, the existing ideas use an environment marker, so instead of (pseudo format, I just woke up):
    if python_version < 2.6:
        require: simplejson
You would do:

    require: simplejson; python_version < 2.6
Right, in that case it would work, but in my experience it is important to be able to apply conditionals to more than just requirements (package definitions and so on). I am not suggesting something very complicated (we don't want to re-create a language), but more something like cabal (see conditionals in http://www.haskell.org/cabal/users-guide/developing-packages.html#package-de...), or even RPM (http://www.rpm.org/max-rpm/s1-rpm-inside-conditionals.html). This has the advantage of using something that has already been battle-tested for several years.
But again, syntax is a really minor point, and what matters most are the features. I was not able to express what I wanted with yaml for bento, but I would be more than happy to scrap the custom format if someone manages to.
David
On Sunday, September 30, 2012 at 6:21 PM, David Cournapeau wrote:
Right, in that case it would work, but in my experience it is important to be able to apply conditionals to more than just requirements (package definitions and so on).
A significant problem is caused by the allowance of if statements in setup.py when trying to generically pull data out. Obviously the limited conditionals of Bento make that easier, but you still need to essentially maintain a matrix of every possible combination of conditions in order to accurately represent the data when you don't know what the target system is going to look like. If other fields benefit from the environment markers, then by all means let's add them to other fields.
A static piece of metadata that is unchanging is a powerful tool on the index side. It allows some of the stuff I've been experimenting with to work very cleanly, such as being able to fetch an entire dependency tree in one request. Now again, I'm talking solely about the in-distribution format and not about the tool the developer uses to create that file. So, for example, bento could easily keep its conditional-based processing of the info file; it would just need to be smart enough to "compact" it down to the environment markers in the standard file included with the distribution.
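To illustrate the "compacting" I mean - purely hypothetical names, this isn't bento's API, just the shape of the transformation:

    # Sketch: flatten conditional requirements into marker-annotated strings.
    # The condition-to-marker table is an assumption for illustration.
    MARKERS = {'os(windows)': "sys.platform == 'win32'"}

    conditional = [
        (None, ['docutils', 'sphinx']),   # unconditional
        ('os(windows)', ['pywin32']),     # Windows only
    ]

    def compact(conditional):
        flat = []
        for condition, deps in conditional:
            marker = MARKERS.get(condition)
            for dep in deps:
                flat.append('%s; %s' % (dep, marker) if marker else dep)
        return flat

    print(compact(conditional))
    # ['docutils', 'sphinx', "pywin32; sys.platform == 'win32'"]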
On Sun, Sep 30, 2012 at 11:35 PM, Donald Stufft wrote:
On Sunday, September 30, 2012 at 6:21 PM, David Cournapeau wrote:
Right, in that case it would work, but in my experience it is important to be able to apply conditionals to more than just requirements (package definitions and so on).
A significant problem is caused by the allowance of if statements in setup.py when trying to generically pull data out. Obviously the limited conditionals of Bento make that easier, but you still need to essentially maintain a matrix of every possible combination of conditions in order to accurately represent the data when you don't know what the target system is going to look like. If other fields benefit from the environment markers, then by all means let's add them to other fields.
A static piece of metadata that is unchanging is a powerful tool on the index side. It allows some of the stuff I've been experimenting with to work very cleanly, such as being able to fetch an entire dependency tree in one request.
I am not sure I understand the argument: whatever the syntax, if the feature is there, you will have the same problem? The fact that it is used by existing solutions tends to convince me it is not a problem (cabal works much better than distutils etc. in pretty much every way, including installing things with dependencies from their package servers, not to speak of RPM). This could be abused, of course, but it should not be too difficult to give suggestions in those cases (one big advantage of whatever-format-we-want compared to setup.py).
Conditional markers do make the dependencies harder to get right, as conditionals in dependencies + provides are what makes dependency graph building complicated. It seems this can be solved neatly, though (see http://www.slideshare.net/naderman/dependency-resolution-with-sat-symfony-li...).
David
On Sunday, September 30, 2012 at 6:50 PM, David Cournapeau wrote:
I am not sure I understand the argument: whatever the syntax, if the feature is there, you will have the same problem? The fact that it is used by existing solutions tends to convince me it is not a problem (cabal works much better than distutils etc. in pretty much every way, including installing things with dependencies from their package servers, not to speak of RPM).
I have a site that pulls data from Python packages; one of my problems of late has been pulling requirements data out of a setuptools setup.py. Simply fetching the data for my current system is easy, but because of the use of conditionals that affect the final outcome of the metadata, in order to make sure I get a true and complete accounting of all the metadata I would have to anticipate every combination of conditions used inside that file. Now as far as I can tell, the bento.info conditionals have that same problem, except that they have a much narrower scope, so they have a smaller matrix.
How would I, given a package that has an ``if os(windows)`` conditional, get a representation of that file's metadata as if it were on Windows? I would need to override what os(windows) is? Now if this same package also has conditionals for flags, the matrix becomes larger and larger, and I need to either create my own custom parser that just ignores or annotates the conditionals, or use the official parser and run it through once per item in the matrix to get the "whole story". Am I incorrect in that?
Additionally, conditionals as they are implemented in bento (to my understanding, having never used bento but having read the docs and some of its source code) complicate the parser and confuse reading a file with understanding a file. What additional power or benefit does:

    Library:
        InstallRequires: docutils, sphinx
        if os(windows):
            pywin32

provide over:

    {
        "Library": {
            "InstallRequires": [
                "docutils",
                "sphinx",
                "pywin32; sys.platform == win32"
            ]
        }
    }

(Note I dislike the ``; <foo>`` syntax too, but I dislike the conditionals more.)
Maybe I'm missing some powerful construct in the bento.info that warrants throwing out a lot of hard work and testing by a lot of individuals on these common formats, but I just don't see it in merely providing a simple conditional statement. The ``bento.parser`` package contains ~1700 SLOC now; without investigating that terribly deeply, I'm willing to bet almost all of that could be thrown out by using a standard format. I know you've mentioned Cabal a lot when talking about bento, and it's obvious it's been a very big influence on your design.
Conditional markers do make the dependencies harder to get right, as conditionals in dependencies + provides are what makes dependency graph building complicated. It seems this can be solved neatly, though (see http://www.slideshare.net/naderman/dependency-resolution-with-sat-symfony-li...).
Good link :)
On Sun, Sep 30, 2012 at 7:40 PM, Donald Stufft wrote:
On Sunday, September 30, 2012 at 6:50 PM, David Cournapeau wrote:
I am not sure I understand the argument: whatever the syntax, if the feature is there, you will have the same problem? The fact that it is used by existing solutions tends to convince me it is not a problem (cabal works much better than distutils etc. in pretty much every way, including installing things with dependencies from their package servers, not to speak of RPM).
I have a site that pulls data from Python packages; one of my problems of late has been pulling requirements data out of a setuptools setup.py. Simply fetching the data for my current system is easy, but because of the use of conditionals that affect the final outcome of the metadata, in order to make sure I get a true and complete accounting of all the metadata I would have to anticipate every combination of conditions used inside that file.
This is a real setup.py problem, and the packaging PEPs did a pretty good job of making this better (but no one uses environment markers yet). You might arrange to have it run on some number of common Pythons. You could also parse the setup.py with the ast module to figure out whether or not install_requires was modified inside conditional statements. You don't have to look at setup.py at all: you can just look for *.egg-info/requires.txt and you will get the requirements as they appeared on the publisher's machine. (pip always runs setup.py egg_info again, to get trustworthy metadata, while it is resolving deps.)
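A rough sketch of the ast idea - it only catches plain assignments, not things like install_requires.append(...):

    # Sketch: flag install_requires assignments that sit inside an if block.
    import ast

    tree = ast.parse(open('setup.py').read())
    for node in ast.walk(tree):
        if isinstance(node, ast.If):
            for inner in ast.walk(node):
                if isinstance(inner, ast.Assign):
                    for target in inner.targets:
                        if isinstance(target, ast.Name) and target.id == 'install_requires':
                            print('conditional install_requires at line %d' % inner.lineno)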
Now as far as I can tell, the bento.info conditionals have that same problem, except that they have a much narrower scope, so they have a smaller matrix.
How would I, given a package that has an ``if os(windows)`` conditional, get a representation of that file's metadata as if it were on Windows? I would need to override what os(windows) is? Now if this same package also has conditionals for flags, the matrix becomes larger and larger, and I need to either create my own custom parser that just ignores or annotates the conditionals, or use the official parser and run it through once per item in the matrix to get the "whole story". Am I incorrect in that?
Additionally, conditionals as they are implemented in bento (to my understanding, having never used bento but having read the docs and some of its source code) complicate the parser and confuse reading a file with understanding a file.
What additional power or benefit does:
    Library:
        InstallRequires: docutils, sphinx
        if os(windows):
            pywin32
provide over:
{ "Library": { "InstallRequires": [ "docutils", "sphinx", "pywin32; sys.playform == win32" ] } }
(Note I dislike the ; <foo> syntax too, but I dislike the conditionals more.)
Maybe I'm missing some powerful construct in the bento.info that warrants throwing out a lot of hard work and testing by a lot of individuals on these common formats, but I just don't see it in merely providing a simple conditional statement. The ``bento.parser`` package contains ~1700 SLOC now; without investigating that terribly deeply, I'm willing to bet almost all of that could be thrown out by using a standard format.
David C. would probably be the first to admit that he is not principally a compiler author. You are a Python programmer and you want to use {} instead of (indentation)? Don't answer that. He should probably produce an AST (not "import ast") from the bento.info with the conditionals; it sounds like it may currently interpret while parsing, but I haven't looked.
All the syntax arguments are a red herring. The other stuff - "separating build phases cleanly using an on-disk format" and "extensible" - is what makes Bento beautiful. Not having to write .ini files is a definite plus, though.
Conditional markers do make the dependencies harder to get right, as conditionals in dependencies + provides are what makes dependency graph building complicated. It seems this can be solved neatly, though (see http://www.slideshare.net/naderman/dependency-resolution-with-sat-symfony-li...).
Well, in pkg_resources the default environment for environment marker functions (only supported in .dist-info style distributions) is:

    _VARS = {'sys.platform': sys.platform,
             'python_version': '%s.%s' % sys.version_info[:2],
             # FIXME parsing sys.platform is not reliable, but there is no other
             # way to get e.g. 2.7.2+, and the PEP is defined with sys.version
             'python_full_version': sys.version.split(' ', 1)[0],
             'os.name': os.name,
             'platform.version': platform.version(),
             'platform.machine': platform.machine(),
             'platform.python_implementation': python_implementation(),
             'extra': None  # wheel extension
             }

Your users could send you that dict, or you could just evaluate every environment marker as true and send them all the files that they could possibly need. Unfortunately, no one is using environment markers yet.
If you would like to use environment markers for your requirements, you can include in setup.cfg:

    [metadata]
    requires-dist =
        argparse; python_version < '2.7'
        another_dep

and package your dist with "python setup.py bdist_wheel". The deps in [metadata] will override the install_requires argument to setup(). When using the patched pip to install from the wheel file, if markerlib and distribute >= 0.6.28 are installed, pip will resolve dependencies based on those markers.
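Evaluating a marker against a dict a user sends you can be as crude as this - a sketch; markerlib does it properly by rewriting the expression rather than doing textual substitution:

    # Sketch: naive marker evaluation against a caller-supplied environment.
    # Longest keys are substituted first so that, e.g., 'python_full_version'
    # is not clobbered by 'python_version'.
    def evaluate(marker, env):
        expr = marker
        for key in sorted(env, key=len, reverse=True):
            expr = expr.replace(key, repr(env[key]))
        return bool(eval(expr, {'__builtins__': {}}, {}))

    win = {'sys.platform': 'win32', 'python_version': '2.7'}
    print(evaluate("sys.platform == 'win32'", win))    # True
    print(evaluate("python_version < '2.6'", win))     # False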
On Mon, Oct 1, 2012 at 12:40 AM, Donald Stufft wrote:
On Sunday, September 30, 2012 at 6:50 PM, David Cournapeau wrote:
I am not sure I understand the argument: whatever the syntax, if the feature is there, you will have the same problem? The fact that it is used by existing solutions tends to convince me it is not a problem (cabal works much better than distutils etc. in pretty much every way, including installing things with dependencies from their package servers, not to speak of RPM).
I have a site that pulls data from Python packages; one of my problems of late has been pulling requirements data out of a setuptools setup.py. Simply fetching the data for my current system is easy, but because of the use of conditionals that affect the final outcome of the metadata, in order to make sure I get a true and complete accounting of all the metadata I would have to anticipate every combination of conditions used inside that file.
Now as far as I can tell, the bento.info conditionals have that same problem, except that they have a much narrower scope, so they have a smaller matrix.
How would I, given a package that has an ``if os(windows)`` conditional, get a representation of that file's metadata as if it were on Windows? I would need to override what os(windows) is? Now if this same package also has conditionals for flags, the matrix becomes larger and larger, and I need to either create my own custom parser that just ignores or annotates the conditionals, or use the official parser and run it through once per item in the matrix to get the "whole story". Am I incorrect in that?
Additionally, conditionals as they are implemented in bento (to my understanding, having never used bento but having read the docs and some of its source code) complicate the parser and confuse reading a file with understanding a file.
What additional power or benefit does:
    Library:
        InstallRequires: docutils, sphinx
        if os(windows):
            pywin32
provide over:
{ "Library": { "InstallRequires": [ "docutils", "sphinx", "pywin32; sys.playform == win32" ] } }
It is obviously exactly the same; it is just different syntax (which is why I don't understand the "fetching requirements" argument, since whatever is true for bento is true of your format and vice versa).
Maybe I'm missing some powerful construct in the bento.info that warrants throwing out a lot of hard work and testing by a lot of individuals on these common formats, but I just don't see it in merely providing a simple conditional statement. The ``bento.parser`` package contains ~1700 SLOC now; without investigating that terribly deeply, I'm willing to bet almost all of that could be thrown out by using a standard format.
What I am afraid of is that one will need to add a lot of those markers, not just for requirements, which would lose the point of using a common format. I would also recommend against json, because it does not allow for comments; yaml is much better in that regard.
I know you've mentioned Cabal a lot when talking about bento, and it's obvious it's been a very big influence on your design.
Mostly the syntax, actually. David
David Cournapeau writes:
I am not suggesting something very complicated (we don't want to re-create a language), but more something like cabal (see conditionals in http://www.haskell.org/cabal/users-guide/developing-packages.html#package-de...), or even RPM (http://www.rpm.org/max-rpm/s1-rpm-inside-conditionals.html). This has the advantage of using something that has already been battle-tested for several years.
Both the Cabal and RPM conditionals seem fairly narrow in scope (e.g. os/arch/impl/flag for Cabal), and therefore it seems that environment markers should be able to do the job. Although of course environment markers aren't battle-tested, it seems worthwhile deploying them on the battlefield to see how they perform.
But again, syntax is a really minor point, and what matters most are the features. I was not able to express what I wanted with yaml for bento, but I would be more than happy to scrap the custom format if someone manages to.
If I can continue working on the YAML format that I've mentioned, I hope I can report some progress in this area in due course. Regards, Vinay Sajip
On Mon, Oct 1, 2012 at 8:14 AM, Vinay Sajip wrote:
David Cournapeau writes:
I am not suggesting something very complicated (we don't want to re-create a language), but more something like cabal (see conditionals in http://www.haskell.org/cabal/users-guide/developing-packages.html#package-de...), or even RPM (http://www.rpm.org/max-rpm/s1-rpm-inside-conditionals.html). This has the advantage of using something that has already been battle-tested for several years.
Both the Cabal and RPM conditionals seem fairly narrow in scope (e.g. os/arch/impl/flag for Cabal), and therefore it seems that environment markers should be able to do the job. Although of course environment markers aren't battle-tested, it seems worthwhile deploying them on the battlefield to see how they perform.
Note that in Cabal at least, those conditionals work not just for requirements, but for pretty much any section that is not metadata (so in the Python world, you could condition on the package you want to install, etc.).
But again, syntax is a really minor point, and what matters most are the features. I was not able to express what I wanted with yaml for bento, but I would be more than happy to scrap the custom format if someone manages to.
If I can continue working on the YAML format that I've mentioned, I hope I can report some progress in this area in due course.
That's great. May I ask you to put this code somewhere? This would allow me to try using this inside bento. It would allow using whatever format you end up with directly with existing projects (bento is already used by some complex packages such as numpy or scipy). It would be a good way to ensure the format provides enough semantics.
David
David Cournapeau writes:
Note that in Cabal at least, those conditionals work not just for requirements, but for pretty much any section that is not metadata (so in the Python world, you could condition on the package you want to install, etc.).
Right, but the concept of environment markers is simple enough that we should be able to extend it to other areas. Requirements are just the most obvious application.
That's great. May I ask you to put this code somewhere? This would allow me to try using this inside bento. It would allow using whatever format you end up with directly with existing projects (bento is already used by some complex packages such as numpy or scipy). It would be a good way to ensure the format provides enough semantics.
Which code do you mean? Although I have written some code to produce metadata in YAML format, I have not yet got anything that consumes it to a reasonable point (apart from sdist generation, which should not be environment-specific). I will publish what I have as soon as it has reached a reasonable state of usefulness. Precisely because it isn't declarative, existing environment-specific code in setup.py files in PyPI archives is not easy to convert to environment markers automatically :-( Regards, Vinay Sajip
On Mon, Oct 1, 2012 at 3:15 PM, Vinay Sajip wrote:
David Cournapeau writes:
Note that in Cabal at least, those conditionals work not just for requirements, but for pretty much any section that is not metadata (so in the Python world, you could condition on the package you want to install, etc.).
Right, but the concept of environment markers is simple enough that we should be able to extend it to other areas. Requirements are just the most obvious application.
I think we just need to see what it looks like in real cases to get a good idea about the tradeoffs, syntax-wise.
That's great. May I ask you to put this code somewhere? This would allow me to try using this inside bento. It would allow using whatever format you end up with directly with existing projects (bento is already used by some complex packages such as numpy or scipy). It would be a good way to ensure the format provides enough semantics.
Which code do you mean? Although I have written some code to produce metadata in YAML format, I have not yet got anything that consumes it to a reasonable point (apart from sdist generation, which should not be environment-specific).
The code that produces yaml files. The point is precisely that it would be easy for me to consume this and produce the internal package representation in bento, which would then allow me to configure, build and install packages using the bento format.
I will publish what I have as soon as it has reached a reasonable state of usefulness. Precisely because it isn't declarative, existing environment-specific code in setup.py files in PyPI archives is not easy to convert to environment markers automatically :-(
I am curious: do you attempt to parse the setup.py to get those environment markers? I personally gave up on this and just run the setup.py to get whatever values are available in that precise environment. Given that versions are not normalized, I am afraid trying to do better is bound to fail, but maybe I am not smart enough to do it :)
David
David Cournapeau writes:
The code that produces yaml files. The point is precisely that it would be easy for me to consume this and produce the internal package representation in bento, which would then allow me to configure, build and install packages using the bento format.
Well, it's not pretty, but here's a Gist of the method which produces YAML metadata from what's passed to setup(): https://gist.github.com/3812561
I am curious: do you attempt to parse the setup.py to get those environment markers? I personally gave up on this and just run the setup.py to get whatever values are available in that precise environment.
Lord, no, life's too short :-) Regards, Vinay Sajip
On Mon, Oct 1, 2012 at 5:06 PM, Vinay Sajip wrote:
David Cournapeau writes:
The code that produces yaml files. The point is precisely that it would be easy for me to consume this and produce the internal package representation in bento, which would then allow me to configure, build and install packages using the bento format.
Well, it's not pretty, but here's a Gist of the method which produces YAML metadata from what's passed to setup():
Thanks. I don't think it is fair to expect pretty code there in any case. I noticed that you put the classifiers list as a string (same for platform); I think it is expected to be a list, no?
Maybe slightly more controversial: I think the manifest should be "evaluated". The current system of inclusion + exclusion is too baroque for my taste, and makes it near-impossible to do reproducible builds.
David
David Cournapeau writes:
I noticed that you put the classifiers list as a string (same for platform). I think it is expected to be a list, no?
That's an oversight; there are doubtless others, too.
Maybe slightly more controversial: I think the manifest should be "evaluated". The current system of inclusion + exclusion is too baroque for my taste, and makes it near-impossible to do reproducible builds.
Would you care to give a little more detail about what you mean by "evaluate"? I've kept the manifest as it is for backward compatibility (i.e. so that my sanity checking of sdist follows the same logic as used by distutils/distribute). Regards, Vinay Sajip
On Mon, Oct 1, 2012 at 8:22 PM, Vinay Sajip wrote:
David Cournapeau writes:
I noticed that you put the classifiers list as a string (same for platform). I think it is expected to be a list, no?
That's an oversight; there are doubtless others, too.
Sure. I guess I was just trying to get the code released in a repo so that we can provide patches :)
Maybe slightly more controversial: I think the manifest should be "evaluated". The current system of inclusion + exclusion is too baroque for my taste, and makes it near-impossible to do reproducible builds.
Would you care to give a little more detail about what you mean by "evaluate"? I've kept the manifest as it is for backward compatibility (i.e. so that my sanity checking of sdist follows the same logic as used by distutils/distribute).
If you want to be backward compatible with distutils, then yes, you have to encode it as is. By "evaluating", I meant specifying the list of files instead of using some higher-level logic. Otherwise, the static format does not specify the actual content (and depends on way too many parameters).
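Concretely, "evaluating" a distutils-style manifest just means expanding the template into the concrete file list it matches - a sketch using distutils' own FileList (the real sdist adds a few default files on top of this):

    # Sketch: expand MANIFEST.in include/exclude logic into a flat file list.
    from distutils.filelist import FileList

    filelist = FileList()
    for line in open('MANIFEST.in'):
        line = line.strip()
        if line and not line.startswith('#'):
            filelist.process_template_line(line)
    filelist.sort()
    filelist.remove_duplicates()
    for name in filelist.files:
        print(name)

David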
Tarek Ziadé writes:
I would just leave it in hg.python.org and drop any single author header, and have the project driven by the community under the PSF governance.
If we want to thank contributors like you or me or anyone that helped in this code base we can always maintain a CONTRIBUTORS.txt file.
What I'd like to avoid is distlib becoming a project that's owned/driven by a single person -- even if in the current effort you are the person that is contributing and driving things.
Well, my focus here is certainly not on ownership, but I generally add copyright notices to all the software I produce as a matter of course. If all it takes is an additional "Licensed to the PSF through a Contributor Agreement" to regularise things, I'll certainly do that. But the only way it can be a multi-person project is for multiple people to get practically involved on a day-to-day level. I might seem to be driving the project right now, but that's only an illusion, due to the fact that I've got spare bandwidth at the moment to think about how things could work, and to write some code, docs and tests. Other priorities might intervene at any point, so I certainly agree that the project shouldn't have a low bus factor. Contributors welcome! Just form an orderly line :-)
I just think it's important that this project stays under the python-dev umbrella as the "official" subproject of packaging/distutils2 to have a smoother transition later - and have you as its de-facto maintainer.
Hmmm ... holy grail, or poisoned chalice? ;-) Regards, Vinay Sajip
participants (9)
- Antoine Pitrou
- Daniel Holth
- David Cournapeau
- Donald Stufft
- Georg Brandl
- Nick Coghlan
- Paul Moore
- Tarek Ziadé
- Vinay Sajip