packaging location ?
Hello, I was wondering if anyone knows whether the removed Lib/packaging directory landed in some other place after it was removed. We have http://hg.python.org/distutils2, but the 'packaging' version is a full py3-renamed version we need to keep mirrored. Cheers Tarek
Hi, Lib/packaging is in the repository history, and in my backup clones, but it’s not visible in any branch head as we have no branch for 3.4 yet. I can bring the directory back with a simple Mercurial command. However, it’s not clear to me that we want to do that. At the inception of the project, we wanted a new distutils with support for the latest PEPs and improved extensibility. Then we found a number of problems in the PEPs; the last time I pointed the problems out I got no reply but “find a PEP dictator and propose changes”. And when I started the thread about removing packaging in 3.3, hundreds of replies discussed changing the whole distutils architecture, splitting the project, exploring new systems, etc., which is why I’m not sure that we can just bring back packaging in 3.4 as it was and continue with our previous roadmap. Cheers
On Wed, 12 Sep 2012 15:02:42 -0400
Éric Araujo
Hi,
Lib/packaging is in the repository history, and in my backup clones, but it’s not visible in any branch head as we have no branch for 3.4 yet. I can bring the directory back with a simple Mercurial command.
However, it’s not clear to me that we want to do that. At the inception of the project, we wanted a new distutils with support for the latest PEPs and improved extensibility. Then we found a number of problems in the PEPs; the last time I pointed the problems out I got no reply but “find a PEP dictator and propose changes”. And when I started the thread about removing packaging in 3.3, hundreds of replies discussed changing the whole distutils architecture, splitting the project, exploring new systems, etc., which is why I’m not sure that we can just bring back packaging in 3.4 as it was and continue with our previous roadmap.
People who want a whole new distutils architecture can start distutils3 (or repackaging) if they want. If I have to give my advice, I would favour re-integrating packaging in the stdlib or, better, integrating all changes, one by one, into distutils itself. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net
On Thu, Sep 13, 2012 at 7:02 AM, Éric Araujo
Hi,
Lib/packaging is in the repository history, and in my backup clones, but it’s not visible in any branch head as we have no branch for 3.4 yet. I can bring the directory back with a simple Mercurial command.
However, it’s not clear to me that we want to do that. At the inception of the project, we wanted a new distutils with support for the latest PEPs and improved extensibility. Then we found a number of problems in the PEPs; the last time I pointed the problems out I got no reply but “find a PEP dictator and propose changes”. And when I started the thread about removing packaging in 3.3, hundreds of replies discussed changing the whole distutils architecture, splitting the project, exploring new systems, etc., which is why I’m not sure that we can just bring back packaging in 3.4 as it was and continue with our previous roadmap.
Cheers
+1 - FWIW, I'd like to see the previous project's drive for consolidation done without landing in the stdlib, /until/ you've got something that folk are willingly migrating to en masse - at that point we'll know that the bulk of use cases are well satisfied, and that we won't need to be fighting an uphill battle for adoption. If folk are saying 'I would adopt but it's not in the stdlib', well - I think that's ignorable TBH: the market of adopters that matter are those using setuptools/distribute/$other_thing today. I rather suspect we'll face things like 'I still support Python 2.7' more than 'it's not in the stdlib' for now. -Rob (who was just looking into what the state of the art to choose was yesterday)
On Wed, Sep 12, 2012 at 9:02 PM, Éric Araujo
“find a PEP dictator and propose changes”. And when I started the thread about removing packaging in 3.3, hundreds of replies discussed changing the whole distutils architecture, splitting the project, exploring new systems, etc.,
Yes, yes, but that's just the same old drama that pops up every time this is discussed with the same old arguments all over again. We'll never get anywhere if we care about *that*. The way to go forward is via PEPs, fix them if needed, implement in a separate package, stick into stdlib once it works. //Lennart
On Wed, Sep 12, 2012 at 3:28 PM, Lennart Regebro
On Wed, Sep 12, 2012 at 9:02 PM, Éric Araujo
wrote: “find a PEP dictator and propose changes”. And when I started the thread about removing packaging in 3.3, hundreds of replies discussed changing the whole distutils architecture, splitting the project, exploring new systems, etc.,
Yes, yes, but that's just the same old drama that pops up every time this is discussed with the same old arguments all over again. We'll never get anywhere if we care about *that*.
The way to go forward is via PEPs, fix them if needed, implement in a separate package, stick into stdlib once it works.
//Lennart
I'm happy to note that as of version 0.6.28, distribute (the setuptools fork) can now consume PEP 345 / 376 "Database of Installed Python Distributions" installations. Entry points could probably go in as an extension to the metadata, but at the moment they work as entry_points.txt with no changes and would be harmlessly ignored by "import packaging".
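The entry_points.txt file mentioned here is plain INI syntax, so a consumer needs nothing beyond the stdlib to read it. A minimal sketch (the file contents and the `pysetup` entry are made up for illustration, not taken from any real project):

```python
import configparser

# Hypothetical contents of an entry_points.txt as a packaging tool might write it.
ENTRY_POINTS = """\
[console_scripts]
pysetup = distutils2.run:main
"""

def parse_entry_points(text):
    """Return {group: {name: 'module:attr'}} from entry_points.txt data."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {group: dict(parser.items(group)) for group in parser.sections()}

eps = parse_entry_points(ENTRY_POINTS)
print(eps["console_scripts"]["pysetup"])  # -> distutils2.run:main
```

Because the format is ordinary INI, a tool that does not understand entry points can carry the file along unchanged, which is exactly the "harmlessly ignored" property described above.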
On Wed, Sep 12, 2012 at 3:28 PM, Lennart Regebro
On Wed, Sep 12, 2012 at 9:02 PM, Éric Araujo
wrote: “find a PEP dictator and propose changes”. And when I started the thread about removing packaging in 3.3, hundreds of replies discussed changing the whole distutils architecture, splitting the project, exploring new systems, etc.,
Yes, yes, but that's just the same old drama that pops up every time this is discussed with the same old arguments all over again. We'll never get anywhere if we care about *that*.
The way to go forward is via PEPs, fix them if needed, implement in a separate package, stick into stdlib once it works.
I agree with Lennart's and Antoine's advice of just moving forward with what we have. If some PEPs need fixing then let's fix them, but we don't need to rock the boat even more by going overboard. Getting the sane, core bits into the stdlib as packaging is meant to is plenty to take on. If people want to reinvent stuff they can do it elsewhere. I personally don't care if it is done inside or outside the stdlib initially, or if it stays in packaging or goes directly into distutils, but forward movement with what we have is the most important thing.
On Wed, 12 Sep 2012 18:07:42 -0400, Brett Cannon
On Wed, Sep 12, 2012 at 3:28 PM, Lennart Regebro
wrote: On Wed, Sep 12, 2012 at 9:02 PM, Éric Araujo
wrote: “find a PEP dictator and propose changes”. And when I started the thread about removing packaging in 3.3, hundreds of replies discussed changing the whole distutils architecture, splitting the project, exploring new systems, etc.,
Yes, yes, but that's just the same old drama that pops up every time this is discussed with the same old arguments all over again. We'll never get anywhere if we care about *that*.
The way to go forward is via PEPs, fix them if needed, implement in a separate package, stick into stdlib once it works.
I agree with Lennart's and Antoine's advice of just moving forward with what we have. If some PEPs need fixing then let's fix them, but we don't need to rock the boat even more by going overboard. Getting the sane, core bits into the stdlib as packaging is meant to is plenty to take on. If people want to reinvent stuff they can do it elsewhere. I personally don't care if it is done inside or outside the stdlib initially, or if it stays in packaging or goes directly into distutils, but forward movement with what we have is the most important thing.
When the removal was being pondered, the possibility of keeping certain bits that were more ready than others was discussed. Perhaps the best way forward is to put it back in bits, with the most finished (and PEP relevant) stuff going in first. That might also give non-packaging people bite-sized-enough chunks to actually digest and help with. --David
On Thu, Sep 13, 2012 at 8:43 AM, R. David Murray
When the removal was being pondered, the possibility of keeping certain bits that were more ready than others was discussed. Perhaps the best way forward is to put it back in bits, with the most finished (and PEP relevant) stuff going in first. That might also give non-packaging people bite-sized-enough chunks to actually digest and help with.
This is the plan I'm going to propose. The previous approach was to just throw the entirety of distutils2 in there, but there are some hard questions that approach doesn't address, and some use cases it doesn't handle. So, rather than importing it wholesale and making the stdlib the upstream for distutils2, I believe it makes more sense for distutils2 to remain an independent project, and we cherry-pick bits and pieces for the standard library's new packaging module as they stabilise.

In particular, Tarek was focused on being able to create *binary* RPMs automatically. That isn't enough for my purposes: I need to be able to create *source* RPMs, which can then be fed to the koji build service for conversion to binary RPMs in accordance with the (ideally) autogenerated spec file. A binary RPM that isn't built from a source RPM is no good to me, and the distutils2 support for this approach is awful, because setup.cfg inherits all the command-model cruft from distutils, which is stupidly hard to integrate with other build systems.

I also want to be able to automate most dependency management, so people can write "Requires: python(pypi-dist-name)" in their RPM spec files and have it work, just as they can already write things like "Requires: perl(File::Rules)".

I'm currently working on creating a coherent map of the status quo, one that describes the overall process of software distribution across its various phases (from development -> source archive -> building -> binary archive -> installation -> import) and looks at the tools and formats which exist at each step, both legacy (distutils/setuptools/distribute) and proposed (e.g. distutils2, bento, wheel), and the kinds of tasks which need to be automated. Part of the problem with distutils is that the phases of software distribution are not clearly documented, instead being implicit in the behaviour of setuptools.

The distutils2 project, to date, has not remedied that deficiency, instead retaining the implicit overall workflow and just hacking on various pieces in order to "fix Python packaging". If we're going to get support from the scientific community (which has some of the more exotic build requirements going around) and from the existing community that wants the full setuptools feature set rather than the subset currently standardised (primarily non-Django web developers, in my experience), then we need to address *their* concerns as well, not just the concerns of those of us who don't like the opaque nature of setuptools and its preference for guessing in the presence of ambiguity. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
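The "Requires: python(pypi-dist-name)" idea above amounts to mapping Python dependency metadata onto RPM-style virtual dependencies. A rough sketch of the mechanical part, assuming PEP 345-style Requires-Dist entries as input (the sample entries and the name regex are illustrative assumptions, not Nick's actual tooling):

```python
import re

# Hypothetical PEP 345-style Requires-Dist entries from a PKG-INFO/METADATA file.
requires_dist = ["requests (>=0.13)", "docutils"]

def rpm_requires(entries):
    """Map Requires-Dist names onto RPM-style 'Requires: python(...)' lines,
    dropping any version predicate for simplicity."""
    lines = []
    for entry in entries:
        # Take the distribution name, ignoring a trailing "(version spec)".
        name = re.match(r"\s*([A-Za-z0-9._-]+)", entry).group(1)
        lines.append("Requires: python(%s)" % name)
    return lines

print(rpm_requires(requires_dist))
# -> ['Requires: python(requests)', 'Requires: python(docutils)']
```

A real implementation would also need to translate version specifiers into RPM's comparison syntax; the point here is only that the mapping is straightforward once the metadata is machine-readable.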
On Thu, 13 Sep 2012 11:14:17 +1000
Nick Coghlan
On Thu, Sep 13, 2012 at 8:43 AM, R. David Murray
wrote: When the removal was being pondered, the possibility of keeping certain bits that were more ready than others was discussed. Perhaps the best way forward is to put it back in bits, with the most finished (and PEP relevant) stuff going in first. That might also give non-packaging people bite-sized-enough chunks to actually digest and help with.
This is the plan I'm going to propose. The previous approach was to just throw the entirety of distutils2 in there, but there are some hard questions that doesn't address, and some use cases it doesn't handle. So, rather than importing it wholesale and making the stdlib the upstream for distutils2, I believe it makes more sense for distutils2 to remain an independent project, and we cherry pick bits and pieces for the standard library's new packaging module as they stabilise.
How is that going to be useful? Most people use distutils / packaging as an application, not a library. If you provide only a subset of the necessary features, people won't use packaging. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net
On Thursday, September 13, 2012 at 5:38 AM, Antoine Pitrou wrote:
Most people use distutils / packaging as an application, not a library. If you provide only a subset of the necessary features, people won't use packaging.
Not that I think current usage patterns matter much given the move from setup.py to a static file, but that's not really true. The wide proliferation of setuptools shows pretty clearly that people are fine using distutils as a library. Even beyond that, the popularity of pip shows it as well, since very few people even directly interact with setup.py at all except to create the distributions.
On 9/13/12 11:38 AM, Antoine Pitrou wrote:
On Thu, Sep 13, 2012 at 8:43 AM, R. David Murray wrote:
When the removal was being pondered, the possibility of keeping certain bits that were more ready than others was discussed. Perhaps the best way forward is to put it back in bits, with the most finished (and PEP relevant) stuff going in first. That might also give non-packaging people bite-sized-enough chunks to actually digest and help with.
On Thu, 13 Sep 2012 11:14:17 +1000 Nick Coghlan wrote:
This is the plan I'm going to propose. The previous approach was to just throw the entirety of distutils2 in there, but there are some hard questions that doesn't address, and some use cases it doesn't handle. So, rather than importing it wholesale and making the stdlib the upstream for distutils2, I believe it makes more sense for distutils2 to remain an independent project, and we cherry pick bits and pieces for the standard library's new packaging module as they stabilise.
How is that going to be useful? Most people use distutils / packaging as an application, not a library. If you provide only a subset of the necessary features, people won't use packaging. Regards Antoine.
Yeah, but we've been too ambitious. Here's my proposal - actually it's Nick's proposal, but I want to make sure we're on the same page wrt steps, and I think it addresses Antoine's concerns:

1. create a new package, called pkglib (or whatever), located at hg.python.org as a new project that strictly contains just:
- the PEP implementations
- non-controversial features like file parsers, the pypi index browser, etc.
It's doable - since that's what we have done in distutils2, the modules that implement those PEPs are standalone. Let's avoid by all means putting the old distutils command logic there. Let's have a strict process for every new thing we're adding there.
2. make pkglib python 2 *and* python 3 compatible - natively, not w/ 2to3
3. make distutils2, distribute, pip, bento, etc. use that and try to share as many bits as possible
4. ask each project to pour into pkglib anything that can be reused by others

When 3.4 comes around, I guess we can decide if pkglib can go in or not. That way, we won't have the usual controversy about distutils' command machinery. People will use whatever tool and hopefully this tool will be based on pkgutil
On Thu, Sep 13, 2012 at 4:32 AM, Tarek Ziadé
Here's my proposal - actually it's Nick's proposal but I want to make sure we're on the same page wrt steps, and I think that addresses Antoine concerns
1. create a new package, called pkglib (or whatever), located at hg .python.org as a new project that just strictly contains :
...
That way, we won't have the usual controversy about distutils' command machinery. People will use whatever tool and hopefully this tool will be based on pkgutil
Hopefully it will be based on pkglib (or whatever) rather than pkgutil. ;) --Chris
On Thursday, September 13, 2012 at 7:32 AM, Tarek Ziadé wrote:
Yeah but we've been too ambitious.
Here's my proposal - actually it's Nick's proposal but I want to make sure we're on the same page wrt steps, and I think that addresses Antoine concerns
1. create a new package, called pkglib (or whatever), located at hg .python.org (http://python.org) as a new project that just strictly contains :
- the PEP implementations - non controversial features like files parser, pypi index browser etc
it's doable - since that's what we have done in distutils2. the modules that implements those PEPs are standalone
Let's avoid by all means to put the old distutils command logic there.
Let's have a strict process on every new thing we're adding there.
2. make pkglib python 2 *and* python 3 compatible - natively, not w/ 2to3
3. make distutils2, distribute, pip, bento, etc. use that and try to share as many bits as possible
4. ask each project to pour in pkglib anything that can be reused by others
I started messing around with yanking some of the parts of distutils2 (things I've been calling packaging primitives for lack of a better word). Don't have anything particularly usable yet, but the approach you're talking about is similar to what I started to do.
On 13 September 2012 12:32, Tarek Ziadé
Here's my proposal - actually it's Nick's proposal but I want to make sure we're on the same page wrt steps, and I think that addresses Antoine concerns
1. create a new package, called pkglib (or whatever), located at hg .python.org as a new project that just strictly contains :
- the PEP implementations - non controversial features like files parser, pypi index browser etc
it's doable - since that's what we have done in distutils2. the modules that implements those PEPs are standalone
+1. I've seen far too many implementations of code that reads/writes RECORD files, for instance. They are all subtly different and manage to have incompatible obscure bugs :-( A reference implementation would be an excellent thing. In the stdlib by preference; external for older versions is as good as you can get. The key here is to avoid having packaging tools with any more dependencies than necessary (because there's a bootstrapping issue with installing those dependencies...)
Let's avoid by all means to put the old distutils command logic there.
+1
Let's have a strict process on every new thing we're adding there.
Hmm. Agreed up to a point, but please let's not make it so hard to change things that are present that people go off and do their own thing again[1]. OTOH, I agree let's be cautious about adding new things. Once pkglib goes into the stdlib, that's when things get strict. Let's leave a bit of flexibility while the details are thrashed out.
2. make pkglib python 2 *and* python 3 compatible - natively, not w/ 2to3
+0
3. make distutils2, distribute, pip, bento, etc. use that and try to share as many bits as possible
We can't "make" anyone use pkglib, but if it clearly makes it easier to support the standards than implementing them yourself, it should persuade people. And we should certainly advocate supporting PEPs by using the reference implementation rather than reimplementing it yourself.
4. ask each project to pour in pkglib anything that can be reused by others
+1, although again it'll be down to the projects whether they do actually contribute. Also this somewhat contradicts the "be strict" point above, which is why I'm lukewarm on "be strict". Practicality vs purity - getting contributors/users is more important than insisting that everything be standardised before it can be implemented.
when 3.4 comes around, I guess we can decide if pkglib can go in or not.
I'd have an explicit goal to be included in 3.4 (so that people can imagine an end to the need for pkglib being a dependency of their installation tool). But agreed it should not be assumed that this will happen if it's not ready (that was the mistake with packaging).
That way, we won't have the usual controversy about distutils' command machinery. People will use whatever tool and hopefully this tool will be based on pkgutil
+1. It also means that people will have a much greater incentive to support the new standards, because it should simply be a case of calling the pkglib implementation. Paul.
On 9/13/12 2:45 PM, Paul Moore wrote:
4. ask each project to pour in pkglib anything that can be reused by others

+1, although again it'll be down to the projects whether they do actually contribute. Also this somewhat contradicts the "be strict" point above, which is why I'm lukewarm on "be strict". Practicality vs purity - getting contributors/users is more important than insisting that everything be standardised before it can be implemented.
Let me take back 'strict process' and replace it with: everything added to pkglib should be a basic feature/implementation that does not force the tools to change the way *they* see their build/release/UI processes. The most sophisticated feature I am thinking about is the set of APIs in distutils2.pypi. They are APIs to browse and interact with PyPI, and they are useful to many projects, so I think it fits my definition. OTOH, anything relating to compilation should not be added there, unless it's APIs to get some aggregated info, like on top of platform/sys/etc. Cheers Tarek
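At its simplest, the kind of index browsing Tarek describes boils down to scraping the anchor tags out of a PyPI "simple" index page. A self-contained sketch of that idea using only the stdlib (the page fragment and file names here are invented for illustration, and this is not the distutils2.pypi API itself):

```python
from html.parser import HTMLParser

# A hypothetical fragment of a PyPI "simple" index page for one project.
SIMPLE_PAGE = """\
<html><body>
<a href="../../packages/source/d/distlib/distlib-0.1.0.tar.gz">distlib-0.1.0.tar.gz</a>
<a href="../../packages/source/d/distlib/distlib-0.1.1.tar.gz">distlib-0.1.1.tar.gz</a>
</body></html>
"""

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag, as a crude index crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

collector = LinkCollector()
collector.feed(SIMPLE_PAGE)
print(len(collector.links))  # -> 2
```

The value of a shared library here is precisely that projects like pip and distribute would stop re-implementing this sort of scraping (and its edge cases) independently.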
On Thu, Sep 13, 2012 at 9:47 PM, Chris Jerdonek
On Thu, Sep 13, 2012 at 4:32 AM, Tarek Ziadé
wrote: Here's my proposal - actually it's Nick's proposal but I want to make sure we're on the same page wrt steps, and I think that addresses Antoine concerns
1. create a new package, called pkglib (or whatever), located at hg .python.org as a new project that just strictly contains :
...
That way, we won't have the usual controversy about distutils' command machinery. People will use whatever tool and hopefully this tool will be based on pkgutil
Hopefully it will be based on pkglib (or whatever) rather than pkgutil. ;)
Actually, I'd be happy to do the rearrangement needed to turn pkgutil into a package rather than the current single module. I think we'd have people breaking out the torches and pitchforks if we ever ended up with modules or packages called packaging, pkglib *and* pkgutil all in the standard library. "distcore" might work, since they're core functionality for distribution more so than they are for packages. Anyway, the essential idea of getting those 4 modules (and support libraries) that almost made it into 3.3 into sufficiently good shape that they can be distributed independently of distutils2 and adopted as a dependency by multiple projects is a good one. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Fri, Sep 14, 2012 at 12:34 AM, Nick Coghlan
Anyway, the essential idea of getting those 4 modules (and support libraries) that almost made it into 3.3 into sufficiently good shape that they can be distributed independently of distutils2 and adopted as a dependency by multiple projects is a good one.
I also like Tarek's suggestion of including the pypi client APIs. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Hi, On 13/09/2012 10:34, Nick Coghlan wrote:
Actually, I'd be happy to do the rearrangement needed to turn pkgutil into a package rather than the current single module.
I very much prefer not mixing pkgutil (dealing with packages that you import) and build/distribution/installation support (dealing with packages/bundles/distributions/parcels/stuff that you distribute). Regards
On Fri, Sep 14, 2012 at 12:39 AM, Éric Araujo
Hi,
On 13/09/2012 10:34, Nick Coghlan wrote:
Actually, I'd be happy to do the rearrangement needed to turn pkgutil into a package rather than the current single module.
I very much prefer not mixing pkgutil (dealing with packages that you import) and build/distribution/installation support (dealing with packages/bundles/distributions/parcels/stuff that you distribute).
Tarek started it when he suggested "pkglib" :) I like "distcore" or "distlib", though. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Thu, Sep 13, 2012 at 5:38 AM, Antoine Pitrou
On Thu, 13 Sep 2012 11:14:17 +1000 Nick Coghlan
wrote: On Thu, Sep 13, 2012 at 8:43 AM, R. David Murray
wrote: When the removal was being pondered, the possibility of keeping certain bits that were more ready than others was discussed. Perhaps the best way forward is to put it back in bits, with the most finished (and PEP relevant) stuff going in first. That might also give non-packaging people bite-sized-enough chunks to actually digest and help with.
This is the plan I'm going to propose. The previous approach was to just throw the entirety of distutils2 in there, but there are some hard questions that doesn't address, and some use cases it doesn't handle. So, rather than importing it wholesale and making the stdlib the upstream for distutils2, I believe it makes more sense for distutils2 to remain an independent project, and we cherry pick bits and pieces for the standard library's new packaging module as they stabilise.
How is that going to be useful? Most people use distutils / packaging as an application, not a library. If you provide only a subset of the necessary features, people won't use packaging.
Third-party install/packaging software (pip, bento, even distribute) can still gradually absorb any standard pieces added to the stdlib for better interoperability and PEP compliance. I'm still strongly in favor of a `pysetup`-like command making it into Python too, but in the meantime the top priority should be anything that supports better consistency across existing projects.
Nick Coghlan
I like "distcore" or "distlib", though.
I have set up a BitBucket repo called distlib, at https://bitbucket.org/vinay.sajip/distlib/ This has the following bits of distutils2 / packaging, updated to run on 2.x and 3.x with a single codebase, and including tests (though not docs, yet):

version.py - version specifiers as per PEP 386
metadata.py - metadata as per PEPs 345, 314 and 241
markers.py - environment markers as per PEP 345
database.py - installed distributions as per PEP 376
depgraph.py - distribution dependency graph logic
glob.py - globbing functionality

The code was taken at around the time packaging was removed, and may not have more recent changes. Regards, Vinay Sajip
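As a side note on why a dedicated version.py matters: naive string comparison orders release versions wrongly ("1.0.10" sorts before "1.0.2" lexically), which is exactly the kind of thing a PEP 386 implementation handles. A toy stand-in for illustration only, covering plain N.N.N releases and none of the alpha/beta/rc/dev cases the real module supports:

```python
def version_key(version):
    """Crude comparison key for plain dotted release versions only --
    a toy stand-in for what a full PEP 386 implementation handles
    (pre-releases, post-releases, dev versions, etc.)."""
    return tuple(int(part) for part in version.split("."))

versions = ["1.0.10", "1.0.2", "1.0"]
print(sorted(versions, key=version_key))
# -> ['1.0', '1.0.2', '1.0.10']
```

Anything beyond this trivial case (e.g. "1.0rc1" vs "1.0") is where the subtle, incompatible bugs mentioned earlier in the thread tend to live.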
On 9/14/12 5:12 PM, Vinay Sajip wrote:
Nick Coghlan
writes: I like "distcore" or "distlib", though.
I have set up a BitBucket repo called distlib, at
https://bitbucket.org/vinay.sajip/distlib/
This has the following bits of distutils2 / packaging, updated to run on 2.x and 3.x with a single codebase, and including tests (though not docs, yet):
version.py - version specifiers as per PEP 386
metadata.py - metadata as per PEPs 345, 314 and 241
markers.py - environment markers as per PEP 345
database.py - installed distributions as per PEP 376
depgraph.py - distribution dependency graph logic
glob.py - globbing functionality
The code was taken at around the time packaging was removed, and may not have more recent changes.
Regards, Vinay Sajip

Oh, cool! Maybe we could copy it at hg.python.org?
_______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com
On Sat, Sep 15, 2012 at 11:43 PM, Vinay Sajip
Sure, but I don't know if I can do it. IIUC it needs someone with an account on the server to create new repositories.
Depends how much you care about a pristine history - you can do a server side clone of an existing repo and then empty it out. And if you use http://hg.python.org/buildbot/empty/ as the starting point, there isn't even anything to empty out. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Depends how much you care about a pristine history - you can do a
server side clone of an existing repo and then empty it out. And if you use http://hg.python.org/buildbot/empty/ as the starting point, there isn't even anything to empty out.
Actually there are some files in there - a Makefile, configure script and some .bat files. I tried cloning it but I'm not allowed to create a repo at the top level - it says Please use a secondary level path such as "sandbox/distlib" Seems a shame not to make it a top-level repo. Regards, Vinay Sajip
On Sat, 15 Sep 2012 16:27:28 +0100 (BST)
Vinay Sajip
Depends how much you care about a pristine history - you can do a
server side clone of an existing repo and then empty it out. And if you use http://hg.python.org/buildbot/empty/ as the starting point, there isn't even anything to empty out.
Actually there are some files in there - a Makefile, configure script and some .bat files.
I tried cloning it but I'm not allowed to create a repo at the top level - it says
Please use a secondary level path such as "sandbox/distlib"
Seems a shame not to make it a top-level repo.
Well, if you really need it, it can certainly be created. On the other hand, if you are not using hg.python.org features such as commit e-mails or buildbots, it's also fine living on bitbucket until the project matures a bit. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net
On Fri, Sep 14, 2012 at 8:12 AM, Vinay Sajip
I have set up a BitBucket repo called distlib, at
https://bitbucket.org/vinay.sajip/distlib/
...
The code was taken at around the time packaging was removed, and may not have more recent changes.
Would it be possible or make sense for this effort to start by forking the previous Lib/packaging directory of the cpython mirror on bitbucket (or perhaps do this in a fork of the entire cpython mirror given that portions of this are meant to be cherry-pick merged back to cpython)? That way the continuity with the previous state of packaging would be clear as well as making later merging easier. Also, will the previous Doc/packaging docs live in this new repo? --Chris
On Sep 16, 2012, at 6:37 AM, Chris Jerdonek
On Fri, Sep 14, 2012 at 8:12 AM, Vinay Sajip
wrote: I have set up a BitBucket repo called distlib, at
https://bitbucket.org/vinay.sajip/distlib/
...
The code was taken at around the time packaging was removed, and may not have more recent changes.
Would it be possible or make sense for this effort to start by forking the previous Lib/packaging directory of the cpython mirror on bitbucket (or perhaps do this in a fork of the entire cpython mirror given that portions of this are meant to be cherry-pick merged back to cpython)? That way the continuity with the previous state of packaging would be clear as well as making later merging easier.
Also, will the previous Doc/packaging docs live in this new repo?
--Chris
You would also wind up with the entire cpython history in every checkout. Mercurial allows merging unrelated repositories.
On 14 September 2012 16:12, Vinay Sajip
I have set up a BitBucket repo called distlib, at
https://bitbucket.org/vinay.sajip/distlib/
This has the following bits of distutils2 / packaging, updated to run on 2.x and 3.x with a single codebase, and including tests (though not docs, yet):
version.py - version specifiers as per PEP 386
metadata.py - metadata as per PEPs 345, 314 and 241
markers.py - environment markers as per PEP 345
database.py - installed distributions as per PEP 376
depgraph.py - distribution dependency graph logic
glob.py - globbing functionality
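As an illustration of the version-specifier logic listed above, here is a deliberately simplified sketch. Note that `parse_version` and `satisfies` are hypothetical names, not distlib's actual API, and real PEP 386 versions also carry pre-, post- and dev-release segments that this ignores:

```python
import operator

def parse_version(s):
    """Turn a plain dotted version like '1.2.0' into a comparable tuple.
    (A gross simplification of PEP 386: no alpha/beta/rc/dev handling.)"""
    return tuple(int(part) for part in s.split("."))

def satisfies(version, predicate):
    """Check a version string against a single constraint such as '>= 1.2'."""
    ops = {"==": operator.eq, "!=": operator.ne,
           "<": operator.lt, "<=": operator.le,
           ">": operator.gt, ">=": operator.ge}
    # Try the two-character operators before the one-character ones.
    for text, op in sorted(ops.items(), key=lambda kv: -len(kv[0])):
        if predicate.startswith(text):
            target = parse_version(predicate[len(text):].strip())
            return op(parse_version(version), target)
    raise ValueError("unsupported predicate: %r" % predicate)

print(satisfies("1.2.3", ">= 1.2"))   # True
print(satisfies("1.2.3", "== 1.3"))   # False
```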
A nice addition would be an API for managing the RECORD file. I would imagine functions to read/write the file (hiding the details of how to open the CSV file correctly in a cross-platform manner), functions to produce a list of the files installed for a distribution, and functions to validate (and maybe write) the hashes. If this would be useful, I'd be willing to write the code, although my API design skills aren't the best, so some advice on how the API should look would be nice :-) Paul
I agree with Lennart's and Antoine's advice to just move forward with what we have. If some PEPs need fixing then let's fix them, but we don't need to rock the boat even more by going overboard. Getting the sane, core bits into the stdlib, as packaging is meant to do, is plenty to take on. If people want to reinvent stuff they can do it elsewhere. I personally don't care whether it is done inside or outside the stdlib initially, or whether it stays in packaging or goes directly into distutils, but forward movement with what we have is the most important thing.
+100. I was excited about packaging in 2010; it is time to document and implement the specs we have. The sooner we do, the less confusing it will be for a newcomer who just wants to release a simple printer of nested lists to PyPI.
On Mon, Sep 17, 2012 at 5:05 AM, Daniel Holth
I agree with Lennart's and Antoine's advice to just move forward with what we have. If some PEPs need fixing then let's fix them, but we don't need to rock the boat even more by going overboard. Getting the sane, core bits into the stdlib, as packaging is meant to do, is plenty to take on. If people want to reinvent stuff they can do it elsewhere. I personally don't care whether it is done inside or outside the stdlib initially, or whether it stays in packaging or goes directly into distutils, but forward movement with what we have is the most important thing.
+100
I was excited about packaging in 2010; it is time to document and implement the specs we have. The sooner we do, the less confusing it will be for a newcomer who just wants to release a simple printer of nested lists to PyPI.
I've been chatting to Chris McDonough a bit as well, and one potentially useful thing would be to document precisely the setuptools/distribute metadata as it is generated today. Currently these formats are entirely implicit in the implementation of the code that reads and writes them, as far as I can tell. The distribute docs do a decent job of explaining setup.py and the various setuptools-specific arguments, but *not* what the file formats will look like inside the metadata directory once installed.

The main advantage of this would be to make it clear:

1. What can setuptools metadata describe that v1.2 of the official metadata standard cannot?
2. Does v1.3 allow everything that setuptools can currently describe (either directly, or as an extension)?
3. Does v1.3 allow some things to be expressed more clearly than they can be with setuptools?

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Paul Moore
A nice addition would be an API for managing the RECORD file. I would imagine functions to read/write the file (hiding the details of how to open the CSV file correctly in a cross-platform manner), functions to produce a list of the files installed for a distribution, and functions to validate (and maybe write) the hashes.
Producing the list of files installed for a distribution might be tool-specific (e.g. a tool might have additional tool-specific files over and above what any PEPs mandate). There's a method in database.py, Distribution.list_installed_files(), which reads and returns the RECORD entries as (path, hash, size) tuples.

I think all that's needed (at the same level of abstraction) is a method write_installed_files(iterable_of_absolute_file_paths) which writes the file. This code is already in the distutils2.install_distinfo.install_distinfo.run() method; I'll pull it out into such a method. Anyone else, do chip in if you think this is insufficient or sub-optimal.

Perhaps these discussions (on the detail) should continue on distutils-sig. I'll post there when I've made the changes.

Regards, Vinay Sajip
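The proposed write_installed_files could look roughly like the following sketch. The function name comes from the thread, but the signature, the `base` parameter, and the use of MD5 hex digests (as PEP 376's RECORD format specifies) are assumptions here; the real distutils2 code may differ:

```python
# Sketch of a write_installed_files-style helper for PEP 376's RECORD
# file: CSV rows of (relative path, hash, size). Assumptions: MD5 hex
# digests, paths made relative to a `base` directory, and a trailing
# row for RECORD itself with empty hash and size.
import csv
import hashlib
import os

def write_installed_files(record_path, absolute_file_paths, base):
    """Write a RECORD file listing each installed file with its
    MD5 hex digest and size, with paths relative to `base`."""
    with open(record_path, "w", newline="") as f:
        writer = csv.writer(f)
        for path in absolute_file_paths:
            with open(path, "rb") as fp:
                digest = hashlib.md5(fp.read()).hexdigest()
            size = os.path.getsize(path)
            writer.writerow([os.path.relpath(path, base), digest, size])
        # RECORD lists itself, but without a hash or size (it cannot
        # contain its own digest).
        writer.writerow([os.path.relpath(record_path, base), "", ""])
```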
On 16 September 2012 23:26, Vinay Sajip
I think all that's needed (at the same level of abstraction) is a method write_installed_files(iterable_of_absolute_file_paths) which writes the file. This code is already in the distutils2.install_distinfo.install_distinfo.run() method. I'll pull it out into such a method.
Possibly also useful (if only to standardise the approach) would be a method to check hashes against installed files. Paul
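A hash-checking companion along the lines Paul suggests might look like this sketch. `check_installed_files` is a hypothetical name, and it assumes PEP 376's RECORD layout with MD5 hex digests:

```python
# Sketch of a RECORD hash-verification helper: compare each entry's
# stored digest against the file on disk. Assumptions: PEP 376 CSV
# layout (path, hash, size) with MD5 hex digests, paths relative to
# `base`, and unhashed entries (like RECORD itself) skipped.
import csv
import hashlib
import os

def check_installed_files(record_path, base):
    """Yield (relative_path, ok) for every hashed entry in a RECORD file.
    `ok` is False if the file is missing or its digest does not match."""
    with open(record_path, newline="") as f:
        for row in csv.reader(f):
            relpath, stored_hash = row[0], row[1]
            if not stored_hash:  # e.g. the RECORD file's own entry
                continue
            full = os.path.join(base, relpath)
            if not os.path.exists(full):
                yield relpath, False
                continue
            with open(full, "rb") as fp:
                actual = hashlib.md5(fp.read()).hexdigest()
            yield relpath, actual == stored_hash
```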
participants (15)
-
Antoine Pitrou
-
Brett Cannon
-
Chris Jerdonek
-
Daniel Holth
-
Donald Stufft
-
Erik Bray
-
Lennart Regebro
-
Nick Coghlan
-
Paul Moore
-
R. David Murray
-
Robert Collins
-
Tarek Ziadé
-
Vinay Sajip
-
Éric Araujo
-
Éric Araujo