Q about best practices now (or near future)
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial. But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
* Are we promoting (weakly, strongly?) the signing of distributions yet?
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available. I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
* Are we promoting (weakly, strongly?) the signing of distributions yet?
No change.
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior. The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
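[For illustration, a minimal setuptools-based setup.py of the kind being discussed might look like the following sketch; the project name and metadata are placeholders, not a real project from this thread:]

    # Minimal setuptools-based setup.py (illustrative; "example_project"
    # and its metadata are placeholders).
    from setuptools import setup, find_packages

    setup(
        name="example_project",
        version="1.0",
        description="Illustrative packaging example",
        # find_packages() discovers packages automatically, one of the
        # conveniences plain distutils lacks.
        packages=find_packages(),
    )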
On 17 Jul, 2013, at 17:46, Daniel Holth <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
Do you mean an sdist without a setup.py? That will likely take some time; for the time being, projects will still need a setup.py that just prints information on how to build them (or bootstraps the actual wheel-building tool). Ronald
On 17 July 2013 16:46, Daniel Holth <dholth@gmail.com> wrote:
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior.
Personally, I still reserve judgement on setuptools. But that's mainly if you actually use its features (you should carefully consider and understand the implications if you use its script wrapper functionality, for example). I see no reason to knee-jerk use it if you don't use any of its functionality, though. I may be in a minority on that, though :-)
The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
By the way, what *does* MEBS mean? I've seen a few people use the term, but never found an explanation... Paul
On Wed, Jul 17, 2013 at 11:55 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 17 July 2013 16:46, Daniel Holth <dholth@gmail.com> wrote:
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior.
Personally, I still reserve judgement on setuptools. But that's mainly if you actually use its features (you should carefully consider and understand the implications if you use its script wrapper functionality, for example).
I see no reason to knee-jerk use it if you don't use any of its functionality, though. I may be in a minority on that, though :-)
One code path. Plus all your pip-using users are using it anyway. Many seem not to realize that "having dependencies" is one of "its features".
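[To make the dependency point concrete, declaring dependencies means passing the setuptools-only install_requires argument; the dependency named below is purely an example, not one mentioned in the thread:]

    # install_requires is a setuptools feature; plain distutils treats it
    # as an unknown option (it warns and ignores it).
    from setuptools import setup

    setup(
        name="example_project",
        version="1.0",
        install_requires=["requests>=1.2"],  # example dependency only
    )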
The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
By the way, what *does* MEBS mean? I've seen a few people use the term, but never found an explanation...
It stands for the "Meta Build System (not an actual project)" which I proposed last September. A suitably nuts person could just lay out their project like a wheel, edit the .dist-info by hand, zip and publish that.
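[A rough sketch of that "suitably nuts" by-hand approach, assuming a pure-Python package and the wheel 1.0 layout from PEP 427; all names and metadata here are invented, and RECORD hashes are left blank for brevity:]

    # Lay the files out as a wheel expects, write the .dist-info metadata
    # by hand, and zip the result. Purely illustrative.
    import zipfile

    DIST_INFO = "example_project-1.0.dist-info"
    metadata = "Metadata-Version: 1.2\nName: example_project\nVersion: 1.0\n"
    wheel_meta = (
        "Wheel-Version: 1.0\n"
        "Generator: by-hand\n"
        "Root-Is-Purelib: true\n"
        "Tag: py2.py3-none-any\n"
    )

    with zipfile.ZipFile("example_project-1.0-py2.py3-none-any.whl", "w") as whl:
        whl.writestr("example_project/__init__.py", "")
        whl.writestr(DIST_INFO + "/METADATA", metadata)
        whl.writestr(DIST_INFO + "/WHEEL", wheel_meta)
        # RECORD names every file in the archive; hash and size fields
        # are left empty in this sketch.
        whl.writestr(
            DIST_INFO + "/RECORD",
            "example_project/__init__.py,,\n"
            + DIST_INFO + "/METADATA,,\n"
            + DIST_INFO + "/WHEEL,,\n"
            + DIST_INFO + "/RECORD,,\n",
        )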
On 17 Jul, 2013, at 17:55, Paul Moore <p.f.moore@gmail.com> wrote:
On 17 July 2013 16:46, Daniel Holth <dholth@gmail.com> wrote:
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior.
Personally, I still reserve judgement on setuptools. But that's mainly if you actually use its features (you should carefully consider and understand the implications if you use its script wrapper functionality, for example).
I see no reason to knee-jerk use it if you don't use any of its functionality, though. I may be in a minority on that, though :-)
I agree, and if metadata 2.0 and bdist_wheel support were added to distutils there'd be even less reason to use setuptools. I primarily use setuptools for its dependency system on installation, and that's nicely covered by using metadata 2.0, wheels and pip.
The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
By the way, what *does* MEBS mean? I've seen a few people use the term, but never found an explanation...
MEta Build System. Ronald
On Wed, Jul 17, 2013 at 11:46 AM, Daniel Holth <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
Then I'm thoroughly confused since the Wheel PEP says in its rationale that "Python needs a package format that is easier to install than sdist". That would suggest a wheel would work for a source distribution and replace sdist zip/tar files. If wheels aren't going to replace what sdist spits out as the installation file format of choice for pip, what is it for, just binary files alone? -Brett
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
* Are we promoting (weakly, strongly?) the signing of distributions yet?
No change.
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior.
The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
On Jul 17, 2013, at 12:39 PM, Brett Cannon <brett@python.org> wrote:
On Wed, Jul 17, 2013 at 11:46 AM, Daniel Holth <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
Then I'm thoroughly confused since the Wheel PEP says in its rationale that "Python needs a package format that is easier to install than sdist". That would suggest a wheel would work for a source distribution and replace sdist zip/tar files. If wheels aren't going to replace what sdist spits out as the installation file format of choice for pip, what is it for, just binary files alone?
-Brett
You *can* publish only Wheels, especially if your package is pure python. However it's a "built" package. You should still publish the sdist (and sdist 2.0 when that happens) because a Wheel is (essentially) derived from a sdist. It is easier for the tooling to install and in general you'll want to use them, but not everything supports Wheel and some people will want to build their own wheels. Think of Wheel as a Debian package and the sdist as the source package. Ideally the majority of the time people will be installing from the Wheel but the sdist is still there for those who don't.
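[In practice that "publish both" advice comes down to building two artifacts from the same tree; a sketch, assuming setuptools plus the third-party wheel project are installed and a conventional setup.py exists:]

    # Build the sdist (the "source package") and the wheel (the "built"
    # package) from one source tree, then upload both.
    import subprocess
    import sys

    subprocess.check_call([sys.executable, "setup.py", "sdist"])        # dist/*.tar.gz
    subprocess.check_call([sys.executable, "setup.py", "bdist_wheel"])  # dist/*.whl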
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
* Are we promoting (weakly, strongly?) the signing of distributions yet?
No change.
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior.
The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
On Wed, Jul 17, 2013 at 12:45 PM, Donald Stufft <donald@stufft.io> wrote:
On Jul 17, 2013, at 12:39 PM, Brett Cannon <brett@python.org> wrote:
On Wed, Jul 17, 2013 at 11:46 AM, Daniel Holth <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
Then I'm thoroughly confused since the Wheel PEP says in its rationale that "Python needs a package format that is easier to install than sdist". That would suggest a wheel would work for a source distribution and replace sdist zip/tar files. If wheels aren't going to replace what sdist spits out as the installation file format of choice for pip, what is it for, just binary files alone?
-Brett
You *can* publish only Wheels, especially if your package is pure python. However it's a "built" package. You should still publish the sdist (and sdist 2.0 when that happens) because a Wheel is (essentially) derived from a sdist.
It is easier for the tooling to install and in general you'll want to use them, but not everything supports Wheel and some people will want to build their own wheels. Think of Wheel as a Debian package and the sdist as the source package. Ideally the majority of the time people will be installing from the Wheel but the sdist is still there for those who don't.
OK, that makes sense and what I understood wheels to be. Thanks for the clarification! Daniel's wording made me think suddenly that wheel files were only for distributions that had an extension or something. But it also sounds like having projects provide wheel distributions is too early to include in the User's Guide. -Brett
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
* Are we promoting (weakly, strongly?) the signing of distributions yet?
No change.
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior.
The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
On 17 July 2013 17:59, Brett Cannon <brett@python.org> wrote:
It is easier for the tooling to install and in general you'll want to use them, but not everything supports Wheel and some people will want to build their own wheels. Think of Wheel as a Debian package and the sdist as the source package. Ideally the majority of the time people will be installing from the Wheel but the sdist is still there for those who don't.
OK, that makes sense and what I understood wheels to be. Thanks for the clarification! Daniel's wording made me think suddenly that wheel files were only for distributions that had an extension or something.
I think that's the best way for people to think of sdist/wheel - it's precisely equivalent to srpm/rpm (or the Debian equivalent as Donald points out) in the Unix world. And ultimately, the expectation is that people install from wheels even for pure-python projects that could just as easily be installed from source, for precisely the same reasons as people use rpms rather than srpms. Paul.
On 17 July 2013 17:59, Brett Cannon <brett@python.org> wrote:
But it also sounds like having projects provide wheel distributions is too early to include in the User's Guide.
There are already many guides showing how to use distutils/setuptools to do things the old way. There are also confused bits of documentation/guides referring to now obsolete projects that at one point were touted as the future. It would be really good to have a guide that shows how the new working with wheels and metadata way is expected to work from the perspective of end users and package authors even if this isn't fully ready yet.
I've been loosely following the packaging work long enough to see it change direction more than once. I still find it hard to see the complete picture for how pip, pypi, metadata, setuptools, setup.py, setup.json, wheels and sdists are expected to piece together in terms of what a package author is expected to do and how it affects end users. A guide (instead of a load of PEPs) would be a great way to clarify this for me and for the many others who haven't been following the progress of this at all.
Oscar
On 18 Jul 2013 06:24, "Oscar Benjamin" <oscar.j.benjamin@gmail.com> wrote:
On 17 July 2013 17:59, Brett Cannon <brett@python.org> wrote:
But it also sounds like having projects provide wheel distributions is too early to include in the User's Guide.
There are already many guides showing how to use distutils/setuptools to do things the old way. There are also confused bits of documentation/guides referring to now obsolete projects that at one point were touted as the future. It would be really good to have a guide that shows how the new working with wheels and metadata way is expected to work from the perspective of end users and package authors even if this isn't fully ready yet.
I've been loosely following the packaging work long enough to see it change direction more than once. I still find it hard to see the complete picture for how pip, pypi, metadata, setuptools, setup.py, setup.json, wheels and sdists are expected to piece together in terms of what a package author is expected to do and how it affects end users. A guide (instead of a load of PEPs) would be a great way to clarify this for me and for the many others who haven't been following the progress of this at all.
That's exactly what the packaging guide is for. It just needs volunteers to help write it.
PEP 426 goes into a lot of detail on the various things that are supported, but a key thing to keep in mind is that metadata 2.0 is a 3.4.1 time frame idea, purely for resourcing reasons. The bundling proposed for 3.4 is about blessing setuptools & pip as the "obvious way to do it". Not the *only* way to do it (other build systems like d2to1 work, they just need a suitable setup.py shim, and other installers are possible too), just the obvious way.
For better or for worse, I don't believe we have any more chances to ask developers to switch to a different front end (heck, quite a few projects still recommend easy_install or even downloading the sdist and running setup.py directly). Instead, we need to clearly document the current status of things, and start working towards *incremental*, *non-disruptive* changes in the way the back end operates. If we do it right, most users *shouldn't even notice* when the various tools are updated to produce and consume metadata 2.0 (which can be distributed in parallel with the existing metadata formats), unless they decide to use the additional features the enhanced schema makes possible.
It's good that distil exists as a proof of concept, but the ship has sailed on the default language level installer: it will be pip. Updating both pip and setuptools to use distlib as a common backend may be a good idea in the long run (and probably a better notion than pip growing a programmatic API of its own), but that's not something I see as urgently needed.
Cheers, Nick.
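[The setup.py shim Nick mentions is tiny in d2to1's case; the sketch below follows the pattern the d2to1 project documents (quoted from memory, so treat the exact keyword as an assumption), with the real metadata living in setup.cfg:]

    # A setup.py stub for the d2to1 build system; configuration lives in
    # setup.cfg. The d2to1=True hook is d2to1's documented convention,
    # reproduced here from memory.
    from setuptools import setup

    setup(
        setup_requires=["d2to1"],
        d2to1=True,
    )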
On Wed, Jul 17, 2013 at 6:12 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 18 Jul 2013 06:24, "Oscar Benjamin" <oscar.j.benjamin@gmail.com> wrote:
On 17 July 2013 17:59, Brett Cannon <brett@python.org> wrote:
But it also sounds like having projects provide wheel distributions is too early to include in the User's Guide.
There are already many guides showing how to use distutils/setuptools to do things the old way. There are also confused bits of documentation/guides referring to now obsolete projects that at one point were touted as the future. It would be really good to have a guide that shows how the new working with wheels and metadata way is expected to work from the perspective of end users and package authors even if this isn't fully ready yet.
I've been loosely following the packaging work long enough to see it change direction more than once. I still find it hard to see the complete picture for how pip, pypi, metadata, setuptools, setup.py, setup.json, wheels and sdists are expected to piece together in terms of what a package author is expected to do and how it affects end users. A guide (instead of a load of PEPs) would be a great way to clarify this for me and for the many others who haven't been following the progress of this at all.
That's exactly what the packaging guide is for. It just needs volunteers to help write it.
PEP 426 goes into a lot of detail on the various things that are supported, but a key thing to keep in mind is that metadata 2.0 is a 3.4.1 time frame idea, purely for resourcing reasons. The bundling proposed for 3.4 is about blessing setuptools & pip as the "obvious way to do it". Not the *only* way to do it (other build systems like d2to1 work, they just need a suitable setup.py shim, and other installers are possible too), just the obvious way.
As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch. I think for the rewrite to move forward someone's going to need to own each part and specify upfront what assumptions are being made about the audience (e.g. they know what a package is and how to create one, etc.) and their abilities (can you say ``curl <url to ez_setup.py> | python`` to them and thus just link to the setuptools docs for installation?).
On 18 Jul 2013 08:18, "Brett Cannon" <brett@python.org> wrote:
On Wed, Jul 17, 2013 at 6:12 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 18 Jul 2013 06:24, "Oscar Benjamin" <oscar.j.benjamin@gmail.com> wrote:
On 17 July 2013 17:59, Brett Cannon <brett@python.org> wrote:
But it also sounds like having projects provide wheel distributions is too early to include in the User's Guide.
There are already many guides showing how to use distutils/setuptools to do things the old way. There are also confused bits of documentation/guides referring to now obsolete projects that at one point were touted as the future. It would be really good to have a guide that shows how the new working with wheels and metadata way is expected to work from the perspective of end users and package authors even if this isn't fully ready yet.
I've been loosely following the packaging work long enough to see it change direction more than once. I still find it hard to see the complete picture for how pip, pypi, metadata, setuptools, setup.py, setup.json, wheels and sdists are expected to piece together in terms of what a package author is expected to do and how it affects end users. A guide (instead of a load of PEPs) would be a great way to clarify this for me and for the many others who haven't been following the progress of this at all.
That's exactly what the packaging guide is for. It just needs volunteers to help write it.
PEP 426 goes into a lot of detail on the various things that are supported, but a key thing to keep in mind is that metadata 2.0 is a 3.4.1 time frame idea, purely for resourcing reasons. The bundling proposed for 3.4 is about blessing setuptools & pip as the "obvious way to do it". Not the *only* way to do it (other build systems like d2to1 work, they just need a suitable setup.py shim, and other installers are possible too), just the obvious way.
As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch.
I think for the rewrite to move forward someone's going to need to own each part and specify upfront what assumptions are being made about the audience (e.g. they know what a package is and how to create one, etc.) and their abilities (can you say ``curl <url to ez_setup.py> | python`` to them and thus just link to the setuptools docs for installation?).
It would make sense to have targeted sections for "I am...":
* ... a new developer on Windows
* ... a new developer on Mac OS X
* ... a new developer on Linux
* ... an experienced Python developer on Windows
* ... an experienced Python developer on Mac OS X
* ... an experienced Python developer on Linux
* ... an experienced developer, new to Python, on Windows
* ... an experienced developer, new to Python, on Mac OS X
* ... an experienced developer, new to Python, on Linux
Cheers, Nick.
As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch.
Although most of the text from the original Hitchhiker Guide is gone at this point (since the "fork" a few months back), the "Packaging Tutorial" as it is, is mostly still carryover from that. Don't take it as intentional new writing.
On 17 July 2013 23:18, Brett Cannon <brett@python.org> wrote:
As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch.
Just picking up on this question:
1. As Brett says, is the recommendation that everyone should use setuptools?
2. If that's the case, why aren't we bundling setuptools in the same way that we are bundling pip?
3. If we were bundling setuptools, pip wouldn't need to go through the rigmarole of vendoring it.
Paul.
On Jul 18, 2013, at 3:20 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 17 July 2013 23:18, Brett Cannon <brett@python.org> wrote:
As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch.
Just picking up on this question:
1. As Brett says, is the recommendation that everyone should use setuptools?
2. If that's the case, why aren't we bundling setuptools in the same way that we are bundling pip?
3. If we were bundling setuptools, pip wouldn't need to go through the rigmarole of vendoring it.
Personally I think pip should be vendoring setuptools regardless. A package manager with dependencies is strange and there have been quite a few problems caused by setuptools getting in a bad state.
On 18 July 2013 08:29, Donald Stufft <donald@stufft.io> wrote:
Personally I think pip should be vendoring setuptools regardless. A package manager with dependencies is strange and there have been quite a few problems caused by setuptools getting in a bad state.
Agreed on the dependency point (but I don't consider "depends on something bundled with Python" as being an external dependency, hence my question). As regards vendoring, I'm reserving judgement until I see the code - I think getting something working is more important than discussing what might be hard to implement... Paul
Nick Coghlan <ncoghlan <at> gmail.com> writes:
It's good that distil exists as a proof of concept, but the ship has sailed on the default language level installer: it will be pip.
I understand that it's your call as the packaging czar, but was there any discussion about this before the decision was made? Any pros and cons of different approaches weighed up? Python 3.4 beta is still 5-6 months away. Call me naive, but I would normally have expected a PEP on the bundling of pip to be produced by an interested party/champion, then that people would discuss and refine the PEP on the mailing list, and *then* a pronouncement would be made. This is what PEP 1 describes as the PEP process. Instead, it seems a decision has already been made, and now an author/champion for a PEP is being sought ex post facto. With all due respect, this seems back to front - so it would be good to have a better understanding of the factors that went into the decision, including the timing of it. Can you shed some light on this? Thanks and regards, Vinay Sajip
On Jul 17, 2013, at 6:30 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Nick Coghlan <ncoghlan <at> gmail.com> writes:
It's good that distil exists as a proof of concept, but the ship has sailed on the default language level installer: it will be pip.
I understand that it's your call as the packaging czar, but was there any discussion about this before the decision was made? Any pros and cons of different approaches weighed up? Python 3.4 beta is still 5-6 months away. Call me naive, but I would normally have expected a PEP on the bundling of pip to be produced by an interested party/champion, then that people would discuss and refine the PEP on the mailing list, and *then* a pronouncement would be made. This is what PEP 1 describes as the PEP process. Instead, it seems a decision has already been made, and now an author/champion for a PEP is being sought ex post facto. With all due respect, this seems back to front - so it would be good to have a better understanding of the factors that went into the decision, including the timing of it. Can you shed some light on this?
Thanks and regards,
Vinay Sajip
I think bundling pip or bundling nothing is the only thing that makes sense. There actually *is* a PEP, however it took a different approach, and it was decided (during the discussions about it) that a different way would be less error prone and more suitable. So now someone to write a PEP for that *new* way is being sought out. So it's not so much that a pronouncement was made prior to a PEP being written, but that a PEP was written, discussed, and a better way was found during that discussion.
As far as I know you're free to make a competing PEP if you'd like. However I think the chances of it getting accepted are very low because the goal here is user convenience. It's hard to argue that pip isn't the installer with the most buy-in in the community and thus bundling it (as opposed to a different installer) is the most convenient thing for the most users. In many ways this makes things better for alternative installers because it gives a simple unified command to installing that third party installer without needing to handle bootstrapping. However because pip is bundled an alternative installer will likely need to provide significant benefits over pip in order to gain critical mass.
Donald Stufft <donald <at> stufft.io> writes:
I think bundling pip or bundling nothing is the only thing that makes sense. There actually *is* a PEP, however it took a different approach, and it was decided (during the discussions about it) that a different way would be less error prone and more suitable. So now someone to write a PEP for that *new* way is being sought out. So it's not so much that a pronouncement was made prior to a PEP being written, but that a PEP was written, discussed, and a better way was found during that discussion.
Which specific PEP are you referring to? I'm not aware of any PEP which refers to bundling anything with Python. If whichever PEP it was took a fairly different approach to the one being discussed, and no conclusion could be reached about it, that doesn't mean that PEP 1 shouldn't be followed - just that a new PEP needs to be written, espousing the new approach, and it needs to go through the PEP workflow. For example, PEP 386 was supplanted by PEP 440, because the earlier PEP had some flaws which the later PEP took care to address. The earlier metadata PEPs all built on one another, with PEP 426 being the latest.
As far as I know you're free to make a competing PEP if you'd like.
What would be the point, given that the decision has already been made by the packaging BDFL? If someone else had put forward the pip bundling PEP, I would certainly have commented on it like anyone else and participated in the discussions. I'm more concerned that the PEP process is not being followed than I'm concerned about "my particular approach vs. your particular approach vs. his/her particular approach". Regards, Vinay Sajip
On 18 Jul 2013 08:31, "Vinay Sajip" <vinay_sajip@yahoo.co.uk> wrote:
Nick Coghlan <ncoghlan <at> gmail.com> writes:
It's good that distil exists as a proof of concept, but the ship has sailed on the default language level installer: it will be pip.
I understand that it's your call as the packaging czar, but was there any discussion about this before the decision was made? Any pros and cons of different approaches weighed up? Python 3.4 beta is still 5-6 months away. Call me naive, but I would normally have expected a PEP on the bundling of pip to be produced by an interested party/champion, then that people would discuss and refine the PEP on the mailing list, and *then* a pronouncement would be made. This is what PEP 1 describes as the PEP process. Instead, it seems a decision has already been made, and now an author/champion for a PEP is being sought ex post facto. With all due respect, this seems back to front - so it would be good to have a better understanding of the factors that went into the decision, including the timing of it. Can you shed some light on this?
Technically the decision *hasn't* been made - there is, as yet, no bundling PEP for me to consider for any installer, and I've decided not to accept Richard's bootstrapping PEP due to the issues around delaying the download to first use. I'd just like to have a bundling PEP posted before I make that official, so I can refer to it in the rejection notice.
However, even without a PEP, I consider pip the only acceptable option, as I believe we have no credibility left to burn with the broader Python development community on tool choices. We've spent years telling everyone "use distribute over setuptools and pip over easy_install". The former sort of caught on (but it was subtle, since Linux distros all packaged distribute as setuptools anyway), and the latter has been quite effective amongst those that didn't need the binary egg format support.
We're now telling people, OK setuptools is actually fine, but you should still use pip instead of easy_install and start using wheels instead of eggs. This is defensible, since even people using distribute were still importing setuptools. However, I simply see *no way* we could pull off a migration to a new recommended installer when the migration from the previous one to the current one is still far from complete :P
Adding in the distutils2/packaging digression just lowers our collective credibility even further, and we also get some significant spillover from the Python 3 transition. Essentially, don't underestimate how thin the ice we're currently walking on is community-wise: people are irritated and even outright angry with the Python core development team, and they have good reasons to be. We need to remain mindful of that, and take it into account when deciding how to proceed.
Cheers, Nick.
Thanks and regards,
Vinay Sajip
Nick Coghlan <ncoghlan <at> gmail.com> writes:
Technically the decision *hasn't* been made - there is, as yet, no bundling PEP for me to consider for any installer, and I've decided not to accept Richard's bootstrapping PEP due to the issues around delaying the download to first use. I'd just like to have a bundling PEP posted before I make that official, so I can refer to it in the rejection notice.
Technically? Well, "that ship has sailed" seems pretty well decided to me. I know that "technically is the best kind of correct" :-) But IIUC, your reservations on PEP 439 (I didn't realise that was what Donald was referring to in his response) related to Richard's specific implementation. I posted an example getpip.py (very simple, I grant you) which would get setuptools and pip for users, without the need for bundling anything, plus proposed an equivalent upgrade for pyvenv which would do the same for venvs. There has been no discussion around getpip.py whatsoever, AFAIK.
However, even without a PEP, I consider pip the only acceptable option, as I believe we have no credibility left to burn with the broader Python development community on tool choices. We've spent years telling everyone
We don't need to burn any credibility at all. Perhaps python-dev lost some credibility when packaging got pulled from 3.3, even though it was a good decision made for the right reasons. But you only ask people to believe you when you have some new story to tell them, and pip is hardly new.
We're now telling people, OK setuptools is actually fine, but you should still use pip instead of easy_install and start using wheels instead of eggs. This is defensible, since even people using distribute were still importing setuptools.
This is something which arose from the coming together of setuptools and Distribute. There was no credibility lost promoting Distribute, since setuptools never supported Python 3 - until now. There's no credibility lost now promoting setuptools, since it is essentially now the same as Distribute without the need for compatibility workarounds.
However, I simply see *no way* we could pull off a migration to a new recommended installer when the migration from the previous one to the current one is still far from complete :P
I'm certainly not suggesting the time is right for migrating to a new recommended installer - we have always promoted pip (over easy_install), and that doesn't need to change. It doesn't mean we have to bundle pip with Python - just make it easier to get it on Windows and OS X. Just a few days ago you were saying that python -m getpip would be good to have, then I created a getpip module, and now AFAICT it hasn't even been looked at, while people gear up to do shed-loads of work to bundle pip with Python.
Adding in the distutils2/packaging digression just lowers our collective credibility even further, and we also get some significant spillover from the Python 3 transition.
Haters gonna hate. What're you gonna do? :-)
Essentially, don't underestimate how thin the ice we're currently walking on is community-wise: people are irritated and even outright angry with the Python core development team, and they have good reasons to be. We need to remain mindful of that, and take it into account when deciding how to proceed.
Who are these angry, entitled people? Have they forgotten that Python is a volunteer project? Why do we owe such people anything? I'm not convinced that such people are representative of the wider community. To judge from the video of the packaging panel at PyCon 2013, people are perhaps disappointed that we haven't got further, but there was no animosity that I could detect. The atmosphere was pretty positive and what I saw was an endearing faith and hope that we would, in time, get things right.
None of what you have said answers why the PEP process shouldn't be followed in this case. No compelling case has been made AFAICT for bundling pip as opposed to enabling python -m getpip, especially given that (a) the work involved in one is very small compared to the other, and (b) the result for the user is the same - they get to use setuptools and pip.
Regards, Vinay Sajip
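[Vinay's actual getpip.py is not reproduced in this thread; purely to make the shape of the "python -m getpip" idea concrete, a bootstrap along these lines would suffice. The URL is a placeholder, and get-pip.py is pip's own bootstrap script:]

    # Hypothetical sketch of a "python -m getpip" style bootstrap - NOT
    # Vinay's getpip.py. Fetch pip's bootstrap script, then hand off to it.
    import runpy
    import urllib.request

    GET_PIP_URL = "https://example.invalid/get-pip.py"  # placeholder URL

    def main():
        with open("get-pip.py", "wb") as f:
            f.write(urllib.request.urlopen(GET_PIP_URL).read())
        # Run the downloaded installer as if invoked from the command line.
        runpy.run_path("get-pip.py", run_name="__main__")

    if __name__ == "__main__":
        main()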
On 18 Jul 2013 09:37, "Vinay Sajip" <vinay_sajip@yahoo.co.uk> wrote:
Nick Coghlan <ncoghlan <at> gmail.com> writes:
Technically the decision *hasn't* been made - there is, as yet, no bundling PEP for me to consider for any installer, and I've decided not to accept Richard's bootstrapping PEP due to the issues around delaying the download to first use. I'd just like to have a bundling PEP posted before I make that official, so I can refer to it in the rejection notice.
Technically? Well, "that ship has sailed" seems pretty well decided to me. I know that "technically is the best kind of correct" :-)
But IIUC, your reservations on PEP 439 (I didn't realise that was what Donald was referring to in his response) related to Richard's specific implementation. I posted an example getpip.py (very simple, I grant you) which would get setuptools and pip for users, without the need for bundling anything, plus proposed an equivalent upgrade for pyvenv which would do the same for venvs. There has been no discussion around getpip.py whatsoever, AFAIK.
No, my reservations are about delaying the installation of pip to first use (or any time after the installation of Python). I don't care that much about the distinction between bundling and install-time bootstrapping and would appreciate a PEP that explicitly weighed up the pros and cons of those two approaches (at the very least bundling means you don't need a reliable network connection at install time, while install time bootstrapping avoids the problem of old versions of pip, and also gives a way to bootstrap older Python installations). Cheers, Nick.
However, even without a PEP, I consider pip the only acceptable option, as I believe we have no credibility left to burn with the broader Python development community on tool choices. We've spent years telling everyone
We don't need to burn any credibility at all. Perhaps python-dev lost some credibility when packaging got pulled from 3.3, even though it was a good decision made for the right reasons. But you only ask people to believe you when you have some new story to tell them, and pip is hardly new.
We're now telling people, OK setuptools is actually fine, but you should still use pip instead of easy_install and start using wheels instead of eggs. This is defensible, since even people using distribute were still importing setuptools.
This is something which arose from the coming together of setuptools and Distribute. There was no credibility lost promoting Distribute, since setuptools never supported Python 3 - until now. There's no credibility lost now promoting setuptools, since it is essentially now the same as Distribute without the need for compatibility workarounds.
However, I simply see *no way* we could pull off a migration to a new recommended installer when the migration from the previous one to the current one is still far from complete :P
I'm certainly not suggesting the time is right for migrating to a new recommended installer - we have always promoted pip (over easy_install), and that doesn't need to change. It doesn't mean we have to bundle pip with Python - just make it easier to get it on Windows and OS X. Just a few days ago you were saying that python -m getpip would be good to have, then I created a getpip module, and now AFAICT it hasn't even been looked at, while people gear up to do shed-loads of work to bundle pip with Python.
Adding in the distutils2/packaging digression just lowers our collective credibility even further, and we also get some significant spillover from the Python 3 transition.
Haters gonna hate. What're you gonna do? :-)
Essentially, don't underestimate how thin the ice we're currently walking on is community-wise: people are irritated and even outright angry with the Python core development team, and they have good reasons to be. We need to remain mindful of that, and take it into account when deciding how to proceed.
Who are these angry, entitled people? Have they forgotten that Python is a volunteer project? Why do we owe such people anything? I'm not convinced that such people are representative of the wider community.
To judge from the video of the packaging panel at PyCon 2013, people are perhaps disappointed that we haven't got further, but there was no animosity that I could detect. The atmosphere was pretty positive and what I saw was an endearing faith and hope that we would, in time, get things right.
None of what you have said answers why the PEP process shouldn't be followed in this case. No compelling case has been made AFAICT for bundling pip as opposed to enabling python -m getpip, especially given that (a) the work involved in one is very small compared to the other, and (b) the result for the user is the same - they get to use setuptools and pip.
Regards,
Vinay Sajip
Nick Coghlan <ncoghlan <at> gmail.com> writes:
No, my reservations are about delaying the installation of pip to first use (or any time after the installation of Python). I don't care that much about the distinction between bundling and install-time bootstrapping and would appreciate a PEP that explicitly weighed up the pros and cons of those two approaches (at the very least bundling means you don't need a reliable network connection at install time, while install time bootstrapping avoids the problem of old versions of pip, and also gives a way to bootstrap older Python installations).
Leaving aside specialised corporate setups with no access to PyPI, any installer is of very limited use without a reliable network connection. Most of the people we're expecting to reach with these changes will have always on network connections, or as near as makes no difference. However, pip and setuptools will change over time, and "-m getpip" allows upgrades to be done fairly easily, under user control. So ISTM we're really talking about an initial "python -m getpip" before lots and lots of "pip install this", "pip install that" etc.
Did you (or anyone else) look at my getpip.py? In what way might it not be fit for purpose as a bootstrapper? If it can be readily modified to do what's needed (and I'll put in the work if I can), then given that bootstrapping was the original impetus, lacking only an implementation which passed the "simple enough to explain, so a good idea" criterion, perhaps that situation can be rectified.
Regards, Vinay Sajip
On Jul 17, 2013, at 8:03 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Leaving aside specialised corporate setups with no access to PyPI, any installer is of very limited use without a reliable network connection. Most of the people we're expecting to reach with these changes will have always on network connections, or as near as makes no difference. However, pip and setuptools will change over time, and "-m getpip" allows upgrades to be done fairly easily, under user control. So ISTM we're really talking about an initial "python -m getpip" before lots and lots of "pip install this", "pip install that" etc.
It's hardly true that this is only specialized corporate setups. Another situation off the top of my head would be at various meet ups or conferences where people are trying to teach new people and might have little or no access.
Even assuming they *do* have access to the network, accessing the network includes a number of extra failure conditions. For instance pip 1.3+ is the first version of pip to include verification of SSL and we've had a fair number of people need help making pip be able to reach PyPI through their particular setups. Sometimes it's because the version of OpenSSL is old, other times they don't have OpenSSL at all, or they have a proxy between them and PyPI which is preventing them or requires additional configuration to make it work. Each possible failure condition is another thing that can go wrong for users, each one is another point of frustration and another reason not to fetch it if it can be helped.
You state that an installer is of limited use without a network connection but that's not particularly true either. Especially with Wheels and the removal of the simple "setup.py install" and the current focus on having a local cache of pre-built wheels I suspect there to be a decent number of people wanting to install from local wheels. It is true that each problem has a solution, but they are different solutions for each problem and generally require that the person be aware of the problem and the solution prior to having it in order to work around it.
Did you (or anyone else) look at my getpip.py? In what way might it not be fit for purpose as a bootstrapper? If it can be readily modified to do what's needed (and I'll put in the work if I can), then given that bootstrapping was the original impetus, lacking only an implementation which passed the "simple enough to explain, so a good idea" criterion, perhaps that situation can be rectified.
I did not look at your getpip.py. I've always believed that an explicit "fetch pip" step was not a reasonable step in the process. However bootstrapping had an implementation; its major issue was that it was implicit and that was deemed inappropriate. If you post it again I'll review it but I'll also be against actually using it.
On 18 Jul 2013 09:37, "Vinay Sajip" <vinay_sajip@yahoo.co.uk> wrote:
Nick Coghlan <ncoghlan <at> gmail.com> writes:
Technically the decision *hasn't* been made - there is, as yet, no bundling PEP for me to consider for any installer, and I've decided not to accept Richard's bootstrapping PEP due to the issues around delaying the download to first use. I'd just like to have a bundling PEP posted before I make that official, so I can refer to it in the rejection notice.
Technically? Well, "that ship has sailed" seems pretty well decided to me. I know that "technically is the best kind of correct" :-)
But IIUC, your reservations on PEP 439 (I didn't realise that was what Donald was referring to in his response) related to Richard's specific implementation. I posted an example getpip.py (very simple, I grant you) which would get setuptools and pip for users, without the need for bundling anything, plus proposed an equivalent upgrade for pyvenv which would do the same for venvs. There has been no discussion around getpip.py whatsoever, AFAIK.
However, even without a PEP, I consider pip the only acceptable option, as I believe we have no credibility left to burn with the broader Python development community on tool choices. We've spent years telling everyone
We don't need to burn any credibility at all. Perhaps python-dev lost some credibility when packaging got pulled from 3.3, even though it was a good decision made for the right reasons. But you only ask people to believe you when you have some new story to tell them, and pip is hardly new.
We're now telling people, OK setuptools is actually fine, but you should still use pip instead of easy_install and start using wheels instead of eggs. This is defensible, since even people using distribute were still importing setuptools.
This is something which arose from the coming together of setuptools and Distribute. There was no credibility lost promoting Distribute, since setuptools never supported Python 3 - until now. There's no credibility lost now promoting setuptools, since it is essentially now the same as Distribute without the need for compatibility workarounds.
However, I simply see *no way* we could pull off a migration to a new recommended installer when the migration from the previous one to the current one is still far from complete :P
I'm certainly not suggesting the time is right for migrating to a new recommended installer - we have always promoted pip (over easy_install), and that doesn't need to change. It doesn't mean we have to bundle pip with Python - just make it easier to get it on Windows and OS X. Just a few days ago you were saying that python -m getpip would be good to have, then I created a getpip module, and now AFAICT it hasn't even been looked at, while people gear up to do shed-loads of work to bundle pip with Python.
Adding in the distutils2/packaging digression just lowers our collective credibility even further, and we also get some significant spillover from the Python 3 transition.
Haters gonna hate. What're you gonna do? :-)
It's not about haters - it's about not causing additional pain for people that we have already asked to put up with a lot. However solid our reasons for doing so were, we've deliberately created a bunch of additional work for various people.
Essentially, don't underestimate how thin the ice we're currently walking on is community-wise: people are irritated and even outright angry with the Python core development team, and they have good reasons to be. We need to remain mindful of that, and take it into account when deciding how to proceed.
Who are these angry, entitled people? Have they forgotten that Python is a volunteer project? Why do we owe such people anything? I'm not convinced that such people are representative of the wider community.
I'm talking about people who don't get mad, they just walk away. Or they even stick around, grin, and bear it without complaint. They matter, even if they don't complain. We have a duty of care to our users to find the least disruptive path forward (that's why Python 3 was such a big deal - we chose the disruptive path because we couldn't see any other solution). In the case of packaging, that means finding a way to let educators and Python developers safely assume that end users, experienced or otherwise, will have ready access to the pip CLI. Cheers, Nick.
To judge from the video of the packaging panel at PyCon 2013, people are perhaps disappointed that we haven't got further, but there was no animosity that I could detect. The atmosphere was pretty positive and what I saw was an endearing faith and hope that we would, in time, get things right.
None of what you have said answers why the PEP process shouldn't be followed in this case. No compelling case has been made AFAICT for bundling pip as opposed to enabling python -m getpip, especially given that (a) the work involved in one is very small compared to the other, and (b) the result for the user is the same - they get to use setuptools and pip.
Regards,
Vinay Sajip
Nick Coghlan <ncoghlan <at> gmail.com> writes:
It's not about haters - it's about not causing additional pain for people
I used the term loosely in response to your comment about irritated and angry people.
I'm talking about people who don't get mad, they just walk away. Or they even stick around, grin, and bear it without complaint. They matter, even if they don't complain. We have a duty of care to our users to find the least disruptive path forward (that's why Python 3 was such a big deal - we chose the disruptive path because we couldn't see any other solution).
In the case of packaging, that means finding a way to let educators and Python developers safely assume that end users, experienced or otherwise, will have ready access to the pip CLI.
I'm not arguing that people shouldn't have access to the pip CLI. It's not about pip vs. something else. I'm saying there's no real evidence that people having to run "python -m getpip" once per Python installation is any kind of deal-breaker, or that a lack of network connection is somehow a problem when getting pip, but not a problem when getting things off PyPI.
More importantly, it doesn't seem like the PEP process has been followed, as other proposed alternatives (I mean the approach of "python -m getpip", as well as my specific suggested getpip.py) have not received adequate review or obvious negative feedback, nor have the pros and cons of bootstrapping vs. bundling been presented coherently and then pronounced upon. I'll stop going on about this topic now, though I will be happy to have technical discussions if there's really any point.
Regards, Vinay Sajip
On 18 July 2013 10:33, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Nick Coghlan <ncoghlan <at> gmail.com> writes:

More importantly, it doesn't seem like the PEP process has been followed, as other proposed alternatives (I mean the approach of "python -m getpip", as well as my specific suggested getpip.py) have not received adequate review or obvious negative feedback, nor have the pros and cons of bootstrapping vs. bundling been presented coherently and then pronounced upon.
Then (help) write the missing PEP! PEPs don't appear out of nowhere, they happen because people write them. That's why I sent a request to the list explicitly asking for someone to write a competitor to PEP 439 *because* I wasn't going to accept it, so we need something else to champion one or more of the alternatives. So far, Paul Nasrat is the only person who offered to take on that task, and he has yet to respond to my acceptance of that offer (which I'm not reading too much into at this point - I only sent that reply a day or two ago, and I expect that like the rest of us, Paul has plenty of other things to be working on in addition to Python packaging).

There are only two approaches that are completely out of the running at this point:

* implicit bootstrapping of pip at first use (as PEP 439 proposed)
* promoting anything other than pip as the default installer

Various other options for "how do we make it easier for end users to get started with pip" are all still technically on the table, including:

* explicit command line based bootstrapping of pip by end users (just slightly cleaned up from the status quo)
* creating Windows and Mac OS X installers for pip (since using wget/curl to run a script is either not possible or just an entirely strange notion there and forms a major part of the bootstrapping problem - after all, we expect people to be able to use the CPython Windows and Mac OS X installers just fine, why should they have any more trouble with an installer for pip?)
* implicit bootstrapping of pip by the CPython Windows and Mac OS X installers
* implicit bootstrapping of pip by the Python Launcher for Windows
* bundling pip with the CPython Windows and Mac OS X installers (and using it to upgrade itself)
* bundling pip with the Python Launcher for Windows (and using it to upgrade itself)

Yes, I have my opinions and will try to nudge things in particular directions that I think are better, but until someone sits down and *actually writes the PEP for it*, I won't know how justified those opinions are. Even though I have already stated my dislike for some of these approaches (up to and including misstating that dislike as "not going to happen"), that just means the arguments in favour would need to be a bit more persuasive to convince me I am wrong. The problem statement also needs to be updated to cover the use case of an instructor running a class and wanting to offer a local PyPI server (or other cache) without a reliable network connection to the outside world, since *that* is the main argument against the bootstrapping based solutions.

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jul 17, 2013, at 7:36 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Just a few days ago you were saying that python -m getpip would be good to have, then I created a getpip module, and now AFAICT it hasn't even been looked at, while people gear up to do shed-loads of work to bundle pip with Python.
There was discussion around ``python -m getpip`` and the general thinking of that thread was that expecting users to type in an explicit command was adding extra steps into the process (and placing a dependency on the network connection being available whenever they happen to want to install something) and that was less than desirable. On top of that it was also the general thinking of that thread that implicitly bootstrapping during the first run was too magical and too prone to breakages related to the network connection. Bundling at creation of the release files or during install time is what's in play at the moment. Personally I feel that bundling is the least error prone and most likely to work in the largest number of cases. Given that one major target of this is beginners, minimizing the number of places something can fail seems to be the most useful option. Throw in the fact that it makes offline installations match the online installations better and I think it's the way it should go. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
Donald Stufft <donald <at> stufft.io> writes:

There was discussion around ``python -m getpip`` and the general thinking of that thread was that expecting users to type in an explicit command was adding extra steps into the process (and placing a dependency on the network connection being available whenever they happen to want to install something) and that was less than desirable.
Well, it's just one additional command to type in - it's really neither here nor there as long as it's well documented. And the network connection argument is a bit of a straw man. Even if pip is present already, a typical pip invocation will fail if there is no network connection - hardly a good user experience. No reasonable user is going to complain if the instructions about installing packages include having a working network connection as a precondition. Whatever the technical merits of approach A vs. approach B, remember that my initial post was about following the process. Regards, Vinay Sajip
On Jul 17, 2013, at 8:16 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Well, it's just one additional command to type in - it's really neither here nor there as long as it's well documented.
There is already a getpip.py that's just not distributed with Python. So if "There is only one additional command to type" was the excuse, then we already have that:

curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python

But for various reasons many projects have decided that expecting people to install the tools is difficult, especially for beginners, and that simply documenting the command to install it was not enough.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
Donald Stufft <donald <at> stufft.io> writes:
curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python
Well it doesn't work on Windows, which would be a reasonable objection to using that specific approach.
But for various reasons many projects have decided that expecting people to install the tools is difficult, especially for beginners and that simply documenting the command to install it was not enough.
If it's that obvious, then why did Richard spend so long writing a bootstrap script, drafting PEP 439 etc.? Do you have any numbers on the "many projects"? Regards, Vinay Sajip
On Jul 17, 2013, at 8:38 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Donald Stufft <donald <at> stufft.io> writes:
curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python
Well it doesn't work on Windows, which would be a reasonable objection to using that specific approach.
But for various reasons many projects have decided that expecting people to install the tools is difficult, especially for beginners and that simply documenting the command to install it was not enough.
If it's that obvious, then why did Richard spend so long writing a bootstrap script, drafting PEP 439 etc.? Do you have any numbers on the "many projects"?
I never stated it was *obvious*. To me requiring an explicit bootstrap step was always a bad idea. It's an unfriendly UX that requires people to either know ahead of time if they already have pip installed, or try to use pip, notice it fail, run the bootstrapper, and then run the command they originally wanted to run. It also places a burden on every other project in the ecosystem to document that they need to first run ``python -m getpip`` and then run ``pip install project``.

However Richard's implementation and the PEP was not an explicit bootstrap. It was an implicit bootstrap that upon the first execution of ``pip`` would fetch and install pip and setuptools. The implicit bootstrap approach was more or less decided against for fear of being too magical and users not really being aware if they have or don't have pip.

So to recap:

Bootstrapping over the Network in General
- Requires network access
- Extra failure points
- OpenSSL age
- OpenSSL available at all?
- Proxies?
- SSL intercept devices?

Explicit Bootstrapping
- Everything from Bootstrapping over the Network
- Requires users (and projects) to use/document an explicit command

Implicit Bootstrapping
- Everything from Bootstrapping over the Network
- Users unsure if pip is installed or not (or at what point it will install)
- "Magical"

Bootstrap at Python Install Time
- Everything from Bootstrapping over the Network
- Users possibly unaware that the installer reaches the network
- Some users tend to not be fans of installers "phoning home"
- Privacy implications?

Pre-Installation at Release Creation Time
- Users might possibly have an older version of pip
- ???

The older version of pip is just about the only real downside *for the users* of Python/pip that I can think of. This is already the case for most people using the pip provided by their Linux distribution and it's simple to upgrade the pip if the user requires a newer version of pip using ``pip install --upgrade pip``.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 18 July 2013 02:03, Donald Stufft <donald@stufft.io> wrote:
it's simple to upgrade the pip if the user requires a newer version of pip using ``pip install --upgrade pip``
Please don't gloss over the potential issues with upgrading in the face of in-use exe wrappers. We have a design for a solution, but as yet no working code. I expect to work on this, but my time is limited and I'm not at all sure there won't be issues still to resolve. (Obviously, anyone else is welcome to help, but it's a "windows issue", so I don't know how much interest there will be from non-Windows developers). Prior to the setuptools move away from 2to3, my standard response to anyone reporting issues with in-place upgrades of setuptools or pip (certainly on Windows, and in general anywhere else too) was "well, don't do that - remove and reinstall manually". Things are better now, but not yet perfect and I don't believe that there is a consensus that this is acceptable for a bundled pip. Paul
On Jul 18, 2013, at 2:45 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 18 July 2013 02:03, Donald Stufft <donald@stufft.io> wrote: it's simple to upgrade the pip if the user requires a newer version of pip using ``pip install --upgrade pip``
Please don't gloss over the potential issues with upgrading in the face of in-use exe wrappers. We have a design for a solution, but as yet no working code. I expect to work on this, but my time is limited and I'm not at all sure there won't be issues still to resolve. (Obviously, anyone else is welcome to help, but it's a "windows issue", so I don't know how much interest there will be from non-Windows developers).
That's a bug ;) And will be worked around one way or another even if I need to install Windows to make it happen in time.
Prior to the setuptools move away from 2to3, my standard response to anyone reporting issues with in-place upgrades of setuptools or pip (certainly on Windows, and in general anywhere else too) was "well, don't do that - remove and reinstall manually". Things are better now, but not yet perfect and I don't believe that there is a consensus that this is acceptable for a bundled pip.
I consider "remove and reinstall" to be a terrible UX and if that's the best answer pip can give we need to fix that regardless. But as I said I don't mind ``python -mgetpip`` existing for one reason or another. I just don't think a bootstrap command is our best option for providing the most streamlined user experience. Either way running an old pip is hardly that big of a deal. Anyone using a Linux distro is likely to be running an older version unless they've gone out of their way to upgrade it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 18 July 2013 16:45, Paul Moore <p.f.moore@gmail.com> wrote:
On 18 July 2013 02:03, Donald Stufft <donald@stufft.io> wrote:
it's simple to upgrade the pip if the user requires a newer version of pip using ``pip install --upgrade pip``
Please don't gloss over the potential issues with upgrading in the face of in-use exe wrappers. We have a design for a solution, but as yet no working code. I expect to work on this, but my time is limited and I'm not at all sure there won't be issues still to resolve. (Obviously, anyone else is welcome to help, but it's a "windows issue", so I don't know how much interest there will be from non-Windows developers).
Prior to the setuptools move away from 2to3, my standard response to anyone reporting issues with in-place upgrades of setuptools or pip (certainly on Windows, and in general anywhere else too) was "well, don't do that - remove and reinstall manually". Things are better now, but not yet perfect and I don't believe that there is a consensus that this is acceptable for a bundled pip.
Making in-place upgrades using "pip install --upgrade pip" reliable on Windows is definitely the preferred solution, but it isn't a show stopper if it isn't ready for 3.4. Requiring that in-place upgrades be run as "python -m pip install --upgrade pip" would be acceptable, so long as the direct invocation ("pip install --upgrade pip") was detected and a clear error thrown suggesting the other command (this would be mildly annoying, but it's still a substantial improvement over the status quo). Something like: "Due to an unfortunate limitation of pip on Windows, direct upgrades are not supported. Please run 'python -m pip install --upgrade pip' to work around the problem." Shipping an msi installer for pip (perhaps bundling with setuptools) would also be an acceptable alternative. Bundling both with the "Python launcher for Windows" installer is definitely something we should consider for older versions (rather than updating the CPython installer). Either way, Windows users are used to downloading and running installers to get Python upgrades :) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
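For illustration, the kind of guard Nick describes could look something like the sketch below. This is not actual pip code - the function names and the detection heuristic are assumptions - it only shows the shape of refusing a direct exe-wrapper self-upgrade on Windows and pointing the user at the ``python -m pip`` spelling:

import sys

def _invoked_via_exe_wrapper():
    # Hypothetical heuristic: on Windows the exe wrapper cannot be
    # replaced while it is running, so a self-upgrade launched from
    # pip.exe is the problematic case.
    return (sys.platform == 'win32' and
            sys.argv[0].lower().endswith(('pip.exe', 'pip-script.py')))

def check_self_upgrade(args):
    # Refuse 'pip install --upgrade pip' with the clear error message
    # Nick suggests, instead of failing partway through the upgrade.
    if ('install' in args and ('-U' in args or '--upgrade' in args)
            and 'pip' in args and _invoked_via_exe_wrapper()):
        sys.exit("Due to an unfortunate limitation of pip on Windows, "
                 "direct upgrades are not supported. Please run "
                 "'python -m pip install --upgrade pip' to work around "
                 "the problem.")

Something like check_self_upgrade(sys.argv[1:]) would then run before the normal command dispatch.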
On 18 July 2013 08:57, Nick Coghlan <ncoghlan@gmail.com> wrote:
Shipping an msi installer for pip (perhaps bundling with setuptools) would also be an acceptable alternative.
-1. I would suggest that this approach, if it were considered seriously, should be reviewed carefully by someone who understands MSI installers (not me!). Specifically, if I install pip via an MSI, then use "python -m pip install -U pip", will the "Add/Remove Programs" entry created by the MSI still uninstall cleanly? Broken uninstall options and incomplete package removals are a perennial problem on Windows, usually caused by messing with installed files outside control of the installer. Paul
On 18 July 2013 18:10, Paul Moore <p.f.moore@gmail.com> wrote:
On 18 July 2013 08:57, Nick Coghlan <ncoghlan@gmail.com> wrote:
Shipping an msi installer for pip (perhaps bundling with setuptools) would also be an acceptable alternative.
-1.
I would suggest that this approach, if it were considered seriously, should be reviewed carefully by someone who understands MSI installers (not me!). Specifically, if I install pip via an MSI, then use "python -m pip install -U pip", will the "Add/Remove Programs" entry created by the MSI still uninstall cleanly? Broken uninstall options and incomplete package removals are a perennial problem on Windows, usually caused by messing with installed files outside control of the installer.
This potential problem needs to be taken into account for any bundling solution as well. Explicit bootstrapping (with an install time option to invoke it in the CPython and Python launcher for Windows installers) is looking better all the time :) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jul 18, 2013, at 4:22 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 18 July 2013 18:10, Paul Moore <p.f.moore@gmail.com> wrote:
On 18 July 2013 08:57, Nick Coghlan <ncoghlan@gmail.com> wrote:
Shipping an msi installer for pip (perhaps bundling with setuptools) would also be an acceptable alternative.
-1.
I would suggest that this approach, if it were considered seriously, should be reviewed carefully by someone who understands MSI installers (not me!). Specifically, if I install pip via an MSI, then use "python -m pip install -U pip", will the "Add/Remove Programs" entry created by the MSI still uninstall cleanly? Broken uninstall options and incomplete package removals are a perennial problem on Windows, usually caused by messing with installed files outside control of the installer.
This potential problem needs to be taken into account for any bundling solution as well. Explicit bootstrapping (with an install time option to invoke it in the CPython and Python launcher for Windows installers) is looking better all the time :)
That's only a problem if we make an MSI installer. Which I don't think we need to do.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
From: Paul Moore
On 18 July 2013 08:57, Nick Coghlan <ncoghlan@gmail.com> wrote:
Shipping an msi installer for pip (perhaps bundling with setuptools) would also be an acceptable alternative.
-1.
I would suggest that this approach, if it were considered seriously, should be reviewed carefully by someone who understands MSI installers (not me!). Specifically, if I install pip via an MSI, then use "python -m pip install -U pip", will the "Add/Remove Programs" entry created by the MSI still uninstall cleanly? Broken uninstall options and incomplete package removals are a perennial problem on Windows, usually caused by messing with installed files outside control of the installer. Paul
Also -1, and I've spent quite a lot of time writing MSIs recently... It could be solved, but wheels are a better fix for the problems that people solve with MSIs. MSIs are also useless when virtualenvs are involved, since there's basically a guarantee that its metadata will get out of sync with reality as soon as someone deletes the virtualenv. IMHO bundling pip (and all dependencies) with the installer is best. Any bootstrap script hitting the internet will need to pin the version, so you may as well include a zip of the files and extract them on install. That way you'll always get a pip that can upgrade itself, and if you do a repair install you'll get a working pip back. Steve
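A rough sketch of the install-time step Steve describes - the names here are assumptions (there is no real pip-bundle.zip); the point is only that unpacking a zip shipped inside the installer needs no network access and is repeatable on a repair install:

import sysconfig
import zipfile

def install_bundled_pip(bundle_path):
    # Unpack the pip (and dependencies) zip shipped with the installer
    # straight into site-packages; re-running this on a repair install
    # restores a working pip.
    site_packages = sysconfig.get_paths()['purelib']
    with zipfile.ZipFile(bundle_path) as zf:
        zf.extractall(site_packages)

if __name__ == '__main__':
    install_bundled_pip('pip-bundle.zip')  # hypothetical bundle name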
On Wed, Jul 17, 2013 at 8:16 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Donald Stufft <donald <at> stufft.io> writes:
There was discussion around ``python -m getpip`` and the general thinking of that thread was that expecting users to type in an explicit command was adding extra steps into the process (and placing a dependency on the network connection being available whenever they happen to want to install something) and that was
Well, it's just one additional command to type in - it's really neither here nor there as long as it's well documented.
And the network connection argument is a bit of a straw man. Even if pip is present already, a typical pip invocation will fail if there is no network connection - hardly a good user experience. No reasonable user is going to complain if the instructions about installing packages include having a working network connection as a precondition.
Whatever the technical merits of approach A vs. approach B, remember that my initial post was about following the process.
Regards,
Vinay Sajip
I didn't realize the current option was about bundling pip itself rather than including a simple bootstrap. I have favored the bootstrap approach (being any intentionally limited installer that you would be daft to use generally). The rationale is that we would want to avoid bundling a soon-outdated "good enough" tool that people use instead of letting better pypi-hosted tools thrive.

Setuptools is an example of a project that has this problem. Projects might use the [even more*] terrible distutils in preference, admonishing others to do the same, often without understanding why apart from "it's in the standard library".

I didn't believe in the pip command that installs itself because I would have been irritated if pip was installed by surprise - maybe I have a reason to install it a different way - perhaps from source or from a system package. A bundled get-pip that avoids also having to install setuptools first, and that is secure, and easy to remember, would be super handy.

The normal way to get pip these days is to install virtualenv. After you get it, it's just one command to run and pretty convenient.

* for the haters
On Jul 17, 2013, at 8:40 PM, Daniel Holth <dholth@gmail.com> wrote:
I didn't realize the current option was about bundling pip itself rather than including a simple bootstrap. I have favored the bootstrap approach (being any intentionally limited installer that you would be daft to use generally). The rationale is that we would want to avoid bundling a soon outdated "good enough" tool that people use instead of letting better pypi-hosted tools thrive.
Is the argument here that by including pip pre-installed these other tools will be unable to compete? Because the same thing could be said for including a bootstrapper as well. In fact in either option I expect the way an alternative installer would be installed is via ``pip install foo`` regardless of if the person needs to type ``python -mgetpip`` first or not.
Setuptools is an example of a project that has this problem. Projects might use the [even more*] terrible distutils in preference, admonishing others to do the same, often without understanding why apart from "it's in the standard library".
It's for more reasons than it's in the standard library. setuptools has had a lot of misfeatures and a good bit of the angst against using setuptools was due to easy_install, not setuptools itself.
I didn't believe in the pip command that installs itself because I would have been irritated if pip was installed by surprise - maybe I have a reason to install it a different way - perhaps from source or from a system package.
A bundled get-pip that avoids also having to install setuptools first, and that is secure, and easy to remember, would be super handy.
For the record I'm not against including a method for fetching pip. I expect Linux distributions to uninstall pip from the Python they ship, and it would still of course be possible to uninstall the provided pip, so an easy method to (re)install it if users happen to do that and wish to get it back doesn't seem like a bad idea to me.
The normal way to get pip these days is to install virtualenv. After you get it it's just one command to run and pretty convenient.
* for the haters
Let us not forget that the pre-installed approach is hardly a new thing for package managers. Both Ruby and Node do this with their respective package managers in order to make it simpler for their users to install packages. So it's been shown that this type of setup can work. Do we really need to add extra tedium for users? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
It's for more reasons than it's in the standard library. setuptools has had a lot of misfeatures and a good bit of the angst against using setuptools was due to easy_install, not setuptools itself.
It's hard to disentangle the two - it's not as if the easy_install functionality is completely separate, and it's possible to change its behaviour independently. Another thing about setuptools which some don't especially like is that generated scripts reference pkg_resources, for no particularly good reason. Regards, Vinay Sajip
On 18 July 2013 17:50, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
It's for more reasons than it's in the standard library. setuptools has had a lot of misfeatures and a good bit of the angst against using setuptools was due to easy_install, not setuptools itself.
It's hard to disentangle the two - it's not as if the easy_install functionality is completely separate, and it's possible to change its behaviour independently. Another thing about setuptools which some don't especially like is that generated scripts reference pkg_resources, for no particularly good reason.
It would actually be nice if "pkg_resources" and "setuptools-core" were available as separate PyPI distributions, and setuptools bundled them together with easy_install. It's a *long* way down the priority list, though (and will likely never make it to the top, although it may be more practical once pip vendors the bits it needs). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
It would actually be nice if "pkg_resources" and "setuptools-core" were available as separate PyPI distributions, and setuptools bundled them together with easy_install.
This would seem to require quite a sizeable refactoring of setuptools, since the easy_install command is just an entry point for setuptools.command.easy_install.main(). Regards, Vinay Sajip
It would actually be nice if "pkg_resources" and "setuptools-core" were available as separate PyPI distributions, and setuptools bundled them together with easy_install. It's a *long* way down the priority list, though (and will likely never make it to the top, although it may be more practical once pip vendors the bits it needs).
the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources. you're left with 2 poor solutions: 1) rewriting script import lines, or 2) still installing setuptools anyway. so, having a separate pkg_resources is higher up on the list I think, for that reason. without a separate pkg_resources, I think the "dynamic install of setuptools" idea wins out, or no change at all.
Marcus Smith <qwcode <at> gmail.com> writes:

the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources.

They don't *need* pkg_resources. All they're doing is taking a module name and the name of a nested object in the form 'a.b.c', and distlib-generated scripts show that no external references are needed. Here's the template for a distlib-generated script:

SCRIPT_TEMPLATE = '''%(shebang)s
if __name__ == '__main__':
    import sys, re

    def _resolve(module, func):
        __import__(module)
        mod = sys.modules[module]
        parts = func.split('.')
        result = getattr(mod, parts.pop(0))
        for p in parts:
            result = getattr(result, p)
        return result

    try:
        sys.argv[0] = re.sub('-script.pyw?$', '', sys.argv[0])
        func = _resolve('%(module)s', '%(func)s')
        rc = func()  # None interpreted as 0
    except Exception as e:  # only supporting Python >= 2.6
        sys.stderr.write('%%s\\n' %% e)
        rc = 1
    sys.exit(rc)
'''

I don't see any reason why setuptools couldn't be updated to use this approach.

Regards,

Vinay Sajip
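For what it's worth, filling in that template is just a dict substitution. The entry point below ('main' in a module 'foomod.cli', producing a script 'foo') is made up purely for illustration:

# Illustrative only: render SCRIPT_TEMPLATE (as defined above) for a
# hypothetical script 'foo' whose entry point is foomod.cli:main.
script_body = SCRIPT_TEMPLATE % {
    'shebang': '#!/usr/bin/env python',
    'module': 'foomod.cli',
    'func': 'main',
}
with open('foo', 'w') as f:
    f.write(script_body)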
On Thu, Jul 18, 2013 at 9:49 AM, Vinay Sajip <vinay_sajip@yahoo.co.uk>wrote:
Marcus Smith <qwcode <at> gmail.com> writes:
the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources.
They don't *need* pkg_resources. All they're doing is taking a module name and the name of a nested object in the form 'a.b.c', and distlib-generated scripts show that no external references are needed. Here's the template for a distlib-generated script:
pkg_resources scripts confirm the version. don't see that here? not necessary?
On Thu, Jul 18, 2013 at 1:01 PM, Marcus Smith <qwcode@gmail.com> wrote:
On Thu, Jul 18, 2013 at 9:49 AM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Marcus Smith <qwcode <at> gmail.com> writes:
the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources.
They don't *need* pkg_resources. All they're doing is taking a module name and the name of a nested object in the form 'a.b.c', and distlib-generated scripts show that no external references are needed. Here's the template for a distlib-generated script:
pkg_resources scripts confirm the version. don't see that here? not necessary?
It's useful when you have more than one version of things installed as eggs. pkg_resources does the full dependency resolution and adds everything to the sys.path. When you are not doing that then it's not needed.
Marcus Smith <qwcode <at> gmail.com> writes:
pkg_resources scripts confirm the version. don't see that here? not necessary?
The load_entry_point needs the dist name because of how it's implemented - it defers to the distribution instance. AFAICT there are no actual checks.

def load_entry_point(dist, group, name):
    """Return `name` entry point of `group` for `dist` or raise ImportError"""
    return get_distribution(dist).load_entry_point(group, name)

Regards,

Vinay Sajip
The load_entry_point needs the dist name because of how it's implemented - it defers to the distribution instance. AFAICT there are no actual checks.
def load_entry_point(dist, group, name):
    """Return `name` entry point of `group` for `dist` or raise ImportError"""
    return get_distribution(dist).load_entry_point(group, name)
it checks the version. you get this. I have pip-1.5dev1 in this case, but a script wrapper referencing 1.4rc5

(pip)qwcode@qwcode:~/p/pypa/pip$ pip --version
Traceback (most recent call last):
  File "/home/qwcode/.qwdev/pip/bin/pip", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/home/qwcode/.qwdev/pip/lib/python2.6/site-packages/pkg_resources.py", line 3011, in <module>
    parse_requirements(__requires__), Environment()
  File "/home/qwcode/.qwdev/pip/lib/python2.6/site-packages/pkg_resources.py", line 626, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==1.4rc5
On Thu, Jul 18, 2013 at 12:12 PM, Marcus Smith <qwcode@gmail.com> wrote:
It would actually be nice if "pkg_resources" and "setuptools-core" were available as separate PyPI distributions, and setuptools bundled them together with easy_install. It's a *long* way down the priority list, though (and will likely never make it to the top, although it may be more practical once pip vendors the bits it needs).
the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources. you're left with 2 poor solutions: 1) rewriting script import lines, or 2) still installing setuptools anyway
so, having a separate pkg_resources is higher up on the list I think for that reason. without a separate pkg_resources, I think the "dynamic install of setuptools" idea wins out, or no change at all.
I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do. It shouldn't make much difference whether the vendoring happens before or after pkg_resource's separation. The trickiest parts might be adding the undeclared pkg_resources / setuptools dependency when appropriate and figuring out whether we can install setuptools even if it's not available as a wheel. Meanwhile someone might add a flag or a plugin to setuptools' console_scripts handler to generate them in a different way. I am not worried that 99.9% of pypi-hosted packages depend on setuptools or distutils. It is enough to introduce only the possibility of getting along without it. For the rest it is appropriate to install and use setuptools to build packages that were in fact designed to use it.
I tried it out. pip can install setuptools when only pkg_resources is installed. The only thing stopping it is a small check for whether the current setuptools is of the distribute variety. On Thu, Jul 18, 2013 at 12:53 PM, Daniel Holth <dholth@gmail.com> wrote:
On Thu, Jul 18, 2013 at 12:12 PM, Marcus Smith <qwcode@gmail.com> wrote:
It would actually be nice if "pkg_resources" and "setuptools-core" were available as separate PyPI distributions, and setuptools bundled them together with easy_install. It's a *long* way down the priority list, though (and will likely never make it to the top, although it may be more practical once pip vendors the bits it needs).
the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources. you're left with 2 poor solutions: 1) rewriting script import lines, or 2) still installing setuptools anyway
so, having a separate pkg_resources is higher up on the list I think for that reason. without a separate pkg_resources, I think the "dynamic install of setuptools" idea wins out, or no change at all.
I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do. It shouldn't make much difference whether the vendoring happens before or after pkg_resource's separation. The trickiest parts might be adding the undeclared pkg_resources / setuptools dependency when appropriate and figuring out whether we can install setuptools even if it's not available as a wheel.
Meanwhile someone might add a flag or a plugin to setuptools' console_scripts handler to generate them in a different way.
I am not worried that 99.9% of pypi-hosted packages depend on setuptools or distutils. It is enough to introduce only the possibility of getting along without it. For the rest it is appropriate to install and use setuptools to build packages that were in fact designed to use it.
I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do.
I agree. there's 2 problems to be solved here:

1) making pip a self-sufficient wheel installer (which requires some internal pkg_resources equivalent)
2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions

for #2, we have a few paths I think:

1) bundle setuptools (and have pip install "pkg_resources" for console scripts, if it existed as a separate project)
2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?)
3) dynamic install of setuptools from wheel when pip needs to install sdists (which is 99.9% of the time, so this feels a bit silly)
4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?)
Marcus Smith <qwcode <at> gmail.com> writes:
I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do.
I agree. there's 2 problems to be solved here:

1) making pip a self-sufficient wheel installer (which requires some internal pkg_resources equivalent)
2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions

for #2, we have a few paths I think:

1) bundle setuptools (and have pip install "pkg_resources" for console scripts, if it existed as a separate project)
2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?)
3) dynamic install of setuptools from wheel when pip needs to install sdists (which is 99.9% of the time, so this feels a bit silly)
4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?)

If setuptools changes the script generation, the need for pkg_resources is gone at least from that part of the picture. Perhaps you're forgetting that there already is an internal pkg_resources equivalent in my pip-distlib branch - this is a pkg_resources compatibility shim using pip.vendor.distlib which passed all the pip tests when it was submitted as a PR.

Regards,

Vinay Sajip
Perhaps you're forgetting that there already is an internal pkg_resources equivalent in my pip-distlib branch - this is a pkg_resources compatibility shim using pip.vendor.distlib which passed all the pip tests when it was submitted as a PR.
: ) no I haven't forgotten. I actually bring it up with others pretty often. my use of "pkg_resources equivalent" was actually a reference to your PR work. Marcus
On Thu, Jul 18, 2013 at 2:19 PM, Marcus Smith <qwcode@gmail.com> wrote:
Perhaps you're forgetting that there already is an internal pkg_resources equivalent in my pip-distlib branch - this is a pkg_resources compatibility shim using pip.vendor.distlib which passed all the pip tests when it was submitted as a PR.
: ) no I haven't forgotten. I actually bring it up with others pretty often. my use of "pkg_resource equivalent" was actually a reference to your PR work.
Marcus
On Thu, Jul 18, 2013 at 1:24 PM, Marcus Smith <qwcode@gmail.com> wrote:
I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do.
I agree. there's 2 problems to be solved here
1) making pip a self-sufficient wheel installer (which requires some internal pkg_resources equivalent) 2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions
for #2, we have a few paths I think
1) bundle setuptools (and have pip install "pkg_resources" for console scripts, if it existed as a separate project)
2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?)
3) dynamic install of setuptools from wheel when pip needs to install sdists (which is 99.9% of the time, so this feels a bit silly)
4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?)
virtualenv /tmp/builder
/tmp/builder/bin/pip wheel -w /tmp/wheels -r requirements.txt

virtualenv /tmp/no-setuptools
/tmp/no-setuptools/bin/pip install --use-wheel --find-links=/tmp/wheels --no-index -r requirements.txt

That is the anti-setuptools workflow I envision. The build environment has an appropriate amount of setuptools and the no-setuptools environment has none. This gives you the option of not having setuptools if you don't want it, something that some people will appreciate. It does not try to avoid the non-problem of installing setuptools when you actually need it. Eventually there may be more sophisticated build requirements handling, for whatever that's worth, so that you might not have to have an explicit setuptools virtualenv. System packaging certainly doesn't install build requirements into their own isolated environment.
virtualenv /tmp/builder
/tmp/builder/bin/pip wheel -w /tmp/wheels -r requirements.txt
people will expect to be able to do this globally (i.e. not in a virtualenv). that's when the headache starts

It does not try to avoid the non-problem of installing setuptools when you actually need it

it's a practical problem for users, due to being currently responsible for fulfilling the setuptools dependency themselves in non-virtualenv environments. IMO, we need to bundle or install it for them (through dynamic installs, or add the logic to get-pip)
fyi, I'm updating donald's original setuptools bundle issue with all of this as the choices become clearer https://github.com/pypa/pip/issues/1049 On Thu, Jul 18, 2013 at 2:08 PM, Marcus Smith <qwcode@gmail.com> wrote:
virtualenv /tmp/builder
/tmp/builder/bin/pip wheel -w /tmp/wheels -r requirements.txt
people will expect to be able to do this globally (i.e. not in a virtualenv). that's when the headache starts
It does not try to avoid the non-problem of installing setuptools when you
actually need it
it's a practical problem for users, due to being currently responsible for fulfilling the setuptools dependency themselves in non-virtualenv environments. IMO, we need to bundle or install it for them (through dynamic installs, or add the logic to get-pip)
On 18 July 2013 22:08, Marcus Smith <qwcode@gmail.com> wrote:
it's a practical problem for users, due to being currently responsible for fulfilling the setuptools dependency themselves in non-virtualenv environments. IMO, we need to bundle or install it for them (through dynamic installs, or add the logic to get-pip)
Seriously, we're talking here about bundling pip with the Python installer. Why not just bundle setuptools as well? Don't vendor it, don't jump through hoops, just bundle it too, so that all Python environments can be assumed to have pip and setuptools present. (Note that I'm one of the least likely people to advocate setuptools around here, and yet even I don't see why we're working so hard to avoid just having the thing available...) It seems to me that by bundling pip but not setuptools, we're just making unnecessary work for ourselves. Paul
On Jul 18, 2013, at 5:56 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 18 July 2013 22:08, Marcus Smith <qwcode@gmail.com> wrote: it's a practical problem for users, due to being currently responsible for fulfilling the setuptools dependency themselves in non-virtualenv environments. IMO, we need to bundle or install it for them (through dynamic installs, or add the logic to get-pip)
Seriously, we're talking here about bundling pip with the Python installer. Why not just bundle setuptools as well? Don't vendor it, don't jump through hoops, just bundle it too, so that all Python environments can be assumed to have pip and setuptools present. (Note that I'm one of the least likely people to advocate setuptools around here, and yet even I don't see why we're working so hard to avoid just having the thing available...)
It seems to me that by bundling pip but not setuptools, we're just making unnecessary work for ourselves.
Paul
Because a significant number of people have had issues with things breaking because their setuptools install got messed up. Typically some combination of things convinced pip to uninstall setuptools which then breaks pip completely (due to a reliance on pkg_resources) and breaks installing from sdists (due to a reliance on setuptools). This isn't a problem for most tools because they could just use pip to fix their dependencies. However when it's the package manager that breaks you're stuck fixing things manually. While it's obvious to you or me what the problem is, I've found that the bulk of people who have these issues have no idea why they are getting the error and how to fix it. Bundling this means that pip is either installed and works, or it isn't installed. It makes it much simpler for end users to deal with and makes it much more robust. Right now this is particularly troublesome because there's a huge bug that's causing this to happen and I think I've not gone a day without having someone different run into the problem. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Thu, Jul 18, 2013 at 5:56 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 18 July 2013 22:08, Marcus Smith <qwcode@gmail.com> wrote:
it's a practical problem for users, due to being currently responsible for fulfilling the setuptools dependency themselves in non-virtualenv environments. IMO, we need to bundle or install it for them (through dynamic installs, or add the logic to get-pip)
Seriously, we're talking here about bundling pip with the Python installer. Why not just bundle setuptools as well? Don't vendor it, don't jump through hoops, just bundle it too, so that all Python environments can be assumed to have pip and setuptools present. (Note that I'm one of the least likely people to advocate setuptools around here, and yet even I don't see why we're working so hard to avoid just having the thing available...)
It seems to me that by bundling pip but not setuptools, we're just making unnecessary work for ourselves.
I'll see if I can do a patch. I don't think it will be hard at all, and I do think it's work that will eventually become necessary. PJE is correct that if we surprise people with non-pkg_resources console_scripts then we will break things for people who are more interested in a working packaging experience.
Daniel Holth <dholth <at> gmail.com> writes:
PJE is correct that if we surprise people with non-pkg_resources console_scripts then we will break things for people who are more interested in a working packaging experience.
Do you mean that you think multiple versions have to be supported, and that's why console scripts should remain pkg_resources-dependent? If you don't think that multiple version support is needed, then the non-pkg_resources versions of the script should be able to locate the function to call from the script, assuming it can import the module. Are you saying that the import or function call will fail, because the distribution didn't reference setuptools as a dependency, and yet expects it to be there? Regards, Vinay Sajip
On Jul 18, 2013, at 7:20 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Daniel Holth <dholth <at> gmail.com> writes:
PJE is correct that if we surprise people with non-pkg_resources console_scripts then we will break things for people who are more interested in a working packaging experience.
Do you mean that you think multiple versions have to be supported, and that's why console scripts should remain pkg_resources-dependent?

If you don't think that multiple version support is needed, then the non-pkg_resources versions of the script should be able to locate the function to call from the script, assuming it can import the module. Are you saying that the import or function call will fail, because the distribution didn't reference setuptools as a dependency, and yet expects it to be there?
Regards,
Vinay Sajip
I think the point is that people might be dependent on this functionality and changing it out from underneath them could break their world. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention. A PEP would allow standardisation of the multiple-versions feature if it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general). If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-) Regards, Vinay Sajip
On Jul 18, 2013, at 7:37 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
A PEP would allow standardisation of the multiple-versions feature if it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general).
If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-)
Yes I'm against implementation defined features. However this is already the status quo for this particular implementation. Basically I'm worried we are trying to fix too much at once.

One of the major reasons for distutils/packaging failing was it tried to fix the world in one fell swoop. I see this same pattern starting to happen here. The problem is each solution has a bunch of corner cases and gotchas and the more things we try to fix at once the less eyes we'll have on each individual one and the more rushed the entire toolchain is going to be.

I think it's *really* important we limit the scope of what we fix at any one time. Right now we have PEP426, PEP440, PEP439, PEP427, Nick is talking about an Sdist 2.0 PEP, Daniel just posted another PEP I haven't looked at yet, this is going to be another PEP. On top of that we have a number of issues related to those PEPs but not specifically part of those PEPs.

A lot of things are being done right now and I personally am having trouble keeping up and keeping things straight. I know I'm not the only one because I've had a number of participants of these discussions privately tell me that they aren't sure how I'm keeping up (and I'm struggling to do so). I really don't want us to ship a bunch of half baked / not entirely thought through solutions.

So can we please limit our scope? Let's start by fixing the stuff we have now, punting on fixing some other problems by using the existing tooling and then let's come back to the things we've punted once we've closed the loop on some of these other outstanding things and fix them better.
Regards,
Vinay Sajip
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Jul 18, 2013, at 8:15 PM, Donald Stufft <donald@stufft.io> wrote:
So can we please limit our scope? Let's start by fixing the stuff we have now, punting on fixing some other problems by using the existing tooling and then let's come back to the things we've punted once we've closed the loop on some of these other outstanding things and fix them better.
Let me just specify though that i'm not stating exactly where that line should be drawn. I just see things heading in this direction and I think we're letting scope creep hit us hard and it will absolutely kill our efforts. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Thu, Jul 18, 2013 at 8:15 PM, Donald Stufft <donald@stufft.io> wrote:
On Jul 18, 2013, at 7:37 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
A PEP would allow standardisation of the multiple-versions feature if it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general).
If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-)
Yes I'm against implementation defined features. However this is already the status quo for this particular implementation. Basically I'm worried we are trying to fix too much at once.
One of the major reasons for distutils/packaging failing was it tried to fix the world in one fell swoop. I see this same pattern starting to happen here. The problem is each solution has a bunch of corner cases and gotchas and the more things we try to fix at once the less eyes we'll have on each individual one and the more rushed the entire toolchain is going to be.
I think it's *really* important we limit the scope of what we fix at any one time. Right now we have PEP426, PEP440, PEP439, PEP427, Nick is talking about an Sdist 2.0 PEP, Daniel just posted another PEP I haven't looked at yet, this is going to be another PEP. On top of that we have a number of issues related to those PEPs but not specifically part of those PEPs.
A lot of things are being done right now and I personally am having trouble keeping up and keeping things straight. I know I'm not the only one because I've had a number of participants of these discussions privately tell me that they aren't sure how I'm keeping up (and I'm struggling to do so). I really don't want us to ship a bunch of half baked / not entirely thought through solutions.
So can we please limit our scope? Let's start by fixing the stuff we have now, punting on fixing some other problems by using the existing tooling and then let's come back to the things we've punted once we've closed the loop on some of these other outstanding things and fix them better.
I feel your pain. We might as well allow happy setuptools users to continue using setuptools. I don't care about making a pkg_resources console_scripts handler that does the same thing because we can just use the existing one. The more important contribution is to provide an alternative for people who are not happy setuptools users.
On Jul 18, 2013, at 8:33 PM, Daniel Holth <dholth@gmail.com> wrote:
We might as well allow happy setuptools users to continue using setuptools. I don't care about making a pkg_resources console_scripts handler that does the same thing because we can just use the existing one. The more important contribution is to provide an alternative for people who are not happy setuptools users.
I generally agree with this :) I just think that we need to close the loop on our current efforts before adding more things into the fray. The only major change to the ecosystem we've made so far that has actually *shipped* to end users is the distribute/setuptools merge and that's causing a lot of pain to people. Soon we'll at least have a pip version with prelim wheel support but I don't even know if it supports metadata 2.0 at all or not yet? I think there's a pre-release of wheel that does though? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Thu, Jul 18, 2013 at 8:36 PM, Donald Stufft <donald@stufft.io> wrote:
On Jul 18, 2013, at 8:33 PM, Daniel Holth <dholth@gmail.com> wrote:
We might as well allow happy setuptools users to continue using setuptools. I don't care about making a pkg_resources console_scripts handler that does the same thing because we can just use the existing one. The more important contribution is to provide an alternative for people who are not happy setuptools users.
I generally agree with this :) I just think that we need to close the loop on our current efforts before adding more things into the fray. The only major change to the eco system we've made so far that has actually *shipped* to end users is the distribute/setuptools merge and that's causing a lot of pain to people.
Soon we'll at least have a pip version with prelim wheel support but I don't even know if it supports metadata 2.0 at all or not yet? I think there's a pre-release of wheel that does though?
bdist_wheel will produce json metadata that generally conforms to the current PEP but no consumer takes advantage of it just yet. I added the "generator" key to the metadata so it would be easy to throw out outdated or buggy json metadata.
On Thu, Jul 18, 2013 at 8:33 PM, Daniel Holth <dholth@gmail.com> wrote:
On Thu, Jul 18, 2013 at 8:15 PM, Donald Stufft <donald@stufft.io> wrote:
On Jul 18, 2013, at 7:37 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their
world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
A PEP would allow standardisation of the multiple-versions feature if
it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general).
If it's not considered desirable and doesn't need support, then we only
need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-)
Yes, I'm against implementation-defined features. However, this is already existing functionality and the status quo for this particular implementation. Basically, I'm worried we are trying to fix too much at once.
One of the major reasons for distutils/packaging failing was that it tried to fix the world in one fell swoop. I see this same pattern starting to happen here. The problem is each solution has a bunch of corner cases and gotchas, and the more things we try to fix at once, the fewer eyes we'll have on each individual one and the more rushed the entire toolchain is going to be.
I think it's *really* important we limit the scope of what we fix at any
one time. Right now we have PEP 426, PEP 440, PEP 439 and PEP 427, Nick is talking about an sdist 2.0 PEP, Daniel just posted another PEP I haven't looked at yet, and this is going to be another PEP. On top of that we have a number of issues related to those PEPs but not specifically part of those PEPs.
A lot of things are being done right now and I'm personally having trouble keeping up and keeping things straight. I know I'm not the only one, because I've had a number of participants in these discussions privately tell me that they aren't sure how I'm keeping up (and I'm struggling to do so). I really don't want us to ship a bunch of half-baked, not entirely thought-through solutions.
So can we please limit our scope? Let's start by fixing the stuff we
have now, punting on fixing some other problems by using the existing tooling and then let's come back to the things we've punted once we've closed the loop on some of these other outstanding things and fix them better.
I feel your pain.
We might as well allow happy setuptools users to continue using setuptools. I don't care about making a pkg_resources console_scripts handler that does the same thing because we can just use the existing one. The more important contribution is to provide an alternative for people who are not happy setuptools users.
Which is an argument, in my mind, to vendor setuptools over bundling (assuming people are using "bundling" as in "install setuptools next to pip or at least install a .pth file to access the vendored version"). Including pip with Python installers is blessing it as the installer, but if we include setuptools as well, that would also be blessing setuptools as *the* build tool. If people's preference for virtualenv over venv simply because they didn't want to install pip manually has shown us anything, it is that the lazy path is the used path. If the long-term plan is to bless setuptools then go for the bundling, but if that decision has not been made yet, then bundling may be premature if the bundling of pip with Python moves forward.
If the long-term plan is to bless setuptools then go for the bundling, but if that decision has not been made yet then bundling may be premature if the bundling of pip with Python moves forward.
Well, Nick has said that he thinks that "distlib is the future" (or, I assume, something like it - something that is based on PEPs and standardisation rather than a de facto implementation which a sizeable minority have problems with, though it's pragmatically acceptable for the majority).

If distlib or something like it (standards-based) is to be the future, we have to be very careful. As I've said to Nick in an off-list mail, that sort of future is only going to fly if sufficient safeguards are in place such that we don't have to have compatibility shims for the setuptools, pkg_resources and pip Python packages/APIs. Based on the actual work I did to replace pkg_resources with distlib in pip, it's not a thing I really want to do more of (or that anyone else should have to do).

So, ISTM that pkg_resources and setuptools would need to be subsumed into pip so that they weren't externally visible - perhaps they would move to the pip.vendor package. Otherwise, we might as well accept pkg_resources and setuptools into the stdlib - no matter how many ifs and buts we put in the fine print, that's what we'd essentially have - with apologies to Robert Frost and to borrow from what Brett said, "the lazy road is the one most travelled, and that makes all the difference".

Plus, there would need to be sufficient health warnings to indicate to people tempted to use these subsumed APIs or any pip API that they would be completely on their own as regards future-proofing. In my view this can't just be left up to the pip maintainers to decide on - it needs to be a condition set by python-dev, to apply if pip is shipped with Python. Otherwise, backward compatibility will tie our hands for ever (or at least, a very long time).

Regards, Vinay Sajip
On Jul 19, 2013, at 11:20 AM, Brett Cannon <brett@python.org> wrote:
Which is an argument, in my mind, to vendor setuptools over bundling (assuming people are using "bundling" as in "install setuptools next to pip or at least install a .pth file to access the vendored version"). Including pip with Python installers is blessing it as the installer, but if we include setuptools as well, that would also be blessing setuptools as *the* build tool. If people's preference for virtualenv over venv simply because they didn't want to install pip manually has shown us anything, it is that the lazy path is the used path.
I don't believe we want to bless setuptools in the long run, hence why I want to vendor setuptools under pip.vendor.*. I believe pkg_resources should be split out, and pip should just dynamically add it to the dependencies for anything that uses entry points. For its own uses, pip should not generate scripts that depend on anything that isn't included with pip itself. ----------------- Donald Stufft
On 20 July 2013 01:20, Brett Cannon <brett@python.org> wrote:
If the long-term plan is to bless setuptools then go for the bundling, but if that decision has not been made yet then bundling may be premature if the bundling of pip with Python moves forward.
PEP 426 is currently looking at blessing a subset of *setup.py* commands as an interim build system, without blessing any particular tool. At the moment, I don't list any required arguments for the individual commands, but I'm starting to think that needs to change. It's probably worth looking at the common subset currently supported by setuptools and d2to1, and figuring out which can be left out as "you need to know which build system the project is using and invoke them appropriately" and which we want to standardise.

Something else I see as potentially getting blessed is "assume setuptools" as a fallback option for projects that don't publish 2.0+ metadata (part of which will include providing a pre-generated dist-info directory in the sdist, as well as a way to indicate how to generate the metadata in a raw source tarball or VCS checkout).

That's why I'm OK with the idea of the pip team *only* supporting installing from wheels if setuptools isn't installed, and treating setuptools as an implicit install_requires dependency if it is necessary to install from a source distribution.

Resolving all of this formally is a ways down the todo list, though, and the problem of source-based (rather than wheel-based) integration is one of the big reasons I see nailing down the metadata 2.0 spec as a process that still has several months left to run rather than being "almost finished". At the moment I *don't* see a good projects-can-use-any-build-system-they-like story for the path from a Python project tarball to a built and published Fedora or RHEL RPM, and that concerns me (since making it practical to almost fully automate that chain is one of my goals).

If you had asked me a couple of months ago, I would have said I thought we could get away with deferring the answers to these questions (and PEP 426 is currently written that way), but I now think we're better off continuing with the setuptools-compatible metadata approach for the time being, and taking the time to get metadata 2.0 *right* for both binary and source distribution, rather than having to follow it up with a metadata 2.1 to fix the source distribution side of things.

Getting PEP 427 (wheel 1.0) approved reasonably quickly was necessary to provide a successor to eggs that pip was willing to adopt, but I no longer think there's the same urgency for the metadata 2.0 standard in PEP 426 (ever since Daniel realised that wheels could work just as well with setuptools-compatible metadata as they could with a new metadata standard).

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 19 July 2013 09:37, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
It's a real requirement - Linux distros need it to work around parallel installation of backwards incompatible libraries in the system Python. Yes, it's an implementation defined feature of pkg_resources (not setuptools per se), but it's one that works well enough even if the error message can be opaque and the configuration can get a little arcane :)
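(For anyone unfamiliar with the feature, a minimal sketch of the multi-version pattern, with a made-up distribution name: pkg_resources.require() puts a matching parallel-installed distribution on sys.path at runtime, which is how incompatible versions are selected.)

    import pkg_resources

    # Ask pkg_resources to put a specific parallel-installed version on
    # sys.path before importing it; raises VersionConflict or
    # DistributionNotFound if the request can't be satisfied.
    pkg_resources.require("Frobnicator==1.2")

    import frobnicator  # now resolves to the 1.2 distribution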
A PEP would allow standardisation of the multiple-versions feature it it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general).
If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-)
I like the idea of switching to zc.buildout style entry points - it makes it easier to get pip to a point where "no setuptools" means "can only install from wheel files" rather than "can't install anything" (that way pip can install setuptools from a wheel if it needs to build something else from source). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jul 19, 2013, at 12:23 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
I like the idea of switching to zc.buildout style entry points - it makes it easier to get pip to a point where "no setuptools" means "can only install from wheel files" rather than "can't install anything" (that way pip can install setuptools from a wheel if it needs to build something else from source).
I plan on making pip bundle setuptools regardless. To underline how important that is, it's been discovered (though we are still working out _why_) that pip 1.3.1 on Python 3.x+ is broken with setuptools 0.7+. Historically we haven't tested old versions of pip against new versions of setuptools (and with how quickly setuptools is releasing nowadays, that matrix is going to become very big very fast). Bundling setuptools makes things way more stable and alleviates a lot of long-term support headaches. Also, just to be specific: entry points don't require setuptools, they require pkg_resources, which currently is installed as part of setuptools but can likely be split out. ----------------- Donald Stufft
On Jul 19, 2013, at 12:39 AM, Donald Stufft <donald@stufft.io> wrote:
On Jul 19, 2013, at 12:23 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
I like the idea of switching to zc.buildout style entry points - it makes it easier to get pip to a point where "no setuptools" means "can only install from wheel files" rather than "can't install anything" (that way pip can install setuptools from a wheel if it needs to build something else from source).
I plan on making pip bundle setuptools regardless.
To underline how important that is, it's been discovered (though we are still working out _why_) that pip 1.3.1 on Python 3.x+ is broken with setuptools 0.7+. Historically we haven't tested old versions of pip against new versions of setuptools (and with how quickly setuptools is releasing nowadays, that matrix is going to become very big very fast).
Bundling setuptools makes things way more stable and alleviates a lot of long term support headaches.
Also, just to be specific: entry points don't require setuptools, they require pkg_resources, which currently is installed as part of setuptools but can likely be split out.
----------------- Donald Stufft
Just to expand a bit here: I think the only reason this worked at all historically is because setuptools hadn't changed much in the last few years, so there wasn't much chance for regression. ----------------- Donald Stufft
On 19 July 2013 05:23, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 19 July 2013 09:37, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
It's a real requirement - Linux distros need it to work around parallel installation of backwards incompatible libraries in the system Python. Yes, it's an implementation defined feature of pkg_resources (not setuptools per se), but it's one that works well enough even if the error message can be opaque and the configuration can get a little arcane :)
Just to be absolutely clear on my interest in this:

1. I believe (but cannot prove, so I'll accept others stating that I'm wrong) that many people using setuptools for the console-script entry point functionality have no specific interest in or requirement for multi-version. As an example, take pip itself. So while it is true that functionality will be lost, I do not believe that users will actually be affected in the majority of cases. That's not to say that just removing the functionality without asking is valid.

2. Projects typically do not declare a runtime dependency on setuptools just because they use script wrappers. Maybe they should, but they don't. Again, pip is an example. So wheel-based installs of such projects can break on systems without setuptools (pkg_resources). This is going to be a bigger problem in future, as pip install from wheels does not need setuptools to be installed on the target (and if we vendor setuptools in pip, nor does install from sdist). Of course, after the first time you hit this, you install setuptools and it's never a problem again. But it's a bad user experience.

3. It's an issue for pip itself, as we explicitly do not want a dependency on a system-installed setuptools. So we have to hack or replace the setuptools-generated wrappers.

Paul.
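(To illustrate points 2 and 3: this is roughly what a setuptools-generated console script wrapper of this era looks like - the version and names are illustrative. The unconditional pkg_resources import is the undeclared runtime dependency under discussion.)

    #!/usr/bin/python
    # EASY-INSTALL-ENTRY-SCRIPT: 'pip==1.3.1','console_scripts','pip'
    __requires__ = 'pip==1.3.1'
    import sys
    from pkg_resources import load_entry_point  # breaks without setuptools

    if __name__ == '__main__':
        sys.exit(
            load_entry_point('pip==1.3.1', 'console_scripts', 'pip')()
        )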
On Fri, Jul 19, 2013 at 5:29 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 19 July 2013 05:23, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 19 July 2013 09:37, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
It's a real requirement - Linux distros need it to work around parallel installation of backwards incompatible libraries in the system Python. Yes, it's an implementation defined feature of pkg_resources (not setuptools per se), but it's one that works well enough even if the error message can be opaque and the configuration can get a little arcane :)
Just to be absolutely clear on my interest in this:
1. I believe (but cannot prove, so I'll accept others stating that I'm wrong) that many people using setuptools for the console-script entry point functionality have no specific interest in or requirement for multi-version. As an example, take pip itself. So while it is true that functionality will be lost, I do not believe that users will actually be affected in the majority of cases. That's not to say that just removing the functionality without asking is valid.
2. Projects typically do not declare a runtime dependency on setuptools just because they use script wrappers. Maybe they should, but they don't. Again, pip is an example. So wheel-based installs of such projects can break on systems without setuptools (pkg_resources). This is going to be a bigger problem in future, as pip install from wheels does not need setuptools to be installed on the target (and if we vendor setuptools in pip, nor does install from sdist). Of course, after the first time you hit this, you install setuptools and it's never a problem again. But it's a bad user experience.
3. It's an issue for pip itself, as we explicitly do not want a dependency on a system installed setuptools. So we have to hack or replace the setuptools-generated wrappers.
Paul.
pip should just add pkg_resources as a dependency for any package that has console_scripts entry points.
On 19 July 2013 19:29, Paul Moore <p.f.moore@gmail.com> wrote:
On 19 July 2013 05:23, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 19 July 2013 09:37, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I think the point is that people might be dependent on this functionality and
changing it out from underneath them could break their world.
I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
It's a real requirement - Linux distros need it to work around parallel installation of backwards incompatible libraries in the system Python. Yes, it's an implementation defined feature of pkg_resources (not setuptools per se), but it's one that works well enough even if the error message can be opaque and the configuration can get a little arcane :)
Just to be absolutely clear on my interest in this:
1. I believe (but cannot prove, so I'll accept others stating that I'm wrong) that many people using setuptools for the console-script entry point functionality have no specific interest in or requirement for multi-version. As an example, take pip itself. So while it is true that functionality will be lost, I do not believe that users will actually be affected in the majority of cases. That's not to say that just removing the functionality without asking is valid.
I was going to say it would affect Linux distro packagers (since the multi-version support is necessary for us to hack together something vaguely resembling parallel install support for Python libraries that make backwards incompatible changes), but then I remembered that at least Fedora & RHEL SRPMs generally call setup.py directly in the build phase (with setuptools as a build dependency). This means that what pip chooses when installing from source or a wheel won't actually affect distro packaging (since I assume other distros are doing something at least vaguely similar to what we do). With our widely deployed (but still highly specialised) use case out of the picture, I think you're probably right.
2. Projects typically do not declare a runtime dependency on setuptools just because they use script wrappers. Maybe they should, but they don't. Again, pip is an example. So wheel-based installs of such projects can break on systems without setuptools (pkg_resources). This is going to be a bigger problem in future, as pip install from wheels does not need setuptools to be installed on the target (and if we vendor setuptools in pip, nor does install from sdist). Of course, after the first time you hit this, you install setuptools and it's never a problem again. But it's a bad user experience.
3. It's an issue for pip itself, as we explicitly do not want a dependency on a system installed setuptools. So we have to hack or replace the setuptools-generated wrappers.
Right, I think the reasonable near-term solutions are for pip to either:

1. generate zc.buildout style wrappers with absolute paths to avoid the implied runtime dependency
2. interpret use of script entry points as an implied dependency on setuptools and install it even if not otherwise requested

Either way, pip would need to do something about its *own* command line script, which heavily favours option 1.

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
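(For comparison with the wrapper shown earlier, a rough sketch of a zc.buildout style script along the lines of option 1, with made-up paths and names: everything is resolved at install time, so nothing is imported from pkg_resources at runtime.)

    #!/usr/bin/python
    import sys

    # Absolute paths baked in by the installer, instead of runtime
    # resolution via pkg_resources:
    sys.path[0:0] = [
        '/srv/app/eggs/mytool-1.0-py2.7.egg',
        '/srv/app/eggs/somedep-2.3-py2.7.egg',
    ]

    import mytool.cli

    if __name__ == '__main__':
        sys.exit(mytool.cli.main())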
On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Right, I think the reasonable near term solutions are for pip to either:
1. generate zc.buildout style wrappers with absolute paths to avoid the implied runtime dependency
2. interpret use of script entry points as an implied dependency on setuptools and install it even if not otherwise requested
Either way, pip would need to do something about its *own* command line script, which heavily favours option 1
Option 1 also would address some or all of the startup performance complaint.

It occurs to me that it might actually be a good idea *not* to put the script wrappers in the standard entry points file, even if that's what setuptools does right now: if lots of packages use that approach, it'll slow down the effective indexing for code that's scanning multiple packages for something like a sqlalchemy adapter.

(Alternately, we could use something like 'exports-some.group.name.json', so that each export group is a separate file; this would keep scripts separate from everything else, and optimize plugin searches falling in a particular group. In fact, the files needn't have any contents; it'd be okay to just parse the main .json for any distribution that has exports in the group you're looking for. I.e., the real purpose of the separation of entry points was always just to avoid loading metadata for distributions that don't have the kind of exports you're looking for. In the old world, few distributions exported anything, so just identifying whether a distribution had exports was sufficient. In the new world, more and more distributions over time will have some kind of export, so knowing *which* exports they have will become more important.)
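(A rough sketch of how a plugin scanner might exploit that layout; all file and directory names here are hypothetical, following the naming idea above.)

    import json
    import os

    def distributions_exporting(group, site_dir):
        # Yield parsed metadata only for distributions exporting `group`.
        for entry in os.listdir(site_dir):
            if not entry.endswith('.dist-info'):
                continue
            # The per-group marker file can be empty: its existence is
            # the whole test, so no parsing happens for non-matches.
            marker = os.path.join(site_dir, entry,
                                  'exports-%s.json' % group)
            if os.path.exists(marker):
                with open(os.path.join(site_dir, entry,
                                       'pydist.json')) as f:
                    yield json.load(f)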
On 20 July 2013 01:47, PJ Eby <pje@telecommunity.com> wrote:
On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Right, I think the reasonable near term solutions are for pip to either:
1. generate zc.buildout style wrappers with absolute paths to avoid the implied runtime dependency
2. interpret use of script entry points as an implied dependency on setuptools and install it even if not otherwise requested
Either way, pip would need to do something about its *own* command line script, which heavily favours option 1
Option 1 also would address some or all of the startup performance complaint.
It occurs to me that it might actually be a good idea *not* to put the script wrappers in the standard entry points file, even if that's what setuptools does right now: if lots of packages use that approach, it'll slow down the effective indexing for code that's scanning multiple packages for something like a sqlalchemy adapter.
(Alternately, we could use something like 'exports-some.group.name.json' so that each export group is a separate file; this would keep scripts separate from everything else, and optimize plugin searches falling in a particular group. In fact, the files needn't have any contents; it'd be okay to just parse the main .json for any distribution that has exports in the group you're looking for. i.e., the real purpose of the separation of entry points was always just to avoid loading metadata for distributions that don't have the kind of exports you're looking for. In the old world, few distributions exported anything, so just identifying whether a distribution had exports was sufficient. In the new world, more and more distributions over time will have some kind of export, so knowing *which* exports they have will become more important.)
A not-so-quick sketch of my current thinking:

Two new fields in PEP 426: commands and exports.

Like the core dependency metadata, both get generated files: pydist-commands.json and pydist-exports.json.

(As far as the performance concern goes, I think longer term we'll probably move to a richer installation database format that includes an SQLite cache file managed by the installers. But near term, I like the idea of being able to check "has commands or not" and "has exports or not" with a single stat call for the appropriate file.)

Rather than using the "module.name:qualified.name" format (as the PEP currently does for the install_hooks), "export specifiers" would be defined as a mapping with the following subfields:

* module
* qualname (as per PEP 3155)
* extra

Both qualname and extra would be optional. "extra" indicates that the export is only present if that extra is installed.

The top level commands field would have three subfields: "wrap_console", "wrap_gui" and "prebuilt". The wrap_console and wrap_gui subfields would both be maps of command names to export specifiers (i.e. requests for an installer to generate the appropriate wrappers), while prebuilt would be a mapping of command names to paths relative to the scripts directory (as strings).

Note that given that Python 2.7+ and 3.2+ can execute packages with a __main__ submodule, the export specifier for a command entry *may* just be the module component and it should still work.

The exports field is just a rebranded and slightly rearranged entry_points structure: the top level keys in the hash map are "export groups" (defined in the same way as metadata extensions are defined) and the individual entries in each export group are arbitrary keys (meaning determined by the export group) mapping to export specifiers.

With this change, I may even move the current top level "install_hooks" field inside the "exports" field. Even if it stays at the top level, the values will become export specifiers rather than using the entry points string format.

Not sure when I'll get that tidied up and incorporated into a new draft of PEP 426, but I think it covers everything.

For those wondering about my dividing line between "custom string format" and "structured data": the custom string formats in PEP 426 should be limited to things that are likely to be passed as command line arguments (like requirement specifiers and their assorted components), or those where using structured data would be extraordinarily verbose (like environment markers). If I have any custom string formats still in there that don't fit either of those categories, then let me know and I'll see if I can replace them with structured data.

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
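(To make the shape of that concrete, here is what a pydist-commands.json might contain under this sketch. The field names follow the proposal above; the command and module names are invented, and it is shown as a Python literal purely for illustration.)

    # On disk this would be the JSON contents of pydist-commands.json.
    commands = {
        "wrap_console": {
            # Full export specifier: module plus qualname
            "mytool": {"module": "mytool.cli", "qualname": "main"},
            # Module-only specifier: relies on mytool/__main__.py
            "mytool-run": {"module": "mytool"},
            # Wrapper tied to the "cli" extra
            "mytool-admin": {"module": "mytool.admin",
                             "qualname": "main", "extra": "cli"},
        },
        "wrap_gui": {},
        "prebuilt": {
            # Path relative to the scripts directory
            "mytool-helper": "helpers/mytool-helper",
        },
    }

And the cheap "has commands or not" test reduces to a single existence check:

    import os.path

    distinfo_dir = '/path/to/example-1.0.dist-info'  # hypothetical
    has_commands = os.path.exists(
        os.path.join(distinfo_dir, 'pydist-commands.json'))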
On Sat, Jul 20, 2013 at 2:10 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 20 July 2013 01:47, PJ Eby <pje@telecommunity.com> wrote:
On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Right, I think the reasonable near term solutions are for pip to either:
1. generate zc.buildout style wrappers with absolute paths to avoid the implied runtime dependency
2. interpret use of script entry points as an implied dependency on setuptools and install it even if not otherwise requested
Either way, pip would need to do something about its *own* command line script, which heavily favours option 1
Option 1 also would address some or all of the startup performance complaint.
It occurs to me that it might actually be a good idea *not* to put the script wrappers in the standard entry points file, even if that's what setuptools does right now: if lots of packages use that approach, it'll slow down the effective indexing for code that's scanning multiple packages for something like a sqlalchemy adapter.
(Alternately, we could use something like 'exports-some.group.name.json' so that each export group is a separate file; this would keep scripts separate from everything else, and optimize plugin searches falling in a particular group. In fact, the files needn't have any contents; it'd be okay to just parse the main .json for any distribution that has exports in the group you're looking for. i.e., the real purpose of the separation of entry points was always just to avoid loading metadata for distributions that don't have the kind of exports you're looking for. In the old world, few distributions exported anything, so just identifying whether a distribution had exports was sufficient. In the new world, more and more distributions over time will have some kind of export, so knowing *which* exports they have will become more important.)
A not-so-quick sketch of my current thinking:
Two new fields in PEP 426: commands and exports
Like the core dependency metadata, both get generated files: pydist-commands.json and pydist-exports.json
(As far as the performance concern goes, I think longer term we'll probably move to a richer installation database format that includes an SQLite cache file managed by the installers. But near term, I like the idea of being able to check "has commands or not" and "has exports or not" with a single stat call for the appropriate file)
Rather than using the "module.name:qualified.name" format (as the PEP currently does for the install_hooks), "export specifiers" would be defined as a mapping with the following subfields:
* module * qualname (as per PEP 3155) * extra
Both qualname and extra would be optional. "extra" indicates that the export is only present if that extra is installed.
The top level commands field would have three subfields: "wrap_console", "wrap_gui" and "prebuilt". The wrap_console and wrap_gui subfields would both be maps of command names to export specifiers (i.e. requests for an installer to generate the appropriate wrappers), while prebuilt would be a mapping of command names to paths relative to the scripts directory (as strings).
Note that given that Python 2.7+ and 3.2+ can execute packages with a __main__ submodule, the export specifier for a command entry *may* just be the module component and it should still work.
The exports field is just a rebranded and slightly rearranged entry_points structure: the top level keys in the hash map are "export groups" (defined in the same way as metadata extensions are defined) and the individual entries in each export group are arbitrary keys (meaning determined by the export group) mapping to export specifiers.
With this change, I may even move the current top level "install_hooks" field inside the "exports" field. Even if it stays at the top level, the values will become export specifiers rather than using the entry points string format.
Not sure when I'll get that tidied up and incorporated into a new draft of PEP 426, but I think it covers everything.
For those wondering about my dividing line between "custom string format" and "structured data": the custom string formats in PEP 426 should be limited to things that are likely to be passed as command line arguments (like requirement specifiers and their assorted components), or those where using structured data would be extraordinarily verbose (like environment markers). If I have any custom string formats still in there that don't fit either of those categories, then let me know and I'll see if I can replace them with structured data.
Cheers, Nick.
It may be worth mentioning that I am not aware of any package that uses the "entry point requires extra" feature. IIUC pkg_resources doesn't just check whether something's installed but attempts to add the requirements of the entry point's distribution and any requested extras to sys.path as part of resolution.
On 21 Jul 2013 04:43, "Daniel Holth" <dholth@gmail.com> wrote:
On Sat, Jul 20, 2013 at 2:10 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 20 July 2013 01:47, PJ Eby <pje@telecommunity.com> wrote:
On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Right, I think the reasonable near term solutions are for pip to either:
1. generate zc.buildout style wrappers with absolute paths to avoid the implied runtime dependency
2. interpret use of script entry points as an implied dependency on setuptools and install it even if not otherwise requested
Either way, pip would need to do something about its *own* command line script, which heavily favours option 1
Option 1 also would address some or all of the startup performance complaint.
It occurs to me that it might actually be a good idea *not* to put the script wrappers in the standard entry points file, even if that's what setuptools does right now: if lots of packages use that approach, it'll slow down the effective indexing for code that's scanning multiple packages for something like a sqlalchemy adapter.
(Alternately, we could use something like 'exports-some.group.name.json' so that each export group is a separate file; this would keep scripts separate from everything else, and optimize plugin searches falling in a particular group. In fact, the files needn't have any contents; it'd be okay to just parse the main .json for any distribution that has exports in the group you're looking for. i.e., the real purpose of the separation of entry points was always just to avoid loading metadata for distributions that don't have the kind of exports you're looking for. In the old world, few distributions exported anything, so just identifying whether a distribution had exports was sufficient. In the new world, more and more distributions over time will have some kind of export, so knowing *which* exports they have will become more important.)
A not-so-quick sketch of my current thinking:
Two new fields in PEP 426: commands and exports
Like the core dependency metadata, both get generated files: pydist-commands.json and pydist-exports.json
(As far as the performance concern goes, I think longer term we'll probably move to a richer installation database format that includes an SQLite cache file managed by the installers. But near term, I like the idea of being able to check "has commands or not" and "has exports or not" with a single stat call for the appropriate file)
Rather than using the "module.name:qualified.name" format (as the PEP currently does for the install_hooks), "export specifiers" would be defined as a mapping with the following subfields:
* module * qualname (as per PEP 3155) * extra
Both qualname and extra would be optional. "extra" indicates that the export is only present if that extra is installed.
The top level commands field would have three subfields: "wrap_console", "wrap_gui" and "prebuilt". The wrap_console and wrap_gui subfields would both be maps of command names to export specifiers (i.e. requests for an installer to generate the appropriate wrappers), while prebuilt would be a mapping of command names to paths relative to the scripts directory (as strings).
Note that given that Python 2.7+ and 3.2+ can execute packages with a __main__ submodule, the export specifier for a command entry *may* just be the module component and it should still work.
The exports field is just a rebranded and slightly rearranged entry_points structure: the top level keys in the hash map are "export groups" (defined in the same way as metadata extensions are defined) and the individual entries in each export group are arbitrary keys (meaning determined by the export group) mapping to export specifiers.
With this change, I may even move the current top level "install_hooks" field inside the "exports" field. Even if it stays at the top level, the values will become export specifiers rather than using the entry points string format.
Not sure when I'll get that tidied up and incorporated into a new draft of PEP 426, but I think it covers everything.
For those wondering about my dividing line between "custom string format" and "structured data": the custom string formats in PEP 426 should be limited to things that are likely to be passed as command line arguments (like requirement specifiers and their assorted components), or those where using structured data would be extraordinarily verbose (like environment markers). If I have any custom string formats still in there that don't fit either of those categories, then let me know and I'll see if I can replace them with structured data.
Cheers, Nick.
It may be worth mentioning that I am not aware of any package that uses the "entry point requires extra" feature.
IIUC pkg_resources doesn't just check whether something's installed but attempts to add the requirements of the entry point's distribution and any requested extras to sys.path as part of resolution.
I see it as more useful for making an executable optional by defining a "cli" extra. If your project just gets installed as a dependency, no wrapper would get generated. Only if you went "pip install myproject[cli]" (or another project specifically depended on the cli extra) would it be installed. Cheers, Nick.
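(In today's setuptools syntax, the nearest equivalent is an entry point annotated with an extra - a sketch with invented names. Note that in current setuptools the extra affects dependency resolution when the entry point is loaded, rather than gating wrapper generation; the latter is the semantic change being discussed here.)

    from setuptools import setup

    setup(
        name='myproject',
        version='1.0',
        packages=['myproject'],
        # Optional deps pulled in via "pip install myproject[cli]"
        extras_require={'cli': ['click']},
        entry_points={
            'console_scripts': [
                # The trailing [cli] ties this entry point to the extra
                'myproject = myproject.cli:main [cli]',
            ],
        },
    )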
On Sat, Jul 20, 2013 at 8:08 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
I see it as more useful for making an executable optional by defining a "cli" extra. If your project just gets installed as a dependency, no wrapper would get generated.
Only if you went "pip install myproject[cli]" (or another project specifically depended on the cli extra) would it be installed.
Why stop there... how about environment markers for exports, too? ;-) And throw in an environment marker syntax for whether something was installed as a dependency or explicitly... ;-) (Btw, the above is a change from setuptools semantics, but I don't really see it as a problem; ISTM unlikely that anybody has used extras on a script wrapper. Extras on *other* entry points, however, *do* exist, at least IIRC. I'm pretty sure there was at least one concrete use case for them involving Chandler plugins when I originally implemented the feature. The possibility of having extras on a script is just a side effect, though, not an actually-intended feature; if you have the need, it actually makes more sense to just bundle the script in another package and require that package from the extra, rather than putting it in the original package.)
On 21 July 2013 11:53, PJ Eby <pje@telecommunity.com> wrote:
On Sat, Jul 20, 2013 at 8:08 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
I see it as more useful for making an executable optional by defining a "cli" extra. If your project just gets installed as a dependency, no wrapper would get generated.
Only if you went "pip install myproject[cli]" (or another project specifically depended on the cli extra) would it be installed.
Why stop there... how about environment markers for exports, too? ;-) And throw in an environment marker syntax for whether something was installed as a dependency or explicitly... ;-)
I actually did think about various ideas along those lines (when pondering how build dependencies would work in practice), but realised that install time checks for that kind of thing would be problematic (since the dependencies for an extra might be present anyway, so why require that you explicitly request the extra *as well*?).
(Btw, the above is a change from setuptools semantics, but I don't really see it as a problem; ISTM unlikely that anybody has used extras on a script wrapper. Extras on *other* entry points, however, *do* exist, at least IIRC. I'm pretty sure there was at least one concrete use case for them involving Chandler plugins when I originally implemented the feature. The possibility of having extras on a script is just a side effect, though, not an actually-intended feature; if you have the need, it actually makes more sense to just bundle the script in another package and require that package from the extra, rather than putting it in the original package.)
Ah, interesting! And thinking about it further, I believe any kind of "partial installation" of the *package itself* is a bad idea. Extras should just be a way to ask "are these optional dependencies present on this system?", without needing to worry about how they got there. For now, I'll switch export specifiers back to the concise "modulename:qualname" entry point format and add "Do we need to support the exported-only-if-extra-is-available feature?" as an open question. My current thinking is that the point you made about script wrappers (putting the wrapper in a separate distribution and depending on that from an extra) applies to other plugins as well. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sat, Jul 20, 2013 at 10:54 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Extras should just be a way to ask "are these optional dependencies present on this system?", without needing to worry about how they got there.
Technically, they are a way to ask "can you get this for me?", since pkg_resources' API allows you to specify an installer callback when you ask to load an entry point. This means that an installer tool can dynamically obtain any extras it needs, not just check for their installation. To put it another way, it's not "exported only if extra is available", it's "exported, but make sure you have this first." A subtle difference, but important to the original use cases (see below).
For now, I'll switch export specifiers back to the concise "modulename:qualname" entry point format and add "Do we need to support the exported-only-if-extra-is-available feature?" as an open question. My current thinking is that the point you made about script wrappers (putting the wrapper in a separate distribution and depending on that from an extra) applies to other plugins as well.
Now that I'm thinking about it some more, one of the motivating use cases for extras in entry points was startup performance in plugin-heavy GUI applications like Chandler. The use of extras allows for late-loading of additions to sys.path. IOW, it's intended more for a situation where not only are the entry points imported late, but you also want as few plugins as possible on sys.path to start with, in order to have fast startup.

The other use case is similar, in that a plugin-heavy environment with self-upgrading abilities can defer *installation* of parts of a plug-in until it is actually used. (Which is why EntryPoint objects have a .require() method separate from .load() - you can loop over a relevant set of entry points to pre-test or pre-ensure that they're all available and dependencies are installed before importing any of them, even though .load() will also do that for a single entry point.)

For the specific case of the meta build system itself, these use cases may be moot. For the overall use of exports, however, the use cases are still valuable for plugin-heavy apps. (Specifically, applications that use lots of plugins written by different people, and don't want to have to import everything at startup.)

Indeed, this is the original use case for exports in the first place: it's a plugin system that doesn't require importing any plugins until you actually need a particular plugin's functionality. Extras just expand that slightly to "don't require installing things or putting them on sys.path until you need their functionality".

Heck, if pip itself were split into two distributions, one of which were a command line script declared with an extra, pointing into the second distribution, it'd have dynamic bootstrapping. (Were it not for the part where it would need pip available to *do* the bootstrapping, of course. ;-) )
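(For reference, the pkg_resources pattern described above - pre-resolving everything in a group before importing any of it, optionally with an installer callback that can fetch missing dependencies. The group name is made up.)

    import pkg_resources

    def load_plugins(group='myapp.plugins', installer=None):
        eps = list(pkg_resources.iter_entry_points(group))
        for ep in eps:
            # Resolves the entry point's distribution plus any extras it
            # names; `installer` may be invoked to obtain missing dists.
            ep.require(installer=installer)
        # Only import once everything is known to be satisfiable
        return dict((ep.name, ep.load()) for ep in eps)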
On 21 July 2013 16:46, PJ Eby <pje@telecommunity.com> wrote:
Now that I'm thinking about it some more, one of the motivating use cases for extras in entry points was startup performance in plugin-heavy GUI applications like Chandler. The use of extras allows for late-loading of additions to sys.path. IOW, it's intended more for a situation where not only are the entry points imported late, but you also want as few plugins as possible on sys.path to start with, in order to have fast startup.
This type of complexity is completely outside of my experience. So I'm going to have to defer to people who understand the relevant scenarios to assess any proposed solutions. But could I make a general plea for an element of "keep the simple cases simple" in both the PEP and the implementations, here? I think it's critical that we make sure that the 99% of users[1] who want to do nothing more than bundle up an app with a few dependencies can both understand the mechanisms for doing so, and can use them straight out of the box. Paul [1] Yes, that number is made up - but to put it into context, I don't believe I've ever used a distribution from PyPI with entry points depending on extras. In fact, the only case I know of where I've seen extras in *any* context is in wheel, and I've never used them even there.
On Sun, Jul 21, 2013 at 12:10 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 21 July 2013 16:46, PJ Eby <pje@telecommunity.com> wrote:
Now that I'm thinking about it some more, one of the motivating use cases for extras in entry points was startup performance in plugin-heavy GUI applications like Chandler. The use of extras allows for late-loading of additions to sys.path. IOW, it's intended more for a situation where not only are the entry points imported late, but you also want as few plugins as possible on sys.path to start with, in order to have fast startup.
This type of complexity is completely outside of my experience. So I'm going to have to defer to people who understand the relevant scenarios to assess any proposed solutions.
But could I make a general plea for an element of "keep the simple cases simple" in both the PEP and the implementations, here? I think it's critical that we make sure that the 99% of users[1] who want to do nothing more than bundle up an app with a few dependencies can both understand the mechanisms for doing so, and can use them straight out of the box.
Paul
[1] Yes, that number is made up - but to put it into context, I don't believe I've ever used a distribution from PyPI with entry points depending on extras. In fact, the only case I know of where I've seen extras in *any* context is in wheel, and I've never used them even there.
The extras system is simple and, more importantly, ubiquitous, with tens of thousands of releases taking advantage of it. The proposed system of also having separate kinds of build and test dependencies with their own extras hasn't been demonstrated. Entry points having extras is an extension of entry points having simple distribution-level dependencies: "entry point depends on beaglevote" -> "entry point depends on beaglevote[doghouse]". I can see how perhaps in the setuptools case it may have been more straightforward to include extras rather than to exclude them. Someone else may want to check again, but the last time I checked all PyPI-hosted distributions for "entry points depending on extras", I found none. It would be pretty safe to leave this particular feature out.
On 22 Jul 2013 01:46, "PJ Eby" <pje@telecommunity.com> wrote:
On Sat, Jul 20, 2013 at 10:54 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Extras should just be a way to ask "are these optional dependencies present on this system?", without needing to worry about how they got there.
Technically, they are a way to ask "can you get this for me?", since pkg_resources' API allows you to specify an installer callback when you ask to load an entry point. This means that an installer tool can dynamically obtain any extras it needs, not just check for their installation.
To put it another way, it's not "exported only if extra is available", it's "exported, but make sure you have this first." A subtle difference, but important to the original use cases (see below).
Ah, yes, I see the distinction (and it does make this notion conceptually simpler).
For now, I'll switch export specifiers back to the concise "modulename:qualname" entry point format and add "Do we need to support the exported-only-if-extra-is-available feature?" as an open question. My current thinking is that the point you made about script wrappers (putting the wrapper in separate distribution and depending on that from an extra) applies to other plugins as well.
Now that I'm thinking about it some more, one of the motivating use cases for extras in entry points was startup performance in plugin-heavy GUI applications like Chandler. The use of extras allows for late-loading of additions to sys.path. IOW, it's intended more for a situation where not only are the entry points imported late, but you also want as few plugins as possible on sys.path to start with, in order to have fast startup.
I'm working with Eric Snow on a scheme that I hope will allow module-specific path entries that aren't processed at interpreter startup and never get added to sys.path at all (even if you import the module). Assuming we can get it to work the way I hope (which is still a "maybe" at this point in time), it should then be possible to backport it to earlier versions as a metaimporter.
The other use case is similar, in that a plugin-heavy environment with self-upgrading abilities can defer *installation* of parts of a plug-in until it is actually used. (Which is why EntryPoint objects have a .require() method separate from .load() - you can loop over a relevant set of entry points to pre-test or pre-ensure that they're all available and dependencies are installed before importing any of them, even though .load() will also do that for a single entry point.)
OK, so as Daniel suggested, it's more like an export/entry-point specific "requires" field, but limited to the extras of the current distribution.
For the specific case of the meta build system itself, these use cases may be moot. For the overall use of exports, however, the use cases are still valuable for plugin-heavy apps. (Specifically, applications that use lots of plugins written by different people, and don't want to have to import everything at startup.)
Indeed, this is the original use case for exports in the first place: it's a plugin system that doesn't require importing any plugins until you actually need a particular plugin's functionality. Extras just expand that slightly to "don't require installing things or putting them on sys.path until you need their functionality".
OK, I understand the use case now. If I can come up with a relatively simple way to explain it, I'll keep it in the proposed metadata, otherwise I'll leave it to metadata extensions to handle the more sophisticated version where an export depends on an extra. Cheers, Nick.
Heck, if pip itself were split into two distributions, one of which were a command line script declared with an extra, pointing into the second distribution, it'd have dynamic bootstrapping. (Were it not for the part where it would need pip available to *do* the bootstrapping, of course. ;-) )
On Sun, Jul 21, 2013 at 6:44 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 22 Jul 2013 01:46, "PJ Eby" <pje@telecommunity.com> wrote:
Now that I'm thinking about it some more, one of the motivating use cases for extras in entry points was startup performance in plugin-heavy GUI applications like Chandler. The use of extras allows for late-loading of additions to sys.path. IOW, it's intended more for a situation where not only are the entry points imported late, but you also want as few plugins as possible on sys.path to start with, in order to have fast startup.
I'm working with Eric Snow on a scheme that I hope will allow module-specific path entries that aren't processed at interpreter startup and never get added to sys.path at all (even if you import the module). Assuming we can get it to work the way I hope (which is still a "maybe" at this point in time), it should then be possible to backport it to earlier versions as a metaimporter.
I haven't had a chance to look at that proposal at more than surface depth, but my immediate concern with it is that it seems to be at the wrong level of abstraction for the packaging system, i.e., just because you can import a module, doesn't mean you can get at its project metadata (e.g., how would you find its exports, or even know what distribution it belonged to?). (Also, I don't actually see how it would be useful or relevant to the use case we're talking about; it seems maybe orthogonal at best.)
OK, so as Daniel suggested, it's more like an export/entry-point specific "requires" field, but limited to the extras of the current distribution.
Correct: at the time, it seemed a lot simpler to me than supporting arbitrary requirements, and allows for more DRY, since entry points might share some requirements.
On 22 Jul 2013 13:26, "PJ Eby" <pje@telecommunity.com> wrote:
On Sun, Jul 21, 2013 at 6:44 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 22 Jul 2013 01:46, "PJ Eby" <pje@telecommunity.com> wrote:
Now that I'm thinking about it some more, one of the motivating use cases for extras in entry points was startup performance in plugin-heavy GUI applications like Chandler. The use of extras allows for late-loading of additions to sys.path. IOW, it's intended more for a situation where not only are the entry points imported late, but you also want as few plugins as possible on sys.path to start with, in order to have fast startup.
I'm working with Eric Snow on a scheme that I hope will allow module-specific path entries that aren't processed at interpreter startup and never get added to sys.path at all (even if you import the module). Assuming we can get it to work the way I hope (which is still a "maybe" at this point in time), it should then be possible to backport it to earlier versions as a metaimporter.
I haven't had a chance to look at that proposal at more than surface depth, but my immediate concern with it is that it seems to be at the wrong level of abstraction for the packaging system, i.e., just because you can import a module, doesn't mean you can get at its project metadata (e.g., how would you find its exports, or even know what distribution it belonged to?).
(Also, I don't actually see how it would be useful or relevant to the use case we're talking about; it seems maybe orthogonal at best.)
The file format involved in that proposal was deliberately designed so you could also use it to look for PEP 376 dist-info directories. However, you're right, I forgot about the distribution-name-may-not-equal-package-name problem, so that aspect is completely broken in the current proto-PEP :( Cheers, Nick.
OK, so as Daniel suggested, it's more like an export/entry-point specific "requires" field, but limited to the extras of the current distribution.
Correct: at the time, it seemed a lot simpler to me than supporting arbitrary requirements, and allows for more DRY, since entry points might share some requirements.
Nick Coghlan <ncoghlan <at> gmail.com> writes:
On 22 Jul 2013 01:46, "PJ Eby" <pje <at> telecommunity.com> wrote:
To put it another way, it's not "exported only if extra is available", it's "exported, but make sure you have this first." A subtle difference, but important to the original use cases (see below). Ah, yes, I see the distinction (and it does make this notion conceptually simpler).
Does "make sure you have this first" mean "install this if it's not present" or "raise an exception if it's not present"? AFAICT PEP 376 does not consider extras at all, and so does not have any standard way to store which extras a distribution was installed with. So what's the standard way of testing if "extra is available"? Regards, Vinay Sajip
On 22 Jul 2013 19:12, "Vinay Sajip" <vinay_sajip@yahoo.co.uk> wrote:
Nick Coghlan <ncoghlan <at> gmail.com> writes:
On 22 Jul 2013 01:46, "PJ Eby" <pje <at> telecommunity.com> wrote:
To put it another way, it's not "exported only if extra is available", it's "exported, but make sure you have this first." A subtle difference, but important to the original use cases (see below). Ah, yes, I see the distinction (and it does make this notion conceptually simpler).
Does "make sure you have this first" mean "install this if it's not present" or "raise an exception if it's not present"? AFAICT PEP 376 does not consider extras at all, and so does not have any standard way to store which extras a distribution was installed with. So what's the standard way of testing if "extra is available"?
Check if all the dependencies associated with that extra are present. That was my observation earlier: since extras aren't really a thing in their own right (they're just a shorthand for referring to an additional set of dependencies) allowing script wrapper generation to depend on an extra is likely a bad idea, since it may lead to a partially installed package. Since the check in pkg_resources is callback based, it's really up to the application looking for entry points to decide what unmet dependencies mean. Cheers, Nick.
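To make that check concrete, here is a minimal sketch using pkg_resources (the helper name is invented; get_distribution, requires, and the exception types are real pkg_resources API):

    import pkg_resources

    def extra_is_available(dist_name, extra):
        # An extra is "available" if every requirement it implies
        # resolves to an installed, version-compatible distribution.
        dist = pkg_resources.get_distribution(dist_name)
        for req in dist.requires(extras=(extra,)):
            try:
                pkg_resources.get_distribution(req)
            except (pkg_resources.DistributionNotFound,
                    pkg_resources.VersionConflict):
                return False
        return True

    # e.g. extra_is_available('ipython', 'notebook')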
On 22 July 2013 12:22, Nick Coghlan <ncoghlan@gmail.com> wrote:
since extras aren't really a thing in their own right (they're just a shorthand for referring to an additional set of dependencies)
I'm still trying to be clear in my mind about what extras are, and how they should work. From this description, it occurs to me to ask, what is the difference between an extra and a (metadata only, empty) second distribution that depends on the base project as well as the "additional set of dependencies"? Is it just the admin overhead of registering a second project? Looking at extras this way gives a possible way of generating scripts only when the extras are present - just add the scripts to the dummy "extra" distribution. Paul.
On Mon, Jul 22, 2013 at 7:29 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 22 July 2013 12:22, Nick Coghlan <ncoghlan@gmail.com> wrote:
since extras aren't really a thing in their own right (they're just a shorthand for referring to an additional set of dependencies)
I'm still trying to be clear in my mind about what extras are, and how they should work. From this description, it occurs to me to ask, what is the difference between an extra and a (metadata only, empty) second distribution that depends on the base project as well as the "additional set of dependencies"? Is it just the admin overhead of registering a second project?
Looking at extras this way gives a possible way of generating scripts only when the extras are present - just add the scripts to the dummy "extra" distribution.
Yes, extras are *only* a way to create aliases for a set of dependencies. They are not recorded as installed. It should make no difference whether you install ipython[notebook], look up the dependencies for the ipython notebook and install them manually, or happen to have the ipython[notebook] dependencies installed and then later install ipython itself. What you get is a convenient way to install a distribution's optional dependencies without having to worry about whether the feature's dependencies change. It is a bad idea to make it too easy to end up with broken installs of distributions that are missing their scripts.
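For readers following along, declaring such an alias looks like this in a setup.py (a sketch: the project and dependency names are illustrative; extras_require is the real setuptools keyword):

    from setuptools import setup

    setup(
        name='ipython-like-project',      # illustrative name
        version='1.0',
        install_requires=['base-dep'],    # always installed
        extras_require={
            # 'pip install ipython-like-project[notebook]' also pulls these in
            'notebook': ['tornado>=3.1', 'jinja2', 'pyzmq'],
        },
    )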
On 07/22/2013 06:31 AM, Daniel Holth wrote:
Yes, extras are *only* a way to create aliases for a set of dependencies. They are not recorded as installed. It should make no difference whether you install ipython[notebook], look up the dependencies for the ipython notebook and install them manually, or happen to have the ipython[notebook] dependencies installed and then later install ipython itself.
In the broad view I don't think this is true, when you consider uninstall. If I install ipython[notebook] and later uninstall ipython, it would be reasonable for the uninstaller to prompt me to uninstall all the ipython notebook dependencies by default, whereas it should not do so if I had installed them separately and directly. That said, the REQUESTED flag in PEP 376 is probably sufficient for this, so it may still be true that there's no need to store which extras were installed with a package. Carl
On 23 Jul 2013 05:53, "Carl Meyer" <carl@oddbird.net> wrote:
On 07/22/2013 06:31 AM, Daniel Holth wrote:
Yes, extras are *only* a way to create aliases for a set of dependencies. They are not recorded as installed. It should make no difference whether you install ipython[notebook], look up the dependencies for the ipython notebook and install them manually, or happen to have the ipython[notebook] dependencies installed and then later install ipython itself.
In the broad view I don't think this is true, when you consider uninstall. If I install ipython[notebook] and later uninstall ipython, it would be reasonable for the uninstaller to prompt me to uninstall all the ipython notebook dependencies by default, whereas it should not do so if I had installed them separately and directly.
That said, the REQUESTED flag in PEP 376 is probably sufficient for this, so it may still be true that there's no need to store which extras were installed with a package.
The safest logic for that kind of garbage collection feature is currently:
* was it explicitly requested?
* if not, does anything else (extra or not) still depend on it?
Tracking extra requests directly currently falls into YAGNI territory in my opinion - if people want that level of control, then what they have is really a separate distribution rather than an extra. Cheers, Nick.
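The logic Nick describes, as pseudocode (the Distribution attributes here are invented for illustration; REQUESTED is the PEP 376 marker):

    def can_autoremove(dist, installed):
        # Rule 1: never remove something the user explicitly asked for
        # (PEP 376 records this via the REQUESTED file).
        if dist.requested:
            return False
        # Rule 2: keep it while anything else still depends on it,
        # whether via a base requirement or an extra.
        return not any(dist.name in other.all_requirements
                       for other in installed if other is not dist)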
On 22 Jul 2013 21:29, "Paul Moore" <p.f.moore@gmail.com> wrote:
On 22 July 2013 12:22, Nick Coghlan <ncoghlan@gmail.com> wrote:
since extras aren't really a thing in their own right (they're just a shorthand for referring to an additional set of dependencies)
I'm still trying to be clear in my mind about what extras are, and how they should work. From this description, it occurs to me to ask, what is the difference between an extra and a (metadata only, empty) second distribution that depends on the base project as well as the "additional set of dependencies"? Is it just the admin overhead of registering a second project?
Sort of. The idea of an extra is "We have installed all the code for this, but it won't work due to runtime failures if these dependencies aren't available". With an actual separate distribution, you can't easily tell that the other distribution contains no code of its own, and naming and versioning gets more complicated. You also can't do the trick 426 adds where "*" means "all optional dependencies". For other package systems like RPM that don't have the notion of extras, then yes, an extra would probably be mapped to a virtual package (in the specific case of yum, it copes fairly well with version locked virtual packages like that).
Looking at extras this way gives a possible way of generating scripts only when the extras are present - just add the scripts to the dummy "extra" distribution.
Partial installs are problematic, since checking for optional dependencies is supposed to be a runtime thing, so it doesn't matter *how* those dependencies got there. Optional functionality like that would be better handled through a script that accepts subcommands, some of which would report an error if dependencies were missing. For a truly optional script, it needs to be a genuinely separate package. Cheers, Nick.
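A sketch of that subcommand pattern (every name here is hypothetical; the point is that the optional dependency is checked at runtime, when the feature is used):

    import sys

    def notebook_command(args):
        try:
            import tornado  # provided by the hypothetical [notebook] extra
        except ImportError:
            sys.exit("the 'notebook' subcommand requires the [notebook] extra")
        print('starting notebook with', args)

    def main(argv=None):
        argv = sys.argv[1:] if argv is None else argv
        if argv[:1] == ['notebook']:
            notebook_command(argv[1:])
        else:
            print('usage: myproj notebook [options]')

    if __name__ == '__main__':
        main()

A base install keeps working for every other subcommand; only the optional feature fails, and it fails with a clear message.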
On Mon, Jul 22, 2013 at 7:29 AM, Paul Moore <p.f.moore@gmail.com> wrote:
I'm still trying to be clear in my mind about what extras are, and how they should work. From this description, it occurs to me to ask, what is the difference between an extra and a (metadata only, empty) second distribution that depends on the base project as well as the "additional set of dependencies"? Is it just the admin overhead of registering a second project?
That's one way of looking at it. But it's not implemented that way; it's more like environment markers -- i.e., conditional dependencies -- based on whether you want support for certain features that are, well, "extra". ;-)
Looking at extras this way gives a possible way of generating scripts only when the extras are present - just add the scripts to the dummy "extra" distribution.
Setuptools doesn't actually *have* a dummy distribution (just conditional requirements in the base), but I don't see a problem with only installing a script if you asked to install the extras that script needs. It probably would've been sensible to implement easy_install that way.
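The conditional-requirement syntax PJ is referring to attaches extras to an entry point in square brackets. A sketch (project and module names are invented; the '[extra]' suffix is real setuptools entry-point syntax):

    from setuptools import setup

    setup(
        name='myproj',
        version='1.0',
        packages=['myproj'],
        extras_require={'notebook': ['tornado>=3.1']},
        entry_points={
            'console_scripts': [
                # resolving this entry point via pkg_resources also
                # requires the dependencies of the 'notebook' extra
                'myproj-notebook = myproj.notebook:main [notebook]',
            ],
        },
    )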
On 18 July 2013 18:24, Marcus Smith <qwcode@gmail.com> wrote:
I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do.
I agree. There's 2 problems to be solved here
1) making pip a self-sufficient wheel installer (which requires some internal pkg_resources equivalent)
2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions
for #2, we have a few paths I think
1) bundle setuptools (and have pip install "pkg_resources" for console scripts, if it existed as a separate project)
2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?)
3) dynamic install of setuptools from wheel when pip needs to install sdists (which is 99.9% of the time, so this feels a bit silly)
4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?)
As you say, for #1 using an internal pkg_resources (probably distlib's, why bother vendoring a second one?) works.
Given that pip forces use of setuptools for *all* sdist builds, I think we have to bundle it for that purpose. I really dislike the need to do this, but I don't see a way round it. And if we do, we can just as easily use the real pkg_resources as distlib's emulation.
As regards console scripts, I think they should be rewritten to remove the dependency on pkg_resources. That should be a setuptools fix, maybe triggered by a flag (possibly implied by --single-version-externally-managed, as the pkg_resources complexity is only needed when multi-versions are involved). If Jason's not comfortable with the change, then we'll probably have to find some way of doing it within pip, which is likely to be a fairly gross hack (unless we go really bleeding-edge, don't put scripts into a wheel *at all* (or just omit exes and -script.py files, I don't know), put the exports metadata in the wheel, and assume that it's the wheel installer's job to create the wrappers). We can do that for pip install, and we just have to assume that other tools (wheel install, distlib) will do the same.
TBH, my preference is for the metadata approach, do it correctly from the start. Paul
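To make the trade-off concrete: a setuptools-generated wrapper of this era looks roughly like the first sketch below (paraphrased, not copied from any specific setuptools version), while the pkg_resources-free wrapper being asked for could be as small as the second ('myproj' is a placeholder project):

    # setuptools-style wrapper: resolves a pinned version at run time
    import sys
    from pkg_resources import load_entry_point

    if __name__ == '__main__':
        sys.exit(load_entry_point('myproj==1.0', 'console_scripts', 'myproj')())

    # minimal wrapper with no pkg_resources dependency
    import sys
    from myproj.cli import main

    if __name__ == '__main__':
        sys.exit(main())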
On Jul 18, 2013, at 4:36 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 18 July 2013 18:24, Marcus Smith <qwcode@gmail.com> wrote:
I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do.
I agree. There's 2 problems to be solved here
1) making pip a self-sufficient wheel installer (which requires some internal pkg_resources equivalent)
2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions
for #2, we have a few paths I think
1) bundle setuptools (and have pip install "pkg_resources" for console scripts, if it existed as a separate project)
2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?)
3) dynamic install of setuptools from wheel when pip needs to install sdists (which is 99.9% of the time, so this feels a bit silly)
4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?)
As you say, for #1 using an internal pkg_resources (probably distlib's, why bother vendoring a second one?) works.
Given that pip forces use of setuptools for *all* sdist builds, I think we have to bundle it for that purpose. I really dislike the need to do this, but I don't see a way round it. And if we do, we can just as easily use the real pkg_resources as distlib's emulation.
As regards console scripts, I think they should be rewritten to remove the dependency on pkg_resources. That should be a setuptools fix, maybe triggered by a flag (possibly implied by --single-version-externally-managed, as the pkg_resources complexity is only needed when multi-versions are involved). If Jason's not comfortable with the change, then we'll probably have to find some way of doing it within pip, which is likely to be a fairly gross hack (unless we go really bleeding-edge, don't put scripts into a wheel *at all* (or just omit exes and -script.py files, I don't know), put the exports metadata in the wheel, and assume that it's the wheel installer's job to create the wrappers). We can do that for pip install, and we just have to assume that other tools (wheel install, distlib) will do the same.
TBH, my preference is for the metadata approach, do it correctly from the start.
Paul
Console scripts aren't the only use of entry points, fwiw. There are other entry points programs use. I don't know if they all depend on setuptools or if they just assume it's there. Technically they should declare the dependency, but that would break things for those people. I think either way pkg_resources is going to need to be installed, but setuptools won't. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 18 July 2013 21:41, Donald Stufft <donald@stufft.io> wrote:
Console scripts aren't the only use of entry points, fwiw. There are other entry points programs use. I don't know if they all depend on setuptools or if they just assume it's there. Technically they should declare the dependency, but that would break things for those people.
I think either way pkg_resources is going to need to be installed, but setuptools won't.
If a project uses setuptools features at runtime, it should declare setuptools as a dependency. The difference with script wrappers is that the project didn't write the code, setuptools itself did. Any other use of entry points requires "import pkg_resources" in the user-written code, and should therefore be supported by having setuptools in the runtime dependency list. Paul
On Thu, Jul 18, 2013 at 4:36 PM, Paul Moore <p.f.moore@gmail.com> wrote:
As regards console scripts, I think they should be rewritten to remove the dependency on pkg_resources. That should be a setuptools fix,
As others have already mentioned, this is not a bug but a feature. Setuptools-generated scripts are linked to a specific version of the project, which means that you can install more than one version by renaming the scripts or installing the scripts to different directories. While other strategies are definitely possible, distlib's approach is not backward-compatible, as it means installing new versions of a project will change *existing scripts'* semantics, even if you installed the previous version's scripts to different locations and intended them to remain accessible. If you want an example of doing it right, see buildout, which hardcodes the entire sys.path of a script to refer to the exact versions of all dependencies; while this has different failure modes (i.e., dependence on absolute paths), it is more stable as to script semantics even than setuptools' default behavior.
maybe triggered by a flag (possibly implied by --single-version-externally-managed, as the pkg_resources complexity is only needed when multi-versions are involved).
That option does not preclude the existence of multiple versions, or the possibility of installing the same script to different directories for different installed versions. If you *must* do this, I suggest using buildout's approach of hardwiring sys.path in the script, only strengthened by checking for the actual existence and versions, rather than distlib's anything-goes approach. (Of course, as Donald points out, this won't do anything for those scripts that themselves make use of other packages' entry points: they will have a runtime dependency on pkg_resources anyway.)
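For comparison, a buildout-generated script has roughly this shape (the paths are invented; the mechanism - an explicit sys.path baked into the script - is the point):

    import sys

    sys.path[0:0] = [
        '/opt/app/eggs/myproj-1.0-py2.7.egg',
        '/opt/app/eggs/somedep-2.3-py2.7.egg',
    ]

    from myproj.cli import main

    if __name__ == '__main__':
        sys.exit(main())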
PJ Eby <pje <at> telecommunity.com> writes:
As others have already mentioned, this is not a bug but a feature. Setuptools-generated scripts are linked to a specific version of the project, which means that you can install more than one version by renaming the scripts or installing the scripts to different directories.
While other strategies are definitely possible, distlib's approach is not backward-compatible, as it means installing new versions of a
Correct, because distlib does not support multiple installed versions of the same distribution, nor does it do the sys.path manipulations on the fly which have caused many people to have a problem with setuptools. Do people see this as a problem? I would have thought that venvs would allow people to deal with multiple versions in a less magical way. Regards, Vinay Sajip
On Thu, Jul 18, 2013 at 7:09 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
PJ Eby <pje <at> telecommunity.com> writes:
While other strategies are definitely possible, distlib's approach is not backward-compatible, as it means installing new versions of a
Correct, because distlib does not support multiple installed versions of the same distribution, nor does it do the sys.path manipulations on the fly which have caused many people to have a problem with setuptools.
Do people see this as a problem? I would have thought that venvs would allow people to deal with multiple versions in a less magical way.
So does buildout, which doesn't need venvs; it just (if you configure it that way) puts all your eggs in a giant cache directory and writes scripts with hardcoded sys.path to include the right ones. This is actually more explicit than venvs, since it doesn't depend on environment variables or on installation state. IOW, there are other choices available besides "implicit environment-based path" and "dynamically generated path". Even setuptools doesn't require that you have a dynamic path.
If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention.
Distutils lets you install things wherever you want; in the naive case you could use install --root to install every package to a version-specific directory and then use something like Gnu Stow to create symlink farms. Python supports explicit sys.path construction and modification, and of course people certainly "vendor" (i.e. bundle) their dependencies directly in order to have a specific version of them. So, I don't think it's accurate to consider multi-version installation a totally new feature. (And AFAIK, the point of contention isn't that setuptools *supports* multi-version installation, it's that it's the *default* implementation.)
In any event, wheels are designed to be able to be used in the same way as eggs for multi-version installs. The question of *how* has been brought up by Nick before, and I've thrown out some counter-proposals. It's still an open issue as to how much *active* support will be provided, but my impression of the discussion is that even if the stdlib isn't exactly *encouraging* multi-version installs, we don't want to *break* them.
Hence my suggestion that if you want to drop pkg_resources use from generated scripts, you should use buildout's approach (explicit sys.path baked into the script) rather than distlib's current laissez-faire approach. Or you can just check versions, I'm not that picky. All I want is that if you install a new version of a package and still have an old copy of the script, the old script should still run the old version, or at least give you an error telling you the script wasn't updated, rather than silently running a different version. Buildout's approach accomplishes this by hardcoding egg paths, so as long as you don't delete the eggs, everything is fine, and if you do delete any of them, you can see what's wrong by looking at the script source.
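The "just check versions" variant could look like this (a sketch; the project name and pinned version are placeholders, pkg_resources.require and its exceptions are real):

    import sys
    import pkg_resources

    try:
        # the exact version is baked in when the script is generated
        pkg_resources.require('myproj==1.0')
    except (pkg_resources.VersionConflict,
            pkg_resources.DistributionNotFound) as exc:
        sys.exit('this script was installed for myproj 1.0: %s' % exc)

    from myproj.cli import main
    sys.exit(main())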
version of them. So, I don't think it's accurate to consider multi-version installation a totally new feature. (And AFAIK, the point of contention isn't that setuptools *supports* multi-version installation, it's that it's the *default* implementation.)
That distutils features could be manipulated in some esoteric way doesn't mean that distutils supports multi-version installations - not by design, anyway. It's perfectly fine for setuptools, buildout and other third-party tools to support multi-version installations in whatever way they see fit - I only raised the question of a PEP because multi-version would be a significant new feature if in Python (leaving aside technicalities about whether something "bundled with Python" is "in Python"). Regards, Vinay Sajip
But it also sounds like projects providing wheel distributions is too early a practice to include in the User's Guide.
My intention is for the user guide to cover building and installing wheels. https://bitbucket.org/pypa/python-packaging-user-guide/issue/11/include-inst...
Brett Cannon <brett <at> python.org> writes:
Then I'm thoroughly confused since the Wheel PEP says in its rationale that "Python needs a package format that is easier to install than sdist". That would suggest a wheel would work for a source distribution and replace sdist zip/tar files. If wheels aren't going to replace what sdist spits out as the installation file format of choice for pip, what is it for, just binary files alone?
Another way to look at it: The wheel contains all the code needed to use a distribution at run or build time - Python code, .so files, header files, data files, scripts. "Just stuff - no fluff" :-) The sdist generally contains all the files in the wheel, plus those needed to build the wheel (e.g. .pyx, .f, .c), + docs, tests, test data etc. but not the built files. This isn't hard and fast, though - an sdist could e.g. choose to include a .c file created from a .pyx, so that the user doesn't need to have Cython installed, but just a C compiler. Of course some people bundle their test code in a tests subpackage which would then end up in the wheel, but hopefully I've given the gist of the distinction. Regards, Vinay Sajip
On Jul 17, 2013, at 11:46 AM, Daniel Holth wrote:
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
You're not getting rid of sdists are you? Please note that without source distributions (preferably .tar.gz) your package will never get distributed on a Linux distro. Maybe the keyword here is "traditional" though. In that case, keep in mind that at least in Debian and its derivatives, we have a lot of tools that make it pretty trivial to package something setup.py based from PyPI. If/when that goes away, it will be more difficult to get new package updates, until the distro's supporting tools catch up. -Barry
On 17 July 2013 19:46, Barry Warsaw <barry@python.org> wrote:
You're not getting rid of sdists are you?
There are as-yet unspecified plans for a sdist 2.0 format. It is expected to fulfil the same role as current sdist, though, so no need to worry.
Please note that without source distributions (preferably .tar.gz) your package will never get distributed on a Linux distro.
Understood. I expect Nick is fully aware of the implications here :-)
Maybe the keyword here is "traditional" though. In that case, keep in mind that at least in Debian and its derivatives, we have a lot of tools that make it pretty trivial to package something setup.py based from PyPI. If/when that goes away, it will be more difficult to get new package updates, until the distro's supporting tools catch up.
The long-term intent is to remove executable setup.py. When this happens, definitely consumers (end users, Python tools like pip, and distro packaging systems) will have some migration work to do. Keeping that manageable will definitely be important. But doing nothing and staying where we are isn't really an option, so we'll have to accept and manage the pain. Paul
On Jul 17, 2013, at 07:56 PM, Paul Moore wrote:
The long-term intent is to remove executable setup.py. When this happens, definitely consumers (end users, Python tools like pip, and distro packaging systems) will have some migration work to do. Keeping that manageable will definitely be important. But doing nothing and staying where we are isn't really an option, so we'll have to accept and manage the pain.
Definitely. And if that leads to a declarative equivalent that we can reason about without executing, all the better. the-setup.cfg-is-dead,-long-live-the-setup.cfg-ly y'rs, -Barry
On 17 July 2013 19:46, Barry Warsaw <barry@python.org> wrote:
On Jul 17, 2013, at 11:46 AM, Daniel Holth wrote:
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
You're not getting rid of sdists are you?
Please note that without source distributions (preferably .tar.gz) your package will never get distributed on a Linux distro.
Maybe the keyword here is "traditional" though.
Yeah, I think what Daniel means is that the sdist->wheel transformation could be done by a tool other than distutils and setuptools. The sdist as supplied would not be something that could be directly installed with 'python setup.py install' but it could be turned into a wheel by bento/waf/yaku/scons etc.
In that case, keep in mind that at least in Debian and its derivatives, we have a lot of tools that make it pretty trivial to package something setup.py based from PyPI. If/when that goes away, it will be more difficult to get new package updates, until the distro's supporting tools catch up.
I imagined that distro packaging tools would end up using the wheel as an intermediate format when building a deb from a source deb. Would that not make things easier long-term? In the short term, you can expect that whatever solution people use is likely to be convertible to a traditional sdist in some straight-forward way e.g. 'bentomaker sdist'. Oscar
On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote:
I imagined that distro packaging tools would end up using the wheel as an intermediate format when building a deb from a source deb.
Do you mean, the distro would download the wheel or that it would build it during the build step for the archive? Probably not the former, as any binary blobs in a wheel would both violate policy and likely be inappropriate for all the platforms we build for. -Barry
On 17 July 2013 20:39, Barry Warsaw <barry@python.org> wrote:
On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote:
I imagined that distro packaging tools would end up using the wheel as an intermediate format when building a deb from a source deb.
Do you mean, the distro would download the wheel or that it would build it during the build step for the archive? Probably not the former, as any binary blobs in a wheel would both violate policy and likely be inappropriate for all the platforms we build for.
I meant the latter. The source deb would comprise the sdist (that may or may not be "traditional") and other distro files. The author of the sdist designed it with the intention that it could be turned into a wheel in some way (perhaps not the traditional one). So the natural way to build it is to use the author's intended build mechanism, end up with a wheel, and then convert that to an installable deb. Oscar
On Jul 17, 2013, at 3:44 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
I meant the latter. The source deb would comprise the sdist (that may or may not be "traditional") and other distro files. The author of the sdist designed it with the intention that it could be turned into a wheel in some way (perhaps not the traditional one). So the natural way to build it is to use the author's intended build mechanism, end up with a wheel, and then convert that to an installable deb.
As far as I know that's not how distros package things. They'll take the source and package it into a source package for their platform, and then their build machines will build binary packages for all the architectures they support. I don't expect the distros to use Wheels at all. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Wed, Jul 17, 2013 at 3:39 PM, Barry Warsaw <barry@python.org> wrote:
On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote:
I imagined that distro packaging tools would end up using the wheel as an intermediate format when building a deb from a source deb.
Do you mean, the distro would download the wheel or that it would build it during the build step for the archive? Probably not the former, as any binary blobs in a wheel would both violate policy and likely be inappropriate for all the platforms we build for.
-Barry
The distro packager will likely only have to type "python -m some_tool install ... " instead of "setup.py install ...". IIRC distro packaging normally does installation into some temporary directory which is then archived to create the distro package. The existence of wheel probably doesn't make any difference. However a pure-Python wheel on pypi might be something a distro could work with or it could be an intermediate format compiled just-in-time by the distro. The new json metadata probably will affect the distros more.
On 17 July 2013 20:52, Daniel Holth <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 3:39 PM, Barry Warsaw <barry@python.org> wrote:
On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote:
I imagined that distro packaging tools would end up using the wheel as an intermediate format when building a deb from a source deb.
Do you mean, the distro would download the wheel or that it would build it during the build step for the archive? Probably not the former, as any binary blobs in a wheel would both violate policy and likely be inappropriate for all the platforms we build for.
The distro packager will likely only have to type "python -m some_tool install ... " instead of "setup.py install ...". IIRC distro packaging normally does installation into some temporary directory which is then archived to create the distro package. The existence of wheel probably doesn't make any difference.
Currently sdists provides a relatively uniform interface in the way that the setup.py can be used for build/installation. If non-traditional sdists become commonplace then that will not be the case any more. On the other hand the wheel format provides not just a uniform interface but a formally specified one that I imagine is more suitable for the kind of automated processing that is done by distros. I'm not a distro packager but I imagined that they would find it more convenient to have tools that turn one formally specified format into another than to run the installation in a monkey-patched environment. Oscar
On 18 Jul 2013 01:46, "Daniel Holth" <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
Argh, don't even suggest that. Such projects could never be included in a Linux distribution - we need the original source to push into a trusted build system. Cheers, Nick.
* Are we promoting (weakly, strongly?) the signing of distributions yet?
No change.
* Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)?
Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior.
The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems.
On 17 July 2013 22:43, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 18 Jul 2013 01:46, "Daniel Holth" <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
Argh, don't even suggest that. Such projects could never be included in a Linux distribution - we need the original source to push into a trusted build system.
What do you mean by this? I interpret Daniel's comment as meaning that there's no setup.py in the sdist. And I think it's a great idea and that lots of others would be very happy to ditch the setup.py concept in favour of something entirely different from the distutils way of doing things. In another thread you mentioned the idea that someone would build without using distutils/setuptools by using a setup.py that simply invokes an alternate build system that is build-required by the sdist. That's fine for simple cases but how many 'python setup.py <command>'s should the setup.py support? Setuptools setup() supports the following: build, build_py, build_ext, build_clib, build_scripts, clean, install, install_lib, install_headers, install_scripts, install_data, sdist, register, bdist, bdist_dumb, bdist_rpm, bdist_wininst, upload, check, rotate, develop, setopt, saveopts, egg_info, upload_docs, install_egg_info, alias, easy_install, bdist_egg, test (Presumably bdist_wheel would be there if I had a newer setuptools). Oscar
On 18 Jul 2013 21:48, "Oscar Benjamin" <oscar.j.benjamin@gmail.com> wrote:
On 17 July 2013 22:43, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 18 Jul 2013 01:46, "Daniel Holth" <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
Argh, don't even suggest that. Such projects could never be included in a Linux distribution - we need the original source to push into a trusted build system.
What do you mean by this?
I interpret Daniel's comment as meaning that there's no setup.py in the sdist. And I think it's a great idea and that lots of others would be very happy to ditch the setup.py concept in favour of something entirely different from the distutils way of doing things.
No, that's not what he said; he said no sdist at all. Wheel fills the role of a prebuilt binary format; it's not suitable as the *sole* upload format for a project. Tarball, sdist, wheel: three different artifacts for three different phases of distribution.
In another thread you mentioned the idea that someone would build without using distutils/setuptools by using a setup.py that simply invokes an alternate build system that is build-required by the sdist. That's fine for simple cases but how many 'python setup.py <command>'s should the setup.py support?
Please read PEP 426, as I cover this in detail. If anything needs further clarification, please let me know. Cheers, Nick.
On 18 July 2013 13:13, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 18 Jul 2013 21:48, "Oscar Benjamin" <oscar.j.benjamin@gmail.com> wrote:
In another thread you mentioned the idea that someone would build without using distutils/setuptools by using a setup.py that simply invokes an alternate build system that is build-required by the sdist. That's fine for simple cases but how many 'python setup.py <command>'s should the setup.py support?
Please read PEP 426, as I cover this in detail. If anything needs further clarification, please let me know.
Okay, I have actually read that before but I forgot about that bit. It says:
'''
In the meantime, the above operations will be handled through the distutils/setuptools command system:
python setup.py dist_info
python setup.py sdist
python setup.py build_ext --inplace
python setup.py test
python setup.py bdist_wheel
'''
That seems a sufficiently minimal set of commands. What I wonder when reading it is whether any other command line options are expected to be supported. For example, if the setup.py is using distutils/setuptools then you could do something like:
python setup.py sdist --dist-dir=some_dir
Should it be explicitly stated that the setup.py is not required to support any invocation other than those listed, and that it should just report success/failure by error code? Also, in the event of failure is it the job of setup.py to clean up after itself (since there's no clean command)? Oscar
On 18 Jul, 2013, at 13:48, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
On 17 July 2013 22:43, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 18 Jul 2013 01:46, "Daniel Holth" <dholth@gmail.com> wrote:
On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon <brett@python.org> wrote:
I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial.
But to do that I wanted to ask what the current best practices are.
* Are we even close to suggesting wheels for source distributions?
No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available.
I'd like to see an ambitious person begin uploading wheels that have no traditional sdist.
Argh, don't even suggest that. Such projects could never be included in a Linux distribution - we need the original source to push into a trusted build system.
What do you mean by this?
I interpret Daniel's comment as meaning that there's no setup.py in the sdist. And I think it's a great idea and that lots of others would be very happy to ditch the setup.py concept in favour of something entirely different from the distutils way of doing things.
In another thread you mentioned the idea that someone would build without using distutils/setuptools by using a setup.py that simply invokes an alternate build system that is build-required by the sdist. That's fine for simple cases but how many 'python setup.py <command>'s should the setup.py support?
I don't think that's clear at the moment. It could be as little as "bdist_wheel" - that could be enough of an interface to get from an extracted sdist to a wheel. The current focus is on defining a common metadata format (the metadata 2.0 JSON files) and a binary distribution format, and that's enough to keep the folks doing the actual work occupied for now. In the long run we'll probably end up with something like this:
* Sources from a VCS (that is, project in the layout used by those doing development)
    | [tool specific]
    V
* sdist archive (sources + metadata.json + ???, to be specified)
    | [to be specified interface]
    V
* wheel archive
    | ["pip", PEP 376(?)]
    V
* installed package
If I recall correctly the transformation from sdist to wheel is currently not specified because getting the last steps (binary distribution and installation) right is more important right now. The exact format of an sdist, and the interface for specifying how to build a wheel from an sdist, is still open for discussion and experimentation. That is, what's the minimal tool that could be used to create wheels for distributions that contain one or more python packages with dependency information? And what would be needed for a more complex distribution with (optional) C extensions, data files, custom compilers, ...? The initial interface to the build system could well be a setup.py file that the build system will only invoke as "python setup.py bdist_wheel --bdist-dir=DIR" (with build-time dependencies specified in the metadata file), because that's easy to arrange for distutils/setuptools, and it should be easy enough to provide a dummy setup.py file with just that interface for alternative build systems. Ronald
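A minimal version of the dummy setup.py Ronald describes might look like this ('realbuildtool' is a stand-in for bento/waf/scons/etc.; the single-command contract is the point, not the names):

    import subprocess
    import sys

    if __name__ == '__main__':
        if sys.argv[1:2] != ['bdist_wheel']:
            sys.exit('this project only supports: setup.py bdist_wheel')
        # hand the real work to the project's actual build tool
        sys.exit(subprocess.call(['realbuildtool', 'build-wheel'] + sys.argv[2:]))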
participants (13)
- Barry Warsaw
- Brett Cannon
- Carl Meyer
- Daniel Holth
- Donald Stufft
- Marcus Smith
- Nick Coghlan
- Oscar Benjamin
- Paul Moore
- PJ Eby
- Ronald Oussoren
- Steve Dower
- Vinay Sajip