PEP 439 and pip bootstrap updated
[firstly, my apologies for posting the announcement yesterday of the pip bootstrap implementation and PEP updates to the pypa-dev list instead of distutils-sig... I blame PyCon AU exhaustion :-)]

Firstly, I've just made some additional changes to PEP 439 to include:

- installing virtualenv as well (so now pip, setuptools and virtualenv are installed)
- mention the possibility of inclusion in a future Python 2.7 release
- clarify the SSL certificate situation

The bootstrap code has also been updated to:

- not run the full pip command if it's "pip3 install setuptools" or either of the other two packages it has just installed (thus preventing a possibly confusing message to the user)
- also install virtualenv

The intention is that pip, setuptools and indeed all Python projects will promote a single bootstrap process:

"pip3 install setuptools" or "pip3 install Django"

And then there are instructions for getting "pip" if it's not installed. Exact wording etc. to be determined :-)

The original message I sent to pypa-dev yesterday is below:

The bootstrap that I wrote at the PyCon AU sprints to implement PEP 439 has been added to pypa on bitbucket: https://bitbucket.org/pypa/bootstrap

I've also updated the PEP with the following changes:

- mention current plans for HTTPS cert verification in Python 3.4+ (sans PEP reference for now)
- remove setuptools note; setuptools will now be installed
- mention bootstrapping target (user vs. system) and command-line options
- mention Python 2.6+ bootstrap possibility
- remove consideration of support for unnecessary installation options (beyond -i/--index-url)
- mention having pip default to --user when itself installed in ~/.local

What the last item alludes to is the idea that it'd be nice if pip installed in ~/.local would default to installing packages also in ~/.local, as though the --user switch had been provided. Otherwise the user needs to remember it every time they install a package.
Note that the bootstrapping uses two different flags to control where the pip implementation is installed: --bootstrap and --bootstrap-system (these were chosen to encourage user installs). It would be ideal if pip could support those flags, as the pip3 command currently must remove them before invoking pip main.

Once we're happy with the shape of pip3 we can fork it to Python 2 and use it as the canonical bootstrap script for installing pip and setuptools. I think we should also consider installing virtualenv in Python 2...

Happy to clarify where needed, and code review is welcome. It's been a looong four days here :-)

Richard
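[Editor's note: the wrapper behaviour described above — strip the bootstrap-only flags, and skip re-running the real pip when the command was just installing one of the freshly bootstrapped packages — can be sketched as follows. This is an illustrative sketch, not the actual bootstrap code; the function and constant names are invented.]

```python
# Hedged sketch of the decision logic described in the message above.
# After the bootstrap has installed pip, setuptools and virtualenv, the
# pip3 wrapper should not re-invoke the real pip when the user's command
# was just installing one of those three packages (that would print a
# confusing "already satisfied" style message).

BOOTSTRAPPED = {"pip", "setuptools", "virtualenv"}

def should_run_real_pip(argv):
    """Return False when argv is exactly 'install <one bootstrapped pkg>'."""
    # Drop the bootstrap-only flags the real pip does not understand.
    args = [a for a in argv if a not in ("--bootstrap", "--bootstrap-system")]
    if len(args) == 2 and args[0] == "install" and args[1] in BOOTSTRAPPED:
        return False
    return True
```

So "pip3 install Django" would still delegate to the real pip after bootstrapping, while "pip3 install setuptools" would stop once the bootstrap itself has done the work.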
On Jul 9, 2013, at 11:16 PM, Richard Jones <r1chardj0n3s@gmail.com> wrote:
Firstly, I've just made some additional changes to PEP 439 to include:
- installing virtualenv as well (so now pip, setuptools and virtualenv are installed)
Doesn't "pyvenv", which is bundled with Python 3.3+, replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/
_______________________________________________
Distutils-SIG maillist - Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 10 July 2013 13:20, Donald Stufft <donald@stufft.io> wrote:
Doesn't "pyvenv", which is bundled with Python 3.3+, replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/
It's my understanding that people still install virtualenv in py3k.

Richard
On Jul 9, 2013, at 11:47 PM, Richard Jones <r1chardj0n3s@gmail.com> wrote:
It's my understanding that people still install virtualenv in py3k.
I just talked to Carl. He basically said that for 3.3+ pyvenv itself should probably be used and that "hopefully virtualenv will die in favor of pyvenv".

Another reason I think that the bootstrap script shouldn't install virtualenv is that of scope. The point of bootstrapping was to make it so pip could be "included" with Python without actually including it. As far as I'm personally concerned it should concern itself with installing pip and setuptools (assuming we can't make setuptools optional in pip or bundled…). We don't need virtualenv to enable ``pip3 install foo``, so it shouldn't be installing it.

OTOH it would be nice if pyvenv were taught to integrate with pip (although this is possibly a different PEP) in that, when creating a new environment, if pip has already been installed in the "parent" environment it would be copied over into the pyvenv-created environment.
On 10 July 2013 14:18, Donald Stufft <donald@stufft.io> wrote:
I just talked to Carl. He basically said that for 3.3+ pyvenv itself should probably be used and that "hopefully virtualenv will die in favor of pyvenv".
OK, thanks. I wonder whether virtualenv.org could mention pyvenv for Py3k users?
Another reason I think that the bootstrap script shouldn't install virtualenv is that of scope. The point of bootstrapping was to make it so pip could be "included" with Python without actually including it. As far as I'm personally concerned it should concern itself with installing pip and setuptools (assuming we can't make setuptools optional in pip or bundled…). We don't need virtualenv to enable ``pip3 install foo``, so it shouldn't be installing it.
pip without virtualenv in Python 2 contexts is pretty rare (or at least *should* be <wink>), so I think I'll retain it in that bootstrap code.
OTOH it would be nice if pyvenv were taught to integrate with pip (although this is possibly a different PEP) in that, when creating a new environment, if pip has already been installed in the "parent" environment it would be copied over into the pyvenv-created environment.
There's also the idea I mentioned yesterday: if pip is installed to the user-local site-packages then it would be really good if pip's installs could also default to that rather than the system site-packages. In fact I consider it a bug that it does not, and I hope the pip devs will come to think that too :-)

Richard
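[Editor's note: the default-to---user idea could look something like the following. A minimal sketch, not pip's actual code, assuming the per-user base is what `site.getuserbase()` reports; the function name is invented.]

```python
# Sketch: decide whether installs should default to --user, based on
# whether pip itself lives under the per-user base (e.g. ~/.local).
import os
import site

def default_to_user_install(pip_location, user_base=None):
    """True when pip_location is inside the per-user base directory."""
    base = site.getuserbase() if user_base is None else user_base
    pip_path = os.path.abspath(pip_location)
    base = os.path.abspath(base)
    return pip_path == base or pip_path.startswith(base + os.sep)
```

A pip installed to /usr/lib/.../site-packages would keep the system default, while one under ~/.local would install new packages there too.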
On Jul 10, 2013, at 12:37 AM, Richard Jones <r1chardj0n3s@gmail.com> wrote:
OK, thanks. I wonder whether virtualenv.org could mention pyvenv for Py3k users?
Probably!
pip without virtualenv in python 2 contexts is pretty rare (or at least *should* be <wink>) so I think I'll retain it in that bootstrap code.
Ok, I don't really care enough about that minor scope creep to object too heavily :)
There's also the idea I mentioned yesterday: if pip is installed to the user local site-packages then it would be really good if pip's installs could also default to that rather than the system site-packages. In fact I consider it a bug that it does not, and I hope the pip devs will come to think that too :-)
I don't have an opinion on this, as I can't think of a single time I (personally) would want to use the user-local site-packages, so that'd be something to convince the other pip devs of :D
Richard Jones <r1chardj0n3s <at> gmail.com> writes:
pip without virtualenv in python 2 contexts is pretty rare (or at least *should* be <wink>) so I think I'll retain it in that bootstrap code.
Perhaps I misunderstand, but what's the relevance of Python 2 contexts here? Aren't we talking about Python 3.4 and later? I agree with Donald's suggestion that virtualenv *not* be included, or are you saying that you want to include it for those users who have 3.4 *and* 2.x installed?

If you include virtualenv, it makes it possible for people to mistakenly use it even though the recommended approach is to use the built-in venv support in Python. Exactly what benefit does including virtualenv provide in a 3.4+ installation?

Regards,

Vinay Sajip
On 10 July 2013 19:08, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Perhaps I misunderstand, but what's the relevance of Python 2 contexts here? Aren't we talking about Python 3.4 and later?
It makes sense to me (and Nick) to simplify the packaging overhead for users of Python 2. Currently the story is a bit of a mess (multiple sites with different approaches).
Exactly what benefit does including virtualenv provide in a 3.4+ installation?
That was kinda my question <wink>

Richard
Richard Jones <richard <at> python.org> writes:
It makes sense to me (and Nick) to simplify the packaging overhead for users of Python 2. Currently the story is a bit of a mess (multiple sites with different approaches).
No argument there, but I still don't see the relevance of virtualenv in a 3.4+ context. The PEP states "Hereafter the installation of the 'pip implementation' will imply installation of setuptools and virtualenv." and, a few lines further down, "The bootstrap process will proceed as follows: 1. The user system has Python (3.4+) installed." I don't see any mention of backporting this bootstrap to 2.x.
Exactly what benefit does including virtualenv provide in a 3.4+ installation?
That was kinda my question <wink>
Sorry, it didn't come across like a question, more like a fait accompli :-)

Regards,

Vinay Sajip
On 10 July 2013 19:55, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
No argument there, but I still don't see the relevance of virtualenv in a 3.4+ context. The PEP states
"Hereafter the installation of the 'pip implementation' will imply installation of setuptools and virtualenv."
Sorry I've not made this clearer. Per the discussion here I've removed that from the PEP. That version hasn't been built on the web server yet.
Exactly what benefit does including virtualenv provide in a 3.4+ installation?
That was kinda my question <wink>
Sorry, it didn't come across like a question, more like a fait accompli :-)
Poorly phrased, my apologies.

Richard
On 10 Jul, 2013, at 11:40, Richard Jones <richard@python.org> wrote:
It makes sense to me (and Nick) to simplify the packaging overhead for users of Python 2. Currently the story is a bit of a mess (multiple sites with different approaches).
New features in a bugfix release? You better hope the RM doesn't hear :-)

That said, 2.7 will be around for a while, and adding a consistent installation experience to both 3.4 and 2.7 does sound attractive; adding a new script shouldn't have that many side effects.

What about backporting pyvenv as well? I guess that's way too invasive for a bugfix release.

Ronald
On Wed, Jul 10, 2013 at 12:37 AM, Richard Jones <r1chardj0n3s@gmail.com> wrote:
pip without virtualenv in python 2 contexts is pretty rare (or at least *should* be <wink>) so I think I'll retain it in that bootstrap code.
I agree it *should* be rare in most cases, but it most assuredly is not. I can tell you from experience that a lot of people in the scientific community, for example, do not use virtualenv (sometimes with good reasons, but more often not).

Erik
Hi Richard,

On 07/09/2013 09:47 PM, Richard Jones wrote:
It's my understanding that people still install virtualenv in py3k.
They certainly do today, but that's primarily because pyvenv isn't very useful yet: the stdlib has no installer, and thus a newly-created pyvenv has no way to install anything in it. The bootstrap should fix this very problem (i.e. make an installer available in every newly-created pyvenv) and thus encourage use of pyvenv (which is simpler, more reliable, and built-in) in place of virtualenv. I don't think it makes sense for the stdlib bootstrapper to install an inferior third-party tool instead of using a tool that is now built into the standard library on 3.3+.

Certainly if the bootstrap is ever ported to 2.7 or 3.2, it would make sense for it to install virtualenv there (or, probably even better, for pyvenv to be backported along with the bootstrap).

Carl
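[Editor's note: the workflow Carl describes — a bare pyvenv plus a get-pip style bootstrap run inside it so the new environment gets its own installer — might be sketched like this. The `run` hook and the use of `get-pip.py` as the bootstrap script name are illustrative assumptions, not the thread's actual code.]

```python
# Sketch: create a bare venv (in 3.3 it ships with no installer), then
# run a bootstrap script with the venv's own interpreter so the new
# environment can install packages into itself.
import os
import subprocess
import venv

def make_env_with_pip(env_dir, bootstrap_script="get-pip.py",
                      run=subprocess.check_call):
    """Create a bare venv, then run a bootstrap script inside it."""
    venv.create(env_dir)                       # bare environment, no installer
    bindir = "Scripts" if os.name == "nt" else "bin"
    env_python = os.path.join(env_dir, bindir, "python")
    run([env_python, bootstrap_script])        # the venv gets its own pip
    return env_python
```

(Python 3.4 later grew `venv.create(..., with_pip=True)` via ensurepip, which is this same idea folded into the stdlib.)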
On 10 July 2013 14:19, Carl Meyer <carl@oddbird.net> wrote:
They certainly do today, but that's primarily because pyvenv isn't very useful yet, since the stdlib has no installer and thus a newly-created pyvenv has no way to install anything in it.
Ah, thanks for clarifying that.
Certainly if the bootstrap is ever ported to 2.7 or 3.2, it would make sense for it to install virtualenv there (or, probably even better, for pyvenv to be backported along with the bootstrap).
I intend to create two forks; one for consideration in a 2.7.5 release as "pip" and the other for users of 2.6+ called "get-pip.py".

Richard
On Wed, Jul 10, 2013 at 12:54 AM, Richard Jones <richard@python.org> wrote:
I intend to create two forks; one for consideration in a 2.7.5 release as "pip" and the other for users of 2.6+ called "get-pip.py".
Why the specific shift between 2.7 and 2.6 in terms of naming? I realize you are differentiating between the bootstrap being pre-installed with Python vs. not, but is there really anything wrong with the script being called pip (or pip3 for Python 3.3/3.2) if it knows how to do the right thing to get pip up and going? IOW why not make the bootstrap what everyone uses to install pip and it just so happens to come pre-installed with Python 3.4 (and maybe Python 2.7)?
On 10 July 2013 13:46, Brett Cannon <brett@python.org> wrote:
pip (or pip3 for Python 3.3/3.2)
Sorry to butt in here, but can I just catch this point. There seems to be an ongoing series of assumptions over whether the bootstrap is called pip or pip3. The PEP actually says the bootstrap will be called pip3, but I'm not happy with that - specifically because the *existing* pip is not called pip3.

So, at present, if I (as a 100% Python 3 user) want to install a package, I type "pip install XXX". No version suffix. In the same way, to invoke Python, I type "py" (I'm on Windows here) or, if I want the currently active virtualenv, "python".

I would find it distinctly irritating if in Python 3.4 I have to type "pip3 bootstrap" to bootstrap pip - and even worse if *after* the bootstrap the command I use is still "pip". (And no, there is currently no "pip3" command installed by pip, and even if there were, I would not want to use it; I'm happy with the unsuffixed version.)

I appreciate that Unix users have different compatibility priorities here, but can I propose that on Windows at least, the bootstrap command is "pip" and that it matches the "core" pip that will be downloaded?

Oh - and one other thing: on Windows, python is often not on the system PATH - that's what the py.exe launcher is for. So where will the pip bootstrap command be installed, and where will it install the real pip? And also, will the venv code be modified to install the pip bootstrap in the venv's Scripts directory? Does virtualenv need to change to do the same? What if pip has already been bootstrapped in the system Python?

Maybe I need to properly review the PEP rather than just throwing out random thoughts :-)

Paul
On Wed, Jul 10, 2013 at 9:43 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 10 July 2013 13:46, Brett Cannon <brett@python.org> wrote:
pip (or pip3 for Python 3.3/3.2)
Sorry to butt in here, but can I just catch this point. There seems to be an ongoing series of assumptions over whether the bootstrap is called pip or pip3. The pep actually says the bootstrap will be called pip3, but I'm not happy with that - specifically because the *existing* pip is not called pip3.
So, at present, if I (as a 100% Python 3 user) want to install a package, I type "pip install XXX". No version suffix. In the same way, to invoke Python, I type "py" (I'm on Windows here) or if I want the currently active virtualenv, "python".
But you should be typing python3 here, not python (and PEP 394 is trying to get people to start using python2 as the name to invoke).
I would find it distinctly irritating if in Python 3.4 I have to type "pip3 bootstrap" to bootstrap pip - and even worse if *after* the bootstrap the command I use is still "pip". (And no, there is currently no "pip3" command installed by pip, and even if there were, I would not want to use it, I'm happy with the unsuffixed version).
As Donald pointed out, you would always use pip3. The bootstrapping aspect is a behind-the-scenes thing; just consider the script as "launch pip if installed, else, bootstrap it in and then launch it".
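[Editor's note: that "launch pip if installed, else bootstrap it in and then launch it" behaviour reduces to a few lines. A sketch with hypothetical hooks — `have_pip`, `bootstrap` and `run_pip` are invented names, not the real script's API.]

```python
# Sketch of the behind-the-scenes behaviour described above: the installed
# pip3 script delegates to the real pip, bootstrapping it first if needed.
def pip3_main(argv, have_pip, bootstrap, run_pip):
    """Run pip with argv, installing it first when it is missing."""
    if not have_pip():
        bootstrap()          # fetch and install pip (plus setuptools)
    return run_pip(argv)     # then hand the original command to real pip
```

From the user's point of view the command line never changes; only the first invocation does extra work.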
I appreciate that Unix users have different compatibility priorities here, but can I propose that on Windows at least, the bootstrap command is "pip" and that matches the "core" pip that will be downloaded?
There won't be a difference in command-line usage.
Oh - and one other thing, on Windows python is often not on the system PATH - that's what the py.exe launcher is for. So where will the pip bootstrap command be installed, and where will it install the real pip?
Covered in the PEP: it will go into the user installation location as if --user had been specified.
And also, will the venv code be modified to install the pip bootstrap in the venv's Scripts directory?
In the PEP: goes into the venv.
Does virtualenv need to change to do the same? What if pip has already been bootstrapped in the system Python?
Then nothing special happens; the script just executes pip instead of triggering a bootstrap first.
Maybe I need to properly review the PEP rather than just throwing out random thoughts :-)
I feel like I just fed a bad habit. =)
On 10 July 2013 15:28, Brett Cannon <brett@python.org> wrote:
So, at present, if I (as a 100% Python 3 user) want to install a package, I type "pip install XXX". No version suffix. In the same way, to invoke Python, I type "py" (I'm on Windows here) or if I want the currently active virtualenv, "python".
But you should be typing python3 here, not python (and PEP 394 is trying to get people to start using python2 as the name to invoke).
So - that's a major behaviour change on Windows. At the moment, Python 3.3 for Windows installs python.exe and pythonw.exe. There are no versioned executables at all. Are you saying that in 3.4 this will change? That will break so many things I have to believe you're wrong or I've misunderstood you. OTOH, adding python3.exe and python3w.exe (or is that pythonw3.exe?) which I can then ignore is fine by me (but in that case, the change doesn't affect my point about the pip command).

As I say, I understand Unix is different. This is a purely Windows point - and in the context of the PEP, that's what I'm saying: please can we be careful to be clear whether the plan is for the new pip bootstrap to favour existing platform conventions or uniformity (with the further complication of needing to consider the full pip distribution's behaviour - and there, I will be lobbying hard against any change to require a pip3 command to be used, at least on Windows).

As things stand, I can assume the PEP specifies Unix behaviour and is vague or silent on Windows variations, or I can ask for clarification and for the results to be documented in the PEP. Up to now I was doing the former, but I'm moving towards the latter - hence my question(s).

Paul.
On Wed, Jul 10, 2013 at 12:11 PM, Paul Moore <p.f.moore@gmail.com> wrote:
On 10 July 2013 15:28, Brett Cannon <brett@python.org> wrote:
So, at present, if I (as a 100% Python 3 user) want to install a package,
I type "pip install XXX". No version suffix. In the same way, to invoke Python, I type "py" (I'm on Windows here) or if I want the currently active virtualenv, "python".
But you should be typing python3 here, not python (and PEP 394 is trying to get people to start using python2 as the name to invoke).
So - that's a major behaviour change on Windows. At the moment, Python 3.3 for Windows installs python.exe and pythonw.exe. There are no versioned executables at all. Are you saying that in 3.4 this will change? That will break so many things I have to believe you're wrong or I've misunderstood you. OTOH, adding python3.exe and python3w.exe (or is that pythonw3.exe?) which I can then ignore is fine by me (but in that case, the change doesn't affect my point about the pip command).
Didn't know Windows was never updated to use a versioned binary. That's rather unfortunate.

-Brett
On 11 Jul 2013 04:56, "Brett Cannon" <brett@python.org> wrote:
On Wed, Jul 10, 2013 at 12:11 PM, Paul Moore <p.f.moore@gmail.com> wrote:
Didn't know Windows was never updated to use a versioned binary. That's rather unfortunate.
Hence the PyLauncher project.

Paul's right, though - the PEP is currently very *nix-centric. For Windows, we likely need to consider something based on "py -m pip", which then raises the question of whether or not that's what we should be supporting on *nix as well (with pip and pip3 as convenient shorthand).

There's also the fact that the Python launcher is *already* available as a separate Windows installer for earlier releases. Perhaps we should just be bundling the bootstrap script with that for earlier Windows releases.

Cheers,
Nick.
On 10 July 2013 21:30, Nick Coghlan <ncoghlan@gmail.com> wrote:
Paul's right, though - the PEP is currently very *nix-centric. For Windows, we likely need to consider something based on "py -m pip", which then raises the question of whether or not that's what we should be supporting on *nix as well (with pip and pip3 as convenient shorthand).
There's also the fact that the Python launcher is *already* available as a separate Windows installer for earlier releases. Perhaps we should just be bundling the bootstrap script with that for earlier Windows releases.
Thanks Nick. I was part way through a much more laboured email basically saying the same thing :-)

For reference, PEP 394 is the versioned-binary PEP. It is explicitly Unix-only and defers Windows to PEP 397 (pylauncher) as being "too complex" to cover alongside the Unix proposal :-)

I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc., in one fell swoop.

Paul.
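[Editor's note: the "-m" form works because only the interpreter needs to be located on PATH; the named module is then run as a script by the interpreter itself, identically on every platform. A small illustration of the mechanism using a stdlib module; the helper name is invented.]

```python
# Demonstrate "python -m <module>" dispatch: the interpreter finds and
# runs the module's script entry point, so no separate wrapper script
# (pip.exe, pip3, ...) ever needs to be on PATH.
import subprocess
import sys

def run_module(module, *args):
    """Invoke `python -m module args...` with the current interpreter."""
    return subprocess.run([sys.executable, "-m", module, *args],
                          capture_output=True, text=True)

# `python -m platform` prints the current platform string; the same
# invocation shape is what "python -m pip install X" relies on.
result = run_module("platform")
```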
On Jul 10, 2013, at 09:50 PM, Paul Moore wrote:
I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
+1 -Barry
On Jul 10, 2013, at 5:39 PM, Barry Warsaw <barry@python.org> wrote:
On Jul 10, 2013, at 09:50 PM, Paul Moore wrote:
I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
+1
-Barry
As long as the non -m way exists so I don't have to use it D: ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 10 July 2013 23:00, Donald Stufft <donald@stufft.io> wrote:
As long as the non -m way exists so I don't have to use it D:
Fair enough :-) Having a standard method (-m) and a platform-specific Unix method seems fine to me (and the Unix people can debate the merits of pip3 vs pip etc as much or as little as they want). It'll be nice seeing Unix be the non-standard one for a change :-) Paul
On 11 July 2013 06:50, Paul Moore <p.f.moore@gmail.com> wrote:
I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
"python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy. Thanks everyone for your brilliant feedback and discussion - I look forward to being able to say something sensible about Windows in the PEP :-) Richard
On Wed, Jul 10, 2013 at 9:09 PM, Richard Jones <richard@python.org> wrote:
On 11 July 2013 06:50, Paul Moore <p.f.moore@gmail.com> wrote:
I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
"python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy.
It's also fraught with historical baggage; remember xmlplus? That was extremely painful and something I believe everyone was glad to see go away. Having said that, there are two solutions to this. The solution compatible with older Python versions is to have the bootstrap download pip and install it as piplib or some other alternative name that is not masked by a pip stub in the stdlib. The dead-simple, extremely elegant solution (starting in Python 3.4) is to make pip a namespace package in the stdlib with nothing more than a __main__.py file that installs pip; no checking whether it's installed and then running it, etc, just blindly install pip. Then, if you install pip as a regular package, it takes precedence and what's in the stdlib is completely ignored (this helps with any possible staleness of the stdlib's bootstrap script vs. what's in pip, etc.). You don't even need to change the __main__.py in pip as it stands today, since namespace packages only take effect if no regular package is found. In case that didn't make sense, here is the file structure:

python3.4/
    pip/
        __main__.py  # Install pip, nothing more

~/.local/
    bin/
        pip  # Literally a shebang and two lines of Python; see below
    lib/python3.4/site-packages/
        pip/  # As it stands today
            __init__.py
            __main__.py
            ...

This also means pip3 literally becomes ``import runpy; runpy.run_module('pip')``, so that is even easier to maintain (assuming pip's bin/ stub isn't already doing that because of backwards-compatibility concerns or something with __main__.py or runpy not existing far enough back; otherwise it should =). -Brett
On 11 July 2013 13:49, Brett Cannon <brett@python.org> wrote:
The dead-simple, extremely elegant solution (starting in Python 3.4) is to make pip a namespace package in the stdlib with nothing more than a __main__.py file that installs pip; no checking if it's installed and then running it, etc, just blindly install pip. Then, if you install pip as a regular package, it takes precedence and what's in the stdlib is completely ignored (this helps with any possible staleness with the stdlib's bootstrap script vs. what's in pip, etc.). You don't even need to change the __main__.py in pip as it stands today since namespace packages only work if no regular package is found.
Wow - that is exceptionally cool. I had never realised namespace packages would work like this.
This also means pip3 literally becomes ``import runpy; runpy.run_module('pip')``, so that is even easier to maintain (assuming pip's bin/ stub isn't already doing that because of backwards-compatibility concerns or something with __main__.py or runpy not existing far enough back, otherwise it should =).
The pip executable script/wrapper currently uses setuptools entry points and wrapper scripts. I'm not a fan of those, so I'd be happy to see the change you suggest, but OTOH they have been like that since long before I was involved with pip, and I have no idea if there are reasons they need to stay that way. Paul
On Jul 11, 2013, at 9:33 AM, Paul Moore <p.f.moore@gmail.com> wrote:
The pip executable script/wrapper currently uses setuptools entry points and wrapper scripts. I'm not a fan of those, so I'd be happy to see the change you suggest, but OTOH they have been like that since long before I was involved with pip, and I have no idea if there are reasons they need to stay that way.
Typically the reasoning is because of the .exe wrapper. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Thu, Jul 11, 2013 at 9:39 AM, Donald Stufft <donald@stufft.io> wrote:
On Jul 11, 2013, at 9:33 AM, Paul Moore <p.f.moore@gmail.com> wrote:
The pip executable script/wrapper currently uses setuptools entry points and wrapper scripts. I'm not a fan of those, so I'd be happy to see the change you suggest, but OTOH they have been like that since long before I was involved with pip, and I have no idea if there are reasons they need to stay that way.
Typically the reasoning is because of the .exe wrapper.
And if people want to promote the -m option then the executable scripts just become a secondary convenience. Plus you can't exactly require setuptools to create those scripts at install-time with Python if that's when they are going to be installed.
On Thu, Jul 11, 2013 at 10:20 AM, Brett Cannon <brett@python.org> wrote:
And if people want to promote the -m option then the executable scripts just become a secondary convenience. Plus you can't exactly require setuptools to create those scripts at install-time with Python if that's when they are going to be installed.
You don't need setuptools in order to include .exe wrappers, though: there's nothing setuptools-specific about the .exe files, they just run a matching, adjacent 'foo-script.py', which can contain whatever you want. Just take the appropriate wrapper .exe, and rename it to whatever 'foo' you want. IOW, if you want to ship a pip.exe on windows that just does "from pip import __main__; __main__()" (or whatever), you can do precisely that, no setuptools needed.
On 11 July 2013 18:05, PJ Eby <pje@telecommunity.com> wrote:
On Thu, Jul 11, 2013 at 10:20 AM, Brett Cannon <brett@python.org> wrote:
And if people want to promote the -m option then the executable scripts just become a secondary convenience. Plus you can't exactly require setuptools to create those scripts at install-time with Python if that's when they are going to be installed.
You don't need setuptools in order to include .exe wrappers, though: there's nothing setuptools-specific about the .exe files, they just run a matching, adjacent 'foo-script.py', which can contain whatever you want. Just take the appropriate wrapper .exe, and rename it to whatever 'foo' you want.
IOW, if you want to ship a pip.exe on windows that just does "from pip import __main__; __main__()" (or whatever), you can do precisely that, no setuptools needed.
With the launcher, a .py file with the relevant #! line set pretty much covers things. It's not an exe, although there are very few things I know of that need specifically an exe file, and if you want to omit the ".py" suffix when invoking it you need to add .py to PATHEXT. But actual exe launchers are much less critical nowadays, I believe. What *is* important, though, is some level of consistency. Before setuptools promoted the idea of declarative entry points, distributions shipped with a ridiculous variety of attempts to make cross-platform launchers (many of which didn't work very well). I care a lot more about promoting a consistent cross-platform approach than about arguing for any particular solution... Paul.
On Thu, Jul 11, 2013 at 9:33 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 11 July 2013 13:49, Brett Cannon <brett@python.org> wrote:
The dead-simple, extremely elegant solution (starting in Python 3.4) is to make pip a namespace package in the stdlib with nothing more than a __main__.py file that installs pip; no checking if it's installed and then running it, etc, just blindly install pip. Then, if you install pip as a regular package, it takes precedence and what's in the stdlib is completely ignored (this helps with any possible staleness with the stdlib's bootstrap script vs. what's in pip, etc.). You don't even need to change the __main__.py in pip as it stands today since namespace packages only work if no regular package is found.
Wow - that is exceptionally cool. I had never realised namespace packages would work like this.
Not exceptionally cool ... and that's why the namespace_package form is popular, since the first package in a set of namespace packages that gets it wrong breaks everything.
On Thu, Jul 11, 2013 at 10:29 AM, Daniel Holth <dholth@gmail.com> wrote:
On Thu, Jul 11, 2013 at 9:33 AM, Paul Moore <p.f.moore@gmail.com> wrote:
On 11 July 2013 13:49, Brett Cannon <brett@python.org> wrote:
The dead-simple, extremely elegant solution (starting in Python 3.4) is to make pip a namespace package in the stdlib with nothing more than a __main__.py file that installs pip; no checking if it's installed and then running it, etc, just blindly install pip. Then, if you install pip as a regular package, it takes precedence and what's in the stdlib is completely ignored (this helps with any possible staleness with the stdlib's bootstrap script vs. what's in pip, etc.). You don't even need to change the __main__.py in pip as it stands today since namespace packages only work if no regular package is found.
Wow - that is exceptionally cool. I had never realised namespace packages would work like this.
Not exceptionally cool ... and that's why the namespace_package form is popular, since the first package in a set of namespace packages that gets it wrong breaks everything.
I'm really not following that sentence. You are saying the idea is bad, but is that in general or for this specific case? And you say it's popular because people get it wrong which breaks everything? And how can namespace packages be popular if they are new to Python 3.3 (the ability to execute them with -m is new in Python 3.4)? Are you talking about pkgutil's extend_path hack because I'm talking about NamespaceLoader in importlib? I'm just not seeing the downside. We control the stdlib and pip, so we know the expected interaction and we are purposefully using the override mechanics so it's not going to get messed up by us if we consciously use it (and obviously have tests for it).
On Jul 11, 2013, at 10:47 AM, Brett Cannon <brett@python.org> wrote:
I'm just not seeing the downside. We control the stdlib and pip, so we know the expected interaction and we are purposefully using the override mechanics so it's not going to get messed up by us if we consciously use it (and obviously have tests for it).
I don't think it's especially a problem for pip. I think Daniel was just speaking how the behavior you suggested we could exploit to make this happen has been a major issue for namespace packages in the past using the other methods. However I'm not sure how it's going to work… python -m pip is going to import the pip namespace package yes? And then when pip is installed it'll shadow that, but in the original process where we ran python -m pip won't the namespace package have been cached in sys.modules already? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On Thu, Jul 11, 2013 at 10:52 AM, Donald Stufft <donald@stufft.io> wrote:
On Jul 11, 2013, at 10:47 AM, Brett Cannon <brett@python.org> wrote:
I'm just not seeing the downside. We control the stdlib and pip, so we know the expected interaction and we are purposefully using the override mechanics so it's not going to get messed up by us if we consciously use it (and obviously have tests for it).
I don't think it's especially a problem for pip. I think Daniel was just speaking how the behavior you suggested we could exploit to make this happen has been a major issue for namespace packages in the past using the other methods.
However I'm not sure how it's going to work… python -m pip is going to import the pip namespace package yes? And then when pip is installed it'll shadow that, but in the original process where we ran python -m pip won't the namespace package have been cached in sys.modules already?
Yes, but you can clear it out of sys.modules before executing runpy to get the desired effect of falling through to the regular package (runpy wouldn't import pip.__main__ so you literally just need ``del sys.modules['pip']``). You could also pull the old pkgutil.extend_path() trick and use the append method on the _NamespacePath object to directly add the new directory that pip was installed to and then import pip.runner.main(), but that feels like more of a hack to me (but then again I'm rather comfortable mucking with the import system =).
On Thu, Jul 11, 2013 at 11:50 AM, Brett Cannon <brett@python.org> wrote:
On Thu, Jul 11, 2013 at 10:52 AM, Donald Stufft <donald@stufft.io> wrote:
On Jul 11, 2013, at 10:47 AM, Brett Cannon <brett@python.org> wrote:
I'm just not seeing the downside. We control the stdlib and pip, so we know the expected interaction and we are purposefully using the override mechanics so it's not going to get messed up by us if we consciously use it (and obviously have tests for it).
I don't think it's especially a problem for pip. I think Daniel was just speaking how the behavior you suggested we could exploit to make this happen has been a major issue for namespace packages in the past using the other methods.
However I'm not sure how it's going to work… python -m pip is going to import the pip namespace package yes? And then when pip is installed it'll shadow that, but in the original process where we ran python -m pip won't the namespace package have been cached in sys.modules already?
Yes, but you can clear it out of sys.modules before executing runpy to get the desired effect of falling through to the regular package (runpy wouldn't import pip.__main__ so you literally just need ``del sys.modules['pip']``). You could also pull the old pkgutil.extend_path() trick and use the append method on the _NamespacePath object to directly add the new directory that pip was installed to and then import pip.runner.main(), but that feels like more of a hack to me (but then again I'm rather comfortable mucking with the import system =).
And if you're still worried you can always invalidate the cache of the finder representing the parent directory pip got installed to (or all finder caches if you really want to get jumpy).
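The fall-through Brett and Donald are discussing can be sketched end to end (package name "boot_demo" is made up): import the stdlib-style namespace stub, "install" the real package, then drop the cached module and invalidate the finder caches so a fresh import resolves to the regular package.

```python
# Sketch of the bootstrap fall-through: after installing the real package,
# del sys.modules[...] plus importlib.invalidate_caches() is enough for a
# re-import to pick up the regular package instead of the namespace stub.
import importlib
import os
import sys
import tempfile

stdlib_like = tempfile.mkdtemp()   # pretend stdlib: holds only the stub
site_like = tempfile.mkdtemp()     # pretend site-packages: empty at first
sys.path[:0] = [stdlib_like, site_like]

os.makedirs(os.path.join(stdlib_like, "boot_demo"))  # namespace stub
importlib.invalidate_caches()
import boot_demo
# Namespace packages have no file of their own
assert getattr(boot_demo, "__file__", None) is None

# The "bootstrap" step: install the real package into site-packages
pkg = os.path.join(site_like, "boot_demo")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("REAL = True\n")

del sys.modules["boot_demo"]    # drop the cached namespace stub
importlib.invalidate_caches()   # drop stale directory listings in the finders
import boot_demo                # now resolves to the freshly installed package

print(boot_demo.REAL)
```

Note that invalidate_caches() is genuinely needed here, not just paranoia: the finder for the site-packages stand-in cached its (empty) directory listing during the first import.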
On Jul 11, 2013, at 11:50 AM, Brett Cannon <brett@python.org> wrote:
Yes, but you can clear it out of sys.modules before executing runpy to get the desired effect of falling through to the regular package (runpy wouldn't import pip.__main__ so you literally just need ``del sys.modules['pip']``). You could also pull the old pkgutil.extend_path() trick and use the append method on the _NamespacePath object to directly add the new directory that pip was installed to and then import pip.runner.main(), but that feels like more of a hack to me (but then again I'm rather comfortable mucking with the import system =).
Ok, Just making sure :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
(Oops, started this yesterday, got distracted and never hit send) On 11 July 2013 11:09, Richard Jones <richard@python.org> wrote:
On 11 July 2013 06:50, Paul Moore <p.f.moore@gmail.com> wrote:
I think "python -m pip" should be the canonical form (used in
documentation,
examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
"python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy.
I was thinking about that, and I'm wondering if the most sensible option may be to claim the "getpip" name on PyPI for ourselves and then do the following:

1. Provide "getpip" in the standard library for 3.4+ (and perhaps in a 2.7.x release)
2. Install it to site-packages in the "Python launcher for Windows" installer for earlier versions

getpip would expose at least one function:

    def bootstrap(index_url=None, system_install=False):
        ...

And executing it as a main module would either:

1. Do nothing, if "import pip" already works
2. Call bootstrap with the appropriate arguments

That way, installation instructions can simply say to unconditionally do:

    python -m getpip

And that will either:

1. Report that pip is already installed;
2. Bootstrap pip into the user environment; or
3. Emit a distro-specific message if the distro packagers want to push users to use the system pip instead (since they get to patch the system Python and can tweak the system getpip however they want)

The 2.7 change would then be to create a new download that bundles the Windows launcher into the Windows installer.

Users aren't stupid - the problem with the status quo is really that the bootstrapping instructions are annoyingly complicated and genuinely confusing, not that an explicit bootstrapping step is needed in the first place.

Cheers, Nick.
Thanks everyone for your brilliant feedback and discussion - I look forward to being able to say something sensible about Windows in the PEP :-)
Richard
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
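Nick's outline can be sketched as a module. Everything below is hypothetical (only the bootstrap() signature comes from the proposal above), and the actual download/install step is stubbed out:

```python
# Hypothetical sketch of the proposed "getpip" module. Only the
# bootstrap() signature is from Nick's outline; the body is a stub,
# since a real implementation would download and install pip.

def bootstrap(index_url=None, system_install=False):
    # Placeholder: a real implementation would securely fetch pip
    # (from index_url or PyPI) and install it for the user, or
    # system-wide if system_install is set.
    print("would bootstrap pip here (index_url=%r, system_install=%r)"
          % (index_url, system_install))

def main():
    # Running "python -m getpip": do nothing if pip already imports,
    # otherwise call bootstrap with the appropriate arguments.
    try:
        import pip  # noqa: F401
    except ImportError:
        bootstrap()
        return "bootstrapped pip"
    return "pip is already installed"

if __name__ == "__main__":
    print(main())
```

The point of the try/except shape is that the command is idempotent: instructions can say "always run python -m getpip" without any conditional wording.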
+1. No magic side effects will make everyone happier. On Jul 11, 2013 5:48 PM, "Nick Coghlan" <ncoghlan@gmail.com> wrote:
(Oops, started this yesterday, got distracted and never hit send)
On 11 July 2013 11:09, Richard Jones <richard@python.org> wrote:
On 11 July 2013 06:50, Paul Moore <p.f.moore@gmail.com> wrote:
I think "python -m pip" should be the canonical form (used in
documentation,
examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
"python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy.
I was thinking about that, and I'm wondering if the most sensible option may be to claim the "getpip" name on PyPI for ourselves and then do the following:
1. Provide "getpip" in the standard library for 3.4+ (and perhaps in a 2.7.x release) 2. Install it to site-packages in the "Python launcher for Windows" installer for earlier versions
getpip would expose at least one function:
def bootstrap(index_url=None, system_install=False): ...
And executing it as a main module would either:
1. Do nothing, if "import pip" already works 2. Call bootstrap with the appropriate arguments
That way, installation instructions can simply say to unconditionally do:
python -m getpip
And that will either:
1. Report that pip is already installed; 2. Bootstrap pip into the user environment; or 3. Emit a distro-specific message if the distro packagers want to push users to use the system pip instead (since they get to patch the system Python and can tweak the system getpip however they want)
The 2.7 change would then be to create a new download that bundles the Windows launcher into the Windows installer.
Users aren't stupid - the problem with the status quo is really that the bootstrapping instructions are annoyingly complicated and genuinely confusing, not that an explicit bootstrapping step is needed in the first place.
Cheers, Nick.
Thanks everyone for your brilliant feedback and discussion - I look forward to being able to say something sensible about Windows in the PEP :-)
Richard
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 07/11/2013 03:48 PM, Nick Coghlan wrote:
Users aren't stupid - the problem with the status quo is really that the bootstrapping instructions are annoyingly complicated and genuinely confusing, not that an explicit bootstrapping step is needed in the first place.
+1. This sounds far better to me than the implicit bootstrapping. Carl
On Jul 11, 2013, at 6:00 PM, Carl Meyer <carl@oddbird.net> wrote:
On 07/11/2013 03:48 PM, Nick Coghlan wrote:
Users aren't stupid - the problem with the status quo is really that the bootstrapping instructions are annoyingly complicated and genuinely confusing, not that an explicit bootstrapping step is needed in the first place.
+1. This sounds far better to me than the implicit bootstrapping.
Carl
Generally +1, the one negative point I see is it's kinda a degradation in functionality to need to type ``python -m getpip`` in every PyEnv (coming from virtualenv). Maybe PyEnv can be smart enough to automatically install pip that's installed in the interpreter it's installed from? Maybe that's too much magic and the answer will be that tools like virtualenvwrapper will continue to exist and wrap that for you. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
+1 Explicit is better than implicit. Amending venv to automatically install pip (as suggested by Donald) may be worth doing. I'm +0 on that (with the proviso that there's a --no-pip option in that case). OTOH, the venv module is very extensible and writing your own wrapper to import getpip and call bootstrap is pretty much trivial. On 11 July 2013 22:48, Nick Coghlan <ncoghlan@gmail.com> wrote:
Users aren't stupid - the problem with the status quo is really that the bootstrapping instructions are annoyingly complicated and genuinely confusing, not that an explicit bootstrapping step is needed in the first place.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
I hope we will also arrive at a pip that doesn't need to be individually installed per venv... On Jul 11, 2013 6:13 PM, "Paul Moore" <p.f.moore@gmail.com> wrote:
+1 Explicit is better than implicit.
Amending venv to automatically install pip (as suggested by Donald) may be worth doing. I'm +0 on that (with the proviso that there's a --no-pip option in that case). OTOH, the venv module is very extensible and writing your own wrapper to import getpip and call bootstrap is pretty much trivial.
On 11 July 2013 22:48, Nick Coghlan <ncoghlan@gmail.com> wrote:
(Oops, started this yesterday, got distracted and never hit send)
On 11 July 2013 11:09, Richard Jones <richard@python.org> wrote:
On 11 July 2013 06:50, Paul Moore <p.f.moore@gmail.com> wrote:
I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
"python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy.
I was thinking about that, and I'm wondering if the most sensible option may be to claim the "getpip" name on PyPI for ourselves and then do the following:
1. Provide "getpip" in the standard library for 3.4+ (and perhaps in a 2.7.x release)
2. Install it to site-packages in the "Python launcher for Windows" installer for earlier versions
getpip would expose at least one function:
def bootstrap(index_url=None, system_install=False): ...
And executing it as a main module would either:
1. Do nothing, if "import pip" already works
2. Call bootstrap with the appropriate arguments
That way, installation instructions can simply say to unconditionally do:
python -m getpip
And that will either:
1. Report that pip is already installed;
2. Bootstrap pip into the user environment; or
3. Emit a distro-specific message if the distro packagers want to push users to use the system pip instead (since they get to patch the system Python and can tweak the system getpip however they want)
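A minimal sketch of such a getpip main module, assuming the names from the proposal above. The `bootstrap` body here only prints what it would do; a real implementation would download and install pip (and setuptools) from the index:

```python
def bootstrap(index_url=None, system_install=False):
    """Stub: a real implementation would fetch pip (and setuptools)
    from index_url and install them -- into the user environment by
    default, or system-wide when system_install is True."""
    where = "system" if system_install else "user"
    print("bootstrapping pip into the %s environment" % where)


def main():
    # Do nothing (beyond reporting) if pip is already importable,
    # otherwise bootstrap it.
    try:
        import pip  # noqa: F401
    except ImportError:
        bootstrap()
    else:
        print("pip is already installed")


if __name__ == "__main__":
    main()
```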
The 2.7 change would then be to create a new download that bundles the Windows launcher into the Windows installer.
Users aren't stupid - the problem with the status quo is really that the bootstrapping instructions are annoyingly complicated and genuinely confusing, not that an explicit bootstrapping step is needed in the first place.
Cheers, Nick.
Thanks everyone for your brilliant feedback and discussion - I look forward to being able to say something sensible about Windows in the PEP :-)
Richard
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia On 11 July 2013 06:50, Paul Moore <p.f.moore@gmail.com> wrote:
I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop.
"python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy.
Thanks everyone for your brilliant feedback and discussion - I look forward to being able to say something sensible about Windows in the PEP :-)
Richard
_______________________________________________ Distutils-SIG maillist - Distutils-SIG@python.org http://mail.python.org/mailman/listinfo/distutils-sig
The point of PEP 439 is that the current situation of "but first do this" for any given 3rd-party package installation was a bad thing and we desire to move away from it. The PEP therefore proposes to allow "just do this" to eventually become the narrative. The direction this conversation is heading is removing that very significant primary benefit, and I'm not convinced there's any point to the PEP in that case. Richard
On Jul 11, 2013, at 10:12 PM, Richard Jones <richard@python.org> wrote:
The point of PEP 439 is that the current situation of "but first do this" for any given 3rd-party package installation was a bad thing and we desire to move away from it. The PEP therefore proposes to allow "just do this" to eventually become the narrative. The direction this conversation is heading is removing that very significant primary benefit, and I'm not convinced there's any point to the PEP in that case.
Richard
Now that I think about it some more I agree (and this was one of my sticking points with PyEnvs). There's already an API given to people who want to run a command to install pip: ``curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python``. Now that's platform dependent obviously, but even then I don't see anyone documenting that people should do that before installing things, and I do think that blessing a script like that in the stdlib seems kind of pointless.

The UX of the PEP as written is that whenever you want to install something you run ``pip3 install foo``. The fact that pip _isn't_ bundled with Python and is instead fetched from PyPI is an implementation detail. It provides the major benefit of bundling it with Python without tying packaging to the release cycle of the stdlib (which has proven disastrous with distutils).

We should remember that in general people have considered PyEnv that ships with Python 3.3 an inferior alternative to virtualenv largely in part because they have to fetch setuptools and pip prior to using it whereas in virtualenv they do not.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
Donald Stufft <donald <at> stufft.io> writes:
We should remember that in general people have considered PyEnv that ships with Python 3.3 an inferior alternative to virtualenv largely in part because they have to fetch setuptools and pip prior to using it whereas in virtualenv they do not.
Let's remember, that's a consequence of packaging being pulled from 3.3 - the original plan was to have the ability to install stuff in venvs without third- party software being necessary. There is no real barrier to using setuptools/pip with Python 3.3+ venvs: For example, I published the pyvenvex.py script which creates venvs and installs setuptools and pip in a single step: https://gist.github.com/vsajip/4673395 Admittedly it's "only a Gist" and not especially publicised to the wider community, but that could be addressed. The current situation, as I see it, is a transitional one. When distlib-like functionality becomes available in the stdlib, other approaches will be possible, which improve upon what's possible with setuptools and pip. I've demonstrated some of this using distil. When targeting Python 3.4, shouldn't we be looking further than just advancing the status quo a little bit? It's been said numerous times that "executable setup.py" must go. ISTM that, notwithstanding "practicality beats purity", a pip bootstrap in Python would bless executable setup.py and help to extend its lifespan. Regards, Vinay Sajip
On 12 Jul 2013 18:36, "Vinay Sajip" <vinay_sajip@yahoo.co.uk> wrote:
Donald Stufft <donald <at> stufft.io> writes:
We should remember that in general people have considered PyEnv that ships with Python 3.3 an inferior alternative to virtualenv largely in part because they have to fetch setuptools and pip prior to using it whereas in virtualenv they do not.
Let's remember, that's a consequence of packaging being pulled from 3.3 - the original plan was to have the ability to install stuff in venvs without third-party software being necessary.
There is no real barrier to using setuptools/pip with Python 3.3+ venvs: For example, I published the pyvenvex.py script which creates venvs and installs setuptools and pip in a single step:
https://gist.github.com/vsajip/4673395
Admittedly it's "only a Gist" and not especially publicised to the wider community, but that could be addressed.
The current situation, as I see it, is a transitional one. When distlib-like functionality becomes available in the stdlib, other approaches will be possible, which improve upon what's possible with setuptools and pip. I've demonstrated some of this using distil. When targeting Python 3.4, shouldn't we be looking further than just advancing the status quo a little bit?
It's been said numerous times that "executable setup.py" must go. ISTM that, notwithstanding "practicality beats purity", a pip bootstrap in Python would bless executable setup.py and help to extend its lifespan.
Some day pip will get a "wheel only" mode, and that's the step that will kill off the need to run setup.py on production machines even when using the Python specific tools. Blessing both setuptools and pip as the "obvious way to do it" is designed to give us the wedge we need to start a gradual transition to that world without facing the initial barriers to adoption that were part of what scuttled the distutils2 effort. Cheers, Nick.
Regards,
Vinay Sajip
Nick Coghlan <ncoghlan <at> gmail.com> writes:
Some day pip will get a "wheel only" mode, and that's the step that will kill off the need to run setup.py on production machines even when using the Python specific tools.
Blessing both setuptools and pip as the "obvious way to do it" is designed to give us the wedge we need to start a gradual transition to that world without facing the initial barriers to adoption that were part of what scuttled the distutils2 effort.
I think wheel is a good part of that wedge. Considering the barriers to adoption of distutils2:

1. Distutils2 expected people to migrate their setup.py to setup.cfg while providing only minimal help in doing so. I have gotten quite far in addressing the migration issue, in that I already have fully declarative metadata, *automatically* generated from setup.py / setup.cfg, and distil can do dependency resolution and installation using that metadata for a large number of distributions currently existing on PyPI. The automatic process might not be perfected yet, but it already does much of what one might expect given that it doesn't do e.g. exhaustive analysis of a setup.py to determine all possible code paths, etc. so it can't capture all environment-dependent info.
2. Distutils2 did not do any dependency resolution (not having any index metadata it could rely on for dependency information), but that's not the case with distlib. While it's not a full-blown solver, distlib's dependency resolution appears at least as good as setuptools'.
3. Windows seemed to be an afterthought for distutils2 - that's not the case with distlib. Although it may not be necessary because of the existence of the Python launcher for Windows, distlib has provision for e.g. native executables on Windows, just as setuptools does.
4. Distutils2 did not provide some functionality that setuptools users have come to rely on - e.g. entry points and package resources functionality. Distlib makes good many of these omissions, to the point where an implementation of pip using distlib to replace pkg_resources functionality has been developed and passes the pip test suite.
5. Distutils2 did not support the version scheme that setuptools does, but only the PEP 386 version scheme, which was another migration roadblock. Distlib supports PEP 440 versioning *and* setuptools versioning, so that barrier to adoption is no longer present.
6. Distutils2 did not provide "editable" installations for developers, but distil does (using ordinary .pth files, not setuptools-style "executable" ones).
7. Because wheel was not available in the distutils2 world, it would be hard for distutils2 to provide a build infrastructure as mature as the distutils / setuptools extensions provided by NumPy, SciPy etc. However, now that the wheel specification exists, and wheels can be built using setup.py and installed using distlib, there's much less of a reason to require setuptools and pip at installation time, and more of a reason to give developers reasons to provide their distributions in wheel format.

While I'm not claiming that distlib is feature-complete, and while it doesn't have the benefit of being battle-tested through widespread usage, I'm asserting that it removes the barriers to adoption which distutils2 had, at least those identified above. I'm hoping that those who might disagree will speak up by identifying other barriers to adoption which I've failed to identify, or any requirements that I've failed to address satisfactorily in distlib. Regards, Vinay Sajip
On Fri, Jul 12, 2013 at 4:35 AM, Vinay Sajip <vinay_sajip@yahoo.co.uk>wrote:
Donald Stufft <donald <at> stufft.io> writes:
We should remember that in general people have considered PyEnv that ships with Python 3.3 an inferior alternative to virtualenv largely in part because they have to fetch setuptools and pip prior to using it whereas in virtualenv they do not.
Let's remember, that's a consequence of packaging being pulled from 3.3 - the original plan was to have the ability to install stuff in venvs without third- party software being necessary.
I think it's also a consequence of having to remember how to install pip. I don't have the get-pip.py URL memorized in order to pass it to curl to download for executing. At least with Nick's suggestion there is nothing more to remember than to run getpip right after you create your venv. It's also a consequence of habit and laziness, both of which programmers are notorious about holding on to with both hands as tightly as possible. =)
There is no real barrier to using setuptools/pip with Python 3.3+ venvs: For example, I published the pyvenvex.py script which creates venvs and installs setuptools and pip in a single step:
https://gist.github.com/vsajip/4673395
Admittedly it's "only a Gist" and not especially publicised to the wider community, but that could be addressed.
The example in the venv docs actually does something similar but with distribute and pip: http://docs.python.org/3.4/library/venv.html#an-example-of-extending-envbuil.... I have filed a bug to update it to setuptools: http://bugs.python.org/issue18434 .
The current situation, as I see it, is a transitional one. When distlib-like functionality becomes available in the stdlib, other approaches will be possible, which improve upon what's possible with setuptools and pip. I've demonstrated some of this using distil. When targeting Python 3.4, shouldn't we be looking further than just advancing the status quo a little bit?
It's been said numerous times that "executable setup.py" must go. ISTM that, notwithstanding "practicality beats purity", a pip bootstrap in Python would bless executable setup.py and help to extend its lifespan.
I don't think that analogy is quite fair. It's not like setup.py either runs something if it's installed OR installs it and then continues execution. Having installation code execute arbitrary code is not a good thing, but executing code as part of executing an app makes sense. =) But I do see the point you're trying to make. I'm personally +0 on the explicit install and +1 on the implicit bootstrap. I'm fine with adding a --no-bootstrap option that prevents the implicit install if people want to block it, or even prompting by default if people want to install, with a --bootstrap option for those who want it to happen automatically for script usage. If this were a library we were talking about then I would feel differently, but since this is an app I don't feel bad about it. Then again, as long as the getpip script simply exits quietly if pip is already installed then that's not a big thing either.
On Jul 12, 2013, at 4:35 AM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
The current situation, as I see it, is a transitional one. When distlib-like functionality becomes available in the stdlib, other approaches will be possible, which improve upon what's possible with setuptools and pip. I've demonstrated some of this using distil. When targeting Python 3.4, shouldn't we be looking further than just advancing the status quo a little bit?
It's been said numerous times that "executable setup.py" must go. ISTM that, notwithstanding "practicality beats purity", a pip bootstrap in Python would bless executable setup.py and help to extend its lifespan.
There's very little reason why a pip bootstrap script couldn't unpack a wheel instead of using setup.py. In fact I've advocated for this and plan on contributing a bare-bones wheel installation routine that would work well enough to get pip and setuptools installed.

I'm also against adding distlib-like functionality to the stdlib. At least at this point in time. We've seen the far reaching effects that adding a packaging lib directly to the stdlib can have. I don't want to see us repeat the mistakes of the past and add distlib into the stdlib. Maybe in time, once the packaging world isn't evolving so rapidly and distlib has had a lot of real world use, that can be an option.

The benefit for me in the way the pip/setuptools bootstrap is handled is that it's not merely imported into the stdlib and called done. It'll fetch the latest pip during each bootstrap, making it not a point of stagnation like distutils was.
Donald Stufft <donald <at> stufft.io> writes:
I'm also against adding distlib-like functionality to the stdlib. At least at this point in time. We've seen the far reaching effects that adding a packaging lib directly to the stdlib can have. I don't want to see us repeat the mistakes of the past and add distlib into the stdlib. Maybe in time once the packaging world isn't evolving so rapidly and distlib has had a lot of real world use that can be an option. The benefit for me in the way the pip/
On the question of whether distlib should or shouldn't be added to the stdlib, obviously that's for others to decide. My belief is that infrastructure areas like this need *some* stdlib underpinning. Also, distlib is pretty low-level, at the level of mechanism rather than policy, so there's no reason to be too paranoid about it in general terms. There's also some element of chicken and egg - inertia being what it is, I wouldn't expect *any* new packaging software outside the stdlib to gain significant adoption at any reasonable rate while the status quo is good enough for many people. But the status quo doesn't seem to allow any room for innovation. Distil is completely self-contained and does not require distlib to be in the stdlib, but it already does what could reasonably have been expected of packaging (if it had got into 3.3) and then some. What's more, it doesn't require installing into every venv - one copy covers all venvs (2.6+), user site-packages and system site-packages.
setuptools bootstrap is handled is that it's not merely imported into the stdlib and called done. It'll fetch the latest pip during each bootstrap, making it not a point of stagnation like distutils was.
My pyvenvex script does this now. For venvs, that's the bootstrap right there. Of course, in cases where you want repeatability, getting the latest version each time might not be what you want :-) Regards, Vinay Sajip
On Jul 12, 2013, at 12:16 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Donald Stufft <donald <at> stufft.io> writes:
I'm also against adding distlib-like functionality to the stdlib. At least at this point in time. We've seen the far reaching effects that adding a packaging lib directly to the stdlib can have. I don't want to see us repeat the mistakes of the past and add distlib into the stdlib. Maybe in time once the packaging world isn't evolving so rapidly and distlib has had a lot of real world use that can be an option. The benefit for me in the way the pip/
On the question of whether distlib should or shouldn't be added to the stdlib, obviously that's for others to decide. My belief is that infrastructure areas like this need *some* stdlib underpinning. Also, distlib is pretty low-level, at the level of mechanism rather than policy, so there's no reason to be too paranoid about it in general terms. There's also some element of chicken and egg - inertia being what it is, I wouldn't expect *any* new packaging software outside the stdlib to gain significant adoption at any reasonable rate while the status quo is good enough for many people. But the status quo doesn't seem to allow any room for innovation.
Eh, installing a pure Python Wheel is pretty simple. Especially if you restrict the options it can have. I don't see any reason why the bootstrap script can't include that as an internal implementation detail. I think it's kind of funny when folks say that new packaging software *needs* to be in the standard library when setuptools has pretty emphatically shown us that no it doesn't. People have problems with packaging, solve them without throwing away the world and they'll migrate.
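For pure-Python wheels that claim holds up in the small: a .whl file is just a zip archive, so a bare-bones install of the kind described (ignoring RECORD verification, script generation, *.data directories, and platform tags) could be sketched as:

```python
import zipfile


def install_wheel(wheel_path, target_dir):
    """Bare-bones illustration only: unpack a pure-Python wheel into a
    site-packages-style directory. A real installer would also verify
    RECORD hashes, rewrite entry-point scripts, and handle the wheel's
    *.data directories -- none of which is attempted here."""
    with zipfile.ZipFile(wheel_path) as whl:
        whl.extractall(target_dir)
```

This is roughly the level of routine a bootstrap script could carry internally to get pip and setuptools onto disk without ever running setup.py.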
Distil is completely self-contained and does not require distlib to be in the stdlib, but it already does what could reasonably have been expected of packaging (if it had got into 3.3) and then some. What's more, it doesn't require installing into every venv - one copy covers all venvs (2.6+), user site-packages and system site-packages.
pip used to have this and it was removed as a misfeature as it caused more problems than it solved.
setuptools bootstrap is handled is that it's not merely imported into the stdlib and called done. It'll fetch the latest pip during each bootstrap, making it not a point of stagnation like distutils was.
My pyvenvex script does this now. For venvs, that's the bootstrap right there.
Of course, in cases where you want repeatability, getting the latest version each time might not be what you want :-)
I haven't read your script in depth. But if that's all that's needed let's make sure it's done automatically for folks.
Regards,
Vinay Sajip
Donald Stufft <donald <at> stufft.io> writes:
Eh, installing a pure Python Wheel is pretty simple. Especially if you restrict the options it can have. I don't see any reason why the bootstrap script can't include that as an internal implementation detail.
Sorry, I don't understand what you mean here, in terms of which of my points you are responding to.
I think it's kind of funny when folks say that new packaging software *needs* to be in the standard library when setuptools has pretty emphatically shown us that no it doesn't. People have problems with packaging, solve them without throwing away the world and they'll migrate.
Inertia definitely is a thing - otherwise why complain that an explicit bootstrap is much worse than an implicit one? The difference in work to use one rather than another isn't that great. I'm not saying that distlib (or any equivalent software) *has* or *needs* to be in the stdlib, merely that adoption will be faster if it is, and also that it is the right kind of software (infrastructure) which could reasonably be expected to be in the stdlib of a language which is acclaimed for (amongst other things) "batteries included". Setuptools, while not itself in the stdlib, built on packaging software that was, so the cases are not quite equivalent. Users did not have to do a major shift away from "executable setup.py", but if we're asking them to do that, it's slightly more work to migrate, even if you don't "throw away the world". And of course I agree that easing migration is important, which is why I've worked on migrating setup.py logic to declarative PEP 426, as far as is practicable.
pip used to have this and it was removed as a misfeature as it caused more problems then it solved.
Was it exactly the same? I don't remember this. I'd be interested in the specifics - can you point me to any more detailed information about this?
I haven't read your script in depth
There's not much to it, it shouldn't take too long to review :-) Regards, Vinay Sajip
On Jul 12, 2013, at 1:10 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Donald Stufft <donald <at> stufft.io> writes:
Eh, installing a pure Python Wheel is pretty simple. Especially if you restrict the options it can have. I don't see any reason why the bootstrap script can't include that as an internal implementation detail.
Sorry, I don't understand what you mean here, in terms of which of my points you are responding to.
Maybe I misunderstood your point :) I thought you were saying that by installing pip using setup.py install we are "blessing" setup.py install again? I was saying we don't need to do that.
I think it's kind of funny when folks say that new packaging software *needs* to be in the standard library when setuptools has pretty emphatically shown us that no it doesn't. People have problems with packaging, solve them without throwing away the world and they'll migrate.
Inertia definitely is a thing - otherwise why complain that an explicit bootstrap is much worse than an implicit one? The difference in work to use one rather than another isn't that great. I'm not saying that distlib (or any equivalent software) *has* or *needs* to be in the stdlib, merely that adoption will be faster if it is, and also that it is the right kind of software (infrastructure) which could reasonably be expected to be in the stdlib of a language which is acclaimed for (amongst other things) "batteries included".
Setuptools, while not itself in the stdlib, built on packaging software that was, so the cases are not quite equivalent. Users did not have to do a major shift away from "executable setup.py", but if we're asking them to do that, it's slightly more work to migrate, even if you don't "throw away the world". And of course I agree that easing migration is important, which is why I've worked on migrating setup.py logic to declarative PEP 426, as far as is practicable.
I'm not overly fond of bootstrapping setuptools itself, but I think unless pip comes along and bundles setuptools like it has done distlib it's a necessary evil right now. Ideally, in the future we can move things to where setuptools is just a build tool and isn't something needed at install time unless you're doing a build.

I generally agree that a packaging library is the type of item that belongs in a stdlib; I don't think it belongs in there *yet*. We can work around it not being there, and that means we can be more agile about it and evolve the tooling till we are happy with it, instead of trying to get it in as quickly as possible to make things easier in the short term and possibly harder in the long term.
pip used to have this and it was removed as a misfeature as it caused more problems then it solved.
Was it exactly the same? I don't remember this. I'd be interested in the specifics - can you point me to any more detailed information about this?
I haven't read your script in depth
There's not much to it, it shouldn't take too long to review :-)
Regards,
Vinay Sajip
Donald Stufft <donald <at> stufft.io> writes:
Maybe I misunderstood your point :) I thought you were saying that by installing pip using setup.py install we are "blessing" setup.py install again? I was saying we don't need to do that.
Okay, I see. I'm used to comments referring to points directly above them, and my comment about blessings was at the end of my post. I meant that pip itself, and not just the bootstrap, uses "setup.py install". I would have thought that pip don't need no steenking blessing from anyone :-), but that's what the PEP is about, after all.
I'm not overly fond of bootstrapping setuptools itself, but I think unless pip comes along and bundles setuptools like it has done distlib it's a necessary evil right now. Ideally, in the future we can move things
But aren't you in favour of getting the latest version of setuptools and pip each time?
to where setuptools is just a build tool and isn't something needed at install time unless you're doing a build.
That "unless" - that stops the clean separation between build and install which wheel enables, and which would be a Good Thing to encourage.
I generally agree that a packaging library is the type of item that belongs in a stdlib, I don't think it belongs in there *yet*. We can work around it not being there, and that means we can be more agile about it and evolve the tooling till we are happy with them instead of trying to get it in as quickly as possible to make things easier in the short term and possibly harder in the long term.
Oh, I agree there's no sense in rushing things. But how do we know when we're happy enough (or not) with something? When we try it out, that's when we can form an opinion - not before. It's been a good while since I first announced distil, both as a test-bed for distlib and as a POC for better user experiences with packaging. Apart from Paul Moore (thanks, Paul!), I've had precious little specific feedback from anyone here (and believe me, I'd welcome adverse feedback if it's warranted). It could all be a steaming pile of the proverbial, or the best thing since sliced proverbials, but there's no way to know. Of course there are good reasons for this - we are all busy people. Inertia, thy ways are many :-) Regards, Vinay Sajip
On Fri, Jul 12, 2013 at 12:16 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk>wrote:
Donald Stufft <donald <at> stufft.io> writes:
I'm also against adding distlib-like functionality to the stdlib. At least at this point in time. We've seen the far reaching effects that adding a packaging lib directly to the stdlib can have. I don't want to see us repeat the mistakes of the past and add distlib into the stdlib. Maybe in time once the packaging world isn't evolving so rapidly and distlib has had a lot of real world use that can be an option. The benefit for me in the way the pip/
On the question of whether distlib should or shouldn't be added to the stdlib, obviously that's for others to decide. [SNIP]
Speaking with my python-dev hat on (which has a badge from when I led the stdlib cleanup for Python 3), I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform and expected to be maintained and work. Beyond that, code will exist outside the stdlib.
On Jul 12, 2013, at 2:00 PM, Brett Cannon <brett@python.org> wrote:
Speaking with my python-dev hat on which has a badge from when I led the stdlib cleanup for Python 3, I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform and expected to be maintained and work. Beyond that code will exist outside the stdlib.
This is basically the exact opposite of what Nick has said the intent has been (Ecosystem first). Adding packaging tools beyond bootstrapping pip at this point in the game is IMO a huge mistake. If what Nick has said and PEPs are not appropriate for things that don't have a module in the standard lib well that's fine I guess. I just won't worry about trying to write PEPs :)
On Fri, Jul 12, 2013 at 2:16 PM, Donald Stufft <donald@stufft.io> wrote:
On Jul 12, 2013, at 2:00 PM, Brett Cannon <brett@python.org> wrote:
Speaking with my python-dev hat on, which has a badge from when I led the stdlib cleanup for Python 3, I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform, and expected to be maintained and to work. Beyond that, code will exist outside the stdlib.
This is basically the exact opposite of what Nick has said the intent has been (Ecosystem first).
Not at all as no module will go in immediately until after a PEP has landed and been vetted as needed.
Adding packaging tools beyond bootstrapping pip at this point in the game is IMO a huge mistake. If, contrary to what Nick has said, PEPs are not appropriate for things that don't have a module in the standard lib, well, that's fine I guess.
You misunderstand what I mean. I'm just saying that *if* anything were to go into the stdlib it would only be to have code which implements a PEP in the stdlib to prevent everyone from re-implementing a standard.
I just won't worry about trying to write PEPs :)
No, the PEPs are important to prevent version skew and make sure everyone is on the same page. And that's also what a module in the stdlib would do; make sure everyone is on the same page in terms of semantics by using a single code base. I mean I wouldn't expect anything more than maybe code parsing the JSON metadata that does some validation and parsing version numbers that can support comparisons and verifying platform requirements; that's it. Stuff that every installation tool will need to do in order to follow the PEPs properly. And it wouldn't go in until everyone was very happy with the PEPs and ready to commit to code enshrining it in the stdlib. Otherwise I hope distlib moves into PyPA and everyone who develops installation tools, etc. uses that library.
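The kind of minimal PEP-following helper Brett describes might look something like the sketch below. This is purely illustrative: the field names and validation rules are assumptions, not taken from any metadata PEP.

```python
import json

# Hedged sketch of a minimal stdlib-style helper: parse JSON metadata and
# check that the fields every installation tool needs are present.
# The required-field set here is illustrative, not the PEP's actual list.
REQUIRED_FIELDS = {"metadata_version", "name", "version"}

def validate_metadata(raw):
    """Parse a JSON metadata document and check for required keys."""
    meta = json.loads(raw)
    missing = REQUIRED_FIELDS - set(meta)
    if missing:
        raise ValueError("missing fields: " + ", ".join(sorted(missing)))
    return meta

good = '{"metadata_version": "2.0", "name": "example", "version": "1.0"}'
print(validate_metadata(good)["name"])  # example
```

The point is only that such a helper is small, and that every tool would otherwise re-implement the same checks.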
On Jul 12, 2013, at 3:25 PM, Brett Cannon <brett@python.org> wrote:
On Fri, Jul 12, 2013 at 2:16 PM, Donald Stufft <donald@stufft.io> wrote:
On Jul 12, 2013, at 2:00 PM, Brett Cannon <brett@python.org> wrote:
Speaking with my python-dev hat on, which has a badge from when I led the stdlib cleanup for Python 3, I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform, and expected to be maintained and to work. Beyond that, code will exist outside the stdlib.
This is basically the exact opposite of what Nick has said the intent has been (Ecosystem first).
Not at all as no module will go in immediately until after a PEP has landed and been vetted as needed.
Adding packaging tools beyond bootstrapping pip at this point in the game is IMO a huge mistake. If, contrary to what Nick has said, PEPs are not appropriate for things that don't have a module in the standard lib, well, that's fine I guess.
You misunderstand what I mean. I'm just saying that *if* anything were to go into the stdlib it would only be to have code which implements a PEP in the stdlib to prevent everyone from re-implementing a standard.
I just won't worry about trying to write PEPs :)
No, the PEPs are important to prevent version skew and make sure everyone is on the same page. And that's also what a module in the stdlib would do; make sure everyone is on the same page in terms of semantics by using a single code base.
I mean I wouldn't expect anything more than maybe code parsing the JSON metadata that does some validation and parsing version numbers that can support comparisons and verifying platform requirements; that's it. Stuff that every installation tool will need to do in order to follow the PEPs properly. And it wouldn't go in until everyone was very happy with the PEPs and ready to commit to code enshrining it in the stdlib. Otherwise I hope distlib moves into PyPA and everyone who develops installation tools, etc. uses that library.
I could probably be convinced about something that makes handling versions easier going into the standard lib, but that's about it. There's a few reasons that I don't want these things added to the stdlib themselves. One of the major ones is that of "agility". We've seen with distutils how impossible it can be to make improvements to the system. Now some of this is made better with the way the new system is being designed, with versioned metadata, but it doesn't completely go away. We can look at Python's past to see just how long any individual version sticks around, and we can assume that if something gets added now, that particular version will be around for a long time. Another is because of how long it can take a new version of Python to become "standard", especially in the 3.x series since the entire 3.x series itself isn't standard; any changes made to the standard lib won't be usable for years and years. This can be mitigated by releasing a backport on PyPI, but if every version of Python but the latest one is going to require installing these libs from PyPI in order to usefully interact with the "world", then you might as well just require all versions of Python to install bits from PyPI. Yet another is that by blessing a particular implementation, that implementation's behaviors become the standard (indeed, the way the PEP system generally works for this is that once it's been added to the standard lib, the PEP is a historical document and the documentation becomes the standard). However, packaging is not like Enums or urllib or smtp. We are essentially defining a protocol, one that non-Python tools will be expected to use (for Debian and RPMs, for example). We are using these PEPs more like an RFC than a proposal to include something in the stdlib. There's also the case of usefulness. You mention some code that can parse the JSON metadata and validate it. Well, assuming we'll have the metadata for 2.0 set down by the time 3.4 comes around.
So sure, 3.4 could have that, but then maybe we release metadata 2.1 and now 3.4 can only parse _some_ of the metadata. Maybe we release a metadata 3.0 and now it can't parse any metadata. But even if it can parse the metadata, what does it do with it? The major places you'd be validating the metadata (other than merely consuming it) are either in the tools that create packages or in PyPI performing checks on a valid file upload. In the build tool case they are going to either need to write their own code for actually creating the package or, more likely, they'll reuse something like distlib. If those tools are already going to be using a distlib-like library then we might as well just keep the validation code in there. Now the version parsing stuff, which I said I could be convinced about, is slightly different. It is really sort of its own thing. It's not dependent on the other pieces of packaging to be useful, and it's not versioned. It's also the only bit that's really useful on its own. People consuming the (future) PyPI API could use it to fully depict the actual metadata, so it's kind of like JSON itself in that regard. On the installer side of things, the purist side of me doesn't like adding it to the standard library for all the same reasons, but the pragmatic side of me wants it there because it enables fetching the other bits that are needed for "pip install X" to be a reasonable official response to these kinds of questions. But I pushed for, and still believe, that if a prerequisite for doing that involves "locking" in pip or any of its dependencies by adding them to the standard library, then I am vehemently against doing it. Wow, that was a lot of words...
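The standalone version-ordering piece Donald describes can be sketched in a few lines. This is a toy, hedged heavily: real schemes (PEP 386 and its successors) are much richer, and this handles only dotted releases plus an optional pre-release tag like "a1" or "rc2".

```python
import re

# Illustrative ordering for pre-release tags; final releases get a
# sentinel (3, 0) so they sort after any a/b/rc pre-release.
PRE_ORDER = {"a": 0, "b": 1, "rc": 2}

def version_key(v):
    """Turn '1.0rc1'-style strings into tuples that sort correctly."""
    m = re.match(r"(\d+(?:\.\d+)*)(?:(a|b|rc)(\d+))?$", v)
    if not m:
        raise ValueError("unparseable version: %r" % v)
    release = tuple(int(part) for part in m.group(1).split("."))
    if m.group(2):
        pre = (PRE_ORDER[m.group(2)], int(m.group(3)))
    else:
        pre = (3, 0)
    return (release, pre)

print(sorted(["1.0rc1", "0.9", "1.0", "1.0a2"], key=version_key))
# ['0.9', '1.0a2', '1.0rc1', '1.0']
```

Even this toy shows why the piece is "its own thing": it needs nothing else from the packaging toolchain to be useful.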
Donald Stufft <donald <at> stufft.io> writes:
I could probably be convinced about something that makes handling versions easier going into the standard lib, but that's about it.
That seems completely arbitrary to me. Why just versions? Why not, for example, support for the wheel format? Why not agreed metadata formats?
There's a few reasons that I don't want these things added to the stdlib themselves.
One of the major ones is that of "agility". We've seen with distutils how impossible it can be to make improvements to the system. Now some of this
You say that, but setuptools, the poster child of packaging, improved quite a lot on distutils. I'm not convinced that it would have been as successful if there were no distutils in the stdlib, but of course you may disagree. I'm well aware of the "the stdlib is where software goes to die" school of thought, and I have considerable sympathy for where it's coming from, but let's not throw the baby out with the bathwater. The agility argument could be made for lots of areas of functionality, to the point where you just basically never add anything new to the stdlib because you're worried about an inability to cope with change. Also, it doesn't seem right to point to particular parts of the stdlib which were hard to adapt to changing requirements and draw the conclusion that all software added to the stdlib would be equally hard to adapt. Of course one could look at a specific piece of software and assess its adaptability, but otherwise, isn't it verging on just arm-waving?
is made better with the way the new system is being designed with versioned metadata but it doesn't completely go away. We can look at Python's past to see just how long any individual version sticks around and we can assume that if something gets added now that particular version will be around for a long time.
That doesn't mean that overall improvements can't take place in the stdlib. For example, getopt -> optparse -> argparse.
Another is because of how long it can take a new version of Python to become "standard", especially in the 3.x series since the entire 3.x series itself isn't standard, any changes made to the standard lib won't be usable for years and years. This can be mitigated by releasing a backport on PyPI, but if every version of Python but the latest one is going to require installing these libs from PyPI in order to usefully interact with the "world", then you might as well just require all versions of Python to install bits from PyPI.
Well, other approaches have been looked at - for example, accepting things into the stdlib but warning users about the provisional nature of some APIs. I think that where interoperability between different packaging tools is needed, that's where the argument for something in the stdlib is strongest, as Brett said.
Yet another is that by blessing a particular implementation, that implementation's behaviors become the standard (indeed, the way the PEP system generally works for this is that once it's been added to the standard lib, the PEP is a historical document and the documentation becomes the standard). However, packaging is
That's because the PEP is needed to advocate the inclusion in the stdlib and as a record of the discussion and rationale for accepting/rejecting whatever was advocated, but there's no real benefit in keeping the PEP updated as the stdlib component gets refined from its real-world exposure through being in the stdlib.
not like Enums or urllib or smtp. We are essentially defining a protocol, one that non-Python tools will be expected to use (for Debian and RPMs, for example). We are using these PEPs more like an RFC than a proposal to include something in the stdlib.
But we can assume that there will either be N different implementations of everything in the RFCs from the ground up, by N different tools, or ideally one canonical implementation in the stdlib that the tool makers can use (but are not forced to use if they don't want to). You might say that if there were some kick-ass implementation of these RFCs on PyPI people would just gravitate to it and the winner would be obvious, but I don't see things working like that. In the web space, look at HTTP Request/Response objects as an example: Pyramid, Werkzeug, Django all have their own, don't really interoperate in practice (though it was a goal of WSGI), and there's very little to choose between them technically. Just a fair amount of duplicated effort on something so low-level, which would have been better spent on truly differentiating features.
There's also the case of usefulness. You mention some code that can parse the JSON metadata and validate it. Well, assuming we'll have the metadata for 2.0 set down by the time 3.4 comes around. So sure, 3.4 could have that, but then maybe we release metadata 2.1 and now 3.4 can only parse _some_ of the metadata. Maybe we release a metadata 3.0 and now it can't parse any metadata. But even if it can parse the metadata, what does it do with it? The major places you'd be validating the metadata (other than merely consuming it) are either in the tools that create packages or in PyPI performing checks on a valid file upload. In the build tool case they are going to either need to write their own code for actually creating the package or, more likely, they'll reuse something like distlib. If those tools are already going to be using a distlib-like library then we might as well just keep the validation code in there.
Is that some blessed-by-being-in-the-stdlib kind of library that everyone uses, or one of several balkanised versions a la HTTP Request / Response? If it's not somehow blessed, why should a particular packaging project use it, even if it's technically up to the job?
Now the version parsing stuff, which I said I could be convinced about, is slightly different. It is really sort of its own thing. It's not dependent on the other pieces of packaging to be useful, and it's not versioned. It's also the only bit that's really useful on its own. People consuming the (future) PyPI API could use it to fully depict the actual metadata, so it's kind of like JSON itself in that regard.
That's only because some effort has gone into looking at version comparisons, ordering, pre-/post-/dev-releases, etc. and considering the requirements in some detail. It looks OK now, but so did PEP 386 to many people who hadn't considered the ordering of dev versions of pre-/post-releases. Who's to say that some other issue won't come up that we haven't considered? It's not a reason for doing nothing.
On the installer side of things, the purist side of me doesn't like adding it to the standard library for all the same reasons, but the pragmatic side of me wants it there because it enables fetching the other bits that are needed for "pip install X" to be a reasonable official response to these kinds of questions. But I pushed for, and still believe, that if a prerequisite for doing that involves "locking" in pip or any of its dependencies by adding them to the standard library, then I am vehemently against doing it.
Nobody seems to be suggesting doing that, though.

Regards,
Vinay Sajip
On Jul 12, 2013, at 7:14 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Donald Stufft <donald <at> stufft.io> writes:
I could probably be convinced about something that makes handling versions easier going into the standard lib, but that's about it.
That seems completely arbitrary to me. Why just versions? Why not, for example, support for the wheel format? Why not agreed metadata formats?
As I said in my email, because it's more or less standalone and it has the greatest utility outside of installers/builders/archivers/indexes.
There's a few reasons that I don't want these things added to the stdlib themselves.
One of the major ones is that of "agility". We've seen with distutils how impossible it can be to make improvements to the system. Now some of this
You say that, but setuptools, the poster child of packaging, improved quite a lot on distutils. I'm not convinced that it would have been as successful if there were no distutils in the stdlib, but of course you may disagree.
I've looked at many other languages where they had widely successful packaging tools that weren't added to the standard lib until they were ubiquitous and stable, something the new tools for Python are not. So I don't think adding it to the standard library is required. And setuptools improved it *outside* of the standard library, while distutils itself stagnated. I would venture to guess that if distutils *hadn't* been in the standard library, then setuptools could have simply been patches to distutils, instead of needing essentially to "replace" distutils while happening to reuse some of its functionality. So pointing towards setuptools just exposes the fact that improving it in the standard library was hard enough that it was done externally.
I'm well aware of the "the stdlib is where software goes to die" school of thought, and I have considerable sympathy for where it's coming from, but let's not throw the baby out with the bathwater. The agility argument could be made for lots of areas of functionality, to the point where you just basically never add anything new to the stdlib because you're worried about an inability to cope with change. Also, it doesn't seem right to point to particular parts of the stdlib which were hard to adapt to changing requirements and draw the conclusion that all software added to the stdlib would be equally hard to adapt. Of course one could look at a specific piece of software and assess its adaptability, but otherwise, isn't it verging on just arm-waving?
Well, I am of the mind that the standard library is where software goes to die, and I'm also of the mind that a smaller standard library and a strong packaging story and ecosystem are far superior. But that's not what I'm advocating here. A key point for almost every other part of the standard library is that if it stagnates or falls behind or is unable to adapt, then you simply don't use it. This is not a hard thing to do for something like httplib, urllib2, urllib, etc., because it's what people have *done* in projects like requests. One person's choice to use urllib in his software has little to no bearing on someone else who might choose to use requests. However, a packaging system needs interoperability. My choice to use a particular piece of packaging software, if there is no interoperability, DRASTICALLY affects you if you want to use my software at all. A huge thing I've been trying to push for is decoupling packaging from a specific implementation so that we have a "protocol" (ala HTTP) and not a "tool" (ala distutils). However, the allure of working to the implementation and not the standard is fairly high when there is a singular blessed implementation.
is made better with the way the new system is being designed with versioned metadata but it doesn't completely go away. We can look at Python's past to see just how long any individual version sticks around and we can assume that if something gets added now that particular version will be around for a long time.
That doesn't mean that overall improvements can't take place in the stdlib. For example, getopt -> optparse -> argparse.
It's funny you picked an example where improvements *couldn't* take place and the entire system had to be thrown out and a new one written. getopt had to become a new module named optparse, which had to become a new module named argparse, in order to make changes to it. I don't think we need to have distutils, distlib, futurelib, even-further-futurelib, and I think that would make packaging even more confusing than it needs to be. This also ties in with the above, where one person's use of getopt instead of argparse doesn't drastically affect another person using a different one.
Another is because of how long it can take a new version of Python to become "standard", especially in the 3.x series since the entire 3.x series itself isn't standard, any changes made to the standard lib won't be usable for years and years. This can be mitigated by releasing a backport on PyPI, but if every version of Python but the latest one is going to require installing these libs from PyPI in order to usefully interact with the "world", then you might as well just require all versions of Python to install bits from PyPI.
Well, other approaches have been looked at - for example, accepting things into the stdlib but warning users about the provisional nature of some APIs.
Provisional APIs still exist in that version of Python, and the only way someone would get a new one is by installing a package. I think that this makes the problem even *worse*, because now you're adding APIs to the standard library that have a good chance of needing to change, and of needing to require people to install a package (with no good way to communicate to someone that they need to update it, since it's a standard library package and not a versioned installed package).
I think that where interoperability between different packaging tools is needed, that's where the argument for something in the stdlib is strongest, as Brett said.
You can gain interoperability in a few ways. One way is to just pick an implementation and make that the standard. Another is to define *actual* standards. The second one is harder, requires more thought and work. But it means that completely different software can work together. It means that something written in Ruby can easily work with a python package without shelling out to Python or without trying to copy all the implementation details and having to guess which ones are significant or not.
Yet another is that by blessing a particular implementation, that implementation's behaviors become the standard (indeed, the way the PEP system generally works for this is that once it's been added to the standard lib, the PEP is a historical document and the documentation becomes the standard). However, packaging is
That's because the PEP is needed to advocate the inclusion in the stdlib and as a record of the discussion and rationale for accepting/rejecting whatever was advocated, but there's no real benefit in keeping the PEP updated as the stdlib component gets refined from its real-world exposure through being in the stdlib.
And that's fine for a certain class of problems. It's not that useful for something where you want interoperability outside of that tool. How terrible would it be if HTTP were "well, whatever Apache does, that's what HTTP is".
not like Enums or urllib or smtp. We are essentially defining a protocol, one that non-Python tools will be expected to use (for Debian and RPMs, for example). We are using these PEPs more like an RFC than a proposal to include something in the stdlib.
But we can assume that there will either be N different implementations of everything in the RFCs from the ground up, by N different tools, or ideally one canonical implementation in the stdlib that the tool makers can use (but are not forced to use if they don't want to). You might say that if there were some kick-ass implementation of these RFCs on PyPI people would just gravitate to it and the winner would be obvious, but I don't see things working like that. In the web space, look at HTTP Request/Response objects as an example: Pyramid, Werkzeug, Django all have their own, don't really interoperate in practice (though it was a goal of WSGI), and there's very little to choose between them technically. Just a fair amount of duplicated effort on something so low-level, which would have been better spent on truly differentiating features.
A singular blessed tool in the standard library incentivizes the standard becoming an implementation detail. I *want* there to be multiple implementations written by different people working on different "slices" of the problem. That incentivizes doing the extra work on PEPs and other documents so that we maintain a highly documented standard. It's true that adding something to the standard library doesn't rule that out, but it provides an incentive against properly doing standards, because it's easier and simpler to just change it in the implementation.
There's also the case of usefulness. You mention some code that can parse the JSON metadata and validate it. Well, assuming we'll have the metadata for 2.0 set down by the time 3.4 comes around. So sure, 3.4 could have that, but then maybe we release metadata 2.1 and now 3.4 can only parse _some_ of the metadata. Maybe we release a metadata 3.0 and now it can't parse any metadata. But even if it can parse the metadata, what does it do with it? The major places you'd be validating the metadata (other than merely consuming it) are either in the tools that create packages or in PyPI performing checks on a valid file upload. In the build tool case they are going to either need to write their own code for actually creating the package or, more likely, they'll reuse something like distlib. If those tools are already going to be using a distlib-like library then we might as well just keep the validation code in there.
Is that some blessed-by-being-in-the-stdlib kind of library that everyone uses, or one of several balkanised versions a la HTTP Request / Response? If it's not somehow blessed, why should a particular packaging project use it, even if it's technically up to the job?
It's not blessed, and a particular packaging project should use it if it fits their needs and they want to use it, or they shouldn't use it if they don't want to. Standards exist for a reason: so you can have multiple implementations that all work together.
Now the version parsing stuff, which I said I could be convinced about, is slightly different. It is really sort of its own thing. It's not dependent on the other pieces of packaging to be useful, and it's not versioned. It's also the only bit that's really useful on its own. People consuming the (future) PyPI API could use it to fully depict the actual metadata, so it's kind of like JSON itself in that regard.
That's only because some effort has gone into looking at version comparisons, ordering, pre-/post-/dev-releases, etc. and considering the requirements in some detail. It looks OK now, but so did PEP 386 to many people who hadn't considered the ordering of dev versions of pre-/post-releases. Who's to say that some other issue won't come up that we haven't considered? It's not a reason for doing nothing.
I didn't make any claims as to its stability or the amount of testing that went into it. My ability to be convinced of that stems primarily from the fact that it's sort of a side piece of the whole packaging infrastructure and toolchain, and it's also the piece that is most likely to be useful on its own.
On the installer side of things, the purist side of me doesn't like adding it to the standard library for all the same reasons, but the pragmatic side of me wants it there because it enables fetching the other bits that are needed for "pip install X" to be a reasonable official response to these kinds of questions. But I pushed for, and still believe, that if a prerequisite for doing that involves "locking" in pip or any of its dependencies by adding them to the standard library, then I am vehemently against doing it.
Nobody seems to be suggesting doing that, though.
I was (trying?) to explain that my belief doesn't extend only to distlib here, but to the entire toolchain.
Regards,
Vinay Sajip
On 13 July 2013 01:02, Donald Stufft <donald@stufft.io> wrote:
On the installer side of things, the purist side of me doesn't like adding it to the standard library for all the same reasons, but the pragmatic side of me wants it there because it enables fetching the other bits that are needed for "pip install X" to be a reasonable official response to these kinds of questions. But I pushed for, and still believe, that if a prerequisite for doing that involves "locking" in pip or any of its dependencies by adding them to the standard library, then I am vehemently against doing it.
Nobody seems to be suggesting doing that, though.
I was (trying?) to explain that my belief doesn't extend only to distlib here, but to the entire toolchain.
(The above quote may not be the best point to comment on, but I wanted to avoid quoting the whole text just to make a general point on this subject.)

In my view packaging (specifically install) tools are a bit different from other things, because generally packaging tools with dependencies suck. Look at pip's reliance on setuptools, for example. Managing setuptools with pip is a pain, and bootstrapping pip involves getting setuptools installed without already having pip available. I'm +1 on having basic infrastructure in the stdlib, because that way people can concentrate on innovating in more important areas of packaging rather than endlessly reinventing basic stuff. The trick is knowing what counts as basic infrastructure. Things I have *regularly* come up against, over a long period of writing little tools:

1. Version parsing and ordering (I usually use distutils.LooseVersion, just because it's available and "close enough" :-()
2. Reading metadata from distributions (not just parsing it, but getting it out of dist-info files or sdists and the like as well)
3. Installing wheels
4. Locating distributions on PyPI or local indexes

At the moment, my choices are to write my own code (usually meaning far more code than the actual functionality I want to write!), require distlib (not good in a tool for building zero-dependency venvs from the ground up, for example), or vendor in distlib (impractical in a one-file script). Having something in the stdlib (even if it's only able to bootstrap distlib or an alternative) solves all of these issues.

Paul
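Item 2 on the list above is a good example of how small these recurring needs are: core metadata in dist-info METADATA or PKG-INFO files uses RFC 822-style headers, so the stdlib email parser can read it. A hedged sketch, with a made-up sample document:

```python
from email.parser import Parser

# Illustrative PKG-INFO / METADATA content; the values are invented.
SAMPLE = """\
Metadata-Version: 1.2
Name: example
Version: 0.3
Requires-Dist: requests
Requires-Dist: six
"""

def read_metadata(text):
    """Pull a few core fields out of RFC 822-style metadata text."""
    msg = Parser().parsestr(text)
    return {
        "name": msg["Name"],
        "version": msg["Version"],
        # Requires-Dist may repeat, so collect all occurrences.
        "requires": msg.get_all("Requires-Dist") or [],
    }

info = read_metadata(SAMPLE)
print(info["name"], info["version"], info["requires"])
# example 0.3 ['requests', 'six']
```

Each little tool ends up carrying some variant of this; that's the "basic infrastructure" argument in miniature.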
From: Donald Stufft <donald@stufft.io>
As I said in my email, because it's more or less standalone and it has the greatest utility outside of installers/builders/archivers/indexes.
Even if that were true, it doesn't mean that it's the *only* thing that's worth considering.
I've looked at many other languages where they had widely successful packaging tools that weren't added to the standard lib until they were ubiquitous and stable, something the new tools for Python are not. So I don't think adding it to the standard library is required.
As I said earlier, I'm not arguing for *premature* inclusion of distlib or anything else in the stdlib. I'm only saying that there's less likelihood that any one approach outside the stdlib will get universally adopted, leading to balkanisation.
to reuse some of its functionality. So pointing towards setuptools just exposes the fact that improving it in the standard library was hard enough that it was done externally.
It seems like it wasn't for technical reasons that this approach was taken, just as Distribute wasn't forked from setuptools for technical reasons.
Well I am of the mind that the standard library is where software goes to die, and
No kidding? :-)
want to use my software at all. A huge thing I've been trying to push for is decoupling packaging from a specific implementation so that we have a "protocol" (ala HTTP) and not a "tool" (ala distutils). However, the allure of working to the implementation and not the standard is fairly high when there is a singular blessed implementation.
I'm not aware of this - have you published any protocols around the work you're doing on warehouse, which Nick said was going to be the next-generation PyPI?
It's funny you picked an example where improvements *couldn't* take place and the entire system had to be thrown out and a new one written. getopt had to become a new module named optparse, which had to become a new module named argparse
I picked that example specifically to show that even if things go wrong, it's not the end of the world.
You can gain interoperability in a few ways. One way is to just pick an implementation
If that were done, it wouldn't make any difference whether the thing picked were in the stdlib or not. But people have a tendency to roll their own stuff, whether there's a good technical reason or not.
and make that the standard. Another is to define *actual* standards. The second one is harder and requires more thought and work, but it means that completely different software can work together. It means that something written in Ruby can easily work with a Python package without shelling out to Python.
That's exactly why there are all these packaging PEPs around, isn't it?
And that's fine for a certain class of problems. It's not that useful for something where you want interoperability outside of that tool. How terrible would it be if HTTP was "well whatever Apache does, that's what HTTP is".
That wouldn't have been so terrible if you replace "Apache" with "W3C", since you would have a reference implementation by the creators of the standard.
A singular blessed tool in the standard library incentivizes the standard becoming an implementation detail. I *want* there to be multiple implementations written by different people working on different "slices" of the problem. That incentivizes doing the extra work on PEPs and other documents so that we maintain a highly documented standard. It's true that adding something to the standard library doesn't rule that out, but it provides an incentive against doing standards properly, because it's easier and simpler to just change the implementation.
Are you planning to produce any standards relating to PyPI-like functionality? This is important for the dependency resolution "slice", amongst others. The flip side of this coin is, talking in the abstract without any working code is sub-optimal. It's reasonable for standards and implementations of them to grow together, because each informs the other, at least in the early stages. Most standards PEPs are accepted with a reference implementation in place.
It's not blessed; a particular packaging project should use it if it fits their needs and they want to, and not use it otherwise. Standards exist for a reason: so you can have multiple implementations that all work together.
That's true independent of whether one particular implementation of the standard is blessed in some way.
I didn't make any claims as to its stability or the amount of testing that went into it. My ability to be convinced of that stems primarily from the fact that it's sort of a side piece of the whole packaging infrastructure and toolchain, and it's also the piece that is most likely to be useful on its own.
But the arguments about agility and stability apply to any software - version-handling doesn't get a special pass. Proper version handling is central to dependency resolution and is hardly a side issue, though it's not especially complicated. I'll just finish by re-iterating that I think there should be some stdlib underpinning for packaging in general, and that there should be some focus on exactly what that underpinning should be, and that I'm by no means saying that distlib is it. I consider distlib as still in its early days but showing some promise (and deserving of more peer review than it has received to date). Regards, Vinay Sajip
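The point about version handling being central to dependency resolution can be illustrated with a naive sketch. This is not distlib's implementation; it only shows why parsed comparison beats string comparison (real scheme handling, with pre-releases, epochs and so on, is considerably more involved):

```python
def version_key(version):
    """Naive sketch: turn a dotted release string into an int tuple.

    String comparison orders "1.10" before "1.9"; a parsed key fixes that.
    Real version-scheme handling (pre-releases, epochs, local versions)
    is much more involved -- this is purely illustrative.
    """
    return tuple(int(part) for part in version.split("."))

# String ordering gets it wrong; numeric ordering gets it right.
assert "1.10" < "1.9"
assert version_key("1.10") > version_key("1.9")
```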
On 13 July 2013 13:12, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I'll just finish by re-iterating that I think there should be some stdlib underpinning for packaging in general, and that there should be some focus on exactly what that underpinning should be, and that I'm by no means saying that distlib is it. I consider distlib as still in its early days but showing some promise (and deserving of more peer review than it has received to date).
+1 to all of this Paul
On Jul 13, 2013, at 8:12 AM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
I'm not aware of this - have you published any protocols around the work you're doing on warehouse, which Nick said was going to be the next-generation PyPI?
I think we're talking past each other at this point, but I wanted to respond to this point. Warehouse will evolve by publishing standards, yes. Currently it's not making API changes and is primarily working on taking the existing APIs and porting them to a modern framework, adding tests, etc. I do have some changes I want to make to the API, and I've started a PEP to propose them that, once done, will be published for discussion here on distutils-sig. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
On 13 July 2013 05:25, Brett Cannon <brett@python.org> wrote:
On Fri, Jul 12, 2013 at 2:16 PM, Donald Stufft <donald@stufft.io> wrote:
On Jul 12, 2013, at 2:00 PM, Brett Cannon <brett@python.org> wrote:
Speaking with my python-dev hat on, which has a badge from when I led the stdlib cleanup for Python 3, I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform and expected to be maintained and work. Beyond that, code will exist outside the stdlib.
This is basically the exact opposite of what Nick has said the intent has been (Ecosystem first).
Not at all as no module will go in immediately until after a PEP has landed and been vetted as needed.
Adding packaging tools beyond bootstrapping pip at this point in the game is IMO a huge mistake. If, given what Nick has said, PEPs are not appropriate for things that don't have a module in the standard lib, well, that's fine I guess.
You misunderstand what I mean. I'm just saying that *if* anything were to go into the stdlib it would only be to have code which implements a PEP in the stdlib to prevent everyone from re-implementing a standard.
What Brett is saying is closer to what I was thinking when we were discussing this at the language summit. However, I'm no longer sure distlib will be quite baked enough to suggest bundling it in 3.4, in which case it will only be a "pip install distlib" away (that's the entire point of PEP 439).
I just won't worry about trying to write PEPs :)
No, the PEPs are important to prevent version skew and make sure everyone is on the same page. And that's also what a module in the stdlib would do; make sure everyone is on the same page in terms of semantics by using a single code base.
I mean, I wouldn't expect anything more than maybe code that parses the JSON metadata and does some validation, parses version numbers in a way that supports comparisons, and verifies platform requirements; that's it. Stuff that every installation tool will need to do in order to follow the PEPs properly. And it wouldn't go in until everyone was very happy with the PEPs and ready to commit to code enshrining it in the stdlib. Otherwise, I hope distlib moves into PyPA and everyone who develops installation tools, etc. uses that library.
Vinay already moved both distlib and pylauncher over to the PyPA account on BitBucket: https://bitbucket.org/pypa/ PEP 439 is the critical piece, since that's the one that takes the pressure off getting the other components (including distlib and pkg_resources) into the base installers. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 12 July 2013 16:17, Donald Stufft <donald@stufft.io> wrote:
There's very little reason why a pip bootstrap script couldn't unpack a wheel instead of using setup.py. In fact, I've advocated for this and plan on contributing a bare-bones wheel installation routine that would work well enough to get pip and setuptools installed.
I've written more than one bare-bones wheel installation script myself. They are easy to write (credit to Daniel for developing a format that's very simple to process!). I'm happy to donate any of the code that's useful. Here's one I've used in the past: https://gist.github.com/pfmoore/5985969 Paul
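The kind of bare-bones wheel installer being discussed can be sketched in a few lines, since a wheel is just a zip archive laid out for site-packages. This is only an illustration of the core step, not any of the scripts linked above; a real installer also handles script generation, the .data directory, and RECORD rewriting:

```python
import zipfile


def install_wheel(wheel_path, site_packages):
    """Bare-bones wheel "install": unpack the zip into site-packages.

    Illustrative sketch only -- a real installer does considerably more
    (console scripts, .data directories, RECORD, verification).
    """
    with zipfile.ZipFile(wheel_path) as whl:
        whl.extractall(site_packages)
```

Because the format is a plain zip with an importable layout, this is enough to get a pure-Python package like pip importable, which is all a bootstrap needs.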
The goal is that it will be equally easy to install packages built with any build system. We are on our way. Getting rid of an executable build script is no longer a goal; builds often inherently need one. But we don't want people extending distutils against their will. On Jul 12, 2013 11:59 AM, "Paul Moore" <p.f.moore@gmail.com> wrote:
On 12 July 2013 16:17, Donald Stufft <donald@stufft.io> wrote:
There's very little reason why a pip bootstrap script couldn't unpack a wheel instead of using setup.py. In fact, I've advocated for this and plan on contributing a bare-bones wheel installation routine that would work well enough to get pip and setuptools installed.
I've written more than one bare-bones wheel installation script myself. They are easy to write (credit to Daniel for developing a format that's very simple to process!). I'm happy to donate any of the code that's useful. Here's one I've used in the past: https://gist.github.com/pfmoore/5985969
Paul
_______________________________________________ Distutils-SIG maillist - Distutils-SIG@python.org http://mail.python.org/mailman/listinfo/distutils-sig
Daniel Holth <dholth <at> gmail.com> writes:
Getting rid of an executable build script is no longer a goal. Builds inherently need that often. But we don't want people extending distutils against their will.
Perhaps I should have been clearer - I meant "executable setup.py install", and as I understand it, it is a goal to get rid of that. Regarding "executable setup.py build", that's less of an issue than for installing, but IIUC, it is still not ideal. Many of the hacks that people have made around distutils/setuptools relate to building, not just installing, or am I wrong? Regards, Vinay Sajip
On Jul 12, 2013, at 1:28 PM, Vinay Sajip <vinay_sajip@yahoo.co.uk> wrote:
Daniel Holth <dholth <at> gmail.com> writes:
Getting rid of an executable build script is no longer a goal. Builds inherently need that often. But we don't want people extending distutils against their will.
Perhaps I should have been clearer - I meant "executable setup.py install", and as I understand it, it is a goal to get rid of that.
Yes, it's a goal to get rid of setup.py install, but I doubt it will ever fully be gone, at least not for a long time. There are almost 150k source dist packages on PyPI and I'm going to assume the vast bulk of them have a setup.py.
Regarding "executable setup.py build", that's less of an issue than for installing, but IIUC, it is still not ideal. Many of the hacks that people have made around distutils/setuptools relate to building, not just installing, or am I wrong?
It's not ideal, but it's also largely only an issue on the machine of the developer who is packaging the software. If they are fine with the hacks then there's not a major reason to move them away from that.
Regards,
Vinay Sajip
Donald Stufft <donald <at> stufft.io> writes:
Yes it's a goal to get rid of setup.py install, but I doubt it will ever fully be gone. At least not for a long time. There's almost 150k source dist packages on PyPI and I'm going to assume the vast bulk of them have a setup.py.
True, but distil seems to be able to install a fair few (certainly the ones which don't do significant special processing in their setup.py, such as moving files around and creating files) without ever executing setup.py.
It's not ideal, but it's also largely only an issue on the machine of the developer who is packaging the software. If they are fine with the hacks then there's not a major reason to move them away from that.
It's a smaller community than the users of those projects, and I don't know what the number of affected developers is. Obviously it's up to each project how they do their stuff, but from my understanding the NumPy/SciPy communities aren't especially happy with the extensions they've had to make (else, why Bento?). Regards, Vinay Sajip
On 12 July 2013 12:12, Richard Jones <richard@python.org> wrote:
The point of PEP 439 is that the current situation of "but first do this" for any given third-party package installation is a bad thing, and we desire to move away from it. The PEP therefore proposes to allow "just do this" to eventually become the narrative. The direction this conversation is heading removes that very significant primary benefit, and I'm not convinced there's any point to the PEP in that case.
That was never the primary benefit to my mind. The status quo sucks because there is *no* simple answer to "first, do this", not because some kind of bootstrapping is needed. The problem in my view is that the "first, do this" step is currently a mess of various arcane platform-dependent incantations that may or may not work (and may even contradict each other) and can't be readily incorporated into an automatic script because they're not idempotent. Accordingly, I consider simplifying that "first, do this" step to "python -m getpip" to be a major upgrade from the status quo:

* unlike curl, wget and "python -c" incantations, it's immediately obvious to a reader what it is supposed to do: "Get pip"
* unlike curl, wget and "python -c" incantations, it can be easily made platform independent
* unlike curl, wget and "python -c" incantations, it can be easily made idempotent (so it does nothing if it has already been run)
* through "getpip.bootstrap" it will provide the infrastructure to easily add automatic bootstrapping to other tools

In particular, it establishes the infrastructure to have pyvenv automatically bootstrap the installer into each venv, even when it isn't installed system-wide (which is the key missing feature of pyvenv relative to virtualenv).
Having the retrieval of pip happen automagically as part of an install command initially sounded nice, but I'm now a firm -1 on that, because making it work cleanly in a cross-platform way that doesn't conflict with a manual pip install has proven to require several awkward compromises that make it an ugly solution:

* we have to introduce a new install command (pip3 vs pip) to avoid packaging problems on Linux distros
* this is then inconsistent with Windows (which doesn't have separate versioning for the Python 3 installation)
* we have to introduce new bootstrap arguments to pip
* we have to special-case installation of pip and its dependencies to avoid odd-looking warning messages
* the implementation is tricky to explain
* it doesn't work nicely with the "py" launcher on Windows (or equivalents that may be added to other platforms)

If your reaction is "well, in that case, I don't want to write it anymore", I will be disappointed, but that won't stop me from rejecting this approach and waiting for someone else to volunteer to write the more explicit version based on your existing bootstrap code. I'd prefer not to do that though: I'd prefer it if I can persuade you that "python -m getpip" *is* a major upgrade over the status quo that is worth implementing, and one that adheres to the Zen of Python, in particular:

* Explicit is better than implicit
* Simple is better than complex
* Readability counts
* Errors should never pass silently, unless explicitly silenced
* In the face of ambiguity, refuse the temptation to guess
* If the implementation is hard to explain, it's a bad idea.
* If the implementation is easy to explain, it may be a good idea.

Regards, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
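The idempotency property argued for above can be sketched as follows. The `ensure_pip` name and the `bootstrap` callable are illustrative stand-ins, not the actual getpip module; the real download-and-install logic is elided:

```python
import importlib.util


def ensure_pip(bootstrap):
    """Idempotent sketch: run bootstrap() only if pip is not importable.

    'bootstrap' stands in for the real download-and-install step, which
    this sketch deliberately does not implement.
    """
    if importlib.util.find_spec("pip") is not None:
        return "already installed"
    bootstrap()
    return "bootstrapped"
```

Because the check happens before any work, running the command a second time does nothing, which is exactly the property the curl/wget incantations lack.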
On 12 July 2013 15:11, Nick Coghlan <ncoghlan@gmail.com> wrote:
In particular, it establishes the infrastructure to have pyvenv automatically bootstrap the installer into each venv, even when it isn't installed system wide (which is the key missing feature of pyvenv relative to virtualenv).
The other thing I will note is that *if* we decide to add an implicit bootstrap later (which I doubt will happen, but you never know), then having "getpip" available as an importable module will also make *that* easier to write (since the heart of the current bootstrap code could be replaced by "from getpip import bootstrap") Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jul 12, 2013, at 1:19 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 12 July 2013 15:11, Nick Coghlan <ncoghlan@gmail.com> wrote: In particular, it establishes the infrastructure to have pyvenv automatically bootstrap the installer into each venv, even when it isn't installed system wide (which is the key missing feature of pyvenv relative to virtualenv).
The other thing I will note is that *if* we decide to add an implicit bootstrap later (which I doubt will happen, but you never know), then having "getpip" available as an importable module will also make *that* easier to write (since the heart of the current bootstrap code could be replaced by "from getpip import bootstrap")
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
I prefer the implicit bootstrap approach, but if the explicit bootstrap approach is chosen then something special needs to be done for pyvenv. If an explicit bootstrap is required for every pyvenv then I'm going to guess that people are going to just continue using virtualenv.
Donald Stufft <donald <at> stufft.io> writes:
I prefer the implicit bootstrap approach, but if the explicit bootstrap approach is chosen then something special needs to be done for pyvenv.
The original pyvenv script did install Distribute and pip, but that functionality was removed before beta because Distribute and pip are third-party packages. If that restriction is lifted, we can easily replace the pyvenv script in Python with pyvenvex, and then (as I understand it) that is equivalent to an implicit bootstrap. Regards, Vinay Sajip
On Jul 10, 2013, at 02:43 PM, Paul Moore wrote:
I would find it distinctly irritating if in Python 3.4 I have to type "pip3 bootstrap" to bootstrap pip - and even worse if *after* the bootstrap the command I use is still "pip". (And no, there is currently no "pip3" command installed by pip, and even if there were, I would not want to use it, I'm happy with the unsuffixed version).
I have a lot of sympathy for this, and the general issue has come up in a number of different contexts, e.g. nosetests/nosetests3 and so on. On a distro like Debian, this just adds more gunk to /usr/bin, especially since some scripts are also minor-version dependent. One approach is to use `$python -m nose` or, in this case, `$python -m pip`, which cuts down on the number of scripts and is unambiguous, but is far from convenient and may not work in all cases, e.g. for older Pythons that don't support -m, or don't support it for packages. I think there was a thread on python-ideas about this, but in the back of my mind I have this horrible idea for a version-aware relauncher you could use in your shebang line. Something like:

#! /usr/bin/pylaunch

So that you could do something like:

$ nosetests -3
$ nosetests -2
$ nosetests -3.3
$ nosetests -2.7

and it would relaunch itself using the correct Python version, consuming the version argument so the actual script wouldn't see it. I'm not sure if the convenience is worth it, and I'm sorry for making you throw up a little in your mouth there. -Barry
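The argument handling of the hypothetical relauncher described above could look like this. All names are illustrative (no such tool exists); the actual re-exec step, e.g. via `os.execvp`, is omitted so the sketch only shows how the version flag would be consumed:

```python
import re


def split_version_flag(argv):
    """Peel off a leading -2/-3/-X.Y flag from a hypothetical relauncher.

    Returns (interpreter_name, remaining_args).  The version flag is
    consumed so the real script never sees it; without a flag we fall
    back to plain "python".  Purely a sketch of the idea above.
    """
    if argv and re.fullmatch(r"-\d(\.\d+)?", argv[0]):
        return "python" + argv[0][1:], argv[1:]
    return "python", list(argv)
```

A real implementation would then do something like `os.execvp(interpreter, [interpreter, script] + remaining_args)` to relaunch under the chosen version.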
When bundled, the script is supposed to mask the fact that you don't have pip installed. Basically, if you type "pip3 install requests" it will first install setuptools and pip and then pass the command into the real pip. If it were called get-pip, the workflow would be "attempt to install", "run get-pip", "rerun the original install command". On Jul 10, 2013, at 8:46 AM, Brett Cannon <brett@python.org> wrote:
On Wed, Jul 10, 2013 at 12:54 AM, Richard Jones <richard@python.org> wrote:
On 10 July 2013 14:19, Carl Meyer <carl@oddbird.net> wrote:
They certainly do today, but that's primarily because pyvenv isn't very useful yet, since the stdlib has no installer and thus a newly-created pyvenv has no way to install anything in it.
Ah, thanks for clarifying that.
Certainly if the bootstrap is ever ported to 2.7 or 3.2, it would make sense for it to install virtualenv there (or, probably even better, for pyvenv to be backported along with the bootstrap).
I intend to create two forks; one for consideration in a 2.7.5 release as "pip" and the other for users of 2.6+ called "get-pip.py".
Why the specific shift between 2.7 and 2.6 in terms of naming? I realize you are differentiating between the bootstrap being pre-installed with Python vs. not, but is there really anything wrong with the script being called pip (or pip3 for Python 3.3/3.2) if it knows how to do the right thing to get pip up and going? IOW why not make the bootstrap what everyone uses to install pip and it just so happens to come pre-installed with Python 3.4 (and maybe Python 2.7)?
On 10 July 2013 05:19, Carl Meyer <carl@oddbird.net> wrote:
It's my understanding that people still install virtualenv in py3k.
They certainly do today, but that's primarily because pyvenv isn't very useful yet, since the stdlib has no installer and thus a newly-created pyvenv has no way to install anything in it.
One other problem I have, personally, with pyvenv is that the activate code for PowerShell is significantly less user-friendly than the one in virtualenv. Add to that the fact that the Python release cycle is significantly slower than virtualenv's, and that using dev versions of Python is far less practical for day-to-day use, and that's why I stick to virtualenv at the moment (that and the pip point mentioned already). I really ought to post a patch for Python to upgrade the activate script to use the one from virtualenv. Are there any licensing/ownership issues that might make this problematic? For example, the script is signed and I don't know if that signature is attributable to someone specific... Paul
Carl Meyer <carl <at> oddbird.net> writes:
They certainly do today, but that's primarily because pyvenv isn't very useful yet, since the stdlib has no installer and thus a newly-created pyvenv has no way to install anything in it.
True, though I've provided a script to do that very thing: https://gist.github.com/vsajip/4673395 Of course, that'll now need to be changed to install setuptools rather than distribute :-) Regards, Vinay Sajip
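The stdlib hook such a script can build on is `venv.EnvBuilder.post_setup()`, which runs after the environment is created. The sketch below is not the gist linked above; the class name is illustrative and the actual installation logic is elided:

```python
import venv


class BootstrappingEnvBuilder(venv.EnvBuilder):
    """Illustrative builder that would install setuptools/pip post-creation.

    post_setup() is the documented venv hook that fires once the
    environment exists; a real implementation would run a bootstrap
    script with context.env_exe (the new environment's interpreter).
    """

    def post_setup(self, context):
        # Stand-in for the real step, e.g. running a get-pip style script
        # via subprocess with context.env_exe.
        print("would bootstrap an installer into", context.env_dir)
```

This is essentially the shape an "implicit bootstrap" for pyvenv would take: subclass (or patch) the builder so every new environment gets an installer without a separate manual step.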
On 07/09/2013 11:20 PM, Donald Stufft wrote:
doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/
Environments generated by pyvenv lack setuptools, which makes them un-useful compared to those generated by virtualenv. Virtualenv is also useful across the important set of Python versions (2.6, 2.7, 3.2, 3.3), which pyvenv (or any shipped-in-core variant) can never be. Tres. -- Tres Seaver +1 540-429-0999 tseaver@palladion.com Palladion Software "Excellence by Design" http://palladion.com
On 10 July 2013 18:28, Tres Seaver <tseaver@palladion.com> wrote:
On 07/09/2013 11:20 PM, Donald Stufft wrote:
doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/
Environments generated by pyvenv lack setuptools, which makes them un-useful compared to those generated by virtualenv.
Yes, but Python 3.4 will have the pip bootstrap which automatically installs setuptools. Unless you mean that pyvenv itself (sans pip) would be more useful with setuptools?
Virtualenv is also useful across the important set of Python versions (2.6, 2.7, 3.2, 3.3), which pyvenv (or any shipped-in-core variant) can never be.
Yes, that's why I suggested the Python 2 version will install virtualenv :-) There's currently no plan to release a Python 3.3 version of the bootstrap, and certainly not one for a Py3k version lower than that. Hm. We can think about it though. Richard
participants (14)
- Barry Warsaw
- Brett Cannon
- Carl Meyer
- Daniel Holth
- Donald Stufft
- Erik Bray
- Nick Coghlan
- Paul Moore
- PJ Eby
- Richard Jones
- Richard Jones
- Ronald Oussoren
- Tres Seaver
- Vinay Sajip