Hello,
Following an earlier discussion on python-ideas [1], we would like to
propose the following PEP for review. Discussion is welcome. The PEP
can also be viewed in HTML form at
http://www.python.org/dev/peps/pep-0408/
[1] http://mail.python.org/pipermail/python-ideas/2012-January/013246.html
Eli
---------------------------
PEP: 408
Title: Standard library __preview__ package
Version: $Revision$
Last-Modified: $Date$
Author: Nick Coghlan
On Fri, Jan 27, 2012 at 11:48 PM, Matt Joiner
+0. I think the idea is right, and will help to get good quality modules in at a faster rate. However it is compensating for a lack of interface and packaging standardization in the 3rd party module world.
No, it really isn't. virtualenv and pip already work *beautifully*, so long as you're in an environment where:

1. Due diligence isn't a problem
2. Network connectivity isn't a problem
3. You *already know* about virtual environments and the Python Package Index
4. You either don't need dependencies written in C, or the ones you need are written to compile cleanly under distutils and you aren't on Windows (because Microsoft consider building fully functional binaries from source to be an optional extra people should be charged for rather than a fundamental feature of an operating system)

It would probably be worth adding a heading specifically countering this myth, though.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Hi,

A small comment from a user perspective.

Since a package in preview is strongly linked to a given version of Python, any program taking advantage of it becomes strongly specific to a given version of Python. Such programs will of course break for any upgrade or downgrade of the Python version. To make the reason for the breakage more explicit, I believe that the PEP should provide examples of correct versioned usage of the module.

Something along the lines of:

    if sys.version_info[:2] == (3, X):
        from __preview__ import example
    else:
        raise ImportError(
            'Package example is only available as preview in Python '
            'version 3.X. Please check the documentation of your version '
            'of Python to see if and how you can get the package example.')

cheers,
Philippe
On 27/01/2012 14:37, Philippe Fremy wrote:
Hi,
A small comment from a user perspective.
Since a package in preview is strongly linked to a given version of Python, any program taking advantage of it becomes strongly specific to a given version of Python.
Such programs will of course break for any upgrade or downgrade of the Python version. To make the reason for the breakage more explicit, I believe that the PEP should provide examples of correct versioned usage of the module.
Something along the lines of :
    if sys.version_info[:2] == (3, X):
        from __preview__ import example
    else:
        raise ImportError(
            'Package example is only available as preview in Python '
            'version 3.X. Please check the documentation of your version '
            'of Python to see if and how you can get the package example.')
A more normal incantation, as is often the way for packages that became parts of the standard library after first being a third party library (sometimes under a different name, e.g. simplejson -> json):

    try:
        from __preview__ import thing
    except ImportError:
        import thing

So no need to target a very specific version of Python.

Michael
cheers,
Philippe
-- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html
A more normal incantation, as is often the way for packages that became parts of the standard library after first being a third party library (sometimes under a different name, e.g. simplejson -> json):
    try:
        from __preview__ import thing
    except ImportError:
        import thing
So no need to target a very specific version of Python.
I think this is suboptimal, having to guess where modules are located, you end up with this in every module:

    try:
        import cjson as json
    except ImportError:
        try:
            import simplejson as json
        except ImportError:
            import json

Perhaps the versioned import stuff could be implemented (whatever the syntax may be), in order that something like this can be done instead:

    import regex('__preview__')
    import regex('3.4')

Where clearly the __preview__ version makes no guarantees about interface or implementation whatsoever.

etc.
On 27/01/2012 15:35, Matt Joiner wrote:
A more normal incantation, as is often the way for packages that became parts of the standard library after first being a third party library (sometimes under a different name, e.g. simplejson -> json):
    try:
        from __preview__ import thing
    except ImportError:
        import thing
So no need to target a very specific version of Python.

I think this is suboptimal, having to guess where modules are located, you end up with this in every module:
    try:
        import cjson as json
    except ImportError:
        try:
            import simplejson as json
        except ImportError:
            import json
It's trivial to wrap in a function though - or do the import in one place and then import the package from there.

Michael
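Michael's "do the import in one place" suggestion can be sketched as a small helper that every other module imports from; the helper name is illustrative only and not part of the PEP:

```python
import importlib

def import_with_preview_fallback(name):
    """Return the __preview__ version of a module if one exists,
    otherwise the regular top-level module of the same name."""
    try:
        return importlib.import_module("__preview__." + name)
    except ImportError:
        return importlib.import_module(name)

# Demonstrated with a module that exists today; on an interpreter
# shipping a __preview__ package, the preview copy would win instead.
json = import_with_preview_fallback("json")
```

The rest of the code base can then import through this one place and never repeat the try/except dance.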
Perhaps the versioned import stuff could be implemented (whatever the syntax may be), in order that something like this can be done instead:
    import regex('__preview__')
    import regex('3.4')
Where clearly the __preview__ version makes no guarantees about interface or implementation whatsoever.
etc.
On 27/01/2012 16:25, Michael Foord wrote:
On 27/01/2012 14:37, Philippe Fremy wrote:
Hi,
A small comment from a user perspective.
Since a package in preview is strongly linked to a given version of Python, any program taking advantage of it becomes strongly specific to a given version of Python.
Such programs will of course break for any upgrade or downgrade of the Python version. To make the reason for the breakage more explicit, I believe that the PEP should provide examples of correct versioned usage of the module.
Something along the lines of :
    if sys.version_info[:2] == (3, X):
        from __preview__ import example
    else:
        raise ImportError(
            'Package example is only available as preview in Python '
            'version 3.X. Please check the documentation of your version '
            'of Python to see if and how you can get the package example.')
A more normal incantation, as is often the way for packages that became parts of the standard library after first being a third party library (sometimes under a different name, e.g. simplejson -> json):
    try:
        from __preview__ import thing
    except ImportError:
        import thing
So no need to target a very specific version of Python.
According to the PEP, the interface may change between __preview__ and final inclusion in the stdlib. It would be unwise as a developer to assume that a program written for the preview version will work correctly in the stdlib version, wouldn't it?

I would use your "normal" incantation only after checking that no significant API change has occurred after stdlib integration.

By the way, if as Antoine suggests, the package remains available in __preview__ even after it's accepted in the stdlib, how is the user supposed to deal with possible API changes?

cheers,
Philippe
Hello Philippe,
On Fri, 27 Jan 2012 17:09:08 +0100
Philippe Fremy
According to the PEP, the interface may change between __preview__ and final inclusion in the stdlib. It would be unwise as a developer to assume that a program written for the preview version will work correctly in the stdlib version, wouldn't it?
I would use your "normal" incantation only after checking that no significant API change has occurred after stdlib integration.
By the way, if as Antoine suggests, the package remain available in __preview__ even after it's accepted in the stdlib, how is the user supposed to deal with possible API changes ?
The API *may* change but it would probably not change much anyway. Consider e.g. the "regex" module: it aims at compatibility with the standard "re" module; there may be additional APIs (e.g. new flags), but whoever uses it with the standard "re" API would not see any difference between the __preview__ version and the final version.

cheers
Antoine.
Something along the lines of :
    if sys.version_info[:2] == (3, X):
        from __preview__ import example
    else:
        raise ImportError(
            'Package example is only available as preview in Python '
            'version 3.X. Please check the documentation of your version '
            'of Python to see if and how you can get the package example.')
A more normal incantation, as is often the way for packages that became parts of the standard library after first being a third party library (sometimes under a different name, e.g. simplejson -> json):
    try:
        from __preview__ import thing
    except ImportError:
        import thing
So no need to target a very specific version of Python.
Yep, this is what I had in mind. And it appeared too trivial to place it in the PEP.

Eli
Eli Bendersky wrote:
    try:
        from __preview__ import thing
    except ImportError:
        import thing
So no need to target a very specific version of Python.
Yep, this is what I had in mind. And it appeared too trivial to place it in the PEP.
Trivial and wrong. Since thing and __preview__.thing may have subtle, or major, API differences, how do you use it?

    try:
        result = thing.foo(a, b, c) + thing.bar(x)
    except AttributeError:
        # Must be the preview version
        result = thing.foobar(a, c, b, x)

--
Steven
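Steven's hazard can be contained by probing for the changed API once, rather than at every call site. A minimal sketch, where foo/bar/foobar are the entirely hypothetical method names from his example standing in for a real preview/stable difference:

```python
from types import SimpleNamespace

def combined(thing, a, b, c, x):
    """Dispatch to whichever API variant `thing` exposes
    (the method names are made up for illustration)."""
    if hasattr(thing, "foobar"):
        # hypothetical preview-style API
        return thing.foobar(a, c, b, x)
    # hypothetical stable-style API
    return thing.foo(a, b, c) + thing.bar(x)

# Stand-in objects for the two API variants:
stable = SimpleNamespace(foo=lambda a, b, c: a + b + c, bar=lambda x: x)
preview = SimpleNamespace(foobar=lambda a, c, b, x: a + b + c + x)
```

Callers then get the same result either way, and the version probe lives in exactly one function instead of being scattered through the code.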
On 27/01/2012 20:48, Steven D'Aprano wrote:
Eli Bendersky wrote:
    try:
        from __preview__ import thing
    except ImportError:
        import thing
So no need to target a very specific version of Python.
Yep, this is what I had in mind. And it appeared too trivial to place it in the PEP.
Trivial and wrong.
Since thing and __preview__.thing may have subtle, or major, API differences, how do you use it?
No, potentially wrong in cases where the APIs are different. Even with the try...except ImportError dance around StringIO / cStringIO there are some API differences. But for a lot of use cases it works fine (simplejson and json aren't *identical*, but it works for most people).

Michael
    try:
        result = thing.foo(a, b, c) + thing.bar(x)
    except AttributeError:
        # Must be the preview version
        result = thing.foobar(a, c, b, x)
Michael Foord wrote:
On 27/01/2012 20:48, Steven D'Aprano wrote:
Eli Bendersky wrote:
    try:
        from __preview__ import thing
    except ImportError:
        import thing
So no need to target a very specific version of Python.
Yep, this is what I had in mind. And it appeared too trivial to place it in the PEP.
Trivial and wrong.
Since thing and __preview__.thing may have subtle, or major, API differences, how do you use it?
No, potentially wrong in cases where the APIs are different. Even with the try...except ImportError dance around StringIO / cStringIO there are some API differences. But for a lot of use cases it works fine (simplejson and json aren't *identical*, but it works for most people).
Okay, granted, I accept your point. But I think we need to distinguish between these cases.

In the case of StringIO and cStringIO, API compatibility is expected, and differences are either bugs or implementation differences that you shouldn't be relying on.

In the case of the typical[1] __preview__ module, one of the motivations of adding it to __preview__ is to test the existing API. We should expect changes, even if in practice often there won't be. We might hope for no API changes, but we should plan for the case where there will be. And that rules out the "try import" dance for the typical __preview__ module.

There may be modules which graduate and keep the same API. In those cases, people will quickly work out the import dance on their own, it's a very common idiom. But we shouldn't advertise it as the right way to deal with __preview__, since that implies the expectation of API stability, and we want to send the opposite message: __preview__ is the last time the API can change without a big song and dance, so be prepared for it to change.

I'm with Nick on this one: if you're not prepared to change "from __preview__ import module" to "import module" in your app, then you certainly won't be prepared to deal with the potential API changes and you aren't the target audience for __preview__.

[1] I am fully aware of the folly of referring to a "typical" example of something that doesn't exist yet <wink>

--
Steven
No, potentially wrong in cases where the APIs are different. Even with the try...except ImportError dance around StringIO / cStringIO there are some API differences. But for a lot of use cases it works fine (simplejson and json aren't *identical*, but it works for most people).
Okay, granted, I accept your point.
But I think we need to distinguish between these cases.
In the case of StringIO and cStringIO, API compatibility is expected, and differences are either bugs or implementation differences that you shouldn't be relying on.
I just recently ran into a compatibility issue between StringIO and cStringIO. It's a good thing it's documented:

"Another difference from the StringIO module is that calling StringIO() with a string parameter creates a read-only object. Unlike an object created without a string parameter, it does not have write methods. These objects are not generally visible. They turn up in tracebacks as StringI and StringO."

But it did cause me a couple of minutes of grief until I found this piece in the docs and wrote a work-around. But no, even in the current stable stdlib, the "try import ... except import from elsewhere" trick doesn't "just work" for StringIO/cStringIO. And as far as I can understand this is documented, not a bug or some obscure implementation detail.

My point is that if our users accept *this*, in the stable stdlib, I see no reason they won't accept the same happening between __preview__ and a graduated module, when they (hopefully) understand the intention of __preview__.

Eli
Eli Bendersky writes:
My point is that if our users accept *this*, in the stable stdlib, I see no reason they won't accept the same happening between __preview__ and a graduated module, when they (hopefully) understand the intention of __preview__.
If it doesn't happen with sufficiently high frequency and annoyance factors to make attempting to use both the __preview__ and graduated versions in the same code base unacceptable to most users, then __preview__ is unnecessary, and the PEP should be rejected.
On Sat, Jan 28, 2012 at 07:41, Stephen J. Turnbull
Eli Bendersky writes:
> My point is that if our users accept *this*, in the stable stdlib, I see no reason they won't accept the same happening between __preview__ and a graduated module, when they (hopefully) understand the intention of __preview__.
If it doesn't happen with sufficiently high frequency and annoyance factors to make attempting to use both the __preview__ and graduated versions in the same code base unacceptable to most users, then __preview__ is unnecessary, and the PEP should be rejected.
API differences such as changing one method to another (perhaps repeated over several methods) are unacceptable for stdlib modules. On the other hand, for a determined user importing from either __preview__ or the graduated version it's only a matter of a few lines in a conditional import. IMHO this is much preferable to having the module either external or in the stdlib, because that imposes another external dependency.

But I think that the issue of keeping __preview__ in a later release is just an "implementation detail" of the PEP and shouldn't be seen as its main decision point.

Eli
On Fri, 27 Jan 2012 15:21:33 +0200
Eli Bendersky
Following an earlier discussion on python-ideas [1], we would like to propose the following PEP for review. Discussion is welcome. The PEP can also be viewed in HTML form at http://www.python.org/dev/peps/pep-0408/
A big +1 from me.
Assuming the module is then promoted to the standard library proper in release ``3.X+1``, it will be moved to a permanent location in the library::
import example
And importing it from ``__preview__`` will no longer work.
Why not leave it accessible through __preview__ too?
Benefits for the core development team --------------------------------------
Currently, the core developers are really reluctant to add new interfaces to the standard library.
A nit, but I think "reluctant" is enough and "really" makes the tone very defensive :)
Relationship with PEP 407 =========================
PEP 407 proposes a change to the core Python release cycle to permit interim releases every 6 months (perhaps limited to standard library updates). If such a change to the release cycle is made, the following policy for the ``__preview__`` namespace is suggested:
* For long term support releases, the ``__preview__`` namespace would always be empty.
* New modules would be accepted into the ``__preview__`` namespace only in interim releases that immediately follow a long term support release.
Well this is all speculative (due to the status of PEP 407) but I think a simpler approach of having a __preview__ namespace in all releases (including LTS) would be easier to handle for both us and our users. People can refrain from using anything in __preview__ if that's what they prefer. The naming and the double underscores make it quite recognizable at the top of a source file :-)
Preserving pickle compatibility -------------------------------
A pickled class instance based on a module in ``__preview__`` in release 3.X won't be unpickle-able in release 3.X+1, where the module won't be in ``__preview__``. Special code may be added to make this work, but this goes against the intent of this proposal, since it implies backward compatibility. Therefore, this PEP does not propose to preserve pickle compatibility.
Wouldn't it be a good argument to keep __preview__.XXX as an alias? Regards Antoine.
On 27/01/2012 15:09, Antoine Pitrou wrote:
On Fri, 27 Jan 2012 15:21:33 +0200 Eli Bendersky wrote:

Following an earlier discussion on python-ideas [1], we would like to propose the following PEP for review. Discussion is welcome. The PEP can also be viewed in HTML form at http://www.python.org/dev/peps/pep-0408/

A big +1 from me.
Assuming the module is then promoted to the standard library proper in release ``3.X+1``, it will be moved to a permanent location in the library::
import example
And importing it from ``__preview__`` will no longer work.

Why not leave it accessible through __preview__ too?
+1 The point about pickling is one good reason, minimising code breakage (due to package name changing) is another. Michael
Benefits for the core development team --------------------------------------
Currently, the core developers are really reluctant to add new interfaces to the standard library.

A nit, but I think "reluctant" is enough and "really" makes the tone very defensive :)
Relationship with PEP 407 =========================
PEP 407 proposes a change to the core Python release cycle to permit interim releases every 6 months (perhaps limited to standard library updates). If such a change to the release cycle is made, the following policy for the ``__preview__`` namespace is suggested:
* For long term support releases, the ``__preview__`` namespace would always be empty.
* New modules would be accepted into the ``__preview__`` namespace only in interim releases that immediately follow a long term support release.

Well this is all speculative (due to the status of PEP 407) but I think a simpler approach of having a __preview__ namespace in all releases (including LTS) would be easier to handle for both us and our users. People can refrain from using anything in __preview__ if that's what they prefer. The naming and the double underscores make it quite recognizable at the top of a source file :-)
Preserving pickle compatibility -------------------------------
A pickled class instance based on a module in ``__preview__`` in release 3.X won't be unpickle-able in release 3.X+1, where the module won't be in ``__preview__``. Special code may be added to make this work, but this goes against the intent of this proposal, since it implies backward compatibility. Therefore, this PEP does not propose to preserve pickle compatibility.

Wouldn't it be a good argument to keep __preview__.XXX as an alias?
Regards
Antoine.
Michael Foord writes:
Assuming the module is then promoted to the standard library proper in release ``3.X+1``, it will be moved to a permanent location in the library::
import example
And importing it from ``__preview__`` will no longer work.

Why not leave it accessible through __preview__ too?
+1
Er, doesn't this contradict your point about using

    try:
        from __preview__ import spam
    except ImportError:
        import spam

?

I think it's a bad idea to introduce a feature that's *supposed* to break (in the sense of "make a break", ie, change the normal pattern) with every release and then try to avoid breaking (in the sense of "causing an unexpected failure") code written by people who don't want to follow the discipline of keeping up with changing APIs. If they want that stability, they should wait for the stable release.

Modules should become unavailable from __preview__ as soon as they have a stable home.
On 28/01/2012 04:44, Stephen J. Turnbull wrote:
Michael Foord writes:
Assuming the module is then promoted to the standard library proper in release ``3.X+1``, it will be moved to a permanent location in the library::
import example
And importing it from ``__preview__`` will no longer work.

Why not leave it accessible through __preview__ too?
+1
Er, doesn't this contradict your point about using
    try:
        from __preview__ import spam
    except ImportError:
        import spam
?
I think it's a bad idea to introduce a feature that's *supposed* to break (in the sense of "make a break", ie, change the normal pattern) with every release and then try to avoid breaking (in the sense of "causing an unexpected failure") code written by people who don't want to follow the discipline of keeping up with changing APIs. If they want that stability, they should wait for the stable release.
Modules should become unavailable from __preview__ as soon as they have a stable home.
I like not breaking people's code where *possible*.

Michael
Michael Foord wrote:
On 28/01/2012 04:44, Stephen J. Turnbull wrote:
I think it's a bad idea to introduce a feature that's *supposed* to break (in the sense of "make a break", ie, change the normal pattern) with every release and then try to avoid breaking (in the sense of "causing an unexpected failure") code written by people who don't want to follow the discipline of keeping up with changing APIs. If they want that stability, they should wait for the stable release.
Modules should become unavailable from __preview__ as soon as they have a stable home.
I like not breaking people's code where *possible*.
__preview__ is not about stability. It's about making code easily available for testing before the API freezes. If nothing has changed once it graduates, how hard is it to change a few lines of code from

    from __preview__ import blahblahblah

to

    import blahblahblah

? It seems to me that including a __preview__ package in production software is a mistake, and not its intention.

~Ethan~
Assuming the module is then promoted to the standard library proper in release ``3.X+1``, it will be moved to a permanent location in the library::
import example
And importing it from ``__preview__`` will no longer work.
Why not leave it accessible through __preview__ too?
I guess there's no real problem with leaving it accessible, as long as it's clear that the API may have changed between releases. I.e. when a package "graduates" and is also left accessible through __preview__, it should obviously be just a pointer to the same package, so if the API changed, code that imported it from __preview__ in a previous release may stop working.
Benefits for the core development team --------------------------------------
Currently, the core developers are really reluctant to add new interfaces to the standard library.
A nit, but I think "reluctant" is enough and "really" makes the tone very defensive :)
Agreed, I will change this.
Relationship with PEP 407 =========================
PEP 407 proposes a change to the core Python release cycle to permit interim releases every 6 months (perhaps limited to standard library updates). If such a change to the release cycle is made, the following policy for the ``__preview__`` namespace is suggested:
* For long term support releases, the ``__preview__`` namespace would always be empty.
* New modules would be accepted into the ``__preview__`` namespace only in interim releases that immediately follow a long term support release.
Well this is all speculative (due to the status of PEP 407) but I think a simpler approach of having a __preview__ namespace in all releases (including LTS) would be easier to handle for both us and our users. People can refrain from using anything in __preview__ if that's what they prefer. The naming and the double underscores make it quite recognizable at the top of a source file :-)
I agree that it's speculative, and would recommend to decouple the two PEPs. They surely can live on their own and aren't tied. If PEP 407 gets accepted, this section can be reworded appropriately.
Preserving pickle compatibility -------------------------------
A pickled class instance based on a module in ``__preview__`` in release 3.X won't be unpickle-able in release 3.X+1, where the module won't be in ``__preview__``. Special code may be added to make this work, but this goes against the intent of this proposal, since it implies backward compatibility. Therefore, this PEP does not propose to preserve pickle compatibility.
Wouldn't it be a good argument to keep __preview__.XXX as an alias?
Good point. Eli
On 27/01/2012 15:09, Antoine Pitrou wrote:
On Fri, 27 Jan 2012 15:21:33 +0200 Eli Bendersky wrote:

Following an earlier discussion on python-ideas [1], we would like to propose the following PEP for review. Discussion is welcome. The PEP can also be viewed in HTML form at http://www.python.org/dev/peps/pep-0408/
A big +1 from me.
Actually a pretty big -1 from me. I'd prefer to see the standard library getting smaller, not bigger, and packages being upgradeable independently from the Python version as a result.

Every time I see things like the following I cry a little inside:

    try:
        try:
            from py2stdliblocation import FooBar as Foo
        except ImportError:
            from py3stdliblocation import foo as Foo
    except ImportError:
        from pypilocation import Foo

Now we're talking about having to add __preview__ into that mix too? :'(

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk
Chris Withers wrote:
Every time I see things like the following I cry a little inside:
    try:
        try:
            from py2stdliblocation import FooBar as Foo
        except ImportError:
            from py3stdliblocation import foo as Foo
    except ImportError:
        from pypilocation import Foo
The syntax is inelegant, but the concept is straightforward and simple and not worth tears. "I need a thing called Foo, which can be found here, or here, or here. Use the first one found." In principle this is not terribly different from the idea of a search PATH when looking for an executable, except the executable can be found under different names as well as different locations.
Now we're talking about having to add __preview__ into that mix too?
As I understand it, Guido nixed that idea. (Or did I imagine that?) Preview modules will be just added to the std lib as normal, and you have to read the docs to find out they're preview. -- Steven
On 2/3/2012 6:18 PM, Steven D'Aprano wrote:
Now we're talking about having to add __preview__ into that mix too?
As I understand it, Guido nixed that idea. (Or did I imagine that?)
No, you are right, discussion should cease. It is already marked 'rejected' and listed under Abandoned, Withdrawn, and Rejected PEPs.
Preview modules will be just added to the std lib as normal, and you have to read the docs to find out they're preview.
What's New should say so too. -- Terry Jan Reedy
Terry Reedy wrote:
On 2/3/2012 6:18 PM, Steven D'Aprano wrote:
Now we're talking about having to add __preview__ into that mix too?
As I understand it, Guido nixed that idea. (Or did I imagine that?)
No, you are right, discussion should cease. It is already marked 'rejected' and listed under Abandoned, Withdrawn, and Rejected PEPs.
Preview modules will be just added to the std lib as normal, and you have to read the docs to find out they're preview.
What's New should say so too.
A thought comes to mind...

It strikes me that it would be helpful sometimes to programmatically recognise "preview" modules in the std lib. Could we have a recommendation in PEP 8 that such modules should have a global variable called PREVIEW, and non-preview modules should not, so that the recommended way of telling them apart is with hasattr(module, "PREVIEW")?

--
Steven
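Steven's proposed convention is straightforward to sketch. Since no stdlib module actually carries such a flag, the demonstration fabricates module objects; the PREVIEW attribute is purely hypothetical and not part of PEP 8:

```python
import types

def is_preview(module):
    """Steven's suggested check: a preview module would carry a
    module-level PREVIEW attribute, a stable module would not
    (a hypothetical convention, never adopted)."""
    return hasattr(module, "PREVIEW")

# Fabricated stand-ins, since no real module defines PREVIEW:
preview_mod = types.ModuleType("example")
preview_mod.PREVIEW = True
stable_mod = types.ModuleType("example")
```

At the interactive prompt this would read as is_preview(example) after a plain import, which matches the introspective use case Steven has in mind.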
On 4 February 2012 11:25, Steven D'Aprano
It strikes me that it would be helpful sometimes to programmatically recognise "preview" modules in the std lib. Could we have a recommendation in PEP 8 that such modules should have a global variable called PREVIEW, and non-preview modules should not, so that the recommended way of telling them apart is with hasattr(module, "PREVIEW")?
In what situation would you want that when you weren't referring to a specific module? If you're referring to a specific module and you really care, just check sys.version. (That's annoying and ugly enough that it'd probably make you think about why you are doing it - I cannot honestly think of a case where I'd actually want to check in code if a module is a preview - hence my question as to what your use case is.)

Feels like YAGNI to me.

Paul.
+1 On Feb 4, 2012 8:37 PM, "Paul Moore"
On 4 February 2012 11:25, Steven D'Aprano wrote:

It strikes me that it would be helpful sometimes to programmatically recognise "preview" modules in the std lib. Could we have a recommendation in PEP 8 that such modules should have a global variable called PREVIEW, and non-preview modules should not, so that the recommended way of telling them apart is with hasattr(module, "PREVIEW")?
In what situation would you want that when you weren't referring to a specific module? If you're referring to a specific module and you really care, just check sys.version. (That's annoying and ugly enough that it'd probably make you think about why you are doing it - I cannot honestly think of a case where I'd actually want to check in code if a module is a preview - hence my question as to what your use case is.)
Feels like YAGNI to me. Paul. _______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
Paul Moore wrote:
On 4 February 2012 11:25, Steven D'Aprano
wrote: It strikes me that it would be helpful sometimes to programmatically recognise "preview" modules in the std lib. Could we have a recommendation in PEP 8 that such modules should have a global variable called PREVIEW, and non-preview modules should not, so that the recommended way of telling them apart is with hasattr(module, "PREVIEW")?
In what situation would you want that when you weren't referring to a specific module? If you're referring to a specific module and you really care, just check sys.version. (That's annoying and ugly enough that it'd probably make you think about why you are doing it - I cannot honestly think of a case where I'd actually want to check in code if a module is a preview - hence my question as to what your use case is).
What's the use-case for any sort of introspection functionality? I would say that the ability to perform introspection is valuable in and of itself, regardless of any other concrete benefits. We expect that modules may change APIs between the preview and non-preview ("stable") releases. I can see value in (say) being forewarned of API changes from the interactive interpreter, without having to troll through documentation looking for changes, or waiting for an exception. Or having to remember exactly which version modules were added in, and when they left preview. (Will this *always* be one release later? I doubt it.) If you don't believe that preview modules will change APIs, or that it would be useful to detect this programmatically when using such a module, then there's probably nothing I can say to convince you otherwise. But I think it will be useful. Python itself has a sys.version so you can detect feature sets and changes in semantics; this is just the same thing, only milder. The one obvious way[1] is to explicitly tag modules as preview, and the simplest way to do this is with an attribute. (Non-preview modules shouldn't have the attribute at all -- non-preview is the default state, averaged over the entire lifetime of a module in the standard library.) It would just be nice to sit down at the interactive interpreter and see whether a module you just imported was preview or not, without having to look it up in the docs. I do nearly everything at the interpreter: I read docs using help(), I check where modules are located using module.__file__. This is just more of the same.

Some alternatives:

1) Don't try to detect whether it is a preview module, but use EAFP to detect features that have changed:

    try:
        result = spam.foo(x, y)
    except AttributeError:
        # Must be a preview release. Do something else.
        result = spam.bar(y, x)

This is preferred so long as the differences between preview and stable releases are big, obvious changes like a function being renamed. But if there are subtle changes that you care about, things get dicey: spam.foo may not raise an exception, but just do something completely unexpected.

2) As you suggest, write version-specific code:

    if sys.version_info >= (3, 4):
        result = spam.foo(x, y)
    else:
        # Preview release.
        result = spam.bar(y, x)

This starts to get messy fast, particularly if (worst case, and I *really* hope this doesn't happen!) modules get put into preview, then get withdrawn, then a few releases later get put back in. This sort of mess shouldn't ever happen with non-preview modules, but preview modules explicitly have weaker guarantees. And I can never remember when modules were added to the std lib.
Feels like YAGNI to me.
When people talk about YAGNI, they are referring to the principle that you shouldn't waste time and effort over-engineering a complex solution or providing significant additional functionality for no obvious gain. I don't think that PREVIEW = True in a module *quite* counts as over-engineered. [1] Disclaimer: I am not Dutch. -- Steven
On Feb 5, 2012 5:36 PM, "Steven D'Aprano"
How about sys.preview_modules to list all the preview modules in the current release? This would be useful at the interactive prompt, at the least. -eric
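As a sketch of the interactive usage Eric has in mind (note that `sys.preview_modules` does not exist in any Python release; it is simulated here purely for illustration):

```python
import sys

# Simulate the proposed attribute; no real Python interpreter ships it.
sys.preview_modules = frozenset({"regex", "example"})

# At the interactive prompt one could then simply ask:
if "regex" in sys.preview_modules:
    print("regex is a preview module; its API may change in the next release")
```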
On 27/01/2012 15:34, Benjamin Peterson wrote:
2012/1/27 Eli Bendersky:

Criteria for "graduation"
-------------------------

I think you also need "Criteria for being placed in __preview__". Do we just toss everything someone suggests in?
And given that permanently deleting something from __preview__ would be a big deal (deciding it didn't make the grade and should never graduate), the criteria shouldn't be much less strict than for adopting a package into the standard library. i.e. once something gets into __preview__ people are going to assume it will graduate at some point - __preview__ is a place for apis to stabilise and mature, not a place for dubious libraries that we may or may not want in the standard library at some point. Michael -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html
On Fri, Jan 27, 2012 at 17:34, Benjamin Peterson
2012/1/27 Eli Bendersky:

Criteria for "graduation"
-------------------------
I think you also need "Criteria for being placed in __preview__". Do we just toss everything someone suggests in?
I hoped to have this covered by: "In any case, modules that are proposed to be added to the standard library, whether via __preview__ or directly, must fulfill the acceptance conditions set by PEP 2." PEP 2 is quite detailed and I saw no need to repeat large chunks of it here. The idea is that all the same restrictions and caveats apply. The thing that goes away is promise for future API stability. Eli
Eli Bendersky
Hello,
Following an earlier discussion on python-ideas [1], we would like to propose the following PEP for review. Discussion is welcome. The PEP can also be viewed in HTML form at http://www.python.org/dev/peps/pep-0408/
[1] http://mail.python.org/pipermail/python-ideas/2012-January/013246.html
I'm -1 on this, for a pretty simple reason. Something goes into __preview__, instead of its final destination directly, because it needs feedback/possibly changes. However, given the release cycle of the stdlib (~18 months), any feedback it gets can't be seen by actual users until it's too late. Essentially you can only get one round of feedback per stdlib release. I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI, I don't think there's a great enough need to justify it in the stdlib. Alex
On Jan 27, 2012, at 05:26 PM, Alex wrote:
I'm -1 on this, for a pretty simple reason. Something goes into __preview__, instead of its final destination directly, because it needs feedback/possibly changes. However, given the release cycle of the stdlib (~18 months), any feedback it gets can't be seen by actual users until it's too late. Essentially you can only get one round of feedback per stdlib release.
I'm -1 on this as well. It just feels like the completely wrong way to stabilize an API, and I think despite the caveats that are explicit in __preview__, Python will just catch tons of grief from users and haters about API instability anyway, because from a practical standpoint, applications written using __preview__ APIs *will* be less stable. It also won't improve the situation for prospective library developers because they're locked into Python's development cycle anyway. I also think the benefit to users is a false one since it will be much harder to write applications that are portable across Python releases.
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI, I don't think there's a great enough need to justify it in the stdlib.
I agree with everything Alex said here. -Barry
On Fri, 27 Jan 2012 16:10:51 -0500
Barry Warsaw
I'm -1 on this as well. It just feels like the completely wrong way to stabilize an API, and I think despite the caveats that are explicit in __preview__, Python will just catch tons of grief from users and haters about API instability anyway, because from a practical standpoint, applications written using __preview__ APIs *will* be less stable.
Well, obviously __preview__ is not for the most conservative users. I think the name clearly conveys the idea that you are trying out something which is not in its definitive state, doesn't it?
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI, I don't think there's a great enough need to justify it in the stdlib.
I agree with everything Alex said here.
The idea that being on PyPI is sufficient is nice but flawed (the IPaddr example). PyPI doesn't guarantee any visibility (how many packages are there?). Furthermore, having users is not a guarantee that the API is appropriate, either; it just means that the API is appropriate for *some* users. On the other hand, __preview__ would clearly signal that something is on the verge of being frozen as an official stdlib API, and would prompt people to actively try it. Regards Antoine.
On 27 January 2012 21:48, Antoine Pitrou
Well, obviously __preview__ is not for the most conservative users. I think the name clearly conveys the idea that you are trying out something which is not in its definitive state, doesn't it?
Agreed. But that in turn implies to me that __preview__.foo should not be maintained as an alias for foo once it gets "promoted". Firstly, because if you're not comfortable with changing your code to make the simple change to remove the __preview__ prefix in the import, then how could you be comfortable with using a module with no compatibility guarantee anyway? (BTW, I assume that the normal incantation would actually be "from __preview__ import foo", as that limits the module name change to the import statement).
The idea that being on PyPI is sufficient is nice but flawed (the IPaddr example). PyPI doesn't guarantee any visibility (how many packages are there?). Furthermore, having users is not a guarantee that the API is appropriate, either; it just means that the API is appropriate for *some* users.
Agreed entirely. We need a way to signal somehow that a module is being seriously considered for stdlib inclusion. That *would* result in more uptake, and hence more testing and feedback. As an example, I would definitely try out MRAB's regex module if it were in __preview__, but even though I keep meaning to, I've never actually got round to bothering to download from PyPI - I end up just using the stdlib re for my one-off scripts.
On the other hand, __preview__ would clearly signal that something is on the verge of being frozen as an official stdlib API, and would prompt people to actively try it.
Precisely. It's in effect a "last call for feedback", and people should view it that way, in my opinion. Paul.
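The "remove the __preview__ prefix on promotion" workflow Paul describes could be handled in user code with a fallback import. A sketch (``__preview__`` never actually shipped, and ``regex`` is just the running example from this thread; on an ordinary interpreter both fall through to the stable ``re`` module):

```python
try:
    # After graduation the module lives at the top level.
    import regex
except ImportError:
    try:
        # While in preview it would live under the proposed package.
        from __preview__ import regex
    except ImportError:
        # Neither is available on this interpreter; fall back to the
        # stable re module, whose API regex is largely compatible with.
        import re as regex

match = regex.match(r"\d+", "123abc")
print(match.group())  # prints 123
```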
On Jan 27, 2012, at 10:02 PM, Paul Moore wrote:
Agreed entirely. We need a way to signal somehow that a module is being seriously considered for stdlib inclusion. That *would* result in more uptake, and hence more testing and feedback.
I'm just not convinced that's a message that we can clearly articulate to users of the library. I think most people will see it in the module documentation, just use it, and then complain when it's gone. -Barry
On Jan 27, 2012, at 10:48 PM, Antoine Pitrou wrote:
On Fri, 27 Jan 2012 16:10:51 -0500 Barry Warsaw
wrote: I'm -1 on this as well. It just feels like the completely wrong way to stabilize an API, and I think despite the caveats that are explicit in __preview__, Python will just catch tons of grief from users and haters about API instability anyway, because from a practical standpoint, applications written using __preview__ APIs *will* be less stable.
Well, obviously __preview__ is not for the most conservative users. I think the name clearly conveys the idea that you are trying out something which is not in its definitive state, doesn't it?
Maybe. I could quibble about the name, but let's not bikeshed on that right now. The problem as I see it is that __preview__ will be very tempting to use in production. In fact, its use case is almost predicated on that. (We want you to use it so you can tell us if the API is good.) Once people use it, they will probably ship code that relies on it, and then the pressure will be applied to us to continue to support that API even if a newer, better one gets promoted out of __preview__. I worry that over time, for all practical purposes, there won't be much difference between __preview__ and the stdlib.
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI, I don't think there's a great enough need to justify it in the stdlib.
I agree with everything Alex said here.
The idea that being on PyPI is sufficient is nice but flawed (the IPaddr example). PyPI doesn't guarantee any visibility (how many packages are there?). Furthermore, having users is not a guarantee that the API is appropriate, either; it just means that the API is appropriate for *some* users.
I can't argue with that, it's just that I don't think __preview__ solves that problem. And it seems to me that __preview__ introduces a whole 'nother set of problems on top of that. So taking the IPaddr example further. Would having it in the stdlib, relegated to an explicitly unstable API part of the stdlib, increase eyeballs enough to generate the kind of API feedback we're looking for, without imposing an additional maintenance burden on us? If you were writing an app that used something in __preview__, how would you provide feedback on what parts of the API you'd want to change, *and* how would you adapt your application to use those better APIs once they became available 18 months from now? I think we'll just see folks using the unstable APIs and then complaining when we remove them, even though they *know* *upfront* that these APIs will go away. I'm also nervous about it from an OS vender point of view. Should I reject any applications that import from __preview__? Or do I have to make a commitment to support those APIs longer than Python does because the application that uses it is important to me? I think the OS vendor problem is easier with an application that uses some PyPI package, because I can always make that package available to the application by pulling in the version I care about. It's harder if a newer, incompatible version is released upstream and I want to provide both, but I don't think __preview__ addresses that. A robust, standard approach to versioning of modules would though, and I think would better solve what __preview__ is trying to solve.
On the other hand, __preview__ would clearly signal that something is on the verge of being frozen as an official stdlib API, and would prompt people to actively try it.
I'm not so sure about that. If I were to actively try it, I'm not sure how much motivation I'd have to rewrite key parts of my code when an incompatible version gets promoted to the un__preview__d stdlib. -Barry
On Fri, 27 Jan 2012 17:54:14 -0500
Barry Warsaw
On Jan 27, 2012, at 10:48 PM, Antoine Pitrou wrote:
On Fri, 27 Jan 2012 16:10:51 -0500 Barry Warsaw
wrote: I'm -1 on this as well. It just feels like the completely wrong way to stabilize an API, and I think despite the caveats that are explicit in __preview__, Python will just catch tons of grief from users and haters about API instability anyway, because from a practical standpoint, applications written using __preview__ APIs *will* be less stable.
Well, obviously __preview__ is not for the most conservative users. I think the name clearly conveys the idea that you are trying out something which is not in its definitive state, doesn't it?
Maybe. I could quibble about the name, but let's not bikeshed on that right now. The problem as I see it is that __preview__ will be very tempting to use in production. In fact, its use case is almost predicated on that. (We want you to use it so you can tell us if the API is good.)
That's my opinion too. But using it in production doesn't mean you lose control on the code and its users. Perhaps you are used to a kind of production where the code gets disseminated all over the GNUniverse :) But for most people "production" means a single server or machine where they have entire control.
If you were writing an app that used something in __preview__, how would you provide feedback on what parts of the API you'd want to change, *and* how would you adapt your application to use those better APIs once they became available 18 months from now?
For the former, the normal channels probably apply (bug tracker or python-dev). For the latter, depending on the API change, catching e.g. AttributeError on module lookup, or TypeError on function call, or explicitly examining the Python version are all plausible choices. Let's take another example: the regex module, where the API is unlikely to change much (since it's meant to be re-compatible), and the main concerns are ease of maintenance, data-wise compatibility with re (rather than API-wise), performance, and the like.
I think we'll just see folks using the unstable APIs and then complaining when we remove them, even though they *know* *upfront* that these APIs will go away.
Hmm, isn't that a bit pessimistic about our users?
I'm also nervous about it from an OS vender point of view. Should I reject any applications that import from __preview__? Or do I have to make a commitment to support those APIs longer than Python does because the application that uses it is important to me?
Well, is the application supported upstream? If yes, then there shouldn't be any additional burden. If no, then you have a complication indeed.
A robust, standard approach to versioning of modules would though, and I think would better solve what __preview__ is trying to solve.
I don't think versioning can replace API stability. __preview__ is explicitly and visibly special, and that's a protection against us becoming too complacent.
On the other hand, __preview__ would clearly signal that something is on the verge of being frozen as an official stdlib API, and would prompt people to actively try it.
I'm not so sure about that. If I were to actively try it, I'm not sure how much motivation I'd have to rewrite key parts of my code when an incompatible version gets promoted to the un__preview__d stdlib.
Obviously you would only use a module from __preview__ if the functionality is exciting enough for you (or the cost/benefit ratio is good enough). Regards Antoine.
On Sat, Jan 28, 2012 at 8:54 AM, Barry Warsaw
I think the OS vendor problem is easier with an application that uses some PyPI package, because I can always make that package available to the application by pulling in the version I care about. It's harder if a newer, incompatible version is released upstream and I want to provide both, but I don't think __preview__ addresses that. A robust, standard approach to versioning of modules would though, and I think would better solve what __preview__ is trying to solve.
I'd be A-OK with an explicit requirement that *any* module shipped in __preview__ must have a third-party supported multi-version compatible alternative on PyPI. (PEP 2 actually pretty much says that should be the case, but making it mandatory in the __preview__ case would be a good idea). As an OS vendor, you'd then be able to say: "Don't use __preview__, since that will cause problems when we next upgrade the system Python. Use the PyPI version instead." Then the stdlib docs for that module (while it is in __preview__) would say "If you are able to easily use third party packages, package <X> offers this API for multiple Python versions with stronger API stability guarantees. This preview version of the module is for use in environments that specifically target a single Python version and/or where the use of third party packages outside the standard library poses additional complications beyond simply downloading and installing the code." Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jan 28, 2012, at 11:37 AM, Nick Coghlan wrote:
Then the stdlib docs for that module (while it is in __preview__) would say "If you are able to easily use third party packages, package <X> offers this API for multiple Python versions with stronger API stability guarantees. This preview version of the module is for use in environments that specifically target a single Python version and/or where the use of third party packages outside the standard library poses additional complications beyond simply downloading and installing the code."
Would it be acceptable then for a distro to disable __preview__ or empty it out? The thinking goes like this: if you would normally use an __preview__ module because you can't get approval to download some random package from PyPI, well then your distro probably could or should provide it, so get it from them. In fact, if the number of __preview__ modules is kept low, *and* PyPI equivalents were a requirement, then a distro vendor could just ensure those PyPI versions are available as distro packages outside of the __preview__ stdlib namespace (i.e. in their normal third-party namespace). Then folks developing on that platform could just use the distro package and ignore __preview__. If that's acceptable, then maybe it should be explicitly so in the PEP. -Barry
On 1/27/2012 8:48 PM, Barry Warsaw wrote:
The thinking goes like this: if you would normally use an __preview__ module because you can't get approval to download some random package from PyPI, well then your distro probably could or should provide it, so get it from them.
That is my thought about the entire __preview__ concept. Anything that would/should go into __preview__ would be better off being packaged for a couple of key distros (e.g., Ubuntu/Fedora/Gentoo) where they would get better visibility than just being on PyPI and would be more flexible in terms of release schedule to allow API changes. If the effort being put into making the __preview__ package was put into packaging those modules for distros, then you would get the same exposure with better flexibility and a better maintenance story. The whole idea of __preview__ seems to be a workaround for the difficult packaging story for Python modules on common distros -- stuffing them into __preview__ is a cheat to get the distro packagers to distribute these interesting modules since we would be bundling them. However, as you have pointed out, it would very desirable to them to not do so. So in the end, these modules may not receive as wide of visibility as the PEP suggests. I could very easily imagine the more stable distributions refusing or patching anything that used __preview__ in order to eliminate difficulties. -- Scott Dial scott@scottdial.com
On Sat, 28 Jan 2012 00:09:13 -0500
Scott Dial
On 1/27/2012 8:48 PM, Barry Warsaw wrote:
The thinking goes like this: if you would normally use an __preview__ module because you can't get approval to download some random package from PyPI, well then your distro probably could or should provide it, so get it from them.
That is my thought about the entire __preview__ concept. Anything that would/should go into __preview__ would be better off being packaged for a couple of key distros (e.g., Ubuntu/Fedora/Gentoo) where they would get better visibility than just being on PyPI and would be more flexible in terms of release schedule to allow API changes.
This is a red herring. First, not everyone uses a distro. There are almost a million monthly downloads of the Windows installers. Second, what a distro puts in their packages has nothing to do with considering a module for inclusion in the Python stdlib. Besides, I don't understand how being packaged by a distro makes a difference. My distro has thousands of packages, many of them quite obscure. OTOH, being shipped in the stdlib *and* visibly documented on python.org (in the stdlib docs, in the what's new, etc.) will make a difference. Regards Antoine.
On 28/01/2012 05:09, Scott Dial wrote:
On 1/27/2012 8:48 PM, Barry Warsaw wrote:
The thinking goes like this: if you would normally use an __preview__ module because you can't get approval to download some random package from PyPI, well then your distro probably could or should provide it, so get it from them.

That is my thought about the entire __preview__ concept. Anything that would/should go into __preview__ would be better off being packaged for a couple of key distros (e.g., Ubuntu/Fedora/Gentoo) where they would get better visibility than just being on PyPI and would be more flexible in terms of release schedule to allow API changes.
If the effort being put into making the __preview__ package was put into packaging those modules for distros,
That effort wouldn't be put in though. Largely those involved in working on Python are not the ones packaging for Linux distributions. So it isn't an alternative to __preview__ - it could happily be done alongside it though. Those who work on Python won't just switch to Linux if this proposal isn't accepted, they'll do different work on Python instead.
then you would get the same exposure

Packaging libraries for Linux gets you no exposure on Windows or the Mac, so __preview__ is wider.

with better flexibility and a better maintenance story. The whole idea of __preview__ seems to be a workaround for the difficult packaging story for Python modules on common distros

I don't know where you got that impression. :-)
One of the reasons for __preview__ is that it means integrating libraries with the Python build and test systems, for all platforms. Packaging for [some-variants-of] Linux only doesn't do anything for this. All the best, Michael
On Sat, Jan 28, 2012 at 11:48 AM, Barry Warsaw
Would it be acceptable then for a distro to disable __preview__ or empty it out?
The thinking goes like this: if you would normally use an __preview__ module because you can't get approval to download some random package from PyPI, well then your distro probably could or should provide it, so get it from them. In fact, if the number of __preview__ modules is kept low, *and* PyPI equivalents were a requirement, then a distro vendor could just ensure those PyPI versions are available as distro packages outside of the __preview__ stdlib namespace (i.e. in their normal third-party namespace). Then folks developing on that platform could just use the distro package and ignore __preview__.
If that's acceptable, then maybe it should be explicitly so in the PEP.
I think that's an excellent idea - in that case, the distro vendor is taking over the due diligence responsibilities, which are the main point of __preview__. Similarly, sumo distributions like ActiveState or Python(x, y) could choose to add the PyPI version. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sat, Jan 28, 2012 at 4:37 PM, Nick Coghlan
I think that's an excellent idea - in that case, the distro vendor is taking over the due diligence responsibilities, which are the main point of __preview__.
Heh, contradicted myself in my next email. python-dev handling due diligence is a key benefit for *stdlib inclusion*, not __preview__ per se. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 28 January 2012 01:48, Barry Warsaw
The thinking goes like this: if you would normally use an __preview__ module because you can't get approval to download some random package from PyPI, well then your distro probably could or should provide it, so get it from them. In fact, if the number of __preview__ modules is kept low, *and* PyPI equivalents were a requirement, then a distro vendor could just ensure those PyPI versions are available as distro packages outside of the __preview__ stdlib namespace (i.e. in their normal third-party namespace). Then folks developing on that platform could just use the distro package and ignore __preview__.
Just so that you know that such cases exist, I am in a position where I have access to systems with (distro-supplied) Python installed. I can use anything supplied with Python (i.e., the stdlib - and __preview__ would fall into this category as well). And yet I have essentially no means of gaining access to any 3rd party modules, whether they are packaged by the distro or obtained from PyPI. (And "build your own" isn't an option in many cases, if only because a C compiler may well not be available!) This is essentially due to corporate inertia and bogged down "do-nothing" policies rather than due diligence or supportability concerns. But it is a reality for me (and many others, I suspect). Having said this, of course, the same corporate inertia means that Python 3.3 is a pipe-dream for me in those environments for many years yet. So ignoring them may be reasonable. Just some facts to consider :-) Paul.
__preview__ would fall into this category as well). And yet I have essentially no means of gaining access to any 3rd party modules, whether they are packaged by the distro or obtained from PyPI. (And "build your own" isn't an option in many cases, if only because a C compiler may well not be available!) This is essentially due to corporate inertia and bogged down "do-nothing" policies rather than due diligence or supportability concerns. But it is a reality for me (and many others, I suspect).
Having said this, of course, the same corporate inertia means that Python 3.3 is a pipe-dream for me in those environments for many years yet. So ignoring them may be reasonable.
You clearly want access to external modules sooner. A preview namespace addresses this indirectly. The separated stdlib versioning concept is far superior for this use case.
__preview__ would fall into this category as well). And yet I have essentially no means of gaining access to any 3rd party modules, whether they are packaged by the distro or obtained from PyPI. (And "build your own" isn't an option in many cases, if only because a C compiler may well not be available!) This is essentially due to corporate inertia and bogged down "do-nothing" policies rather than due diligence or supportability concerns. But it is a reality for me (and many others, I suspect).
On 28/01/2012 13:55, Matt Joiner wrote:
Having said this, of course, the same corporate inertia means that Python 3.3 is a pipe-dream for me in those environments for many years yet. So ignoring them may be reasonable.
You clearly want access to external modules sooner. A preview namespace addresses this indirectly. The separated stdlib versioning concept is far superior for this use case.
There are two proposals for the standard library - one is to do development in a separate repository to make it easier for other implementations to contribute. To my understanding this proposal is mildly controversial, but doesn't involve changing the way the standard library is distributed or versioned. A separate proposal about standard library versioning has been floated but is *much* more controversial and therefore much less likely to happen. So I wouldn't hold your breath on it... All the best, Michael Foord
_______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.u...
Executive summary: If the promise to remove the module from __preview__ is credible (ie, strictly kept), then __preview__ will have a specific audience in those who want the stdlib candidate code and are willing to deal with a certain amount of instability in that code. (Whether that audience is big enough to be worth the effort of managing __preview__ is another question.) Barry Warsaw writes:
I agree with everything Alex said here.
I don't necessarily disagree. But:
I can't argue with that, it's just that I don't think __preview__ solves [the visibility] problem.
I do disagree with that. I frequently refer to the library reference for modules that do what I need, but almost never to PyPI (my own needs are usually not very hard to program, but if there's a stdlib module it's almost surely far more general, robust, and tested than my special-case code would be; PyPI provides far less of a robustness guarantee than a stdlib candidate would). I don't know how big or important a use case this is, though I think that Antoine's point that a similar argument applies to those who develop software for their own internal use (like me, but they have actual standards for QA) is valid.
I think we'll just see folks using the unstable APIs and then complaining when we remove them, even though they *know* *upfront* that these APIs will go away.
So maybe the Hon. Mr. Broytman would be willing to supply a form letter for those folks, too. "We promised to remove the module from __preview__, and we did. We warned you the API would be likely unstable, and it was. You have no complaint." would be the gist.
A robust, standard approach to versioning of modules would though, and I think would better solve what __preview__ is trying to solve.
I suspect that "robust, standard approach to versioning of modules" is an oxymoron. The semantics of "module version" from the point of view of application developers and users is very complex, and cannot be encapsulated in a linear sequence. The only reliable comparison that can be done on versions is equality (and Python knows that; that's why there is a stdlib bound to the core in the first place!)
I'm not so sure about that. If I were to actively try it, I'm not sure how much motivation I'd have to rewrite key parts of my code when an incompatible version gets promoted to the un__preview__d stdlib.
So use the old version of Python. You do that anyway. Or avoid APIs where you are unwilling to deal with more or less frequent changes. You do that anyway. And if you're motivated enough, use __preview__. I don't understand what you think you lose here.
On Sat, Jan 28, 2012 at 3:22 PM, Stephen J. Turnbull
Executive summary:
If the promise to remove the module from __preview__ is credible (ie, strictly kept), then __preview__ will have a specific audience in those who want the stdlib candidate code and are willing to deal with a certain amount of instability in that code.
People need to remember there's another half to this equation: the core dev side.

The reason *regex* specifically isn't in the stdlib already is largely due to (perhaps excessive) concerns about the potential maintenance burden. It's not a small chunk of code and we don't want to deal with another bsddb. That's the main roadblock to inclusion. Not lack of user demand. Not blindness to the problems with re. Just concerns about maintainability. Add to that some niggling concerns about backwards compatibility in obscure corner cases that may not be exercised by current users. And so we have an impasse. Matthew has indicated he's happy to include it and maintain it as part of the core, but it hasn't really gone anywhere because we don't currently have a good way to address those maintainability concerns (aside from saying "you're worrying about it too much", which isn't what I would call persuasive).

That's what __preview__ gives us: a way to deal with the *negative* votes that keep positive additions out of the standard library. Most of the PEP's arguments for due diligence etc are actually talking about why we want things in the standard library in the first place, rather than about __preview__ in particular.

The core idea behind the __preview__ namespace is to allow *3* possible responses when a module is proposed for stdlib inclusion:
1. Yes, that's a good idea, we'll add it (cf. lzma for 3.3)
2. Maybe, so we'll add it to __preview__ for a release and see if it blows up in our face (hopefully at least regex for 3.3, maybe ipaddr and daemon as well)
3. No, not going to happen.
Currently, anything where we would answer "2" ends up becoming a "3" by default, and that's not a good thing for the long-term health of the language.
The reason this will be more effective in building core developer confidence than third party distribution via PyPI is due to a few different things:
- we all run the test suite, so we get to see that the software builds and tests effectively
- we know what our own buildbots cover, so we know it's passing on all those platforms
- we'll get to see more of the related discussions in channels we monitor *anyway* (i.e. the bug tracker, python-dev)

As far as the criteria for failing to graduate goes, I'd say something that ends up in __preview__ will almost always make it into the main part of the standard library, with the following exceptions:
- excessive build process, test suite and buildbot instability. Whether this is due to fragile test cases or fragile code, we don't want to deal with another bsddb. If the test suite can't be stabilised over the course of an entire feature release, then the module would most likely be rejected rather than allowing it to graduate to the standard library.
- strongly negative (or just plain confused) user feedback. We deal with feedback on APIs all the time. Sometimes we add new ones, or tweak the existing ones. Occasionally we'll judge them to be irredeemably broken and just plain remove them (cf. CObject, contextlib.nested, Bastion, rexec). This wouldn't change just because a module was in __preview__ - instead, we'd just have another option available to us (i.e. rejecting the module for stdlib inclusion post-preview rather than trying to fix it).

Really, the main benefit for end users doesn't lie in __preview__ itself: it lies in the positive effect __preview__ will have on the long term evolution of the standard library, as it aims to turn python-dev's inherent conservatism (which is a good thing!) into a speed bump rather than a road block. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Nick Coghlan writes:
People need to remember there's another half to this equation: the core dev side.
Why? There's nothing about it in the PEP.<wink/>
The reason *regex* specifically isn't in the stdlib already is largely due to (perhaps excessive) concerns about the potential maintenance burden.
But then giving regex as an example seems to contradict the PEP: "The only difference between preview APIs and the rest of the standard library is that preview APIs are explicitly exempted from the usual backward compatibility guarantees," "in principle, most modules in the __preview__ package should eventually graduate to the stable standard library," and "whenever the Python core development team decides that a new module should be included into the standard library, but isn't sure about whether the module's API is optimal".

True, there were a few bits spilled on the possibility of being "without sufficient developer support to maintain it," but I read that as a risk that is basically a consequence of instability of the API. The rationale is entirely focused on API instability, and a focus on API instability is certainly the reason for calling it "__preview__" rather than "__experimental__".

I don't have an opinion on whether this is an argument for rejecting the PEP or for rewriting it (specifically, seriously beefing up the "after trying it, maybe we won't want to maintain it" rationale). I also think that if "we need to try it to decide if the maintenance burden is acceptable" is a rationale, the name "__experimental__" should be seriously reconsidered as more accurately reflecting the intended content of the package.
On Sat, Jan 28, 2012 at 6:38 PM, Stephen J. Turnbull
I don't have an opinion on whether this is an argument for rejecting the PEP or for rewriting it (specifically, seriously beefing up the "after trying it, maybe we won't want to maintain it" rationale). I also think that if "we need to try it to decide if the maintenance burden is acceptable" is a rationale, the name "__experimental__" should be seriously reconsidered as more accurately reflecting the intended content of the package.
I think it's an argument for rewriting it (and, as you point out, perhaps reverting to __experimental__ as the proposed name). Eli started from a draft I wrote a while back and my own thinking on the topic wasn't particularly clear (in fact, it's only this thread that has really clarified things for me).

The main thing I've realised is that the end user benefits currently discussed in the PEP are really about the importance of a robust *standard library*. They aren't specific to the new namespace at all - that part of the rationale is really only needed to counter the predictable "who cares about the standard library, we can just use PyPI!" responses (and the answer is, "lots of people that can't or won't use PyPI modules for a wide range of reasons").

The only reason to add a new double-underscore namespace is to address *core developer* concerns in cases where we're *almost* sure that we want to add the module to the standard library, but aren't quite prepared to commit to maintaining it for the life of the 3.x series (cf. 2.x and the ongoing problems we had with keeping the bsddb module working properly, especially before Jesus Cea stepped up to wrangle it into submission).

It's basically us saying to Python users "We're explicitly flagging this PyPI module for inclusion in the next major Python release. We've integrated it into our build process, test suite and binary releases, so you don't even have to download it from PyPI in order to try it out, you can just import it from the __preview__ namespace (although you're still free to download it from PyPI if you prefer - in fact, if you need to support multiple Python versions, we actively recommend it!). There's still a small chance this module won't make the grade and will be dropped from the standard library entirely (that's why it's only a preview), but most likely it will move into the main part of the standard library with full backwards compatibility guarantees in the next release". Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
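Nick's recommendation - prefer the PyPI release when you need to support multiple Python versions, fall back to the bundled preview copy otherwise - could be sketched as a small helper. This is purely illustrative: the `__preview__` package was only ever proposed and never shipped, and the helper name is made up for this sketch.

```python
import importlib

def import_with_preview_fallback(name):
    """Prefer the standalone PyPI distribution of a module, falling back
    to the proposed __preview__ copy bundled with the interpreter.

    The '__preview__' package name comes from PEP 408 and never shipped;
    this function is an illustrative sketch, not an official API.
    """
    for candidate in (name, "__preview__." + name):
        try:
            return importlib.import_module(candidate)
        except ImportError:
            continue
    raise ImportError("no stable or preview version of %r is available" % name)
```

An application would then write e.g. `regex = import_with_preview_fallback("regex")` and work with whichever copy was found.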
On 28 January 2012 09:18, Nick Coghlan
It's basically us saying to Python users "We're explicitly flagging this PyPI module for inclusion in the next major Python release. We've integrated it into our build process, test suite and binary releases, so you don't even have to download it from PyPI in order to try it out, you can just import it from the __preview__ namespace (although you're still free to download it from PyPI if you prefer - in fact, if you need to support multiple Python versions, we actively recommend it!). There's still a small chance this module won't make the grade and will be dropped from the standard library entirely (that's why it's only a preview), but most likely it will move into the main part of the standard library with full backwards compatibility guarantees in the next release".
+1. Paul.
On 28/01/2012 13:04, Paul Moore wrote:
On 28 January 2012 09:18, Nick Coghlan
wrote: It's basically us saying to Python users "We're explicitly flagging this PyPI module for inclusion in the next major Python release. We've integrated it into our build process, test suite and binary releases, so you don't even have to download it from PyPI in order to try it out, you can just import it from the __preview__ namespace (although you're still free to download it from PyPI if you prefer - in fact, if you need to support multiple Python versions, we actively recommend it!). There's still a small chance this module won't make the grade and will be dropped from the standard library entirely (that's why it's only a preview), but most likely it will move into the main part of the standard library with full backwards compatibility guarantees in the next release". +1.
Yep, nice way of putting it - and summing up the virtues of the approach. (Although I might say "most likely it will move into the main part of the standard library with full backwards compatibility guarantees in a future release".) Michael
Paul.
On Sat, Jan 28, 2012 at 5:04 AM, Paul Moore
On 28 January 2012 09:18, Nick Coghlan
wrote: It's basically us saying to Python users "We're explicitly flagging this PyPI module for inclusion in the next major Python release. We've integrated it into our build process, test suite and binary releases, so you don't even have to download it from PyPI in order to try it out, you can just import it from the __preview__ namespace (although you're still free to download it from PyPI if you prefer - in fact, if you need to support multiple Python versions, we actively recommend it!). There's still a small chance this module won't make the grade and will be dropped from the standard library entirely (that's why it's only a preview), but most likely it will move into the main part of the standard library with full backwards compatibility guarantees in the next release".
+1.
Hm. You could do this just as well without a __preview__ package. You just flag the module as experimental in the docs and get on with your life.

We have some experience with this in Google App Engine. We used to use a separate "labs" package in our namespace and when packages were deemed stable enough they were moved from labs to non-labs. But the move always turned out to be a major pain, causing more breakage than we would have had if we had simply kept the package location the same but let the API mutate. Now we just put new, experimental packages in the right place from the start, and put a loud "experimental" banner on all pages of their docs, which is removed once the API is stable.

There is much less pain now: while incompatible changes do happen for experimental packages, they are not frequent, and rarely earth-shattering, and usually the final step is simply removing the banner without making any (incompatible) changes to the code. This means that the final step is painless for early adopters, thereby rewarding them for their patience instead of giving them one final kick while they sort out the import changes.

So I do not support the __preview__ package. I think we're better off flagging experimental modules in the docs than in their name. For the specific case of the regex module, the best way to adoption may just be to include it in the stdlib as regex and keep it there. Any other solution will just cause too much anxiety. -- --Guido van Rossum (python.org/~guido)
On Jan 28, 2012, at 09:15 AM, Guido van Rossum wrote:
So I do not support the __preview__ package. I think we're better off flagging experimental modules in the docs than in their name. For the specific case of the regex module, the best way to adoption may just be to include it in the stdlib as regex and keep it there. Any other solution will just cause too much anxiety.
+1 What does the PEP give you above this "simple as possible" solution? -Barry
On Sat, 28 Jan 2012 13:14:36 -0500
Barry Warsaw
On Jan 28, 2012, at 09:15 AM, Guido van Rossum wrote:
So I do not support the __preview__ package. I think we're better off flagging experimental modules in the docs than in their name. For the specific case of the regex module, the best way to adoption may just be to include it in the stdlib as regex and keep it there. Any other solution will just cause too much anxiety.
+1
What does the PEP give you above this "simple as possible" solution?
"I think we'll just see folks using the unstable APIs and then complaining when we remove them, even though they *know* *upfront* that these APIs will go away." That problem would be much worse if some modules were simply marked "experimental" in the doc, rather than put in a separate namespace. You will see people copying recipes found on the internet without knowing that they rely on unstable APIs. Regards Antoine.
Antoine Pitrou
On Jan 28, 2012, at 09:15 AM, Guido van Rossum wrote:
So I do not support the __preview__ package. I think we're better off flagging experimental modules in the docs than in their name. For
On Sat, 28 Jan 2012 13:14:36 -0500 Barry Warsaw
wrote: the specific case of the regex module, the best way to adoption may just be to include it in the stdlib as regex and keep it there. Any other solution will just cause too much anxiety.
+1
What does the PEP give you above this "simple as possible" solution?
"I think we'll just see folks using the unstable APIs and then complaining when we remove them, even though they *know* *upfront* that these APIs will go away."
That problem would be much worse if some modules were simply marked "experimental" in the doc, rather than put in a separate namespace. You will see people copying recipes found on the internet without knowing that they rely on unstable APIs.
How about doing them the way we do deprecated modules, and have them spit warnings to stderr? Maybe add a flag and environment variable to disable that. -- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
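Mike's suggestion could be sketched roughly like this: a preview module warns when it is loaded, unless an opt-out environment variable is set. The variable name below is hypothetical (nothing like it exists in CPython), and FutureWarning is chosen only because, unlike DeprecationWarning, it is shown by default outside __main__.

```python
import os
import warnings

def warn_preview(module_name):
    """Emit a warning for a preview module unless the (hypothetical)
    PYTHONNOPREVIEWWARNING environment variable silences it.

    Returns True if a warning was issued, False if it was suppressed.
    """
    if os.environ.get("PYTHONNOPREVIEWWARNING"):
        return False
    warnings.warn(
        "%r is a preview module; its API may change or the module may be "
        "removed in the next feature release" % module_name,
        FutureWarning,
        stacklevel=3,
    )
    return True

# A preview module would call this at the top of its own source:
# warn_preview(__name__)
```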
On Saturday 28 January 2012 at 10:46 -0800, Mike Meyer wrote:
Antoine Pitrou
wrote: On Jan 28, 2012, at 09:15 AM, Guido van Rossum wrote:
So I do not support the __preview__ package. I think we're better off flagging experimental modules in the docs than in their name. For
On Sat, 28 Jan 2012 13:14:36 -0500 Barry Warsaw
wrote: the specific case of the regex module, the best way to adoption may just be to include it in the stdlib as regex and keep it there. Any other solution will just cause too much anxiety.
+1
What does the PEP give you above this "simple as possible" solution?
"I think we'll just see folks using the unstable APIs and then complaining when we remove them, even though they *know* *upfront* that these APIs will go away."
That problem would be much worse if some modules were simply marked "experimental" in the doc, rather than put in a separate namespace. You will see people copying recipes found on the internet without knowing that they rely on unstable APIs.
How about doing them the way we do deprecated modules, and have them spit warnings to stderr? Maybe add a flag and environment variable to disable that.
You're proposing that new experimental modules spit warnings when you use them? I don't think that's a good way of promoting their use :) (something we do want to do even though we also want to convey the idea that they're not yet "stable" or "fully approved") Regards Antoine.
Antoine Pitrou
Antoine Pitrou
wrote: You will see people copying recipes found on the internet without knowing that they rely on unstable APIs.
On Saturday 28 January 2012 at 10:46 -0800, Mike Meyer wrote:
How about doing them the way we do deprecated modules, and have them spit warnings to stderr? Maybe add a flag and environment variable to disable that.
You're proposing that new experimental modules spit warnings when you use them?
To be explicit, when the system loads them.
I don't think that's a good way of promoting their use :)
And importing something from __preview__ or __experimental__ or whatever won't? This thread did include the suggestion that they go into their final location instead of a magic module.
(something we do want to do even though we also want to convey the idea that they're not yet "stable" or "fully approved")
Doing it with a message pointing at the page describing the status makes sure users read the docs before using them. That solves the problem of using them without realizing it.
You're proposing that new experimental modules spit warnings when you use them?
To be explicit, when the system loads them.
There are many reasons to import a module, such as viewing its documentation. And the warning will trigger if the import happens in non-user code, such as a library; or when there is a fallback for the module not being present. People usually get annoyed by untimely warnings which don't warn about an actual problem.
(something we do want to do even though we also want to convey the idea that they're not yet "stable" or "fully approved")
Doing it with a message pointing at the page describing the status makes sure users read the docs before using them.
Sure, it's just much less user-friendly than conveying that idea in the module's namespace. Besides, it only works if warnings are not silenced. People are used to __future__ (and I've seen no indication that they don't like it). __preview__ is another application of the same pattern (using a special namespace to indicate the status of a feature). Regards Antoine.
On Sat, Jan 28, 2012 at 3:02 PM, Antoine Pitrou
There are many reasons to import a module, such as viewing its documentation. And the warning will trigger if the import happens in non-user code, such as a library; or when there is a fallback for the module not being present. People usually get annoyed by untimely warnings which don't warn about an actual problem.
As an alternative, how about a __preview__ or __provisional__ attribute on modules that are in this provisional state? So just add that big warning to the docs, as Guido suggested, and set the attribute as a programmatic indicator. Perhaps also add sys.provisional_modules (or wherever) to explicitly give the full list for the current Python version. -eric
On Saturday 28 January 2012 at 16:03 -0700, Eric Snow wrote:
On Sat, Jan 28, 2012 at 3:02 PM, Antoine Pitrou
wrote: There are many reasons to import a module, such as viewing its documentation. And the warning will trigger if the import happens in non-user code, such as a library; or when there is a fallback for the module not being present. People usually get annoyed by untimely warnings which don't warn about an actual problem.
As an alternative, how about a __preview__ or __provisional__ attribute on modules that are in this provisional state? So just add that big warning to the docs, as Guido suggested, and set the attribute as a programmatic indicator. Perhaps also add sys.provisional_modules (or wherever) to explicitly give the full list for the current Python version.
Well, how often do you examine the attributes of a module before using it? I think that's a much too obscure way to convey the information. Regards Antoine.
On Sat, Jan 28, 2012 at 4:08 PM, Antoine Pitrou
On Saturday 28 January 2012 at 16:03 -0700, Eric Snow wrote:
On Sat, Jan 28, 2012 at 3:02 PM, Antoine Pitrou
wrote: There are many reasons to import a module, such as viewing its documentation. And the warning will trigger if the import happens in non-user code, such as a library; or when there is a fallback for the module not being present. People usually get annoyed by untimely warnings which don't warn about an actual problem.
As an alternative, how about a __preview__ or __provisional__ attribute on modules that are in this provisional state? So just add that big warning to the docs, as Guido suggested, and set the attribute as a programmatic indicator. Perhaps also add sys.provisional_modules (or wherever) to explicitly give the full list for the current Python version.
Well, how often do you examine the attributes of a module before using it? I think that's a much too obscure way to convey the information.
Granted. However, actively looking for the attribute is only one of the lesser use-cases. The key is that it allows you to check any library programmatically for dependence on any of the provisional modules. The warning in the docs is important, but being able to have code check for it is important too. As a small bonus, it would show up in help for the module and in dir(). -eric
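Eric's proposal - a module-level flag plus a way to find every provisional module a program depends on - could be sketched as follows. The `__provisional__` attribute is his hypothetical convention; no stdlib module actually sets it, so the sketch fabricates one for illustration.

```python
import sys
import types

def provisional_modules_loaded():
    """Return the names of loaded modules that declare themselves
    provisional via the (hypothetical) __provisional__ attribute."""
    return sorted(
        name for name, mod in list(sys.modules.items())
        if getattr(mod, "__provisional__", False)
    )

# Simulate a provisional module for illustration only:
demo = types.ModuleType("demo_provisional")
demo.__provisional__ = True
sys.modules["demo_provisional"] = demo
```

Running `provisional_modules_loaded()` after importing a third-party library would then reveal any dependence on provisional APIs, which is the programmatic check Eric describes.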
On Sun, Jan 29, 2012 at 3:15 AM, Guido van Rossum
Hm. You could do this just as well without a __preview__ package. You just flag the module as experimental in the docs and get on with your life.
We have some experience with this in Google App Engine. We used to use a separate "labs" package in our namespace and when packages were deemed stable enough they were moved from labs to non-labs. But the move always turned out to be a major pain, causing more breakage than we would have had if we had simply kept the package location the same but let the API mutate. Now we just put new, experimental packages in the right place from the start, and put a loud "experimental" banner on all pages of their docs, which is removed once the API is stable.
There is much less pain now: while incompatible changes do happen for experimental package, they are not frequent, and rarely earth-shattering, and usually the final step is simply removing the banner without making any (incompatible) changes to the code. This means that the final step is painless for early adopters, thereby rewarding them for their patience instead of giving them one final kick while they sort out the import changes.
So I do not support the __preview__ package. I think we're better off flagging experimental modules in the docs than in their name. For the specific case of the regex module, the best way to adoption may just be to include it in the stdlib as regex and keep it there. Any other solution will just cause too much anxiety.
I'm willing to go along with that (especially given your report of AppEngine's experience with the "labs" namespace). Can we class this as a pronouncement on PEP 408? That is, "No to adding a __preview__ namespace, but yes to adding regex directly for 3.3"? Regards, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sat, Jan 28, 2012 at 5:33 PM, Nick Coghlan
I'm willing to go along with that (especially given your report of AppEngine's experience with the "labs" namespace).
Can we class this as a pronouncement on PEP 408? That is, "No to adding a __preview__ namespace, but yes to adding regex directly for 3.3"?
Yup. We seem to have a tendency to over-analyze decisions a bit lately (witness the hand-wringing about the hash collision DoS attack).

For those who worry about people who copy recipes that stop working, I think they're worrying too much. If people want to take a shortcut without reading the documentation or understanding the code they are copying, fine, but they should realize the limitations of free advice. I don't mean to put down the many great recipes that exist or the value of copying code to get started quickly. But I think our liability as maintainers of the library is sufficiently delineated when we clearly mark a module as experimental in the documentation. (Recipe authors should ideally also add this warning to their recipe if it depends on an experimental API.)

Finally, if you really want to put warnings in whenever an experimental module is being used, make it a silent warning, like SilentDeprecationWarning. That allows people to request more strict warnings without unduly alarming the users of an app.

-- --Guido van Rossum (python.org/~guido)
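Guido's "silent warning" suggestion maps onto a mechanism Python already has: PendingDeprecationWarning is ignored by the default warning filters, so ordinary users see nothing unless they opt in. A minimal sketch (the function name `provisional_api` is invented for illustration):

```python
import warnings

def provisional_api():
    # Emitted on every call, but invisible under the default filters.
    warnings.warn("this API is provisional and may change",
                  PendingDeprecationWarning, stacklevel=2)
    return 42

# Default behaviour: the warning is filtered out, users see nothing.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", PendingDeprecationWarning)
    provisional_api()
    silent = len(caught)   # 0

# Strict users opt in, e.g. via -W on the command line or an explicit filter:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", PendingDeprecationWarning)
    provisional_api()
    strict = len(caught)   # 1

print(silent, strict)      # 0 1
```

The explicit "ignore" filter in the first block simply mirrors what CPython's default filters already do for this category.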
On Sun, Jan 29, 2012 at 1:29 PM, Guido van Rossum
On Sat, Jan 28, 2012 at 5:33 PM, Nick Coghlan
wrote: I'm willing to go along with that (especially given your report of AppEngine's experience with the "labs" namespace).
Can we class this as a pronouncement on PEP 408? That is, "No to adding a __preview__ namespace, but yes to adding regex directly for 3.3"?
Yup. We seem to have a tendency to over-analyze decisions a bit lately (witness the hand-wringing about the hash collision DoS attack).
I have now updated PEP 408 accordingly (i.e. rejected, but with a specific note about regex). And (since Alex Gaynor brought it up off-list), I'll explicitly note here that I'm taking your approval as granting the special permission PEP 399 needs to accept a C extension module without a pure Python equivalent. Patches to *add* a pure Python version for use by other implementations are of course welcome (in practice, I suspect it's likely only in PyPy that such an engine would be fast enough to be usable). Regards, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sun, 29 Jan 2012 16:42:28 +1000
Nick Coghlan
On Sun, Jan 29, 2012 at 1:29 PM, Guido van Rossum
wrote: On Sat, Jan 28, 2012 at 5:33 PM, Nick Coghlan
wrote: I'm willing to go along with that (especially given your report of AppEngine's experience with the "labs" namespace).
Can we class this as a pronouncement on PEP 408? That is, "No to adding a __preview__ namespace, but yes to adding regex directly for 3.3"?
Yup. We seem to have a tendency to over-analyze decisions a bit lately (witness the hand-wringing about the hash collision DoS attack).
I have now updated PEP 408 accordingly (i.e. rejected, but with a specific note about regex).
It would be nice if that pronouncement or decision could outline the steps required to include an "experimental" module in the stdlib, and the steps required to move it from "experimental" to "stable". Regards Antoine.
On Tue, Jan 31, 2012 at 4:59 AM, Antoine Pitrou
It would be nice if that pronouncement or decision could outline the steps required to include an "experimental" module in the stdlib, and the steps required to move it from "experimental" to "stable".
Actually, that's a good idea - Eli, care to try your hand at writing up a counter-PEP to 408 that more explicitly documents Guido's preferred approach? It should document a standard note to be placed in the module documentation and in What's New for experimental/provisional/whatever modules. For example:

"The <X> module has been included in the standard library on a provisional basis. While major changes are not anticipated, as long as this notice remains in place, backwards incompatible changes are permitted if deemed necessary by the standard library developers. Such changes will not be made gratuitously - they will occur only if serious API flaws are uncovered that were missed prior to inclusion of the module. If the small chance of such changes is not acceptable for your use, the module is also available from PyPI with full backwards compatibility guarantees." (include direct link to module on PyPI)

As far as the provisional->stable transition goes, I'd say there are a couple of options:

1. Just make it part of the normal release process to ask for each provisional module "This hasn't been causing any dramas, shall we drop the provisional warning?"

2. Explicitly create 'release blocker' tracker issues for the *next* release whenever a provisional module is added. These will basically say "either drop the provisional warning for module <X> or bump this issue along to the next release"

The former is obviously easier; the latter means we're less likely to forget to do it. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jan 28, 2012, at 07:29 PM, Guido van Rossum wrote:
Finally, if you really want to put warnings in whenever an experimental module is being used, make it a silent warning, like SilentDeprecationWarning. That allows people to request more strict warnings without unduly alarming the users of an app.
I'll just note too that we have examples of "stable" APIs in modules being used successfully in the field for years, and still having long hand-wringing debates about whether the API choices are right or not. <cough>email</cough> Nothing beats people beating on it heavily for years in production code to shake things out. I often think a generic answer to "did I get the API right" could be "no, but it's okay" :) -Barry
On Mon, Jan 30, 2012 at 8:44 AM, Barry Warsaw
Nothing beats people beating on it heavily for years in production code to shake things out. I often think a generic answer to "did I get the API right" could be "no, but it's okay" :)
Heh, my answer to complaints about the urllib (etc) APIs being horrendous in the modern web era is to point out that they were put together in an age where "web" mostly meant "unauthenticated HTTP GET requests". They're hard to use for modern authentication protocols because they *predate* widespread use of such things... Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
I think advocacy of 3rd party modules would start with modules such as ipaddr, requests, and regex: linking directly to them from the Python core documentation, while requesting they hold a successful moratorium in order to be included in a later standard module release.
On Jan 30, 2012 10:47 AM, "Nick Coghlan"
On Mon, Jan 30, 2012 at 8:44 AM, Barry Warsaw
wrote: Nothing beats people beating on it heavily for years in production code to shake things out. I often think a generic answer to "did I get the API right" could be "no, but it's okay" :)
Heh, my answer to complaints about the urllib (etc) APIs being horrendous in the modern web era is to point out that they were put together in an age where "web" mostly meant "unauthenticated HTTP GET requests".
They're hard to use for modern authentication protocols because they *predate* widespread use of such things...
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia _______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com
On 1/28/2012 2:10 AM, Nick Coghlan wrote:
On Sat, Jan 28, 2012 at 3:22 PM, Stephen J. Turnbull
wrote: Executive summary:
If the promise to remove the module from __preview__ is credible (ie, strictly kept), then __preview__ will have a specific audience in those who want the stdlib candidate code and are willing to deal with a certain amount of instability in that code.
People need to remember there's another half to this equation: the core dev side.
The reason *regex* specifically isn't in the stdlib already is largely due to (perhaps excessive) concerns about the potential maintenance burden. It's not a small chunk of code and we don't want to deal with another bsddb.
...
Really, the main benefit for end users doesn't lie in __preview__ itself: it lies in the positive effect __preview__ will have on the long term evolution of the standard library, as it aims to turn python-dev's inherent conservatism (which is a good thing!) into a speed bump rather than a road block.
I was -0 on this proposal, but after Nick's discussion above I'm now +1. I also think it's worth thinking about how multiprocessing would have benefited from the __preview__ process. And for people saying "just use PyPI": that tends to exclude many Windows users from trying out packages that aren't pure Python.
On 27/01/2012 22:54, Barry Warsaw wrote:
On Jan 27, 2012, at 10:48 PM, Antoine Pitrou wrote:

On Fri, 27 Jan 2012 16:10:51 -0500 Barry Warsaw wrote:

I'm -1 on this as well. It just feels like the completely wrong way to stabilize an API, and I think despite the caveats that are explicit in __preview__, Python will just catch tons of grief from users and haters about API instability anyway, because from a practical standpoint, applications written using __preview__ APIs *will* be less stable.

Well, obviously __preview__ is not for the most conservative users. I think the name clearly conveys the idea that you are trying out something which is not in its definitive state, doesn't it?

Maybe. I could quibble about the name, but let's not bikeshed on that right now. The problem as I see it is that __preview__ will be very tempting to use in production. In fact, its use case is almost predicated on that. (We want you to use it so you can tell us if the API is good.) Once people use it, they will probably ship code that relies on it, and then the pressure will be applied to us to continue to support that API even if a newer, better one gets promoted out of __preview__. I worry that over time, for all practical purposes, there won't be much difference between __preview__ and the stdlib.
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI I don't think there's a great enough need to justify it in the stdlib.

I agree with everything Alex said here.

The idea that being on PyPI is sufficient is nice but flawed (the IPaddr example). PyPI doesn't guarantee any visibility (how many packages are there?). Furthermore, having users is not a guarantee that the API is appropriate, either; it just means that the API is appropriate for *some* users.

I can't argue with that, it's just that I don't think __preview__ solves that problem. And it seems to me that __preview__ introduces a whole 'nother set of problems on top of that.
So taking the IPaddr example further. Would having it in the stdlib, relegated to an explicitly unstable API part of the stdlib, increase eyeballs enough to generate the kind of API feedback we're looking for, without imposing an additional maintenance burden on us?
I think the answer is yes. That's kind of the crux of the matter I guess.
If you were writing an app that used something in __preview__, how would you provide feedback on what parts of the API you'd want to change,

The bugtracker.
*and* how would you adapt your application to use those better APIs once they became available 18 months from now?
How do users do it for the standard library? Using the third party version is one way.
I think we'll just see folks using the unstable APIs and then complaining when we remove them, even though they *know* *upfront* that these APIs will go away.
I'm also nervous about it from an OS vender point of view. Should I reject any applications that import from __preview__? Or do I have to make a commitment to support those APIs longer than Python does because the application that uses it is important to me?
I think the OS vendor problem is easier with an application that uses some PyPI package, because I can always make that package available to the application by pulling in the version I care about. It's harder if a newer, incompatible version is released upstream and I want to provide both, but I don't think __preview__ addresses that. A robust, standard approach to versioning of modules would though, and I think would better solve what __preview__ is trying to solve.
Don't OS vendors go further and say "pin your dependency to the version we ship", whether it's in the Python standard library or not? So "just use a more recent version from PyPI" is explicitly not an option for people using system packages.

As OS packagers tend to target a specific version of Python, using __preview__ for that version would be fine - and when they upgrade to the next version applications may need fixing in the same way as they would if the system packaged a new release of the third party library. (When moving between Ubuntu distributions I've found that my software using system packages often needs to change because the version of some library has now changed.)

Plus having a package in __preview__ has no bearing on whether or not the system packages the third party version, so I think it's a bit of a red herring.

Michael
On the other hand, __preview__ would clearly signal that something is on the verge of being frozen as an official stdlib API, and would prompt people to actively try it.

I'm not so sure about that. If I were to actively try it, I'm not sure how much motivation I'd have to rewrite key parts of my code when an incompatible version gets promoted to the un__preview__d stdlib.
-Barry
-- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html
On Sat, Jan 28, 2012 at 3:26 AM, Alex
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI I don't think there's a great enough need to justify it in the stdlib.
And what about a project like regex, which *has* the eyeballs on PyPI, but the core devs aren't confident enough of its maintainability yet to be happy about adding it directly to the stdlib with full backwards compatibility guarantees? The easy answer for us in that context is to just not add it (i.e. the status quo), which isn't a healthy outcome for the overall language ecosystem. Really, regex is the *reason* this PEP exists: we *know* we need to either replace or seriously enhance "re" (since its Unicode handling isn't up to scratch), but we're only *pretty sure* adding "regex" to the stdlib is the right answer. Adding "__preview__.regex" instead gives us a chance to back out if we uncover serious problems (e.g. with the cross-platform support). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jan 28, 2012, at 11:13 AM, Nick Coghlan wrote:
Really, regex is the *reason* this PEP exists: we *know* we need to either replace or seriously enhance "re" (since its Unicode handling isn't up to scratch), but we're only *pretty sure* adding "regex" to the stdlib is the right answer. Adding "__preview__.regex" instead gives us a chance to back out if we uncover serious problems (e.g. with the cross-platform support).
I'd also feel much better about this PEP if we had specific ways to measure success. If, for example, regex were added to Python 3.3, but removed from 3.4 because we didn't get enough feedback about it, then I'd consider the approach put forward in this PEP to be a failure. Experiments that fail are *okay* of course, if they are viewed as experiments, there are clear metrics to measure their success, and we have the guts to end the experiment if it doesn't work out. Of course, if it's a resounding success, then that's fantastic too. -Barry
On Fri, Jan 27, 2012 at 12:26 PM, Alex
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI I don't think there's a great enough need to justify it in the stdlib.
Strongly agree.
FWIW I'm now -1 for this idea. Stronger integration with PyPI and
packaging systems is much preferable. Python core public releases are
no place for testing.
On Sat, Jan 28, 2012 at 2:42 AM, Matt Joiner
On Fri, Jan 27, 2012 at 12:26 PM, Alex
wrote: I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI I don't think there's a great enough need to justify it in the stdlib.
Strongly agree.
On Sat, Jan 28, 2012 at 9:49 AM, Matt Joiner
FWIW I'm now -1 for this idea. Stronger integration with PyPI and packaging systems is much preferable. Python core public releases are no place for testing.
+1. I'd much rather just use the module from PyPI. It would be good to have a practical guide on how to manage the transition from third-party to core library module though. A PEP with a list of modules earmarked for upcoming inclusion in the standard library (and which Python version they're intended to be included in) might focus community effort on using, testing and fixing modules before they make it into core and fixing becomes a lot harder. Schiavo Simon
+1. I'd much rather just use the module from PyPI.
It would be good to have a practical guide on how to manage the transition from third-party to core library module though. A PEP with a list of modules earmarked for upcoming inclusion in the standard library (and which Python version they're intended to be included in) might focus community effort on using, testing and fixing modules before they make it into core and fixing becomes a lot harder.
+1 for your +1, and earmarking. That's the word I was looking for, and instead chose "advocacy".
On 28 Jan 2012, at 08:58, Simon Cross wrote:
+1. I'd much rather just use the module from PyPI.
It would be good to have a practical guide on how to manage the transition from third-party to core library module though. A PEP with a list of modules earmarked for upcoming inclusion in the standard library (and which Python version they're intended to be included in) might focus community effort on using, testing and fixing modules before they make it into core and fixing becomes a lot harder.
+1 -- Best regards, Łukasz Langa Senior Systems Architecture Engineer IT Infrastructure Department Grupa Allegro Sp. z o.o.
On Sat, Jan 28, 2012 at 5:49 PM, Matt Joiner
FWIW I'm now -1 for this idea. Stronger integration with PyPI and packaging systems is much preferable. Python core public releases are no place for testing.
People saying this: we KNOW this approach doesn't work in all cases. If it worked perfectly, regex would be in the standard library by now.

Don't consider this PEP a purely theoretical proposal, because it isn't. It's really being put forward to solve a specific problem: the fact that we need to do something about re's lack of proper Unicode support [1]. Those issues are actually hard to solve, so replacing re with Matthew Barnett's regex module (just as re itself was a replacement for the original regex module) that already addresses most of them seems like a good way forward, but this is currently being blocked because there are still a few lingering concerns with maintainability and backwards compatibility. We *need* to break the impasse preventing its inclusion in the standard library, and __preview__ lets us do that without running roughshod over the legitimate core developer concerns raised in the associated tracker issue [2].

With the current criteria for stdlib inclusion, it doesn't *matter* if a module is oh-so-close to being accepted: it gets rejected anyway, just like a module that has no chance of ever being suitable. There is currently *no* path forward for resolving any stdlib-specific concerns that arise with already popular PyPI modules, and so such situations remain unresolved and key components of the standard library stagnate.

While regex is the current poster-child for this problem, it's quite likely that similar problems will arise in the future. Kenneth Reitz's requests module is an obvious candidate: it's enormously popular with users, Kenneth has indicated he's amenable to the idea of stdlib inclusion once the feature set is sufficiently stable (i.e. not for 3.3), but I expect there will be legitimate concerns with incorporating it, given its scope.

Cheers, Nick.
[1] http://bugs.python.org/issue?%40search_text=&ignore=file%3Acontent&title=&%40columns=title&id=&%40columns=id&stage=&creation=&creator=tchrist&activity=&%40columns=activity&%40sort=activity&actor=&nosy=&type=&components=&versions=&dependencies=&assignee=&keywords=&priority=&%40group=priority&status=1&%40columns=status&resolution=&nosy_count=&message_count=&%40pagesize=50&%40startwith=0&%40queryname=&%40old-queryname=&%40action=search [2] http://bugs.python.org/issue2636 -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 1/28/2012 3:55 AM, Nick Coghlan wrote:

I am currently -something on the proposal as it stands, because it will surely create a lot of hassles and because I do not think it is necessarily the best solution to the motivating concerns.
Don't consider this PEP a purely theoretical proposal, because it isn't. It's really being put forward to solve a specific problem: the fact that we need to do something about re's lack of proper Unicode support [1]. Those issues are actually hard to solve, so replacing re with Matthew Barnett's regex module (just as re itself was a replacement for the original regex module) that already addresses most of them seems like a good way forward, but this is currently being blocked because there are still a few lingering concerns with maintainability and backwards compatibility.
I find the concern about 'maintainability' a bit strange, as regex seems to be getting more maintenance and improvement than re. The re author is no longer active. If neither were in the library and we were considering both, regex would certainly win, at least from a user's view. Tom Christiansen reviewed about 8 Unicode-capable regular expression packages, including both re and regex, and regex came out much better.

The concern about backward compatibility ignores the code that re users cannot write. In any case, that problem would be solved by adding regex in addition to re instead of as a replacement.

If it were initially added as __preview__.regex, would the next step be to call it regex, or to change it to re and remove the current package? If the former, I think we might as well do it now. If the latter, that is different from what the PEP proposes.
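One concrete instance of the Unicode gap under discussion: the third-party regex module supports Unicode property classes such as \p{Greek}, while the stdlib re module rejects that escape outright (on current CPython, unknown escapes in patterns are errors). A quick illustration using only re, so it runs without the third-party package installed:

```python
import re

# Unicode property classes are among the features regex adds; re has no
# \p support and treats it as a bad escape.
try:
    re.compile(r"\p{Greek}+")
    supported = True
except re.error:
    supported = False

print("re supports \\p{...}:", supported)  # False
```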
While regex is the current poster-child for this problem,
I see it as a special case that is not really addressed by the PEP. The other proposed use-case for __preview__ is packages whose API is not stable. Such packages may need their API changed a lot sooner than 18-24 months. Or, their API may change for a lot longer than just one release cycle. So the PEP would be best suited for packages whose API may be fixed but might need code-breaking adjustments *once* in 18 months.

A counter-proposal: add an __x__ package to site-packages. Document the contents separately in an X-Library manual. Let the API of such packages change with every micro release. Don't guarantee that modules won't disappear completely. Don't put a time limit on residence there before being moved up (to the stdlib) or out. Packages that track volatile external standards could stay there indefinitely.

If a module is moved to the stdlib, leave a stub for at least two versions that emits a deprecation warning (to switch to import a instead of __x__.a) and a notice that the doc has moved, along with importing the contents of the stdlib version. (This would work for the __preview__ proposal also.)

-- Terry Jan Reedy
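Terry's deprecation stub could be sketched as below. The helper builds the stub dynamically so the idea can be demonstrated with a real stdlib module (json) standing in for a graduated module; the __x__ naming follows his hypothetical proposal and is not anything the PEP specifies:

```python
import importlib
import sys
import types
import warnings

def make_stub(old_name, new_name):
    """Create a stub module that warns and forwards to the relocated module."""
    warnings.warn(
        "%s has moved to the standard library; import %s directly instead"
        % (old_name, new_name),
        DeprecationWarning,
        stacklevel=2,
    )
    real = importlib.import_module(new_name)
    stub = types.ModuleType(old_name)
    stub.__dict__.update(real.__dict__)  # re-export the stdlib contents
    sys.modules[old_name] = stub
    return stub

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", DeprecationWarning)
    stub = make_stub("__x__.json", "json")

print(len(caught))               # 1: the deprecation warning fired
print(stub.dumps({"ok": True}))  # the relocated functionality still works
```

In a real layout the stub would simply be a module file left behind at the old location containing the warning and a star-import of the new module.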
On Sat, 28 Jan 2012 02:49:40 -0500
Matt Joiner
FWIW I'm now -1 for this idea. Stronger integration with PyPI and packaging systems is much preferable.
That will probably never happen. "pip install XXX" is the best we (python-dev and the community) can do. "import some_module" won't magically start fetching some_module from PyPI if it isn't installed on your system.

So the bottom line is: we would benefit from an intermediate status between "available on PyPI" and "shipped as a stable API in the stdlib". The __preview__ proposal does just that in a useful way; are there any alternatives you'd like to suggest?

Regards Antoine.
Hi, On 27.01.2012 at 18:26, Alex wrote:
I'm -1 on this, for a pretty simple reason. Something goes into __preview__, instead of its final destination directly, because it needs feedback/possibly changes. However, given the release cycle of the stdlib (~18 months), any feedback it gets can't be seen by actual users until it's too late. Essentially you can only get one round of feedback per stdlib release.
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI I don't think there's a great enough need to justify it in the stdlib.
I agree with Alex on this: the iterations – even with PEP 407 – would be wayyy too long to be useful. As for the only downside: how about endorsing certain PyPI projects as possible future additions in order to give them more exposure? I'm sure there is some nice way to do that. Plus: everybody could pin the version their code depends on right now, so updates wouldn't break anything. I.e. API users would have more peace of mind and API developers could develop more aggressively. Bye, -h
On Fri, Jan 27, 2012 at 9:26 AM, Alex
Eli Bendersky
writes: Hello,
Following an earlier discussion on python-ideas [1], we would like to propose the following PEP for review. Discussion is welcome. The PEP can also be viewed in HTML form at http://www.python.org/dev/peps/pep-0408/
[1] http://mail.python.org/pipermail/python-ideas/2012-January/013246.html
I'm -1 on this, for a pretty simple reason. Something goes into __preview__, instead of its final destination directly, because it needs feedback/possibly changes. However, given the release cycle of the stdlib (~18 months), any feedback it gets can't be seen by actual users until it's too late. Essentially you can only get one round of feedback per stdlib release.
I think a significantly healthier process (in terms of maximizing feedback and getting something into its best shape) is to let a project evolve naturally on PyPI and in the ecosystem, give feedback to it from an inclusion perspective, and then include it when it becomes ready on its own merits. The counter argument to this is that putting it in the stdlib gets you significantly more eyeballs (and hopefully more feedback, therefore); my only response to this is: if it doesn't get eyeballs on PyPI I don't think there's a great enough need to justify it in the stdlib.
-1 from me as well. How is the __preview__ namespace any different than the PendingDeprecationWarning that nobody ever uses? Nobody is likely to write significant code depending on anything in __preview__, thus the amount of feedback received would be low.

A better way to get additional feedback would be to promote libraries that we are considering including by way of direct links to them on PyPI from the relevant areas of the Python documentation (including the Module Reference / Index pages?) for that release and let the feedback on them roll in via that route.

An example of this working: ipaddr is ready to go in. It got the eyeballs and API modifications while still a PyPI library as a result of the discussion around the time it was originally suggested as being added. I or any other committers have simply not added it yet.

-gps
On 29 January 2012 21:39, Gregory P. Smith
An example of this working: ipaddr is ready to go in. It got the eyeballs and API modifications while still a pypi library as a result of the discussion around the time it was originally suggested as being added. I or any other committers have simply not added it yet.
Interesting. I recall the API debates and uncertainty, but I don't recall having seen anything to indicate that it all got resolved and we're essentially "ready to go". If I were looking for an IP address library, I wouldn't know where to go, and I certainly wouldn't know that there was an option that would become part of the stdlib. Not sure that counts as the approach "working"... (although I concede that my lack of a *real* need for an IP address library may be a contributing factor to my lack of knowledge...) Paul.
On 1/29/2012 4:39 PM, Gregory P. Smith wrote:
An example of this working: ipaddr is ready to go in. It got the eyeballs and API modifications while still a pypi library as a result of the discussion around the time it was originally suggested as being added. I or any other committers have simply not added it yet.
This is wrong. PEP 3144 was not pronounced upon, so ipaddr is not just waiting for someone to commit it; it's waiting on consensus and pronouncement. PEP 3144 wasn't pronounced upon because there were significant disagreements about the design of the API proposed in the PEP. As it stands, I believe the authorship of ipaddr either decided that they were not going to compromise their module or lost interest. See Nick Coghlan's summary: http://mail.python.org/pipermail//python-ideas/2011-August/011305.html -- Scott Dial scott@scottdial.com
Maybe that's another example of waiting too long for the perfect decision though. In the last ~12 months, ipaddr was downloaded at least 11,000 times from its home (http://code.google.com/p/ipaddr-py/downloads/list). There has been a fair amount of change over that time and a new release was put out 10 days ago. What are the stats for the "competing" package?
--Guido
On Mon, Jan 30, 2012 at 10:19 AM, Scott Dial
On 1/29/2012 4:39 PM, Gregory P. Smith wrote:
An example of this working: ipaddr is ready to go in. It got the eyeballs and API modifications while still a pypi library as a result of the discussion around the time it was originally suggested as being added. I or any other committers have simply not added it yet.
This is wrong. PEP 3144 was not pronounced upon, so ipaddr is not just waiting for someone to commit it; it's waiting on consensus and pronouncement.
PEP 3144 wasn't pronounced upon because there were significant disagreements about the design of the API proposed in the PEP. As it stands, I believe the authorship of ipaddr either decided that they were not going to compromise their module or lost interest.
See Nick Coghlan's summary:
http://mail.python.org/pipermail//python-ideas/2011-August/011305.html
-- Scott Dial scott@scottdial.com
-- --Guido van Rossum (python.org/~guido)
Eli Bendersky wrote:
Hello,
Following an earlier discussion on python-ideas [1], we would like to propose the following PEP for review. Discussion is welcome.
I think you need to emphasize that modules in __preview__ are NOT expected to have a forward-compatible, stable API. This is a feature of __preview__, not a bug, and I believe it is the most important feature.

I see responses to this PEP that assume that APIs will be stable, and that having a module fail to graduate out of __preview__ should be an extraordinary event. But if this is the case, then why bother with __preview__? It just adds complexity to the process -- if __preview__.spam and spam are expected to be the same, then just put spam straight into the std lib and be done with it.

This PEP only makes sense if we assume that __preview__.spam and spam *will* be different, even if only in minor ways, and that there might not even be a spam. There should be no expectation that every __preview__ module must graduate, or that every standard library module must go through __preview__. If it is stable and uncontroversial, __preview__ adds nothing to the process.

Even when there are candidates for inclusion with relatively stable APIs, like regex, we should *assume* that there will be API differences between __preview__.regex and regex, simply because it is less harmful to expect changes that don't eventuate than to expect stability and be surprised by changes.

This, I believe, rules out Antoine's suggestion that modules remain importable from __preview__ even after graduation to full membership of the standard library. We simply can't have all three of these statements true at the same time:

1) regular standard library modules are expected to be backward compatible
2) __preview__ modules are not expected to be forward compatible
3) __preview__.spam is an alias to the regular standard library spam

At least one of them has to go. Since both 1) and 2) are powerful features, and 3) is only a convenience, the obvious one to drop is 3). I note that the PEP, as it is currently written, explicitly states that __preview__.spam will be dropped when it graduates to spam.
This is a good thing and should not be changed.

Keeping __preview__.spam around after graduation is, I believe, actively harmful. It adds complexity to the developer's decision-making process ("Should I import spam from __preview__, or just import spam? What's the difference?"). It gives a dangerous impression that code written for __preview__.spam will still work for spam. We should be discouraging simple-minded recipes like

    try:
        import spam
    except ImportError:
        from __preview__ import spam
    spam.foo(a, b, c)

since they undermine the vital feature of __preview__: that the signature, and even the existence, of spam.foo is subject to change.

I would go further and suggest that __preview__ be explicitly called __unstable__. If that name is scary, and it frightens some users off, good! The last thing we want is for 3.4 to come around with dozens of bug reports along the lines of "spam.foo() and __preview__.spam.foo() have different function signatures and aren't compatible". Of course they do. That's why __preview__.spam existed in the first place: to allow the API to mature without the expectation that it was already stable.

Since __preview__.spam (or, as I would prefer, __unstable__.spam) and spam cannot be treated as drop-in replacements, what is __preview__.spam good for? Without a stable API, __preview__.spam is not suitable for use in production applications that expect to run under multiple versions of the standard library.

I think the PEP needs more use-cases on who might use __preview__.spam, and why. These come to my mind:

* if you don't care about Python 3.x+1, then there is no reason not to treat Python 3.x's __preview__.spam as stable;
* rapid-development proof-of-concept software ("build one to throw away") can safely use __preview__.spam, since it is expected to be replaced anyway;
* one-use scripts;
* use at the interactive interpreter;
* any other time where forward-compatibility is not required.
I am reminded of the long, often acrimonious arguments that took place on Python-Dev a few years back about the API for the ipaddr library. A lot of the arguments could have been short-circuited if we had said "putting ipaddr into __preview__ does not constitute acceptance of its API". (On the other hand, if __preview__ becomes used in the future for library authors to fob-off criticism for 18 months in the hope it will just be forgotten, then this will be a bad thing.) -- Steven
On Sat, Jan 28, 2012 at 6:43 AM, Steven D'Aprano
This PEP only makes sense if we assume that __preview__.spam and spam *will* be different, even if only in minor ways, and that there might not even be a spam. There should be no expectation that every __preview__ module must graduate, or that every standard library module must go through __preview__. If it is stable and uncontroversial, __preview__ adds nothing to the process.
Yes, the PEP already points to lzma as an example of a module with a sufficiently obvious API that it didn't need to go through a preview round.
Keeping __preview__.spam around after graduation is, I believe, actively harmful. It adds complexity to the developer's decision-making process ("Should I import spam from __preview__, or just import spam? What's the difference?"). It gives a dangerous impression that code written for __preview__.spam will still work for spam.
Yes, this was exactly the reasoning behind removing the names from the __preview__ namespace when the modules graduated. It sets a line in the sand: "An API compatibility break is not only allowed, it is 100% guaranteed. If you are not prepared to deal with this, then you are *not* part of the target audience for the __preview__ namespace. Wait until the module reaches the main section of the standard library before you start using it, or else download a third party supported version with backwards compatibility guarantees from PyPI. The __preview__ namespace is not designed for anything that requires long term support spanning multiple Python versions - it is intended for use in single version environments, such as intranet web services and student classrooms."
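The "single version environment" pattern Nick describes can be sketched roughly as follows. This is illustrative only: the (3, 4) release is a placeholder for whichever version would ship the preview, "regex" stands in for a hypothetical preview module, and falling back to the stdlib re engine is just one possible explicit degradation path:

```python
import sys

try:
    # Pin the preview import to the one release it targets; (3, 4) is a
    # placeholder for whichever version actually ships the preview module.
    if sys.version_info[:2] == (3, 4):
        from __preview__ import regex  # hypothetical preview module
    else:
        raise ImportError("no preview regex for this Python version")
except ImportError:
    # Outside that single version, degrade explicitly -- here to the
    # stdlib 're' engine, which supports a compatible subset of patterns.
    import re as regex

print(regex.match(r"\d+", "42 eggs").group())
```

The explicit version pin is the point: unlike the "try stdlib, fall back to __preview__" recipe criticized elsewhere in the thread, this form never pretends the preview API and a later stable API are interchangeable.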
I would go further and suggest that __preview__ be explicitly called __unstable__. If that name is scary, and it frightens some users off, good!
Hmm, the problem with "unstable" is that we only mean the *API* is unstable. The software itself will be as thoroughly tested as everything else we ship.
I think the PEP needs more use-cases on who might use __preview__.spam, and why. These come to my mind:
* if you don't care about Python 3.x+1, then there is no reason not to treat Python 3.x's __preview__.spam as stable;
* rapid development proof-of-concept software ("build one to throw away") can safely use __preview__.spam, since they are expected to be replaced anyway;
* one-use scripts;
* use at the interactive interpreter;
* any other time where forward-compatibility is not required.
A specific list of use cases is a good idea. I'd add a couple more:

* in a student classroom where the concept of PyPI and third party packages has yet to be introduced
* for an intranet web service deployment where due diligence adds significant overhead to any use of third party packages
I am reminded of the long, often acrimonious arguments that took place on Python-Dev a few years back about the API for the ipaddr library. A lot of the arguments could have been short-circuited if we had said "putting ipaddr into __preview__ does not constitute acceptance of its API".
Yep, there's a reason 'ipaddr' was high on the list of modules this could be used for :) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Jan 28, 2012, at 11:27 AM, Nick Coghlan wrote:
* for an intranet web service deployment where due diligence adds significant overhead to any use of third party packages
Which really means that *we* are assuming the responsibility for this due diligence. And of course, we should not add anything to __preview__ without consent (and contributor agreement) of the upstream developers. -Barry
On 27/01/2012 20:43, Steven D'Aprano wrote:
Eli Bendersky wrote:
Hello,
Following an earlier discussion on python-ideas [1], we would like to propose the following PEP for review. Discussion is welcome.
I think you need to emphasize that modules in __preview__ are NOT expected to have a forward-compatible, stable, API. This is a feature of __preview__, not a bug, and I believe it is the most important feature.
I see responses to this PEP that assume that APIs will be stable,
I didn't see responses like that - the *point* of this PEP is to allow an API we think *should* be in the standard library to stabilise and mature (that's how I see it anyway). There is a difference between "not yet stable" and "we will make huge gratuitous changes" though. We *might* make huge gratuitous changes, but only if they're really needed (meaning they're huge but not gratuitous, I guess).
and that having a module fail to graduate out of __preview__ should be an extraordinary event.
I would say this will probably be the case. Once we add something there will be resistance to removing it and we shouldn't let things rot in __preview__ either. I would say failing to graduate would be the exception, although maybe not extraordinary.
But if this is the case, then why bother with __preview__? It just adds complexity to the process -- if __preview__.spam and spam are expected to be the same, then just spam straight into the std lib and be done with it.
I think you're misunderstanding what was suggested. The suggestion was that once spam has graduated from __preview__ into stdlib, that __preview__.spam should remain as an alias - so that code using it from __preview__ at least has a fighting chance of working.
This PEP only makes sense if we assume that __preview__.spam and spam *will* be different,
I disagree. Once there is a spam they should remain the same. __preview__ is for packages that haven't yet made it into the standard library - not a place for experimenting with apis that are already there.
even if only in minor ways, and that there might not even be a spam. There should be no expectation that every __preview__ module must graduate,
Graduate or die however.
or that every standard library module must go through __preview__. If it is stable and uncontroversial, __preview__ adds nothing to the process.
Sure. __preview__ is for things that *need* previewing. All the best, Michael Foord
-- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html
participants (25)
- Alex
- Antoine Pitrou
- Barry Warsaw
- Benjamin Peterson
- Chris Withers
- Eli Bendersky
- Eric Snow
- Eric V. Smith
- Ethan Furman
- Gregory P. Smith
- Guido van Rossum
- Hynek Schlawack
- Matt Joiner
- Michael Foord
- Mike Meyer
- Nick Coghlan
- Paul Moore
- Philippe Fremy
- Scott Dial
- Simon Cross
- Stephen J. Turnbull
- Stephen J. Turnbull
- Steven D'Aprano
- Terry Reedy
- Łukasz Langa