Hi, our current deprecation policy is not so well defined (see e.g. [0]), and it seems to me that it's something like:

1) deprecate something and add a DeprecationWarning;
2) forget about it for a while;
3) wait a few versions until someone notices it;
4) actually remove it.

I suggest we follow this process instead:

1) deprecate something and add a DeprecationWarning;
2) decide how long the deprecation should last;
3) use the deprecated-removed[1] directive to document it;
4) add a test that fails after the update so that we remember to remove it[2].

Other related issues:

PendingDeprecationWarning:
* AFAIK the difference between PDW and DW is that PDW is silenced by default;
* now DW is silenced by default too, so there is no difference;
* I therefore suggest we stop using it, but we can leave it around[3] (other projects might be using it for something different).

Deprecation progression:
Before, we more or less used to deprecate in release X and remove in X+1, or add a PDW in X, a DW in X+1, and remove the feature in X+2. I suggest we drop this scheme and just use a DW until X+N, where N >= 1 and depends on what is being removed. We can decide to leave the DW in place for 2-3 versions before removing something widely used, or just deprecate in X and remove in X+1 for things that are less used.

Porting from 2.x to 3.x:
Some people will update directly from 2.7 to 3.2 or even later versions (3.3, 3.4, ...), without going through earlier 3.x versions. If something is deprecated in 3.2 but not in 2.7 and is then removed in 3.3, people updating from 2.7 to 3.3 won't see any warning, and this will make porting even more difficult.
I suggest that:
* nothing that is available and not deprecated in 2.7 be removed until 3.x (x needs to be defined);
* possibly we start backporting warnings to 2.7 so that they are visible while running with -3.

Documenting the deprecations:
In order to advertise the deprecations, they should be documented:
* in their docs, using the deprecated-removed directive (and possibly not the plain 'deprecated' one);
* in the What's New, possibly listing everything that is currently deprecated and when it will be removed.
Django seems to do something similar[4]. (Another thing I would like is a different rendering for deprecated functions. Some parts of the docs have a deprecation warning at the top of the section, and the individual functions look normal if you miss it. Also, when linking to a deprecated function it would be nice to have it rendered in a different color or something similar.)

Testing the deprecations:
Tests that fail when a new release is made and the version number is bumped should be added, to make sure we don't forget to remove the deprecated code. Each test should have a related issue with a patch that removes the deprecated function and the test itself. Setting the priority of the issue to release blocker or deferred blocker can be done in addition/instead, but that works well only when N == 1 (though the priority could be updated for every release). The tests could be marked as expected failures to allow some time after the release to remove them. All the deprecation-related tests might be added to the same file, or left in the test files of their modules.

Where to add this:
Once we agree on the process we should write it down somewhere. Possible candidates are:
* PEP 387: Backwards Compatibility Policy[5] (it has a few lines about this);
* a new PEP;
* the devguide.
I think having it in a PEP would be good; the devguide can then link to it.
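The "test that fails when the version number is bumped" idea can be sketched as a pair of ordinary unittest tests. This is a hypothetical example, not actual stdlib code: the API name and the 4.0 removal schedule are made up for illustration.

```python
import sys
import unittest
import warnings


class TestDeprecations(unittest.TestCase):
    """Reminder tests: they start failing once the scheduled version ships."""

    def test_old_api_removed_on_schedule(self):
        # Hypothetical schedule: old_api() is slated for removal in 4.0.
        # Once the interpreter reaches that version this test fails,
        # reminding us to delete both the API and this test.
        self.assertLess(sys.version_info[:2], (4, 0),
                        "old_api() was scheduled for removal in 4.0; "
                        "remove it and this test")

    def test_old_api_still_warns(self):
        # Until removal, the deprecated API must keep emitting
        # DeprecationWarning (simulated here with a direct warn() call).
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter("always")
            warnings.warn("old_api() is deprecated", DeprecationWarning,
                          stacklevel=2)
        self.assertIs(caught[0].category, DeprecationWarning)
```

Marking the first test with expectedFailure, as suggested above, would buy some slack immediately after the release.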
Best Regards,
Ezio Melotti

[0]: http://bugs.python.org/issue13248
[1]: deprecated-removed doesn't seem to be documented in the documenting doc, but it was added here: http://hg.python.org/cpython/rev/03296316a892
[2]: see e.g. http://hg.python.org/cpython/file/default/Lib/unittest/test/test_case.py#l11...
[3]: we could also introduce a MetaDeprecationWarning and make PendingDeprecationWarning inherit from it so that it can be used to pending-deprecate itself. Once PendingDeprecationWarning is gone, the MetaDeprecationWarning will become useless and can then be used to meta-deprecate itself.
[4]: https://docs.djangoproject.com/en/dev/internals/deprecation/
[5]: http://www.python.org/dev/peps/pep-0387/
On Mon, 24 Oct 2011 15:58:11 +0300 Ezio Melotti <ezio.melotti@gmail.com> wrote:
I suggest we follow this process: 1) deprecate something and add a DeprecationWarning; 2) decide how long the deprecation should last; 3) use the deprecated-removed[1] directive to document it; 4) add a test that fails after the update so that we remember to remove it[2];
This sounds like a nice process.
PendingDeprecationWarning: * AFAIK the difference between PDW and DW is that PDW is silenced by default; * now DW is silenced by default too, so there is no difference; * I therefore suggest we stop using it, but we can leave it around[3]
Agreed as well.
[3]: we could also introduce a MetaDeprecationWarning and make PendingDeprecationWarning inherit from it so that it can be used to pending-deprecate itself. Once PendingDeprecationWarning is gone, the MetaDeprecationWarning will become useless and can then be used to meta-deprecate itself.
People may start using MetaDeprecationWarning to deprecate their metaclasses. It sounds wrong to deprecate it. Regards Antoine.
On Mon, Oct 24, 2011 at 06:17, Antoine Pitrou <solipsis@pitrou.net> wrote:
On Mon, 24 Oct 2011 15:58:11 +0300 Ezio Melotti <ezio.melotti@gmail.com> wrote:
I suggest we follow this process: 1) deprecate something and add a DeprecationWarning; 2) decide how long the deprecation should last; 3) use the deprecated-removed[1] directive to document it; 4) add a test that fails after the update so that we remember to remove it[2];
This sounds like a nice process.
I have thought about this extensively when I did the stdlib reorg for Python 3, and the only difference from what Ezio is proposing is that I was thinking of introducing a special deprecate() function to warnings, or something that took a Python version argument, so it would automatically turn into an error once the version bump occurred. But then I realized other apps wouldn't necessarily care, so short of adding an argument which let people specify a different version number to compare against, I kind of sat on the idea.

I also thought about specifying when to go from PendingDeprecationWarning to DeprecationWarning, but as has been suggested, PendingDeprecationWarning is not really useful to the core anymore since DeprecationWarning is now silenced by default as well.

But adding something to test.support for our tests which requires a specified version number would also work and be less invasive to users, e.g.:

    with test.support.deprecated(remove_in='3.4'):
        deprecated_func()

And obviously if we don't plan on removing the feature any time soon, the test can specify Python 4.0 as the removal version. But the important thing is to require some specification in the test so we don't forget to stick to our contract of when to remove something.

P.S.: Did we ever discuss naming py3k Python 4 instead, in honor of King Arthur from Holy Grail not being able to ever count straight to three (e.g. the holy hand grenade scene)? Maybe we need to have the next version of Python be Python 6, since the Book of Armaments says you should have 4, and 5 is right out. =)

-Brett
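The test.support.deprecated() helper discussed here does not exist; a minimal sketch of what it might look like, as a context manager that checks both the warning and the removal schedule, could be:

```python
import contextlib
import sys
import warnings


@contextlib.contextmanager
def deprecated(remove_in):
    """Hypothetical sketch of the proposed test.support helper.

    Fails loudly once the interpreter reaches the scheduled removal
    version; until then, asserts that the enclosed code emits a
    DeprecationWarning.  ``remove_in`` is a string like '3.4'.
    """
    removal = tuple(int(part) for part in remove_in.split("."))
    if sys.version_info[:len(removal)] >= removal:
        raise AssertionError(
            "this API was scheduled for removal in %s" % remove_in)
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        yield
    if not any(issubclass(w.category, DeprecationWarning) for w in caught):
        raise AssertionError("expected a DeprecationWarning")


def deprecated_func():
    # Stand-in for an API under deprecation.
    warnings.warn("deprecated_func() is deprecated", DeprecationWarning,
                  stacklevel=2)
```

Usage would then match Brett's example, e.g. `with deprecated(remove_in='4.0'): deprecated_func()`, and the test starts erroring out as soon as the interpreter version reaches the stated removal version.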
PendingDeprecationWarning: * AFAIK the difference between PDW and DW is that PDW is silenced by default; * now DW is silenced by default too, so there is no difference; * I therefore suggest we stop using it, but we can leave it around[3]
Agreed as well.
[3]: we could also introduce a MetaDeprecationWarning and make PendingDeprecationWarning inherit from it so that it can be used to pending-deprecate itself. Once PendingDeprecationWarning is gone, the MetaDeprecationWarning will become useless and can then be used to meta-deprecate itself.
People may start using MetaDeprecationWarning to deprecate their metaclasses. It sounds wrong to deprecate it.
Regards
Antoine.
_______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org
Hi, +1 to all Ezio said. One specific remark: PendingDeprecationWarning could just become an alias of DeprecationWarning, but maybe there is code out there that relies on the distinction, and there is no real value in making it an alias (there is value in removing it altogether, but we can’t do that, can we?). I don’t see the need to deprecate PDW, except in documentation, and am -1 to the metaclass idea (no need). Cheers
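The concern about code relying on the distinction is concrete: warnings filters match by subclass, so today a filter on DeprecationWarning does not affect PendingDeprecationWarning, and an alias would merge the two. A small self-contained demonstration:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    # Ignore everything, then let only DeprecationWarning through.
    warnings.simplefilter("ignore")
    warnings.filterwarnings("always", category=DeprecationWarning)
    warnings.warn("pending", PendingDeprecationWarning)
    warnings.warn("current", DeprecationWarning)

# PendingDeprecationWarning is not a subclass of DeprecationWarning,
# so only the second warning passed the filter.  If PDW became a mere
# alias of DW, both would get through and existing filters would
# silently change behaviour.
assert [w.category for w in caught] == [DeprecationWarning]
```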
On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote:
Hi, our current deprecation policy is not so well defined (see e.g. [0]), and it seems to me that it's something like: 1) deprecate something and add a DeprecationWarning; 2) forget about it after a while; 3) wait a few versions until someone notices it; 4) actually remove it;
I suggest we follow this process: 1) deprecate something and add a DeprecationWarning; 2) decide how long the deprecation should last; 3) use the deprecated-removed[1] directive to document it; 4) add a test that fails after the update so that we remember to remove it[2];
How about we agree that actually removing things is usually bad for users? It would be best if the core devs had a strong aversion to removal. Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead.

There is rarely a need to actually remove support for something in the standard library. That may serve a notion of tidiness or some such, but in reality it is a PITA for users, making it more difficult to upgrade Python versions and to use published recipes.

Raymond
On 2011-11-28, at 10:30 , Raymond Hettinger wrote:
On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote: How about we agree that actually removing things is usually bad for users. It will be best if the core devs had a strong aversion to removal. Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead. There is rarely a need to actually remove support for something in the standard library. The problem with "deprecating and not removing" (and worse, only informally deprecating by leaving a note in the documentation) is that you end up with zombie APIs: there are tons of tutorials & such on the web talking about them, they're not maintained, nobody really cares about them (but users who found them via Google) and they're all around harmful.
It's the current state of many JDK 1.0 and 1.1 APIs, and it's dreadful: most of them are more than a decade out of date, sometimes retrofitted for new interfaces (but APIs using them usually are *not* fixed, keeping them in their state of partial death), sometimes still *taught*, all because they're only informally deprecated (at best; sometimes not even that, as other APIs still depend on them).

It's bad for (language) users because they use outdated and partially unmaintained (at least in that they're not improved) APIs, and it's bad for (language) maintainers in that once in a while they still have to dive into those things and fix bugs cropping up, without the better understanding they would have of the newer APIs or the cleaner codebase they would get from removal.

Not being too eager to kill APIs is good, but giving rise to this kind of living-dead API is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning is now silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse.
On Mon, Nov 28, 2011 at 7:53 PM, Xavier Morel <catch-all@masklinn.net> wrote:
Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse.
But restricting ourselves to cleaning out such APIs every 10 years or so with a major version bump is also a potentially viable option. So long as the old APIs are fully tested and aren't actively *harmful* to creating reasonable code (e.g. optparse), then refraining from killing them before the (still hypothetical) 4.0 is reasonable.

OTOH, genuinely problematic APIs that ideally wouldn't have survived even the 3.x transition (e.g. the APIs that the 3.x subprocess module inherited from the 2.x commands module, which run completely counter to the design principles of the subprocess module) should probably still be considered for removal as soon as is reasonable after a superior alternative is made available.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 2011-11-28, at 13:06 , Nick Coghlan wrote:
On Mon, Nov 28, 2011 at 7:53 PM, Xavier Morel <catch-all@masklinn.net> wrote:
Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse.
But restricting ourselves to cleaning out such APIs every 10 years or so with a major version bump is also a potentially viable option.
So long as the old APIs are fully tested and aren't actively *harmful* to creating reasonable code (e.g. optparse) then refraining from killing them before the (still hypothetical) 4.0 is reasonable.

Sure, the original proposal leaves the deprecation timelines as TBD, and I hope I did not give the impression of setting up a timeline (that was not the intention). Ezio's original proposal could simply be implemented by having the second step ("decide how long the deprecation should last") default to "the next major release". I don't think that goes against his proposal, and in case APIs are actively harmful (e.g. very hard to use correctly) the deprecation timeline can be accelerated for that specific case.
Xavier Morel wrote:
Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse.
I would much rather have my code rely on "zombie" APIs and keep working than have that code suddenly stop working when the zombie is removed. Working code should stay working. Unless the zombie is actively harmful, what's the big deal if there is a newer, better way of doing something? If it works, and if it's fast enough, why force people to "fix" it?

It is a good thing that code or tutorials from Python 1.5 still (mostly) work, even when there are newer, better ways of doing something. I see a lot of newbies, and the frustration they suffer when they accidentally (carelessly) try following 2.x instructions in Python 3, or vice versa, is great. It's bad enough (and probably unavoidable) that this happens during a major transition like 2 to 3, without it also happening during minor releases.

Unless there is a good reason to actively remove an API, it should stay as long as possible. "I don't like this and it should go" is not a good reason, nor is "but there's a better way you should use". When in doubt, please don't break people's code.

-- Steven
On 12:14 pm, steve@pearwood.info wrote:
Xavier Morel wrote:
Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse.
I would much rather have my code relying on "zombie" APIs and keep working, than to have that code suddenly stop working when the zombie is removed. Working code should stay working. Unless the zombie is actively harmful, what's the big deal if there is a newer, better way of doing something? If it works, and if it's fast enough, why force people to "fix" it?
It is a good thing that code or tutorials from Python 1.5 still (mostly) work, even when there are newer, better ways of doing something. I see a lot of newbies, and the frustration they suffer when they accidentally (carelessly) try following 2.x instructions in Python3, or vice versa, is great. It's bad enough (probably unavoidable) that this happens during a major transition like 2 to 3, without it also happening during minor releases.
Unless there is a good reason to actively remove an API, it should stay as long as possible. "I don't like this and it should go" is not a good reason, nor is "but there's a better way you should use". When in doubt, please don't break people's code.
+1 Jean-Paul
On Mon, Nov 28, 2011 at 11:14 PM, Steven D'Aprano <steve@pearwood.info> wrote:
Xavier Morel wrote:
Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse.
I would much rather have my code relying on "zombie" APIs and keep working, than to have that code suddenly stop working when the zombie is removed. Working code should stay working. Unless the zombie is actively harmful, what's the big deal if there is a newer, better way of doing something? If it works, and if it's fast enough, why force people to "fix" it?
It is a good thing that code or tutorials from Python 1.5 still (mostly) work, even when there are newer, better ways of doing something. I see a lot of newbies, and the frustration they suffer when they accidentally (carelessly) try following 2.x instructions in Python3, or vice versa, is great. It's bad enough (probably unavoidable) that this happens during a major transition like 2 to 3, without it also happening during minor releases.
Unless there is a good reason to actively remove an API, it should stay as long as possible. "I don't like this and it should go" is not a good reason, nor is "but there's a better way you should use". When in doubt, please don't break people's code.
This is a great argument. But people want to see new, bigger, better things in the standard library, and the #1 reason cited against this is "we already have too much". I think that's where the issue lies: either lots of cool new stuff is added and supported (we all want our favourite things in the standard lib for this reason), or the old stuff lingers...

I'm sure a while ago there was mention of a "staging" area for inclusion in the standard library. This attracts interest, stabilization, and quality from potential modules for inclusion. Better yet, the existing standard library ownership could somehow be detached from the CPython core, so that changes enabling easier customization to fit other implementations (Jython, PyPy, etc.) become possible.

tl;dr: old stuff blocks new hotness. Make room, or separate standard library concerns from CPython.
-- Steven
Matt Joiner writes:
This is a great argument. But people want to see new, bigger better things in the standard library, and the #1 reason cited against this is "we already have too much". I think that's where the issue lies: Either lots of cool nice stuff is added and supported (we all want our favourite things in the standard lib for this reason), and or the old stuff lingers...
Deprecated features are pretty much irrelevant to the height of the bar for new features. The problem is that there are a limited number of folks doing long term maintenance of the standard library, and an essentially unlimited supply of one-off patches to add cool new features (not backed by a long term warranty of maintenance by the contributor). So deprecated features do add some burden of maintenance for the core developers, as Michael points out -- but removing *all* of them on short notice would not really make it possible to *add* features *in a maintainable way* any faster.
I'm sure a while ago there was mention of a "staging" area for inclusion in the standard library. This attracts interest, stabilization, and quality from potential modules for inclusion.
But there's no particular reason to believe it will attract more contributors willing to do long-term maintenance, and *somebody* has to maintain the staging area.
On Tue, 29 Nov 2011 00:19:50 +0900 "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Deprecated features are pretty much irrelevant to the height of the bar for new features. The problem is that there are a limited number of folks doing long term maintenance of the standard library, and an essentially unlimited supply of one-off patches to add cool new features (not backed by a long term warranty of maintenance by the contributor).
Actually, we don't often get patches for new features. Many new features are implemented by core developers themselves. Regards Antoine.
Antoine Pitrou writes:
Actually, we don't often get patches for new features. Many new features are implemented by core developers themselves.
Right. That's not inconsistent with what I wrote, as long as would-be feature submitters realize what the standards for an acceptable feature patch are.
Raymond Hettinger wrote:
How about we agree that actually removing things is usually bad for users. It will be best if the core devs had a strong aversion to removal. Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead.
There is rarely a need to actually remove support for something in the standard library.
That may serve a notion of tidyness or somesuch but in reality it is a PITA for users making it more difficult to upgrade python versions and making it more difficult to use published recipes.
I'm strongly against breaking backwards compatibility between minor versions (e.g. 3.2 and 3.3). If something is removed in this manner, the transition period should at least be very, very long.

To me, deprecating an API means "this code will not get new features and possibly not even (big) fixes". It's important for the long-term health of a project to be able to deprecate and eventually remove code that is no longer maintained.

So, I think we should have a clear and working deprecation policy, and Ezio's suggestion sounds good to me. There should be a clean way to state, in both code and documentation, that something is deprecated and should not be used in new code. Furthermore, deprecated code should actually be removed when the time comes, be it Python 4.0 or something else.

Petri
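A "clean way to state it in code" could be a decorator that both emits the warning and records the removal schedule in a machine-readable attribute. This is a hypothetical sketch, not an existing stdlib API; the names are made up for illustration.

```python
import functools
import warnings


def deprecated(replacement=None, remove_in="4.0"):
    """Hypothetical decorator marking a callable as deprecated."""
    def decorator(func):
        message = "%s() is deprecated and scheduled for removal in %s" % (
            func.__name__, remove_in)
        if replacement:
            message += "; use %s() instead" % replacement

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)

        # Machine-readable schedule, usable by automated reminder tests.
        wrapper.__deprecated_remove_in__ = remove_in
        return wrapper
    return decorator


@deprecated(replacement="new_api", remove_in="4.0")
def old_api():
    return 42
```

The same schedule string could feed the deprecated-removed directive in the docs, keeping code and documentation in sync.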
On Nov 28, 2011, at 03:36 PM, Petri Lehtinen wrote:
Raymond Hettinger wrote:
That may serve a notion of tidyness or somesuch but in reality it is a PITA for users making it more difficult to upgrade python versions and making it more difficult to use published recipes.
I'm strongly against breaking backwards compatiblity between minor versions (e.g. 3.2 and 3.3). If something is removed in this manner, the transition period should at least be very, very long.
+1

It's even been a pain when porting between Python 2.x and 3. You'll see some things that were carried forward into Python 3.0 and 3.1 but are now gone in 3.2. So if you port from 2.7 -> 3.2, for example, you'll find a few things missing (the intobject.h aliases come to mind). For those reasons I think we need to be conservative about removing stuff. Once the world is all on Python 3 <wink> we can think about removing code.

Cheers, -Barry
On 28/11/2011 13:36, Petri Lehtinen wrote:

Raymond Hettinger wrote:

How about we agree that actually removing things is usually bad for users. It will be best if the core devs had a strong aversion to removal. Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead.

There is rarely a need to actually remove support for something in the standard library.

That may serve a notion of tidiness or some such, but in reality it is a PITA for users, making it more difficult to upgrade Python versions and to use published recipes.

I'm strongly against breaking backwards compatibility between minor versions (e.g. 3.2 and 3.3). If something is removed in this manner, the transition period should at least be very, very long.
We tend to see 3.2 -> 3.3 as a "major version" increment, but that's just Python's terminology. Nonetheless, our usual deprecation policy has been a *minimum* of deprecated for two releases and removed in a third (if at all) - which is about five years from deprecation to removal given our normal release rate. The water is muddied by Python 3, where we may deprecate something in Python 3.1 and remove in 3.3 (hypothetically) - but users may go straight from Python 2.7 to 3.3 and skip the deprecation period altogether... So we should be extra conservative about removals in Python 3 (for the moment at least).
To me, deprecating an API means "this code will not get new features and possibly not even (big) fixes". It's important for the long term health of a project to be able to deprecate and eventually remove code that is no longer maintained.
The issue is that deprecated code can still be a maintenance burden. Keeping deprecated APIs around can require effort just to keep them working and may actively *prevent* other changes / improvements. All the best, Michael Foord
So, I think we should have a clear and working deprecation policy, and Ezio's suggestion sounds good to me. There should be a clean way to state, in both code and documentation, that something is deprecated, do not use in new code. Furthermore, deprecated code should actually be removed when the time comes, be it Python 4.0 or something else.
Petri _______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.u...
-- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html
Michael Foord wrote:
We tend to see 3.2 -> 3.3 as a "major version" increment, but that's just Python's terminology.
Even though (in the documentation) Python's version number components are called major, minor, micro, releaselevel and serial, in this order? So when the minor version component is increased it's a major version increment? :)
On Tue, Nov 29, 2011 at 02:46:06PM +0200, Petri Lehtinen wrote:
Michael Foord wrote:
We tend to see 3.2 -> 3.3 as a "major version" increment, but that's just Python's terminology.
Even though (in the documentation) Python's version number components are called major, minor, micro, releaselevel and serial, in this order? So when the minor version component is increased it's a major version increment? :)
When the major version component is increased it's a World Shattering Change, isn't it?! ;-) Oleg. -- Oleg Broytman http://phdru.name/ phd@phdru.name Programmers don't die, they just GOSUB without RETURN.
On Tue, 29 Nov 2011 14:46:06 +0200 Petri Lehtinen <petri@digip.org> wrote:
Michael Foord wrote:
We tend to see 3.2 -> 3.3 as a "major version" increment, but that's just Python's terminology.
Even though (in the documentation) Python's version number components are called major, minor, micro, releaselevel and serial, in this order? So when the minor version component is increased it's a major version increment? :)
Well, that's why I think the version number components are not correctly named. I don't think any of the 2.x or 3.x releases can be called "minor" by any stretch of the word. A quick glance at http://docs.python.org/dev/whatsnew/index.html should be enough. Regards Antoine.
On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote:
Well, that's why I think the version number components are not correctly named. I don't think any of the 2.x or 3.x releases can be called "minor" by any stretch of the word. A quick glance at http://docs.python.org/dev/whatsnew/index.html should be enough.
Agreed, but it's too late to change it. I look at it as the attributes of the namedtuple being evocative of the traditional names for the digit positions, not the assignment of those positions to Python's semantics. -Barry
On Wed, Nov 30, 2011 at 1:13 AM, Barry Warsaw <barry@python.org> wrote:
On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote:
Well, that's why I think the version number components are not correctly named. I don't think any of the 2.x or 3.x releases can be called "minor" by any stretch of the word. A quick glance at http://docs.python.org/dev/whatsnew/index.html should be enough.
Agreed, but it's too late to change it. I look at it as the attributes of the namedtuple being evocative of the traditional names for the digit positions, not the assignment of those positions to Python's semantics.
Hmm, I wonder about that. Perhaps we could add a second set of names in parallel with the "major.minor.micro" names: "series.feature.maint". That would, after all, reflect what is actually said in practice:

- release series: 2.x, 3.x (usually used in a form like "in the 3.x series, X is true; in 2.x, Y is true")
- feature release: 2.7, 3.2, etc.
- maintenance release: 2.7.2, 3.2.1, etc.

I know I tend to call feature releases major releases, and I'm far from alone in that. The discrepancy in relation to sys.version_info is confusing, but we can't make 'major' refer to a different field without breaking existing programs. But we *can* change:
>>> sys.version_info
sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0)
to instead read:

sys.version_info(series=2, feature=7, maint=2, releaselevel='final', serial=0)

while allowing 'major' as an alias of 'series', 'minor' as an alias of 'feature', and 'micro' as an alias of 'maint'. Nothing breaks, and we'd have started down the path towards coherent terminology for the three fields in the version numbers (by accepting that 'major' has now become irredeemably ambiguous in the context of CPython releases).

This idea of renaming all three fields has come up before, but I believe we got stuck on the question of what to call the first number (i.e. the one I'm calling the "series" here). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
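[The aliasing idea above could be sketched in pure Python as follows. This is a toy illustration only, not how the real `sys.version_info` structseq would be patched in CPython; the `VersionInfo` class name is made up.]

```python
import sys
from collections import namedtuple

# The proposed new primary field names.
_VersionFields = namedtuple(
    "VersionFields", "series feature maint releaselevel serial")

class VersionInfo(_VersionFields):
    # Keep the existing names working as read-only aliases,
    # so no current code breaks.
    @property
    def major(self):  # alias of 'series'
        return self.series

    @property
    def minor(self):  # alias of 'feature'
        return self.feature

    @property
    def micro(self):  # alias of 'maint'
        return self.maint

# Both naming schemes refer to the same underlying fields.
vi = VersionInfo(*sys.version_info)
assert vi.major == vi.series and vi.minor == vi.feature
```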
2011/11/29 Nick Coghlan <ncoghlan@gmail.com>:
This idea of renaming all three fields has come up before, but I believe we got stuck on the question of what to call the first number (i.e. the one I'm calling the "series" here).
Can we drop this now? Too much effort for very little benefit. We call releases what we call releases. -- Regards, Benjamin
I like this article on it: http://semver.org/ The following snippets being relevant here:

"Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated."

"Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API."

With the exception of actually dropping stuff (however this only occurs in terms of modules, which hardly count in special cases?), Python already conforms to this standard very well. On Wed, Nov 30, 2011 at 11:00 AM, Benjamin Peterson <benjamin@python.org> wrote:
Can we drop this now? Too much effort for very little benefit. We call releases what we call releases.
-- Regards, Benjamin
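[The semver rules quoted in this thread can be restated as a toy decision rule. A sketch only, not an official semver tool; the function name and flags are made up for illustration.]

```python
def required_bump(breaks_api=False, adds_feature=False, deprecates_api=False):
    """Given the nature of a change, return which component of an
    X.Y.Z version must be incremented under semver."""
    if breaks_api:
        return "major"   # backwards incompatible change to the public API
    if adds_feature or deprecates_api:
        return "minor"   # new compatible functionality, or a deprecation
    return "micro"       # bug fixes only

# Under these rules, merely *deprecating* an API already forces a
# minor bump; *removing* it forces a major bump.
```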
On 29.11.2011 13:46, Petri Lehtinen wrote:
Michael Foord wrote:
We tend to see 3.2 -> 3.3 as a "major version" increment, but that's just Python's terminology.
Even though (in the documentation) Python's version number components are called major, minor, micro, releaselevel and serial, in this order? So when the minor version component is increased it's a major version increment? :)
Yes. Georg
Hi, On Mon, 28 Nov 2011 01:30:53 -0800 Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote:
Hi, our current deprecation policy is not so well defined (see e.g. [0]), and it seems to me that it's something like:
1) deprecate something and add a DeprecationWarning;
2) forget about it after a while;
3) wait a few versions until someone notices it;
4) actually remove it;

I suggest we follow this process instead:
1) deprecate something and add a DeprecationWarning;
2) decide how long the deprecation should last;
3) use the deprecated-removed[1] directive to document it;
4) add a test that fails after the update so that we remember to remove it[2];
How about we agree that actually removing things is usually bad for users? It would be best if the core devs had a strong aversion to removal.
Well, it's not like we aren't already conservative in deprecating things.
Instead, it is better to mark APIs as obsolete, with a recommendation to use something else. There is rarely a need to actually remove support for something in the standard library. Removal may serve a notion of tidiness or some such, but in reality it is a PITA for users, making it more difficult to upgrade Python versions and to use published recipes.
I agree with Xavier's answer that having recipes around which use outdated (and possibly inefficient/insecure/etc.) APIs is a nuisance. Also, deprecated-but-not-removed APIs come at a maintenance and support cost. Regards Antoine.
participants (18)
-
Antoine Pitrou
-
Barry Warsaw
-
Benjamin Peterson
-
Brett Cannon
-
exarkun@twistedmatrix.com
-
Ezio Melotti
-
Georg Brandl
-
Matt Joiner
-
Michael Foord
-
Nick Coghlan
-
Oleg Broytman
-
Petri Lehtinen
-
Raymond Hettinger
-
Stephen J. Turnbull
-
Steven D'Aprano
-
Xavier Morel
-
Xavier Morel
-
Éric Araujo