PEP 407: New release cycle and introducing long-term support versions

Hello,

We would like to propose the following PEP to change (C)Python's release cycle. Discussion is welcome, especially from people involved in the release process, and from maintainers of third-party distributions of Python.

Regards

Antoine.


PEP: 407
Title: New release cycle and introducing long-term support versions
Version: $Revision$
Last-Modified: $Date$
Author: Antoine Pitrou <solipsis@pitrou.net>,
        Georg Brandl <georg@python.org>,
        Barry Warsaw <barry@python.org>
Status: Draft
Type: Process
Content-Type: text/x-rst
Created: 2012-01-12
Post-History:
Resolution: TBD


Abstract
========

Finding a release cycle for an open-source project is a delicate exercise in managing mutually contradicting constraints: developer manpower, availability of release management volunteers, ease of maintenance for users and third-party packagers, quick availability of new features (and behavioural changes), availability of bug fixes without pulling in new features or behavioural changes.

The current release cycle errs on the conservative side. It is adequate for people who value stability over reactivity. This PEP is an attempt to keep the stability that has become a Python trademark, while offering a more fluid release of features, by introducing the notion of long-term support versions.


Scope
=====

This PEP doesn't try to change the maintenance period or release scheme for the 2.7 branch. Only 3.x versions are considered.


Proposal
========

Under the proposed scheme, there would be two kinds of feature versions (sometimes dubbed "minor versions", for example 3.2 or 3.3): normal feature versions and long-term support (LTS) versions.

Normal feature versions would get either zero or at most one bugfix release; the latter only if needed to fix critical issues. Security fix handling for these branches needs to be decided.

LTS versions would get regular bugfix releases until the next LTS version is out. They then would go into security fixes mode, up to a termination date at the release manager's discretion.

Periodicity
-----------

A new feature version would be released every X months. We tentatively propose X = 6 months.

LTS versions would be one out of N feature versions. We tentatively propose N = 4.

With these figures, a new LTS version would be out every 24 months, and remain supported until the next LTS version 24 months later. This is mildly similar to today's 18 months bugfix cycle for every feature version.

Pre-release versions
--------------------

More frequent feature releases imply a smaller number of disruptive changes per release. Therefore, the number of pre-release builds (alphas and betas) can be brought down considerably. Two alpha builds and a single beta build would probably be enough in the regular case. The number of release candidates depends, as usual, on the number of last-minute fixes before final release.


Effects
=======

Effect on development cycle
---------------------------

More feature releases might mean more stress on the development and release management teams. This is quantitatively alleviated by the smaller number of pre-release versions, and qualitatively by the lesser amount of disruptive changes (meaning less potential for breakage). The shorter feature freeze period (after the first beta build until the final release) is easier to accept. The rush for adding features just before feature freeze should also be much smaller.

Effect on bugfix cycle
----------------------

The effect on fixing bugs should be minimal with the proposed figures. The same number of branches would be simultaneously open for regular maintenance (two until 2.x is terminated, then one).

Effect on workflow
------------------

The workflow for new features would be the same: developers would only commit them on the ``default`` branch.

The workflow for bug fixes would be slightly updated: developers would commit bug fixes to the current LTS branch (for example ``3.3``) and then merge them into ``default``.

If some critical fixes are needed to a non-LTS version, they can be grafted from the current LTS branch to the non-LTS branch, just like fixes are ported from 3.x to 2.7 today.

Effect on the community
-----------------------

People who value stability can just synchronize on the LTS releases which, with the proposed figures, would give a similar support cycle (both in duration and in stability).

People who value reactivity and access to new features (without taking the risk of installing alpha versions or Mercurial snapshots) would get much more value from the new release cycle than currently.

People who want to contribute new features or improvements would be more motivated to do so, knowing that their contributions will be more quickly available to normal users. Also, a smaller feature freeze period makes it less cumbersome to interact with contributors of features.


Discussion
==========

These are open issues that should be worked out during discussion:

* Decide on X (months between feature releases) and N (feature releases per LTS release) as defined above.

* For given values of X and N, is the no-bugfix-releases policy for non-LTS versions feasible?

* Restrict new syntax and similar changes (i.e. everything that was prohibited by PEP 3003) to LTS versions?

* What is the effect on packagers such as Linux distributions?

* How will release version numbers or other identifying and marketing material make it clear to users which versions are normal feature releases and which are LTS releases? How do we manage user expectations?

A community poll or survey to collect opinions from the greater Python community would be valuable before making a final decision.


Copyright
=========

This document has been placed in the public domain.

On Tue, Jan 17, 2012 at 1:34 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
It sounds like every six months we would get a new feature version, with every fourth one an LTS release. That sounds great, but, unless I've misunderstood, there has been a strong desire to keep the minor version number to one digit. It doesn't matter to me all that much; however, if there is such a limit, implied or explicit, it should be mentioned and factored into the PEP. That aside, +1. -eric

If minor/feature releases are introducing breaking changes, perhaps it's time to adopt an accelerated major versioning schedule. For instance there are breaking ABI changes between 3.0/3.1 and 3.2, and while acceptable in the early-adoption phase of Python 3, such changes should normally be reserved for major versions. If every 4th or so feature release is sufficiently different to be worthy of an LTS, consider it a major release, albeit with smaller breaking changes than Python 3. Aside from this, given the radical features of 3.3, and the upcoming Ubuntu 12.04 LTS, I would recommend adopting 2.7 and 3.2 as the first LTS releases, to be reviewed 2 years hence should this go ahead.

Hello, On Wed, 18 Jan 2012 10:04:19 +1100 Matt Joiner <anacrolix@gmail.com> wrote:
If minor/feature releases are introducing breaking changes perhaps it's time to adopt accelerated major versioning schedule.
The PEP doesn't propose to accelerate compatibility breakage. So I don't think a change in numbering is required.
Which "breaking ABI changes" are you thinking about? Python doesn't guarantee any A*B*I (as opposed to API), unless you use Py_LIMITED_API which was introduced in 3.2. Regards Antoine.

On 1/17/2012 3:34 PM, Antoine Pitrou wrote:
To me, as I understand the proposal, the title is wrong. Our current feature releases already are long-term support versions: they get bugfix releases at close to 6-month intervals for 1 1/2 to 2 years and security fixes for 3 years. The only change here is that you propose, for instance, a fixed 6-month interval and 2-year period.

As I read this, you propose to introduce a new short-term (interim, preview) feature release along with each bugfix release. Each would have all the bugfixes plus a preview of the new features expected to be in the next long-term release. (I know, this is not exactly how you spun it.)

There has been discussion on python-ideas about whether new features are or can be considered experimental, or whether there should be an 'experimental' package. An argument against is that long-term production releases should not have experimental features that might go away or have their APIs changed. If the short-term, non-production, interim feature releases were called preview releases, then some or all of the new features could be labelled experimental and subject to change. It might actually be good to have major new features tested in at least one preview release before being frozen. Maybe then more of the initial bugs would be found and repaired *before* their initial appearance in a long-term release. (All of this is not to say that experimental features should be casually changed or reverted without good reason.)

One problem, at least on Windows, is that short-term releases would almost never have compiled binaries for 3rd-party libraries. It already takes a while for them to appear for the current long-term releases. On the other hand, library authors might be more inclined to test new features, a few at a time, as part of tested preview releases, than if just in the repository. So the result *might* be quicker library updates after each long-term release. -- Terry Jan Reedy

On Tue, 17 Jan 2012 18:29:11 -0500 Terry Reedy <tjreedy@udel.edu> wrote:
Well, "spinning" is important here. We are not proposing any "preview" releases. These would have the same issue as alphas or betas: nobody wants to install them where they could disrupt working applications and libraries. What we are proposing are first-class releases that are as robust as any other (and usable in production). It's really about making feature releases more frequent, not making previews available during development. I agree "long-term" could be misleading as their support duration is not significantly longer than current feature releases. I chose this term because it is quite well-known and well-understood, but we could pick something else ("extended support", "2-year support", etc.).
That's orthogonal to this PEP. (that said, more frequent feature releases are also a benefit for the __preview__ proposal, since we could be more reactive changing APIs in that namespace)
One problem, at least on Windows, is that short-term releases would almost never have compiled binaries for 3rd-party libraries.
That's a good point, although Py_LIMITED_API will hopefully make things better in the middle term. Regards Antoine.

On 1/17/2012 6:42 PM, Antoine Pitrou wrote:
The main point of my comment is that the new thing you are introducing is not long-term supported versions but short term unsupported versions.
Well, "spinning" is important here. We are not proposing any "preview" releases. These would have the same issue as alphas or betas: nobody
I said nothing about quality. We aim to keep default in near-release condition and seem to be getting better. The new unicode is still getting polished a bit, it seems, after 3 months, but that is fairly unusual.
But I am dubious that releases that are obsolete in 6 months and lack 3rd party support will see much production use.
It's really about making feature releases more frequent, not making previews available during development.
Given the difficulty of making a complete windows build, it would be nice to have one made available every 6 months, regardless of how it is labeled. I believe that some people will see and use good-for-6-months releases as previews of the new features that will be in the 'real', normal, bug-fix supported, long-term releases. Every release is a snapshot of a continuous process, with some extra effort made to tie up some (but not all) of the loose ends. -- Terry Jan Reedy

On 18 January 2012 04:32, Terry Reedy <tjreedy@udel.edu> wrote:
I'd love to see 6-monthly releases, including Windows binaries, and binary builds of all packages that needed a compiler to build. Oh, and a pony every LTS release :-)

Seriously, this proposal doesn't really acknowledge the amount of work by other people that would be needed for a 6-month release to be *usable* in normal cases (by Windows users, at least). It's usually some months after a release on the current schedule that Windows binaries have appeared for everything I use regularly. I could easily imagine 3rd-party developers tending to focus only on LTS releases, making the release cycle effectively *slower* for me, rather than faster. Paul

PS Things that might help improve this: (1) Py_LIMITED_API, and (2) support in packaging for binary releases, including a way to force installation of a binary release on the "wrong" version (so that developers don't have to repackage and publish identical binaries every 6 months).

Am 18.01.2012 05:32, schrieb Terry Reedy:
That is really a matter of perspective. Under the proposed cycle, there would be more regular versions than LTS versions, so the LTS versions are the exception and get the special name. (And at the same time, the name is already established and people probably grasp instantly what it means.)
Whether people would use the releases is probably something that only they can tell us -- that's why a community survey is mentioned in the PEP. Not sure what you mean by lacking 3rd party support.
Maybe they will. That's another thing that is made clear in the PEP: for one group of people (those preferring stability over long time), nothing much changes, except that the release period is a little longer, and there are these "previews" as you call them. Georg

On 18 January 2012 07:46, Georg Brandl <g.brandl@gmx.net> wrote:
The class of people we need to consider carefully is those who want to use the latest release, but are limited by the need for other parties to release stuff that works with that release (usually, this means Windows binaries of extensions, or platform vendors' packaged releases of modules/packages). For them, if the other parties focus on LTS releases (as is certainly possible) the release cycle would effectively become slower, going from 18 months to 24.
Not sure what you mean by lacking 3rd party support.
I take it as meaning that the people who release Windows binaries on PyPI, and vendors who package up PyPI distributions in their own distribution format. Lacking support in the sense that these people might well decide that a 6 month cycle is too fast (too much work) and explicitly decide to focus only on LTS releases. Paul

On Wed, 18 Jan 2012 07:52:20 +0000 Paul Moore <p.f.moore@gmail.com> wrote:
Well, do consider, though, that anyone not using third-party C extensions under Windows (either Windows users that are content with pure Python libs, or users of other platforms) won't have that problem. That should be quite a lot of people already. As for vendors, they have their own release management independent of ours already, so this PEP wouldn't change anything for them. Regards Antoine.

Hi, On 17/01/2012 22.34, Antoine Pitrou wrote:
If non-LTS releases won't get bug fixes, a bug that is fixed in 3.3.x might not be fixed in 3.4, unless the bug fixes releases are synchronized with the new feature releases (see below).
If LTS bugfix releases and feature releases are synchronized, we will have something like:

  3.3
  3.3.1 / 3.4
  3.3.2 / 3.5
  3.3.3 / 3.6
  3.7
  3.7.1 / 3.8
  ...

so every new feature release will have all the bug fixes of the current LTS release, plus new features. With this scheme we will soon run out of 1-digit numbers, though.

Currently we already have a 3.x release every ~18 months, so if we keep doing that (just every 24 months instead of 18) and introduce the feature releases in between under a different versioning scheme, we might avoid the problem. This means:

  3.1
  ... 18 months, N bugfix releases ...
  3.2
  ... 18 months, N bugfix releases ...
  3.3 LTS
  ... 24 months, 3 bugfix releases, 3 feature releases ...
  3.4 LTS
  ... 24 months, 3 bugfix releases, 3 feature releases ...
  3.5 LTS

In this way we solve the numbering problem and keep a familiar scheme (all the 3.x will be LTS and will be released at the same pace as before, no need to mark some 3.x as LTS). OTOH this will make the feature releases less "noticeable" and people might just ignore them and stick with the LTS releases. Also we would need to define a versioning convention for the feature releases.
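For illustration, the cadence implied by the PEP's tentative figures (a feature release every X = 6 months, with every N = 4th one an LTS) can be sketched in a few lines of Python. The starting version and the month offsets below are purely hypothetical, not part of any proposal:

```python
# Sketch of the release cadence implied by the PEP's tentative
# figures: a feature release every 6 months, every 4th one an LTS.
# Starting version and dates are illustrative only.

def release_calendar(start_minor=3, count=8, months_between=6, lts_every=4):
    """Yield (version, month_offset, is_lts) for each feature release."""
    for i in range(count):
        yield ("3.%d" % (start_minor + i),
               i * months_between,
               i % lts_every == 0)

for version, month, is_lts in release_calendar():
    print("month %2d: %s%s" % (month, version, "  (LTS)" if is_lts else ""))
```

With these defaults, 3.3 and 3.7 come out as LTS releases 24 months apart, which matches the first synchronized scheme sketched above.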
Wouldn't it still be two? Bug fixes will go to the last LTS and on default, features only on default.
So here the difference is that instead of committing on the previous release (what currently is 3.2), we commit it to the previous LTS release, ignoring the ones between that and default.
That's why I proposed to keep the same versioning scheme for these releases, and have a different numbering for the feature releases.
This doesn't necessarily have to be fixed, especially if we don't change the versioning scheme (so we don't need to know that we have a LTS release every N releases).
* For given values of X and N, is the no-bugfix-releases policy for non-LTS versions feasible?
If LTS bug fix releases and feature releases are synchronized it should be feasible.
* Restrict new syntax and similar changes (i.e. everything that was prohibited by PEP 3003) to LTS versions?
(I was reading this the other way around, maybe rephrase it to "Allow new syntax and similar changes only in LTS versions")
* What is the effect on packagers such as Linux distributions?
* What is the effect on PyPy/Jython/IronPython? Can they just skip the feature releases and focus on the LTS ones?
This is not an issue with the scheme I proposed.
Best Regards, Ezio Melotti

On Tue, Jan 17, 2012 at 3:50 PM, Ezio Melotti <ezio.melotti@gmail.com> wrote:
* What is the effect on PyPy/Jython/IronPython? Can they just skip the feature releases and focus on the LTS ones?
At least for IronPython it's unlikely we'd be able to track the feature releases. We're still trying to catch up as it is. Honestly, I don't see the advantages of this. Are there really enough new features planned that Python needs a full release more often than every 18 months? - Jeff

Am 18.01.2012 01:24, schrieb Jeff Hardy:
Yes, we think so. (What is a non-full release, by the way?)

The main reason is changes in the library. We have been getting complaints about the standard library bitrotting for years now, and one of the main reasons it's so hard to a) get decent code into the stdlib and b) keep it maintained is that the release cycles are so long. It's a tough thing for contributors to accept that the feature they've just implemented will only be in a stable release in 16 months. If the stdlib does not get more reactive, it might just as well be cropped down to a bare core, because 3rd-party libraries do everything as well and do it before we do. But you're right that if Python came without batteries, the current release cycle would be fine.

(Another, more far-reaching, proposal has been to move the stdlib out of the cpython repo and share a new repo with Jython/IronPython/PyPy. It could then also be released separately from the core. But this is much more work than the current proposal.) Georg

On Wed, Jan 18, 2012 at 6:55 PM, Georg Brandl <g.brandl@gmx.net> wrote:
I think this is the real issue here. The batteries in Python are so important because: 1) the stability and quality of 3rd-party libraries is not guaranteed, and 2) the mechanism used to obtain 3rd-party libraries is not popular or considered reliable. Much of the "bitrot" is that standard library modules have been superseded by third-party ones of much higher functionality. Rather than absorbing these libraries into the stdlib, it needs to be trivial to obtain them. Putting some of these higher-quality 3rd-party modules into lock step with Python is an unpopular move, and hampers their future growth.

Am 18.01.2012 00:50, schrieb Ezio Melotti:
That's already the case today. 3.2.5 might be released before 3.3.1 and therefore include bugfixes that 3.3.0 doesn't. True, there will be a 3.3.1 afterwards that does include it, but in the new case, there will be a new feature release instead.
Let's see how Guido feels about 3.10 first.
"Maintenance" excludes the feature development branch here. Will clarify.
Yes.
For these relatively short times (X = 6 months), I feel it is important to fix the time spans to have predictability for our developers. Georg

Executive summary: My take is "show us the additional resources, and don't be stingy!" Sorry, Antoine, I agree with your goals, but I think you are too optimistic about the positive effects and way too optimistic about the costs. Antoine Pitrou writes:
This increases the demand for developer manpower somewhat.
availability of release management volunteers,
Dramatic increase here. It may look like RM is not so demanding -- run a few scripts to put out the alphas/betas/releases. But the RM needs to stay on top of breaking news, make decisions. That takes time, interrupts other work, etc.
ease of maintenance for users and third-party packagers,
Dunno about users, but 3rd party packagers will also have more work to do, or will have to tell their users "we only promise compatibility with LTS releases."
quick availability of new features (and behavioural changes),
These are already *available*, just not *tested*. Since testing is the bottleneck on what users consider to be "available for me", you cannot decrease the amount of testing (alpha, beta releases) by anywhere near the amount you're increasing frequency, or you're just producing "as is" snapshots. Percentage of time in feature freeze goes way up, features get introduced all at once just before the next release, schedule slippage is inevitable on some releases.
availability of bug fixes without pulling in new features or behavioural changes.
Sounds like a slight further increase in demand for RM, and as described a dramatic decrease in the bugfixing for throw-away releases.
The current release cycle errs on the conservative side.
What evidence do you have for that, besides people who aren't RMs wishing that somebody else would do more RM work?
Way optimistic IMO (theoretical, admitted, but I do release management for a less well-organized project, and I teach in a business school, FWIW).
The shorter feature freeze period (after the first beta build until the final release) is easier to accept.
But you need to look at total time in feature freeze over the LTS cycle, not just before each throw-away release.
The rush for adding features just before feature freeze should also be much smaller.
This doesn't depend on the length of time in feature freeze per release, it depends on the fraction of time in feature freeze over the cycle. Given your quality goals, this will go way up.

On Wed, 18 Jan 2012 11:37:08 +0900 "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Georg and Barry may answer you here: they are release managers and PEP co-authors.
The point is to *increase* the amount of testing by making features available in stable releases on a more frequent basis. Not decrease it. Alphas and betas never produce much feedback, because people are reluctant to install them for anything else than toying around. Python is not emacs or Firefox, you don't use it in a vacuum and therefore installing non-stable versions is dangerous. Regards Antoine.

Antoine Pitrou writes:
We're talking about different kinds of testing. You're talking about (what old-school commercial software houses meant by "beta") testing in a production or production prototype environment. I'd love to see more of that, too! My claim is that I don't expect much uptake if you don't do close to as many of what are called "alpha" and "beta" tests on python-dev as are currently done.
Exactly my point, except that the PEP authors seem to think that we can cut back on the number of alpha and beta prereleases and still achieve the stability that such users expect from a Python release. I don't think that's right. I expect that unless quite substantial resources (far more than "proportional to 1/frequency") are devoted to each non-LTS release, a large fraction of such users will avoid non-LTS releases the way they avoid betas now.

Le mercredi 18 janvier 2012 à 21:48 +0900, Stephen J. Turnbull a écrit :
You claim people won't use stable releases because of not enough alphas? That sounds completely unrelated. I don't know of any users who would bother about that. (you can produce flimsy software with many alphas, too)
Sure, and we think it is :) Regards Antoine.

Antoine Pitrou writes:
You claim people won't use stable releases because of not enough alphas? That sounds completely unrelated.
Surely testing is related to user perceptions of stability. More testing helps reduce bugs in released software, which improves user perception of stability, encouraging them to use the software in production. Less testing, then, will have the opposite effect. But you understand that theory, I'm sure. So what do you mean to say?
(you can produce flimsy software with many alphas, too)
The problem is the converse: can you produce Python-release-quality software with much less pre-release testing than current feature releases get?
Sure, and we think it is [possible to do that] :)
Given the relative risk of rejecting PEP 407 and me being wrong (the status quo really isn't all that bad AFAICS), vs. accepting PEP 407 and you being wrong, I don't find a smiley very convincing. In fact, I don't find the PEP itself convincing -- and I'm not the only one. We'll see what Barry and Georg have to say.

Le jeudi 19 janvier 2012 à 00:25 +0900, Stephen J. Turnbull a écrit :
I have asked a practical question, a theoretical answer isn't exactly what I was waiting for.
I don't care to convince *you*, since you are not involved in Python development and release management (you haven't ever been a contributor AFAIK). Unless you produce practical arguments, saying "I don't think you can do it" is plain FUD and certainly not worth answering to. Regards Antoine.

Antoine Pitrou wrote:
Pardon me, but people like Stephen Turnbull are *users* of Python, exactly the sort of people you DO have to convince that moving to an accelerated or more complex release process will result in a better product. The risk is that you will lose users, or fragment the user base even more than it is now with 2.x vs 3.x.

Quite frankly, I like the simplicity and speed of the current release cycle. All this talk about separate LTS releases and parallel language releases and library releases makes my head spin. I fear the day that people asking questions on the tutor or python-list mailing lists will have to say (e.g.) "I'm using Python 3.4.1 and standard library 1.2.7" in order to specify the version they're using. I fear change, because the current system works well and for every way to make it better there are a thousand ways to make it worse. Dismissing fears like this as FUD doesn't do anyone any favours.

One on-going complaint is that Python-Dev doesn't have the manpower or time to do everything that needs to be done. Bugs languish for months or years because nobody has the time to look at them. Will going to a more rapid release cycle give people more time, or just increase their workload? You're hoping that a more rapid release cycle will attract more developers, and there is a chance that you could be right; but a more rapid release cycle WILL increase the total workload. So you're betting that this change will attract enough new developers that the workload per person will decrease even as the total workload increases. I don't think that's a safe bet. -- Steven

Steven D'Aprano writes:
Well, to be fair, Antoine is right in excluding me from the user base he's trying to attract (as I understand it). I do not maintain products or systems that depend on Python working 99.99999% of the time, and in fact in many of my personal projects I use trunk. One of the problems with this kind of discussion is that the targets of the new procedures are not clear in everybody's mind, but all of us tend to use generic terms like "users" when we mean to discuss benefits or costs to a specific class of users.

On Thu, 19 Jan 2012 11:12:06 +1100 Steven D'Aprano <steve@pearwood.info> wrote:
Well, you might bring some examples here, but I haven't seen any project lose users *because* they switched to a faster release cycle (*). I don't understand why this proposal would fragment the user base, either. We're not proposing to drop compatibility or build Python 4. ((*) Firefox's decrease in popularity seems to be due to Chrome uptake, and their new release cycle is arguably in response to that)
Well, the PEP discussion might make your head spin, because various possibilities are explored. Obviously the final solution will have to be simple enough to be understood by anyone :-) (do you find Ubuntu's release model, for example, too complicated?)
Yeah, that's my biggest problem with Nick's proposal. Hopefully we can avoid parallel version schemes.
This is not something that we can find out without trying, I think. As Georg pointed out, the decision is easy to revert or amend if we find out that the new release cycle is unworkable. Regards Antoine.

On Thu, Jan 19, 2012 at 9:07 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
They're not really parallel - the stdlib version would fully determine the language version. I'm only proposing two version numbers because we're planning to start versioning *two* things (the standard library, updated every 6 months, and the language spec, updated every 18-24 months). Since the latter matches what we do now, I'm merely proposing that we leave its versioning alone, and add a *new* identifier specifically for the interim stdlib updates.

Thinking about it though, I've realised that the sys.version string already contains a lot more than just the language version number, so I think it should just be updated to include the stdlib version information, and the version_info named tuple could get a new 'stdlib' field as a string. That way, sys.version and sys.version_info would still fully define the Python version, we just wouldn't be mucking with the meaning of any of the existing fields. For example, the current:
might become:
for the maintenance release and:
for the stdlib-only update. Explicit-is-better-than-implicit'ly yours, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
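A minimal sketch of what such an extended version_info might look like. The namedtuple shape, the 'stdlib' field, and its example value are hypothetical, taken from the suggestion above rather than from any real CPython API:

```python
from collections import namedtuple

# Hypothetical extended version_info: the first five fields keep
# their current meaning (language version), and a new 'stdlib'
# string identifies the interim stdlib-only update.
VersionInfo = namedtuple(
    "VersionInfo", "major minor micro releaselevel serial stdlib")

# Example values are made up for illustration.
info = VersionInfo(3, 3, 0, "final", 0, stdlib="3.3.1")

# The language version is still fully determined by the existing
# fields; code that only inspects those keeps working unchanged.
language_version = "%d.%d.%d" % info[:3]
print(language_version, "| stdlib", info.stdlib)
```

The design point is that existing consumers of the first five fields see no change; only code that cares about interim stdlib updates needs to look at the new field.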

Am 19.01.2012 01:12, schrieb Steven D'Aprano:
I can't help noticing that so far, worries about the workload came mostly from people who don't actually bear that load (this is no accusation!), while those that do are the proponents of the PEP... That is, I don't want to exclude you from the discussion, but on the issue of workload I would like to encourage more of our (past and present) release managers and active bug triagers to weigh in. cheers, Georg

Ok, so let me add then that I'm worried about the additional work-load. I'm particularly worried about the coordination of vacation across the three people that work on a release. It might well not be possible to make any release for a period of two months, which, in a six-months release cycle with two alphas and a beta, might mean that we (the release people) would need to adjust our vacation plans with the release schedule, or else step down (unless you would release the "normal" feature releases as source-only releases). FWIW, it might well be that I can't be available for the 3.3 final release (I haven't finalized my vacation schedule yet for August). Regards, Martin

On Fri, Jan 20, 2012 at 9:54 AM, "Martin v. Löwis" <martin@v.loewis.de> wrote:
I must admit that aspect had concerned me as well. Currently we use the 18-24 month window for releases to slide things around to accommodate the schedules of the RM, Martin (Windows binaries) and Ned/Ronald (Mac OS X binaries). Before we could realistically switch to more frequent releases, something would need to change on the binary release side. Regards, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Thu, Jan 19, 2012 at 17:54, "Martin v. Löwis" <martin@v.loewis.de> wrote:
In the interest of not having Windows releases depend on one person, and having gone through building the installer myself (which I know is but one of the duties), I'm available to help should you need it.

On 20 January 2012 03:57, Brian Curtin <brian@python.org> wrote:
One thought comes to mind - while we need a PEP to make a permanent change to the release schedule, would it be practical in any way to do a "trial run" of the process, and simply aim to release 3.4 about 6 months after 3.3? Based on the experiences gained from that, some of the discussions around this PEP could be supported (or not :-)) with more concrete information. If we can't do that, then that says something about the practicality of the proposal in itself... The plan for 3.4 would need to be publicised well in advance, of course, but doing that as a one-off exercise might well be viable. Paul. PS I have no view on whether the proposal is a good idea or a bad idea from a RM point of view. That's entirely up to the people who do the work to decide, in my opinion.

Am 20.01.2012 00:54, schrieb "Martin v. Löwis":
Thanks for the reminder, Martin. Even with the current release schedule, I think that the load on you is too much, and we need a whole team of Windows release experts. It's not really fair that the RM usually changes from release to release (at least every 2), and you have to do the same for everyone. It looks like we have one volunteer already; if we find another, I think one of them will also be not on vacation at most times :) For the Mac, at least we're up to two experts, but I'd like to see a third there too. cheers, Georg

Am 18.01.2012 16:25, schrieb Stephen J. Turnbull:
"The status quo really isn't all that bad" applies to any PEP. Also, compared to most PEPs, it is quite easy to revert to the previous state of things if they don't work out as wanted.
In fact, I don't find the PEP itself convincing -- and I'm not the only one.
That is noted. And I think Antoine was a little harsh earlier; of course we also need to convince users that the new cycle is advantageous and not detrimental.
We'll see what Barry and Georg have to say.
Two things: a) The release manager's job is not as bad as you might believe. We have an incredibly helpful and active core of developers which means that the RM job is more or less "reduced" to pronouncing on changes during the rc phase, and actually producing the releases. b) I did not have the impression (maybe someone can underline that with tracker stats?) that there were a lot more bug reports than usual during the alpha and early beta stages of Python 3.2. Georg

Georg Brandl writes:
That depends on how "doesn't work out" plays out. If meeting the schedule *and* producing a good release regularly is just more work than expected, of course you're right. If you stick to the schedule with insufficient resources, and lack of testing produces a really bad release (or worse, a couple of sorta bad releases in succession), reverting Python's reputation for stability is going to be non-trivial.
I've done release management and I've been watching Python do release management since PEP 263; I'm well aware that Python has a truly excellent process in place, and I regularly recommend studying it to friends interested in improving their own projects' processes. But I've also (twice) been involved (as RM) in a major revision of RM procedures, and both times it was a lot more work than anybody expected. Finally, the whole point of this exercise is to integrate a lot more stdlib changes (including whole packages) than in the past on a much shorter timeline, and to do it repeatedly. "Every six months" still sounds like a long time if you are a "leaf" project still working on your changes on your own schedule and chafing at the bit waiting to get them into the core project's releases, but it's actually quite short for the RM. I'm not against this change (especially since, as Antoine so graciously pointed out, I'm not going to be actually doing the work in the foreseeable future), but I do caution that the effort required seems to be dramatically underestimated.
Yeah, but the question for Python's stability reputation is "were there more than zero?" Every bug that gets through is a risk.

This won't be a surprise to Antoine or Georg (since I've already expressed the same opinion privately), but I'm -1 on the idea of official releases of the whole shebang every 6 months. We're not Ubuntu, Fedora, Chrome or Firefox with a for-profit company (or large foundation) with multiple paid employees kicking around to really drive the QA process. If we had official support from Red Hat or Canonical promising to devote paid QA and engineering resources to keeping things on track my opinion might be different, but that is highly unlikely. I'm also wholly in agreement with Ezio that using the same versioning scheme for both full releases and interim releases is thoroughly confusing for users (for example, I consider Red Hat's completely separate branding and versioning for Fedora and RHEL a better model for end users than Canonical's more subtle 'Ubuntu' and 'Ubuntu LTS' distinction, and that's been my opinion since long before I started working for RH). My original suggestion to Antoine and Georg for 3.4 was that we simply propose to Larry Hastings (the 3.4 RM) that we spread out the release cycle, releasing the first alpha after ~6 months, the second after about ~12, then rolling into the regular release cycle of a final alpha, some beta releases, one or two release candidates and then the actual release. However, I'm sympathetic to Antoine's point that early alphas aren't likely to be at all interesting to folks that would like a fully supported stdlib update to put into production and no longer think that suggestion makes much sense on its own. Instead, if the proposal involves instituting a PEP 3003 style moratorium (i.e. stdlib changes only) for all interim releases, then we're essentially talking about splitting the versioning of the core language (and the CPython C API) and the standard library. 
If we're going to discuss that, we may as well go a bit further and just split development of the two out onto separate branches, with the current numbering scheme applying to full language version releases and switching to a date-based versioning scheme for the standard library (i.e. if 3.3 goes out in August as planned, then it would be "Python 3.3 with the 12.08 stdlib release"). What might such a change mean?

1. For 3.3, the following releases would be made:

   - 3.2.x is cut from the 3.2 branch (1 rc + 1 release)
   - 3.3.0 + PyStdlib 12.08 is created from the default branch (1 alpha, 2 betas, 1+ rc, 1 release)
   - the 3.3 maintenance branch is created
   - the stdlib development branch is created

2. Once 3.2 goes into security-fix only mode, this would then leave us with 4 active branches:

   - 2.7 (maintenance)
   - 3.3 (maintenance)
   - stdlib (Python 3.3 compatible, PEP 3003 compliant updates)
   - default (3.4 development)

   The 2.7 branch would remain a separate head of development, but for 3.x development the update flow would become:

   - Bug fixes: 3.3 -> stdlib -> default
   - Stdlib features: stdlib -> default
   - Language changes: default

3. Somewhere around February 2013, we prepare to release Python 3.4a1 and 3.3.1, along with PyStdlib 13.02:

   - 3.3.1 + PyStdlib 12.08 is cut from the 3.3 branch (1 rc + 1 release)
   - 3.3.1 + PyStdlib 13.02 comes from the stdlib branch (1 alpha, 1 beta, 1+ rc, 1 release)
   - 3.4.0a1 comes from the default branch (may include additional stdlib changes)

4. Around August 2013 this process repeats:

   - 3.3.2 + PyStdlib 12.08 is cut from the 3.3 branch
   - 3.3.2 + PyStdlib 13.08 comes from the stdlib branch (final 3.3 compatible stdlib release)
   - 3.4.0a2 comes from the default branch

5. And then in February 2014, we gear up for a new major release:

   - 3.3.3 is cut from the 3.3 branch, and the 3.3 branch enters security fix only mode
   - 3.4.0 + PyStdlib 14.02 is created from the default branch (1 alpha, 2 betas, 1+ rc, 1 release)
   - the 3.4 maintenance branch is created and merged into the stdlib branch

   (Alternatively, Feb 2014 could be another interim release of a 3.4 alpha and a 3.3 compatible stdlib update, with 3.4 delayed until August 2014.)

I believe this approach would get to the core of what the PEP authors want (i.e. more frequent releases of the standard library), while being quite explicit in *avoiding* the concerns associated with more frequent releases of the core language itself. The rate of updates on the language spec, the C API (and ABI), the bytecode format and the AST would remain largely unchanged at 18-24 months. Other key protocols (e.g. default pickle formats) could also be declared ineligible for changes in interim releases. If a critical security problem is found, then additional releases may be cut for the maintenance branch and for the stdlib branch. There's a slight annoyance in having all development filtered through an additional branch, but there's a large advantage in that having a stable core in the stdlib branch makes it more likely we'll be able to use it as a venue for collaboration with the PyPy, Jython and IronPython folks (they all have push rights, and a separate branch means they can use it without having to worry about any of the core changes going on in the default branch). A separate branch with combined "3.x.y + PyStdlib YY.MM" releases is also significantly less work than trying to split the stdlib out completely into a separate repo. Regards, Nick.

Le mercredi 18 janvier 2012 à 21:26 +1000, Nick Coghlan a écrit :
It's a straight-forward way to track the feature support of a release. How do you suggest all these "sys.version_info >= (3, 2)" - and the corresponding documentation snippets a.k.a "versionadded" or "versionchanged" tags - be spelt otherwise?
It's not only branding and versioning, is it? They're completely different projects with different goals (and different commercial support). If you're suggesting we do only short-term releases and leave the responsibility of long-term support to another project or entity, I'm not against it, but it's far more radical than what we are proposing in the PEP :-)
Well, you're opposing the PEP on the basis that it's workforce-intensive, but you're proposing something much more workforce-intensive :-) Splitting the stdlib:

- requires someone to do the splitting (highly non-trivial given the interactions of some modules with interpreter details or low-level C code)
- requires setting up separate resources (continuous integration with N stdlib versions and M interpreter versions, for example)
- requires separate maintenance and releases for the stdlib (but with non-trivial interaction with interpreter maintenance, since they will affect each other and must be synchronized for Python to be usable at all)
- requires more attention by users, since there are now *two* release schedules and independent version numbers to track

The former two are one-time costs, but the latter two are recurring costs. Therefore, splitting the stdlib is much more complicated and involved than many people think; it's not just "move a few directories around and be done". And it's not even obvious it would have an actual benefit, since developers of other implementations are busy doing just that (see Jeff Hardy's message in this thread). Regards Antoine.

On Wed, Jan 18, 2012 at 10:30 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
Did you read what I actually proposed? I specifically *didn't* propose separate stdlib releases (for all the reasons you point out), only separate date-based stdlib *versioning*. Distribution of the CPython interpreter + stdlib would remain monolithic, as it is today. Any given stdlib release would only be supported for the most recent language release. The only difference is that between language releases, where we currently only release maintenance builds, we'd *also* release a second version of each maintenance build with an updated standard library, along with an alpha release of the next language version (with the last part being entirely optional, but I figured I may as well make the suggestion since I like the idea of getting syntax updates and the like out for earlier experimentation). When you initially pitched the proposal via email, you didn't include the "language moratorium applies to interim releases" idea. That one additional suggestion makes the whole concept *much* more appealing to me, but I only like it on the condition that we decouple the stdlib versioning from the language definition versioning (even though I recommend we only officially support very specific combinations of the two). My suggestion is really just a concrete proposal for implementing Ezio's idea of only bumping the Python version for releases with long term support, and using some other mechanism to distinguish the interim releases. So, assuming a 2 year LTS cycle, the released versions up to February 2015 with my suggestion would end up being:
If we have to make "brown paper bag" releases for the maintenance or stdlib branches then the micro versions get bumped - the date-based version of the standard library relates to when that particular *API* was realised, not when bugs were last fixed in it. If a target release date slips, then the stdlib version would be increased accordingly (cf. Ubuntu 6.06). Yes, we'd have an extra set of active buildbots to handle the stdlib branch, but a) that's no harder than creating the buildbots for a new maintenance branch and b) the interim release proposal will need to separate language level changes from stdlib level changes *anyway*. As far as how sys.version checks would be updated, I would propose a simple API addition to track the new date-based standard lib versioning: sys.stdlib_version. People could choose to just depend on a specific Python version (implicitly depending on the stdlib version that was originally shipped with that version of CPython), or they may instead decide to depend on a specific stdlib version (implicitly depending on the first Python version that was shipped with that stdlib). The reason I like this scheme is that it allows us (and users) to precisely track the things that can vary at the two different rates. At least the following would still be governed by changes in the first two fields of sys.version (i.e. the major Python version):

- deprecation policy
- language syntax
- compiler AST
- C ABI stability
- Windows compilation suite and C runtime version
- anything else we decide to link with the Python language version (e.g. default pickle protocol)

However, the addition of date-based stdlib versioning would allow us to clearly identify the new interim releases proposed by PEP 407 *without* mucking up all those things that are currently linked to sys.version and really *shouldn't* be getting updated every 6 months.
Users get a clear guarantee that if they follow the stdlib updates instead of the regular maintenance releases, they'll get nice new features along with their bug fixes, but no new deprecations or backwards incompatible API changes. However, they're also going to be obliged to transition to each new language release as it comes out if they want to continue getting security updates. Basically, what it boils down to is that I'm now +1 on the general proposal in the PEP, *so long as*:

1. We get a separate Hg branch for "stdlib only" changes, and default becomes the destination specifically for "language update" changes (with the latter being a superset of the former)
2. The proposed "interim releases" are denoted by a new date-based sys.stdlib_version field, and sys.version retains its current meaning (and slow rate of change)

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
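A sketch of how the two kinds of dependency check might look under this proposal. Note that sys.stdlib_version never existed; a stand-in variable simulates it here, and the "YY.MM" values are illustrative assumptions.

```python
import sys

# Stand-in for the proposed (hypothetical) sys.stdlib_version attribute.
stdlib_version = "13.02"

# Depending on a language feature: check the real language version,
# exactly as today.
can_use_new_syntax = sys.version_info >= (3, 3)

# Depending on an interim stdlib feature: check the date-based version.
# Zero-padded "YY.MM" strings compare correctly as plain strings
# within the same century:
assert "13.02" > "12.08"
has_interim_stdlib_api = stdlib_version >= "13.02"
assert has_interim_stdlib_api
```

The appeal of the split is visible even in this toy: the language check and the stdlib check vary at their two different rates without either version number having to carry both meanings.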

Nick Coghlan writes:
Typo? -> 3.4.0 + 14.08.0, right?
Python 3.4.1 + stdlib 15.02.0 (~February 2015)
It seems to me there could be considerable divergence between the stdlib code in
and
because 14.08.0a* will be targeting 3.4, and *should* use new language constructs and APIs where they are appropriate, while 13.02.0 ... 14.02.0 will be targeting the 3.3 API, and mustn't use them.

On Wed, Jan 18, 2012 at 09:08, Nick Coghlan <ncoghlan@gmail.com> wrote:
IOW we would have a language moratorium every 2 years (i.e. between LTS releases) while switching to a 6 month release cycle for language/VM bugfixes and full stdlib releases? I would support that as it has several benefits from several angles.
It also makes disruptive language changes less frequent, so people have more time to catch up, update books/docs, etc. We can also let them bake longer, and we all get more experience with them. Doing a release every 6 months that includes updates to the stdlib and bugfixes to the language/VM also benefits other VMs by getting compatibility fixes in faster. All of the other VM maintainers have told me that keeping the stdlib non-CPython compliant is the biggest hurdle. This kind of switch means they could release a VM that supports a release 6 months or a year after a language change release (e.g. 1 to 2 releases in) so as to get changes in faster and lower the need to keep their own fork. It should also increase the chances of external developers being willing to become core developers and contributing their projects to Python. If they get to keep a 6 month release cycle, we could consider pulling in projects like httplib2 and others that have resisted inclusion in the stdlib because of the painfully long (for them) wait between releases.
I don't think we need to do a new versioning scheme. Why can't we just say which releases are covered by a language moratorium? The community seemed to pick up on that rather well when we did it for Python 3, and I didn't see anyone having difficulty explaining it when someone didn't know what was going on. As long as we are clear which releases are under a language moratorium and which ones aren't, we shouldn't need to switch to a language + stdlib versioning scheme. This will lead to us reaching Python 4 faster (in about 4 years), but even that doesn't need to be a big deal. Linux jumped from 2 to 3 w/o issue. Once again, as long as we are clear on which new versions have language changes, it should be clear what to expect. Otherwise I say we just bump the major version when we do a language-changing release (i.e. every 2 years) and just do a minor/feature number bump (i.e. every 6 months) when we add/change stuff in the stdlib. People can then be told "learn Python 4", which is easy to point out on docs, e.g. you won't have to go digging for what minor/feature release a book covers, just what major release, which will probably be emblazoned on the cover. And with the faster stdlib release schedule, other VMs can aim for X.N versions when they have all the language features *and* all of their compatibility fixes in the stdlib. And then once they hit that, they can just continue to support that major version by keeping up with minor releases with compatibility fixes (which buildbots can help guarantee). And honestly, if we don't go with this I'm with Georg's comment in another email of beginning to consider stripping the stdlib down to core libraries to help stop the bitrot (sorry, Paul). If we can't attract new replacements for modules we can't ditch because of backwards compatibility, I start to wonder if I should even care about improving the stdlib outside of core code required to make Python simply function. -Brett

Am 18.01.2012 18:56, schrieb Brett Cannon:
That is certainly a possibility (it's listed as an open issue in the PEP).
Yes. In the end, the moratorium really was a good idea, and this would be carrying on the spirit.
Exactly! Georg

On Thu, Jan 19, 2012 at 7:31 AM, fwierzbicki@gmail.com <fwierzbicki@gmail.com> wrote:
Yes, with the addition of the idea of a PEP 3003 style language change moratorium for interim releases, I've been converted from an initial opponent of the idea (since we don't want to give the wider community whiplash) to a supporter (since some parts of the community, especially web service developers that deploy to tightly controlled environments, aren't well served by the standard library's inability to keep up with externally maintained standards and recommended development practices). It means PEP 407 can end up serving two goals:

1. Speeding up the rate of release for the standard library, allowing enhanced features to be made available to end users sooner.
2. Slowing down (slightly) the rate of release of changes to the core language and builtins, providing more time for those changes to filter out through the wider Python ecosystem.

Agreeing with those goals in principle then leaves two key questions to be addressed:

1. How would we have to update our development practices to make such a dual versioning scheme feasible?
2. How can we best communicate a new approach to versioning without unduly confusing developers that have built up certain expectations about Python's release cycle over the past 20+ years?

For the first point, I think having two active development branches (one for stdlib updates, one for language updates) will prove to be absolutely essential. Otherwise all language updates would have to be landed in the 6 month window between the last stdlib release for a given language version and the next language release, which seems to me a crazy way to go about things. As a consequence, I think we'd be obliged to do something to avoid conflicts on Misc/NEWS (this could be as simple as splitting it out into NEWS and NEWS_STDLIB, but if we're restructuring those files anyway, we may also want to do something about the annoying conflicts between maintenance releases and development releases).
That then leaves the question of how best to communicate such a change to the rest of the Python community. This is more a political and educational question than a technical one. A few different approaches have already been suggested:

1. I believe the PEP currently proposes just taking the "no more than 9" limit off the minor version of the language. Feature releases would just come out every 6 months, with every 4th release flagged as a language release. This could even be conveyed programmatically by offering "sys.lang_version" and "sys.lang_version_info" attributes that define the *language* version of a given release - 3.3, 3.4, 3.5 and 3.6 would all have something like sys.lang_version == '3.3', and then in 3.7 (the next language release) it would be updated to say sys.lang_version == '3.7'. This approach would require that some policies (such as the deprecation cycle) be updated to refer to changes in the language version (sys.lang_version) rather than changes in the stdlib version (sys.version). I don't like this scheme because it tries to use one number (the minor version field) to cover two very different concepts (stdlib updates and language updates). While technically feasible, this is unnecessarily obscure and confusing for end users.

2. Brett's alternative proposal is that we switch to using the major version for language releases and the minor version for stdlib releases. We would then release 3.3, 3.4, 3.5 and 3.6 at 6 month intervals, with 4.0 then being released in August 2014 as a new language version. Without taking recent history into account, I actually like this scheme - it fits well with traditional usage of major.minor.micro version numbering. However, I'm not confident that the "python" name will refer to Python 3 on a majority of systems by 2014, and accessing Python 4.0 through the "python3" name would just be odd.
It also means we lose our ability to signal to the community when we plan to make a backwards incompatible language release (making the assumption that we're never going to want to do that again would be incredibly naive). On a related note, we'd also be setting ourselves up to have to explain to everyone that "no, no, Python 3 -> 4 is like upgrading from Python 3.2 -> 3.3, not 2.7 -> 3.2". I expect the disruptions of the Python 3 transition will still be fresh enough in everyone's mind at that point that we really shouldn't go there if we don't have to.

3. Finally, we get to my proposal: that we just leave sys.version and sys.version_info alone. They will still refer to Python language versions; the micro release will be incremented every 6 months or so, the minor release once every couple of years to indicate a language update, and the major release every decade or so (if absolutely necessary) to indicate the introduction of backwards incompatibilities. All current intuitions and expectations regarding the meaning of sys.version and sys.version_info remain completely intact. However, we would still need *something* to indicate that the stdlib has changed in the interim releases. This should be a monotonically increasing value, but should also be clearly distinct from the language version. Hence my proposal of a date-based sys.stdlib_version and sys.stdlib_version_info. That way, nobody has to *unlearn* anything about current Python development practices and policies. Instead, all people have to do is *learn* that we now effectively have two release streams: a date-based release stream that comes out every 6 months (described by sys.stdlib_version) and an explicitly numbered release stream (described by sys.version) that comes out every 24 months. So in August this year, we would release 3.3+12.08, followed by 3.3+13.02, 3.3+13.08 and 3.3+14.02 at 6 month intervals, and then the next language release as 3.4+14.08.
If someone refers to just Python 3.3, then the "at least stdlib 12.08" is implied. If they refer to Python stdlib 12.08, 13.02, 13.08 or 14.02, then it is the dependency on "Python 3.3" that is implied. Two different rates of release -> two different version numbers. Makes sense to me. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Thu, 19 Jan 2012 11:03:15 +1000 Nick Coghlan <ncoghlan@gmail.com> wrote:
With the moratorium suggestion factored in, yes. The PEP insists on support duration rather than the breadth of changes, though. I think that's a more important piece of information for users. (you don't care whether or not new language constructs were added, if you were not planning to use them)
As an end user I wouldn't really care whether a release is "stdlib changes only" or "language/builtins additions too" (especially in a language like Python where the boundaries are somewhat blurry). I think this distinction is useful mainly for experts and therefore not worth complicating version numbering for.
The main problem I see with this is that Python 3 was a big disruptive event for the community, and calling a new version "Python 4" may make people anxious at the prospect of compatibility breakage. Instead of spending some time advertising that "Python 4" is a safe upgrade, perhaps we could simply call it "Python 3.X+1"? (and, as you point out, keep "Python X+1" for when we want to change the language in incompatible ways again)
If I were a casual user of a piece of software, I'd really find such a numbering scheme complicated and intimidating. I don't think most users want such a level of information. Regards Antoine.

On Thu, Jan 19, 2012 at 9:17 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
I think the ideal numbering scheme from a *new* user's point of view is the one Brett suggested (where major = language update, minor = stdlib update), but (as has been noted) there are solid historical reasons we can't use that. While I still have misgivings, I'm starting to come around to the idea of just allowing the minor release number to increment faster (Barry's co-authorship of the PEP, suggesting he doesn't see such a scheme causing any problems for Ubuntu, is a big factor in that). I'd still like the core language version to be available programmatically, though, and I'd like the PEP to consider displaying it as part of sys.version and using it to allow things like having bytecode compatible versions share bytecode files in the cache. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Jan 19, 2012, at 12:17 PM, Antoine Pitrou wrote:
s/was/is/ The Python 3 transition is ongoing, and Guido himself at the time thought it would take 5 years. I think we're making excellent progress, but there are still occasional battles just to convince upstream third party developers that supporting Python 3 (let alone *switching* to Python 3) is even worth the effort. I think we're soon going to be at a tipping point where not supporting Python 3 will be the minority position. Even if a hypothetical Python 4 were completely backward compatible, I shudder at the PR nightmare that would entail. I'm not saying there will never be a time for Python 4, but I sure hope it's far enough in the future that you youngun's will be telling us about it in the Tim Peters Home for Python Old Farts, where we'll smile blankly, bore you again with stories of vinyl records, phones with real buttons, and Python 1.6.1 while you feed us our mush under chronologically arranged pictures of BDFLs Van Rossum, Peterson, and Van Rossum. -Barry

Brett Cannon wrote:
Do we have any evidence of this alleged bitrot? I spend a lot of time on the comp.lang.python newsgroup and I see no evidence that people using Python believe the standard library is rotting from lack of attention. I do see people having trouble with installing third party packages. I see that stripping back the standard library and forcing people to rely more on external libraries will hurt, rather than help, the experience they have with Python. -- Steven

On Thu, Jan 19, 2012 at 10:19 AM, Steven D'Aprano <steve@pearwood.info> wrote:
IMO, it's a problem mainly with network (especially web) protocols and file formats. It can take the stdlib a long time to catch up with external developments due to the long release cycle, so people are often forced to switch to third party libraries that better track the latest versions of relevant standards (de facto or otherwise). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On 1/18/2012 8:06 PM, Nick Coghlan wrote:
Some of those modules are more than 2 years out of date, and I guess what Brett is saying is that the people interested and able to update them will not do so in the stdlib, because they want to be able to push out feature updates whenever they are needed and available, and not be tied to a slow release schedule. Moreover, since the external standards will continue to evolve for the foreseeable future, the need to track them more quickly will also continue. We could relax the ban on new features in micro releases, designate such modules as volatile, and let them get new features in each x.y.z release. In a sense, this would be less drastic than inventing a new type of release. Code can require an x.y.z release, as it must if it depends on a bug fix not in x.y.0. I also like the idea of stretching out the alpha release cycle. I would like to see 3.3.0a1 appear along with 3.2.3 (in February?). If alpha releases are released with all buildbots green, they are as good, at least with respect to old features, as a corresponding bugfix release. All releases will become more dependable as test coverage improves. Again, this idea avoids inventing a new type of release with new release designations. I think one reason people avoid alpha releases is that they so quickly become obsolete. If one sat for 3 to 6 months, it might get more attention. As for any alpha stigma, we should emphasize that alpha only means not feature frozen. -- Terry Jan Reedy

Nick Coghlan <ncoghlan@gmail.com> wrote:
I'm not sure how much of a problem this really is. I continually build fairly complicated systems with Python that do a lot of HTTP networking, for instance. It's fairly easy to replace use of the standard library modules with use of Tornado and httplib2, and I wouldn't think of *not* doing that. But the standard modules are there, out-of-the-box, for experimentation and tinkering, and they work in the sense that they pass their module tests. Are those standard modules as "Internet-proof" as some commercially-supported package with an income stream that supports frequent security updates would be? Perhaps not. But maybe that's OK. Another way of doing this would be to "bless" certain third-party modules in some fashion short of incorporation, and provide them with more robust development support, again, "somehow", so that they don't fall by the wayside when their developers move on to something else, but are still able to release on an independent schedule. Bill

On Jan 19, 2012 9:28 AM, "Bill Janssen" <janssen@parc.com> wrote:
This is starting to sound a little like the discussion about the __preview__ / __experimental__ idea. If I recall correctly, one of the points is that for some organizations getting a third-party library approved for use is not trivial. In contrast, inclusion in the stdlib is like a free pass, since the organization can rely on the robustness of the CPython QA and release processes. As well, there is at least a small cost with third-party libraries for those that maintain more rigorous configuration management. In contrast, there is basically no extra cost with new/updated stdlib, beyond upgrading Python. -eric

Hi, One of the main sticking points over possible fixes for the hash-collision security issue seems to be a fear that changing the iteration order of a dictionary will break backwards compatibility. The order of iteration has never been specified. In fact not only is it arbitrary, it cannot be determined from the contents of a dict alone; it may depend on the insertion order. Changing a hash function is not the only change that will change the iteration order; any of the following will also do so:

* Changing the minimum size of a dict.
* Changing the load factor of a dict.
* Changing the resizing policy of a dict.
* Sharing of keys between dicts.

By treating iteration order as part of the API we are effectively ruling out ever making any improvements to the dict. For example, my new dictionary implementation https://bitbucket.org/markshannon/hotpy_new_dict/ reduces memory use by 47% for gcbench, and by about 20% for the 2to3 benchmark, on my 32bit machine. (Nice graphs: http://tinyurl.com/7qd2nnm http://tinyurl.com/6uqvl2x ) The new dict implementation (necessarily) changes the iteration order and will break code that relies on it.

If dict iteration order is to be treated as part of the API (and I think that is a very bad idea) then it should be documented, which will be difficult since it is barely deterministic. This will also be a major problem for PyPy, Jython and IronPython, as they will have to reimplement their dicts. So, don't be afraid to change that hash function :) Cheers, Mark
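[Editor's note: Mark's point that iteration order cannot be determined from a dict's contents alone can be shown in a few lines. This is an illustrative sketch only; note that CPython 3.7+ guarantees insertion order, which was not yet the case in 2012, but the contents-versus-order distinction it demonstrates is the same.]

```python
# Two dicts with identical contents, built in different insertion orders.
d1 = {}
for key in ("a", "b", "c"):
    d1[key] = 0

d2 = {}
for key in ("c", "b", "a"):
    d2[key] = 0

# Equality compares contents, not order -- this is what tests should rely on.
assert d1 == d2

# Iteration order, however, is not determined by the contents alone: here
# it reflects insertion order, and under a different hash function or dict
# implementation it could differ in other ways as well.
print(list(d1))  # ['a', 'b', 'c']
print(list(d2))  # ['c', 'b', 'a']
assert list(d1) != list(d2)
```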

On Fri, Jan 20, 2012 at 5:49 AM, Mark Shannon <mark@hotpy.org> wrote:
So, don't be afraid to change that hash function :)
Definitely. The hash function *has* been changed in the past, and a lot of developers were schooled in not relying on the iteration order. That's a good thing, as those developers now write tests of what's actually important rather than relying on implementation details of the Python runtime. A hash function that changes more often than during an occasional major version update will encourage more developers to write better tests. We can think of it as an educational tool. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> "A person who won't read has no advantage over one who can't read." --Samuel Langhorne Clemens

On Fri, Jan 20, 2012 at 8:49 PM, Mark Shannon <mark@hotpy.org> wrote:
So, don't be afraid to change that hash function :)
Changing it for 3.3 isn't really raising major concerns: the real concern is with changing it in maintenance and security patches for earlier releases. Security patches that may break production applications aren't desirable, since it means admins have to weigh up the risk of being affected by the security vulnerability against the risk of breakage from the patch itself. The collision counting approach was attractive because it looked like it might offer a way out that was less likely to break deployed systems. Unfortunately, I think the point Martin raised about just opening a new (even more subtle) attack vector kills that idea dead. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Wed, Jan 18, 2012 at 09:26:19PM +1000, Nick Coghlan wrote:
This looks like a good "bridge" suggestion between rapid releases and stable releases. What would be the purpose of the alpha release? Would we encourage people to use it or to test it? With the rapid release cycle, the encouragement is to use rather than to test. -- Senthil

On Tuesday, January 17, 2012, Antoine Pitrou <solipsis@pitrou.net> wrote:
As a Gentoo packager, this would mean much more work for us, unless all the non-LTS releases promised to be backwards compatible. I.e. the hard part for us is managing other packages' compatibility (and incompatibilities) with Python. As a user of Python, I would rather dislike the change from 18 to 24 months for LTS release cycles. And the limiting factor for my use of Python features is largely old Python versions still in use, not the availability of newer features in the newest Python. So I'm much more interested in finding ways of improving 2.7/3.2 uptake than adding more feature releases. I also think that it would be sensible to wait with something like this process change until the 3.x adoption curve is much further along. Cheers, Dirkjan

Hello Dirkjan, On Wed, 18 Jan 2012 18:32:22 +0100 Dirkjan Ochtman <dirkjan@ochtman.nl> wrote:
It might need to be spelled out clearly in the PEP, but one of my assumptions is that packagers choose which release series they want to synchronize with. So packagers can synchronize on the LTS releases if it's more practical for them, or if it maps better to their own release model (e.g. Debian). Do you think that's a valid answer to Gentoo's concerns?
So I'm much more interested in finding ways of improving 2.7/3.2 uptake than adding more feature releases.
That would be nice as well, but I think it's orthogonal to the PEP. Besides, I'm afraid there's not much we (python-dev) can do about it. Some vendors (Debian, Redhat) will always lag behind the bleeding-edge feature releases. Regards Antoine.

On Tue, Jan 17, 2012 at 1:34 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
It sounds like every six months we would get a new feature version, with every fourth one an LTS release. That sounds great, but, unless I've misunderstood, there has been a strong desire to keep that number to one digit. It doesn't matter to me all that much. However, if there is such a limit, implied or explicit, it should be mentioned and factor into the PEP. That aside, +1. -eric

If minor/feature releases are introducing breaking changes, perhaps it's time to adopt an accelerated major versioning schedule. For instance there are breaking ABI changes between 3.0/3.1 and 3.2, and while acceptable for the early adoption state of Python 3, such changes should normally be reserved for major versions. If every 4th or so feature release is sufficiently different to be worthy of an LTS, consider this a major release, albeit with smaller breaking changes than Python 3. Aside from this, given the radical features of 3.3, and the upcoming Ubuntu 12.04 LTS, I would recommend adopting 2.7 and 3.2 as the first LTSs, to be reviewed 2 years hence should this go ahead.

Hello, On Wed, 18 Jan 2012 10:04:19 +1100 Matt Joiner <anacrolix@gmail.com> wrote:
If minor/feature releases are introducing breaking changes perhaps it's time to adopt accelerated major versioning schedule.
The PEP doesn't propose to accelerate compatibility breakage. So I don't think a change in numbering is required.
Which "breaking ABI changes" are you thinking about? Python doesn't guarantee any A*B*I (as opposed to API), unless you use Py_LIMITED_API which was introduced in 3.2. Regards Antoine.

On 1/17/2012 3:34 PM, Antoine Pitrou wrote:
To me, as I understand the proposal, the title is wrong. Our current feature releases already are long-term support versions. They get bugfix releases at close to 6 month intervals for 1.5-2 years and security fixes for 3 years. The only change here is that you propose, for instance, a fixed 6-month interval and 2 year period. As I read this, you propose to introduce a new short-term (interim, preview) feature release along with each bugfix release. Each would have all the bugfixes plus a preview of the new features expected to be in the next long-term release. (I know, this is not exactly how you spun it.)

There has been discussion on python-ideas about whether new features are or can be considered experimental, or whether there should be an 'experimental' package. An argument against is that long-term production releases should not have experimental features that might go away or have their APIs changed. If the short-term, non-production, interim feature releases were called preview releases, then some or all of the new features could be labelled experimental and subject to change. It might actually be good to have major new features tested in at least one preview release before being frozen. Maybe then more of the initial bugs would be found and repaired *before* their initial appearance in a long-term release. (All of this is not to say that experimental features should be casually changed or reverted without good reason.)

One problem, at least on Windows, is that short-term releases would almost never have compiled binaries for 3rd-party libraries. It already takes a while for them to appear for the current long-term releases. On the other hand, library authors might be more inclined to test new features, a few at a time, if part of tested preview releases, than if just in the repository. So the result *might* be quicker library updates after each long-term release. -- Terry Jan Reedy

On Tue, 17 Jan 2012 18:29:11 -0500 Terry Reedy <tjreedy@udel.edu> wrote:
Well, "spinning" is important here. We are not proposing any "preview" releases. These would have the same issue as alphas or betas: nobody wants to install them where they could disrupt working applications and libraries. What we are proposing are first-class releases that are as robust as any other (and usable in production). It's really about making feature releases more frequent, not making previews available during development. I agree "long-term" could be misleading as their support duration is not significantly longer than current feature releases. I chose this term because it is quite well-known and well-understood, but we could pick something else ("extended support", "2-year support", etc.).
That's orthogonal to this PEP. (that said, more frequent feature releases are also a benefit for the __preview__ proposal, since we could be more reactive changing APIs in that namespace)
One problem, at least on Windows, is that short-term releases would almost never have compiled binaries for 3rd-party libraries.
That's a good point, although Py_LIMITED_API will hopefully make things better in the middle term. Regards Antoine.

On 1/17/2012 6:42 PM, Antoine Pitrou wrote:
The main point of my comment is that the new thing you are introducing is not long-term supported versions but short term unsupported versions.
Well, "spinning" is important here. We are not proposing any "preview" releases. These would have the same issue as alphas or betas: nobody
I said nothing about quality. We aim to keep default in near-release condition and seem to be getting better. The new unicode is still getting polished a bit, it seems, after 3 months, but that is fairly unusual.
But I am dubious that releases that are obsolete in 6 months and lack 3rd party support will see much production use.
It's really about making feature releases more frequent, not making previews available during development.
Given the difficulty of making a complete Windows build, it would be nice to have one made available every 6 months, regardless of how it is labeled. I believe that some people will see and use good-for-6-months releases as previews of the new features that will be in the 'real', normal, bugfix-supported, long-term releases. Every release is a snapshot of a continuous process, with some extra effort made to tie up some (but not all) of the loose ends. -- Terry Jan Reedy

On 18 January 2012 04:32, Terry Reedy <tjreedy@udel.edu> wrote:
I'd love to see 6-monthly releases, including Windows binaries, and binary builds of all packages that needed a compiler to build. Oh, and a pony every LTS release :-) Seriously, this proposal doesn't really acknowledge the amount of work by other people that would be needed for a 6-month release to be *usable* in normal cases (by Windows users, at least). It's usually some months after a release on the current schedule that Windows binaries have appeared for everything I use regularly. I could easily imagine 3rd-party developers tending to only focus on LTS releases, making the release cycle effectively *slower* for me, rather than faster. Paul

PS Things that might help improve this: (1) Py_LIMITED_API, and (2) support in packaging for binary releases, including a way to force installation of a binary release on the "wrong" version (so that developers don't have to repackage and publish identical binaries every 6 months).

Am 18.01.2012 05:32, schrieb Terry Reedy:
That is really a matter of perspective. For the proposed cycle, there would be more regular version than LTS versions, so they are the exception and get the special name. (And at the same time, the name is already established and people probably grasp instantly what it means.)
Whether people would use the releases is probably something that only they can tell us -- that's why a community survey is mentioned in the PEP. Not sure what you mean by lacking 3rd party support.
Maybe they will. That's another thing that is made clear in the PEP: for one group of people (those preferring stability over long time), nothing much changes, except that the release period is a little longer, and there are these "previews" as you call them. Georg

On 18 January 2012 07:46, Georg Brandl <g.brandl@gmx.net> wrote:
The class of people who we need to consider carefully is those who want to use the latest release, but are limited by the need for other parties to release stuff that works with that release (usually, this means Windows binaries of extensions, or platform vendor packaged releases of modules/packages). For them, if the other parties focus on LTS releases (as is possible, certainly) the release cycle becomes slower, going from 18 months to 24.
Not sure what you mean by lacking 3rd party support.
I take it as meaning that the people who release Windows binaries on PyPI, and vendors who package up PyPI distributions in their own distribution format. Lacking support in the sense that these people might well decide that a 6 month cycle is too fast (too much work) and explicitly decide to focus only on LTS releases. Paul

On Wed, 18 Jan 2012 07:52:20 +0000 Paul Moore <p.f.moore@gmail.com> wrote:
Well, do consider, though, that anyone not using third-party C extensions under Windows (either Windows users that are content with pure Python libs, or users of other platforms) won't have that problem. That should be quite a lot of people already. As for vendors, they have their own release management independent of ours already, so this PEP wouldn't change anything for them. Regards Antoine.

Hi, On 17/01/2012 22.34, Antoine Pitrou wrote:
If non-LTS releases won't get bug fixes, a bug that is fixed in 3.3.x might not be fixed in 3.4, unless the bug fixes releases are synchronized with the new feature releases (see below).
If LTS bugfix releases and feature releases are synchronized, we will have something like:

3.3
3.3.1 / 3.4
3.3.2 / 3.5
3.3.3 / 3.6
3.7
3.7.1 / 3.8
...

so every new feature release will have all the bug fixes of the current LTS release, plus new features. With this scheme we will soon run out of 1-digit numbers though. Currently we already have a 3.x release every ~18 months, so if we keep doing that (just every 24 months instead of 18) and introduce the feature releases in between under a different versioning scheme, we might avoid the problem. This means:

3.1
... 18 months, N bug fix releases ...
3.2
... 18 months, N bug fix releases ...
3.3 LTS
... 24 months, 3 bug fix releases, 3 feature releases ...
3.4 LTS
... 24 months, 3 bug fix releases, 3 feature releases ...
3.5 LTS

In this way we solve the numbering problem and keep a familiar scheme (all the 3.x will be LTS and will be released at the same pace as before, no need to mark some 3.x as LTS). OTOH this will make the feature releases less "noticeable" and people might just ignore them and stick with the LTS releases. Also we would need to define a versioning convention for the feature releases.
Wouldn't it still be two? Bug fixes will go to the last LTS and on default, features only on default.
So here the difference is that instead of committing on the previous release (what currently is 3.2), we commit it to the previous LTS release, ignoring the ones between that and default.
That's why I proposed to keep the same versioning scheme for these releases, and have a different numbering for the feature releases.
This doesn't necessarily have to be fixed, especially if we don't change the versioning scheme (so we don't need to know that we have a LTS release every N releases).
* For given values of X and N, is the no-bugfix-releases policy for non-LTS versions feasible?
If LTS bug fix releases and feature releases are synchronized it should be feasible.
* Restrict new syntax and similar changes (i.e. everything that was prohibited by PEP 3003) to LTS versions?
(I was reading this the other way around, maybe rephrase it to "Allow new syntax and similar changes only in LTS versions")
* What is the effect on packagers such as Linux distributions?
* What is the effect on PyPy/Jython/IronPython? Can they just skip the feature releases and focus on the LTS ones?
This is not an issue with the scheme I proposed.
Best Regards, Ezio Melotti

On Tue, Jan 17, 2012 at 3:50 PM, Ezio Melotti <ezio.melotti@gmail.com> wrote:
* What is the effect on PyPy/Jython/IronPython? Can they just skip the feature releases and focus on the LTS ones?
At least for IronPython it's unlikely we'd be able to track the feature releases. We're still trying to catch up as it is. Honestly, I don't see the advantages of this. Are there really enough new features planned that Python needs a full release more than every 18 months? - Jeff

Am 18.01.2012 01:24, schrieb Jeff Hardy:
Yes, we think so. (What is a non-full release, by the way?) The main reason is changes in the library. We have been getting complaints about the standard library bitrotting for years now, and one of the main reasons it's so hard to a) get decent code into the stdlib and b) keep it maintained is that the release cycles are so long. It's a tough thing for contributors to accept that the feature you've just implemented will only be in a stable release in 16 months. If the stdlib does not get more reactive, it might just as well be cropped down to a bare core, because 3rd-party libraries do everything as well and do it before we do. But you're right that if Python came without batteries, the current release cycle would be fine. (Another, more far-reaching proposal, has been to move the stdlib out of the cpython repo and share a new repo with Jython/IronPython/PyPy. It could then also be released separately from the core. But this is much more work than the current proposal.) Georg

On Wed, Jan 18, 2012 at 6:55 PM, Georg Brandl <g.brandl@gmx.net> wrote:
I think this is the real issue here. The batteries in Python are so important because: 1) The stability and quality of 3rd party libraries is not guaranteed. 2) The mechanism used to obtain 3rd party libraries is not popular or considered reliable. Much of the "bitrot" is that standard library modules have been superseded by third-party ones of much higher functionality. Rather than importing these libraries, it needs to be trivial to obtain them. Putting some of these higher quality 3rd party modules into lock step with Python is an unpopular move, and hampers their future growth.

Am 18.01.2012 00:50, schrieb Ezio Melotti:
That's already the case today. 3.2.5 might be released before 3.3.1 and therefore include bugfixes that 3.3.0 doesn't. True, there will be a 3.3.1 afterwards that does include it, but in the new case, there will be a new feature release instead.
Let's see how Guido feels about 3.10 first.
"Maintenance" excludes the feature development branch here. Will clarify.
Yes.
For these relatively short times (X = 6 months), I feel it is important to fix the time spans to have predictability for our developers. Georg

Executive summary: My take is "show us the additional resources, and don't be stingy!" Sorry, Antoine, I agree with your goals, but I think you are too optimistic about the positive effects and way too optimistic about the costs. Antoine Pitrou writes:
This increases the demand for developer manpower somewhat.
availability of release management volunteers,
Dramatic increase here. It may look like RM is not so demanding -- run a few scripts to put out the alphas/betas/releases. But the RM needs to stay on top of breaking news, make decisions. That takes time, interrupts other work, etc.
ease of maintenance for users and third-party packagers,
Dunno about users, but 3rd party packagers will also have more work to do, or will have to tell their users "we only promise compatibility with LTS releases."
quick availability of new features (and behavioural changes),
These are already *available*, just not *tested*. Since testing is the bottleneck on what users consider to be "available for me", you cannot decrease the amount of testing (alpha, beta releases) by anywhere near the amount you're increasing frequency, or you're just producing "as is" snapshots. Percentage of time in feature freeze goes way up, features get introduced all at once just before the next release, schedule slippage is inevitable on some releases.
availability of bug fixes without pulling in new features or behavioural changes.
Sounds like a slight further increase in demand for RM, and as described a dramatic decrease in the bugfixing for throw-away releases.
The current release cycle errs on the conservative side.
What evidence do you have for that, besides people who aren't RMs wishing that somebody else would do more RM work?
Way optimistic IMO (theoretical, admitted, but I do release management for a less well-organized project, and I teach in a business school, FWIW).
The shorter feature freeze period (after the first beta build until the final release) is easier to accept.
But you need to look at total time in feature freeze over the LTS cycle, not just before each throw-away release.
The rush for adding features just before feature freeze should also be much smaller.
This doesn't depend on the length of time in feature freeze per release, it depends on the fraction of time in feature freeze over the cycle. Given your quality goals, this will go way up.

On Wed, 18 Jan 2012 11:37:08 +0900 "Stephen J. Turnbull" <stephen@xemacs.org> wrote:
Georg and Barry may answer you here: they are release managers and PEP co-authors.
The point is to *increase* the amount of testing by making features available in stable releases on a more frequent basis. Not decrease it. Alphas and betas never produce much feedback, because people are reluctant to install them for anything else than toying around. Python is not emacs or Firefox, you don't use it in a vacuum and therefore installing non-stable versions is dangerous. Regards Antoine.

Antoine Pitrou writes:
We're talking about different kinds of testing. You're talking about (what old-school commercial software houses meant by "beta") testing in a production or production prototype environment. I'd love to see more of that, too! My claim is that I don't expect much uptake if you don't do close to as many of what are called "alpha" and "beta" tests on python-dev as are currently done.
Exactly my point, except that the PEP authors seem to think that we can cut back on the number of alpha and beta prereleases and still achieve the stability that such users expect from a Python release. I don't think that's right. I expect that unless quite substantial resources (far more than "proportional to 1/frequency") are devoted to each non-LTS release, a large fraction of such users to avoid non-LTS releases the way they avoid betas now.

Le mercredi 18 janvier 2012 à 21:48 +0900, Stephen J. Turnbull a écrit :
You claim people won't use stable releases because of not enough alphas? That sounds completely unrelated. I don't know of any users who would bother about that. (you can produce flimsy software with many alphas, too)
Sure, and we think it is :) Regards Antoine.

Antoine Pitrou writes:
You claim people won't use stable releases because of not enough alphas? That sounds completely unrelated.
Surely testing is related to user perceptions of stability. More testing helps reduce bugs in released software, which improves user perception of stability, encouraging them to use the software in production. Less testing, then, will have the opposite effect. But you understand that theory, I'm sure. So what do you mean to say?
(you can produce flimsy software with many alphas, too)
The problem is the converse: can you produce Python-release-quality software with much less pre-release testing than current feature releases get?
Sure, and we think it is [possible to do that] :)
Given the relative risk of rejecting PEP 407 and me being wrong (the status quo really isn't all that bad AFAICS), vs. accepting PEP 407 and you being wrong, I don't find a smiley very convincing. In fact, I don't find the PEP itself convincing -- and I'm not the only one. We'll see what Barry and Georg have to say.

Le jeudi 19 janvier 2012 à 00:25 +0900, Stephen J. Turnbull a écrit :
I have asked a practical question, a theoretical answer isn't exactly what I was waiting for.
I don't care to convince *you*, since you are not involved in Python development and release management (you haven't ever been a contributor AFAIK). Unless you produce practical arguments, saying "I don't think you can do it" is plain FUD and certainly not worth answering to. Regards Antoine.

Antoine Pitrou wrote:
Pardon me, but people like Stephen Turnbull are *users* of Python, exactly the sort of people you DO have to convince that moving to an accelerated or more complex release process will result in a better product. The risk is that you will lose users, or fragment the user base even more than it is now with 2.x vs 3.x.

Quite frankly, I like the simplicity and speed of the current release cycle. All this talk about separate LTS releases and parallel language releases and library releases makes my head spin. I fear the day that people asking questions on the tutor or python-list mailing lists will have to say (e.g.) "I'm using Python 3.4.1 and standard library 1.2.7" in order to specify the version they're using. I fear change, because the current system works well and for every way to make it better there are a thousand ways to make it worse. Dismissing fears like this as FUD doesn't do anyone any favours.

One on-going complaint is that Python-Dev doesn't have the manpower or time to do everything that needs to be done. Bugs languish for months or years because nobody has the time to look at them. Will going to a more rapid release cycle give people more time, or just increase their workload? You're hoping that a more rapid release cycle will attract more developers, and there is a chance that you could be right; but a more rapid release cycle WILL increase the total work load. So you're betting that this change will attract enough new developers that the work load per person will decrease even as the total work load increases. I don't think that's a safe bet. -- Steven

Steven D'Aprano writes:
Well, to be fair, Antoine is right in excluding me from the user base he's trying to attract (as I understand it). I do not maintain products or systems that depend on Python working 99.99999% of the time, and in fact in many of my personal projects I use trunk. One of the problems with this kind of discussion is that the targets of the new procedures are not clear in everybody's mind, but all of us tend to use generic terms like "users" when we mean to discuss benefits or costs to a specific class of users.

On Thu, 19 Jan 2012 11:12:06 +1100 Steven D'Aprano <steve@pearwood.info> wrote:
Well, you might bring some examples here, but I haven't seen any project lose users *because* they switched to a faster release cycle (*). I don't understand why this proposal would fragment the user base, either. We're not proposing to drop compatibility or build Python 4. ((*) Firefox's decrease in popularity seems to be due to Chrome uptake, and their new release cycle is arguably in response to that)
Well, the PEP discussion might make your head spin, because various possibilities are explored. Obviously the final solution will have to be simple enough to be understood by anyone :-) (do you find Ubuntu's release model, for example, too complicated?)
Yeah, that's my biggest problem with Nick's proposal. Hopefully we can avoid parallel version schemes.
This is not something that we can find out without trying, I think. As Georg pointed out, the decision is easy to revert or amend if we find out that the new release cycle is unworkable. Regards Antoine.

On Thu, Jan 19, 2012 at 9:07 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
They're not really parallel - the stdlib version would fully determine the language version. I'm only proposing two version numbers because we're planning to start versioning *two* things (the standard library, updated every 6 months, and the language spec, updated every 18-24 months). Since the latter matches what we do now, I'm merely proposing that we leave its versioning alone, and add a *new* identifier specifically for the interim stdlib updates.

Thinking about it though, I've realised that the sys.version string already contains a lot more than just the language version number, so I think it should just be updated to include the stdlib version information, and the version_info named tuple could get a new 'stdlib' field as a string. That way, sys.version and sys.version_info would still fully define the Python version, we just wouldn't be mucking with the meaning of any of the existing fields. For example, the current:
might become:
for the maintenance release and:
for the stdlib-only update. Explicit-is-better-than-implicit'ly yours, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
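[Editor's note: the concrete version strings from Nick's message are missing from the archive. As a rough sketch only, the idea of a version_info-style tuple extended with a 'stdlib' field might look like the following; the field name and the string format of its value are illustrative assumptions, not the proposal's literal spelling.]

```python
import sys
from collections import namedtuple

# Hypothetical sketch: a version_info-like named tuple carrying an extra
# 'stdlib' field, as Nick suggests. Field name and value format are assumed.
ExtendedVersionInfo = namedtuple(
    "ExtendedVersionInfo",
    ["major", "minor", "micro", "releaselevel", "serial", "stdlib"],
)

# Today's real tuple fully defines the language version:
print(tuple(sys.version_info))

# Under the proposal, a stdlib-only update would bump only the new field:
maint = ExtendedVersionInfo(3, 3, 1, "final", 0, stdlib="3.3.1")
interim = ExtendedVersionInfo(3, 3, 1, "final", 0, stdlib="3.3.2")

# Language version unchanged, stdlib version advanced.
assert maint[:5] == interim[:5]
assert maint.stdlib != interim.stdlib
```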

Am 19.01.2012 01:12, schrieb Steven D'Aprano:
I can't help noticing that so far, worries about the workload came mostly from people who don't actually bear that load (this is no accusation!), while those that do are the proponents of the PEP... That is, I don't want to exclude you from the discussion, but on the issue of workload I would like to encourage more of our (past and present) release managers and active bug triagers to weigh in. cheers, Georg

Ok, so let me add then that I'm worried about the additional work load. I'm particularly worried about the coordination of vacation across the three people that work on a release. It might well not be possible to make any release for a period of two months, which, in a six-month release cycle with two alphas and a beta, might mean that we (the release people) would need to adjust our vacation plans to the release schedule, or else step down (unless you would release the "normal" feature releases as source-only releases). FWIW, it might well be that I can't be available for the 3.3 final release (I haven't finalized my vacation schedule yet for August). Regards, Martin

On Fri, Jan 20, 2012 at 9:54 AM, "Martin v. Löwis" <martin@v.loewis.de> wrote:
I must admit that aspect had concerned me as well. Currently we use the 18-24 month window for releases to slide things around to accommodate the schedules of the RM, Martin (Windows binaries) and Ned/Ronald (Mac OS X binaries). Before we could realistically switch to more frequent releases, something would need to change on the binary release side. Regards, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Thu, Jan 19, 2012 at 17:54, "Martin v. Löwis" <martin@v.loewis.de> wrote:
In the interest of not having Windows releases depend on one person, and having gone through building the installer myself (which I know is but one of the duties), I'm available to help should you need it.

On 20 January 2012 03:57, Brian Curtin <brian@python.org> wrote:
One thought comes to mind - while we need a PEP to make a permanent change to the release schedule, would it be practical in any way to do a "trial run" of the process, and simply aim to release 3.4 about 6 months after 3.3? Based on the experiences gained from that, some of the discussions around this PEP could be supported (or not :-)) with more concrete information. If we can't do that, then that says something about the practicality of the proposal in itself... The plan for 3.4 would need to be publicised well in advance, of course, but doing that as a one-off exercise might well be viable. Paul. PS I have no view on whether the proposal is a good idea or a bad idea from a RM point of view. That's entirely up to the people who do the work to decide, in my opinion.

Am 20.01.2012 00:54, schrieb "Martin v. Löwis":
Thanks for the reminder, Martin. Even with the current release schedule, I think that the load on you is too much, and we need a whole team of Windows release experts. It's not really fair that the RM usually changes from release to release (at least every 2), while you have to do the same work for every one. It looks like we have one volunteer already; if we find another, I think at least one of them will also not be on vacation at most times :) For the Mac, at least we're up to two experts, but I'd like to see a third there too. cheers, Georg

Am 18.01.2012 16:25, schrieb Stephen J. Turnbull:
"The status quo really isn't all that bad" applies to any PEP. Also, compared to most PEPs, it is quite easy to revert to the previous state of things if they don't work out as wanted.
In fact, I don't find the PEP itself convincing -- and I'm not the only one.
That is noted. And I think Antoine was a little harsh earlier; of course we also need to convince users that the new cycle is advantageous and not detrimental.
We'll see what Barry and Georg have to say.
Two things: a) The release manager's job is not as bad as you might believe. We have an incredibly helpful and active core of developers which means that the RM job is more or less "reduced" to pronouncing on changes during the rc phase, and actually producing the releases. b) I did not have the impression (maybe someone can underline that with tracker stats?) that there were a lot more bug reports than usual during the alpha and early beta stages of Python 3.2. Georg

Georg Brandl writes:
That depends on how "doesn't work out" plays out. If meeting the schedule *and* producing a good release regularly is just more work than expected, of course you're right. If you stick to the schedule with insufficient resources, and lack of testing produces a really bad release (or worse, a couple of sorta bad releases in succession), restoring Python's reputation for stability is going to be non-trivial.
I've done release management, and I've been watching Python do release management since PEP 263; I'm well aware that Python has a truly excellent process in place, and I regularly recommend studying it to friends interested in improving their own projects' processes. But I've also (twice) been involved (as RM) in a major revision of RM procedures, and both times it was a lot more work than anybody expected. Finally, the whole point of this exercise is to integrate a lot more stdlib changes (including whole packages) than in the past on a much shorter timeline, and to do it repeatedly. "Every six months" still sounds like a long time if you are a "leaf" project still working on your changes on your own schedule and chafing at the bit waiting to get them into the core project's releases, but it's actually quite short for the RM. I'm not against this change (especially since, as Antoine so graciously pointed out, I'm not going to be actually doing the work in the foreseeable future), but I do advise that the effort required seems to be dramatically underestimated.
Yeah, but the question for Python's stability reputation is "were there more than zero?" Every bug that gets through is a risk.

This won't be a surprise to Antoine or Georg (since I've already expressed the same opinion privately), but I'm -1 on the idea of official releases of the whole shebang every 6 months. We're not Ubuntu, Fedora, Chrome or Firefox with a for-profit company (or large foundation) with multiple paid employees kicking around to really drive the QA process. If we had official support from Red Hat or Canonical promising to devote paid QA and engineering resources to keeping things on track my opinion might be different, but that is highly unlikely. I'm also wholly in agreement with Ezio that using the same versioning scheme for both full releases and interim releases is thoroughly confusing for users (for example, I consider Red Hat's completely separate branding and versioning for Fedora and RHEL a better model for end users than Canonical's more subtle 'Ubuntu' and 'Ubuntu LTS' distinction, and that's been my opinion since long before I started working for RH). My original suggestion to Antoine and Georg for 3.4 was that we simply propose to Larry Hastings (the 3.4 RM) that we spread out the release cycle, releasing the first alpha after ~6 months, the second after about ~12, then rolling into the regular release cycle of a final alpha, some beta releases, one or two release candidates and then the actual release. However, I'm sympathetic to Antoine's point that early alphas aren't likely to be at all interesting to folks that would like a fully supported stdlib update to put into production and no longer think that suggestion makes much sense on its own. Instead, if the proposal involves instituting a PEP 3003 style moratorium (i.e. stdlib changes only) for all interim releases, then we're essentially talking about splitting the versioning of the core language (and the CPython C API) and the standard library. 
If we're going to discuss that, we may as well go a bit further and just split development of the two out onto separate branches, with the current numbering scheme applying to full language version releases and switching to a date-based versioning scheme for the standard library (i.e. if 3.3 goes out in August as planned, then it would be "Python 3.3 with the 12.08 stdlib release"). What might such a change mean?

1. For 3.3, the following releases would be made:

   - 3.2.x is cut from the 3.2 branch (1 rc + 1 release)
   - 3.3.0 + PyStdlib 12.08 is created from the default branch (1 alpha, 2 betas, 1+ rc, 1 release)
   - the 3.3 maintenance branch is created
   - the stdlib development branch is created

2. Once 3.2 goes into security-fix only mode, this would then leave us with 4 active branches:

   - 2.7 (maintenance)
   - 3.3 (maintenance)
   - stdlib (Python 3.3 compatible, PEP 3003 compliant updates)
   - default (3.4 development)

   The 2.7 branch would remain a separate head of development, but for 3.x development the update flow would become:

   - Bug fixes: 3.3 -> stdlib -> default
   - Stdlib features: stdlib -> default
   - Language changes: default

3. Somewhere around February 2013, we prepare to release Python 3.4a1 and 3.3.1, along with PyStdlib 13.02:

   - 3.3.1 + PyStdlib 12.08 is cut from the 3.3 branch (1 rc + 1 release)
   - 3.3.1 + PyStdlib 13.02 comes from the stdlib branch (1 alpha, 1 beta, 1+ rc, 1 release)
   - 3.4.0a1 comes from the default branch (may include additional stdlib changes)

4. Around August 2013 this process repeats:

   - 3.3.2 + PyStdlib 12.08 is cut from the 3.3 branch
   - 3.3.2 + PyStdlib 13.08 comes from the stdlib branch (final 3.3 compatible stdlib release)
   - 3.4.0a2 comes from the default branch

5. And then in February 2014, we gear up for a new major release:

   - 3.3.3 is cut from the 3.3 branch and the 3.3 branch enters security fix only mode
   - 3.4.0 + PyStdlib 14.02 is created from the default branch (1 alpha, 2 betas, 1+ rc, 1 release)
   - the 3.4 maintenance branch is created and merged into the stdlib branch

   (Alternatively, Feb 2014 could be another interim release of a 3.4 alpha and a 3.3 compatible stdlib update, with 3.4 delayed until August 2014.)

I believe this approach would get to the core of what the PEP authors want (i.e. more frequent releases of the standard library), while being quite explicit in *avoiding* the concerns associated with more frequent releases of the core language itself. The rate of updates on the language spec, the C API (and ABI), the bytecode format and the AST would remain largely unchanged at 18-24 months. Other key protocols (e.g. default pickle formats) could also be declared ineligible for changes in interim releases. If a critical security problem is found, then additional releases may be cut for the maintenance branch and for the stdlib branch.

There's a slight annoyance in having all development filtered through an additional branch, but there's a large advantage in that having a stable core in the stdlib branch makes it more likely we'll be able to use it as a venue for collaboration with the PyPy, Jython and IronPython folks (they all have push rights, and a separate branch means they can use it without having to worry about any of the core changes going on in the default branch). A separate branch with combined "3.x.y + PyStdlib YY.MM" releases is also significantly less work than trying to split the stdlib out completely into a separate repo. Regards, Nick.

Le mercredi 18 janvier 2012 à 21:26 +1000, Nick Coghlan a écrit :
It's a straight-forward way to track the feature support of a release. How do you suggest all these "sys.version_info >= (3, 2)" - and the corresponding documentation snippets a.k.a "versionadded" or "versionchanged" tags - be spelt otherwise?
It's not only branding and versioning, is it? They're completely different projects with different goals (and different commercial support). If you're suggesting we do only short-term releases and leave the responsibility of long-term support to another project or entity, I'm not against it, but it's far more radical than what we are proposing in the PEP :-)
Well, you're opposing the PEP on the basis that it's workforce-intensive, but you're proposing something much more workforce-intensive :-) Splitting the stdlib:

- requires someone to do the splitting (highly non-trivial given the interactions of some modules with interpreter details or low-level C code)
- requires setting up separate resources (continuous integration with N stdlib versions and M interpreter versions, for example)
- requires separate maintenance and releases for the stdlib (but with non-trivial interaction with interpreter maintenance, since they will affect each other and must be synchronized for Python to be usable at all)
- requires more attention by users, since there are now *two* release schedules and independent version numbers to track

The former two are one-time costs, but the latter two are recurring costs. Therefore, splitting the stdlib is much more complicated and involved than many people think; it's not just "move a few directories around and be done". And it's not even obvious it would have an actual benefit, since developers of other implementations are busy doing just that (see Jeff Hardy's message in this thread). Regards Antoine.

On Wed, Jan 18, 2012 at 10:30 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
Did you read what I actually proposed? I specifically *didn't* propose separate stdlib releases (for all the reasons you point out), only separate date based stdlib *versioning*. Distribution of the CPython interpreter + stdlib would remain monolithic, as it is today. Any given stdlib release would only be supported for the most recent language release. The only difference is that between language releases, where we currently only release maintenance builds, we'd *also* release a second version of each maintenance build with an updated standard library, along with an alpha release of the next language version (with the last part being entirely optional, but I figured I may as well make the suggestion, since I like the idea of encouraging syntax updates and the like to get out for earlier experimentation). When you initially pitched the proposal via email, you didn't include the "language moratorium applies to interim releases" idea. That one additional suggestion makes the whole concept *much* more appealing to me, but I only like it on the condition that we decouple the stdlib versioning from the language definition versioning (even though I recommend we only officially support very specific combinations of the two). My suggestion is really just a concrete proposal for implementing Ezio's idea of only bumping the Python version for releases with long term support, and using some other mechanism to distinguish the interim releases. So, assuming a 2 year LTS cycle, the released versions up to February 2015 with my suggestion would end up being:
If we have to make "brown paper bag" releases for the maintenance or stdlib branches, then the micro versions get bumped - the date based version of the standard library relates to when that particular *API* was realised, not when bugs were last fixed in it. If a target release date slips, then the stdlib version would be increased accordingly (cf. Ubuntu 6.06). Yes, we'd have an extra set of active buildbots to handle the stdlib branch, but a) that's no harder than creating the buildbots for a new maintenance branch, and b) the interim release proposal will need to separate language level changes from stdlib level changes *anyway*.

As far as how sys.version checks would be updated, I would propose a simple API addition to track the new date-based standard lib versioning: sys.stdlib_version. People could choose to just depend on a specific Python version (implicitly depending on the stdlib version that was originally shipped with that version of CPython), or they may instead decide to depend on a specific stdlib version (implicitly depending on the first Python version that was shipped with that stdlib).

The reason I like this scheme is that it allows us (and users) to precisely track the things that can vary at the two different rates. At least the following would still be governed by changes in the first two fields of sys.version (i.e. the major Python version):

- deprecation policy
- language syntax
- compiler AST
- C ABI stability
- Windows compilation suite and C runtime version
- anything else we decide to link with the Python language version (e.g. default pickle protocol)

However, the addition of date based stdlib versioning would allow us to clearly identify the new interim releases proposed by PEP 407 *without* mucking up all those things that are currently linked to sys.version and really *shouldn't* be getting updated every 6 months.
Users get a clear guarantee that if they follow the stdlib updates instead of the regular maintenance releases, they'll get nice new features along with their bug fixes, but no new deprecations or backwards incompatible API changes. However, they're also going to be obliged to transition to each new language release as it comes out if they want to continue getting security updates. Basically, what it boils down to is that I'm now +1 on the general proposal in the PEP, *so long as*:

1. We get a separate Hg branch for "stdlib only" changes, and default becomes the destination specifically for "language update" changes (with the latter being a superset of the former)
2. The proposed "interim releases" are denoted by a new date-based sys.stdlib_version field, and sys.version retains its current meaning (and slow rate of change)

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
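[Editor's sketch] The sys.stdlib_version_info checks proposed above might look something like the following. Note that sys.stdlib_version and sys.stdlib_version_info were never actually added to Python; the namedtuple shape and the helper function below are purely illustrative assumptions based on this thread.

```python
from collections import namedtuple

# Hypothetical: sys.stdlib_version_info never shipped in any Python
# release; this namedtuple simulates the proposed date-based value,
# e.g. (13, 2, 0) for the "13.02" stdlib release.
StdlibVersion = namedtuple("StdlibVersion", "year month micro")

def has_stdlib_feature(required, available):
    """Return True if the available date-based stdlib version is at
    least the required one (plain tuple comparison, mirroring how
    sys.version_info checks work today)."""
    return tuple(available) >= tuple(required)

# A program depending on the hypothetical 13.02 stdlib release:
print(has_stdlib_feature((13, 2, 0), StdlibVersion(13, 8, 0)))  # True
print(has_stdlib_feature((13, 2, 0), StdlibVersion(12, 8, 0)))  # False
```

The appeal of tuple comparison is that it matches the `sys.version_info >= (3, 2)` idiom developers already use, so nothing new would need to be learned beyond the extra attribute name.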

Nick Coghlan writes:
Typo? -> 3.4.0 + 14.08.0, right?
Python 3.4.1 + stdlib 15.02.0 (~February 2015)
It seems to me there could be considerable divergence between the stdlib code in
and
because 14.08.0a* will be targeting 3.4, and *should* use new language constructs and APIs where they are appropriate, while 13.02.0 ... 14.02.0 will be targeting the 3.3 API, and mustn't use them.

On Wed, Jan 18, 2012 at 09:08, Nick Coghlan <ncoghlan@gmail.com> wrote:
IOW we would have a language moratorium every 2 years (i.e. between LTS releases) while switching to a 6 month release cycle for language/VM bugfixes and full stdlib releases? I would support that as it has several benefits from several angles.
It also makes disruptive language changes less frequent, so people have more time to catch up, update books/docs, etc. We can also let them bake longer and we all get more experience with them. Doing a release every 6 months that includes updates to the stdlib and bugfixes to the language/VM also benefits other VMs by getting compatibility fixes in faster. All of the other VM maintainers have told me that keeping the stdlib non-CPython compliant is the biggest hurdle. This kind of switch means they could release a VM that supports a release 6 months or a year after a language change release (e.g. 1 to 2 releases in) so as to get changes in faster and lower the need to keep their own fork. It should also increase the chances of external developers of projects being willing to become core developers and contributing their projects to Python. If they get to keep a 6 month release cycle, we could consider pulling in projects like httplib2 and others that have resisted inclusion in the stdlib because of the painfully long (for them) wait between releases.
I don't think we need to do a new versioning scheme. Why can't we just say which releases are covered by a language moratorium? The community seemed to pick up on that rather well when we did it for Python 3, and I didn't see anyone having difficulty explaining it when someone didn't know what was going on. As long as we are clear which releases are under a language moratorium and which ones aren't, we shouldn't need to switch to a language + stdlib versioning scheme.

This will lead to us reaching Python 4 faster (in about 4 years), but even that doesn't need to be a big deal. Linux jumped from 2 to 3 w/o issue. Once again, as long as we are clear on which new versions have language changes, it should be clear as to what to expect. Otherwise I say we just bump the major version when we do a language-changing release (i.e. every 2 years) and just do a minor/feature number bump (i.e. every 6 months) when we add/change stuff in the stdlib. People can then be told "learn Python 4", which is easy to point out on docs, e.g. you won't have to go digging for what minor/feature release a book covers, just what major release, which will probably be emblazoned on the cover. And with the faster stdlib release schedule, other VMs can aim for X.N versions when they have all the language features *and* all of their compatibility fixes in the stdlib. And then once they hit that, they can just continue to support that major version by keeping up with minor releases with compatibility fixes (which buildbots can help guarantee).

And honestly, if we don't go with this I'm with Georg's comment in another email of beginning to consider stripping the stdlib down to core libraries to help stop the bitrot (sorry, Paul). If we can't attract new replacements for modules we can't ditch because of backwards compatibility, I start to wonder if I should even care about improving the stdlib outside of core code required to make Python simply function. -Brett

Am 18.01.2012 18:56, schrieb Brett Cannon:
That is certainly a possibility (it's listed as an open issue in the PEP).
Yes. In the end, the moratorium really was a good idea, and this would be carrying on the spirit.
Exactly! Georg

On Thu, Jan 19, 2012 at 7:31 AM, fwierzbicki@gmail.com <fwierzbicki@gmail.com> wrote:
Yes, with the addition of the idea of a PEP 3003 style language change moratorium for interim releases, I've been converted from an initial opponent of the idea (since we don't want to give the wider community whiplash) to a supporter (since some parts of the community, especially web service developers that deploy to tightly controlled environments, aren't well served by the standard library's inability to keep up with externally maintained standards and recommended development practices). It means PEP 407 can end up serving two goals:

1. Speeding up the rate of release for the standard library, allowing enhanced features to be made available to end users sooner.
2. Slowing down (slightly) the rate of release of changes to the core language and builtins, providing more time for those changes to filter out through the wider Python ecosystem.

Agreeing with those goals in principle then leaves two key questions to be addressed:

1. How would we have to update our development practices to make such a dual versioning scheme feasible?
2. How can we best communicate a new approach to versioning without unduly confusing developers that have built up certain expectations about Python's release cycle over the past 20+ years?

For the first point, I think having two active development branches (one for stdlib updates, one for language updates) will prove to be absolutely essential. Otherwise all language updates would have to be landed in the 6 month window between the last stdlib release for a given language version and the next language release, which seems to me a crazy way to go about things. As a consequence, I think we'd be obliged to do something to avoid conflicts on Misc/NEWS (this could be as simple as splitting it out into NEWS and NEWS_STDLIB, but if we're restructuring those files anyway, we may also want to do something about the annoying conflicts between maintenance releases and development releases).
That then leaves the question of how to best communicate such a change to the rest of the Python community. This is more a political and educational question than it is a technical one. A few different approaches have already been suggested:

1. I believe the PEP currently proposes just taking the "no more than 9" limit off the minor version of the language. Feature releases would just come out every 6 months, with every 4th release flagged as a language release. This could even be conveyed programmatically by offering "sys.lang_version" and "sys.lang_version_info" attributes that define the *language* version of a given release - 3.3, 3.4, 3.5 and 3.6 would all have something like sys.lang_version == '3.3', and then in 3.7 (the next language release) it would be updated to say sys.lang_version == '3.7'. This approach would require that some policies (such as the deprecation cycle) be updated to refer to changes in the language version (sys.lang_version) rather than changes in the stdlib version (sys.version). I don't like this scheme because it tries to use one number (the minor version field) to cover two very different concepts (stdlib updates and language updates). While technically feasible, this is unnecessarily obscure and confusing for end users.

2. Brett's alternative proposal is that we switch to using the major version for language releases and the minor version for stdlib releases. We would then release 3.3, 3.4, 3.5 and 3.6 at 6 month intervals, with 4.0 then being released in August 2014 as a new language version. Without taking recent history into account, I actually like this scheme - it fits well with traditional usage of major.minor.micro version numbering. However, I'm not confident that the "python" name will refer to Python 3 on a majority of systems by 2014, and accessing Python 4.0 through the "python3" name would just be odd. It also means we lose our ability to signal to the community when we plan to make a backwards incompatible language release (making the assumption that we're never going to want to do that again would be incredibly naive). On a related note, we'd also be setting ourselves up to have to explain to everyone that "no, no, Python 3 -> 4 is like upgrading from Python 3.2 -> 3.3, not 2.7 -> 3.2". I expect the disruptions of the Python 3 transition will still be fresh enough in everyone's mind at that point that we really shouldn't go there if we don't have to.

3. Finally, we get to my proposal: that we just leave sys.version and sys.version_info alone. They will still refer to Python language versions; the micro release will be incremented every 6 months or so, the minor release once every couple of years to indicate a language update, and the major release every decade or so (if absolutely necessary) to indicate the introduction of backwards incompatibilities. All current intuitions and expectations regarding the meaning of sys.version and sys.version_info remain completely intact. However, we would still need *something* to indicate that the stdlib has changed in the interim releases. This should be a monotonically increasing value, but should also be clearly distinct from the language version. Hence my proposal of a date based sys.stdlib_version and sys.stdlib_version_info. That way, nobody has to *unlearn* anything about current Python development practices and policies. Instead, all people have to do is *learn* that we now effectively have two release streams: a date-based release stream that comes out every 6 months (described by sys.stdlib_version) and an explicitly numbered release stream (described by sys.version) that comes out every 24 months. So in August this year, we would release 3.3+12.08, followed by 3.3+13.02, 3.3+13.08 and 3.3+14.02 at 6 month intervals, and then the next language release as 3.4+14.08.
If someone refers to just Python 3.3, then the "at least stdlib 12.08" is implied. If they refer to Python stdlib 12.08, 13.02, 13.08 or 14.02, then it is the dependency on "Python 3.3" that is implied. Two different rates of release -> two different version numbers. Makes sense to me. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
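[Editor's sketch] The combined "3.3+12.08" labels described above could be handled mechanically. The sketch below assumes that exact "lang+stdlib" label format, which is only a notation used in this thread and was never adopted by Python.

```python
# Sketch: split a hypothetical combined release label such as
# "3.3+12.08" into (language version, stdlib version) tuples.
# The "+" separator is an assumption based on the notation in
# this message; Python never shipped such labels.
def parse_release(label):
    lang, _, stdlib = label.partition("+")
    lang_info = tuple(int(part) for part in lang.split("."))
    stdlib_info = (tuple(int(part) for part in stdlib.split("."))
                   if stdlib else None)
    return lang_info, stdlib_info

print(parse_release("3.3+12.08"))  # ((3, 3), (12, 8))
print(parse_release("3.4+14.08"))  # ((3, 4), (14, 8))
print(parse_release("3.3"))        # ((3, 3), None) - stdlib implied
```

The last case mirrors the convention stated in the message: naming just "Python 3.3" leaves the stdlib version implied.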

On Thu, 19 Jan 2012 11:03:15 +1000 Nick Coghlan <ncoghlan@gmail.com> wrote:
With the moratorium suggestion factored in, yes. The PEP insists on support duration rather than the breadth of changes, though. I think that's a more important piece of information for users. (you don't care whether or not new language constructs were added, if you were not planning to use them)
As an end user I wouldn't really care whether a release is "stdlib changes only" or "language/builtins additions too" (especially in a language like Python where the boundaries are somewhat blurry). I think this distinction is useful mainly for experts and therefore not worth complicating version numbering for.
The main problem I see with this is that Python 3 was a big disruptive event for the community, and calling a new version "Python 4" may make people anxious at the prospect of compatibility breakage. Instead of spending some time advertising that "Python 4" is a safe upgrade, perhaps we could simply call it "Python 3.X+1"? (and, as you point out, keep "Python X+1" for when we want to change the language in incompatible ways again)
If I were a casual user of a piece of software, I'd really find such a numbering scheme complicated and intimidating. I don't think most users want such a level of information. Regards Antoine.

On Thu, Jan 19, 2012 at 9:17 PM, Antoine Pitrou <solipsis@pitrou.net> wrote:
I think the ideal numbering scheme from a *new* user point of view is the one Brett suggested (where major=language update, minor=stdlib update), but (as has been noted) there are solid historical reasons we can't use that. While I still have misgivings, I'm starting to come around to the idea of just allowing the minor release number to increment faster (Barry's co-authorship of the PEP, suggesting he doesn't see such a scheme causing any problems for Ubuntu, is a big factor in that). I'd still like the core language version to be available programmatically, though, and I'd like the PEP to consider displaying it as part of sys.version and using it to allow things like having bytecode compatible versions share bytecode files in the cache. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Jan 19, 2012, at 12:17 PM, Antoine Pitrou wrote:
s/was/is/ The Python 3 transition is ongoing, and Guido himself at the time thought it would take 5 years. I think we're making excellent progress, but there are still occasional battles just to convince upstream third party developers that supporting Python 3 (let alone *switching* to Python 3) is even worth the effort. I think we're soon going to be at a tipping point where not supporting Python 3 will be the minority position. Even if a hypothetical Python 4 were completely backward compatible, I shudder at the PR nightmare that would entail. I'm not saying there will never be a time for Python 4, but I sure hope it's far enough in the future that you youngun's will be telling us about it in the Tim Peters Home for Python Old Farts, where we'll smile blankly, bore you again with stories of vinyl records, phones with real buttons, and Python 1.6.1 while you feed us our mush under chronologically arranged pictures of BDFLs Van Rossum, Peterson, and Van Rossum. -Barry

Brett Cannon wrote:
Do we have any evidence of this alleged bitrot? I spend a lot of time on the comp.lang.python newsgroup and I see no evidence that people using Python believe the standard library is rotting from lack of attention. I do see people having trouble with installing third party packages. I see that stripping back the standard library and forcing people to rely more on external libraries will hurt, rather than help, the experience they have with Python. -- Steven

On Thu, Jan 19, 2012 at 10:19 AM, Steven D'Aprano <steve@pearwood.info> wrote:
IMO, it's a problem mainly with network (especially web) protocols and file formats. It can take the stdlib a long time to catch up with external developments due to the long release cycle, so people are often forced to switch to third party libraries that better track the latest versions of relevant standards (de facto or otherwise). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On 1/18/2012 8:06 PM, Nick Coghlan wrote:
Some of those modules are more than 2 years out of date, and I guess what Brett is saying is that the people interested and able to update them will not do so in the stdlib, because they want to be able to push out feature updates whenever they are needed and available, and not be tied to a slow release schedule. Moreover, since the external standards will continue to evolve for the foreseeable future, the need to track them more quickly will also continue. We could relax the ban on new features in micro releases, designate such modules as volatile, and let them get new features in each x.y.z release. In a sense, this would be less drastic than inventing a new type of release. Code can require an x.y.z release, as it must if it depends on a bug fix not in x.y.0.

I also like the idea of stretching out the alpha release cycle. I would like to see 3.3.0a1 appear along with 3.2.3 (in February?). If alpha releases are released with all buildbots green, they are as good, at least with respect to old features, as a corresponding bugfix release. All releases will become more dependable as test coverage improves. Again, this idea avoids inventing a new type of release with new release designations. I think one reason people avoid alpha releases is that they so quickly become obsolete. If one sat for 3 to 6 months, it might get more attention. As for any alpha stigma, we should emphasize that alpha only means not feature frozen. -- Terry Jan Reedy

Nick Coghlan <ncoghlan@gmail.com> wrote:
I'm not sure how much of a problem this really is. I continually build fairly complicated systems with Python that do a lot of HTTP networking, for instance. It's fairly easy to replace use of the standard library modules with use of Tornado and httplib2, and I wouldn't think of *not* doing that. But the standard modules are there, out-of-the-box, for experimentation and tinkering, and they work in the sense that they pass their module tests. Are those standard modules as "Internet-proof" as some commercially-supported package with an income stream that supports frequent security updates would be? Perhaps not. But maybe that's OK. Another way of doing this would be to "bless" certain third-party modules in some fashion short of incorporation, and provide them with more robust development support, again, "somehow", so that they don't fall by the wayside when their developers move on to something else, but are still able to release on an independent schedule. Bill

On Jan 19, 2012 9:28 AM, "Bill Janssen" <janssen@parc.com> wrote:
This is starting to sound a little like the discussion about the __preview__ / __experimental__ idea. If I recall correctly, one of the points is that for some organizations getting a third-party library approved for use is not trivial. In contrast, inclusion in the stdlib is like a free pass, since the organization can rely on the robustness of the CPython QA and release processes.

As well, there is at least a small cost with third-party libraries for those that maintain more rigorous configuration management. In contrast, there is basically no extra cost with new/updated stdlib, beyond upgrading Python.

-eric

Hi,

One of the main sticking points over possible fixes for the hash-collision security issue seems to be a fear that changing the iteration order of a dictionary will break backwards compatibility. The order of iteration has never been specified. In fact not only is it arbitrary, it cannot be determined from the contents of a dict alone; it may depend on the insertion order. Changing a hash function is not the only change that will change the iteration order; any of the following will also do so:

* Changing the minimum size of a dict.
* Changing the load factor of a dict.
* Changing the resizing policy of a dict.
* Sharing of keys between dicts.

By treating iteration order as part of the API we are effectively ruling out ever making any improvements to the dict. For example, my new dictionary implementation https://bitbucket.org/markshannon/hotpy_new_dict/ reduces memory use by 47% for gcbench, and by about 20% for the 2to3 benchmark, on my 32-bit machine. (Nice graphs: http://tinyurl.com/7qd2nnm http://tinyurl.com/6uqvl2x )

The new dict implementation (necessarily) changes the iteration order and will break code that relies on it. If dict iteration order is to be treated as part of the API (and I think that is a very bad idea) then it should be documented, which will be difficult since it is barely deterministic. This will also be a major problem for PyPy, Jython and IronPython, as they will have to reimplement their dicts.

So, don't be afraid to change that hash function :)

Cheers,
Mark
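[Mark's point that iteration order cannot be determined from a dict's contents alone is easy to demonstrate. In current CPython (3.7 and later) dicts happen to preserve insertion order, which makes the dependence on history explicit; a minimal sketch:]

```python
# Two dicts that compare equal but were built in different orders.
d1 = {"a": 1, "b": 2, "c": 3}
d2 = {"c": 3, "b": 2, "a": 1}

assert d1 == d2      # equal as mappings...
print(list(d1))      # ...yet iteration order reflects insertion
print(list(d2))      # history, not just the contents
```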

On Fri, Jan 20, 2012 at 5:49 AM, Mark Shannon <mark@hotpy.org> wrote:
So, don't be afraid to change that hash function :)
Definitely. The hash function *has* been changed in the past, and a lot of developers were schooled in not relying on the iteration order. That's a good thing, as those developers now write tests of what's actually important rather than relying on implementation details of the Python runtime. A hash function that changes more often than during an occasional major version update will encourage more developers to write better tests. We can think of it as an educational tool. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> "A person who won't read has no advantage over one who can't read." --Samuel Langhorne Clemens
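[Fred's advice translates into tests that compare contents rather than iteration order. A minimal sketch of the pattern; `summarize` is a hypothetical function under test:]

```python
def summarize(counts):
    """Hypothetical function under test: returns (key, value) pairs
    in whatever order the dict happens to iterate."""
    return [(k, v) for k, v in counts.items()]

# Fragile: asserting an exact pair order depends on dict iteration
# order, an implementation detail of the runtime.
#   assert summarize({"a": 1, "b": 2}) == [("a", 1), ("b", 2)]

# Robust: compare order-insensitively, since ordering is not part
# of the contract being tested.
result = summarize({"a": 1, "b": 2})
assert sorted(result) == [("a", 1), ("b", 2)]
assert dict(result) == {"a": 1, "b": 2}
```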

On Fri, Jan 20, 2012 at 8:49 PM, Mark Shannon <mark@hotpy.org> wrote:
So, don't be afraid to change that hash function :)
Changing it for 3.3 isn't really raising major concerns: the real concern is with changing it in maintenance and security patches for earlier releases. Security patches that may break production applications aren't desirable, since it means admins have to weigh up the risk of being affected by the security vulnerability against the risk of breakage from the patch itself. The collision counting approach was attractive because it looked like it might offer a way out that was less likely to break deployed systems. Unfortunately, I think the point Martin raised about just opening a new (even more subtle) attack vector kills that idea dead. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
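[Hash randomization along these lines did eventually ship (opt-in at first, and on by default in Python 3.3, controlled by the PYTHONHASHSEED environment variable). A minimal sketch showing that string hashes are only reproducible for a fixed seed; the string 'abc' is arbitrary:]

```python
import os
import subprocess
import sys

def hash_with_seed(seed, value="abc"):
    """Run a child interpreter with a fixed PYTHONHASHSEED and
    report hash(value) from that process."""
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.run(
        [sys.executable, "-c", f"print(hash({value!r}))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# With the same seed, string hashes are reproducible across runs...
assert hash_with_seed(1) == hash_with_seed(1)
# ...but different seeds generally give different hashes, which is
# exactly what breaks code that relied on dict iteration order.
print(hash_with_seed(1), hash_with_seed(2))
```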

On Wed, Jan 18, 2012 at 09:26:19PM +1000, Nick Coghlan wrote:
This looks like a 'good bridge' suggestion between rapid releases and stable releases. What would be the purpose of the alpha release? Would we encourage people to use it or to test it? With the rapid release cycle, the encouragement is to use rather than test. -- Senthil

On Tuesday, January 17, 2012, Antoine Pitrou <solipsis@pitrou.net> wrote:
As a Gentoo packager, this would mean much more work for us, unless all the non-LTS releases promised to be backwards compatible. I.e. the hard part for us is managing all the incompatibilities in other packages' compatibility with Python.

As a user of Python, I would rather dislike the change from 18 to 24 months for LTS release cycles. And the limiting factor for my use of Python features is largely old Python versions still in use, not the availability of newer features in the newest Python. So I'm much more interested in finding ways of improving 2.7/3.2 uptake than adding more feature releases.

I also think that it would be sensible to wait with something like this process change until the 3.x adoption curve is much further along.

Cheers, Dirkjan

Hello Dirkjan, On Wed, 18 Jan 2012 18:32:22 +0100 Dirkjan Ochtman <dirkjan@ochtman.nl> wrote:
It might need to be spelt clearly in the PEP, but one of my assumptions is that packagers choose on what release series they want to synchronize. So packagers can synchronize on the LTS releases if it's more practical for them, or if it maps better to their own release model (e.g. Debian). Do you think that's a valid answer to Gentoo's concerns?
So I'm much more interested in finding ways of improving 2.7/3.2 uptake than adding more feature releases.
That would be nice as well, but I think it's orthogonal to the PEP. Besides, I'm afraid there's not much we (python-dev) can do about it. Some vendors (Debian, Redhat) will always lag behind the bleeding-edge feature releases. Regards Antoine.
participants (23)

- "Martin v. Löwis"
- Antoine Pitrou
- Barry Warsaw
- Bill Janssen
- Brett Cannon
- Brian Curtin
- Dirkjan Ochtman
- Eric Snow
- Ezio Melotti
- Fred Drake
- fwierzbicki@gmail.com
- Georg Brandl
- Jeff Hardy
- Mark Shannon
- Matt Joiner
- Nick Coghlan
- Paul Moore
- Senthil Kumaran
- Stephen J. Turnbull
- Stephen J. Turnbull
- Steven D'Aprano
- Terry Reedy
- Tim Golden