Because there typically won't be sufficient testing and release infrastructure to allow arbitrary bug fixes to be committed on the branch. The buildbots are turned off, nobody tests the release candidate, and no Windows binaries are provided; thus, the chances are very high that a bug-fix release for some very old branch will be *worse* than the previous release, rather than better.
Why is that qualitatively different from a security fix? All the same conditions apply.
No. The problem being fixed is completely different. For a security fix, it is typically fairly obvious what the bug being fixed is (in particular, if you look at the recent ones dealing with overflows): the interpreter crashes without the patch, and stops crashing (but raises an exception instead) with the patch.
For regular bug fixes, it is much more difficult to see whether the behavior being changed was a bug. They typically "merely" change behavior A to behavior B, along with a claim that behavior A is a bug and behavior B is correct. Even if that is true, there is still a chance that applications relied on behavior A and will break. OTOH, for an interpreter crash, it is highly unlikely that existing applications rely on the crash.
For example, in the 2.4 branch, one of the patches I rolled back was r53001. This applies string.strip to handler names during the lookup of logging handlers. It might be "better" to do that (and perhaps it even matches the documentation), but still, it might break applications which had leading or trailing spaces in their handler names.
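To illustrate the kind of breakage meant here, the following is a minimal sketch of a name-based handler lookup, not the actual logging module code; the table contents and function names are hypothetical. Stripping whitespace during lookup changes which keys match:

```python
# Hypothetical sketch: an application registered a handler under a
# name that happens to contain surrounding whitespace.
handlers = {" console ": object()}

def lookup_old(name, table):
    # Pre-r53001 behavior (sketch): the name is used verbatim.
    return table.get(name)

def lookup_new(name, table):
    # Post-r53001 behavior (sketch): whitespace is stripped first.
    return table.get(name.strip())

print(lookup_old(" console ", handlers) is not None)  # True: verbatim match
print(lookup_new(" console ", handlers) is not None)  # False: "console" was never registered
```

So even a fix that arguably brings the code in line with the documentation silently breaks an application that relied on the old, verbatim matching.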
That's not necessary. When I made 2.3.7 and 2.4.5, I went through the complete log, and posted a list of patches that I wanted to revert. This was little effort, and I'm sure it would have been even less effort if people had known that 2.4.x is a closed branch.
I'm glad it wasn't much effort. Would you propose using technological means to close the branch?
They are still open for security patches (well, 2.4 is; under my proposed policy, 2.3 isn't anymore). If people think it's desirable, we could rename the branch, or we could enforce a certain keyword (e.g. "security") in the commit messages.
Again, I don't think that's qualitatively much different for security patches. We may manually test them, inspect them, rely on vendors to have tested them, but they don't go through the Q/A process we enforce for our active branches.
Due to the reliance on inspection, it is *particularly* important that there are only a few of them, and that those are all local.
Would a policy of security-patches-only have any effect on vendors sharing fixes with us? By that I mean, if 2.4 were open to non-security patches, would they be more or less willing to feed them upstream, where we could, if someone were motivated, port them forward?
I do think that vendors will continue to provide patches. They want to get rid of them eventually to reduce their overhead, and it doesn't really matter that much whether they get rid of them for the current branch (as they have already done all the hard work there). The effort grows when you need to forward-port a patch, in which case you do want to contribute it upstream (in the hope of at least partially offloading the effort to a regular Python contributor).
Let me emphasize that I'm not suggesting our committers do this. I'm suggesting that if a specific committer is motivated to fix a non-security bug in an older release, they would have to accept this responsibility. Maybe it'll never happen because no one really cares enough. But a policy against it would /prevent/ it even if there was motivation to do it.
I don't like the arbitrariness that this will produce.
I think this is an illusion. When did you last commit something to the trunk, and forward-ported it to the 3.0 branch? When did you last run "svnmerge avail"? Porting patches between 2.6 and 3.0 is anything but trivial.
I'll concede that it's very difficult.
Indeed. I just added s* to both 2.6 and 3.0, and it took me two days to port it from 2.6 to 3.0 (just because 3.0 was using the buffer interface in so many more places).
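For context on why that port was painful: the s* format unit in PyArg_ParseTuple fills a Py_buffer, so a C function declared with it accepts any object supporting the buffer protocol, not just plain strings. A minimal illustration from the Python side, using hashlib merely as an example of a C-level API that accepts buffer-protocol objects (this is not code from the patch itself):

```python
import hashlib

# A C function that parses its argument into a Py_buffer accepts any
# buffer-providing object; all three of these hash identically.
data = b"hello"
digests = {
    hashlib.sha1(data).hexdigest(),
    hashlib.sha1(bytearray(data)).hexdigest(),
    hashlib.sha1(memoryview(data)).hexdigest(),
}
print(len(digests))
```

In 3.0 far more core APIs went through the buffer interface, which is why each call site touched by the s* change needed individual attention.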