
On Sat, 24 Oct 2009 04:30:36 pm Stephen J. Turnbull wrote:
> It seems to me that what Guido is heading for here is very similar to the "punctuated equilibrium" concept (associated with the evolutionary biologist Stephen Jay Gould, the wikipedia article is pretty good, and fairly short).
I argue that you've got it backwards. Python has been relatively stable in the ways that matter for almost all of its history. Nearly all the changes to the language in 1.x and 2.x have either been backwards-compatible, or managed carefully with a deprecation schedule or __future__. The result is that code written for Python 1.5, and possibly older, will still work in Python 2.6, and it's easy enough to ignore new features if you need to support an older version.

This does not apply to 3.x. If you want to see punctuated equilibrium in software development, the change from 2.x to 3.x is a good example: a relatively large change made in a non-backwards-compatible way, with no gentle migration path. Supporting 3.x is all-or-nothing: you can't support 3.1 and 2.6 with the same code base, except possibly for the most trivial code.

I suggest that the slow uptake of 3.x isn't caused by too many changes to the core, but by three factors:

(1) There's no gentle migration path from 2.x to 3.x in the way there has been from each version to the next throughout 1.x and 2.x. Instead you've got a discontinuous change. Library maintainers have to choose between:

- support 2.x only
- support 3.x only
- maintain two incompatible code bases.

The path of least effort is to support 2.x only, because they're already doing that.

(2) For many people, 3.x doesn't offer any obvious compelling advantage over 2.x. (Of course, your mileage may vary.) Reading the What's New for 3.0, I see *many* nice features and cleanups, and plenty of Nice To Haves, but no obvious Must Have to encourage me to migrate.

http://docs.python.org/dev/3.0/whatsnew/3.0.html

(3) It's the bootstrap problem: most systems that ship with CPython still ship with 2.x, and will continue to do so until people are regularly using 3.x, but people generally use whatever their system ships with.

I don't see that a moratorium on new features will help with any of these issues.
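As a rough sketch of the two situations, assuming a stock Python 2.6 interpreter (the snippets and version details below are my own illustration, not from the quoted messages): the __future__ imports show the gradual, opt-in path the 2.x series used, while the constructs at the end are ordinary 2.6 code that 3.1 rejects outright, which is what rules out a shared code base for anything non-trivial.

# A minimal sketch, assuming a Python 2.6 interpreter.

# The gentle path: __future__ lets 2.x code opt in to 3.x behaviour
# one feature at a time, so the same module keeps working across releases.
from __future__ import division        # 1/2 becomes 0.5, as in 3.x
from __future__ import print_function  # print() as a function, as in 3.x

print(1 / 2)   # 0.5 under 2.6 (with the import above) and under 3.x

# The discontinuous path: each construct below is ordinary 2.6 code,
# but a SyntaxError or AttributeError under 3.1, so it cannot live in
# a code base shared with 3.x.
text = u"caf\xe9"              # u"" string literals were removed in 3.0/3.1
try:
    int("not a number")
except ValueError, err:        # "except E, err" is 2.x-only syntax
    pass

table = {"a": 1}
for key, value in table.iteritems():   # dict.iteritems() is gone in 3.x
    pass

Under 2.6 that file runs to completion; under 3.1 it doesn't even compile, which is the all-or-nothing choice described above.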
> In software, it may make sense to have the stable periods be *much* more stable,
The principle of "release early, release often" argues against that claim. The 2.x series has been very successful. Why change a successful procedure for one which is unproven at best and which, at worst, condemns the language to the (unfair) label "moribund"?

I don't believe that the experience of the C language is relevant. The C *standard* was stable, but actual C compilers evolved like mad, providing non-standard features. Because C is so low-level, the difference between a built-in and a library is very slight.
> But it makes sense to propose to compress the evolution into short periods with many changes,
As has happened in the change to Python 3.x.
> and have very stable periods of "moratorium" between.
We keep coming back to this idea that the volume of change in a language is, in and of itself, harmful. I dispute that. It is *incompatible* change that is harmful. Developers have been slow to move to 3.x not because it's different from 2.x, but because it is different in inconveniently incompatible ways.

There have been some major new features in the 2.x series, e.g. new-style classes, generators and decorators. As far as I can tell, those versions didn't suffer from the lack of uptake that 3.x has suffered. It's not new features that frighten developers off, but incompatible change.

--
Steven D'Aprano
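For contrast, a small sketch (my own, with illustrative names) of the kind of opt-in 2.x additions mentioned above; nothing in it invalidates pre-existing code, which is what made those releases easy to absorb:

# A minimal sketch of opt-in 2.x additions; modules written before
# these existed keep running unchanged alongside code that uses them.

def countdown(n):
    # A generator (a 2.x addition): lazily yields n, n-1, ..., 1.
    while n > 0:
        yield n
        n -= 1

def trace(func):
    # A decorator (syntax added in 2.4): wraps a function to log calls.
    def wrapper(*args, **kwargs):
        print("calling %s%r" % (func.__name__, args))
        return func(*args, **kwargs)
    return wrapper

@trace
def add(a, b):
    return a + b

class Point(object):
    # A new-style class (2.2): inheriting from object opts in to the
    # unified type model, properties and descriptors.
    def __init__(self, x, y):
        self.x, self.y = x, y

print(list(countdown(3)))   # [3, 2, 1]
print(add(1, 2))            # logs the call, then prints 3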