Personally, I hate the way Linux releases are numbered (I can never tell which one is stable and which one isn't). But I could get used to it if we used the micro version number to indicate stability -- in particular, 2.2 would be experimental, and 2.2.1 and following would be stable; 2.3 would be experimental, and 2.3.1 stable.
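The proposed convention could be sketched in a few lines (the `stability` helper below is hypothetical, just to make the rule concrete: micro version absent or 0 means experimental, micro >= 1 means stable):

```python
def stability(version):
    """Classify a release string like '2.2' or '2.2.1' under the
    proposed scheme: X.Y is experimental, X.Y.k (k >= 1) is stable."""
    parts = version.split(".")
    micro = int(parts[2]) if len(parts) > 2 else 0
    return "stable" if micro >= 1 else "experimental"

print(stability("2.2"))    # experimental
print(stability("2.2.1"))  # stable
print(stability("2.3"))    # experimental
```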
How are "experimental" and "stable" defined? 2.2 was as stable as we could make it with 7(!) full pre-releases strung out over half a year. 2.2.1 has fewer bugs, but it would be extremely optimistic to believe more surprises aren't lurking in, e.g., the type/class dark corners. So is 2.2.1 "stable"? Whatever that means, intuitively I doubt it will be as stable as 2.1.2.
So long as we don't have people testing Python full-time (i.e., quite possibly forever), Python history says relatively few people will bother to try a pre-release, so lots of bugs have no hope of getting caught before an i.j.0 release.
... Or we could stay longer in beta.
I don't think it would help much -- just a few days after initial release of an alpha or beta, downloads go way down.
In practice, Python releases get a "street rep" that's not hard to pick up from c.l.py traffic. For example, several people have independently recommended 2.1.2 as "the most stable" version of Python currently available, and I expect 2.2.1 will still be viewed as bleeding edge.
Since there's no effective way to get wide testing of pre-releases (btw, I don't believe Linux shares this problem), there's no real way to judge a release's perceived stability until after it's been released. This makes a priori stability-number schemes "a problem" for us.
Well, instead of calling 2.3 "2.3", we could call it 2.3.1. Then release 2.3.2, 2.3.3, ..., until consensus appears that 2.3.k is the most stable version of Python available. At that point we could re-release 2.3.k under the name 2.3 <0.9 wink>.
microsoft-is-still-the-most-successful-software-vendor-in-the-solar-system-ly y'rs - tim