As an observer and user—

It may be worth asking the Rust team what the main pain points are in coordinating and managing their releases.

Some context for those unfamiliar: Rust uses a Chrome- or Firefox-like release train, with stable and beta releases every six weeks. Each release cycle includes both the compiler and the standard library. They use feature flags on "nightly" (the master branch) and cut release branches for what actually gets shipped in each release. This has the advantage of letting new features and functionality ship whenever they're ready, rather than waiting for big-bang releases. And because that process is paired with strong commitments to stability and backwards compatibility, it hasn't led to any substantial breakage along the way, either.

There is also some early discussion of how they might add LTS releases into that mix.

The Rust standard library is currently bundled into the same repository as the compiler. Although the stdlib is being modularized and somewhat decoupled from the compiler, I don't believe they intend to separate it from the compiler repository or release process along the way (not least because there's no need to further speed up their release cadence!).

None of that is meant to suggest Python adopt that specific cadence (though I have found it quite nice), but simply to observe that the Rust team might have useful info on upsides, downsides, and particular gotchas as Python considers changing its own release process.

Regards,
Chris Krycho

On Jul 3, 2016, at 16:22, Brett Cannon <brett@python.org> wrote:

[forking the conversation since the subject has shifted]

On Sun, 3 Jul 2016 at 09:50 Steve Dower <steve.dower@python.org> wrote:
Many of our users prefer stability (the sort who plan operating system updates years in advance), but generally I'm in favour of more frequent releases.

So there's our 18-month cadence for feature/minor releases, and then there's the 6-month cadence for bug-fix/micro releases. At the language summit, Ned kicked off a discussion about our release schedule, and a group of us talked afterward about a stricter release cadence of 12 months, with the release date tied to a consistent month -- e.g. September of every year -- instead of our hand-wavy "about 18 months after the last feature release". People in that discussion seemed to like the 12-month consistency idea. I think making releases on a regular, annual schedule simply requires a decision by us to do it, since the time scale we are talking about is still so large it shouldn't impact the workload of RMs & friends that much (I think).
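As a back-of-the-envelope illustration of tying releases to a fixed month (September here is just the example from above, not a decision), the scheduling logic would be trivial:

```python
import datetime

RELEASE_MONTH = 9  # hypothetical: one feature release every September


def next_feature_release(today):
    """Date of the next annual feature release under a fixed-month cadence."""
    year = today.year if today.month < RELEASE_MONTH else today.year + 1
    return datetime.date(year, RELEASE_MONTH, 1)


print(next_feature_release(datetime.date(2016, 7, 3)))   # → 2016-09-01
print(next_feature_release(datetime.date(2016, 10, 1)))  # → 2017-09-01
```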

As for upping the bug-fix release cadence: if we can automate the process, then perhaps we can increase the frequency (maybe once every quarter), but I'm not sure what kind of overhead that would add and thus how much would need to be automated to make that cadence work. Shrinking the cadence for bug-fix releases would require the RM & friends to decide what would need to be automated to make the shorter schedule viable (e.g. "if we automated steps N & M of the release process then I would be okay releasing every 3 months instead of 6").

For me, I say we shift to an annual feature release in a specific month every year, and switch to quarterly bug-fix releases only if doing so adds zero extra work for RMs & friends.
 
It will likely require more complex branching though, presumably based on the LTS model everyone else uses.

Why is that? You can almost view our feature releases as LTS releases, at which point our current branching structure is no different.
 

One thing we've discussed before is separating core and stdlib releases. I'd be really interested to see a release where most of the stdlib is just preinstalled (and upgradeable) PyPI packages. We can pin versions/bundle wheels for stable releases and provide a fast track via pip to update individual packages.
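A rough sketch of how that pin-plus-fast-track resolution might work (package names and version numbers here are purely illustrative, not real proposals):

```python
# Hypothetical: a stable release pins bundled stdlib packages, while a
# fast-track channel can carry a newer version of an individual package
# that users opt into via pip.
PINNED = {"asyncio": "3.4.3", "typing": "3.5.2"}
FAST_TRACK = {"asyncio": "3.4.4"}


def resolve(package, allow_fast_track=False):
    """Pick the version a release (or a pip fast-track upgrade) would install."""
    if allow_fast_track and package in FAST_TRACK:
        return FAST_TRACK[package]
    return PINNED[package]


print(resolve("asyncio"))                         # → 3.4.3
print(resolve("asyncio", allow_fast_track=True))  # → 3.4.4
```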

Probably no better opportunity to make such a fundamental change as we move to a new VCS...

<deep breath />

Topic 1
=======
If we separate out the stdlib, we first need to answer why we would do it. The arguments supporting the idea are (1) it might simplify more frequent releases of Python (but that's a guess), (2) it would make the stdlib less CPython-dependent (if only through perception and the ease of testing with CI against other interpreters once they have matching version support), and (3) it might make it easier to attract contributors who are comfortable helping with just the stdlib rather than CPython itself (once again, this might simply be a matter of perception).

So if we really wanted to go the route of breaking out the stdlib, I think we have two options. One is to have the cpython repo represent the CPython interpreter and have a separate stdlib repo. The other is to still have cpython represent the interpreter but give each stdlib module its own repository.

Since the single stdlib repo is not that crazy, I'll talk about the crazier N-repo idea (in all scenarios we would probably have a repo that pulled in cpython and the stdlib through either git submodules or subtrees, and that would represent a CPython release repo). In this scenario, giving each module/package its own repo could get us a couple of things. One is that it might simplify module maintenance by letting each module have its own issue tracker, set of contributors, etc. It would also make it obvious which modules are being neglected, which would either draw attention and get them help, or honestly lead to deprecation if no one is willing to maintain them.

Separate repos would also allow for easier backport releases (e.g. what asyncio and typing have been doing since they were created). If a module is maintained as if it were its own project, it becomes easier to make releases separate from the stdlib itself (although the usefulness is minimized as long as sys.path has site-packages as its last entry). Separate releases allow for faster releases of the stand-alone module: if only asyncio has a bug, then asyncio can cut its own release and the rest of the stdlib doesn't need to care. Then when a new CPython release is done, we can simply bundle up whatever releases are stable at that moment and essentially make our mythical sumo release the stdlib release itself (and this would stop modules like asyncio and typing from copying files into the stdlib from their external repos, if we just pulled in their repos using submodules or subtrees in a master repo).
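The sys.path caveat above is easy to see in an interpreter: in a default CPython layout the stdlib directory precedes site-packages on sys.path, so a pip-installed stand-alone release of a stdlib module never wins the import. A quick check (a sketch; exact paths vary by platform and install):

```python
import sysconfig

# sysconfig reports the install scheme's directories; "stdlib" is where
# the bundled standard library lives, "purelib" is site-packages.
paths = sysconfig.get_paths()
print("stdlib dir:       ", paths["stdlib"])
print("site-packages dir:", paths["purelib"])
# Because the stdlib entry comes before site-packages on sys.path,
# importing e.g. asyncio resolves to the bundled copy, not a newer
# pip-installed one.
```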

And yes, I realize this might lead to a ton of repos, but maybe that's an important side effect. We have so much code in our stdlib that it's hard to maintain, and fixes can get dropped on the floor. If this causes us to re-prioritize what should be in the stdlib and trim it back to the things we consider critical to have in all Python releases, then IMO that's a huge win in maintainability and workload savings compared to carrying forward neglected code (or at least it would help people focus on the modules they care about and make clear where help is truly needed).

Topic 2
=======
Independent releases of the stdlib could be done, although if we break the stdlib up into individual repos, the conversation shifts: individual modules could simply do their own releases independent of a big stdlib release. Personally, I don't see the point of a stdlib release separate from CPython, but I could see doing a more frequent release of CPython where the only thing that changed is the stdlib itself (though I don't know whether that would even alleviate the RM workload).

For me, I'm more interested in breaking the stdlib modules into their own repos and making a CPython release more of a collection of python-dev-approved modules -- maintained under the python organization on GitHub and following our compatibility guidelines and code-quality standards -- along with the CPython interpreter. This would also make it much easier to build custom distros, e.g. a cloud-targeted CPython release that leaves out all the GUI libraries.
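As a toy illustration of that kind of distro assembly (the module lists below are made up for the example; a real manifest would be the python-dev-curated set):

```python
# Hypothetical manifest for a cloud-targeted distro: start from the full
# approved-module list and drop anything GUI-related.
APPROVED_MODULES = ["asyncio", "http", "idlelib", "json", "tkinter", "turtle"]
GUI_MODULES = {"idlelib", "tkinter", "turtle"}

cloud_distro = [m for m in APPROVED_MODULES if m not in GUI_MODULES]
print(cloud_distro)  # → ['asyncio', 'http', 'json']
```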

-Brett
 

Cheers,
Steve

Top-posted from my Windows Phone

From: Guido van Rossum
Sent: 7/3/2016 7:42
To: Python-Dev
Cc: Nick Coghlan
Subject: Re: [Python-Dev] Request for CPython 3.5.3 release

Another thought recently occurred to me. Do releases really have to be
such big productions? A recent ACM article by Tom Limoncelli[1]
reminded me that we're doing releases the old-fashioned way --
infrequently, and with lots of manual labor. Maybe we could
(eventually) try to strive for a lighter-weight, more automated
release process? It would be less work, and it would reduce stress for
authors of stdlib modules and packages -- there's always the next
release. I would think this wouldn't obviate the need for carefully
planned and timed "big deal" feature releases, but it could make the
bug fix releases *less* of a deal, for everyone.

[1] http://cacm.acm.org/magazines/2016/7/204027-the-small-batches-principle/abstract
(sadly requires login)

--
--Guido van Rossum (python.org/~guido)
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev