Regular scheduled releases (was: Continuing 2.x)
On Fri, Oct 29, 2010 at 21:54, Barry Warsaw
Another quick thought. What would people think about regular timed releases of Python 2.7? This is probably more a question for Benjamin, but doing so might provide better predictability and "customer service" to our users. I might like to see monthly releases, but even quarterly would probably be useful. Doing timed releases might also incentivize folks to fix more outstanding 2.7 bugs.
I would really like to see a more regular and frequent release schedule. Most of my experience with this is with Mercurial, where we have a time-based schedule with a feature release every four months and a bugfix release at least every month (usually on the first of each month), and more often if we have bad regressions. It's nice because (a) releases are practiced more and therefore become easier to do, and (b) regressions can be fixed in a shorter timeframe. A predictable schedule is also just nice for all parties involved.

In Gentoo, we actually started taking backports from the maintenance branches to fix issues (regressions) in our packages, but it didn't work out so well. Obviously a random snapshot from SVN (even from a stable branch) isn't exercised as well as an actual release, so we ended up having some issues due to that. Also, releasing packages with a version number that doesn't fully correspond to the tarball is less than ideal (we mitigated this somewhat by adding a date tag to the packages, but still).

Here are the bugfix releases from the 2.6 branch:

2.6.1: 64 days
2.6.2: 131 days
2.6.3: 174 days
2.6.4: 23 days (critical regressions)
2.6.5: 145 days
2.6.6: 158 days

That's an average of 4 (if you include .4) or 4.5 months (PEP 6 specifies 6 months, but some of its parts seem outdated). I think releasing each month might be a bit ambitious, but it would be great to drive down the release interval towards 2-3 months instead of 4-5.

Cheers,

Dirkjan
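[Editorial aside: the rough averages quoted above can be reproduced from the day counts in the message; this is a quick sketch, not part of the original thread, and it assumes a 30-day month for simplicity.]

```python
# Day counts between consecutive 2.6.x bugfix releases, as quoted above.
# 2.6.4 was an out-of-cycle release for critical regressions.
intervals = {
    "2.6.1": 64,
    "2.6.2": 131,
    "2.6.3": 174,
    "2.6.4": 23,
    "2.6.5": 145,
    "2.6.6": 158,
}

def average_months(days, exclude=()):
    """Mean release interval in (30-day) months, optionally skipping releases."""
    kept = [d for version, d in days.items() if version not in exclude]
    return sum(kept) / len(kept) / 30.0

print(round(average_months(intervals), 1))                     # including 2.6.4
print(round(average_months(intervals, exclude={"2.6.4"}), 1))  # excluding 2.6.4
```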
On Sat, Oct 30, 2010 at 7:39 AM, Dirkjan Ochtman
That's an average of 4 (if you include .4) or 4.5 months (PEP 6 specifies 6 months, but some of the parts seem outdated). I think releasing each month might be a bit ambitious, but it would be great to drive down the release interval towards 2-3 months instead of 4-5.
Ultimately, the frequency of releases comes down to the burden on the release manager and the folks that build the binary installers. Any given RM is usually only responsible for one or two branches, but the same two people (Martin and Ronald) typically build the Windows and Mac OS X binaries for all of them. So if you add 2.6 and 3.1 together, as well as the releases for 2.7 and 3.2 development, I think you'll find releases happening a lot more often than an average of one every 4 months.

I suspect the most significant thing that needs to be done in making more regular bug fix releases possible is solid, reliable automated creation of Windows and Mac OS X binaries. We also need to consider the impact on downstream - switching to a new compiler or interpreter version generally has a much higher chance of breaking things than switching to a new version of almost any other software development tool.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sat, Oct 30, 2010 at 05:22, Nick Coghlan
Ultimately, the frequency of releases comes down to the burden on the release manager and the folks that build the binary installers. Any given RM is usually only responsible for one or two branches, but the same two people (Martin and Ronald) typically build the Windows and Mac OS X binaries for all of them. So if you add 2.6 and 3.1 together, as well as the releases for 2.7 and 3.2 development, I think you'll find releases happening a lot more often than an average of 1 every 4 months.
Right, the effort of those people is obviously the limiting factor here. Automating builds sounds like a good step forward. What are the sticky bits here? Martin, Ronald, how much of the process is not automated, and why is automating hard? Cheers, Dirkjan
Right, the effort of those people is obviously the limiting factor here. Automating builds sounds like a good step forward. What are the sticky bits here? Martin, Ronald, how much of the process is not automated, and why is automating hard?
I don't feel like producing a complete list of build steps; the entire process takes about four hours. The steps that are difficult to automate are:

- code signing
- testing the resulting installer
- uploading

However, in a significant number of releases, some of the build steps failed, so it requires some investigation to find the source of the problem.

Regards,
Martin
On Sat, Oct 30, 2010 at 14:09, "Martin v. Löwis"
I don't feel like producing a complete list of build steps; the entire process takes about four hours.
So is most of this scripted, or is there just a process in your head?
The steps that are difficult to automate are:

- code signing
- testing the resulting installer
- uploading
Why are those steps difficult to automate? I'd think that uploading is just some ftp commands. Signing might be a thing because you don't want the keys distributed. Testing the resulting installer could probably be automated in part, and people could still download the nightlies to do more testing.
However, in a significant number of releases, some of the build steps failed, so it requires some investigation to find the source of the problem.
Well, sure, but if we do continuous builds we find failures like that sooner. Cheers, Dirkjan
On 30.10.2010 14:29, Dirkjan Ochtman wrote:
On Sat, Oct 30, 2010 at 14:09, "Martin v. Löwis" wrote:
I don't feel like producing a complete list of build steps; the entire process takes about four hours.
So is most of this scripted, or is there just a process in your head?
Define "most". There are hundreds of commands that run automatically (like all compiler invocations). But still, there are about 20 steps that are in my head, some written down in plain English. But yes, "most" of the steps (by sheer number) are automated.
Why are those steps difficult to automate? I'd think that uploading is just some ftp commands.
No, it's ssh, and you need access to the right key. More importantly, it involves editing content.ht to update the links, computing and copying the md5sums and sizes, and svn-adding the GPG signatures. It *could* be automated, I suppose. It just isn't, and it would take quite some time to automate it.
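[Editorial aside: the md5sum/size bookkeeping Martin mentions could be sketched along these lines; this is not from the thread, and the tarball name is purely illustrative.]

```python
# Compute the md5 digest and byte size of a release artifact,
# the two values copied by hand into content.ht per the message above.
import hashlib
import os

def artifact_info(path):
    """Return (md5 hex digest, size in bytes) for a release file."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            md5.update(chunk)
    return md5.hexdigest(), os.path.getsize(path)

if os.path.exists("Python-2.7.1.tgz"):  # hypothetical artifact name
    digest, size = artifact_info("Python-2.7.1.tgz")
    print("%s  %d bytes" % (digest, size))
```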
Testing the resulting installer could probably be automated in part, and people could still download the nightlies to do more testing.
People will never ever test nightly builds. Been there, done that. Instead, the nightly build process will break, and nobody will fix it for months (or even complain, for that matter).

I also doubt that the installer could be automatically tested with reasonable effort.

Regards,
Martin
"Martin v. Löwis"
People will never ever test nightly builds. Been there, done that. Instead, the nightly build process will break, and nobody will fix it for months (or even complain, for that matter).
Certainly seems to be past experience. I know Martin knows this, but for other readers who may not: my Windows XP buildbot built nightly Windows installers (the basic MSI package, close to but not necessarily a fully signed new release as Martin makes) starting in September 2007. It ran successfully for about 6 months, at which point it started to fail fairly consistently. Nobody noticed, and Martin and I finally just shut it down in December, deciding it wasn't worth the effort to try to fix.

My OSX buildbot has been building nightly DMG images (though again, I suspect Ronald has a few extra steps beyond what it's doing for a full release) since April. I'd actually be interested in knowing if anyone is using them - I suspect perhaps not.

In both cases, getting the process going actually took quite a bit of effort (even stuff like having to fix the buildbot upload code in the Windows case), not just on my part, but with the help of Martin and Ronald. But without actual use of the result, it's hard to think it was worth it. I'm pretty sure my default reaction to a breakdown in the current OSX build process at this point would be to first suggest disabling it unless there were real users.

--
David
participants (4)
- "Martin v. Löwis"
- David Bolen
- Dirkjan Ochtman
- Nick Coghlan