[Python-Dev] I am now lost - committed, pulled, merged, what is "collapse"?

John Arbash Meinel john at arbash-meinel.com
Tue Mar 22 10:45:08 CET 2011


On 3/21/2011 6:53 PM, Barry Warsaw wrote:
> On Mar 21, 2011, at 01:19 PM, R. David Murray wrote:
> 
>> So you are worried about the small window between me doing an 'svn up',
>> seeing no changes, and doing an 'svn ci'?  I suppose that is a legitimate
>> concern, but considering the fact that if the same thing happens in hg,
>> the only difference is that I know about it and have to do more work[*],
>> I don't think it really changes anything.  Well, it means that if your
>> culture uses the "always test" workflow you can't be *perfect* about it
>> if you use svn[**], which I must suppose has been your (and Stephen's)
>> point from the beginning.
>>
>> [*] that is, I'm *not* going to rerun the test suite even if I have to
>> pull/up/merge, unless there are conflicts.
> 
> I think if we really want full testing of all changesets landing on
> hg.python.org/cpython we're going to need a submit robot like PQM or Tarmac,
> although the latter is probably too tightly wedded to the Launchpad API, and I
> don't know if the former supports Mercurial.
> 
> With the benefits such robots bring, it's also important to understand the
> downsides.  There are more moving parts to maintain, and because landings are
> serialized, long test suites can sometimes cause significant backlogs.  Other
> than during the Pycon sprints, the backlog probably wouldn't be that big.
> 
> Another complication we'd have is running the test suite cross-platform, but I
> suspect that almost nobody does that today anyway.  So the buildbot farm would
> still be important.
> 
> -Barry

I'm personally a huge fan of two-tier (really multi-tier) testing. You have a
basic (and fast) test suite that runs across all your modules before every
commit to your mainline, and a much larger (and slower) regression suite that
runs across all platforms/etc. asynchronously. That gives you some basic
protection against brown-bag failures (you committed a typo in the main
Python.h file, breaking everyone) while still avoiding a big hit on throughput.
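
As a minimal sketch of that split, assuming a pytest-style setup where tests
are tagged with hypothetical "fast" and "slow" markers -- this is not what
CPython actually runs, it's just to show the shape of the idea:

import subprocess
import sys

def run_fast_suite():
    """Tier 1: quick smoke tests, run synchronously before every commit."""
    return subprocess.call(["pytest", "-m", "fast", "--maxfail=1"]) == 0

def start_slow_suite():
    """Tier 2: full regression run, fired off asynchronously (non-blocking)."""
    return subprocess.Popen(["pytest", "-m", "slow"])

if __name__ == "__main__":
    if not run_fast_suite():
        sys.exit("fast suite failed -- refusing to commit")
    start_slow_suite()  # results come back later, much like the buildbots
    print("fast suite passed; slow regression run started in the background")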

I think Launchpad is currently looking at doing batched PQM runs: every commit
to the final mainline must still pass the full test suite, but the automated
bot grabs multiple requests from the queue at a time, on the premise that 90%
of the time none of them will break anything. So you end up with a 100% stable
trunk (any given revision committed by the bot did pass the full test suite),
but you still get most of the throughput.
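
To make that concrete, here is a toy, self-contained simulation of such a
batching bot. The helper names and the batch size of 5 are made up for
illustration; this is not PQM's or Tarmac's actual API, and in real life
run_full_suite() would be an actual test run against a candidate merge rather
than a flag check.

from collections import deque

BATCH_SIZE = 5  # arbitrary for this sketch; a real bot would tune this

def run_full_suite(batch):
    # Stand-in for "merge the whole batch onto trunk and run the full suite".
    # Here each submission just carries a flag saying whether it would pass.
    return all(ok for _, ok in batch)

def land_in_batches(submissions):
    queue = deque(submissions)
    landed, bumped = [], []
    while queue:
        batch = [queue.popleft() for _ in range(min(BATCH_SIZE, len(queue)))]
        if run_full_suite(batch):
            landed.extend(name for name, _ in batch)   # batch lands, trunk stays green
        else:
            bumped.extend(name for name, _ in batch)   # whole batch is bumped
    return landed, bumped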

Also, by working in batch mode, if you have 20 submissions and submission #2
would have broken the test suite, only the batch containing it (say the first
5 submissions) gets bumped, and the other 15 still land in an orderly fashion.
You could even put any bumped submissions into a deferred 'one-by-one' queue,
where they are retried individually.
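
Running the toy loop above over that scenario (20 submissions, only #2 bad,
batches of 5) shows the arithmetic:

submissions = [("change-%d" % i, i != 2) for i in range(1, 21)]  # only #2 breaks
landed, bumped = land_in_batches(submissions)
print(len(landed), "landed:", landed[:3], "...")   # 15 landed (changes 6-20)
print(len(bumped), "bumped:", bumped)              # the 5 batched with change-2
# Those 5 then go into the deferred one-by-one queue, where retrying them
# individually also pinpoints change-2 as the real culprit.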

John
=:->


