
On Jan 25, 2014, at 10:09 AM, R. David Murray <rdmurray@bitdance.com> wrote:
On Sat, 25 Jan 2014 06:35:59 -0800, Eli Bendersky <eliben@gmail.com> wrote:
On Sat, Jan 25, 2014 at 6:14 AM, R. David Murray <rdmurray@bitdance.com>wrote:
On Sat, 25 Jan 2014 05:49:56 -0800, Eli Bendersky <eliben@gmail.com> wrote:
…do the latter in Python, which carries a problem we'll probably need to resolve first: how to know that the bots are green enough. That really needs human attention.
By "that needs human attention", do you mean: dealing with the remaining flaky tests, so that "stable buildbots are green" is a binary decision? We strive for that now, but Nick's proposal would mean we'd have to finally buckle down and complete the work. I'm sure we'd make some new flaky tests at some point, but in this future they'd become show-stoppers until they were fixed. I think this would be a good thing, overall :)
Non-flakiness of bots is a holy grail few projects attain. If your bots are consistently green with no flakes, it just means you're not testing enough :-)
How does OpenStack do it, then? I haven't actually looked at Zuul yet, though it is on my shortlist.
--David
Flaky tests have bugs assigned; if a test fails due to a known bug, you leave a comment on the review saying to reverify with the bug number. That also lets them track which bugs are causing the most trouble in the gate.
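Roughly, the tracking half is just counting which bug numbers show up in those "reverify" comments. A minimal sketch, with made-up comment strings standing in for whatever the review system actually exposes:

    import re
    from collections import Counter

    # Matches comments like "reverify bug 123456".
    REVERIFY = re.compile(r"\breverify bug (\d+)", re.IGNORECASE)

    def flakiest_bugs(comments):
        # Tally bug numbers across all review comments so the bugs
        # causing the most gate failures float to the top.
        counts = Counter()
        for text in comments:
            counts.update(REVERIFY.findall(text))
        return counts.most_common()

    comments = [  # hypothetical review comments
        "reverify bug 123456",
        "Reverify bug 123456",
        "reverify bug 654321",
    ]
    for bug, hits in flakiest_bugs(comments):
        print(bug, hits)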
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA