Travis bottleneck at sprints
tl;dr Could we temporarily bump our cap on concurrent Travis builds during sprints?

During the sprints at PyCon we've been running into a serious bottleneck with Travis. Having an extra allowance for builds already is great, but during sprints even that gets swamped. I'm not sure which projects contribute most to the problem, but I'm fairly sure it isn't CPython (essentially 2 builds per PR). Regardless, with the new workflow this bottleneck significantly slows the higher pace that sprints usually sustain. Would it make sense to ask the Travis folks whether they'd be willing to bump our limit temporarily during each sprint?

-eric
I've noticed that too; the python/mypy project is also experiencing slow builds (as is mypy/typeshed). I don't know how to contact Travis for this though.

On Wed, May 24, 2017 at 2:38 PM, Eric Snow <ericsnowcurrently@gmail.com> wrote:
--
--Guido van Rossum (python.org/~guido)
_______________________________________________
core-workflow mailing list
core-workflow@python.org
https://mail.python.org/mailman/listinfo/core-workflow
This list is governed by the PSF Code of Conduct: https://www.python.org/psf/codeofconduct
Donald is our contact with Travis, so I've explicitly added him to this email.

To give some details: we get 25 concurrent jobs across all the various "official" Python projects on GitHub hosted under the Python, PyPA, and PyCA organizations (which is a substantial bump from what most projects get; see https://travis-ci.com/plans to get an idea of what we're getting for free). CPython itself uses 3 of those with any single PR or merge into a branch (docs, Py_DEBUG, coverage).

Now normally this works out great for us, since CPython is probably one of the more active projects that gets to use this increased budget, so we typically take a chunk of the 25 concurrent builds happily and get our builds started very promptly. But at the sprints we ran up against cryptography and their crazy build needs: https://travis-ci.org/pyca/cryptography . IOW, having every major Python project using Travis' free service at once hit us hard.

As to whether we can get more of a budget for the sprints at PyCon US (or any other conference), I don't know. Maybe Donald could tell us more detail and/or find out if next year we can plan ahead to get a temporary boost for the four days. Otherwise we're talking about making PyCA suffer year-round by having them get their own quota or something.

On Wed, 24 May 2017 at 14:46 Guido van Rossum <guido@python.org> wrote:
On May 25, 2017, at 4:49 PM, Brett Cannon <brett@python.org> wrote:
Just to be completely accurate, PyPA and PyCA had independent quotas for a long time (the default is, uh, 5 I think, and both PyPA and PyCA had 10). When I reached out to Travis about getting the same for CPython, they came up with the idea of sharing a quota to allow CPython a chance to burst higher without having to add yet another dedicated queue for CPython. This generally ends up being a net positive for everyone, since everyone gets a chance at a much larger burst capacity, and between the three orgs we don't generally have *too* much contention.

I can reach out to Travis, though, about the possibility of a temporary bump during large sprints.

Another option might be to reduce the human bottleneck by allowing a bot to auto-merge once tests pass (although that adds more dev work, of course). That way people can review and mark a PR as approved (or even explicitly as auto-merge), and once tests catch up the bot handles merging it. That would let people move on once everything but waiting on tests and pressing the merge button is done.

—
Donald Stufft
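The decision logic of the bot Donald sketches could look roughly like this. This is a hypothetical sketch, not an existing tool: the "auto-merge" label name and the action names are assumptions, and the CI state is whatever the bot reads from its CI provider (e.g. GitHub's combined commit status, which reports "pending", "success", or "failure"):

```python
def bot_action(labels, ci_state):
    """Decide what a hypothetical auto-merge bot does with one PR.

    labels:   set of labels on the PR; per the idea above, a reviewer
              adds "auto-merge" after approving (label name is made up)
    ci_state: combined CI status for the PR's head commit
    """
    if "auto-merge" not in labels:
        return "wait-for-review"   # no human sign-off yet
    if ci_state == "pending":
        return "requeue"           # check again once Travis catches up
    if ci_state == "success":
        return "merge"             # press the merge button for the human
    return "notify-author"         # tests failed; needs attention

print(bot_action({"auto-merge"}, "pending"))   # requeue
print(bot_action({"auto-merge"}, "success"))   # merge
```

The point of the design is exactly what Donald describes: the reviewer's approval is recorded up front, so nobody has to sit around waiting for a slow queue just to click merge.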
participants (4)

- Brett Cannon
- Donald Stufft
- Eric Snow
- Guido van Rossum