[Python-Dev] My thinking about the development process

Brett Cannon brett at python.org
Sat Dec 6 16:21:46 CET 2014


On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft <donald at stufft.io> wrote:

>
> On Dec 6, 2014, at 9:11 AM, Brett Cannon <brett at python.org> wrote:
>
>
>
> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray <rdmurray at bitdance.com>
> wrote:
>
>> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow <
>> ericsnowcurrently at gmail.com> wrote:
>> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon <bcannon at gmail.com> wrote:
>> > > We don't exactly have a ton of people constantly going "I'm so bored
>> > > because everything for Python's development infrastructure gets
>> > > sorted so quickly!" A perfect example is that R. David Murray came up
>> > > with a nice update for our workflow after PyCon but then ran out of
>> > > time after mostly defining it and nothing ever became of it (maybe we
>> > > can rectify that at PyCon?). Eric Snow has pointed out how he has
>> > > written similar code for pulling PRs from, I think, GitHub to another
>> > > code review tool, but that doesn't magically make it work in our
>> > > infrastructure or get someone to write it and help maintain it (no
>> > > offense, Eric).
>> >
>> > None taken.  I was thinking the same thing when I wrote that. :)
>> >
>> > >
>> > > IOW our infrastructure can do anything, but it can't run on hopes and
>> > > dreams. Commitments from many people to making this happen by a
>> > > certain deadline will be needed so as to not allow it to drag on
>> > > forever. People would also have to commit to continued maintenance to
>> > > make this viable long-term.
>>
>> The biggest blocker to my actually working on the proposal I made was that
>> people wanted to see it in action first, which means I needed to spin up
>> a test instance of the tracker and do the work there.  That barrier to
>> getting started was enough to keep me from getting started...even though
>> the barrier isn't *that* high (I've done it before, and it is easier now
>> than it was when I first did it), it is still a *lot* higher than
>> checking out CPython and working on a patch.
>>
>> That's probably the biggest issue with *anyone* contributing to tracker
>> maintenance, and if we could solve that, I think we could get more
>> people interested in helping maintain it.  We need the equivalent of
>> dev-in-a-box for setting up for testing proposed changes to
>> bugs.python.org, but including some standard way to get it deployed so
>> others can look at a live system running the change in order to review
>> the patch.
>>
>
> Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
> the past week, but this just screams "container" to me, which would make
> getting a test instance set up dead simple.
>
>
> Heh, one of my thoughts on deploying the bug tracker into production was
> via a container, especially since we have multiple instances of it. I got
> side-tracked getting the rest of the infrastructure ready for a web
> application, making some improvements there, and setting up a big
> PostgreSQL database cluster (2x 15GB RAM servers running in
> Primary/Replica mode). The downside, of course, is that afaik Docker is a
> lot harder to use on Windows, and to some degree OS X, than on Linux.
> However, if the tracker could be deployed as a Docker image, that would
> make the infrastructure side a ton easier. I also have control over the
> python/ organization on Docker Hub for whatever uses we have for it.
>

I think it's something worth thinking about, but like you I don't know how
well containers work on OS X or Windows (I don't work with containers
personally).
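
To make that concrete, here is a rough sketch of what a "tracker-in-a-box"
helper might look like as a thin Python wrapper around the docker CLI. The
image name and port are made up purely for illustration, and it assumes
someone has already built an image with Roundup and the bugs.python.org
configuration baked in:

    import subprocess
    import uuid


    def spin_up_tracker(image="python/bpo-tracker", port=8080):
        """Start a disposable tracker instance; return its container name."""
        name = "bpo-test-" + uuid.uuid4().hex[:8]
        subprocess.check_call([
            "docker", "run",
            "-d",                      # run in the background
            "--name", name,
            "-p", "%d:8080" % port,    # expose the tracker's web UI
            image,
        ])
        return name


    def tear_down_tracker(name):
        """Stop and delete the disposable instance."""
        subprocess.check_call(["docker", "rm", "-f", name])

Anyone reviewing a tracker patch could then point a browser at the
published port, poke at a live instance running the change, and throw the
whole thing away when done.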


>
> Unrelated to the tracker:
>
> Something that any PEP should consider is security, particularly that of
> running the tests. Currently we have a buildbot fleet that checks out the
> code and executes the test suite (which is itself just code). A problem
> that any pre-merge test runner needs to solve is that, unlike a
> post-merge runner, which will only run code that has been committed by a
> committer, a pre-merge runner will run code that _anybody_ has submitted.
> This means it’s not enough to simply trigger a build in our buildbot
> fleet prior to the merge happening, as that would allow anyone to execute
> arbitrary code there. As far as I’m aware there are two solutions to this
> problem in common use: either use throw-away environments/machines/
> containers that isolate the running code and get destroyed after each
> test run, or don’t run the pre-merge tests immediately unless they’re
> from a “trusted” person, and for “untrusted” or “unknown” people require
> a “trusted” person to give the OK for each test run.
>
> The throw-away machine solution is obviously a much nicer experience for
> the “untrusted” or “unknown” users, since they don’t require any
> intervention to get their tests run, which means they can see if their
> tests pass, fix things, and iterate much more quickly. The obvious
> downside is that it’s more effort to build, and it depends on the
> availability of throw-away environments for all the systems we support.
> Linux, most (all?) of the BSDs, and Windows are pretty easy here since
> there are cloud offerings for them that can be used to spin up a
> temporary environment, run tests, and then delete it. OS X is a problem
> because afaik you can only virtualize OS X on Apple hardware and I’m not
> aware of any cloud provider that offers metered access to OS X hosts. The
> more esoteric systems like AIX and whatnot are likely an even bigger
> problem in this regard, since I’m unsure of the ability to get
> virtualized instances of these at all. It may be possible to build our
> own images of these on a cloud provider, assuming their licenses allow
> that.
>
> The other solution would work more easily with our current buildbot
> fleet, since you’d just tell it to run some tests, but you’d wait until a
> “trusted” person gave the OK before doing so.
>
> A likely solution is to use a pre-merge test runner for the systems we
> can isolate, which will give a decent indication of whether the tests are
> going to pass across the entire supported matrix, and then continue to
> use the current post-merge test runner to handle the esoteric systems
> that we can’t work into the pre-merge testing.
>

Security is definitely something to consider, and what you outlined above
is all reasonable for CI of submitted patches.
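
The throw-away approach is essentially a create/test/destroy loop. As a
purely illustrative sketch (the image name is hypothetical, and it assumes
a Linux worker with Docker installed plus an image that already contains a
built checkout with the submitted patch applied):

    import subprocess


    def run_premerge_tests(image):
        """Run the test suite for a submitted patch in a throw-away
        container; return True if it passed.

        --rm guarantees the container, and anything the untrusted
        code did inside it, is deleted once the run finishes.
        """
        return subprocess.call(
            ["docker", "run", "--rm", image, "python", "-m", "test"]
        ) == 0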
The isolation requirement is also a reason to consider hosted CI services
like Travis, Codeship, Drone, etc.: they are already set up for exactly
this kind of thing, so we could use them for the pre-commit checks and
then rely on the buildbots for post-commit verification that we didn't
break some specific platform.
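
For the platforms that have to stay on the buildbots, the "trusted person
gives the OK" approach is basically a filter in front of the build
trigger. A minimal sketch of that logic (the names, the trusted set, and
the comment convention are all made up for illustration):

    from collections import namedtuple

    Comment = namedtuple("Comment", ["author", "body"])

    # Hypothetical set of people whose submissions may run unreviewed.
    TRUSTED = {"brett.cannon", "r.david.murray", "donald.stufft"}


    def may_run_premerge(submitter, comments):
        """Decide whether a pre-merge test run may start."""
        if submitter in TRUSTED:
            return True  # committers' patches run immediately
        # Otherwise wait for an explicit OK from a trusted person.
        return any(comment.author in TRUSTED
                   and "ok to test" in comment.body.lower()
                   for comment in comments)

Either way, the expensive and risky part (actually running the code) only
happens once something or someone has vouched for it.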