[core-workflow] Use buildbot to test tracker patches

Ezio Melotti ezio.melotti at gmail.com
Sat Jul 5 21:55:05 CEST 2014


On Sat, Jul 5, 2014 at 9:20 PM, anatoly techtonik <techtonik at gmail.com> wrote:
> On Sat, Jul 5, 2014 at 8:43 PM, Ezio Melotti <ezio.melotti at gmail.com> wrote:
>> On Sat, Jul 5, 2014 at 8:06 PM, francis <francismb at email.de> wrote:
>>> On 07/05/2014 02:46 PM, Ezio Melotti wrote:
>>>>
>>>>    1) a new "Test with buildbot" (or whatever) button is added to the
>>>> tracker next to each patch, and developers/triagers can press it to
>>>> trigger the build;
>>>
>>>
>>> Just a small detail/question: does it make a difference to replace the
>>> "trigger the build" button with a "continuously check this patch"
>>> checkbox?  The idea is not to check when the button is pressed
>>> (if I've understood the original idea correctly), but to set the
>>> check once and from then on get continuous feedback that the patch
>>> (still) works (e.g. with a small green icon near the patch).
>>
>> The advantage of this is that people can know immediately whether a patch
>> still applies (assuming someone checked the checkbox earlier and that
>> the check runs often enough).
>> However, this will waste a lot of machine time if no one is looking at
>> the patch for a long time, especially considering that there are
>> over 2000 patches currently on the tracker.
>
> We already know which files a patch touches:
> https://bitbucket.org/techtonik/python-stdlib/src/567adbfaa3bf1de905949e38a27bee0334081174/modstats.py?at=default#cl-167
> so we can automatically detect the subset of patches to retry on each
> commit.  We only need a DB with the analyzed patch info to run queries against.
>
> Independent pieces of code that need to be implemented as I see it:
>
> 1.  [x] patch upload -> [ ] patch detection -> [ ] patch reaction ->
> -> [x] patch analysis (get paths) -> [ ] store info (I'd use some key/value store)
>

This is also something I'm working on.  Even though I have some of
those pieces already working, integrating everything into the bug
tracker (or somewhere else) is not an easy task, so I haven't done it
yet.
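
For the record, here is a rough sketch of the "patch analysis (get
paths) -> store info" step from item 1, assuming plain unified diffs
and a sqlite table as the key/value store (the table name and schema
are placeholders I made up, not existing code):

import re
import sqlite3

# Matches the "+++ b/path/to/file" header lines that unified diffs use.
DIFF_HEADER = re.compile(r'^\+\+\+ (?:b/)?(\S+)')

def touched_paths(patch_text):
    """Return the set of file paths a unified diff modifies."""
    paths = set()
    for line in patch_text.splitlines():
        m = DIFF_HEADER.match(line)
        if m and m.group(1) != '/dev/null':
            paths.add(m.group(1))
    return paths

def store_patch_info(db_path, patch_id, paths):
    """Record which paths a given patch touches."""
    con = sqlite3.connect(db_path)
    con.execute('CREATE TABLE IF NOT EXISTS patch_paths '
                '(patch_id TEXT, path TEXT)')
    con.executemany('INSERT INTO patch_paths VALUES (?, ?)',
                    [(patch_id, p) for p in paths])
    con.commit()
    con.close()

Anything that maps patch ids to touched paths would do; sqlite is just
the simplest thing that persists.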

> 2.  [ ] post-commit hook to detect broken patches -> [ ] detect paths ->
> -> [ ] run query to select patches -> [ ] schedule patch tests against
> specific commits
>
> 3. [ ] get patch from the test queue -> [ ] checkout revision -> [ ] test
> -> [ ] store result to DB
>
> 4. [ ] roundup GUI indicator -> [ ] query DB for patch status
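
A possible shape for the "run query to select patches" step from item
2, assuming the same placeholder table as in the sketch above and a
list of paths touched by the new commit:

import sqlite3

def patches_affected_by(db_path, commit_paths):
    """Return ids of stored patches whose paths overlap the commit's."""
    if not commit_paths:
        return []
    con = sqlite3.connect(db_path)
    placeholders = ','.join('?' * len(commit_paths))
    rows = con.execute('SELECT DISTINCT patch_id FROM patch_paths '
                       'WHERE path IN (%s)' % placeholders,
                       list(commit_paths)).fetchall()
    con.close()
    return [row[0] for row in rows]

The result would then be fed to the test queue in item 3.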
>
>> On the technical side there are two issues:
>>   1) it's easier to send the request whenever someone presses the
>> button, rather than having something that keeps track of all the
>> patches that need to be checked and either keeps checking them or keeps
>> sending requests to the buildbot master;
>>   2) the green icon is a good idea, but it also requires either a
>> different way for the buildbot master to inform the tracker of the
>> results, or some way to intercept the email sent to the tracker and
>> convert it into a green/red icon instead of adding a regular message.  A
>> third and simpler option is to just add the icon via JS on the
>> client side, and possibly also collapse buildbot messages so that they
>> don't flood the message list.
>
> JS that queries a specific URL for the patch status, e.g.:
> http://hostname/patches/file5555/status
> {
>    status: "ok",   # or "check pending", "broken by commit ...", "applied", ...
> }
>

This can be done server-side if the status is hosted on the bug tracker.
Exposing this info to the outside as JSON is a separate issue.
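
To make the proposal concrete, here is a minimal sketch of such a
status endpoint as a standalone WSGI app (the URL layout, the status
values, and the in-memory store are all placeholders; wiring this into
Roundup and the real DB is a separate exercise):

import json
from wsgiref.simple_server import make_server

# Placeholder store; in practice this would query the patch-status DB.
PATCH_STATUS = {'file5555': 'ok'}

def status_app(environ, start_response):
    """Serve /patches/<fileid>/status as a tiny JSON document."""
    parts = environ.get('PATH_INFO', '').strip('/').split('/')
    status = 'unknown'
    if len(parts) == 3 and parts[0] == 'patches' and parts[2] == 'status':
        status = PATCH_STATUS.get(parts[1], 'unknown')
    body = json.dumps({'status': status}).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]

if __name__ == '__main__':
    make_server('localhost', 8000, status_app).serve_forever()

The client-side JS (or the server-side template) would then turn that
status into the green/red icon.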

> The task seems huge, so instead of making a half-baked hack, I propose
> adding some engineering: write down the roadmap above and concentrate on
> that, then let people have fun adding meat to the bones.
>

Given the time I have available, I can offer either a half-baked hack
or nothing.
Once I put down the first bone, people can add other bones/meat as they see fit.

I don't think having a roadmap will make much difference, though.  Once
you have it, you still need to advertise it, reach the people who have
the skill, time, and interest, get them involved, find the time to
review and apply the changes, etc.
I think we already discussed this somewhere else, and this thread is
not the right place to bring it up again (if you wish to continue the
discussion we can do so off-list, but you should already know my
opinion on the topic).

> This will be good practice for testing a more focused distributed workflow.
> --
> anatoly t.

