[Numpy-discussion] Downstream integration testing

Pauli Virtanen pav at iki.fi
Sun Jan 31 08:12:28 EST 2016


31.01.2016, 14:41, Daπid wrote:
> On 31 Jan 2016 13:08, "Pauli Virtanen" <pav at iki.fi> wrote:
>> For example, an automated test rig that does the following:
>>
>> - run tests of a given downstream project version, against
>>   previous numpy version, record output
>>
>> - run tests of a given downstream project version, against
>>   numpy master, record output
>>
>> - determine which failures were added by the new numpy version
>>
>> - make this happen with just a single command, e.g. "python run.py",
>>   and implement it for several downstream packages and versions.
>>   (Probably good to steal ideas from travis-ci dependency matrix etc.)
> 
> A simpler idea: build the master branch of a series of projects and run the
> tests. In case of failure, we can compare with Travis's logs from the
> project when they use the released numpy. In most cases, the master branch
> is clean, so an error will likely be a change in behaviour.

If you can assume the tests of a downstream project are in an OK state,
then you can skip the build against the existing numpy version.

But it's an additional and unnecessary burden for the Numpy maintainers
to compare the logs manually (and to check that the built versions are
the same, and that the differences are not due to differences in the
build environments).
I would also avoid depending on the other projects' Travis-CI
configurations, since these may change.

I think testing released versions of downstream projects is better here
than testing their master branches, as the master branch may already
contain workarounds for Numpy changes and so not be representative of
what people get on their computers after a Numpy release.
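
Concretely, I imagine something along these lines. The project list,
numpy version pins, nose-based failure parsing and POSIX virtualenv
layout below are only assumptions to make the idea concrete, not a
finished design:

    # run.py: rough sketch of the proposed rig (POSIX-only paths).
    import os
    import re
    import subprocess
    import tempfile
    import venv

    # Released downstream versions to check (examples only).
    PROJECTS = {"scipy": "scipy==0.16.1"}

    def failing_tests(numpy_req, project, project_req):
        """Install numpy_req and project_req in a fresh virtualenv, run
        the project's test suite, and return the set of failing test ids."""
        envdir = tempfile.mkdtemp(prefix="npcheck-")
        venv.create(envdir, with_pip=True)
        pip = os.path.join(envdir, "bin", "pip")
        python = os.path.join(envdir, "bin", "python")
        # Install numpy first, so the downstream project builds against it.
        subprocess.check_call([pip, "install", "nose", numpy_req])
        subprocess.check_call([pip, "install", project_req])
        # Assume a nose-based <pkg>.test() entry point; collect the
        # "FAIL:" / "ERROR:" lines from its report.
        proc = subprocess.run(
            [python, "-c", "import {0}; {0}.test(verbose=2)".format(project)],
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
            universal_newlines=True)
        return set(re.findall(r"^(?:FAIL|ERROR): (.*)$", proc.stdout, re.M))

    def report(project, project_req):
        old = failing_tests("numpy==1.10.4", project, project_req)
        new = failing_tests("git+https://github.com/numpy/numpy.git",
                            project, project_req)
        added = new - old
        print("%s: %d failures added by numpy master" % (project, len(added)))
        for test_id in sorted(added):
            print("    " + test_id)

    if __name__ == "__main__":
        for project, req in sorted(PROJECTS.items()):
            report(project, req)

Each project would need its own install and test recipe in practice,
but the interesting part, the set difference of failing test ids
between the two numpy versions, stays the same.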

> This can be run automatically once a week, so as not to hog Travis too
> much; counting the cost in hours of work, it is very cheap to set up
> and free to maintain.

It may be that such a project could be run on Travis, if split into
per-project runs to work around the 50-minute timeout.
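
For instance, a thin wrapper around the run.py sketch above could take
the project name on the command line, so that each CI job checks a
single downstream package (hypothetical; assumes run.py exposes the
PROJECTS table and report() helper):

    # split_runner.py: one downstream project per CI job, to stay under
    # the per-job time limit.  Usage:  python split_runner.py scipy
    import argparse

    import run  # the run.py sketch above

    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument("project", choices=sorted(run.PROJECTS),
                            help="downstream package to test against numpy master")
        args = parser.parse_args()
        run.report(args.project, run.PROJECTS[args.project])

    if __name__ == "__main__":
        main()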

I'm not aware of Travis-CI having support for "automatically once per
week" builds.

Anyway, having any form of central automated integration testing would
be better than the current situation, where it's mostly manual and
relies on the activity of downstream project maintainers.

-- 
Pauli Virtanen



