
On 31 Jan 2016 13:08, "Pauli Virtanen" <pav@iki.fi> wrote:
For example, an automated test rig that does the following:
- run tests of a given downstream project version, against previous numpy version, record output
- run tests of a given downstream project version, against numpy master, record output
- determine which failures were added by the new numpy version
- make this happen with just a single command, e.g. "python run.py", and implement it for several downstream packages and versions. (Probably good to steal ideas from the travis-ci dependency matrix etc.)
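The steps above could be sketched roughly as follows. This is only a hypothetical outline of such a "run.py", not an existing tool: the project names, version pins, and the pytest short-summary parsing are all assumptions.

```python
# Hypothetical sketch of the proposed rig: install one numpy version,
# run a downstream project's tests, record failures, then diff against
# the failures seen with another numpy version.
import subprocess
import sys


def run_tests(numpy_spec, project):
    """Install the given numpy version, run the project's test suite,
    and return the set of failing test IDs (pytest node IDs)."""
    subprocess.run([sys.executable, "-m", "pip", "install", numpy_spec],
                   check=True)
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "--pyargs", project,
         "-q", "--tb=no", "-rf"],
        capture_output=True, text=True)
    # The "-rf" short summary prints lines like "FAILED path::test_name"
    return {line.split()[1] for line in result.stdout.splitlines()
            if line.startswith("FAILED")}


def new_failures(old, new):
    """Failures added by the new numpy version (step 3 above)."""
    return new - old


if __name__ == "__main__":
    # Version pins and the downstream project are placeholders.
    old = run_tests("numpy==1.10.4", "scipy")
    new = run_tests("git+https://github.com/numpy/numpy.git", "scipy")
    for test_id in sorted(new_failures(old, new)):
        print(test_id)
```

Separating `new_failures` from the test-running step keeps the interesting part (which failures are new) trivial to extend to a whole matrix of projects and versions.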
A simpler idea: build the master branch of a series of projects and run their tests. In case of failure, we can compare with Travis's logs from the project when it uses the released numpy. In most cases the master branch is clean, so an error is likely a change in behaviour. This can be run automatically once a week so as not to hog too much of Travis; counted in hours of work, it is very cheap to set up and free to maintain. /David
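The comparison step in this simpler scheme might look like the following sketch. The file names and their contents are invented for illustration; in practice one list would be scraped from the project's Travis log and the other recorded from the local run against numpy master.

```shell
#!/bin/sh
# Hypothetical comparison: failures seen on Travis (released numpy)
# versus failures seen locally (numpy master). File names are assumptions.
printf 'test_old_bug\n' > travis_failures.txt                  # known failure
printf 'test_old_bug\ntest_new_bug\n' > local_failures.txt     # local run
sort -o travis_failures.txt travis_failures.txt
sort -o local_failures.txt local_failures.txt
# Lines only in local_failures.txt are likely behaviour changes in numpy
comm -13 travis_failures.txt local_failures.txt
```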