On Sun, Jan 31, 2016 at 11:57 AM, Julian Taylor <jtaylor.debian@googlemail.com> wrote:
> On 01/30/2016 06:27 PM, Ralf Gommers wrote:
>
>
> On Fri, Jan 29, 2016 at 11:39 PM, Nathaniel Smith <njs@pobox.com> wrote:
>
>     It occurs to me that the best solution might be to put together a
>     .travis.yml for the release branches that does: "for pkg in
>     IMPORTANT_PACKAGES: pip install $pkg; python -c 'import $pkg;
>     $pkg.test()'"
>     This might not be viable right now, but will be made more viable if
>     pypi starts allowing official Linux wheels, which looks likely to
>     happen before 1.12... (see PEP 513)
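For concreteness, a minimal sketch of the script step such a
release-branch .travis.yml might run; the package list is illustrative,
and the assumption that each package exposes a numpy-style pkg.test()
runner will not hold everywhere:

    # Sketch only: pip names and import names can differ
    # (scikit-learn installs as sklearn), hence the name pairs.
    set -e
    pip install .    # the numpy release branch under test
    for spec in "scipy scipy" "scikit-learn sklearn"; do
        set -- $spec
        pip install "$1"
        python -c "import $2; $2.test()"
    done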
>
>     On Jan 29, 2016 9:46 AM, "Andreas Mueller" <t3kcit@gmail.com> wrote:
>     >
>     > Is this the point when scikit-learn should build against it?
>
>     Yes please!
>
>     > Or do we wait for an RC?
>
>     This is still all in flux, but I think we might actually want a rule
>     that says it can't become an RC until after we've tested
>     scikit-learn (and a list of similarly prominent packages). On the
>     theory that RC means "we think this is actually good enough to
>     release" :-). OTOH I'm not sure the alpha/beta/RC distinction is
>     very helpful; maybe they should all just be betas.
>
>     > Also, we need a scipy build against it. Who does that?
>
>     Like Julian says, it shouldn't be necessary. In fact using old
>     builds of scipy and scikit-learn is even better than rebuilding
>     them, because it tests numpy's ABI compatibility -- if you find you
>     *have* to rebuild something then we *definitely* want to know that.
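A sketch of that kind of check, assuming pre-built scipy wheels from an
older numpy are available on the index:

    # Exercise ABI compatibility: run an existing scipy build against
    # a newer numpy, without rebuilding scipy.
    pip install scipy                  # wheel built against an older numpy
    pip install --upgrade --pre numpy  # then switch in the new pre-release
    python -c "import scipy; scipy.test()"  # an import error or crash here
                                            # suggests an ABI break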
>
>     > Our continuous integration doesn't usually build scipy or numpy, so it will be a bit tricky to add to our config.
>     > Would you run our master tests? [did we ever finish this discussion?]
>
>     We didn't, and probably should... :-)
>
> Why would that be necessary if scikit-learn simply tests pre-releases of
> numpy as you suggested earlier in the thread (with --pre)?
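On the scikit-learn side that can be a single extra CI entry; a sketch,
with the test runner being whatever the project normally uses:

    # Test the downstream package against numpy pre-releases;
    # --pre lets pip select alpha/beta/rc uploads from PyPI.
    pip install --pre --upgrade numpy
    pip install -e .     # rebuild/install scikit-learn against it
    nosetests sklearn    # hypothetical runner; substitute the real one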
>
> There's also https://github.com/MacPython/scipy-stack-osx-testing by the
> way, which could have scikit-learn and scikit-image added to it.
>
> That's two options that are imho both better than adding more workload
> for the numpy release manager. Also from a principled point of view,
> packages should test with new versions of their dependencies, not the
> other way around.


> It would be nice, but it's not realistic; I doubt most projects that are
> not themselves major downstreams are even subscribed to this list.

I'm pretty sure that some core devs from all major scipy stack packages are subscribed to this list.

> Testing, or delegating testing of, at least our major downstreams should
> be the job of the release manager.

If we make it (almost) fully automated, like in https://github.com/MacPython/scipy-stack-osx-testing, then I agree that adding this to the numpy release checklist would make sense.

But it should really only be a tiny amount of work. We're short on developer power, and many things that are cross-project, like build & test infrastructure (numpy.distutils, needed pip/packaging fixes, numpy.testing), scipy.org (the "stack" website), and numpydoc, are already mostly maintained by the numpy/scipy devs. I'm very reluctant to say yes to putting even more work on top of that.

So: it would really help if someone could pick up the automation part of this and improve the stack testing, so the numpy release manager doesn't have to do this.

Ralf