[Numpy-discussion] Allow == and != to raise errors

Frédéric Bastien nouiz at nouiz.org
Thu Jul 25 09:15:30 EDT 2013

On Thu, Jul 25, 2013 at 7:48 AM, Nathaniel Smith <njs at pobox.com> wrote:

> On Tue, Jul 23, 2013 at 4:10 PM, Frédéric Bastien <nouiz at nouiz.org> wrote:
> > I'm torn, because I see the value, but I'm not able to guess the
> > consequences of the interface change.
> >
> > So doing your FutureWarning would allow us to gather some data about
> > this, and if it seems to cause too many problems, we could cancel the
> > change.
> >
> > Also, in the case where some software depends on the old behaviour, this
> > will cause a crash (unless they have a catch-all except clause), not a
> > bad result.
> I think we have to be willing to fix bugs, even if we can't be sure
> what all the consequences are. Carefully of course, and with due
> consideration to possible compatibility consequences, but if we
> rejected every change that might have unforeseen effects then we'd
> have to stop accepting changes altogether. (And anyway the
> show-stopper regressions that make it into releases always seem to be
> the ones we didn't anticipate at all, so I doubt that being 50% more
> careful with obscure corner cases like this will have any measurable
> impact in our overall release-to-release compatibility.) So I'd
> consider Fred's comments above to be a vote for the change, in
> practice...
> > I think it is always hard to predict the consequences of an interface
> > change in NumPy. To help measure them, we could ask people to contribute
> > to a collection of software that uses NumPy and has a good test suite.
> > We could then evaluate an interface change by running those test suites
> > to get a guess of its impact. What do you think of that? I think it
> > was already discussed on the mailing list, but not acted upon.
> Yeah, if we want to be careful then it never hurts to run other
> projects' test suites to flush out bugs :-).
> We don't do this systematically right now. Maybe we should stick some
> precompiled copies of scipy and other core numpy dependants up on a
> host somewhere, then pull them down and run their test suites as
> part of the Travis tests? We still have maybe 10 minutes of CPU budget
> for tests.
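
Nathaniel's Travis idea could be sketched roughly like this in a `.travis.yml` fragment. This is only a hypothetical illustration: the project list, the idea of using pinned/precompiled builds, and the per-project test commands are placeholders, not an agreed setup.

```yaml
# sketch: build the numpy checkout under test, then run downstream
# projects' test suites against it (projects and commands are placeholders)
install:
  - pip install .            # the numpy branch/PR being tested
  - pip install scipy        # in practice: a pinned, precompiled copy
script:
  - python -c "import numpy; numpy.test()"
  - python -c "import scipy; scipy.test()"   # downstream test suite
```

The main constraint, as noted above, is the CPU budget: each downstream suite eats into the ~10 minutes available per Travis job.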

The Theano test suite would take too long. I'm not sure that travis-ci is
the right place for this. Doing it for every version of a PR would take too
long on Travis and would limit which projects we could test.

What about a Vagrant VM that updates/installs the development version
of NumPy, then reinstalls some predetermined versions of other projects and
runs their tests?

I started playing with Vagrant VMs to help test different OS configurations
for Theano. I haven't finished this, but it seems to do the job well. People
just cd into a directory and run "vagrant up", and everything is automatic.
They just wait and read the output.
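
The workflow I have in mind could look something like the Vagrantfile below. This is only a sketch; the box name, the downstream project (scipy), and the provisioning commands are hypothetical placeholders for whatever we would actually pin.

```ruby
# sketch of a Vagrantfile for the workflow described above;
# box name, project list, and install commands are placeholders
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update && apt-get install -y python-pip git
    # install the development version of NumPy
    pip install git+https://github.com/numpy/numpy.git
    # reinstall a predetermined version of a downstream project
    pip install scipy
    # run its test suite and report the results
    python -c "import scipy; scipy.test()"
  SHELL
end
```

With this, "vagrant up" provisions the VM, installs NumPy's dev version plus the pinned downstream projects, and runs their tests, with no Travis time limit to worry about.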

Any other ideas? I know some other projects use Jenkins. Would this be a
better fit?
