[Numpy-discussion] Allow == and != to raise errors

Nathaniel Smith njs at pobox.com
Thu Jul 25 07:48:10 EDT 2013

On Tue, Jul 23, 2013 at 4:10 PM, Frédéric Bastien <nouiz at nouiz.org> wrote:
> I'm mixed: I see the value, but I'm not able to guess the
> consequences of the interface change.
> So doing your FutureWarning would let us gather some data about this, and
> if it seems to cause too many problems, we could cancel the change.
> Also, if there is some software that depends on the old behaviour,
> this will cause a crash (unless it has a catch-all Exception handler), not
> bad results.

I think we have to be willing to fix bugs, even if we can't be sure
what all the consequences are. Carefully of course, and with due
consideration to possible compatibility consequences, but if we
rejected every change that might have unforeseen effects then we'd
have to stop accepting changes altogether. (And anyway the
show-stopper regressions that make it into releases always seem to be
the ones we didn't anticipate at all, so I doubt that being 50% more
careful with obscure corner cases like this will have any measurable
impact on our overall release-to-release compatibility.) So I'd
consider Fred's comments above to be a vote for the change.
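The FutureWarning approach Fred refers to can be sketched in plain Python. This is a hypothetical helper, not NumPy's actual implementation: the old result is kept for now, but a FutureWarning is emitted so that downstream projects (especially ones running their tests with warnings turned into errors) can find the code paths that rely on the behaviour slated to change.

```python
import warnings

def eq_with_warning(a, b):
    # Hypothetical sketch of the deprecation pattern under discussion,
    # not NumPy's real comparison code: when the comparison is one
    # whose result is slated to change, keep the old behaviour but
    # emit a FutureWarning so callers can audit their code.
    if len(a) != len(b):
        warnings.warn(
            "comparison of differently sized sequences will raise an "
            "error in a future release",
            FutureWarning, stacklevel=2)
        return False  # old behaviour, preserved for now
    return [x == y for x, y in zip(a, b)]

# Projects that want tomorrow's behaviour today can turn the warning
# into an error:
with warnings.catch_warnings():
    warnings.simplefilter("error", FutureWarning)
    try:
        eq_with_warning([1, 2], [1, 2, 3])
    except FutureWarning:
        pass  # this is where the future error would surface
```

Running a test suite under `-W error::FutureWarning` then gives exactly the kind of data Fred asks for: which packages would break if the warning became an error.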

> I think it is always hard to predict the consequences of an interface change
> in NumPy. To help measure them, we could ask people to contribute to a
> collection of software that uses NumPy and has good test suites. We could
> test an interface change by running those projects' test suites to get an
> estimate of its impact. What do you think of that? I think it
> was already discussed on the mailing list, but not acted upon.

Yeah, if we want to be careful then it never hurts to run other
projects' test suites to flush out bugs :-).

We don't do this systematically right now. Maybe we should stick some
precompiled copies of scipy and other core numpy-dependants up on a
host somewhere, then pull them down and run their test suites as
part of the Travis tests? We still have maybe 10 minutes of CPU budget
for tests.
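A minimal sketch of such a CI helper, assuming each downstream suite can be invoked as a subprocess (the project names and commands below are stand-ins, not an actual NumPy CI configuration):

```python
import subprocess
import sys

def run_suite(cmd):
    """Run one downstream test command; report whether it passed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

# In a real setup each entry might be something like
#   [sys.executable, "-m", "pytest", "--pyargs", "scipy", "-q"]
# run against a freshly built numpy. Here, trivial stand-in commands
# exercise the helper.
suites = {
    "passing-project": [sys.executable, "-c", "import sys; sys.exit(0)"],
    "failing-project": [sys.executable, "-c", "import sys; sys.exit(1)"],
}
results = {name: run_suite(cmd) for name, cmd in suites.items()}
```

A CI job would then fail (or at least report) if any downstream suite regresses, giving early warning before a release ships.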
