On Sun, Apr 28, 2019, 08:41 Marten van Kerkwijk <m.h.vankerkwijk@gmail.com> wrote:
Hi Ralf,

Thanks for the comments and summary slides. I think you're over-interpreting my wish to break people's code! I certainly believe - and think we all agree - that we remain as committed as ever to ensure that
continues to work just as before. My main comment is that I want to ensure that no similar guarantee will exist for
(or whatever we call it). I think that is quite consistent with NEP-18, since as originally written there was not even the possibility to access the implementation directly (which was after long discussions about whether to allow it, including ideas like `import numpy.internal_api as np`). In this respect, the current proposal is a large deviation from the original intent, so we need to be clear about what we are promising.

In summary, I think the guarantees should be as follows:
1. If you call np.function and
   - do not define __array_function__, changes happen only via the usual deprecation cycle;
   - define __array_function__, you take responsibility for returning the result.
2. If you call np.function.__wrapped__ and
   - input only ndarray, changes happen only via the usual deprecation cycle;
   - input anything but ndarray, changes can happen in any release.
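To illustrate guarantee 1 concretely (the class and its names below are my own invention for illustration, not part of NEP-18): once a type defines __array_function__, it takes full responsibility for the result of any overridable np.function call involving it.

```python
import numpy as np

class MyDuckArray:
    """Hypothetical duck array opting into NEP-18 dispatch."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_function__(self, func, types, args, kwargs):
        # Per guarantee 1: once this method exists, *we* are
        # responsible for the result of the np.function call.
        if func is np.concatenate:
            unwrapped = [np.asarray(getattr(a, "data", a)) for a in args[0]]
            return MyDuckArray(np.concatenate(unwrapped, **kwargs))
        return NotImplemented  # unhandled functions raise TypeError

out = np.concatenate([MyDuckArray([1, 2]), MyDuckArray([3, 4])])
print(type(out).__name__, out.data.tolist())  # MyDuckArray [1, 2, 3, 4]
```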

Let's just say that __skip_array_function__ is provisional, the same as __array_function__ itself.

On the larger picture: in your slides, the further split is that, if no override is present, the first thing actually called is not the function implementation but rather `ndarray.__array_function__`.

This is tricky. I've definitely wanted to figure out some way the conceptual model could be simplified by integrating __array_function__ into regular dispatch to reduce special cases. (It's possible I suggested adding ndarray.__array_dispatch__ in the first place?)

But, on further consideration, I don't think there's actually any way to pretend that ndarray is just another duck array with an __array_function__ method. The big problem is:

np.concatenate([[1, 2], [3, 4]])

Here none of the arguments have __array_function__ methods. So the implementation *has* to start by doing coercion. The coercion can't happen inside __array_function__, because there is no __array_function__ until after coercion.

So ndarray coercion and everything after it has to remain a special case. ndarray.__array_function__ isn't fooling anyone.
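A small check makes the coercion point concrete (nothing here is NEP-specific): plain lists carry no __array_function__, so the dispatch machinery has nothing to consult, and only NumPy's own coercion can turn them into arrays first.

```python
import numpy as np

# Neither a list nor its elements define __array_function__,
# so dispatch has nothing to consult; coercion must come first.
assert not hasattr([1, 2], "__array_function__")

out = np.concatenate([[1, 2], [3, 4]])
assert isinstance(out, np.ndarray)
print(out.tolist())  # [1, 2, 3, 4]
```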

Also: if we add Stephan's __skip_array_function__ (or whatever we call it), then that's also incompatible with the idea that ndarray.__array_function__ is where the real work happens.

I'm starting to think ndarray.__array_function__ is a mistake – it was supposed to simplify the conceptual model, by letting us handle the fallback logic and the override logic using the same unified framework. But it fails.