>> I don't really understand the 'types' frozenset. The NEP says "it will
>> be used by most __array_function__ methods, which otherwise would need
>> to extract this information themselves"... but they still need to
>> extract the information themselves, because they still have to examine
>> each object and figure out what type it is. And, simply creating a
>> frozenset costs ~0.2 µs on my laptop, which is overhead that we can't
>> possibly optimize later...
>
>
> The most flexible alternative would be to just say that we provide a
> fixed-length iterable, and return a tuple object. (In my microbenchmarks,
> it's faster to make a tuple than a list or set.) In an early draft of the
> NEP, I proposed exactly this, but the speed difference seemed really
> marginal to me.
>
> I included 'types' in the interface because I really do think it's something
> that almost all __array_function__ implementations should use. It
> preserves a nice separation of concerns between dispatching logic and
> implementations for a new type. At least as long as __array_function__ is
> experimental, I don't think we should be encouraging people to write
> functions that could return NotImplemented directly and to rely entirely on
> the NumPy interface.
>
> Many but not all implementations will need to look at argument types. This
> is only really essential for cases where mixed operations between NumPy
> arrays and another type are allowed. If you only implement the NumPy
> interface for MyArray objects, then in the usual Python style you wouldn't
> need isinstance checks.
>
> It's also important from an ecosystem perspective. If we don't make it easy
> to get type information, my guess is that many __array_function__ authors
> wouldn't bother to return NotImplemented for unexpected types, which means
> that __array_function__ will break in weird ways when used with objects from
> unrelated libraries.

This is much more of a detail compared to the rest of the
discussion, so I don't want to quibble too much about it. (Especially
since if we keep things really-provisional, we can change our mind
about the argument later :-).) Mostly I'm just confused, because there
are lots of __dunder__ functions in Python (and NumPy), and none of
them take a special 'types' argument... so what's special about
__array_function__ that makes it necessary/worthwhile?

Any implementation of, say, concatenate-via-array_function is going to
involve iterating through all the arguments and looking at each of
them to figure out what kind of object it is and how to handle it,
right? That's true whether or not they've done a "pre-check" using the
types set, so in theory it's just as easy to return NotImplemented at
that point. But I guess your point in the last paragraph is that this
means there will be lots of chances to mess up the
NotImplemented-returning code in particular, especially since it's
less likely to be tested than the happy path, which seems plausible.
So basically the point of the types set is to let people factor out
that little bit of lots of functions into one common place? I guess
some careful devs might be unhappy with paying extra so that other
lazier devs can get away with being lazy, but maybe it's a good
tradeoff for us (esp. since as numpy devs, we'll be getting the bug
reports regardless :-)).
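(If the "one common place" framing is right, then what `types` buys you is that the whole NotImplemented-returning check can collapse into a tiny shared helper, rather than being re-implemented - and re-broken - inside every function's argument loop. A hypothetical sketch; `_HANDLED_TYPES` and `handles` are invented names:)

```python
import numpy as np

# The one shared place where the "do we recognize all these types?"
# decision lives, used by every __array_function__ implementation.
_HANDLED_TYPES = (np.ndarray, list, int, float)

def handles(types):
    """Return True iff every dispatching type is one we know about."""
    return all(issubclass(t, _HANDLED_TYPES) for t in types)
```

(Each implementation then starts with `if not handles(types): return NotImplemented` and never touches type-checking again.)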

If that's the goal, then it does make me wonder if there might be a
more direct way to accomplish it -- like, should we let classes define
an __array_function_types__ attribute that numpy would check before
even trying to dispatch to __array_function__?
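(A toy model of what that could look like - the dispatcher consults a class-level attribute and never calls `__array_function__` at all for unsupported combinations. To be clear, `__array_function_types__` and `would_dispatch_to` are purely speculative names for this thread, not actual NumPy API:)

```python
import numpy as np

class MyArray:
    pass

# Hypothetical declaration: "I can handle calls mixing these types."
MyArray.__array_function_types__ = (MyArray, np.ndarray)

def would_dispatch_to(args):
    """Toy dispatcher: return the class whose declared types cover all
    argument types, or None if nobody is willing (the NotImplemented
    case), without ever invoking __array_function__."""
    arg_types = {type(a) for a in args}
    for t in arg_types:
        declared = getattr(t, "__array_function_types__", None)
        if declared is not None and arg_types <= set(declared):
            return t
    return None
```

(So a mixed `MyArray`/`ndarray` call would dispatch to `MyArray`, while a call involving an undeclared type would be rejected up front.)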

I quite like that idea; I've not been enchanted by the extra `types` either - it seems that, like `method` in `__array_ufunc__`, it could become quite superfluous.

-- Marten