On Tue, Aug 21, 2018 at 9:39 AM, Stephan Hoyer <shoyer@gmail.com> wrote:

On Tue, Aug 21, 2018 at 12:21 AM Nathaniel Smith <njs@pobox.com> wrote:

On Wed, Aug 15, 2018 at 9:45 AM, Stephan Hoyer <shoyer@gmail.com> wrote:

This avoids a classic subclassing problem that has plagued NumPy for years, where overriding the behavior of method A causes apparently unrelated method B to break, because it relied on method A internally. In NumPy, this constrained our implementation of np.median(), because it needed to call np.mean() in order for subclasses implementing units to work properly.
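(For concreteness, here is a minimal sketch of that coupling, with a made-up subclass name; the mechanism itself is real: np.median() averages the middle element(s) by calling mean(), so an overridden mean() gets picked up automatically.)

    import numpy as np

    # Toy subclass: overriding mean() is enough to change what
    # np.median() does, because median averages the middle
    # element(s) by calling mean() internally.
    class LoudMean(np.ndarray):
        def mean(self, *args, **kwargs):
            print("mean() called")
            return super().mean(*args, **kwargs)

    x = np.arange(4).view(LoudMean)
    print(np.median(x))  # prints "mean() called", then 1.5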

I don't think I follow... if B uses A internally, then overriding A shouldn't cause B to break, unless the overridden A is buggy.

Let me try another example with arrays with units. My understanding of the contract provided by unit implementations is that their behavior should never deviate from NumPy unless an operation raises an error. (This is more explicit for arrays with units, because they raise errors for operations with incompatible units; but practically speaking, almost all duck arrays will have at least some unsupported operations in NumPy's giant API.)
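(To make that contract concrete, a toy sketch -- the Quantity class here is made up for illustration: it either matches plain ndarray behavior exactly, or refuses to answer.)

    import numpy as np

    # Toy unit wrapper: results match the plain ndarray result
    # exactly, except that incompatible units raise instead of
    # returning something unit-naive.
    class Quantity:
        def __init__(self, data, unit):
            self.data, self.unit = np.asarray(data), unit
        def __add__(self, other):
            if self.unit != other.unit:
                raise ValueError("incompatible units: %s, %s"
                                 % (self.unit, other.unit))
            return Quantity(self.data + other.data, self.unit)

    Quantity([1, 2], "m") + Quantity([3, 4], "m")   # same numbers as ndarray +
    # Quantity([1, 2], "m") + Quantity([3, 4], "s") # raises ValueError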

It is quite possible that NumPy functions could be (re)written in a way that is incompatible with some unit implementations but is perfectly valid for "full" duck arrays. We actually see this even within NumPy already -- for example, see this recent PR adding support for the datetime64 dtype to percentile: https://github.com/numpy/numpy/pull/11627
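(The gist, from memory -- this is a sketch of the idea rather than the exact diff in that PR: linear interpolation between two values can be written in two algebraically equivalent forms, and only one of them is expressible for datetime64.)

    import numpy as np

    a = np.datetime64("2018-01-01")
    b = np.datetime64("2018-01-03")
    t = 0.5

    # a*(1 - t) + b*t works for floats (and for arrays with units),
    # but datetime64 has no multiplication and no datetime + datetime.
    # a + (b - a)*t works: datetime - datetime gives a timedelta,
    # which can be scaled and added back to a datetime.
    print(a + (b - a) * t)  # 2018-01-02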

I clicked the link, but I don't see anything about units? Of course units are a tricky example to make inferences from, because they aren't a good fit for the duck array concept in general. (In terms of numpy's core semantics, data-with-units is a special dtype, not a special container type.)

From your mention of "full" duck arrays I guess you're thinking of this distinction?: http://www.numpy.org/neps/nep-0022-ndarray-duck-typing-overview.html#princip...

You're right: if numpy changes the implementation of some high-level function to use protocol-A instead of protocol-B, and there's some partial-duck-array that only implements protocol-B, then it gets broken. Of course, in general __array_function__ has the same problem: if sklearn changes their implementation of some function to call numpy function A instead of numpy function B, and there's a partial-duck-array that only implements numpy function B, then sklearn is broken. I think that as duck arrays roll out, we're just going to have to get used to dealing with breakage like this sometimes.

The advantage of __array_function__ is that we get to ignore these issues within numpy itself. The advantage of having focused protocols is that they make it easier to implement full duck arrays, and they give us a vocabulary for talking about degrees of partiality. For example, with __array_concatenate__, a duck array either supports all the concatenation/stacking operations or none of them – so sklearn never has to worry that switching between np.row_stack and np.stack will cause issues.
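(Sketch of what I mean, with a made-up signature for the hook: one focused protocol turns "does this object support stacking?" into a single capability check.)

    # Made-up signature -- the point is just that np.concatenate,
    # np.stack, np.row_stack, etc. would all funnel through this
    # one hook, so support is all-or-nothing by construction.
    class DuckArray:
        def __array_concatenate__(self, arrays, axis=0):
            raise NotImplementedError  # real code would copy blocks here

    # Downstream code gets one capability check instead of probing
    # every function in the concatenate/stack family separately:
    def supports_stacking(x):
        return hasattr(x, "__array_concatenate__")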

A lesser case of this are changes in NumPy causing performance issues for users of duck arrays, which is basically inevitable if we share implementations.

NumPy (and Python in general) is never going to make everything 100% optimized all the time. Over and over we choose to accept small inefficiencies in order to improve maintainability. How big are these inefficiencies – 1% overhead, 10% overhead, 10x overhead...? Do they show up everywhere, or just for a few key functions? What's the maintenance cost of making NumPy's whole API overrideable, in terms of making it harder for us to evolve numpy? What about for users dealing with a proliferation of subtly incompatible implementations? You may be right that the tradeoffs work out so that every API needs to be individually overridable and the benefits are worth it, but we at least need to be asking these questions.

And when we fix a bug in row_stack, this means we also have to fix it in all the copy-paste versions, which won't happen, so np.row_stack has different semantics on different objects, even if they started out matching. The NDArrayOperatorsMixin reduces the number of duplicate copies of the same code that need to be updated, but 2 copies is still a lot worse than 1 copy :-).
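(For reference, the mixin here is numpy.lib.mixins.NDArrayOperatorsMixin: it derives all the operator special methods from a single __array_ufunc__, roughly like this -- the Wrapped class is made up:)

    import numpy as np
    from numpy.lib.mixins import NDArrayOperatorsMixin

    # The mixin fills in +, -, *, comparisons, etc. by calling the
    # corresponding ufunc, which then dispatches to __array_ufunc__,
    # so a duck array writes the unwrap/rewrap logic exactly once.
    class Wrapped(NDArrayOperatorsMixin):
        def __init__(self, data):
            self.data = np.asarray(data)
        def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
            arrays = [x.data if isinstance(x, Wrapped) else x
                      for x in inputs]
            return Wrapped(getattr(ufunc, method)(*arrays, **kwargs))

    w = Wrapped([1, 2]) + Wrapped([3, 4])
    print(w.data)  # [4 6]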

I see your point, but in all seriousness, if we encounter a bug in np.row_stack at this point we might just call it a feature instead.

Yeah, you're right, row_stack is a bad example :-). But of course the point is that it's literally any bug-fix or added feature in numpy's public API.

Here's a better, more concrete example: back in 2015, you added np.stack (PR #5605), which was a great new feature. Its implementation was entirely in terms of np.concatenate and other basic APIs like .ndim, asanyarray, etc. (see the sketch below). In the smallish-set-of-designed-protocols world, as soon as that's merged into numpy, you're done: it works on sparse arrays, dask arrays, tensorflow tensors, etc. People can use it as soon as they upgrade their numpy. In the __array_function__ world, merging into numpy is only the beginning: now you have to go make new PRs to sparse, dask, tensorflow, etc., get them merged, released, etc. Downstream projects may refuse to use it until it's supported in multiple projects that have their own release cycles, etc.

Or another example: at a workshop a few years ago, Matti put up some of the source code to numpypy to demonstrate what it looked like. I immediately spotted a subtle bug, because I happened to know that it was one we'd found and fixed recently. (IIRC it was the thing where arr[...] should return a view of arr, not arr itself.) Of course indexing for duck arrays is its own mess that's somewhat orthogonal to __array_function__, but the basic point is that numpy has a lot of complex error-prone semantics, and we are still actively finding and fixing issues in numpy's own implementations.
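(Simplified sketch of that np.stack implementation -- the real one adds shape and axis error checking, but everything reduces to expand_dims + concatenate:)

    import numpy as np

    # Simplified sketch of np.stack's implementation: once
    # concatenate is overridable, stack comes along for free.
    def stack(arrays, axis=0):
        expanded = [np.expand_dims(a, axis) for a in arrays]
        return np.concatenate(expanded, axis=axis)

    print(stack([np.zeros(3), np.ones(3)]).shape)  # (2, 3)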

1. The details of how NumPy implements a high-level function in terms of overloaded functions now becomes an implicit part of NumPy’s public API. For example, refactoring stack to use np.block() instead of np.concatenate() internally would now become a breaking change.

The way I'm imagining this would work is, we guarantee not to take a function that used to be implemented in terms of overridable operations, and refactor it so it's implemented in terms of overridable operations. So long as people have correct implementations of __array_concatenate__ and __array_block__, they shouldn't care which one we use. In the interim period where we have __array_concatenate__ but there's no such thing as __array_block__, then that refactoring would indeed break things, so we shouldn't do that :-). But we could fix that by adding __array_block__.

""we guarantee not to take a function that used to be implemented in terms of overridable operations, and refactor it so it's implemented in terms of overridable operations" Did you miss a "not" in here somewhere, e.g., "refactor it so it's NOT implemented"?

Yeah, sorry.

If we ever tried to do something like this, I'm pretty sure that it just wouldn't happen -- unless we also change NumPy's extremely conservative approach to breaking third-party code. np.block() is much more complex to implement than np.concatenate(), and users would resist being forced to handle that complexity if they don't need it. (Example: TensorFlow has a concatenate function, but not block.)
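(To illustrate the complexity gap: concatenate takes a flat sequence and one axis, while block takes arbitrarily nested lists and assembles along multiple axes at once.)

    import numpy as np

    A = np.ones((2, 2))
    B = np.zeros((2, 2))

    # concatenate: flat sequence of arrays, a single axis.
    print(np.concatenate([A, B], axis=1).shape)  # (2, 4)

    # block: nested lists assembled along multiple axes at once --
    # much more bookkeeping for an implementer to reproduce.
    print(np.block([[A, B],
                    [B, A]]).shape)              # (4, 4)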

I agree, we probably wouldn't do this particular change.

2. Array libraries may prefer to implement high-level functions differently than NumPy. For example, a library might prefer to implement a fundamental operation like mean() directly rather than relying on sum() followed by division. More generally, it’s not clear yet what exactly qualifies as core functionality, and figuring this out could be a large project.

True. And this is a very general problem... for example, the appropriate way to implement logistic regression is very different in-core versus out-of-core. You're never going to be able to take code written for ndarray, drop in an arbitrary new array object, and get optimal results in all cases -- that's just way too ambitious to hope for.

There will be cases where reducing to operations like sum() and division is fine. There will be cases where you have a high-level operation like logistic regression, where reducing to sum() and division doesn't work, but reducing to slightly-higher-level operations like np.mean also doesn't work, because you need to redo the whole high-level operation. And then there will be cases where sum() and division are too low-level, but mean() is high-level enough to make the critical difference. It's that last one where it's important to be able to override mean() directly. Are there a lot of cases like this?
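(One concrete instance where mean() is high-level enough to matter and sum()-then-divide can't be rescued: low-precision dtypes. Numpy's own mean() documents that it uses float32 intermediates for float16 input.)

    import numpy as np

    x = np.full(10000, 100.0, dtype=np.float16)

    # Synthesizing mean() as sum()/size overflows float16's range
    # (max ~65504), so the running sum saturates to inf:
    print(x.sum() / x.size)  # inf

    # A dedicated mean() can pick a wider accumulator; numpy's own
    # implementation uses float32 intermediates for float16 input:
    print(x.mean())          # 100.0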

mean() is not entirely hypothetical. TensorFlow and Eigen actually do implement mean separately from sum, though to be honest it's not entirely clear to me why:
https://github.com/tensorflow/tensorflow/blob/1c1dad105a57bb13711492a8ba5ab9...
https://eigen.tuxfamily.org/dox/unsupported/TensorFunctors_8h_source.html

I do think this probably will come up with some frequency for other operations, but the bigger answer here really is consistency -- it allows projects and their users to have very clearly defined dependencies on NumPy's API. They don't need to worry about any implementation details from NumPy leaking into their override of a function.

When you say "consistency" here that means: "they can be sure that when they disagree with the numpy devs about the semantics/implementation of a numpy API, then the numpy API will consistently act the way they want, not the way the numpy devs want". Right? This is a very double-edged sword :-).

3. We don’t yet have an overloading system for attributes and methods on array objects, e.g., for accessing .dtype and .shape. This should be the subject of a future NEP, but until then we should be reluctant to rely on these properties.

This one I don't understand. If you have a duck-array object, and you want to access its .dtype or .shape attributes, you just... write myobj.dtype or myobj.shape? That doesn't need a NEP though so I must be missing something :-).

We don't yet have np.asduckarray() (or whatever we'll end up calling our proposed casting function from NEP 22), so we don't have a fully fleshed out mechanism for NumPy to declare "this object needs to support .shape and .dtype, or I'm going to cast it into something that does".
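(Purely hypothetical sketch -- the name and the criteria are placeholders until that NEP exists:)

    import numpy as np

    # Hypothetical: not a real numpy function, just the shape of the
    # mechanism being described. Name and checks are placeholders.
    def asduckarray(obj):
        if hasattr(obj, "shape") and hasattr(obj, "dtype"):
            return obj            # assume it quacks like an array
        return np.asarray(obj)    # otherwise coerce to a real ndarray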

That's true, but it's just as big a problem for NEP 18, because __array_function__ is never going to do much if you've already coerced the thing to an ndarray. Some kind of asduckarray solution is basically a prerequisite to any other duck array features.

-n

--
Nathaniel J. Smith -- https://vorpus.org