Matrix multiplication infix operator PEP nearly ready to go
Hi all,

The proposal to add an infix operator to Python for matrix multiplication is nearly ready for its debut on python-ideas; so if you want to look it over first, or just want to check out where it's gone, then now's a good time: https://github.com/numpy/numpy/pull/4351

The basic idea here is to try to make the strongest argument we can for the simplest extension that we actually want, and then whether it gets accepted or rejected, at least we'll know that's final.

Absolutely all comments and feedback welcome.

Cheers,
n

Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
On 3/12/2014 6:04 PM, Nathaniel Smith wrote:
The Semantics section still begins with 0d, then 2d, then 1d, then nd. Given the context of the proposal, the order should be:

2d (the core need expressed in the proposal)
nd (which generalizes the 2d behavior via broadcasting)
1d (special casing)
0d (error)

In this context I see one serious problem: is there a NumPy function that produces the proposed nd behavior? If not, why not? And can it really be sold as a core need if the need to implement it has never been pressed to the point of an implementation?

Unless this behavior is first implemented, the obvious question remains: why will `@` not just implement `dot`, for which there is a well-tested and much-used implementation?

Note that I am not taking a position on the semantics; I'm just pointing out a question that is sure to arise.

Cheers,
Alan
On Thu, Mar 13, 2014 at 1:03 AM, Alan G Isaac <alan.isaac@gmail.com> wrote:
> On 3/12/2014 6:04 PM, Nathaniel Smith wrote:
> The Semantics section still begins with 0d, then 2d, then 1d, then nd. Given the context of the proposal, the order should be:
>
> 2d (the core need expressed in the proposal)
> nd (which generalizes the 2d behavior via broadcasting)
> 1d (special casing)
> 0d (error)
I've just switched it to 2d > 1d > 3d+ > 0d. You're right that 2d should go first, but IMO 1d should go after it because 2d and 1d are the two cases that really get used heavily in practice.
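For concreteness, here's a rough pure-Python sketch of the 2d and 1d rules under discussion, using nested lists instead of real arrays. This is an illustration only, not the PEP's reference implementation; `matmul2d` and `matmul` are hypothetical names invented for this sketch.

```python
def matmul2d(a, b):
    """Plain 2d matrix multiplication: (n, k) @ (k, m) -> (n, m)."""
    n, k = len(a), len(a[0])
    k2, m = len(b), len(b[0])
    if k != k2:
        raise ValueError("shape mismatch: (%d, %d) @ (%d, %d)" % (n, k, k2, m))
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def matmul(a, b):
    """2d @ 2d plus the proposed 1d special-casing: a 1d left operand
    is treated as a row vector (a 1 is prepended to its shape), a 1d
    right operand as a column vector (a 1 is appended), and the added
    dimension is removed again from the result."""
    a_is_1d = not isinstance(a[0], list)
    b_is_1d = not isinstance(b[0], list)
    if a_is_1d:
        a = [a]                        # promote to a 1 x k row vector
    if b_is_1d:
        b = [[x] for x in b]           # promote to a k x 1 column vector
    out = matmul2d(a, b)
    if b_is_1d:
        out = [row[0] for row in out]  # drop the appended dimension
    if a_is_1d:
        out = out[0]                   # drop the prepended dimension
    return out

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
print(matmul([1, 2, 3], [4, 5, 6]))                # 32 (inner product)
```

Note how the 1d rule makes `vec @ vec` yield a scalar and `vec @ mat` yield a 1d result, rather than forcing users to reshape into explicit row/column matrices.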
> In this context I see one serious problem: is there a NumPy function that produces the proposed nd behavior? If not, why not? And can it really be sold as a core need if the need to implement it has never been pressed to the point of an implementation?
The logic isn't "we have a core need to implement these exact semantics". It's: "we have a core need for this operator; given that we are adding an operator we have to figure out exactly what the semantics should be; we did that and documented it and got consensus from a bunch of projects on it". I don't think the actual details of the semantics matter nearly as much as the fact that they exist.
> Unless this behavior is first implemented, the obvious question remains: why will `@` not just implement `dot`, for which there is a well-tested and much-used implementation?
Because of the reason above, I'm not sure it will come up (I don't think python-dev is nearly as familiar with the corner cases of numpy.dot as we are :)). But if it does, the answer is easy: no one ever thought through exactly how `dot` should work in these rare edge cases; now we have. But we can't just change `dot` quickly, because of backwards compatibility considerations. `@` is new, so there's no compatibility problem, and we might as well get it right from the start.

If the behavioural differences between `dot` and `@` were more controversial then I'd worry more. But the consequences of the 0d thing are trivial to understand, and in the 3d+ case we're already shipping dozens of functions that have exactly these broadcasting semantics.

n
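For readers unfamiliar with those semantics, here is a hedged pure-Python sketch of the "stack of matrices" behavior proposed for 3d+ operands: each 2d matrix in the leading stack dimension is multiplied with its counterpart. `stacked_matmul` and `matmul2d` are hypothetical names for illustration only (real arrays would also broadcast the stack dimensions, which this sketch omits; `numpy.dot` instead contracts the last axis of its first argument with the second-to-last axis of its second, producing a larger combined array).

```python
def matmul2d(a, b):
    """Plain 2d matrix multiplication on nested lists."""
    n, k = len(a), len(a[0])
    k2, m = len(b), len(b[0])
    if k != k2:
        raise ValueError("shape mismatch")
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def stacked_matmul(a_stack, b_stack):
    """Proposed 3d behavior: result[i] = a_stack[i] @ b_stack[i],
    i.e. matrix multiplication mapped over the stack dimension."""
    if len(a_stack) != len(b_stack):
        raise ValueError("stack lengths must match (broadcasting omitted)")
    return [matmul2d(a, b) for a, b in zip(a_stack, b_stack)]

eye = [[1, 0], [0, 1]]
twice = [[2, 0], [0, 2]]
print(stacked_matmul([eye, twice], [twice, twice]))
# [[[2, 0], [0, 2]], [[4, 0], [0, 4]]]
```

This is the same "apply the 2d operation across leading dimensions" pattern that NumPy's generalized ufuncs use, which is why the 3d+ case is presented as a straightforward generalization rather than new machinery.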
participants (2): Alan G Isaac, Nathaniel Smith