It looks like Py 3.5 will include a dedicated infix matrix multiply operator
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/ Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up. -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
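For readers following along: PEP 465 makes '@' work the same way the other binary operators do, by dispatching to a new special method. The toy class below is a hypothetical pure-Python sketch of that dispatch (it is not NumPy's implementation), with a naive O(n^3) product over nested lists:

```python
# Hypothetical sketch of the hook PEP 465 proposes: 'a @ b' dispatches to
# a.__matmul__(b), just as 'a * b' dispatches to a.__mul__(b).
class Matrix:
    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):
        # Naive O(n^3) matrix product over nested lists.
        return Matrix([[sum(x * y for x, y in zip(row, col))
                        for col in zip(*other.rows)]
                       for row in self.rows])

a = Matrix([[1, 2], [3, 4]])
b = Matrix([[5, 6], [7, 8]])
print((a @ b).rows)  # [[19, 22], [43, 50]]
```

The '@' syntax itself only exists on Python 3.5+, of course.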
That's the best news I've had all week. Thanks for all your work on this, Nathan. -A On Fri, Mar 14, 2014 at 8:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
This is great news. Excellent work Nathaniel and all others! Frédéric On Fri, Mar 14, 2014 at 8:57 PM, Aron Ahmadia <aron@ahmadia.net> wrote:
That's the best news I've had all week.
Thanks for all your work on this Nathan.
A
On Fri, Mar 14, 2014 at 8:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
This is good for NumPyists, but this operator could also help in other contexts. As a math user, I was at first very skeptical, but finally this is good news for non-NumPyists too. Christophe BAL On 15 March 2014 at 02:01, "Frédéric Bastien" <nouiz@nouiz.org> wrote:
This is great news. Excellent work Nathaniel and all others!
Frédéric
On Fri, Mar 14, 2014 at 8:57 PM, Aron Ahmadia <aron@ahmadia.net> wrote:
That's the best news I've had all week.
Thanks for all your work on this Nathan.
A
On Fri, Mar 14, 2014 at 8:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
That's great. Does this mean that, in the not-so-distant future, the matrix class will go the way of the dodo? I have had more subtle-to-fix bugs sneak into code because something returns a matrix instead of an array than from almost any other single source I can think of. Having two almost indistinguishable types for 2-d arrays, with slightly different semantics for a small subset of operations, is terrible. Best, C -- Chris Laumann Sent with Airmail On March 14, 2014 at 7:16:24 PM, Christophe Bal (projetmbc@gmail.com) wrote: This is good for NumPyists, but this operator could also help in other contexts. As a math user, I was at first very skeptical, but finally this is good news for non-NumPyists too. Christophe BAL On 15 March 2014 at 02:01, "Frédéric Bastien" <nouiz@nouiz.org> wrote: This is great news. Excellent work Nathaniel and all others! Frédéric On Fri, Mar 14, 2014 at 8:57 PM, Aron Ahmadia <aron@ahmadia.net> wrote:
That's the best news I've had all week.
Thanks for all your work on this Nathan.
A
On Fri, Mar 14, 2014 at 8:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
On Sat, Mar 15, 2014 at 3:18 AM, Chris Laumann <chris.laumann@gmail.com> wrote:
That’s great.
Does this mean that, in the not-so-distant future, the matrix class will go the way of the dodo? I have had more subtle-to-fix bugs sneak into code because something returns a matrix instead of an array than from almost any other single source I can think of. Having two almost indistinguishable types for 2-d arrays, with slightly different semantics for a small subset of operations, is terrible.
Well, it depends on what your definition of "distant" is :). Py 3.5 won't be out for some time (3.*4* is coming out this week). And we'll still need to sit down and figure out if there are any bits of matrix we want to save (e.g., maybe create an ndarray version of the parser used for np.matrix("1 2; 3 4")), come up with a transition plan, have a long mailing list argument about it, etc. But the goal (IMO) is definitely to get rid of np.matrix as soon as is reasonable given these considerations, and similarly to find a way to switch scipy.sparse matrices to a more ndarray-like API. So it'll be a few years at least, but I think we'll get there. -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
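To make the pitfall Chris describes concrete, here is a pure-Python mock (not NumPy's actual classes) of two near-indistinguishable 2-d types whose '*' semantics differ; a function that quietly returns the matrix-flavored type changes numerical results without raising any error:

```python
class Array2D:
    """Toy ndarray stand-in: '*' is elementwise."""
    def __init__(self, rows):
        self.rows = rows
    def __mul__(self, other):
        return Array2D([[x * y for x, y in zip(r, s)]
                        for r, s in zip(self.rows, other.rows)])

class Matrix2D(Array2D):
    """Toy np.matrix stand-in: same API, but '*' is a matrix product."""
    def __mul__(self, other):
        return Matrix2D([[sum(x * y for x, y in zip(row, col))
                          for col in zip(*other.rows)]
                         for row in self.rows])

data = [[1, 2], [3, 4]]
a, m = Array2D(data), Matrix2D(data)
print((a * a).rows)  # [[1, 4], [9, 16]]   elementwise square
print((m * m).rows)  # [[7, 10], [15, 22]] matrix square -- same '*'!
```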
Congratulations Nathaniel! This is great news! Well done on starting the process and taking things forward. Travis On Mar 14, 2014 7:51 PM, "Nathaniel Smith" <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
On Fri, Mar 14, 2014 at 6:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
Surprisingly little discussion on python-ideas, or so it seemed to me. Guido came out in favor less than halfway through. Congratulations on putting together a successful proposal; many of us had given up on ever seeing a matrix multiplication operator. Chuck
Note that I am not opposed to extra operators in Python, and only mildly opposed to a matrix multiplication operator in numpy; but let me lay out the case against, for your consideration. First of all, the use of matrix semantics relative to array semantics is extremely rare; even in linear-algebra-heavy code, array semantics often dominate. As such, the default of array semantics for numpy has been a great choice. I've never looked back at MATLAB semantics. Secondly, I feel the urge to conform to a historical mathematical notation is misguided, especially for the problem domain of linear algebra. Perhaps in the world of mathematics your operation is associative or commutes, but on your computer, the order of operations will influence both outcomes and performance. Even for products, we usually care not only about the outcome, but also about how that outcome is arrived at. And along the same lines, I don't suppose I need to explain how I feel about A@@-1 and the like. Sure, it isn't too hard to learn or infer that this implies a matrix inverse, but why on earth would I want to pretend the rich complexity of numerical matrix inversion can be mangled into one symbol? I'd much rather write inv or pinv, or whatever particular algorithm happens to be called for in the situation. Considering this isn't the num-lisp discussion group, I suppose I am hardly the only one who feels so. On the whole, I feel the @ operator is mostly superfluous. I prefer to be explicit about where I place my brackets. I prefer to be explicit about the data layout and axes that go into a (multi)linear product, rather than rely on obtuse row/column conventions which are not transparent across function calls. When I do linear algebra, it is almost always vectorized over additional axes; how does a special operator which is only well defined for a few special cases of 2-d and 1-d tensors help me with that?
On the whole, the linear algebra conventions inspired by the particular constraints of people working with blackboards are a rather ugly and hacky beast in my opinion, which I feel no inclination to emulate. As a side note to the contrary: I love using broadcasting semantics when writing papers. Sure, your reviewers will balk at it, but it wouldn't do to give the dinosaurs the last word on what any given formal language ought to be like. We get to define the future, and I'm not sure the set of conventions that goes under the name of 'matrix multiplication' is one of particular importance to the future of numerical linear algebra. Note that I don't think there is much harm in an @ operator; but I don't see myself using it either. Aside from making textbook examples like a Gram-Schmidt orthogonalization more compact to write, I don't see it having much of an impact in the real world. On Sat, Mar 15, 2014 at 3:52 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Fri, Mar 14, 2014 at 6:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
Surprisingly little discussion on python-ideas, or so it seemed to me. Guido came out in favor less than halfway through. Congratulations on putting together a successful proposal; many of us had given up on ever seeing a matrix multiplication operator.
Chuck
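A back-of-the-envelope sketch of the point above that order of operations affects performance even when the mathematical result is nominally the same: multiplying an (n, m) matrix by an (m, p) matrix costs roughly n*m*p scalar multiplications, so the two parenthesizations of A @ B @ x differ enormously (the shapes below are made up for illustration):

```python
def matmul_cost(a, b):
    # a, b are (rows, cols) shapes; returns the result shape and the
    # approximate scalar-multiplication count n*m*p.
    (n, m), (m2, p) = a, b
    assert m == m2, "inner dimensions must agree"
    return (n, p), n * m * p

N = 1000
A = B = (N, N)
x = (N, 1)

# ((A @ B) @ x): one N*N*N product, then an N*N*1 product.
shape, c1 = matmul_cost(A, B)
_, c2 = matmul_cost(shape, x)
left_first = c1 + c2          # 10**9 + 10**6

# (A @ (B @ x)): two N*N*1 products.
shape, c3 = matmul_cost(B, x)
_, c4 = matmul_cost(A, shape)
right_first = c3 + c4         # 2 * 10**6

print(left_first // right_first)  # 500
```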
This is awesome! Congrats! On Sun, Mar 16, 2014 at 9:39 AM, Eelco Hoogendoorn <hoogendoorn.eelco@gmail.com> wrote:
Note that I am not opposed to extra operators in Python, and only mildly opposed to a matrix multiplication operator in numpy; but let me lay out the case against, for your consideration.
First of all, the use of matrix semantics relative to array semantics is extremely rare; even in linear-algebra-heavy code, array semantics often dominate. As such, the default of array semantics for numpy has been a great choice. I've never looked back at MATLAB semantics.
Secondly, I feel the urge to conform to a historical mathematical notation is misguided, especially for the problem domain of linear algebra. Perhaps in the world of mathematics your operation is associative or commutes, but on your computer, the order of operations will influence both outcomes and performance. Even for products, we usually care not only about the outcome, but also about how that outcome is arrived at. And along the same lines, I don't suppose I need to explain how I feel about A@@-1 and the like. Sure, it isn't too hard to learn or infer that this implies a matrix inverse, but why on earth would I want to pretend the rich complexity of numerical matrix inversion can be mangled into one symbol? I'd much rather write inv or pinv, or whatever particular algorithm happens to be called for in the situation. Considering this isn't the num-lisp discussion group, I suppose I am hardly the only one who feels so.
On the whole, I feel the @ operator is mostly superfluous. I prefer to be explicit about where I place my brackets. I prefer to be explicit about the data layout and axes that go into a (multi)linear product, rather than rely on obtuse row/column conventions which are not transparent across function calls. When I do linear algebra, it is almost always vectorized over additional axes; how does a special operator which is only well defined for a few special cases of 2-d and 1-d tensors help me with that? On the whole, the linear algebra conventions inspired by the particular constraints of people working with blackboards are a rather ugly and hacky beast in my opinion, which I feel no inclination to emulate. As a side note to the contrary: I love using broadcasting semantics when writing papers. Sure, your reviewers will balk at it, but it wouldn't do to give the dinosaurs the last word on what any given formal language ought to be like. We get to define the future, and I'm not sure the set of conventions that goes under the name of 'matrix multiplication' is one of particular importance to the future of numerical linear algebra.
Note that I don't think there is much harm in an @ operator; but I don't see myself using it either. Aside from making textbook examples like a Gram-Schmidt orthogonalization more compact to write, I don't see it having much of an impact in the real world.
On Sat, Mar 15, 2014 at 3:52 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
On Fri, Mar 14, 2014 at 6:51 PM, Nathaniel Smith <njs@pobox.com> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/
Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way -- this is just a heads-up.
Surprisingly little discussion on python-ideas, or so it seemed to me. Guido came out in favor less than halfway through. Congratulations on putting together a successful proposal; many of us had given up on ever seeing a matrix multiplication operator.
Chuck
On Sun, Mar 16, 2014 at 2:39 PM, Eelco Hoogendoorn <hoogendoorn.eelco@gmail.com> wrote:
Note that I am not opposed to extra operators in Python, and only mildly opposed to a matrix multiplication operator in numpy; but let me lay out the case against, for your consideration.
First of all, the use of matrix semantics relative to array semantics is extremely rare; even in linear-algebra-heavy code, array semantics often dominate. As such, the default of array semantics for numpy has been a great choice. I've never looked back at MATLAB semantics.
Different people work on different code and have different experiences here -- yours may or may not be typical. Pauli did some quick checks on scikit-learn & nipy & scipy, and found that in their test suites, uses of np.dot and uses of elementwise multiplication are ~equally common: https://github.com/numpy/numpy/pull/4351#issuecomment-37717330
Secondly, I feel the urge to conform to a historical mathematical notation is misguided, especially for the problem domain of linear algebra. Perhaps in the world of mathematics your operation is associative or commutes, but on your computer, the order of operations will influence both outcomes and performance. Even for products, we usually care not only about the outcome, but also about how that outcome is arrived at. And along the same lines, I don't suppose I need to explain how I feel about A@@-1 and the like. Sure, it isn't too hard to learn or infer that this implies a matrix inverse, but why on earth would I want to pretend the rich complexity of numerical matrix inversion can be mangled into one symbol? I'd much rather write inv or pinv, or whatever particular algorithm happens to be called for in the situation. Considering this isn't the num-lisp discussion group, I suppose I am hardly the only one who feels so.
My impression from the other thread is that @@ probably won't end up existing, so you're safe here ;).
On the whole, I feel the @ operator is mostly superfluous. I prefer to be explicit about where I place my brackets. I prefer to be explicit about the data layout and axes that go into a (multi)linear product, rather than rely on obtuse row/column conventions which are not transparent across function calls. When I do linear algebra, it is almost always vectorized over additional axes; how does a special operator which is only well defined for a few special cases of 2-d and 1-d tensors help me with that?
Einstein notation is coming up on its 100th birthday and is just as blackboard-friendly as matrix product notation. Yet there's still a huge number of domains where the matrix notation dominates. It's cool if you aren't one of the people who find it useful, but I don't think it's going anywhere soon.
Note that I don't think there is much harm in an @ operator; but I don't see myself using it either. Aside from making textbook examples like a Gram-Schmidt orthogonalization more compact to write, I don't see it having much of an impact in the real world.
The analysis in the PEP found ~780 calls to np.dot, just in the two projects I happened to look at. @ will get tons of use in the real world. Maybe all those people who will be using it would be happier if they were using einsum instead, I dunno, but it's an argument you'll have to convince them of, not me :). -n -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
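The kind of quick usage survey mentioned above can be approximated in a few lines; the `source` snippet and the regexes below are illustrative assumptions, not the actual methodology used on those projects:

```python
import re

# Hypothetical code sample standing in for a project's source tree.
source = """
y = np.dot(X, beta)
resid = y - X.dot(beta)
scaled = weights * resid
gram = np.dot(X.T, X)
"""

dot_calls = len(re.findall(r"\bdot\(", source))  # np.dot / .dot calls
star_ops = len(re.findall(r" \* ", source))      # elementwise '*' uses
print(dot_calls, star_ops)  # 3 1
```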
On Sun, Mar 16, 2014 at 10:54 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Sun, Mar 16, 2014 at 2:39 PM, Eelco Hoogendoorn <hoogendoorn.eelco@gmail.com> wrote:
Note that I am not opposed to extra operators in Python, and only mildly opposed to a matrix multiplication operator in numpy; but let me lay out the case against, for your consideration.
First of all, the use of matrix semantics relative to array semantics is extremely rare; even in linear-algebra-heavy code, array semantics often dominate. As such, the default of array semantics for numpy has been a great choice. I've never looked back at MATLAB semantics.
Different people work on different code and have different experiences here -- yours may or may not be typical. Pauli did some quick checks on scikit-learn & nipy & scipy, and found that in their test suites, uses of np.dot and uses of elementwise multiplication are ~equally common: https://github.com/numpy/numpy/pull/4351#issuecomment-37717330
Secondly, I feel the urge to conform to a historical mathematical notation is misguided, especially for the problem domain of linear algebra. Perhaps in the world of mathematics your operation is associative or commutes, but on your computer, the order of operations will influence both outcomes and performance. Even for products, we usually care not only about the outcome, but also about how that outcome is arrived at. And along the same lines, I don't suppose I need to explain how I feel about A@@-1 and the like. Sure, it isn't too hard to learn or infer that this implies a matrix inverse, but why on earth would I want to pretend the rich complexity of numerical matrix inversion can be mangled into one symbol? I'd much rather write inv or pinv, or whatever particular algorithm happens to be called for in the situation. Considering this isn't the num-lisp discussion group, I suppose I am hardly the only one who feels so.
My impression from the other thread is that @@ probably won't end up existing, so you're safe here ;).
On the whole, I feel the @ operator is mostly superfluous. I prefer to be explicit about where I place my brackets. I prefer to be explicit about the data layout and axes that go into a (multi)linear product, rather than rely on obtuse row/column conventions which are not transparent across function calls. When I do linear algebra, it is almost always vectorized over additional axes; how does a special operator which is only well defined for a few special cases of 2-d and 1-d tensors help me with that?
Einstein notation is coming up on its 100th birthday and is just as blackboard-friendly as matrix product notation. Yet there's still a huge number of domains where the matrix notation dominates. It's cool if you aren't one of the people who find it useful, but I don't think it's going anywhere soon.
Note that I don't think there is much harm in an @ operator; but I don't see myself using it either. Aside from making textbook examples like a Gram-Schmidt orthogonalization more compact to write, I don't see it having much of an impact in the real world.
The analysis in the PEP found ~780 calls to np.dot, just in the two projects I happened to look at. @ will get tons of use in the real world. Maybe all those people who will be using it would be happier if they were using einsum instead, I dunno, but it's an argument you'll have to convince them of, not me :).
Just as an example: I just read for the first time two journal articles in econometrics that use Einstein summation notation. I have no idea what their formulas are supposed to mean; no sum signs and no matrix algebra. I would need a strong incentive to stare at those formulas again. (A statsmodels search finds 15-20 "dot", including sandbox and examples.) Josef <TODO: learn how to use einsums>
n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
On Sunday, March 16, 2014, <josef.pktd@gmail.com> wrote:
On Sun, Mar 16, 2014 at 10:54 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Sun, Mar 16, 2014 at 2:39 PM, Eelco Hoogendoorn <hoogendoorn.eelco@gmail.com> wrote:
Note that I am not opposed to extra operators in Python, and only mildly opposed to a matrix multiplication operator in numpy; but let me lay out the case against, for your consideration.
First of all, the use of matrix semantics relative to array semantics is extremely rare; even in linear-algebra-heavy code, array semantics often dominate. As such, the default of array semantics for numpy has been a great choice. I've never looked back at MATLAB semantics.
Different people work on different code and have different experiences here -- yours may or may not be typical. Pauli did some quick checks on scikit-learn & nipy & scipy, and found that in their test suites, uses of np.dot and uses of elementwise multiplication are ~equally common: https://github.com/numpy/numpy/pull/4351#issuecomment-37717330
Secondly, I feel the urge to conform to a historical mathematical notation is misguided, especially for the problem domain of linear algebra. Perhaps in the world of mathematics your operation is associative or commutes, but on your computer, the order of operations will influence both outcomes and performance. Even for products, we usually care not only about the outcome, but also about how that outcome is arrived at. And along the same lines, I don't suppose I need to explain how I feel about A@@-1 and the like. Sure, it isn't too hard to learn or infer that this implies a matrix inverse, but why on earth would I want to pretend the rich complexity of numerical matrix inversion can be mangled into one symbol? I'd much rather write inv or pinv, or whatever particular algorithm happens to be called for in the situation. Considering this isn't the num-lisp discussion group, I suppose I am hardly the only one who feels so.
My impression from the other thread is that @@ probably won't end up existing, so you're safe here ;).
On the whole, I feel the @ operator is mostly superfluous. I prefer to be explicit about where I place my brackets. I prefer to be explicit about the data layout and axes that go into a (multi)linear product, rather than rely on obtuse row/column conventions which are not transparent across function calls. When I do linear algebra, it is almost always vectorized over additional axes; how does a special operator which is only well defined for a few special cases of 2-d and 1-d tensors help me with that?
Einstein notation is coming up on its 100th birthday and is just as blackboard-friendly as matrix product notation. Yet there's still a huge number of domains where the matrix notation dominates. It's cool if you aren't one of the people who find it useful, but I don't think it's going anywhere soon.
Note that I don't think there is much harm in an @ operator; but I don't see myself using it either. Aside from making textbook examples like a Gram-Schmidt orthogonalization more compact to write, I don't see it having much of an impact in the real world.
The analysis in the PEP found ~780 calls to np.dot, just in the two projects I happened to look at. @ will get tons of use in the real world. Maybe all those people who will be using it would be happier if they were using einsum instead, I dunno, but it's an argument you'll have to convince them of, not me :).
Just as an example: I just read for the first time two journal articles in econometrics that use Einstein summation notation. I have no idea what their formulas are supposed to mean; no sum signs and no matrix algebra. I would need a strong incentive to stare at those formulas again.
(A statsmodels search finds 15-20 "dot", including sandbox and examples.)
Josef <TODO: learn how to use einsums>
n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
An important distinction between calling dot or @ is that matrix multiplication is a domain where enormous effort has already been spent on algorithms and on building fast, scalable libraries. Yes, einsum can call these for some subset of calls, but it's also trivial to set up a case where it can't. This is a huge pitfall, because it hides that complexity. Matrix-matrix and matrix-vector products are the fundamental operations; generalized multilinear products etc. are not. Einsum, despite the brevity that it can provide, is too general to make a basic building block. There isn't a good way to reason about its runtime. Eric
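Eric's runtime point can be illustrated with a toy cost model (an assumption-laden sketch, not a real einsum planner): a single naive loop nest for an einsum-style contraction iterates over every distinct index, so its cost is the product of all index extents, and adding one operand to the subscripts silently raises the exponent, while chained BLAS matmuls would stay cubic:

```python
def naive_einsum_cost(subscripts, extent):
    # Iteration count of one naive loop nest: the product of the extents
    # of every distinct index (free and summed), assuming all axes have
    # the same size `extent`.
    inputs, _ = subscripts.split("->")
    indices = set("".join(inputs.split(",")))
    return extent ** len(indices)

n = 100
print(naive_einsum_cost("ij,jk->ik", n))     # 1000000   (n**3, like a matmul)
print(naive_einsum_cost("ij,jk,kl->il", n))  # 100000000 (n**4 in one loop nest)
# Done as two matrix products instead, 'ij,jk,kl->il' needs only ~2 * n**3 ops.
```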
Different people work on different code and have different experiences here -- yours may or may not be typical. Pauli did some quick checks on scikit-learn & nipy & scipy, and found that in their test suites, uses of np.dot and uses of elementwise multiplication are ~equally common: https://github.com/numpy/numpy/pull/4351#issuecomment-37717330 Yeah; these are examples of linalg-heavy packages. Even there, dot does not dominate. My impression from the other thread is that @@ probably won't end up existing, so you're safe here ;). I know; my point is that the same objections apply to @, albeit in weaker form. Einstein notation is coming up on its 100th birthday and is just as blackboard-friendly as matrix product notation. Yet there's still a huge number of domains where the matrix notation dominates. It's cool if you aren't one of the people who find it useful, but I don't think it's going anywhere soon. Einstein notation is just as blackboard-friendly, but also much more computer-future-proof. I am not saying matrix multiplication is going anywhere soon; but as far as I can tell that is all inertia; historical circumstance has not accidentally prepared it well for numerical needs. The analysis in the PEP found ~780 calls to np.dot, just in the two projects I happened to look at. @ will get tons of use in the real world. Maybe all those people who will be using it would be happier if they were using einsum instead, I dunno, but it's an argument you'll have to convince them of, not me :). 780 calls is not tons of use, and these projects are outliers, I'd argue. I just read for the first time two journal articles in econometrics that use Einstein summation notation. I have no idea what their formulas are supposed to mean; no sum signs and no matrix algebra. If they could have been expressed more clearly otherwise, of course this is what they should have done; but could they?
b_i = A_ij x_j isnt exactly hard to read, but if it was some form of complicated product, its probably tensor notation was their best bet.
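For illustration, the b_i = A_ij x_j example above maps directly onto NumPy's einsum. A minimal sketch with made-up values:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([10.0, 20.0])

# b_i = A_ij x_j: the repeated index j is summed over, the free index i remains.
b = np.einsum('ij,j->i', A, x)

assert np.allclose(b, A.dot(x))  # identical to the matrix-vector product
```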
On Sun, Mar 16, 2014 at 4:33 PM, Eelco Hoogendoorn <hoogendoorn.eelco@gmail.com> wrote:
Different people work on different code and have different experiences here; yours may or may not be typical. Pauli did some quick checks on scikit-learn & nipy & scipy, and found that in their test suites, uses of np.dot and uses of elementwise multiplication are ~equally common: https://github.com/numpy/numpy/pull/4351#issuecomment-37717330
Yeah; these are examples of linalg-heavy packages. Even there, dot does not dominate.
Not sure what makes them "linalg-heavy"; they're just trying to cover two application areas, machine learning and neuroscience. If that turns out to involve a lot of linear algebra, well, then...
780 calls is not tons of use, and these projects are outliers, I'd argue.
But you haven't argued! You've just asserted. I admittedly didn't spend a lot of time figuring out what the "most representative" projects were; I just picked two high-profile ones off the top of my head, but I ran the numbers and they came out the way they did. (I wasn't convinced @ was useful either when I started; I just figured it would be good to settle the infix operator question one way or the other. I was also surprised np.dot turned out to be used that heavily.) If you don't like my data, then show us yours :).

-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
An important distinction between calling dot or @ is that matrix multiplication is a domain where enormous effort has already been spent on algorithms and on building fast, scalable libraries. Yes, einsum can dispatch to these for some subset of calls, but it's also trivial to set up a case where it can't. This is a huge pitfall, because it hides this complexity.
Einsum, despite the brevity that it can provide, is too general to make a basic building block. There isn't a good way to reason about its runtime.
I am not arguing in favor of einsum; I am arguing in favor of being explicit, rather than hiding semantically meaningful information from the code. Whether using @ or dot or einsum, you are not explicitly specifying the type of algorithm used, so on that front it's a wash, really. But at least dot and einsum have room for keyword arguments. '@' is, in my perception, simply too narrow an interface to cram in all the meaningful information that you might want to specify concerning a linear product.
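The keyword-argument point can be made concrete: function-call spellings leave room for options such as dot's out= buffer and einsum's optimize= contraction-order choice, whereas an infix operator has no syntax slot for any of this. A minimal sketch:

```python
import numpy as np

A = np.ones((100, 100))
B = np.ones((100, 100))
out = np.empty((100, 100))

# Function-call forms can take extra options:
np.dot(A, B, out=out)                                 # write into a preallocated buffer
C = np.einsum('ij,jk->ik', A, B, optimize='greedy')   # choose the contraction order

# The infix form has no slot for such options; it is A @ B and nothing more.
assert np.allclose(out, C)
```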
Matrix-matrix and matrix-vector products are the fundamental operations; generalized multilinear products etc. are not.
Perhaps from a library perspective, but from a conceptual perspective, it is very much the other way around. If we keep going in the direction that numba/theano/loopy take, such library functionality will soon be moot. I'd argue that the priority of the default semantics should be providing a unified conceptual scheme, rather than maximum-performance considerations. Ideally, the standard operator would pick a sensible default which can be inferred from the arguments, while allowing for explicit specification of the kind of algorithm used where this verbosity is worth the hassle. On Sun, Mar 16, 2014 at 5:33 PM, Eelco Hoogendoorn <hoogendoorn.eelco@gmail.com> wrote:
Different people work on different code and have different experiences here; yours may or may not be typical. Pauli did some quick checks on scikit-learn & nipy & scipy, and found that in their test suites, uses of np.dot and uses of elementwise multiplication are ~equally common: https://github.com/numpy/numpy/pull/4351#issuecomment-37717330
Yeah; these are examples of linalg-heavy packages. Even there, dot does not dominate.
My impression from the other thread is that @@ probably won't end up existing, so you're safe here ;).
I know; my point is that the same objections apply to @, albeit in weaker form.
Einstein notation is coming up on its 100th birthday and is just as blackboard-friendly as matrix product notation. Yet there's still a huge number of domains where the matrix notation dominates. It's cool if you aren't one of the people who find it useful, but I don't think it's going anywhere soon.
Einstein notation is just as blackboard-friendly, but also much more future-proof for computing. I am not saying matrix multiplication is going anywhere soon; but as far as I can tell that is all inertia; historical circumstance has not accidentally prepared it well for numerical needs.
The analysis in the PEP found ~780 calls to np.dot, just in the two projects I happened to look at. @ will get tons of use in the real world. Maybe all those people who will be using it would be happier if they were using einsum instead, I dunno, but it's an argument you'll have to convince them of, not me :).
780 calls is not tons of use, and these projects are outliers, I'd argue.
I just read for the first time two journal articles in econometrics that use einsum notation.
I have no idea what their formulas are supposed to mean: no sum signs and no matrix algebra.

If they could have been expressed more clearly otherwise, of course this is what they should have done; but could they? b_i = A_ij x_j isn't exactly hard to read, but if it was some form of complicated product, tensor notation was probably their best bet.
On 16/03/2014 15:39, Eelco Hoogendoorn wrote:
Note that I am not opposed to extra operators in python, and only mildly opposed to a matrix multiplication operator in numpy; but let me lay out the case against, for your consideration.
First of all, the use of matrix semantics relative to array semantics is extremely rare; even in linear-algebra-heavy code, array semantics often dominate. As such, the default of array semantics for numpy has been a great choice. I've never looked back at MATLAB semantics.
Secondly, I feel the urge to conform to a historical mathematical notation is misguided, especially for the problem domain of linear algebra. Perhaps in the world of mathematics your operation is associative or commutes, but on your computer, the order of operations will influence both outcomes and performance. Even for products, we usually care not only about the outcome, but also about how that outcome is arrived at.

And along the same lines, I don't suppose I need to explain how I feel about A@@-1 and the like. Sure, it isn't too hard to learn or infer that this implies a matrix inverse, but why on earth would I want to pretend the rich complexity of numerical matrix inversion can be mangled into one symbol? I'd much rather write inv or pinv, or whatever particular algorithm happens to be called for given the situation. Considering this isn't the numlisp discussion group, I suppose I am hardly the only one who feels so.
On the whole, I feel the @ operator is mostly superfluous. I prefer to be explicit about where I place my brackets. I prefer to be explicit about the data layout and axes that go into a (multi)linear product, rather than rely on obtuse row/column conventions which are not transparent across function calls. When I do linear algebra, it is almost always vectorized over additional axes; how does a special operator which is only well defined for a few special cases of 2d and 1d tensors help me with that?
Well, the PEP explains a well-defined logical interpretation for cases beyond 2d, using broadcasting. You can vectorize over additional axes.
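For instance, the PEP's broadcasting rules treat higher-dimensional operands as stacks of matrices, so vectorizing over extra axes works out of the box. A minimal sketch using np.matmul, which implements the same semantics the @ operator is proposed to have:

```python
import numpy as np

rng = np.random.default_rng(0)
stack_A = rng.standard_normal((10, 3, 3))  # ten 3x3 matrices
stack_x = rng.standard_normal((10, 3, 1))  # ten 3-vectors as column matrices

# The leading axis broadcasts: one matrix-vector product per stack entry.
stack_b = np.matmul(stack_A, stack_x)
assert stack_b.shape == (10, 3, 1)

# Equivalent to an explicit loop over the stack:
for k in range(10):
    assert np.allclose(stack_b[k], stack_A[k].dot(stack_x[k]))
```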
On the whole, the linear algebra conventions inspired by the particular constraints of people working with blackboards are a rather ugly and hacky beast in my opinion, which I feel no inclination to emulate. As a side note to the contrary: I love using broadcasting semantics when writing papers. Sure, your reviewers will balk at it, but it wouldn't do to give the dinosaurs the last word on what any given formal language ought to be like. We get to define the future, and I'm not sure the set of conventions that goes under the name of 'matrix multiplication' is one of particular importance to the future of numerical linear algebra.
Note that I don't think there is much harm in an @ operator; but I don't see myself using it either. Aside from making textbook examples like a Gram-Schmidt orthogonalization more compact to write, I don't see it having much of an impact in the real world.
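As a concrete version of that textbook example, here is a classical Gram-Schmidt step written with the proposed operator. A minimal sketch (the @ syntax requires Python >= 3.5; for 1-D arrays it reduces to the dot product):

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (classical Gram-Schmidt, no pivoting)."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        v = V[:, j].astype(float)
        for i in range(j):
            # Subtract the projection onto each earlier orthonormal column.
            v = v - (Q[:, i] @ v) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

V = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q = gram_schmidt(V)
assert np.allclose(Q.T @ Q, np.eye(2))  # columns are orthonormal
```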
On Sat, Mar 15, 2014 at 3:52 PM, Charles R Harris <charlesr.harris@gmail.com <mailto:charlesr.harris@gmail.com>> wrote:
On Fri, Mar 14, 2014 at 6:51 PM, Nathaniel Smith <njs@pobox.com <mailto:njs@pobox.com>> wrote:
Well, that was fast. Guido says he'll accept the addition of '@' as an infix operator for matrix multiplication, once some details are ironed out: https://mail.python.org/pipermail/python-ideas/2014-March/027109.html http://legacy.python.org/dev/peps/pep-0465/

Specifically, we need to figure out whether we want to make an argument for a matrix power operator ("@@"), and what precedence/associativity we want '@' to have. I'll post two separate threads to get feedback on those in an organized way; this is just a heads-up.
Surprisingly little discussion on python-ideas, or so it seemed to me. Guido came out in favor less than halfway through. Congratulations on putting together a successful proposal; many of us had given up on ever seeing a matrix multiplication operator.
Chuck
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
participants (12)

- Anthony Scopatz
- Aron Ahmadia
- Charles R Harris
- Chris Laumann
- Christophe Bal
- Eelco Hoogendoorn
- Eric Moore
- Frédéric Bastien
- josef.pktd@gmail.com
- Joseph Martinot-Lagarde
- Nathaniel Smith
- Travis Oliphant