Sebastien Loisel wrote:
What are the odds of this thing going in?
I don't know. Guido has said nothing about it so far this time round, and his is the only opinion that matters in the end.
I may write a PEP about this. However, since yesterday I've realised that there's a rather serious problem with part of my proposal.
The problem is that being able to multiply matrices isn't much use without also being able to add them and multiply them by numbers, which obviously can't work with the built-in sequence types, since they already have other meanings for + and *.
However, I still think that adding an @ operator for numpy to use is a good idea. So I'm now suggesting that the operator be added, with the intended meaning of matrix multiplication, but that no implementation of it be provided in core Python.
There is a precedent for this -- the ellipsis notation was added purely for use by Numeric and its successors, and nothing in core Python attaches any meaning to it.
How do the PEPs work?
Someone writes a PEP. People talk about it. Eventually, Guido either accepts it or rejects it (although in some cases it is an infinitely long time before that happens:-).
The desire for a new operator for matrix multiplication is because binary prefix operators are horrible for expressing this kind of thing, right?
Stuff like this is hard to write, read, and debug (especially when checking it against an infix formula):
mmul(mmul(mmul(a, b), c), d)
How about just making a matrix multiply function that can take many arguments? I think this is pretty readable:
mmul(a, b, c, d)
Additionally, mmul could then optimize the order of the multiplications (e.g., depending on the dimensions of the matrices, it may be much more efficient to perform a*((b*c)*d) rather than ((a*b)*c)*d).
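For illustration, here is a minimal pure-Python sketch of such a variadic mmul (the name and the list-of-rows representation are just for the example; a real version would delegate to numpy.dot, and a smarter one would use the classic matrix-chain dynamic program to pick the cheapest association):

```python
from functools import reduce

def matmul2(x, y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*y)]
            for row in x]

def mmul(*matrices):
    """Multiply any number of matrices, folding left to right."""
    if not matrices:
        raise TypeError("mmul() requires at least one matrix")
    return reduce(matmul2, matrices)

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # swaps columns
print(mmul(A, B))      # [[2, 1], [4, 3]]
```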
(please forgive typos--writing this on a smartphone)
-- Daniel Stutzbach
On Sun, 27 Jul 2008 02:23:11 am daniel.stutzbach@gmail.com wrote:
How about just making a matrix multiply function that can take many arguments? I think this is pretty readable:
mmul(a, b, c, d)
Additionally, mmul could then optimize the order of the multiplications (e.g., depending on the dimensions of the matrices, it may be much more efficient to perform a*((b*c)*d) rather than ((a*b)*c)*d).
But be careful there: matrix multiplication is associative, so that a*b*c = (a*b)*c = a*(b*c), but that doesn't necessarily apply once the elements of the matrices are floats. For instance, a*(b*c) might underflow some elements to zero, while multiplying (a*b)*c does not. As a general rule, compilers should not mess with the order of floating point calculations.
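The underflow hazard is easy to demonstrate even with plain Python floats (the same effect applies entrywise inside a matrix product):

```python
a, b, c = 1e-300, 1e-300, 1e300

left = (a * b) * c   # a*b is 1e-600, which underflows to 0.0 first
right = a * (b * c)  # b*c == 1.0, so the product survives

print(left, right)   # 0.0 1e-300
```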
Also, some classes that people might want to multiply may not be associative even in principle. E.g. the vector cross product:
(a*b)*c != a*(b*c) in general.
I think a product() function that multiplies the arguments from left to right would be useful. But it won't solve the problems that people want custom operators to solve. I'm not even sure if it will solve the problem of matrix multiplication.
daniel.stutzbach@gmail.com wrote:
How about just making a matrix multiply function that can take many arguments? I think this is pretty readable:
mmul(a, b, c, d)
The multiplications aren't necessarily all together, e.g.
a*b + c*d + e*f
would become
mmul(a, b) + mmul(c, d) + mmul(e, f)
On Fri, Jul 25, 2008 at 6:50 PM, Greg Ewing greg.ewing@canterbury.ac.nz wrote:
Sebastien Loisel wrote:
What are the odds of this thing going in?
I don't know. Guido has said nothing about it so far this time round, and his is the only opinion that matters in the end.
I'd rather stay silent until a PEP exists, but I should point out that last time '@' was considered as a new operator, that character had no uses in the language at all. Now it is the decorator marker. Therefore it may not be so attractive any more.
I understand that you can't use A*B as matrix multiplication because it should mean elementwise multiplication instead, just like A+B is elementwise addition (for matrixes, as opposed to Python sequences).
But would it be totally outlandish to propose A**B for matrix multiplication? I can't think of what "matrix exponentiation" would mean...
--Guido
I may write a PEP about this. However, since yesterday I've realised that there's a rather serious problem with part of my proposal.
The problem is that being able to multiply matrices isn't much use without also being able to add them and multiply them by numbers, which obviously can't work with the built-in sequence types, since they already have other meanings for + and *.
However, I still think that adding an @ operator for numpy to use is a good idea. So I'm now suggesting that the operator be added, with the intended meaning of matrix multiplication, but that no implementation of it be provided in core Python.
There is a precedent for this -- the ellipsis notation was added purely for use by Numeric and its successors, and nothing in core Python attaches any meaning to it.
How do the PEPs work?
Someone writes a PEP. People talk about it. Eventually, Guido either accepts it or rejects it (although in some cases it is an infinitely long time before that happens:-).
-- Greg
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
On Tue, Jul 29, 2008 at 9:26 PM, Guido van Rossum guido@python.org wrote:
But would it be totally outlandish to propose A**B for matrix multiplication? I can't think of what "matrix exponentiation" would mean...
Before even reading this paragraph, A**B came to my mind, so I suspect it would be the most intuitive option.
Dear Guido,
Thank you for your email.
On Tue, Jul 29, 2008 at 8:26 PM, Guido van Rossum guido@python.org wrote:
But would it be totally outlandish to propose A**B for matrix multiplication? I can't think of what "matrix exponentiation" would mean...
Right now, ** is the pointwise power:
>>> from numpy import *
>>> A = array([[1, 2], [3, 4]])
>>> print(A**A)
[[  1   4]
 [ 27 256]]
Since all the scalar operators have meanings as pointwise operators, like you say it's hard to bump one off to give it to the matrix product instead. I don't know whether doing that with ** is a good idea; it would destroy the orthogonality of the system.
They used to say, ignore LISP at your own peril. In that spirit, let me describe MATLAB's approach to this. It features a complete suite of matrix operators (+ - * / ^), and their pointwise variants (.+ .- ./ .* .^), although + and .+ are synonyms, as are - and .-. Right now, numpy's *, **, / correspond to MATLAB's .*, .^, ./.
MATLAB implements scalar^matrix and matrix^scalar, but not matrix^matrix (although since log and exp are defined, I guess you could cobble together a meaning for matrix^matrix). Since ^ is the matrix-product version of "power", 2^A may not be what you expect:
>> 2^A
ans =
   10.4827   14.1519
   21.2278   31.7106
Sincerely,
Sebastien Loisel wrote:
let me describe MATLAB's approach to this. It features a complete suite of matrix operators (+-*/^), and their pointwise variants (.+ .- ./ .* .^)
That was considered before as well, but rejected on the grounds that the dot-prefixed operators were too cumbersome to use heavily.
In MATLAB, the elementwise operations are probably used fairly infrequently. But numpy arrays are often used to vectorise what are otherwise scalar operations, in which case elementwise operations are used almost exclusively.
On Wed, Jul 30, 2008 at 2:26 AM, Guido van Rossum guido@python.org wrote:
On Fri, Jul 25, 2008 at 6:50 PM, Greg Ewing greg.ewing@canterbury.ac.nz wrote:
Sebastien Loisel wrote:
What are the odds of this thing going in?
I don't know. Guido has said nothing about it so far this time round, and his is the only opinion that matters in the end.
I'd rather stay silent until a PEP exists, but I should point out that last time '@' was considered as a new operator, that character had no uses in the language at all. Now it is the decorator marker. Therefore it may not be so attractive any more.
I don't like @.
I understand that you can't use A*B as matrix multiplication because it should mean elementwise multiplication instead, just like A+B is elementwise addition (for matrixes, as opposed to Python sequences).
But would it be totally outlandish to propose A**B for matrix multiplication? I can't think of what "matrix exponentiation" would mean...
http://mathworld.wolfram.com/MatrixExponential.html :-)
In fact Mathematica uses ** to denote general noncommutative multiplication (though . for matrix multiplication in particular). However, this wouldn't solve the problem, because an important reason to introduce a matrix multiplication operator is to distinguish between matrix and elementwise operations for arrays. The ** operator already denotes the obvious elementwise operation in numpy.
Further, while A**B is not so common, A**n is quite common (for integral n, in the sense of repeated matrix multiplication). So a matrix multiplication operator really should come with a power operator cousin.
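(For reference, numpy already provides this as numpy.linalg.matrix_power. As a self-contained illustration of what "repeated matrix multiplication" means, here is a pure-Python sketch using binary exponentiation; the list-of-rows representation and helper names are just for the example:)

```python
def mat_mul(x, y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*y)]
            for row in x]

def mat_power(a, n):
    """A**n as repeated matrix multiplication, via binary exponentiation."""
    size = len(a)
    result = [[int(i == j) for j in range(size)] for i in range(size)]  # identity
    base = a
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result

# The Fibonacci matrix: its n-th power holds consecutive Fibonacci numbers.
print(mat_power([[1, 1], [1, 0]], 5))  # [[8, 5], [5, 3]]
```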
Matlab uses * for matrix and .* for elementwise multiplication. Introducing .* for elementwise multiplication in Python would not be compatible with existing numpy code, and introducing .* with the reversed meaning of Matlab would be *very* confusing :-)
Maple uses &* for matrix multiplication. However, Maple's syntax is not a good style reference for anything.
Besides those alternatives and the regular *, I don't know any other ASCII operators used by existing mathematical software for matrix multiplication. Well, Fortress probably has some unicode symbol for it (I suppose that would be one desperate possibility).
Fredrik
Fredrik Johansson wrote:
Further, while A**B is not so common, A**n is quite common (for integral n, in the sense of repeated matrix multiplication). So a matrix multiplication operator really should come with a power operator cousin.
Which obviously should be @@ :-)
Well, Fortress probably has some unicode symbol for it (I suppose that would be one desperate possibility).
I've been carefully refraining from suggesting that. Although now that unicode is allowed in identifiers, it's not *quite* as heretical as it used to be.
Further, while A**B is not so common, A**n is quite common (for integral n, in the sense of repeated matrix multiplication). So a matrix multiplication operator really should come with a power operator cousin.
Which obviously should be @@ :-)
I think much of this thread is a repeat of conversations that were held for PEP 225: http://www.python.org/dev/peps/pep-0225/
That PEP is marked as deferred. Maybe it's time to bring it back to life.
Raymond
Dear Greg,
Thank you for your email.
In MATLAB, the elementwise operations are probably used fairly infrequently. But numpy arrays are often used to vectorise what are otherwise scalar operations, in which case elementwise operations are used almost exclusively.
Your assessment of pointwise operators in MATLAB is incorrect. The pointwise operators in MATLAB are used heavily to vectorise scalar operations, exactly as you describe for numpy. Recently, the MATLAB JIT has become good enough that looping is fast; however, the pointwise operators remain "the MATLAB way" of programming.
Dear Raymond,
Thank you for your email.
I think much of this thread is a repeat of conversations that were held for PEP 225: http://www.python.org/dev/peps/pep-0225/
That PEP is marked as deferred. Maybe it's time to bring it back to life.
This is a much better PEP than the one I had found, and would solve all of the numpy problems. The PEP is very well thought-out.
Sincerely,
Sebastien Loisel wrote:
Dear Raymond,
Thank you for your email.
I think much of this thread is a repeat of conversations that were held for PEP 225: http://www.python.org/dev/peps/pep-0225/
That PEP is marked as deferred. Maybe it's time to bring it back to life.
This is a much better PEP than the one I had found, and would solve all of the numpy problems. The PEP is very well thought-out.
A very interesting read! I wouldn't support some of the more exotic elements tacked on to the end (particularly the replacement of the now thoroughly entrenched bitwise operators), but the basic idea of providing ~op variants of several operators seems fairly sound. I'd be somewhat inclined to add ~not, ~and and ~or to the list even though that would pretty much force the semantics to be elementwise for the ~ variants (since the standard not, and and or are always objectwise and without PEP 335 there's no way for an object to change that).
Cheers, Nick.
Nick Coghlan wrote:
Sebastien Loisel wrote:
Dear Raymond,
Thank you for your email.
I think much of this thread is a repeat of conversations that were held for PEP 225: http://www.python.org/dev/peps/pep-0225/
That PEP is marked as deferred. Maybe it's time to bring it back to life.
This is a much better PEP than the one I had found, and would solve all of the numpy problems. The PEP is very well thought-out.
A very interesting read! I wouldn't support some of the more exotic elements tacked on to the end (particularly the replacement of the now thoroughly entrenched bitwise operators), but the basic idea of providing ~op variants of several operators seems fairly sound. I'd be somewhat inclined to add ~not, ~and and ~or to the list even though that would pretty much force the semantics to be elementwise for the ~ variants (since the standard not, and and or are always objectwise and without PEP 335 there's no way for an object to change that).
Cheers, Nick.
I agree: adding ~op would be very interesting.
For example, we could easily provide case-insensitive comparisons for strings:
if foo ~== 'Spam': print "It's spam!"
equivalent to:
if foo.upper() == 'SPAM': print "It's spam!"
This would save both CPU time and the memory for a brand new string that gets discarded right after the comparison...
It would also be useful to redefine the /, // and ** operators to do some common operations:
'spam, egg' / ', ' could be equivalent to iter('spam, egg'.split(', ')) # Generates an iterator
'spam, egg' // ', ' could be equivalent to 'spam, egg'.split(', ') # Generates a list
and ', ' ** ('spam', 'egg') could be equivalent to ', '.join(('spam', 'egg'))
but unfortunately we know that at the moment built-in types cannot be "extended" through "monkey patching"...
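(The closest one can get to the ~== behaviour today is a small wrapper class that overloads an existing operator; the class name CI below is purely illustrative, and it still pays the .lower() cost that a true ~== might avoid:)

```python
class CI(str):
    """A str subclass whose == compares case-insensitively."""
    def __eq__(self, other):
        return self.lower() == str(other).lower()
    def __ne__(self, other):
        return not self.__eq__(other)
    __hash__ = str.__hash__  # overriding __eq__ would otherwise drop hashability

foo = CI("SpAm")
print(foo == "Spam")  # True
```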
Cesare
Cesare Di Mauro wrote:
Nick Coghlan wrote:
Sebastien Loisel wrote:
Dear Raymond,
Thank you for your email.
I think much of this thread is a repeat of conversations that were held for PEP 225: http://www.python.org/dev/peps/pep-0225/
That PEP is marked as deferred. Maybe it's time to bring it back to life.
This is a much better PEP than the one I had found, and would solve all of the numpy problems. The PEP is very well thought-out.
A very interesting read! I wouldn't support some of the more exotic elements tacked on to the end (particularly the replacement of the now thoroughly entrenched bitwise operators), but the basic idea of providing ~op variants of several operators seems fairly sound. I'd be somewhat inclined to add ~not, ~and and ~or to the list even though that would pretty much force the semantics to be elementwise for the ~ variants (since the standard not, and and or are always objectwise and without PEP 335 there's no way for an object to change that).
Cheers, Nick.
I agree: adding ~op would be very interesting.
As interesting as I may have found it though, further discussion of the prospect of resurrecting it for consideration in the 2.7/3.1 timeframe should really take place on python-ideas.
Cheers, Nick.
Guido van Rossum wrote:
last time '@' was considered as a new operator, that character had no uses in the language at all. Now it is the decorator marker.
The only alternatives left would seem to be ?, ! or $, none of which look particularly multiplicationish.
But would it be totally outlandish to propose A**B for matrix multiplication? I can't think of what "matrix exponentiation" would mean...
But ** has the same problem -- it already represents an elementwise operation on numpy arrays.
Guido van Rossum wrote:
On Fri, Jul 25, 2008 at 6:50 PM, Greg Ewing greg.ewing@canterbury.ac.nz wrote:
Sebastien Loisel wrote:
What are the odds of this thing going in?
I don't know. Guido has said nothing about it so far this time round, and his is the only opinion that matters in the end.
I'd rather stay silent until a PEP exists, but I should point out that last time '@' was considered as a new operator, that character had no uses in the language at all. Now it is the decorator marker. Therefore it may not be so attractive any more.
Others have indicated already how PEP 225 seems to be the best current summary of this issue. Here's a concrete proposal: the SciPy conference, where a lot of people with a direct stake in this matter will be present, will be held very soon (August 19-24 at Caltech):
I am hereby volunteering to try to organize a BOF session at the conference on this topic, and can come back later with the summary. I'm also scheduled to give a talk at BayPiggies on Numpy/Scipy soon after the conference, so that may be a good opportunity to have some further discussions in person with some of you.
It's probably worth noting that python is *really* growing in the scientific world. A few weeks ago I ran a session on Python for science at the annual SIAM conference (the largest applied math conference in the country), with remarkable success:
http://fdoperez.blogspot.com/2008/07/python-tools-for-science-go-to-siam.htm...
(punchline: we were selected for the annual highlights - http://www.ams.org/ams/siam-2008.html#python).
This is just to show that python really matters to scientific users, and its impact is growing rapidly, as the tools mature and we reach critical mass so the network effects kick in. It would be great to see this topic considered for the language in the 2.7/3.1 timeframe, and I'm willing to help with some of the legwork.
So if this idea sounds agreeable to python-dev, I'd need to know whether I should propose the BOF using pep 225 as a starting point, or if there are any other considerations on the matter I should be aware of (I've read this thread in full, but I just want to start on track since the BOF is a one-shot event). I'll obviously post this on the numpy/scipy mailing lists so those not coming to the conference can participate, but an all-hands BOF is an excellent opportunity to collect feedback and ideas from the community that is likely to care most about this feature.
Thanks,
f
Fernando Perez wrote (re http://www.python.org/dev/peps/pep-0225/):
I am hereby volunteering to try to organize a BOF session at the conference on this topic, and can come back later with the summary. I'm also scheduled to give a talk at BayPiggies on Numpy/Scipy soon after the conference, so that may be a good opportunity to have some further discussions in person with some of you.
...
So if this idea sounds agreeable to python-dev, I'd need to know whether I should propose the BOF using pep 225 as a starting point, or if there are any other considerations on the matter I should be aware of (I've read this thread in full, but I just want to start on track since the BOF is a one-shot event). I'll obviously post this on the numpy/scipy mailing lists so those not coming to the conference can participate, but an all-hands BOF is an excellent opportunity to collect feedback and ideas from the community that is likely to care most about this feature.
When I read this some years ago, I was impressed by the unifying concept of operations on elements versus objects. And rereading, I plan to use the concept in writing about computation with Python. But implementing even half of the total examples with operator syntax rather than functions seemed a bit revolutionary and heavy. I am not sure I would want the number of __special__ methods nearly doubled. On the other hand, there is something to be said for orthogonality.
That said, I am curious what working scientists using Python think.
tjr
Terry Reedy wrote:
That said, I am curious what working scientists using Python think.
Well, we'll let you know more after SciPy '08, but I suspect the answer is that they just want one teensy little wafer-thin operator to do matrix multiplication on numpy arrays or their favorite matrix object. I don't think there are many scientists/engineers/whatnot who want to double the number of operators to learn or who care if the matmult operator works on lists of lists or anything else in the standard library.