[RFC] should we argue for a matrix power operator, @@?
Hi all,

Here's the second thread for discussion about Guido's concerns about PEP 465. The issue here is that PEP 465 as currently written proposes two new operators: @ for matrix multiplication and @@ for matrix power (analogous to * and **): http://legacy.python.org/dev/peps/pep-0465/

The main thing we care about, of course, is @; I pushed for including @@ because I thought it was nicer to have than not, and I thought the analogy between * and ** might make the overall package more appealing to Guido's aesthetic sense. It turns out I was wrong :). Guido is -0 on @@, but willing to be swayed if we think it's worth the trouble to make a solid case.

Note that the question now is *not* how @@ will affect the reception of @. @ itself is AFAICT a done deal, regardless of what happens with @@. For this discussion let's assume @ can be taken for granted, and that we can freely choose either to add @@ or not to add @@ to the language. The question is: which do we think makes Python a better language (for us and in general)?

Some thoughts to start us off. Here are the interesting use cases for @@ that I can think of:

- 'vector @@ 2' gives the squared Euclidean length (because it's the same as vector @ vector). Kind of handy.

- 'matrix @@ n' of course gives the matrix power, which is of marginal use but does come in handy sometimes, e.g., when looking at graph connectivity.

- 'matrix @@ -1' provides a very transparent notation for translating textbook formulas (with all their inverses) into code. It's a bit unhelpful in practice, because (a) usually you should use solve(), and (b) 'matrix @@ -1' is actually more characters than 'inv(matrix)'. But sometimes transparent notation may be important. (And in some cases, like using numba or theano or whatever, 'matrix @@ -1 @ foo' could be compiled into a call to solve() anyway.)

(Did I miss any?)

In practice it seems to me that the last use case is the one that might matter most in practice, but then again, it might not -- I'm not sure.
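For concreteness, here is how each of those proposed use cases is spelled with plain NumPy today -- nothing below assumes the new operators; `matrix_power`, `inv`, and `solve` are existing `np.linalg` functions:

```python
import numpy as np

v = np.array([3.0, 4.0])
sq_len = np.dot(v, v)                 # 'vector @@ 2': squared Euclidean length

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A3 = np.linalg.matrix_power(A, 3)     # 'matrix @@ 3': repeated matrix product

b = np.array([1.0, 2.0])
x_inv = np.linalg.inv(A).dot(b)       # 'matrix @@ -1 @ b', transliterated
x_solve = np.linalg.solve(A, b)       # the numerically preferable spelling
```

The inv/solve pair gives the same answer here; the point of use case three is only that the first spelling matches the textbook formula character for character.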
For example, does anyone who teaches programming with numpy have a feeling about whether the existence of '@@ -1' would make a big difference to you and your students? (Alan? I know you were worried about losing the .I attribute on matrices if switching to ndarrays for teaching -- given that ndarray will probably not get a .I attribute, how much would the existence of @@ -1 affect you?)

On a more technical level, Guido is worried about how @@'s precedence should work (and this is somewhat related to the other thread about @'s precedence and associativity, because he feels that if we end up giving @ and * different precedence, then that makes it much less clear what to do with @@, and reduces the strength of the */**/@/@@ analogy). In particular, if we want to argue for @@ then we'll need to figure out what expressions like a @@ b @@ c and a ** b @@ c and a @@ b ** c should do.

A related question is what @@ should do if given an array as its right argument. In the current PEP, only integers are accepted, which rules out a bunch of the more complicated cases like a @@ b @@ c (at least assuming @@ is right-associative, like **, and I can't see why you'd want anything else). OTOH, in the brave new gufunc world, it technically would make sense to define @@ as being a gufunc with signature (m,m),()->(m,m), and the way gufuncs work this *would* allow the "power" to be an array -- for example, we'd have:

    mat = randn(m, m)
    pow = range(n)
    result = gufunc_matrix_power(mat, pow)
    assert result.shape == (n, m, m)
    for i in xrange(n):
        assert np.all(result[i, :, :] == np.linalg.matrix_power(mat, i))

In this case, a @@ b @@ c would at least be a meaningful expression to write. OTOH it would be incredibly bizarre and useless, so probably no one would ever write it.

As far as these technical issues go, my guess is that the correct rule is that @@ should just have the same precedence and the same (right) associativity as **, and in practice no one will ever write stuff like a @@ b @@ c.
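The `**` precedent being appealed to is easy to check in plain Python; by this analogy, `a @@ b @@ c` would parse as `a @@ (b @@ c)`:

```python
# '**' is right-associative: a ** b ** c parses as a ** (b ** c).
a, b, c = 2, 3, 2
chained = a ** b ** c        # 2 ** (3 ** 2) == 2 ** 9
right = a ** (b ** c)        # the same number
left = (a ** b) ** c         # 8 ** 2, a different number
```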
But if we want to argue for @@ we need to come to some consensus or another here.

It's also possible the answer is "ugh, these issues are too complicated, we should defer this until later when we have more experience with @ and gufuncs and stuff". After all, I doubt anyone else will swoop in and steal @@ to mean something else! OTOH, if e.g. there's a strong feeling that '@@ -1' will make a big difference in pedagogical contexts, then putting that off for years might be a mistake.

-n

--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
On Fri, Mar 14, 2014 at 9:32 PM, Nathaniel Smith <njs@pobox.com> wrote:
Here are the interesting use cases for @@ that I can think of:

- 'vector @@ 2' gives the squared Euclidean length (because it's the same as vector @ vector). Kind of handy.
- 'matrix @@ n' of course gives the matrix power, which is of marginal use but does come in handy sometimes, e.g., when looking at graph connectivity.
- 'matrix @@ -1' provides a very transparent notation for translating textbook formulas (with all their inverses) into code. It's a bit unhelpful in practice, because (a) usually you should use solve(), and (b) 'matrix @@ -1' is actually more characters than 'inv(matrix)'. But sometimes transparent notation may be important. (And in some cases, like using numba or theano or whatever, 'matrix @@ -1 @ foo' could be compiled into a call to solve() anyway.)
(Did I miss any?)
I'm not really arguing for it, and I am not sure how, or even if, it fits in the general scheme. But for completeness' sake: 'e @@ Matrix' is used in some treatments of linear systems of differential equations, where

    d<vector>/dt = <matrix> @ <vector>

would have solution

    <vector> = e @@ (<matrix> * t) @ <vector_0>

I don't think it makes any sense to use it as such in the context of numpy, as I think it would make broadcasting undecidable. But there may be parallel universes where having n @@ <matrix> and <matrix> @@ n, both with well defined yet different meanings, may make sense. It is my impression that in this entirely made-up scenario you would want e @@ A @@ 3 to be evaluated as (e @@ A) @@ 3. Which probably has more to do with the fact that the two @@ mean different things than with the associativity that repeated calls to the same @@ should have.

Personally I couldn't care less, and if I had a vote I would let @@ rest for now, until we see how @ plays out.

Jaime
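For readers who want to play with the example above: a minimal matrix-exponential sketch in plain NumPy, assuming a diagonalizable matrix (`scipy.linalg.expm` is the robust tool for the general case):

```python
import numpy as np

def expm_eig(M):
    # Matrix exponential via eigendecomposition -- a sketch that assumes
    # M is diagonalizable (scipy.linalg.expm handles the general case).
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)).dot(np.linalg.inv(V))

# d<vector>/dt = A @ <vector> has the solution x(t) = expm(A*t) @ x0.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])               # generator of a rotation
x0 = np.array([1.0, 0.0])
x_t = expm_eig(A * (np.pi / 2)).dot(x0)   # a quarter turn of x0
```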
Hello.

Maybe a solution would be to not see @ and @@ only from the matrix point of view. Why? The philosophy of Python is to give total control over the infix operators +, * and **, for example, via the magic methods. So it can also be the case for @ and @@, which could be used for something other than <matrix> @@ <int>.

So what can we expect from A @@ B @@ C? I would say that it is the same as a ** b ** c, because a human reads from top to bottom (but this is not a general convention in CAS software). OK, but then what can we do for <matrix> @@ <matrix> @@ <matrix>? Just raise an error. The programmer has the possibility to use @@ like **, but has to take care of the meaning with regard to the types of the objects. This is, for example, what we would expect for <matrix> @@ pi, even if we can mathematically give a meaning to that for some matrices.

Do not forget also that direct computation of the inverse of a matrix is a complicated thing, and that integer powers of matrices have to be built cleverly, but I'm sure that everyone here knows that.

*So standard Python could...*

- only propose multiplication of matrices,
- and for the power of matrices, just indicate that there is a magic method associated with @@, and explain that, given the complexity of this problem, it is the job of the programmer to implement it.

I think the problem from Guido's point of view is the asymmetrical type domain of the operations. All the numeric operators map <number> * <number> to <number>.

Hoping that my frenchy english is clear enough.

Christophe BAL

PS: maybe a good question for Python would be to see if other operators could be useful. For CAS work, I would like to have the possibility to use f°g for composition, even if it is more for pedagogical reasons, and f°°n for dynamical systems. But this is just a dream...

2014-03-15 6:39 GMT+01:00 Jaime Fernández del Río <jaime.frio@gmail.com>:
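A minimal sketch of this "total control via magic methods" idea: since `@@` does not exist, `**` stands in for it below, and the hypothetical `Mat` wrapper simply opts out of non-integer exponents, as suggested above:

```python
import numpy as np

class Mat:
    # Hypothetical wrapper illustrating magic-method control: here '**'
    # plays the role '@@' would, and the implementer decides which
    # right-hand types are meaningful, raising TypeError for the rest.
    def __init__(self, a):
        self.a = np.asarray(a)
    def __pow__(self, n):
        if not isinstance(n, int):
            raise TypeError("matrix power: exponent must be an int")
        return Mat(np.linalg.matrix_power(self.a, n))

M = Mat([[1, 1], [0, 1]]) ** 3        # fine: integer power
try:
    Mat([[1, 1], [0, 1]]) ** 0.5      # not fine: the class opts out
except TypeError:
    rejected = True
```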
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On 3/15/2014 12:32 AM, Nathaniel Smith wrote:
I know you were worried about losing the .I attribute on matrices if switching to ndarrays for teaching -- given that ndarray will probably not get a .I attribute, how much would the existence of @@ -1 affect you?
Not much. Positive integer powers would be useful (for illustrating e.g. graph theory and difference equations), but not enough to delay the PEP. I think NumPy should "take the money and run". Getting `@` is great. Let's get experience with it before deciding whether it's worth asking for `@@`.

Questions for `@@`:
- would it just be `matrix_power`, with all the restrictions?
- or would `a @@ -1` for `a` of shape (10,2,2) return an array of matrix inverses?
- etc.

In the end, I'd like to see a functional implementation before deciding on `@@`, but I would not like to see `@` delayed at all.

Congratulations,
Alan
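The graph-theory illustration mentioned above, spelled with today's `matrix_power`: entry (i, j) of the k-th matrix power of an adjacency matrix counts the walks of length k from node i to node j.

```python
import numpy as np

# Adjacency matrix of a 3-node path graph: 0 -- 1 -- 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# matrix_power(A, k)[i, j] counts walks of length k from i to j --
# the classic classroom use of a 'matrix @@ k' notation.
A2 = np.linalg.matrix_power(A, 2)
walks_0_to_2 = A2[0, 2]    # exactly one such walk: 0 -> 1 -> 2
```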
On Sat, Mar 15, 2014 at 1:13 PM, Alan G Isaac <alan.isaac@gmail.com> wrote:
On 3/15/2014 12:32 AM, Nathaniel Smith wrote:
I know you were worried about losing the .I attribute on matrices if switching to ndarrays for teaching -- given that ndarray will probably not get a .I attribute, how much would the existence of @@ -1 affect you?
Not much. Positive integer powers would be useful (for illustrating e.g. graph theory and difference equations), but not enough to delay the PEP.
So to be clear: even if numpy.matrix is going away, and even if ndarray isn't getting a .I attribute, then you're just as happy typing/teaching inv(X) as X @@ -1?
I think NumPy should "take the money and run". Getting `@` is great. Let's get experience with it before deciding whether it's worth asking for `@@`.
Questions for `@@`:
- would it just be `matrix_power`, with all the restrictions?
- or would `a @@ -1` for `a` of shape (10,2,2) return an array of matrix inverses?
- etc.
The version in the PEP does do gufunc-style broadcasting for >2-d arrays, yes. So will np.linalg.matrix_power, as soon as someone bothers to send a patch ;)
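For reference, `np.linalg.inv` already broadcasts this way over stacked matrices (NumPy >= 1.8), which is presumably the behaviour a broadcast `matrix_power` would mirror; the stack below is an arbitrary well-conditioned example:

```python
import numpy as np

# A (10, 2, 2) stack of matrices; the 5*I shift keeps them all
# comfortably invertible for this illustration.
rng = np.random.RandomState(0)
stack = rng.randn(10, 2, 2) + 5 * np.eye(2)

# inv applies matrix-wise over the leading axis -- gufunc broadcasting.
invs = np.linalg.inv(stack)
```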
In the end, I'd like to see a functional implementation before deciding on `@@`, but I would not like to see `@` delayed at all.
Oh, well, not much is going to affect `@`'s timing, unless we're *dreadfully* slow. Py 3.5 isn't even scheduled yet b/c 3.4 isn't out, and IIUC Python's standard release cycle is 18 months. So we've got a year+ before feature freeze, regardless.

-n

--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
On 3/15/2014 10:12 PM, Nathaniel Smith wrote:
So to be clear: even if numpy.matrix is going away, and even if ndarray isn't getting a .I attribute, then you're just as happy typing/teaching inv(X) as X @@ -1?
Yes, that is correct. I am somewhat more unhappy with having to use npla.matrix_power(M, n) instead of M @@ n in other teaching settings (e.g., graph theory and recurrence relations). I am certainly not objecting to making `@@` available. It just seems much less important than getting `@` asap.

Thanks,
Alan Isaac
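The recurrence-relation use mentioned above, sketched with `matrix_power`: Fibonacci numbers from powers of the companion matrix, i.e. what `M @@ n` would spell (int64 arithmetic here, so large n would overflow):

```python
import numpy as np

M = np.array([[1, 1],
              [1, 0]])

def fib(n):
    # matrix_power(M, n) == [[F(n+1), F(n)], [F(n), F(n-1)]]
    return int(np.linalg.matrix_power(M, n)[0, 1])
```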
On Fri, Mar 14, 2014 at 10:32 PM, Nathaniel Smith <njs@pobox.com> wrote:
I don't have a strong feeling either way on '@@'. Matrix inverses are pretty common in matrix expressions, but I don't know that the new operator offers much advantage over a function call. The positive integer powers might be useful in some domains, as others have pointed out, but in computational practice one would tend to factor the evaluation.

Chuck
2014-03-15 11:18 GMT-04:00 Charles R Harris <charlesr.harris@gmail.com>:
Personally I think it should go in, because:
- it's useful (although marginally), as in the examples previously mentioned
- it's what people will expect
- it's the only reasonable use of @@ once @ makes it in

As far as the details about precedence rules and whatnot... Yes, someone should think about them and come up with rules that make sense, but since it will pretty much only be used in unambiguous situations, this shouldn't be a blocker.

-=- Olivier
Speaking only for myself (and as someone who has regularly used matrix powers), I would not expect matrix power as @@ to follow from matrix multiplication as @. I do agree that matrix power is the only reasonable use for @@ (given @), but it's still not something I would be confident enough to know without looking up.

We should keep in mind that each new operator imposes some (small) cognitive burden on everyone who encounters it for the first time, and, in this case, that will include a large fraction of all Python users, whether they do numerical computation or not.

Guido has given us a tremendous gift in the form of @. Let's not insist on @@ when it is unclear whether the benefit would be worth the burden of figuring out what @@ means, even for heavily numeric code. I would certainly prefer to encounter norm(A), inv(A), matrix_power(A, n), fractional_matrix_power(A, n) and expm(A) rather than their infix equivalents. It will certainly not be obvious which of these @@ would support for objects from any given library.

One useful data point might be to consider whether matrix power is available as an infix operator in other languages commonly used for numerical work. AFAICT from some quick searches:

- MATLAB: Yes
- R: No
- IDL: No

All of these languages do, of course, implement infix matrix multiplication, but it is apparently not clear at all that infix matrix power is useful.

Best,
Stephan

On Sat, Mar 15, 2014 at 9:03 AM, Olivier Delalleau <shish@keba.be> wrote:
I think I wouldn't use anything like @@ often enough to remember its meaning. I'd rather see English names for anything that is not **very** common.

I find A @@ -1 pretty ugly compared to inv(A).

A @@ (0.5) might be nice (do we have matrix_sqrt?)

Josef

On Sat, Mar 15, 2014 at 5:11 PM, Stephan Hoyer <shoyer@gmail.com> wrote:
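On matrix_sqrt: `scipy.linalg.sqrtm` covers the general case; a NumPy-only sketch for symmetric positive-definite matrices (what `A @@ 0.5` would spell) goes via the eigendecomposition:

```python
import numpy as np

def sqrtm_spd(A):
    # Matrix square root for symmetric positive-definite A via
    # eigendecomposition -- scipy.linalg.sqrtm handles the general case.
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)).dot(V.T)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = sqrtm_spd(A)        # B.dot(B) recovers A
```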
Speaking only for myself (and as someone who has regularly used matrix powers), I would not expect matrix power as @@ to follow from matrix multiplication as @. I do agree that matrix power is the only reasonable use for @@ (given @), but it's still not something I would be confident enough to know without looking up.
We should keep in mind that each new operator imposes some (small) cognitive burden on everyone who encounters them for the first time, and, in this case, this will include a large fraction of all Python users, whether they do numerical computation or not.
Guido has given us a tremendous gift in the form of @. Let's not insist on @@, when it is unclear if the burden of figuring out what @@ means it would be worth using, even for heavily numeric code. I would certainly prefer to encounter norm(A), inv(A), matrix_power(A, n), fractional_matrix_power(A, n) and expm(A) rather than their infix equivalents. It will certainly not be obvious which of these @@ will support for objects from any given library.
One useful data point might be to consider whether matrix power is available as an infix operator in other languages commonly used for numerical work. AFAICT from some quick searches:
MATLAB: Yes
R: No
IDL: No
All of these languages do, of course, implement infix matrix multiplication, but it is apparently not at all clear that an infix matrix power is useful.
Best, Stephan
On Sat, Mar 15, 2014 at 9:03 AM, Olivier Delalleau <shish@keba.be> wrote:
2014-03-15 11:18 GMT-04:00 Charles R Harris <charlesr.harris@gmail.com>:
On Fri, Mar 14, 2014 at 10:32 PM, Nathaniel Smith <njs@pobox.com> wrote:
Hi all,
Here's the second thread for discussion about Guido's concerns about PEP 465. The issue here is that PEP 465 as currently written proposes two new operators, @ for matrix multiplication and @@ for matrix power (analogous to * and **): http://legacy.python.org/dev/peps/pep-0465/
The main thing we care about of course is @; I pushed for including @@ because I thought it was nicer to have than not, and I thought the analogy between * and ** might make the overall package more appealing to Guido's aesthetic sense.
It turns out I was wrong :). Guido is -0 on @@, but willing to be swayed if we think it's worth the trouble to make a solid case.
Note that the question now is *not*, how will @@ affect the reception of @. @ itself is AFAICT a done deal, regardless of what happens with @@. For this discussion let's assume @ can be taken for granted, and that we can freely choose to either add @@ or not add @@ to the language. The question is: which do we think makes Python a better language (for us and in general)?
Some thoughts to start us off:
Here are the interesting use cases for @@ that I can think of:
- 'vector @@ 2' gives the squared Euclidean length (because it's the same as vector @ vector). Kind of handy.
- 'matrix @@ n' of course gives the matrix power, which is of marginal use but does come in handy sometimes, e.g., when looking at graph connectivity.
- 'matrix @@ -1' provides a very transparent notation for translating textbook formulas (with all their inverses) into code. It's a bit unhelpful in practice, because (a) usually you should use solve(), and (b) 'matrix @@ -1' is actually more characters than 'inv(matrix)'. But sometimes transparent notation may be important. (And in some cases, like using numba or theano or whatever, 'matrix @@ -1 @ foo' could be compiled into a call to solve() anyway.)
(Did I miss any?)
In practice it seems to me that the last use case is the one that might matter a lot in practice, but then again, it might not -- I'm not sure. For example, does anyone who teaches programming with numpy have a feeling about whether the existence of '@@ -1' would make a big difference to you and your students? (Alan? I know you were worried about losing the .I attribute on matrices if switching to ndarrays for teaching -- given that ndarray will probably not get a .I attribute, how much would the existence of @@ -1 affect you?)
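For concreteness, here is how the three use cases above are spelled in current numpy, without any new operator (my own sketch for illustration, not from the PEP):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
# 'vector @@ 2' would equal vector @ vector: the squared Euclidean length
assert np.dot(v, v) == 14.0

A = np.array([[1.0, 1.0], [0.0, 1.0]])
# 'matrix @@ n' is spelled np.linalg.matrix_power today
assert np.allclose(np.linalg.matrix_power(A, 3), A.dot(A).dot(A))

# 'matrix @@ -1 @ b' should usually be spelled solve(A, b) instead of inv(A) @ b
b = np.array([1.0, 2.0])
assert np.allclose(np.linalg.inv(A).dot(b), np.linalg.solve(A, b))
```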
On a more technical level, Guido is worried about how @@'s precedence should work (and this is somewhat related to the other thread about @'s precedence and associativity, because he feels that if we end up giving @ and * different precedence, then that makes it much less clear what to do with @@, and reduces the strength of the */**/@/@@ analogy). In particular, if we want to argue for @@ then we'll need to figure out what expressions like a @@ b @@ c and a ** b @@ c and a @@ b ** c should do.
A related question is what @@ should do if given an array as its right argument. In the current PEP, only integers are accepted, which rules out a bunch of the more complicated cases like a @@ b @@ c (at least assuming @@ is right-associative, like **, and I can't see why you'd want anything else). OTOH, in the brave new gufunc world, it technically would make sense to define @@ as being a gufunc with signature (m,m),()->(m,m), and the way gufuncs work this *would* allow the "power" to be an array -- for example, we'd have:
mat = randn(m, m)
pow = range(n)
result = gufunc_matrix_power(mat, pow)
assert result.shape == (n, m, m)
for i in xrange(n):
    assert np.all(result[i, :, :] == mat ** i)
In this case, a @@ b @@ c would at least be a meaningful expression to write. OTOH it would be incredibly bizarre and useless, so probably no one would ever write it.
As far as these technical issues go, my guess is that the correct rule is that @@ should just have the same precedence and the same (right) associativity as **, and in practice no one will ever write stuff like a @@ b @@ c. But if we want to argue for @@ we need to come to some consensus or another here.
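For reference, scalar ** already behaves this way, so giving @@ the same rules would at least be consistent (a quick sanity check of my own, not from the PEP):

```python
# ** is right-associative: a ** b ** c groups as a ** (b ** c)
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
assert (2 ** 3) ** 2 == 64

# ** also binds tighter than unary minus on its left
assert -2 ** 2 == -4
```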
It's also possible the answer is "ugh, these issues are too complicated, we should defer this until later when we have more experience with @ and gufuncs and stuff". After all, I doubt anyone else will swoop in and steal @@ to mean something else! OTOH, if e.g. there's a strong feeling that '@@ -1' will make a big difference in pedagogical contexts, then putting that off for years might be a mistake.
On Sat, Mar 15, 2014 at 8:38 PM, <josef.pktd@gmail.com> wrote:
I think I wouldn't use anything like @@ often enough to remember its meaning. I'd rather see english names for anything that is not **very** common.
I find A@@-1 pretty ugly compared to inv(A). A@@(0.5) might be nice (do we have matrix_sqrt ?)
scipy.linalg.sqrtm: http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.sqrtm.html Warren
Josef
On Sat, Mar 15, 2014 at 8:47 PM, Warren Weckesser <warren.weckesser@gmail.com> wrote:
On Sat, Mar 15, 2014 at 8:38 PM, <josef.pktd@gmail.com> wrote:
I think I wouldn't use anything like @@ often enough to remember its meaning. I'd rather see english names for anything that is not **very** common.
I find A@@-1 pretty ugly compared to inv(A). A@@(0.5) might be nice (do we have matrix_sqrt ?)
scipy.linalg.sqrtm: http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.sqrtm.html
maybe a good example: I could never figure that one out
M = sqrtm(A)
A = M @ M
but what we use in stats is
A = R.T @ R (eigenvectors dot diag(sqrt of eigenvalues))
which sqrt is A@@(0.5)?
Josef
On 16/03/2014 01:31, josef.pktd@gmail.com wrote:
Agreed. In general, "the matrix square root" isn't a well-defined quantity. For some uses, the Cholesky decomposition is what you want; for some others it's the matrix with the same eigenvectors, but the square root of the eigenvalues, etc. etc. As an important aside, it would be good if the docs addressed this. Yours, Andrew
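To make the ambiguity concrete, here is a small numpy-only sketch (my own illustration, not from the thread) contrasting the symmetric square root with the Cholesky factor of a symmetric positive definite A -- both "square" back to A, yet they are different matrices:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite

# Symmetric square root via eigendecomposition: M @ M == A
w, V = np.linalg.eigh(A)
M = V.dot(np.diag(np.sqrt(w))).dot(V.T)
assert np.allclose(M.dot(M), A)

# Cholesky factor: R.T @ R == A (numpy returns lower-triangular L, so R = L.T)
R = np.linalg.cholesky(A).T
assert np.allclose(R.T.dot(R), A)

# Both are square roots of A in some sense, but they are not the same matrix
assert not np.allclose(M, R)
```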
On Saturday, 15 March 2014 at 04:32 +0000, Nathaniel Smith wrote:
Another use case may rely on tensor contraction. Matrix multiplication appears to be a particular case of tensor contraction for matrices seen as 2nd-order tensors: (A @ B)_{ij} = A_{ik} B_{kj} using Einstein summation notation. @@ might also be used for double contraction as frequently used in continuum mechanics. For example, the relation between strain and stress (2nd-order tensors) involves the elasticity tensor (a 4th-order one) using the double contraction: S_{ij} = C_{ijkl} E_{kl}, which might be simply calculated with S = C @@ E, the variables S, E, C being instances of whatever class representing tensors. My two cents
Personally I did not like @@ in the first place. Sturla
On Sat, Mar 15, 2014 at 4:32 AM, Nathaniel Smith <njs@pobox.com> wrote:
For this discussion let's assume @ can be taken for granted, and that we can freely choose to either add @@ or not add @@ to the language. The question is: which do we think makes Python a better language (for us and in general)?
From the thread so far, it sounds like the consensus answer is "meh, whatever". So I'm thinking we should just drop @@ from the PEP, and if it turns out that this is a problem we can always revisit it in the ~3.6/3.7 timeframe. -- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
On Mon, Mar 17, 2014 at 11:53 AM, Nathaniel Smith <njs@pobox.com> wrote:
+1. Thanks! -- Robert Kern
On Mon, Mar 17, 2014 at 7:53 AM, Nathaniel Smith <njs@pobox.com> wrote:
+1 from here.
On Mon, Mar 17, 2014 at 10:01 AM, Aron Ahmadia <aron@ahmadia.net> wrote:
+1 too. Absent *clear* enthusiasm and support for new syntax/operators, I think being conservative and slow is the right approach. Just having @ will give us data and experience with this space, and it may become clear after one more cycle that we really need/want @@, or not, as the case may be. But it's easier to add it later if we really need it than to remove it if it proves to be a bad idea, so +1 for moving slowly on this.
On Mon, Mar 17, 2014 at 11:30 AM, Fernando Perez <fperez.net@gmail.com> wrote:
+1. Thanks Nathan for pushing this! Ondrej
participants (16)
- Alan G Isaac
- Andrew Jaffe
- Aron Ahmadia
- Charles R Harris
- Christophe Bal
- Fabrice Silva
- Fernando Perez
- Jaime Fernández del Río
- josef.pktd＠gmail.com
- Nathaniel Smith
- Olivier Delalleau
- Ondřej Čertík
- Robert Kern
- Stephan Hoyer
- Sturla Molden
- Warren Weckesser