[Numpy-discussion] [RFC] should we argue for a matrix power operator, @@?

josef.pktd at gmail.com josef.pktd at gmail.com
Sat Mar 15 21:31:22 EDT 2014


On Sat, Mar 15, 2014 at 8:47 PM, Warren Weckesser <
warren.weckesser at gmail.com> wrote:

>
> On Sat, Mar 15, 2014 at 8:38 PM, <josef.pktd at gmail.com> wrote:
>
>> I think I wouldn't use anything like @@ often enough to remember its
>> meaning. I'd rather see English names for anything that is not **very**
>> common.
>>
>> I find A@@-1 pretty ugly compared to inv(A)
>> A@@(-0.5) might be nice (do we have matrix_sqrt?)
>>
>
>
> scipy.linalg.sqrtm:
> http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.sqrtm.html
>

Maybe a good example: I could never figure that one out.

M = sqrtm(A)

A = M @ M

but what we use in stats is

A = R.T @ R
(with R = diag(sqrt of eigenvalues) @ eigenvectors.T)

which square root should A@@(0.5) return?
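
A rough sketch of the two candidates, just to make the ambiguity concrete
(assumes scipy; written with the @ this thread is about, i.e. np.dot today):

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.RandomState(0)
    B = rng.randn(4, 4)
    A = B @ B.T                        # symmetric positive definite

    # symmetric (principal) square root: M @ M == A
    M = sqrtm(A)

    # eigenvalue-based factor: R.T @ R == A, but R is not symmetric
    w, V = np.linalg.eigh(A)
    R = np.diag(np.sqrt(w)) @ V.T

    print(np.allclose(M @ M, A))       # True
    print(np.allclose(R.T @ R, A))     # True
    print(np.allclose(R, M))           # False -- two different "square roots"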

Josef



>
>
> Warren
>
>
>
>> Josef
>>
>>
>>
>> On Sat, Mar 15, 2014 at 5:11 PM, Stephan Hoyer <shoyer at gmail.com> wrote:
>>
>>> Speaking only for myself (and as someone who has regularly used matrix
>>> powers), I would not expect matrix power as @@ to follow from matrix
>>> multiplication as @. I do agree that matrix power is the only reasonable
>>> use for @@ (given @), but it's still not something I would be confident
>>> about without looking it up.
>>>
>>> We should keep in mind that each new operator imposes some (small)
>>> cognitive burden on everyone who encounters it for the first time, and,
>>> in this case, this will include a large fraction of all Python users,
>>> whether they do numerical computation or not.
>>>
>>> Guido has given us a tremendous gift in the form of @. Let's not insist
>>> on @@ when it is unclear whether the burden of figuring out what @@
>>> means would be worth it, even for heavily numeric code. I would certainly
>>> prefer to encounter norm(A), inv(A), matrix_power(A, n),
>>> fractional_matrix_power(A, n) and expm(A) rather than their infix
>>> equivalents. It will certainly not be obvious which of these @@ will
>>> support for objects from any given library.
>>>
>>> One useful data point might be to consider whether matrix power is
>>> available as an infix operator in other languages commonly used for
>>> numerical work. AFAICT from some quick searches:
>>> MATLAB: Yes
>>> R: No
>>> IDL: No
>>>
>>> All of these languages do, of course, implement infix matrix
>>> multiplication, but it is apparently not at all clear that an infix
>>> matrix power is needed.
>>>
>>> Best,
>>> Stephan
>>>
>>>
>>>
>>>
>>> On Sat, Mar 15, 2014 at 9:03 AM, Olivier Delalleau <shish at keba.be> wrote:
>>>
>>>> 2014-03-15 11:18 GMT-04:00 Charles R Harris <charlesr.harris at gmail.com>:
>>>>
>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 14, 2014 at 10:32 PM, Nathaniel Smith <njs at pobox.com> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Here's the second thread for discussion about Guido's concerns about
>>>>>> PEP 465. The issue here is that PEP 465 as currently written proposes
>>>>>> two new operators, @ for matrix multiplication and @@ for matrix power
>>>>>> (analogous to * and **):
>>>>>>   http://legacy.python.org/dev/peps/pep-0465/
>>>>>>
>>>>>> The main thing we care about of course is @; I pushed for including @@
>>>>>> because I thought it was nicer to have than not, and I thought the
>>>>>> analogy between * and ** might make the overall package more appealing
>>>>>> to Guido's aesthetic sense.
>>>>>>
>>>>>> It turns out I was wrong :-). Guido is -0 on @@, but willing to be
>>>>>> swayed if we think it's worth the trouble to make a solid case.
>>>>>>
>>>>>> Note that question now is *not*, how will @@ affect the reception of
>>>>>> @. @ itself is AFAICT a done deal, regardless of what happens with @@.
>>>>>> For this discussion let's assume @ can be taken for granted, and that
>>>>>> we can freely choose to either add @@ or not add @@ to the language.
>>>>>> The question is: which do we think makes Python a better language (for
>>>>>> us and in general)?
>>>>>>
>>>>>> Some thoughts to start us off:
>>>>>>
>>>>>> Here are the interesting use cases for @@ that I can think of:
>>>>>> - 'vector @@ 2' gives the squared Euclidean length (because it's the
>>>>>> same as vector @ vector). Kind of handy.
>>>>>> - 'matrix @@ n' of course gives the matrix power, which is of marginal
>>>>>> use but does come in handy sometimes, e.g., when looking at graph
>>>>>> connectivity.
>>>>>> - 'matrix @@ -1' provides a very transparent notation for translating
>>>>>> textbook formulas (with all their inverses) into code. It's a bit
>>>>>> unhelpful in practice, because (a) usually you should use solve(), and
>>>>>> (b) 'matrix @@ -1' is actually more characters than 'inv(matrix)'. But
>>>>>> sometimes transparent notation may be important. (And in some cases,
>>>>>> like using numba or theano or whatever, 'matrix @@ -1 @ foo' could be
>>>>>> compiled into a call to solve() anyway.)
>>>>>>
>>>>>> (Did I miss any?)
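
For concreteness, here is roughly what those three use cases correspond to
with today's named functions (a sketch only, np.dot standing in for @):

    import numpy as np

    rng = np.random.RandomState(0)
    v = rng.randn(3)
    A = rng.randn(3, 3)
    b = rng.randn(3)

    # 'vector @@ 2': squared Euclidean length, i.e. vector @ vector
    sq_len = np.dot(v, v)               # == np.linalg.norm(v) ** 2

    # 'matrix @@ n': repeated matrix multiplication
    A3 = np.linalg.matrix_power(A, 3)   # == A @ A @ A

    # 'matrix @@ -1 @ foo': textbook inverse, but solve() is preferred
    x_inv = np.dot(np.linalg.inv(A), b)
    x_solve = np.linalg.solve(A, b)     # same result, better numerics

    print(np.allclose(sq_len, np.linalg.norm(v) ** 2))   # True
    print(np.allclose(A3, A.dot(A).dot(A)))              # True
    print(np.allclose(x_inv, x_solve))                   # True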
>>>>>>
>>>>>> It seems to me that the last use case is the one that might matter a
>>>>>> lot in practice, but then again, it might not -- I'm not
>>>>>> sure. For example, does anyone who teaches programming with numpy have
>>>>>> a feeling about whether the existence of '@@ -1' would make a big
>>>>>> difference to you and your students? (Alan? I know you were worried
>>>>>> about losing the .I attribute on matrices if switching to ndarrays for
>>>>>> teaching -- given that ndarray will probably not get a .I attribute,
>>>>>> how much would the existence of @@ -1 affect you?)
>>>>>>
>>>>>> On a more technical level, Guido is worried about how @@'s precedence
>>>>>> should work (and this is somewhat related to the other thread about
>>>>>> @'s precedence and associativity, because he feels that if we end up
>>>>>> giving @ and * different precedence, then that makes it much less
>>>>>> clear what to do with @@, and reduces the strength of the */**/@/@@
>>>>>> analogy). In particular, if we want to argue for @@ then we'll need to
>>>>>> figure out what expressions like
>>>>>>    a @@ b @@ c
>>>>>> and
>>>>>>    a ** b @@ c
>>>>>> and
>>>>>>    a @@ b ** c
>>>>>> should do.
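
For reference, this is how ** already behaves in plain Python, which is
presumably the model @@ would follow if it went in:

    # ** is right-associative: a ** b ** c == a ** (b ** c)
    print(2 ** 3 ** 2)    # 512, i.e. 2 ** (3 ** 2), not (2 ** 3) ** 2 == 64

    # and it binds tighter than unary minus on its left
    print(-2 ** 2)        # -4, i.e. -(2 ** 2)
    print(2 ** -1)        # 0.5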
>>>>>>
>>>>>> A related question is what @@ should do if given an array as its right
>>>>>> argument. In the current PEP, only integers are accepted, which rules
>>>>>> out a bunch of the more complicated cases like a @@ b @@ c (at least
>>>>>> assuming @@ is right-associative, like **, and I can't see why you'd
>>>>>> want anything else). OTOH, in the brave new gufunc world, it
>>>>>> technically would make sense to define @@ as being a gufunc with
>>>>>> signature (m,m),()->(m,m), and the way gufuncs work this *would* allow
>>>>>> the "power" to be an array -- for example, we'd have:
>>>>>>
>>>>>>    mat = randn(m, m)
>>>>>>    pow = range(n)
>>>>>>    result = gufunc_matrix_power(mat, pow)
>>>>>>    assert result.shape == (n, m, m)
>>>>>>    for i in xrange(n):
>>>>>>        assert np.all(result[i, :, :] == np.linalg.matrix_power(mat, i))
>>>>>>
>>>>>> In this case, a @@ b @@ c would at least be a meaningful expression to
>>>>>> write. OTOH it would be incredibly bizarre and useless, so probably
>>>>>> no-one would ever write it.
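
gufunc_matrix_power above is hypothetical; here is a minimal pure-numpy
sketch of that (m,m),()->(m,m) behavior, just to make the broadcast of the
power concrete:

    import numpy as np

    def gufunc_matrix_power_sketch(mat, pow):
        # hypothetical stand-in: broadcast an integer 'power' array over a
        # single square matrix, like a gufunc with signature (m,m),()->(m,m)
        pow = np.asarray(pow)
        out = np.empty(pow.shape + mat.shape)
        for idx in np.ndindex(pow.shape):
            out[idx] = np.linalg.matrix_power(mat, int(pow[idx]))
        return out

    m, n = 4, 5
    mat = np.random.randn(m, m)
    result = gufunc_matrix_power_sketch(mat, range(n))
    assert result.shape == (n, m, m)
    assert np.allclose(result[3], np.linalg.matrix_power(mat, 3))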
>>>>>>
>>>>>> As far as these technical issues go, my guess is that the correct rule
>>>>>> is that @@ should just have the same precedence and the same (right)
>>>>>> associativity as **, and in practice no-one will ever write stuff like
>>>>>> a @@ b @@ c. But if we want to argue for @@ we need to come to some
>>>>>> consensus or another here.
>>>>>>
>>>>>> It's also possible the answer is "ugh, these issues are too
>>>>>> complicated, we should defer this until later when we have more
>>>>>> experience with @ and gufuncs and stuff". After all, I doubt anyone
>>>>>> else will swoop in and steal @@ to mean something else! OTOH, if e.g.
>>>>>> there's a strong feeling that '@@ -1' will make a big difference in
>>>>>> pedagogical contexts, then putting that off for years might be a
>>>>>> mistake.
>>>>>>
>>>>>>
>>>>> I don't have a strong feeling either way on '@@'. Matrix inverses are
>>>>> pretty common in matrix expressions, but I don't know that the new
>>>>> operator offers much advantage over a function call. The positive
>>>>> integer powers might be useful in some domains, as others have pointed
>>>>> out, but in computational practice one would tend to factor the evaluation.
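
An example of the kind of factoring this presumably refers to (assuming A is
symmetric/diagonalizable; an illustration only, not how matrix_power works
internally):

    import numpy as np

    B = np.random.randn(5, 5)
    A = B + B.T                    # symmetric, so eigh applies
    k = 10

    # direct matrix power
    direct = np.linalg.matrix_power(A, k)

    # factored: diagonalize once, then powering is elementwise on eigenvalues
    w, V = np.linalg.eigh(A)
    factored = V @ np.diag(w ** k) @ V.T

    print(np.allclose(direct, factored))   # True (up to rounding)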
>>>>>
>>>>> Chuck
>>>>>
>>>>
>>>> Personally I think it should go in, because:
>>>> - it's useful (although marginally), as in the examples previously
>>>> mentioned
>>>> - it's what people will expect
>>>> - it's the only reasonable use of @@ once @ makes it in
>>>>
>>>> As far as the details about precedence rules and what not... Yes,
>>>> someone should think about them and come up with rules that make sense, but
>>>> since it will pretty much only be used in unambiguous situations, this
>>>> shouldn't be a blocker.
>>>>
>>>> -=- Olivier