[Python-Dev] PEP 465: A dedicated infix operator for matrix multiplication

Björn Lindqvist bjourne at gmail.com
Thu Apr 10 13:34:54 CEST 2014


2014-04-09 17:37 GMT+02:00 Nathaniel Smith <njs at pobox.com>:
> On Wed, Apr 9, 2014 at 4:25 PM, Björn Lindqvist <bjourne at gmail.com> wrote:
>> 2014-04-08 14:52 GMT+02:00 Nathaniel Smith <njs at pobox.com>:
>>> On Tue, Apr 8, 2014 at 9:58 AM, Björn Lindqvist <bjourne at gmail.com> wrote:
>>>> 2014-04-07 3:41 GMT+02:00 Nathaniel Smith <njs at pobox.com>:
>>>>> So, I guess as far as I'm concerned, this is ready to go. Feedback welcome:
>>>>>   http://legacy.python.org/dev/peps/pep-0465/
>>>>
>>>> Couldn't you please have made your motivation example actually runnable?
>>>>
>>>> import numpy as np
>>>> from numpy.linalg import inv, solve
>>>>
>>>> # Using dot function:
>>>> S = np.dot((np.dot(H, beta) - r).T,
>>>>            np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))
>>>>
>>>> # Using dot method:
>>>> S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
>>>>
>>>> Don't keep your reader hanging! Tell us what the magical variables H,
>>>> beta, r and V are. And why import solve when you aren't using it?
>>>> Curious readers that aren't very good at matrix math, like me, should
>>>> still be able to follow your logic. Even if it is just random data,
>>>> it's better than nothing!
>>>
>>> There's a footnote that explains the math in more detail and links to
>>> the real code this was adapted from. And solve is used further down in
>>> the section. But running it is really what you want, just insert:
>>>
>>> beta = np.random.randn(10)
>>> H = np.random.randn(2, 10)
>>> r = np.random.randn(2)
>>> V = np.random.randn(10, 10)
>>>
>>> Does that help? ;-)
>>
>> Thanks! Yes it does help. Then I can see that this expression:
>>
>>   np.dot(H, beta) - r
>>
>> evaluates to a vector, and a vector transposed is the vector itself.
>> So the .T in the expression (np.dot(H, beta) - r).T is
>> unnecessary, isn't it?
>
> In univariate regressions r and beta are vectors, and the .T is a
> no-op. The formula also works for multivariate regression, in which
> case r and beta become matrices; in this case the .T becomes
> necessary.

Then what are the shapes of those variables supposed to be? The
definitions you gave earlier aren't enough for this general case.
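
For concreteness, here is my guess at shapes that make the formula
work in the multivariate case. The specific choice of a second axis of
size k is just my assumption, not something from the PEP, so please
correct me if this isn't what you meant:

```python
import numpy as np
from numpy.linalg import inv

np.random.seed(0)

# Univariate case, with the shapes you suggested: S comes out as a
# scalar, and .T on the 1-D vector np.dot(H, beta) - r is a no-op.
p, m = 10, 2                    # p coefficients, m constraints
beta = np.random.randn(p)       # (p,)
H = np.random.randn(m, p)       # (m, p)
r = np.random.randn(m)          # (m,)
V = np.random.randn(p, p)       # (p, p)
d = np.dot(H, beta) - r         # (m,)
S = np.dot(d.T, np.dot(inv(np.dot(np.dot(H, V), H.T)), d))
assert np.ndim(S) == 0          # scalar result

# Multivariate case (my assumption): beta and r gain a second axis of
# size k, and now the .T is needed to make the shapes line up:
# (k, m) @ (m, m) @ (m, k) -> (k, k).
k = 3
beta = np.random.randn(p, k)    # (p, k)
r = np.random.randn(m, k)       # (m, k)
d = np.dot(H, beta) - r         # (m, k)
S = np.dot(d.T, np.dot(inv(np.dot(np.dot(H, V), H.T)), d))
assert S.shape == (k, k)
```

At least with these shapes everything type-checks, and dropping the .T
in the second case raises a shape mismatch.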


-- 
mvh/best regards Björn Lindqvist
