[Numpy-discussion] matrix default to column vector?
Tom K.
tpk at kraussfamily.org
Sat Jun 6 21:57:06 EDT 2009
Fernando Perez wrote:
>
> On Sat, Jun 6, 2009 at 11:03 AM, Charles R Harris
> <charlesr.harris at gmail.com> wrote:
>
>> I don't think we can change the current matrix class; to do so would
>> break too much code. It would be nice to extend it with an explicit
>> inner product, but I can't think of any simple notation for it that
>> python would parse.
>
> Maybe it's time to make another push on python-dev for the pep-225
> stuff for other operators?
>
> https://cirl.berkeley.edu/fperez/static/numpy-pep225/
>
> Last year I got pretty much zero interest from python-dev on this, but
> they were very, very busy with 3.0 on the horizon. Perhaps once they put
> 3.1 out, it would be a good time to champion this again.
>
> It's slightly independent of the matrix class debate, but perhaps
> having special operators for real matrix multiplication could ease
> some of the bottlenecks of this discussion.
>
> It would be great if someone could champion that discussion on
> python-dev though, I don't see myself finding the time for it another
> time around...
>
How about PEP 211?
http://www.python.org/dev/peps/pep-0211/
PEP 211 proposes a single new operator (@) that could be used for matrix
multiplication.
MATLAB has elementwise versions of multiply, exponentiation, and left and
right division, written by putting a "." in front of the usual matrix
operators (* ^ \ /).
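
Just for concreteness, here's roughly how that same distinction is spelled
in numpy today with plain ndarrays (the arrays are just made-up examples):

    import numpy as np

    a = np.array([[1., 2.], [3., 4.]])
    b = np.array([[5., 6.], [7., 8.]])

    a * b          # elementwise multiply, like MATLAB's .*
    np.dot(a, b)   # matrix multiply, like MATLAB's * (no infix spelling today)
    a ** 2         # elementwise square, like MATLAB's .^
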
PEP 225 proposes "tilde" versions of + - * / % **.
While PEP 225 would allow matrix exponentiation and right divide, I think
those operations are much less common than matrix multiply. Plus, I think
following through with the PEP 225 implementation would create a
Frankenstein of a language that would be hard to read.
So, I would argue for pushing for a single new binary infix operator that
can then be used to implement "dot". We can resurrect PEP 211 or start a
new PEP or whatever; the main thing is to have a proposal that makes sense.
Actually, what do you all think of this:
@ --> matrix multiply
@@ --> matrix exponentiation
and we leave it at that - let's not get too greedy and try for matrix
inverse via @/ or something.
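
To make that concrete, here's a rough sketch of what the two operators
would be spelling, in terms of functions numpy already has (the @ and @@
syntax itself is of course hypothetical):

    import numpy as np

    A = np.array([[2., 0.], [1., 3.]])
    B = np.array([[1., 4.], [0., 1.]])

    # A @ B would mean:
    np.dot(A, B)

    # A @@ 3 would mean:
    np.linalg.matrix_power(A, 3)   # equivalent to np.dot(np.dot(A, A), A)
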
For the nd array operator, I would propose taking the last dimension of the
left array and "collapsing" it with the first dimension of the right array,
so

    shape (a0, ..., aL-1, k) @ (k, b0, ..., bM-1) --> (a0, ..., aL-1, b0, ..., bM-1)

Does that make sense?
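
If I'm not mistaken, np.tensordot with axes=1 already implements exactly
that collapsing rule, so the nd behavior could be sanity-checked against it:

    import numpy as np

    a = np.ones((2, 3, 4))   # shape (a0, a1, k) with k = 4
    b = np.ones((4, 5, 6))   # shape (k, b0, b1)

    c = np.tensordot(a, b, axes=1)   # contract last axis of a with first of b
    print(c.shape)                   # -> (2, 3, 5, 6)
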
With this proposal, matrices go away and all our lives are sane again. :-)
Long live the numpy ndarray! Thanks to the creators for all your hard work.
BTW - I love this stuff!
- Tom K.