In mathematics, an inner product is a sesquilinear form on pairs of vectors, so at the least it should return a scalar. In numpy, inner is a sum over the last indices. OK, so we have
In [10]: inner(ones(2),ones(2))
Out[10]: 2.0
This doesn't work as an inner product for column vectors, which would be the usual textbook convention, but that's alright; it's not a 'real' inner product. But what happens when matrices are involved?
In [11]: inner(mat(ones(2)),mat(ones(2)))
Out[11]: array([[ 2.]])
Hmm, we get an array, not a scalar. Maybe we can cheat:
In [12]: mat(ones(2))*mat(ones(2)).T
Out[12]: matrix([[ 2.]])
What about vdot (the conjugate of the mathematical convention, i.e., the Dirac convention)?
In [17]: vdot(mat(ones(2)),mat(ones(2)))
---------------------------------------------------------------------------
exceptions.ValueError                     Traceback (most recent call last)

/home/charris/<ipython console>

ValueError: vectors have different lengths
In [18]: vdot(mat(ones(2)),mat(ones(2)).T)
---------------------------------------------------------------------------
exceptions.ValueError                     Traceback (most recent call last)

/home/charris/<ipython console>

ValueError: vectors have different lengths
Nope, vdot doesn't work for row and column vectors. So there is *no* builtin inner product that works for matrices. I wonder if we should have one, and if so, what it should be called. I think that vdot should probably be modified to do the job. There is also the question of whether or not v.T * v should be a scalar when v is a column vector. I believe that construction is commonly used in matrix algebra as an alias for the inner product, although strictly speaking it uses the mapping between a vector space and its dual that the inner product provides.
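Something along these lines is what I have in mind for a matrix-friendly vdot -- just a minimal sketch, and the name and the decision to simply flatten the arguments first are assumptions, not a settled proposal:

from numpy import asarray, vdot, mat, ones

def matvdot(a, b):
    # Flatten whatever comes in (row vector, column vector, plain array)
    # to 1-d, then let the existing vdot do the conjugating dot product.
    return vdot(asarray(a).ravel(), asarray(b).ravel())

# matvdot(mat(ones(2)), mat(ones(2)).T) -> 2.0, a true scalar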
Chuck
Charles R Harris wrote:
> Nope, vdot doesn't work for row and column vectors. So there is *no* builtin inner product that works for matrices. I wonder if we should have one, and if so, what it should be called. I think that vdot should probably be modified to do the job. There is also the question of whether or not v.T * v should be a scalar when v is a column vector. I believe that construction is commonly used in matrix algebra as an alias for the inner product, although strictly speaking it uses the mapping between a vector space and its dual that the inner product provides.
As a matrix-using user, and without too much thinking, I would suggest treating inner() as a reduce-like method that returns a scalar (I believe a similar issue was discussed here some time ago in the context of other functions), and leaving '*'-style multiplication alone -- no special casing there. Actually, thanks to numpy's broadcasting capabilities, it shouldn't be a problem to get a 1,1-matrix in place of a scalar, right?
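Here is a rough sketch of what I mean, assuming .item() (or something like it) is acceptable for the rare case where one really needs the bare scalar:

from numpy import mat, ones, inner, multiply

v = mat(ones(2))          # a 1x2 row "vector"
p = v * v.T               # '*' keeps matrix semantics -> matrix([[ 2.]])

# The 1,1-matrix broadcasts against other arrays in elementwise operations,
# so it already behaves much like a scalar ...
q = multiply(p, ones(3))  # matrix([[ 2.,  2.,  2.]])

# ... and the true scalar is one call away when it is really needed,
s = p.item()              # 2.0

# while a reduce-like inner() would hand back the scalar directly:
t = inner(ones(2), ones(2))   # 2.0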
Thanks for caring about matrices,
Sven