On Tue, Apr 29, 2008 at 1:22 PM, Timothy Hochberg <tim.hochberg@ieee.org> wrote:

Let me throw out a couple more thoughts:

First, there seems to be disagreement about what a row_vector and column_vector are (and even whether they are sensible concepts, but let's leave that aside for the moment). One school of thought is that they are one-dimensional objects that have some orientation (hence row/column). They correspond, more or less, to covariant and contravariant tensors, although I can never recall which is which. The second view, which I suspect is influenced by MATLAB and its ilk, is that they are 2-dimensional 1xN and Nx1 arrays. It's my view that the pseudo-tensor approach is more powerful, but it does require that some meta-information be added to the array. This metadata can either take the form of making the different objects different classes, which leads to the matrix/row/column formulation, or of adding some sort of tag to the array object (proposal #5, which so far lacks any detail).
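Roughly, and just for illustration (the names below aren't a proposal for either scheme), the two views look like this in plain NumPy:

    import numpy as np

    # View 1: a vector is one-dimensional; any row/column orientation
    # would have to live in extra metadata (a class or a tag).
    v = np.array([1.0, 2.0, 3.0])      # shape (3,)

    # View 2 (MATLAB-style): row and column vectors are genuinely 2-D.
    row = v.reshape(1, -1)             # shape (1, 3)
    col = v.reshape(-1, 1)             # shape (3, 1)

    A = np.arange(9.0).reshape(3, 3)
    np.dot(A, v)      # shape (3,)   -- no orientation in the result
    np.dot(A, col)    # shape (3, 1) -- orientation carried along
    np.dot(row, A)    # shape (1, 3)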

Second, most of the stuff that we have been discussing so far is primarily about notational convenience. However, there is matrix-related stuff that is at best poorly supported now, namely operations on stacks of matrices (or vectors). As a concrete example, I at times need to work with stacks of small matrices. If I do the operations one by one, the overhead is prohibitive; however, most of that overhead can be avoided. For example, I rewrote some of the linalg routines to work on stacks of matrices, and for a 100x10x10 array (a stack of 100 10x10 matrices) inverse is seven times faster when operating on the whole stack than when operating on the matrices one at a time. This is a result of sharing the setup overhead; the C routines that are called are the same in either case.
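Sketched in code (just to show the contrast, not the actual benchmark), the two approaches look something like this, assuming an inverse that accepts a stack like the rewritten routine described above; np.linalg.inv in recent NumPy versions broadcasts over the leading axes in the same way:

    import numpy as np

    A = np.random.rand(100, 10, 10)   # a stack of 100 10x10 matrices

    # One at a time: the setup overhead is paid 100 times.
    inv_loop = np.array([np.linalg.inv(A[i]) for i in range(len(A))])

    # Whole stack at once: the setup overhead is paid only once.
    # (Requires a stack-aware inverse; recent NumPy's linalg.inv is one.)
    inv_stack = np.linalg.inv(A)

    assert np.allclose(inv_loop, inv_stack)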

Some operations on stacks of small matrices are easy to get, for instance +, -, *, /, and matrix multiply. The last is the interesting one. If A and B are stacks of matrices with the same number of dimensions, with the individual matrices stored in the last two indices, then

sum(A[...,:,:,newaxis]*B[...,newaxis,:,:], axis=-2)

is the matrix-wise multiplication of the two stacks. If B is replaced by a stack of 1D vectors, x, it is even simpler:

sum(A[...,:,:]*x[...,newaxis,:], axis=-1)

This doesn't go through BLAS, but for large stacks of small matrices it might even be faster than BLAS, because BLAS has a fair amount of per-call overhead for small matrices.
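For what it's worth, here's a quick sanity check of both expressions against dot on a single pair (shapes picked arbitrarily to match the example above):

    import numpy as np
    from numpy import newaxis

    A = np.random.rand(100, 10, 10)   # stack of 10x10 matrices
    B = np.random.rand(100, 10, 10)   # second stack, same shape
    x = np.random.rand(100, 10)       # stack of 1-D vectors

    # Stack-wise matrix-matrix product.
    AB = np.sum(A[..., :, :, newaxis] * B[..., newaxis, :, :], axis=-2)

    # Stack-wise matrix-vector product.
    Ax = np.sum(A[..., :, :] * x[..., newaxis, :], axis=-1)

    # Compare the first pair in the stack against the ordinary dot.
    assert np.allclose(AB[0], np.dot(A[0], B[0]))
    assert np.allclose(Ax[0], np.dot(A[0], x[0]))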

Chuck