I would certainly use einsum. It is almost perfect for these use cases, e.g.,
np.einsum('ki,kij,kj->k', A, inv(B), A)
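
For example, assuming A is stored as an (N, 3) stack of vectors rather than (N, 3, 1) column matrices, a minimal sketch (random data used only for illustration) would be:

import numpy as np
from numpy.linalg import inv

N = 1000
A = np.random.rand(N, 3)                      # stack of N 3-vectors
B = np.random.rand(N, 3, 3) + 3 * np.eye(3)   # stack of N 3x3 matrices, kept well conditioned

# y[k] = A[k] @ inv(B[k]) @ A[k] for every k, in a single call
y = np.einsum('ki,kij,kj->k', A, inv(B), A)

# sanity check against an explicit loop
y_loop = np.array([a @ inv(b) @ a for a, b in zip(A, B)])
assert np.allclose(y, y_loop)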

On Thu, Oct 26, 2017 at 12:38 PM Charles R Harris <charlesr.harris@gmail.com> wrote:
On Thu, Oct 26, 2017 at 12:11 PM, Daniele Nicolodi <daniele@grinta.net> wrote:
Hello,

is there a better way to write the dot product between stacks of
matrices?  In my case I need to compute

y = A.T @ inv(B) @ A

with A a 3x1 matrix and B a 3x3 matrix, N times, with N in the few
hundred thousand range.  I thus "vectorize" the thing using stacks of
matrices, so that A is an Nx3x1 matrix and B is Nx3x3, and I can write:

y = np.matmul(np.transpose(A, (0, 2, 1)), np.matmul(inv(B), A))

which I guess could also be written (in Python 3.5 and later):

y = np.transpose(A, (0, 2, 1)) @ inv(B) @ A

and I obtain an Nx1x1 y matrix which I can collapse to the vector I
need with np.squeeze().
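
For concreteness, a self-contained version of the above (with hypothetical random inputs standing in for the real data) looks like:

import numpy as np
from numpy.linalg import inv

N = 1000
A = np.random.rand(N, 3, 1)                   # stack of N 3x1 column matrices
B = np.random.rand(N, 3, 3) + 3 * np.eye(3)   # stack of N 3x3 matrices

y = np.squeeze(np.transpose(A, (0, 2, 1)) @ inv(B) @ A)   # shape (N,)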

However, the need for the second argument of np.transpose() seems odd
to me, because all the other functions handle the matrix stacking
transparently.

Am I missing something?  Is there a more natural matrix arrangement
that I could use to obtain the same results?

There has been discussion of adding an operator for transposing the matrices in a stack, but no resolution at this point. However, if you have a stack of vectors (not matrices) you can turn them into transposed matrices with `A[..., None, :]`, so `A[..., None, :] @ inv(B) @ A[..., None]`, and then squeeze.
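
A minimal sketch of that idea, assuming A is an (N, 3) stack of vectors and B an (N, 3, 3) stack (random data only for illustration):

import numpy as np
from numpy.linalg import inv

N = 1000
A = np.random.rand(N, 3)                      # stack of N 3-vectors
B = np.random.rand(N, 3, 3) + 3 * np.eye(3)   # stack of N 3x3 matrices

# A[..., None, :] has shape (N, 1, 3) and A[..., None] has shape (N, 3, 1),
# so the product has shape (N, 1, 1); squeeze collapses it to (N,)
y = np.squeeze(A[..., None, :] @ inv(B) @ A[..., None])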

Another option is to use einsum.

Chuck