Maybe I wasn't clear: I'm talking about the one-dimensional vector dot product, but applied to N-D arrays of vectors. Certainly dot products can be realized as matrix products, and often are in mathematics for convenience, but matrices and vectors are not the same thing, either theoretically or in code.

If I have two (M, N, k) arrays a and b, where k is the vector dimension, then to dot them using matrix notation I have to write:

    (a[:, :, np.newaxis, :] @ b[:, :, :, np.newaxis])[:, :, 0, 0]

which I certainly don't find readable (I always have to scratch my head a little to figure out whether the newaxis insertions are in the right places). If this is a common operation in larger expressions, then it basically has to be written as a separate function, and anyone reading the code may then have to look that function up to get the semantics.

It also breaks down if you want to write generic vector functions that can be applied along different axes: matmul only contracts over the last two axes, so the vector axis first has to be moved there, giving something like

    np.squeeze(np.moveaxis(a, axis, -1)[..., np.newaxis, :]
               @ np.moveaxis(b, axis, -1)[..., :, np.newaxis],
               axis=(-2, -1))

Compare this to the simplicity, composability and consistency of:

    a.dot(b, axis=-1) * np.cross(c, d, axis=-1).dot(e, axis=-1) / np.linalg.norm(f, axis=-1)

(the cross and norm functions already support an axis parameter).
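
To make this concrete, here is a minimal runnable sketch (the shapes and the rng seed are arbitrary placeholders) verifying that the matmul gymnastics above really just compute an elementwise multiply-and-sum over the last axis:

    import numpy as np

    M, N, k = 4, 5, 3
    rng = np.random.default_rng(0)
    a = rng.standard_normal((M, N, k))
    b = rng.standard_normal((M, N, k))

    # Matrix spelling: each length-k vector pair becomes a (1, k) row
    # times a (k, 1) column; the two singleton axes are then indexed away.
    via_matmul = (a[:, :, np.newaxis, :] @ b[:, :, :, np.newaxis])[:, :, 0, 0]

    # What it is actually computing: a plain sum of products over the
    # vector axis.
    via_sum = np.sum(a * b, axis=-1)

    assert np.allclose(via_matmul, via_sum)
    print(via_matmul.shape)  # (4, 5)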
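
And here is what the "separate function" route looks like today; vdot_along_axis is a hypothetical name, and the moveaxis-based body is just one way to write it, checked against an equivalent einsum:

    import numpy as np

    def vdot_along_axis(a, b, axis=-1):
        # Hypothetical helper: dot product of the vectors lying along `axis`.
        # matmul only contracts over the last two axes, so move the vector
        # axis to the end, form a (1, k) @ (k, 1) product per vector pair,
        # and squeeze the two singleton axes away again.
        a = np.moveaxis(a, axis, -1)
        b = np.moveaxis(b, axis, -1)
        return np.squeeze(a[..., np.newaxis, :] @ b[..., :, np.newaxis],
                          axis=(-2, -1))

    rng = np.random.default_rng(1)
    a = rng.standard_normal((4, 3, 5))
    b = rng.standard_normal((4, 3, 5))

    # Dot along the middle axis; compare against the same contraction
    # spelled out with einsum.
    assert np.allclose(vdot_along_axis(a, b, axis=1),
                       np.einsum('ikj,ikj->ij', a, b))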