On Tue, Jan 23, 2024 at 3:18 PM Marten van Kerkwijk <mhvk@astro.utoronto.ca> wrote:
Hi All,
I have a PR [1] that adds `np.matvec` and `np.vecmat` gufuncs for matrix-vector and vector-matrix calculations, to add to plain matrix-matrix multiplication with `np.matmul` and the inner vector product with `np.vecdot`. They call BLAS where possible for speed. I'd like to hear whether these are good additions.
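The PR itself is not shown here, but the intended semantics can be sketched with `np.einsum`, assuming the usual gufunc signatures `(m,n),(n)->(m)` for `matvec` and `(n),(n,m)->(m)` for `vecmat` (real-valued case shown, so conjugation does not come into play):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # a matrix
x = rng.standard_normal(4)        # a vector matching A's columns
y = rng.standard_normal(3)        # a vector matching A's rows

mv = np.einsum('ij,j->i', A, x)   # matvec: A @ x, signature (m,n),(n)->(m)
vm = np.einsum('i,ij->j', y, A)   # vecmat (real case): y @ A, (n),(n,m)->(m)
vd = np.einsum('i,i->', x, x)     # vecdot (real case): the inner product

assert np.allclose(mv, A @ x)
assert np.allclose(vm, y @ A)
assert np.allclose(vd, x @ x)
```

Unlike `matmul` with a 1-D argument, gufuncs with these signatures keep the vector as a core dimension, so they broadcast naturally over stacks of vectors.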
I also note that for complex numbers, `vecmat` is defined as `x†A`, i.e., the complex conjugate of the vector is taken. This seems to be the standard and is what we used for `vecdot` too (`x†x`). However, it is *not* what `matmul` does for vector-matrix or indeed vector-vector products (remember that those are possible only if the vector is one-dimensional, i.e., not with a stack of vectors). I think this is a bug in matmul, which I'm happy to fix. But I'm posting here in part to get feedback on that.
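The discrepancy described above can be seen directly; since this is written with `matmul` and an explicit `np.conj`, it runs on any NumPy version. `matmul` treats a 1-D complex left operand as a plain (unconjugated) row vector, whereas the `x†A` convention conjugates first:

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])
A = np.array([[1j, 0], [0, 1]])

# matmul with a 1-D left operand: no conjugation, i.e. x^T A
no_conj = x @ A
# the x†A convention (what vecmat is proposed to use): conjugate the vector
with_conj = np.conj(x) @ A

# For complex input the two results differ:
assert not np.allclose(no_conj, with_conj)

# Same story for the vector-vector product: x†x is real and non-negative,
# while matmul's x @ x in general is not.
assert np.isclose(np.conj(x) @ x, (np.abs(x) ** 2).sum())
```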
Thanks!
Marten
I tend to agree with not using the complex conjugate for `vecmat`, but would prefer separate functions that make the convention explicit in the name. I also note that mathematicians use sesquilinear forms that put the conjugate on the other vector, so there are competing conventions. I prefer the Dirac convention myself, but many mathematical-methods texts use the opposite. It is tricky for the teacher in introductory courses, right up there with vectors being called contravariant when they are actually covariant (it is the coefficients that are contravariant). In any case, I think having the convention explicit in the name will avoid confusion.

Chuck
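The two conventions mentioned above differ only in which argument is conjugated; for complex vectors the results are complex conjugates of each other. A small illustration in plain NumPy:

```python
import numpy as np

x = np.array([1 + 1j, 2 - 1j])
y = np.array([0 + 2j, 1 + 1j])

# Dirac (physics) convention: conjugate-linear in the FIRST argument, <x|y> = x† y
dirac = np.conj(x) @ y
# Common mathematics convention: conjugate-linear in the SECOND argument, <x, y> = x^T ȳ
math_conv = x @ np.conj(y)

# The two conventions give complex-conjugate results:
assert np.isclose(dirac, np.conj(math_conv))
```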