
On Tue, Jun 4, 2019 at 7:28 PM Stephen J. Turnbull turnbull.stephen.fw@u.tsukuba.ac.jp wrote:
Cody Piersall writes:
would be the in-place matmul operator (@=) but there are use cases where matrix-multiplication of signals would actually be useful too.
If I recall correctly, the problem that the numeric community faced was that there are multiple "multiplication" operations that matrices "want to" support with operator notation because they're all frequently used in more or less complex expressions, not that matrix algebra needs to spell its multiplication operator differently from "*".
According to the OP, signals are "just integers". Integers do not need to support matrix multiplication because they *can't*. There may be matrices of signals that do want to support multiplication, but that will be a different type, and presumably multiplication of signal matrices will be supported by "*". Can you say that signal matrices will have more than one frequently needed "multiplication" operation?
Your statement about the history is absolutely correct, but please note that a dot product of two matrices, 1xN @ Nx1, yields a single value, so something like the following holds:
signal_result <== [sig sig sig ...] @ [sig, sig, sig, ...] (the ',' is used to make the second operand an Nx1 column, for example). Now imagine mixing @= with @: in one place @= would mean signal assignment, in another it would mean in-place matrix multiplication. What's more, once you start using matrices to drive a signal matrix, I expect many matrix operations, including @=, will be used very often to produce the stimuli (e.g. using ordinary numeric matrix operations to generate the desired result matrix to drive onto the signal matrix). That's why I refrained from using @= entirely.
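
To make the ambiguity concrete, here is a minimal sketch in plain Python. The Signal and SignalVector classes, and the assign() method standing in for the proposed '<==' operator, are hypothetical illustrations, not part of any existing HDL package:

# Hypothetical stand-ins used only to illustrate the '@' vs '@=' clash.
class Signal:
    def __init__(self, value=0):
        self.value = value
    def assign(self, value):          # stands in for the proposed '<==' operator
        self.value = value

class SignalVector:
    def __init__(self, signals):
        self.signals = list(signals)
    def __matmul__(self, other):      # 1xN @ Nx1 dot product -> a single value
        return sum(a.value * b.value
                   for a, b in zip(self.signals, other.signals))

row = SignalVector([Signal(1), Signal(2), Signal(3)])
col = SignalVector([Signal(4), Signal(5), Signal(6)])

result = Signal()
result.assign(row @ col)              # i.e. signal_result <== [sig ...] @ [sig, ...]
print(result.value)                   # 32

# If '@=' were also the signal-assignment operator, then stimulus code such as
#   stimulus @= transform             # in-place matrix multiply on plain numbers
# would spell two different things the same way, which is the clash described above.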