
On Wed, 5 Jul 2023 at 11:18, <haael@interia.pl> wrote:
> Python has the "star" ("*") operator for multiplication. In the context of collections it is supposed to mean element-wise multiplication. Its associated method is __mul__. It also has the double star ("**") operator for exponentiation, which is repeated multiplication. Its associated method is __pow__.
> In addition to that Python has the "at" ("@") operator for multiplication. In the context of collections it should mean a linear-algebra product, like a matrix times a matrix or a matrix times a vector. Its associated method is __matmul__.
> For completeness we should now have an exponentiation operator meaning repeated application of the "at" operator. Its method could be __matexp__.
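
In NumPy terms the proposal maps roughly onto the following (a sketch only: the @@ syntax is hypothetical, and np.linalg.matrix_power is the existing spelling of repeated @):

    import numpy as np

    a = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    b = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    a * b                            # elementwise product (__mul__)
    a ** 2                           # elementwise power (__pow__)
    a @ b                            # matrix product (__matmul__)

    # What a @@ 3 would presumably mean: repeated matrix product.
    np.linalg.matrix_power(a, 3)     # a @ a @ a
    np.linalg.matrix_power(a, -1)    # same as np.linalg.inv(a)
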
This was discussed at the time of the PEP that introduced matmul:
https://peps.python.org/pep-0465/#non-definition-of-matrix-power

Quote:
"""
Earlier versions of this PEP also proposed a matrix power operator, @@, analogous to **. But on further consideration, it was decided that the utility of this was sufficiently unclear that it would be better to leave it out for now, and only revisit the issue if – once we have more experience with @ – it turns out that @@ is truly missed.
"""

It has been nearly 10 years, which I guess is enough time to revisit, but most likely the numpy-discussion mailing list is the place to start that discussion:
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/

I think that the original authors wanted @@ and also other operators. At the time there had been previous, more ambitious PEPs proposing many more operators or custom operators, but those were rejected at least partly for being too broad. As I remember it there was a conscious effort to make PEP 465 as minimal as possible just to make sure that at least the @ operator got accepted.

Matrix powers are not as widely used as matrix multiplication, and probably the most common use would be M @@ -1 as a strange-looking version of M^-1, which would otherwise be spelled as np.linalg.inv(M). It is usually a bad idea to compute matrix inverses though (np.linalg.solve(a, b) is both faster and more accurate than np.linalg.inv(a) @ b). Matlab uses left and right division, a\b and a/b, for this, but NumPy etc. already define a/b to mean elementwise division, and having a new operator a\b mean something completely different from a/b might be confusing.

I think that the people who would have wanted to push forward with things like this have probably reduced the scope of what seems possible to the extent that the only thing left to do is possibly to add @@. If that is all that is on the table then it possibly does not have enough value to be worth the effort of a language change. On the other hand it is a relatively small change since @ is already added.

--
Oscar
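
The solve-versus-inverse point above looks like this in code (a sketch; the actual speed and accuracy differences depend on the matrix):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)

    x1 = np.linalg.solve(a, b)    # preferred: factorise and solve directly
    x2 = np.linalg.inv(a) @ b     # forms the explicit inverse first

    print(np.allclose(x1, x2))    # usually True, but x1 typically has the smaller error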