On 14/03/14 18:53, Guido van Rossum wrote:
- Did you consider a duck-typing (is that the word?) attribute? E.g. a*b is elementwise multiplication; a.M*b must be used for matrix multiplication.
This is exactly what has been tried with the types numpy.matrix and numpy.array. For numpy.matrix the * operator means matrix multiplication, and for numpy.array it means the Hadamard (elementwise) product. In practice it does not work as we want, and that is the raison d'être for this PEP.

First, we often need both operators in the same expression. That makes the code very hard to read; in such cases a function call is actually more readable. Second, NumPy often has to return temporary arrays from binary operators, and the duck-typing of these temporaries must be controlled too. If we are unlucky a temporary gets the wrong type, and we end up with errors that are very hard to track down. So in any serious code, matrix multiplication is written as a function call: numpy.dot. For this reason numpy.matrix is mostly useful for teaching students, not for serious numerical hacking with Python, and it is due for retirement from NumPy.

A dedicated operator will work, however, because languages like Matlab and R have proven that it does. Many in the NumPy community have extensive experience with Matlab and R, and we know that having two multiplication operators works. In contrast, our experience has shown that duck-typing is not useful in this context.

Though this PEP is written by Nathaniel, the whole NumPy dev team is behind it. It has been planned and discussed for a long time, both on the NumPy mailing list and on GitHub; Nathaniel took on the job of writing it down. So the other options have been considered very carefully.
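For illustration, here is a minimal sketch of the behaviour described above (the 2x2 values are arbitrary, purely to show what each operator does):

    import numpy as np

    A = np.array([[1., 2.], [3., 4.]])
    B = np.array([[5., 6.], [7., 8.]])

    # With ndarray, * is the elementwise (Hadamard) product;
    # matrix multiplication needs a function call.
    elementwise = A * B          # [[ 5., 12.], [21., 32.]]
    matmul      = np.dot(A, B)   # [[19., 22.], [43., 50.]]

    # With np.matrix, * means matrix multiplication instead, and most
    # operations keep returning np.matrix, so a temporary of the
    # "wrong" type can silently change what * means later on.
    Am = np.matrix(A)
    Bm = np.matrix(B)
    also_matmul = Am * Bm        # same result as np.dot(A, B)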
- Is @@ really necessary?
It seems the consensus on the NumPy list is that asking for @@ can wait.

- An expression like matrix @@ 0.5 is ambiguous, because it could mean either the matrix square root or the Cholesky factorization.

- An expression like Z = (X @@ -1) @ Y should, for numerical stability and speed, be written as solving a set of equations: X @ Z = Y. Symbolically the two are the same, but numerically they are not (a small sketch follows below).

So for most practical purposes, matrix exponentiation will require a function call anyway. The unambiguous case of positive integer powers might not be common enough.
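To illustrate the stability point, a rough sketch with plain NumPy calls (the @ and @@ operators do not exist yet, and the matrices here are arbitrary placeholders):

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.rand(4, 4) + 4.0 * np.eye(4)   # an arbitrary well-conditioned matrix
    Y = rng.rand(4, 2)

    # Mathematically Z = inv(X) * Y, but forming the inverse explicitly is
    # slower and less numerically stable than solving the system X Z = Y.
    Z_via_inverse = np.dot(np.linalg.inv(X), Y)
    Z_via_solve   = np.linalg.solve(X, Y)

    print(np.allclose(Z_via_inverse, Z_via_solve))  # close here, but solve()
                                                    # is the preferred form

Sturla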