Moreover, the headline result is only valid for mod-2 arithmetic, something the authors chose to mention only in the fine print. That makes the excitement, which is a bit overblown in my opinion, somewhat misplaced.

So for now it seems like we don't need to take action for regular matmul. 
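To make the mod-2 caveat concrete, here is a minimal sketch (my own illustration, not from the paper) of what arithmetic over GF(2) means for matrix multiplication: entries are 0 or 1 and all sums are reduced mod 2, which is a different setting from the floating-point multiplication NumPy users care about.

```python
import numpy as np

# Two 4x4 matrices with entries in {0, 1}.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(4, 4))
B = rng.integers(0, 2, size=(4, 4))

# Product over GF(2): ordinary integer product, then reduced mod 2.
# This is the arithmetic in which AlphaTensor's headline
# multiplication-count result holds.
C_mod2 = (A @ B) % 2

# Over the integers/reals the same matrices give a different,
# unreduced result, so a mod-2 algorithm does not directly carry over.
C_int = A @ B

print(C_mod2)
print(C_int)
```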

On Wed, Oct 26, 2022, 16:03 Robert Kern <> wrote:
On Wed, Oct 26, 2022 at 9:31 AM <> wrote:

I was curious about how AlphaTensor will affect NumPy and other similar applications, considering it has found a way to perform 3x3 matrix multiplication more efficiently. I am not even sure how NumPy does this under the hood; is it 2x2?

Is anyone working on implementing this 3x3 algorithm for NumPy? Is it too early, and if so why? Are there any concerns about this algorithm?

numpy links against accelerated linear algebra libraries like OpenBLAS and Intel MKL to provide matrix multiplication. If they find that the AlphaTensor results are better than the options they currently have, then numpy will get them. In general, though, I think it is unlikely that they will be used. Even the older state of the art that they compare against, like Strassen's algorithm, is not often used in practice. Concerns like memory movement and the ability to use instruction-level parallelism on each kind of CPU tend to dominate over a marginal reduction in the number of multiplication operations. The answers to this StackOverflow question give some more information:
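For reference, here is a sketch of Strassen's algorithm, the kind of reduced-multiplication scheme being discussed (7 recursive multiplies per level instead of 8). This is my own illustrative implementation for power-of-two sizes, not anything NumPy or a BLAS library actually ships; the extra additions and memory traffic it incurs are exactly why such schemes rarely win in practice.

```python
import numpy as np

def strassen(A, B):
    """Multiply square matrices whose size is a power of two, using
    Strassen's 7-multiplication recursion instead of the naive 8."""
    n = A.shape[0]
    if n == 1:
        return A * B
    k = n // 2
    a, b = A[:k, :k], A[:k, k:]
    c, d = A[k:, :k], A[k:, k:]
    e, f = B[:k, :k], B[:k, k:]
    g, h = B[k:, :k], B[k:, k:]

    # Seven block products (versus eight for the textbook algorithm),
    # at the cost of many extra block additions.
    m1 = strassen(a + d, e + h)
    m2 = strassen(c + d, e)
    m3 = strassen(a, f - h)
    m4 = strassen(d, g - e)
    m5 = strassen(a + b, h)
    m6 = strassen(c - a, e + f)
    m7 = strassen(b - d, g + h)

    top = np.hstack([m1 + m4 - m5 + m7, m3 + m5])
    bottom = np.hstack([m2 + m4, m1 - m2 + m3 + m6])
    return np.vstack([top, bottom])

# Quick check against NumPy's BLAS-backed product.
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 8))
Y = rng.standard_normal((8, 8))
print(np.allclose(strassen(X, Y), X @ Y))
```

(You can see which BLAS your NumPy build delegates to with `np.show_config()`.)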

Robert Kern
NumPy-Discussion mailing list --