On Wed, Oct 26, 2022 at 9:31 AM <canhtart@gmail.com> wrote:
Hello!

I was curious about how AlphaTensor will affect NumPy and other similar applications, considering it has found a way to perform 3x3 matrix multiplication more efficiently. https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor. I am not even sure how NumPy does this under the hood; is it 2x2?

Is anyone working on implementing this 3x3 algorithm for NumPy? Is it too early, and if so, why? Are there any concerns about this algorithm?

numpy links against accelerated linear algebra libraries like OpenBLAS and Intel MKL to provide its matrix multiplication. If they find that the AlphaTensor results are better than the options they currently have, then numpy will get them. In general, I think it is unlikely that they will be used. Even the older state-of-the-art algorithms that they compare against, like Strassen's algorithm, are not often used in practice. Concerns like memory movement and the ability to use instruction-level parallelism on each kind of CPU tend to dominate over a marginal change in the number of multiplication operations. The answers to this StackOverflow question give some more information:

  https://stackoverflow.com/questions/1303182/how-does-blas-get-such-extreme-performance
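[Not part of the original reply: a minimal sketch illustrating the point above. It uses only standard NumPy APIs (np.show_config, the @ operator) to check which BLAS a given numpy build is linked against and to time a large matmul; the achieved GFLOP/s come from the BLAS's cache blocking and SIMD kernels, not from reducing the count of multiplications.]

    # Check which BLAS backend numpy delegates matrix multiplication to,
    # and time a matmul to see the throughput that backend delivers.
    import time
    import numpy as np

    # Prints build information, including the BLAS/LAPACK libraries
    # (e.g. OpenBLAS or MKL) that this numpy was linked against.
    np.show_config()

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b  # dispatched to the linked BLAS's dgemm routine
    elapsed = time.perf_counter() - start

    # A classical n x n matmul performs roughly 2*n**3 floating-point ops.
    gflops = 2 * n**3 / elapsed / 1e9
    print(f"{n}x{n} matmul: {elapsed:.3f} s (~{gflops:.1f} GFLOP/s)")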

--
Robert Kern