[Numpy-discussion] MKL and OpenBLAS

Sturla Molden sturla.molden at gmail.com
Fri Feb 7 07:44:46 EST 2014


Thomas Unterthiner <thomas_unterthiner at web.de> wrote:

> Sorry for going a bit off-topic, but:  do you have any links to the 
> benchmarks?  I googled around, but I haven't found anything. FWIW, on my 
> own machines OpenBLAS is on par with MKL (on an i5 laptop and an older 
> Xeon server) and actually slightly faster than ACML (on an FX8150) for 
> my use cases (I mainly tested DGEMM/SGEMM, and a few LAPACK calls). So 
> your claim is very surprising to me.

I was thinking about the benchmarks on Eigen's website, but they might be a
bit old now and possibly biased:

http://eigen.tuxfamily.org/index.php?title=Benchmark

It uses a single thread only, but for smaller matrix sizes Eigen tends to
come out ahead.

Carl Kleffner alerted me to this benchmark today:

http://gcdart.blogspot.de/2013/06/fast-matrix-multiply-and-ml.html

It shows superb performance and unparalleled scalability for OpenBLAS on
Opteron. MKL might be better on Intel CPUs though. ATLAS is doing quite
well too, better than I would expect, and generally better than Eigen. It
is also interesting that ACML is crap, except with a single-threaded BLAS.

Sturla



