[Numpy-discussion] Linking other libm-Implementation

Gregor Thalhammer gregor.thalhammer at gmail.com
Tue Feb 9 12:02:41 EST 2016


> On 09.02.2016 at 11:21, Nils Becker <nilsc.becker at gmail.com> wrote:
> 
> 2016-02-08 18:54 GMT+01:00 Julian Taylor <jtaylor.debian at googlemail.com>:
> > Which version of glibc's libm was used here? There are significant differences
> > in performance between versions.
> > Also, the input ranges are very important for these functions; depending
> > on the input, the speed of these functions can vary by factors of 1000.
> >
> > glibc now includes vectorized versions of most math functions; does
> > openlibm have vectorized math?
> > That's where most speed can be gained, a lot more than 25%.
> 
> glibc 2.22 was used, running on Arch Linux. As far as I know, openlibm does not include special vectorized functions (for reference, the vectorized operations in glibc: https://sourceware.org/glibc/wiki/libmvec).
> 
> 2016-02-08 23:32 GMT+01:00 Gregor Thalhammer <gregor.thalhammer at gmail.com>:
> Years ago I made the vectorized math functions from Intel's Vector Math Library (VML), part of MKL, available for numpy; see https://github.com/geggo/uvml
> It is not particularly difficult; you don't even have to change numpy. For some cases (e.g., exp) I have seen speedups of 5x-10x. Unfortunately MKL is not free, and free vector math libraries like Yeppp! implement far fewer functions or do not support the required strided memory layout. But to improve performance, numexpr, numba or theano are much better.
> 
> Gregor
> 
> 
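The fused-expression approach Gregor alludes to can be sketched with numexpr's public evaluate call. This is a minimal illustration, not from the thread; numexpr is an optional third-party package, so the import is guarded:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_000)

# Plain numpy evaluates this in several passes, materializing a temporary
# array for each intermediate result (x * x, then the negation, then exp).
y_np = np.exp(-x * x)

# numexpr compiles the whole expression and evaluates it in one blocked,
# multithreaded pass over the data; when built against MKL it can also use
# VML's vectorized exp. Guarded, since numexpr may not be installed.
try:
    import numexpr as ne
    y_ne = ne.evaluate("exp(-x * x)")
    assert np.allclose(y_np, y_ne)
except ImportError:
    pass
```

This illustrates why the speedup "depends on the expression": the gain comes from fusing several ufunc passes into one, not from any single faster function.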
> Thank you very much for the link! I did not know about numpy.set_numeric_ops.
> You are right, vectorized operations can push down the calculation time per element by large factors. The benchmarks done for the Yeppp! project also indicate that (however far you would trust them: http://www.yeppp.info/benchmarks.html). But I would agree that this domain should be left to specialized tools like numexpr, as fully exploiting the speedup depends on the expression being calculated. It is not suitable as a standard for numpy.
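The set_numeric_ops hook mentioned above can be sketched as follows. fast_sqrt is a hypothetical stand-in for a ufunc backed by a faster libm; the call is guarded because set_numeric_ops, available in the numpy of this era, was deprecated in 1.16 and removed in later releases:

```python
import numpy as np

def fast_sqrt(x, *args, **kwargs):
    # Hypothetical stand-in for a sqrt ufunc backed by a faster libm
    # (e.g. MKL/VML or openlibm); here it simply delegates to np.sqrt.
    return np.sqrt(x, *args, **kwargs)

# set_numeric_ops swapped the functions ndarray uses for its operations and
# returned the previous set, so the change could be undone. Guarded, since
# the function no longer exists in recent numpy releases.
if hasattr(np, "set_numeric_ops"):
    old_ops = np.set_numeric_ops(sqrt=fast_sqrt)
    np.set_numeric_ops(**old_ops)  # restore the defaults immediately
```

Returning the previous ops dict is what makes a temporary swap (e.g. for benchmarking one libm against another) practical.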

Why should numpy not provide fast transcendental math functions? For linear algebra it supports fast implementations, even non-free ones (MKL). Wouldn't it be nice if numpy outperformed C?

> 
> Still, I think it would be good to offer a choice of which libm numpy links against, if only to allow choosing or guaranteeing a specific accuracy/performance on different platforms and systems.
> Maybe maintaining a de-facto libm in npy_math could be replaced with a dependency on, e.g., openlibm. But such a decision would require thorough benchmarking/testing of the available solutions, especially with respect to the accuracy-performance trade-off that was mentioned.
> 
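A rough way to probe the accuracy side of that trade-off is to measure, in units in the last place (ulp), how far two implementations disagree. The sketch below (not from the thread) compares numpy's exp, backed by whatever libm numpy was linked against, with Python's math.exp; it only compares two libm-based implementations against each other, not against the correctly rounded result:

```python
import math
import numpy as np

# Scattered sample inputs over a moderate range.
rng = np.random.default_rng(0)
xs = rng.uniform(-20.0, 20.0, size=1000)

max_ulp = 0.0
for x in xs:
    a = float(np.exp(x))
    b = math.exp(x)
    # np.spacing(|b|) is the gap to the next float, i.e. one ulp of b.
    max_ulp = max(max_ulp, abs(a - b) / np.spacing(abs(b)))

print("max observed difference: %.2f ulp" % max_ulp)
```

A serious benchmark would instead compare against a correctly rounded reference (e.g. via an arbitrary-precision library), which is what the glibc error tables below report.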

Intel publishes accuracy/performance charts for VML/MKL:
https://software.intel.com/sites/products/documentation/doclib/mkl/vm/functions/_accuracyall.html

For GNU libc it is more difficult to find similarly precise data; I could only find:
http://www.gnu.org/software/libc/manual/html_node/Errors-in-Math-Functions.html

Gregor


> Cheers
> Nils
> 
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

