
What about having an np.fastmath module for faster, lower-precision implementations? The error guarantees there would be looser, and possibly hardware-dependent. By default we get the high-precision version, but if the user knows what they are doing, they can get the speed. (A rough, purely hypothetical sketch of what that could look like follows the quoted message below.)

/David

On Wed, 31 May 2023, 07:58 Sebastian Berg, <sebastian@sipsolutions.net> wrote:
Hi all,
there was recently a PR to NumPy to improve the performance of sin/cos on most platforms (on my laptop it seems to be about 5x on simple inputs). This changes the error bounds on platforms that were not previously accelerated, which is most users:
https://github.com/numpy/numpy/pull/23399
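(For anyone who wants to check the speedup on their own machine, here is a rough timing sketch. It assumes the change affects the float32 paths; the absolute numbers will of course vary by CPU and NumPy version, so the before/after ratio is the signal:)

    import timeit
    import numpy as np

    # Rough micro-benchmark; compare the same script on NumPy builds
    # with and without the PR.
    x = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 1_000_000)
    x = x.astype(np.float32)
    t = timeit.timeit(lambda: np.sin(x), number=100)
    print(f"np.sin on 1e6 float32 values: {1e3 * t / 100:.2f} ms per call")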
The new error is <4 ULP, similar to what it was before, but previously that bound only applied on high-end Intel CPUs, so most users would not have noticed. And unfortunately, it is a bit unclear whether this change is too disruptive or not.
The main surprise is probably that with this change the range of both functions no longer includes 1 (and -1) exactly, and quite a lot of downstream packages noticed this and needed test adaptations.
Now, most of these are harmless: users shouldn't expect exact results from floating point math, and test tolerances need adjustment. OTOH, sin/cos are practically 1/-1 on a wide range of inputs (they are basically constant there), so it is surprising that the new implementations deviate from that and never reach 1/-1 exactly.
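(To make this concrete, here is a small sketch of how to see the deviation and how downstream tests can assert a ULP bound instead of exact equality; the exact values printed depend on platform and NumPy version:)

    import numpy as np

    # How far is float32 sin(pi/2) from 1.0, in (approximate) ULPs?
    x = np.float32(np.pi / 2)
    y = np.sin(x)
    ulps = abs(float(y) - 1.0) / float(np.spacing(np.float32(1.0)))
    print(y, "deviates from 1.0 by ~", ulps, "ULP")  # 0 on some platforms

    # Instead of checking for exact 1.0, bound the error in ULPs:
    np.testing.assert_array_max_ulp(y, np.float32(1.0), maxulp=4)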
Since quite a few downstream libs noticed this, and NumPy users cannot explicitly opt in to a different performance/precision trade-off, the question is whether it would be better to revert for now and hope for a better implementation later.
I doubt we can decide on a very clear-cut yes/no, but I am very interested in what everyone thinks: is this precision trade-off too surprising to users?
Cheers,
Sebastian
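Back to the fastmath idea mentioned above: here is a purely hypothetical sketch of the opt-in. Note that np.fastmath and the function under it do not exist in NumPy; the names are only for illustration:

    import numpy as np

    x = np.linspace(0.0, 2.0 * np.pi, 1_000_000, dtype=np.float32)

    # Default namespace: high-precision implementations with tight,
    # platform-independent error bounds.
    y_precise = np.sin(x)

    # Hypothetical opt-in namespace trading looser, possibly hardware-
    # dependent error bounds for speed (not a real NumPy API):
    # y_fast = np.fastmath.sin(x)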