Precision changes to sin/cos in the next release?
Hi all, there was recently a PR to NumPy to improve the performance of sin/cos on most platforms (on my laptop it seems to be about 5x on simple inputs). This changes the error bounds on platforms that were not previously accelerated (most users): https://github.com/numpy/numpy/pull/23399 The new error is <4 ULP, similar to what it was before, but previously that bound only applied on high-end Intel CPUs, so most users would not have noticed it. And unfortunately, it is a bit unclear whether this is too disruptive or not. The main surprise is probably that with this change the range of both functions no longer includes 1 (and -1) exactly, and quite a lot of downstream packages noticed this and needed test adaptations. Now, most of these are harmless: users shouldn't expect exact results from floating point math, and test tolerances need adjustment. OTOH, sin/cos are practically 1/-1 on a wide range of inputs (they are basically constant there), so it is surprising that they deviate from those values and never reach 1/-1 exactly. Since quite a few downstream libs noticed this, and NumPy users cannot explicitly opt in to a different performance/precision trade-off, the question is whether it would be better to revert for now and hope for an improved implementation later. I doubt we can decide on a very clear-cut yes/no, but I am very interested in what everyone thinks: is this precision trade-off too surprising to users? Cheers, Sebastian
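For readers less familiar with the unit: an error bound of <4 ULP means the result is within four units in the last place (roughly, four adjacent representable doubles) of the exact result. A rough way to measure this is sketched below; it is an illustration only, not code from the PR, and it uses np.longdouble as a stand-in reference, which is genuinely higher precision only on some platforms.

```python
import numpy as np

def ulp_error(computed, reference):
    # Error expressed in units in the last place (ULP) at the reference value.
    return np.abs(computed - reference) / np.spacing(np.abs(reference))

x = np.linspace(0.1, 2.0 * np.pi, 7)
# np.longdouble stands in for a more precise reference; on x86 Linux it is
# 80-bit extended precision, elsewhere it may be identical to float64.
reference = np.sin(x.astype(np.longdouble)).astype(np.float64)
print(ulp_error(np.sin(x), reference))
```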
On Wed, 31 May 2023, at 4:11 PM, Juan Nunez-Iglesias wrote:
For me it is indeed too surprising, and I would be in favour of reverting.
(👆 Incidentally I wrote that before seeing https://github.com/scikit-image/scikit-image/pull/6970 😂)
Hi Sebastian, Could you clarify whether there are now varying code paths, depending on the CPU features available? As mentioned on the skimage issue, if results differ but errors are reduced across the board, I'd be happy to fix the test suite. But if this simply jiggers results, I'm less sure it is worth it. You also mentioned a potential middle ground, where the approximating polynomial could be expanded by another term? Overall, I feel this is a rather invasive change to NumPy that affects results that have been stable for many years, so it warrants careful consideration--perhaps even postponing until 2.0? Best regards, Stéfan On Tue, May 30, 2023, at 22:55, Sebastian Berg wrote:
On Tue, 2023-05-30 at 23:31 -0700, Stefan van der Walt wrote:
There is less variation than before, because the new code path will be taken on practically all CPUs, and I would think with the exact same algorithm (although some values probably fall back to the default implementation). But before, not many end users would have hit the less precise results in practice (only some CPUs).
The errors are basically identical to before on AVX512 machines, but they differ and are slightly larger around those special values. On most machines you used to get the very precise default math-library results.
You also mentioned a potential middle ground, where the approximating polynomial could be expanded by another term?
Yes, it is plausible that the team working on this can improve the precision; I am not sure how quickly that could happen or what the new trade-offs would be. - Sebastian
Hi Sebastian, I had a quick look at the PR and it looks like you re-implemented the sin/cos functions using SIMD. I wonder how it compares with SLEEF (a header-only library with a CPU-architecture-agnostic SIMD implementation of transcendental functions and precision validation). SLEEF is close to the Intel SVML library in spirit but extended to multiple architectures (tested on PowerPC and ARM, for example). This is just curiosity ... Like Juan, I am afraid of this change, since my code, which depends on numpy's sin/cos for rotations, is likely to see a large change of behavior. Cheers, Jerome On Wed, 31 May 2023 07:55:34 +0200 Sebastian Berg <sebastian@sipsolutions.net> wrote:
-- Jérôme Kieffer tel +33 476 882 445
On 31/5/23 09:33, Jerome Kieffer wrote:
I think we should revert the changes. They have proved to be disruptive, and I am not sure the improvement is worth the cost. The reversion should add a test that cements the current user expectations. The path forward is a different discussion, but for the 1.25 release I think we should revert. Matti
Hi, On Wed, May 31, 2023 at 8:40 AM Matti Picus <matti.picus@gmail.com> wrote:
Is there a way to make the changes opt-in for now, while we go back to see if we can improve the precision? If that's not practical, would it be reasonable to guess that only a very small proportion of users will notice large whole-program performance gains from the e.g. 5x speedup of the transcendental functions? Cheers, Matthew
Matthew Brett wrote:
This would be similar to the approach libmvec is taking (https://sourceware.org/glibc/wiki/libmvec), adding the `--disable-mathvec` option, although they favour the 4 ULP variants rather than the higher-accuracy ones by default. If someone can advise as to the most appropriate place for such a toggle I can look into adding it; I would prefer the default to be 4 ULP to match libc, though. Cheers, Chris
On Wed, May 31, 2023 at 12:28 PM Chris Sidebottom <chris.sidebottom@arm.com> wrote:
We have a build-time toggle for SVML (`disable-svml` in `meson_options.txt` and an `NPY_DISABLE_SVML` environment variable for the distutils build). This one should look similar I think - and definitely not a separate Python API with `np.fastmath` or similar. The flag can then default to the old (higher-precision, slower) behavior for <2.0, and the fast version for 2.0 and later.
The `libmvec` link above is not conclusive it seems to me Chris, given that the examples specify that one only gets the faster version with `-ffast-math`, hence it's off by default. Cheers, Ralf
On 31/5/23 14:12, Matthew Brett wrote:
There is a discussion in https://github.com/numpy/numpy/issues/23362 about a runtime context variable/manager that would extend errstate to have a precision flag as well. Matti
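To make the idea concrete, here is a purely hypothetical sketch of what an errstate-style runtime toggle could look like. Nothing like this exists in NumPy today: the `precision` context manager, the module-level flag, and the float32 "fast" branch are all stand-ins for illustration, not the API proposed in that issue.

```python
import contextlib
import numpy as np

_PRECISION = "high"  # hypothetical module-level state, errstate-style

@contextlib.contextmanager
def precision(mode):
    """Hypothetical runtime switch between precise and fast kernels."""
    global _PRECISION
    old, _PRECISION = _PRECISION, mode
    try:
        yield
    finally:
        _PRECISION = old

def sin(x):
    if _PRECISION == "fast":
        # Stand-in for a lower-accuracy SIMD kernel: compute in float32.
        return np.sin(np.asarray(x, dtype=np.float32)).astype(np.float64)
    return np.sin(np.asarray(x, dtype=np.float64))

x = np.linspace(0.0, 2.0 * np.pi, 5)
print(sin(x))              # default: precise path
with precision("fast"):
    print(sin(x))          # opted in: lower-precision path
```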
What about having a np.fastmath module for faster, lower-precision implementations? The error guarantees there would be looser, and possibly hardware dependent. By default we get the high-precision version, but if the user knows what they are doing, they can get the speed. /David On Wed, 31 May 2023, 07:58 Sebastian Berg, <sebastian@sipsolutions.net> wrote:
I would much, much rather have the special functions in the `np.*` namespace be more accurate than fast on all platforms. These would not have been on my list for general purpose speed optimization. How much time is actually spent inside sin/cos even in a trig-heavy numpy program? And most numpy programs aren't trig-heavy, but the precision cost would be paid and noticeable even for those programs. I would want fast-and-inaccurate functions to be strictly opt-in for those times that they really paid off. Probably by providing them in their own module or package rather than a runtime switch, because it's probably only a *part* of my program that needs that kind of speed and can afford that precision loss while there will be other parts that need the precision. On Wed, May 31, 2023 at 1:59 AM Sebastian Berg <sebastian@sipsolutions.net> wrote:
-- Robert Kern
On Wed, May 31, 2023 at 3:04 PM Robert Kern <robert.kern@gmail.com> wrote:
What Robert said :) But I still think the ideal would be the runtime option, maybe via the proposed context manager, for them as do need it, or want to try it out. Cheers, Matthew
I am not in favor of reverting this change. We already accounted for this in Matplotlib ( https://github.com/matplotlib/matplotlib/issues/25789 and https://github.com/matplotlib/matplotlib/pull/25813). It was not actually that disruptive and mostly identified tests that were too brittle to begin with. My understanding is that a majority of the impact is not that the results are inaccurate, it is that they are differently inaccurate than they used to be. If this is going to be reverted I think the burden should be on those who want the reversion to demonstrate that the different results actually matter. Tom On Wed, May 31, 2023 at 10:11 AM Matthew Brett <matthew.brett@gmail.com> wrote:
-- Thomas Caswell tcaswell@gmail.com
On Wed, May 31, 2023 at 4:19 PM Charles R Harris <charlesr.harris@gmail.com> wrote: I think that would be a good policy going forward.
There's a little more to it than "precise and slow good", "fast == less accurate == bad". We've touched on this when SVML got merged (e.g., [1]) and with other SIMD code, e.g. in the "Floating point precision expectations in NumPy" thread [2]. Even libm doesn't guarantee the best possible result of <0.5 ULP max error, and there are also considerations like whether any numerical errors are normally distributed around the exact mathematical answer or not (see, e.g., [3]). It seems fairly clear that with this recent change, the feeling is that the tradeoff is bad and that too much accuracy was lost, for not enough real-world gain. However, we have now had several years' worth of performance work with few complaints about accuracy issues. So I wouldn't throw out the baby with the bath water now and say that we always want the best accuracy only. It seems to me like we need a better methodology for evaluating changes. Contributors have been pretty careful, but looking back at SIMD PRs, there were usually detailed benchmarks but not always detailed accuracy impact evaluations. Cheers, Ralf [1] https://github.com/numpy/numpy/pull/19478#issuecomment-883748183 [2] https://mail.python.org/archives/list/numpy-discussion@python.org/thread/56B... [3] https://github.com/JuliaMath/openlibm/issues/212
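As a sketch of the kind of accuracy evaluation being asked for here, one can compare a ufunc against a high-precision mpmath reference and report the error in ULPs. The input range, sample size, and working precision below are arbitrary choices for illustration.

```python
import numpy as np
import mpmath

mpmath.mp.dps = 40  # compute the reference with 40 significant digits

rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, size=10_000)

got = np.sin(x)
# Correctly rounded double-precision reference values via mpmath.
ref = np.array([float(mpmath.sin(mpmath.mpf(float(v)))) for v in x])

ulps = np.abs(got - ref) / np.spacing(np.abs(ref))
print(f"max error: {ulps.max():.2f} ULP, mean: {ulps.mean():.3f} ULP")
```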
On Wed, May 31, 2023 at 10:40 AM Ralf Gommers <ralf.gommers@gmail.com> wrote:
If we had a portable, low-maintenance, high-accuracy library that we could vendorize (and its performance cost wasn't *horrid*), I might even advocate that we do that. Reliance on platform libms is more about our maintenance burden than a principled accuracy/performance tradeoff. My preference *is* definitely firmly on the "precise and slow good" side for these ufuncs because of the role these ufuncs play in real numpy programs; performance has a limited, situational effect while accuracy can have substantial ones across the board. It seems fairly clear that with this recent change, the feeling is that the tradeoff is bad and that too much accuracy was lost, for not enough real-world gain. However, we have now had several years' worth of performance work with few complaints about accuracy issues.
Except that we get a flurry of complaints now that they actually affect popular platforms. I'm not sure I'd read much into a lack of complaints before that.
I've only seen micro-benchmarks testing the runtime of individual functions, but maybe I haven't paid close enough attention. Have there been any benchmarks on real(ish) *programs* that demonstrate what utility these provide in even optimistic scenarios? I care precisely <1ULP about the absolute performance of `np.sin()` on its own. There are definitely programs that would care about that; I'm not sure any of them are (or should be) written in Python, though. -- Robert Kern
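For context, the numbers quoted in such discussions typically come from micro-benchmarks along the following lines; the array size and repeat count are arbitrary choices for illustration, and this measures the ufunc in isolation rather than whole-program impact.

```python
import timeit
import numpy as np

# A typical micro-benchmark: time np.sin alone on a large contiguous array.
x = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, size=1_000_000)
per_call = timeit.timeit(lambda: np.sin(x), number=50) / 50
print(f"np.sin on 1e6 float64 values: {per_call * 1e3:.2f} ms per call")
```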
On Wed, May 31, 2023 at 9:12 AM Robert Kern <robert.kern@gmail.com> wrote:
One of my takeaways is that there are special values where more care should be taken. Given the inherent inaccuracy of floating point computation, it can be argued that there should be no such expectation, but here we are. Some inaccuracies are more visible than others. I think it is less intrusive to have the option to lessen precision when more speed is needed than the other way around. Our experience is that most users are unsophisticated when it comes to floating point, so we should minimise the consequences for those users. Chuck
I think it is the special values aspect that is most concerning. Math is just littered with all sorts of identities, especially with trig functions. While I know that floating point calculations are imprecise, there are certain properties of these functions that still held, such as ranging from -1 to 1. As a reference point, on an M1 Mac using conda-forge the results are not perfect, but still right in most places. I'm ambivalent about reverting. I know I would love speed improvements, because transformation calculations in GIS are slow using numpy, but some coordinate transformations might also break because of these changes. Ben Root On Wed, May 31, 2023 at 11:40 AM Charles R Harris <charlesr.harris@gmail.com> wrote:
On Wed, May 31, 2023 at 3:51 PM Benjamin Root <ben.v.root@gmail.com> wrote:
I would say these are all correct. The true value of sin(np.pi) *is* 1.2246467991473532e-16 (to the precision shown). Remember np.pi is not π, but just a rational approximation to it. You can see this with mpmath (note the use of np.pi, which is a fixed float with about 15 digits of precision):
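A minimal sketch of that mpmath check (the 50-digit working precision is an arbitrary choice for illustration):

```python
import mpmath
import numpy as np

mpmath.mp.dps = 50  # work with 50 significant digits

# np.pi is the double closest to pi, so sin(np.pi) is the small but nonzero
# distance between np.pi and the true pi:
print(mpmath.sin(mpmath.mpf(np.pi)))      # ~1.2246467991473532e-16

# At np.pi/2 the input error only enters quadratically, so the true value is
# 0.99...9 (32 nines) and rounds to exactly 1.0 in double precision:
print(mpmath.sin(mpmath.mpf(np.pi) / 2))
```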
On the other hand, the "correct" floating-point value of sin(np.pi/2) is exactly 1.0, because the true value rounds to 1.0 in double precision. That's 32 9's after the decimal point before the true value deviates from 1. (Things work out nicer at the 1s than at the 0s because the derivatives of the trig functions are zero there, so the error in the input only shows up at second order in the output rather than first.) Aaron Meurer
On Wed, May 31, 2023 at 5:51 PM Benjamin Root <ben.v.root@gmail.com> wrote:
FWIW, those ~0 answers are actually closer to the correct answers than 0 would be because `np.pi` is not actually π. Those aren't problems in the implementations of np.sin/np.cos, just the intrinsic problems with floating point representations and the choice of radians which places particularly special values at places in between adjacent representable floating point numbers.
Good to know. Do you have any concrete example that might be worth taking a look at in more detail? Either for performance or accuracy. -- Robert Kern
Just for reference, these are the current results on the numpy main branch at special points:
In [1]: import numpy as np
In [2]: np.sin(0.0)
Out[2]: 0.0
In [3]: np.cos(0.0)
Out[3]: 0.9999999999999999
In [4]: np.cos(2*np.pi)
Out[4]: 0.9999999999999998
In [5]: np.sin(2*np.pi)
Out[5]: -2.4492935982947064e-16
In [6]: np.sin(np.pi)
Out[6]: 1.2246467991473532e-16
In [7]: np.cos(np.pi)
Out[7]: -0.9999999999999998
In [8]: np.cos(np.pi/2)
Out[8]: 6.123233995736766e-17
In [9]: np.sin(np.pi/2)
Out[9]: 0.9999999999999998
In [10]: np.__version__
Out[10]: '2.0.0.dev0+60.g174dfae62'
On Wed, May 31, 2023 at 6:20 PM Robert Kern <robert.kern@gmail.com> wrote:
-- Thomas Caswell tcaswell@gmail.com
Thanks for your work on this, Sebastian. I think there is a benefit for new users and learners to have visually obviously-correct results for the identities. The SciPy Developer's Guide [1] several of us worked on last week uses Snell's Law as a teaching example, and it would now give some results that would make newcomers double-take. You could argue that it's important to learn early not to expect too much from floats, but nonetheless I think something tangible is lost with this change, in a teaching context. [1] https://learn.scientific-python.org/development/tutorials/module/ Dan Daniel B. Allan, Ph.D (he/him) Data Science and Systems Integration NSLS-II Brookhaven National Laboratory
participants (16)
- Aaron Meurer
- Allan, Daniel
- Benjamin Root
- Charles R Harris
- Chris Sidebottom
- David Menéndez Hurtado
- Jerome Kieffer
- Juan Nunez-Iglesias
- Matthew Brett
- Matthew Brett
- Matti Picus
- Ralf Gommers
- Robert Kern
- Sebastian Berg
- Stefan van der Walt
- Thomas Caswell