Slightly off-topic - accuracy of C exp function?

Hi,

I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX. The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).

Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?

Cheers,

Matthew

On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX.
The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).
Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?
C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point operations ... and of the library functions in <math.h> and <complex.h> that return floating point results is implementation defined. The implementation may state that the accuracy is unknown." (This last sentence is basically saying that, with regard to some higher-up clauses that required all conforming implementations to document this stuff, saying "eh, who knows" counts as documenting it. Hooray for standards!)

Presumably the accuracy in this case is a function of the C library anyway, not the compiler? Numpy has its own implementations for a bunch of the math functions, and it's been unclear in the past whether numpy or the libc implementations were better in any particular case.

-n

-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org
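For what it's worth, a minimal sketch (not from the thread; it assumes a standard CPython / NumPy install, where math.exp normally goes through the platform libm) of how to see whether the two implementations disagree for a given input:

import math
import numpy as np

x = 0.7                        # hypothetical test value
libm_val = math.exp(x)         # CPython's math.exp, usually the platform libm
numpy_val = float(np.exp(x))   # NumPy's own ufunc implementation
# difference expressed in units of the spacing at the result (~ULPs)
print((numpy_val - libm_val) / np.spacing(libm_val))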

On Wed, Apr 23, 2014 at 9:43 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX.
The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).
Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?
C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point operations ... and of the library functions in <math.h> and <complex.h> that return floating point results is implementation defined. The implementation may state that the accuracy is unknown." (This last sentence is basically saying that with regard to some higher up clauses that required all conforming implementations to document this stuff, saying "eh, who knows" counts as documenting it. Hooray for standards!)
Presumably the accuracy in this case is a function of the C library anyway, not the compiler? Numpy has its own implementations for a bunch of the math functions, and it's been unclear in the past whether numpy or the libc implementations were better in any particular case.
In the case of the MS runtime, at least version 9 (as shipped in VS 2008), our implementation is likely to be better (most of the code was taken from the Sun math library back when the license allowed it).

David
-n
-- Nathaniel J. Smith Postdoctoral researcher - Informatics - University of Edinburgh http://vorpus.org

Hi, On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX.
The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).
Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?
C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point operations ... and of the library functions in <math.h> and <complex.h> that return floating point results is implementation defined. The implementation may state that the accuracy is unknown." (This last sentence is basically saying that with regard to some higher up clauses that required all conforming implementations to document this stuff, saying "eh, who knows" counts as documenting it. Hooray for standards!)
Presumably the accuracy in this case is a function of the C library anyway, not the compiler?
Mingw-w64 implementation is in assembly: http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/e...
Numpy has its own implementations for a bunch of the math functions, and it's been unclear in the past whether numpy or the libc implementations were better in any particular case.
I only investigated this particular value, in which case it looked as though the OSX value was closer to the exact value (checked via sympy.mpmath) - by ~1 unit in the last place (ULP). This was causing a divergence in the Powell optimization path and therefore a single scipy test failure. I haven't investigated further - I was wondering what investigation I should do, beyond running the numpy / scipy test suites.

Cheers,

Matthew
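A rough sketch of that kind of single-value check (assuming mpmath is installed; the input value and working precision here are made-up placeholders, not the ones from the failing test):

import numpy as np
import mpmath

mpmath.mp.dps = 50                           # 50 decimal digits of working precision
x = 0.7                                      # hypothetical input
computed = float(np.exp(x))
nearest = float(mpmath.exp(mpmath.mpf(x)))   # exact value, rounded to the nearest double
# error in units of the spacing at the result's magnitude (~ULPs)
print((computed - nearest) / np.spacing(nearest))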

Hi, On Wed, Apr 23, 2014 at 11:59 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX.
The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).
Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?
C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point operations ... and of the library functions in <math.h> and <complex.h> that return floating point results is implementation defined. The implementation may state that the accuracy is unknown." (This last sentence is basically saying that with regard to some higher up clauses that required all conforming implementations to document this stuff, saying "eh, who knows" counts as documenting it. Hooray for standards!)
Presumably the accuracy in this case is a function of the C library anyway, not the compiler?
Mingw-w64 implementation is in assembly:
http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/e...
Numpy has its own implementations for a bunch of the math functions, and it's been unclear in the past whether numpy or the libc implementations were better in any particular case.
I only investigated this particular value, in which case it looked as though the OSX value was closer to the exact value (via sympy.mpmath) - by ~1 unit-at-the-last-place. This was causing a divergence in the powell optimization path and therefore a single scipy test failure. I haven't investigated further - was wondering what investigation I should do, more than running the numpy / scipy test suites.
Investigating further, with this script:

https://gist.github.com/matthew-brett/11301221

The following are tests of np.exp accuracy for input values between 0 and 10, for numpy 1.8.1.

If np.exp(x) performs perfectly, it will return the nearest floating point value to the exact value of exp(x). If it does, this scores a zero for error in the tables below. If 'proportion of zeros' is 1, then np.exp performs perfectly for all tested input values (as is the case for linux here).

OSX 10.9

Proportion of zeros: 0.99789
Sum of error: 2.15021267458e-09
Sum of squared error: 2.47149370032e-14
Max / min error: 5.96046447754e-08 -2.98023223877e-08
Sum of squared relative error: 5.22456992025e-30
Max / min relative error: 2.19700100681e-16 -2.2098803255e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Debian Jessie / Sid

Proportion of zeros: 1.0
Sum of error: 0.0
Sum of squared error: 0.0
Max / min error: 0.0 0.0
Sum of squared relative error: 0.0
Max / min relative error: 0.0 0.0
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Mingw-w64 Windows 7

Proportion of zeros: 0.82089
Sum of error: 8.08415331122e-07
Sum of squared error: 2.90045099615e-12
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.18466468175e-28
Max / min relative error: 2.22041308226e-16 -2.22042100773e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Take-home: the exp implementation for mingw-w64 is exactly (floating point) correct 82% of the time, and one unit in the last place (ULP) off for the rest [1]. OSX is off by 1 ULP only 0.2% of the time.

Is mingw-w64 accurate enough? Do we have any policy on this?

Cheers,

Matthew

[1] http://matthew-brett.github.io/pydagogue/floating_error.html
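For readers without the gist to hand, here is a minimal reconstruction of the kind of check the numbers above come from (a sketch only - the sample count, precision and variable names are assumptions, not the gist's actual code; it needs numpy and mpmath):

import numpy as np
import mpmath

mpmath.mp.dps = 50
until = 10                              # upper end of the tested inputs
vals = np.linspace(0, until, 10000)     # assumed number of test values
# exact exp(x), rounded to the nearest double, for each input
exact_exp = np.array([float(mpmath.exp(mpmath.mpf(float(v)))) for v in vals])
errors = np.exp(vals) - exact_exp       # 0.0 wherever np.exp is correctly rounded
rel_errors = errors / exact_exp
eps = np.finfo(np.double).eps

print('Proportion of zeros:', np.mean(errors == 0))
print('Sum of error:', errors.sum())
print('Sum of squared error:', (errors ** 2).sum())
print('Max / min error:', errors.max(), errors.min())
print('Sum of squared relative error:', (rel_errors ** 2).sum())
print('Max / min relative error:', rel_errors.max(), rel_errors.min())
print('eps:', eps)
print('Proportion of relative err >= eps:', np.mean(np.abs(rel_errors) >= eps))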

On Sat, Apr 26, 2014 at 6:37 PM, Matthew Brett <matthew.brett@gmail.com>wrote:
Hi,
On Wed, Apr 23, 2014 at 11:59 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX.
The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).
Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?
C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point operations ... and of the library functions in <math.h> and <complex.h> that return floating point results is implementation defined. The implementation may state that the accuracy is unknown." (This last sentence is basically saying that with regard to some higher up clauses that required all conforming implementations to document this stuff, saying "eh, who knows" counts as documenting it. Hooray for standards!)
Presumably the accuracy in this case is a function of the C library anyway, not the compiler?
Mingw-w64 implementation is in assembly:
http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/e...
Numpy has its own implementations for a bunch of the math functions, and it's been unclear in the past whether numpy or the libc implementations were better in any particular case.
I only investigated this particular value, in which case it looked as though the OSX value was closer to the exact value (via sympy.mpmath) - by ~1 unit-at-the-last-place. This was causing a divergence in the powell optimization path and therefore a single scipy test failure. I haven't investigated further - was wondering what investigation I should do, more than running the numpy / scipy test suites.
Investigating further, with this script:
https://gist.github.com/matthew-brett/11301221
The following are tests of np.exp accuracy for input values between 0 and 10, for numpy 1.8.1.
If np.exp(x) performs perfectly, it will return the nearest floating point value to the exact value of exp(x). If it does, this scores a zero for error in the tables below. If 'proportion of zeros' is 1, then np.exp performs perfectly for all tested input values (as is the case for linux here).
OSX 10.9
Proportion of zeros: 0.99789
Sum of error: 2.15021267458e-09
Sum of squared error: 2.47149370032e-14
Max / min error: 5.96046447754e-08 -2.98023223877e-08
Sum of squared relative error: 5.22456992025e-30
Max / min relative error: 2.19700100681e-16 -2.2098803255e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Debian Jessie / Sid

Proportion of zeros: 1.0
Sum of error: 0.0
Sum of squared error: 0.0
Max / min error: 0.0 0.0
Sum of squared relative error: 0.0
Max / min relative error: 0.0 0.0
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Mingw-w64 Windows 7

Proportion of zeros: 0.82089
Sum of error: 8.08415331122e-07
Sum of squared error: 2.90045099615e-12
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.18466468175e-28
Max / min relative error: 2.22041308226e-16 -2.22042100773e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Take-home: the exp implementation for mingw-w64 is exactly (floating point) correct 82% of the time, and one unit in the last place (ULP) off for the rest [1]. OSX is off by 1 ULP only 0.2% of the time.
Windows 64 with MKL

\WinPython-64bit-3.3.2.2\python-3.3.2.amd64>python "E:\Josef\eclipsegworkspace\statsmodels-git\local_scripts\local_scripts\try_exp_error.py"

Proportion of zeros: 0.99793
Sum of error: -2.10546855506e-07
Sum of squared error: 3.33304327526e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.98420694339e-30
Max / min relative error: 2.20881302691e-16 -2.18321571939e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Windows 32 bit python with official MingW binaries

Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on win32

Proportion of zeros: 0.99464
Sum of error: -3.91621083118e-07
Sum of squared error: 9.2239247812e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 1.3334972729e-29
Max / min relative error: 2.21593462148e-16 -2.2098803255e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Is mingw-w64 accurate enough? Do we have any policy on this?
I wouldn't worry about a missing or an extra eps in our applications, but the competition is more accurate.

Josef
Cheers,
Matthew
[1] http://matthew-brett.github.io/pydagogue/floating_error.html

On Sat, Apr 26, 2014 at 8:05 PM, <josef.pktd@gmail.com> wrote:
On Sat, Apr 26, 2014 at 6:37 PM, Matthew Brett <matthew.brett@gmail.com>wrote:
Hi,
On Wed, Apr 23, 2014 at 11:59 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett < matthew.brett@gmail.com> wrote:
Hi,
I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX.
The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).
Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?
C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point operations ... and of the library functions in <math.h> and <complex.h> that return floating point results is implementation defined. The implementation may state that the accuracy is unknown." (This last sentence is basically saying that with regard to some higher up clauses that required all conforming implementations to document this stuff, saying "eh, who knows" counts as documenting it. Hooray for standards!)
Presumably the accuracy in this case is a function of the C library anyway, not the compiler?
Mingw-w64 implementation is in assembly:
http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/e...
Numpy has its own implementations for a bunch of the math functions, and it's been unclear in the past whether numpy or the libc implementations were better in any particular case.
I only investigated this particular value, in which case it looked as though the OSX value was closer to the exact value (via sympy.mpmath) - by ~1 unit-at-the-last-place. This was causing a divergence in the powell optimization path and therefore a single scipy test failure. I haven't investigated further - was wondering what investigation I should do, more than running the numpy / scipy test suites.
Investigating further, with this script:
https://gist.github.com/matthew-brett/11301221
The following are tests of np.exp accuracy for input values between 0 and 10, for numpy 1.8.1.
If np.exp(x) performs perfectly, it will return the nearest floating point value to the exact value of exp(x). If it does, this scores a zero for error in the tables below. If 'proportion of zeros' is 1 - then np.exp performs perfectly for all tested values of exp (as is the case for linux here).
OSX 10.9
Proportion of zeros: 0.99789
Sum of error: 2.15021267458e-09
Sum of squared error: 2.47149370032e-14
Max / min error: 5.96046447754e-08 -2.98023223877e-08
Sum of squared relative error: 5.22456992025e-30
Max / min relative error: 2.19700100681e-16 -2.2098803255e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Debian Jessie / Sid

Proportion of zeros: 1.0
Sum of error: 0.0
Sum of squared error: 0.0
Max / min error: 0.0 0.0
Sum of squared relative error: 0.0
Max / min relative error: 0.0 0.0
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Mingw-w64 Windows 7

Proportion of zeros: 0.82089
Sum of error: 8.08415331122e-07
Sum of squared error: 2.90045099615e-12
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.18466468175e-28
Max / min relative error: 2.22041308226e-16 -2.22042100773e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Take-home: the exp implementation for mingw-w64 is exactly (floating point) correct 82% of the time, and one unit in the last place (ULP) off for the rest [1]. OSX is off by 1 ULP only 0.2% of the time.
Windows 64 with MKL
\WinPython-64bit-3.3.2.2\python-3.3.2.amd64>python "E:\Josef\eclipsegworkspace\statsmodels-git\local_scripts\local_scripts\try_exp_error.py"

Proportion of zeros: 0.99793
Sum of error: -2.10546855506e-07
Sum of squared error: 3.33304327526e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.98420694339e-30
Max / min relative error: 2.20881302691e-16 -2.18321571939e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Windows 32 bit python with official MingW binaries
Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on win32
Proportion of zeros: 0.99464
Sum of error: -3.91621083118e-07
Sum of squared error: 9.2239247812e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 1.3334972729e-29
Max / min relative error: 2.21593462148e-16 -2.2098803255e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Is mingw-w64 accurate enough? Do we have any policy on this?
I wouldn't worry about a missing or an extra eps in our applications, but the competition is more accurate.
Just for comparison, I increased `until` to 300: the proportion of zeros and the relative error stay about the same for both MKL and your wheels.

The absolute errors are huge; the following is MKL:

Sum of error: -3.78802736366e+112
Sum of squared error: 1.51136754049e+225
Max / min error: 4.80981520952e+111 -3.84785216762e+112

(I looked a lot at the behavior of exp in the hundreds recently :()

As an illustration of why I don't care about one **relative** eps:

>>> np.finfo(np.double).eps + 10 == 10
True

https://github.com/scipy/scipy/pull/3547 and many others

Josef
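A short illustration of the point (standard float64 assumed): one machine eps is far below the spacing of doubles around 10, so an error of one ULP at that magnitude is still less than one eps in relative terms, while adding a single absolute eps to 10 vanishes entirely:

import numpy as np

eps = np.finfo(np.double).eps     # 2.220446049250313e-16, the spacing at 1.0
print(eps + 10 == 10)             # True: one eps is below half the spacing at 10
print(np.spacing(10.0))           # ~1.78e-15: the absolute size of one ULP at 10
print(np.spacing(10.0) / 10.0)    # ~1.78e-16: one ULP at 10 is under one *relative* eps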
Josef
Cheers,
Matthew
[1] http://matthew-brett.github.io/pydagogue/floating_error.html

On Sat, Apr 26, 2014 at 8:31 PM, <josef.pktd@gmail.com> wrote:
On Sat, Apr 26, 2014 at 8:05 PM, <josef.pktd@gmail.com> wrote:
On Sat, Apr 26, 2014 at 6:37 PM, Matthew Brett <matthew.brett@gmail.com>wrote:
Hi,
On Wed, Apr 23, 2014 at 11:59 AM, Matthew Brett <matthew.brett@gmail.com> wrote:
Hi,
On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith <njs@pobox.com> wrote:
On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett < matthew.brett@gmail.com> wrote:
Hi,
I'm exploring Mingw-w64 for numpy building, and I've found it gives a slightly different answer for 'exp' than - say - gcc on OSX.
The difference is of the order of the eps value for the output number (2 * eps for a result of ~2.0).
Is accuracy somewhere specified for C functions like exp? Or is accuracy left as an implementation detail for the C library author?
C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point operations ... and of the library functions in <math.h> and <complex.h> that return floating point results is implementation defined. The implementation may state that the accuracy is unknown." (This last sentence is basically saying that with regard to some higher up clauses that required all conforming implementations to document this stuff, saying "eh, who knows" counts as documenting it. Hooray for standards!)
Presumably the accuracy in this case is a function of the C library anyway, not the compiler?
Mingw-w64 implementation is in assembly:
http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/e...
Numpy has its own implementations for a bunch of the math functions, and it's been unclear in the past whether numpy or the libc implementations were better in any particular case.
I only investigated this particular value, in which case it looked as though the OSX value was closer to the exact value (via sympy.mpmath) - by ~1 unit-at-the-last-place. This was causing a divergence in the powell optimization path and therefore a single scipy test failure. I haven't investigated further - was wondering what investigation I should do, more than running the numpy / scipy test suites.
Investigating further, with this script:
https://gist.github.com/matthew-brett/11301221
The following are tests of np.exp accuracy for input values between 0 and 10, for numpy 1.8.1.
If np.exp(x) performs perfectly, it will return the nearest floating point value to the exact value of exp(x). If it does, this scores a zero for error in the tables below. If 'proportion of zeros' is 1 - then np.exp performs perfectly for all tested values of exp (as is the case for linux here).
OSX 10.9
Proportion of zeros: 0.99789
Sum of error: 2.15021267458e-09
Sum of squared error: 2.47149370032e-14
Max / min error: 5.96046447754e-08 -2.98023223877e-08
Sum of squared relative error: 5.22456992025e-30
Max / min relative error: 2.19700100681e-16 -2.2098803255e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Debian Jessie / Sid

Proportion of zeros: 1.0
Sum of error: 0.0
Sum of squared error: 0.0
Max / min error: 0.0 0.0
Sum of squared relative error: 0.0
Max / min relative error: 0.0 0.0
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0

Mingw-w64 Windows 7

Proportion of zeros: 0.82089
Sum of error: 8.08415331122e-07
Sum of squared error: 2.90045099615e-12
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.18466468175e-28
Max / min relative error: 2.22041308226e-16 -2.22042100773e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Take-home: the exp implementation for mingw-w64 is exactly (floating point) correct 82% of the time, and one unit in the last place (ULP) off for the rest [1]. OSX is off by 1 ULP only 0.2% of the time.
Windows 64 with MKL
\WinPython-64bit-3.3.2.2\python-3.3.2.amd64>python "E:\Josef\eclipsegworkspace\statsmodels-git\local_scripts\local_scripts\try_exp_error.py"

Proportion of zeros: 0.99793
Sum of error: -2.10546855506e-07
Sum of squared error: 3.33304327526e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 4.98420694339e-30
Max / min relative error: 2.20881302691e-16 -2.18321571939e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Windows 32 bit python with official MingW binaries
Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on win32
Proportion of zeros: 0.99464
Sum of error: -3.91621083118e-07
Sum of squared error: 9.2239247812e-14
Max / min error: 5.96046447754e-08 -5.96046447754e-08
Sum of squared relative error: 1.3334972729e-29
Max / min relative error: 2.21593462148e-16 -2.2098803255e-16
eps: 2.22044604925e-16
Proportion of relative err >= eps: 0.0
Is mingw-w64 accurate enough? Do we have any policy on this?
I wouldn't worry about a missing or an extra eps in our applications, but the competition is more accurate.
Just for comparison, I increased `until` to 300 the proportion of zeros and relative error stays about the same for both MKL and your wheels
absolute error are huge, the following is MKL
Sum of error: -3.78802736366e+112
Sum of squared error: 1.51136754049e+225
Max / min error: 4.80981520952e+111 -3.84785216762e+112
(I looked a lot at the behavior of exp in the hundreds recently :(
As illustration why I don't care about one **relative** eps
>>> np.finfo(np.double).eps + 10 == 10
True
https://github.com/scipy/scipy/pull/3547 and many others
Another variation on the theme: with the wheels there is a positive bias, and a pretty large one:

>>> errors.sum()
8.0841533112163688e-07

but where did it go?

>>> exact_exp.sum() - np.exp(vals).sum()
0.0

(Law of large numbers? The errors get swamped or cancel.)

With MKL there is a smaller negative bias:

>>> errors.sum()
-2.1054685550581098e-07

>>> exact_exp.sum() - np.exp(vals).sum()
0.0
Josef
Josef
Josef
Cheers,
Matthew
[1] http://matthew-brett.github.io/pydagogue/floating_error.html

Hi,

27.04.2014 01:37, Matthew Brett wrote: [clip]
Take-home : exp implementation for mingw-w64 is exactly (floating point) correct 82% of the time, and one unit-at-the-last-place off for the rest [1]. OSX is off by 1 ULP only 0.2% of the time.
Is mingw-w64 accurate enough? Do we have any policy on this?
I think as mentioned, the C standards don't specify accuracy requirements. Errors of a couple of ULP should still be acceptable.

Re: Powell test --- if this turns out to be complicated to deal with, just go ahead and disable the trace test. Optimization routines contain statements of the form `if a > b: ...` with floating point numbers, so the execution path can be sensitive to rounding error if you're unlucky, and the chances go up as the iteration count increases.

-- Pauli Virtanen
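As a toy illustration of that sensitivity (hypothetical values, not taken from the Powell code): a one-ULP difference in a computed quantity is enough to flip such a comparison and send two otherwise identical runs down different paths:

import numpy as np

b = 2.0
a_platform1 = 2.0                      # correctly rounded result on one platform
a_platform2 = np.nextafter(2.0, 3.0)   # same quantity, one ULP higher on another

print(a_platform1 > b)   # False -> one branch is taken
print(a_platform2 > b)   # True  -> the other branch; the iterates then diverge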
participants (5):
- David Cournapeau
- josef.pktd@gmail.com
- Matthew Brett
- Nathaniel Smith
- Pauli Virtanen