Hi, I'm getting some strange behavior with logaddexp2.reduce:

from itertools import permutations
import numpy as np
x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0, 1, 2]):
    print p, np.logaddexp2.reduce(x[list(p)])

Essentially, the result depends on the order of the array... and we get nans in the "bad" orders. Likely, this also affects logaddexp.
On Wed, Mar 31, 2010 at 10:30 AM, T J
Sorry, forgot version information:

$ python -c "import numpy; print numpy.__version__"
1.5.0.dev8106
On Wed, Mar 31, 2010 at 11:38 AM, T J
Sorry, forgot version information:
$ python -c "import numpy; print numpy.__version__"
1.5.0.dev8106
Looks like roundoff error.

Chuck
On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
Looks like roundoff error.
So this is "expected" behavior?

In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
Out[1]: -1.5849625007211561

In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
Out[2]: nan

In [3]: np.log2(np.exp2(-0.5849625007211563) + np.exp2(-53.584962500721154))
Out[3]: -0.58496250072115608

Shouldn't the result at least behave nicely and just return the larger value?
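[Editor's note: the value T J expects here can be computed with the usual shift-by-the-maximum identity, log2(2**a + 2**b) = max(a, b) + log2(1 + 2**-|a - b|), which never overflows and degrades gracefully to max(a, b) when the gap is large. A minimal sketch of that identity — my own illustration with a made-up name, not NumPy's implementation:]

```python
import numpy as np

def logaddexp2_ref(a, b):
    # log2(2**a + 2**b): factor out the larger exponent so the
    # remaining term 2**(lo - hi) lies in (0, 1] and cannot overflow.
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + np.log2(1.0 + 2.0 ** (lo - hi))

print(logaddexp2_ref(0.0, 53.0))  # 53.0 -- the larger value survives
```

[When the gap reaches 53 bits, the correction term underflows below one ulp and the result is exactly the larger argument, which is what a well-behaved logaddexp2 should return instead of nan.]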
On Wed, Mar 31, 2010 at 4:15 PM, T J
I don't see that:

In [1]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
Out[1]: -0.58496250072115619

In [2]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
Out[2]: -1.5849625007211561

What system are you running on?

Chuck
On Wed, Mar 31, 2010 at 3:36 PM, Charles R Harris
$ python --version
Python 2.6.4

$ uname -a
Linux localhost 2.6.31-20-generic-pae #58-Ubuntu SMP Fri Mar 12 06:25:51 UTC 2010 i686 GNU/Linux
On Wed, Mar 31, 2010 at 4:42 PM, T J
That is a 32-bit kernel, right?

Chuck
On Wed, Mar 31, 2010 at 7:06 PM, Charles R Harris
That is a 32 bit kernel, right?
Correct.

Regarding the config.h, which config.h? I have a numpyconfig.h. Which compilation options should I obtain, and how? When I run setup.py, I see:

C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC

compile options: '-Inumpy/core/include -Ibuild/src.linux-i686-2.6/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/include -I/usr/include/python2.6 -Ibuild/src.linux-i686-2.6/numpy/core/src/multiarray -Ibuild/src.linux-i686-2.6/numpy/core/src/umath -c'

Here is my show_config():

atlas_threads_info:
  NOT AVAILABLE
blas_opt_info:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2']
    define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')]
    language = c
    include_dirs = ['/usr/include']
atlas_blas_threads_info:
  NOT AVAILABLE
lapack_opt_info:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2/atlas', '/usr/lib/sse2']
    define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')]
    language = f77
    include_dirs = ['/usr/include']
atlas_info:
    libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2/atlas', '/usr/lib/sse2']
    language = f77
    include_dirs = ['/usr/include']
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
atlas_blas_info:
    libraries = ['f77blas', 'cblas', 'atlas']
    library_dirs = ['/usr/lib/sse2']
    language = c
    include_dirs = ['/usr/include']
mkl_info:
  NOT AVAILABLE

$ gcc --version
gcc (Ubuntu 4.4.1-4ubuntu9) 4.4.1
Copyright (C) 2009 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
On Wed, Mar 31, 2010 at 10:28 PM, T J
I'm running the same distro, but 64 bits instead of 32. Strange, maybe sse2 has something to do with it.

Chuck
On 31-Mar-10, at 6:15 PM, T J wrote:
Shouldn't the result at least behave nicely and just return the larger value?
Unfortunately there's no good way of getting around order-of-operations-related rounding error using the reduce() machinery, that I know of.

I'd like to see a 'logsumexp' and 'logsumexp2' in NumPy instead, using the generalized ufunc architecture, to do it over an arbitrary dimension of an array, rather than needing to invoke 'reduce' on logaddexp. I tried my hand at writing one but I couldn't manage to get it working for some reason, and I haven't had time to revisit it:

http://mail.scipy.org/pipermail/numpy-discussion/2010-January/048067.html

David
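[Editor's note: for reference, the usual way such a logsumexp is written is the same max-shift idea, sketched here as a plain 1-D Python helper; `logsumexp2` is a hypothetical name, not the proposed generalized ufunc.]

```python
import numpy as np

def logsumexp2(x):
    # Stable log2(sum_i 2**x[i]) for a 1-D array: shifting by the
    # maximum makes the largest term exactly 2**0 = 1, so nothing
    # overflows and the result does not depend on the input order
    # (up to summation rounding).
    x = np.asarray(x, dtype=float)
    m = x.max()
    return m + np.log2(np.exp2(x - m).sum())
```

[Summing the shifted exponentials in one pass avoids the pairwise logaddexp chain whose intermediate results trigger the order dependence reported above.]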
On Wed, Mar 31, 2010 at 3:38 PM, David Warde-Farley
Unfortunately there's no good way of getting around orderof operationsrelated rounding error using the reduce() machinery, that I know of.
That seems reasonable, but receiving a nan, in this case, does not. Are my expectations unreasonable? Would sorting the values before reducing be an ugly (enough) solution? It seems to fix it in this particular case. Or is the method you posted in the link below a better solution?
I'd like to see a 'logsumexp' and 'logsumexp2' in NumPy instead, using the generalized ufunc architecture, to do it over an arbitrary dimension of an array, rather than needing to invoke 'reduce' on logaddexp. I tried my hand at writing one but I couldn't manage to get it working for some reason, and I haven't had time to revisit it: http://mail.scipy.org/pipermail/numpy-discussion/2010-January/048067.html
I saw this original post and was hoping someone would post a response. Maybe now...
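[Editor's note: the sorting workaround T J mentions is a one-liner; a sketch, using the thread's example array (signs restored on the assumption that the archive dropped the minus signs):]

```python
import numpy as np

# Workaround sketch: reduce over a sorted copy so the accumulation
# order no longer depends on how the caller happened to order x.
x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
result = np.logaddexp2.reduce(np.sort(x))
```

[This pins down one accumulation order; it papers over the nan rather than fixing the underlying log2(1 + x) issue, so the result can still differ from the exact sum in the last bits.]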
Is anyone able to reproduce this? I don't get 'nan' in either 1.4.0 or 2.0.0.dev8313 (32-bit Mac OS X). In an earlier email T J reported using 1.5.0.dev8106.

Warren
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Wed, Mar 31, 2010 at 7:37 PM, Warren Weckesser
>>> np.logaddexp2(-0.5849625007211563, -53.584962500721154)
nan
>>> np.logaddexp2(-1.5849625007211563, -53.584962500721154)
-1.5849625007211561
>>> np.version.version
'1.4.0'
Windows XP, 32-bit

Josef
On Wed, Mar 31, 2010 at 6:08 PM,
What compiler? Mingw?

Chuck
On Thu, Apr 1, 2010 at 1:17 AM, Charles R Harris
Yes, mingw 3.4.5, official binaries release 1.4.0 by David.

Josef
On Thu, Apr 1, 2010 at 1:22 AM,
sse2 Pentium M
Josef
On Wed, Mar 31, 2010 at 11:25 PM,
Can you try the exp2/log2 functions with the problem data and see if something goes wrong?

Chuck
On 1 April 2010 01:40, Charles R Harris
Works fine for me.

If it helps clarify things, the difference between the two problem input values is exactly 53 (and that's what logaddexp2 does an exp2 of); so I can provide a simpler example:

In [23]: np.logaddexp2(0, 53)
Out[23]: nan

Of course, for me it fails in both orders.

Anne
On Wed, Mar 31, 2010 at 11:46 PM, Anne Archibald
Ah, that's progress then ;) The effective number of bits in a double is 53 (52 + implicit bit). That shouldn't cause problems but it sure looks suspicious.

Chuck
On 1 April 2010 01:59, Charles R Harris
Indeed, that's what led me to the totally wrong suspicion that denormals have something to do with the problem. More data points:

In [38]: np.logaddexp2(63.999, 0)
Out[38]: nan

In [39]: np.logaddexp2(64, 0)
Out[39]: 64.0

In [42]: np.logaddexp2(52.999, 0)
Out[42]: 52.999000000000002

In [43]: np.logaddexp2(53, 0)
Out[43]: nan

It looks to me like perhaps the NaNs are appearing when the smaller term affects only the "extra" bits provided by the FPU's internal larger-than-double representation. Some such issue would explain why the problem seems to be hardware- and compiler-dependent.

Anne
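[Editor's note: the 53-bit cutoff Anne describes is visible in plain Python doubles. This shows only the double-precision rounding boundary; the extended-precision interaction she suspects needs the x87 FPU and a C compiler to reproduce.]

```python
import math

eps53 = 2.0 ** -53            # one bit below double machine epsilon
assert 1.0 + eps53 == 1.0     # ties round to even: collapses back to 1.0
assert 1.0 + 2.0 ** -52 != 1.0

# A naive log2(1 + x) therefore loses x entirely at the boundary,
# while the C library's log1p keeps it:
naive = math.log(1.0 + eps53) / math.log(2.0)  # 0.0 -- x vanished
good = math.log1p(eps53) / math.log(2.0)       # ~eps53 / ln 2, correct
```

[So inputs whose exponents differ by 53 or more are exactly the ones that push a hand-rolled log2(1 + x) onto its degenerate u == 1 path.]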
On Thu, Apr 1, 2010 at 12:04 AM, Anne Archibald
Hmm, that 63.999 is kinda strange. Here is a bit of code that might confuse a compiler working with different size mantissas:

@type@
npy_log2_1p@c@(@type@ x)
{
    @type@ u = 1 + x;
    if (u == 1) {
        return LOG2E * x;
    } else {
        return npy_log2@c@(u) * x / (u - 1);
    }
}

It might be that u != 1 does not imply u - 1 != 0.

Chuck
On 1 April 2010 02:24, Charles R Harris
That does indeed look highly suspicious. I'm not entirely sure how to work around it. GSL uses a volatile declaration:

http://www.google.ca/codesearch/p?hl=en#p9nGS4eQGUI/gnu/gsl/gsl-1.8.tar.gz%7C8VCQSLJ5jR8/gsl-1.8/sys/log1p.c&q=log1p

On the other hand, boost declares itself defeated by optimizing compilers and uses a Taylor series:

http://www.google.ca/codesearch/p?hl=en#sdP2GRSfgKo/dcplusplus/trunk/boost/boost/math/special_functions/log1p.hpp&q=log1p&sa=N&cd=7&ct=rc

While R makes no mention of the corrected formula or optimizing compilers but takes the same approach, only with Chebyshev series:

http://www.google.ca/codesearch/p?hl=en#gBBSWbwZmuk/src/base/R-2/R-2.3.1.tar.gz%7CVuh8XhRbUi8/R-2.3.1/src/nmath/log1p.c&q=log1p

Since, at least on my machine, ordinary log1p appears to work fine, is there any reason not to have log2_1p call it and scale the result? Or does the compiler make a hash of our log1p too?

Anne
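[Editor's note: the call-log1p-and-scale suggestion amounts to a one-liner, sketched here in Python terms; a C version would call libm's log1p the same way.]

```python
import math

LOG2E = 1.0 / math.log(2.0)  # log2(e)

def log2_1p(x):
    # log2(1 + x) via log1p, which handles tiny x correctly, then
    # rescaled from base e to base 2 -- no hand-rolled u = 1 + x
    # cancellation trick needed, so no u == 1 branch to miscompile.
    return math.log1p(x) * LOG2E
```

[This sidesteps the suspicious branch entirely, at the cost of trusting the platform's log1p.]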
On Thu, Apr 1, 2010 at 12:46 AM, Anne Archibald
Calling log1p and scaling looks like the right thing to do here. And our log1p needs improvement.

Chuck
On Thu, Apr 1, 2010 at 8:37 AM, Charles R Harris
On Thu, Apr 1, 2010 at 12:46 AM, Anne Archibald
wrote:
On 1 April 2010 02:24, Charles R Harris
wrote: On Thu, Apr 1, 2010 at 12:04 AM, Anne Archibald <
peridot.faceted@gmail.com>
wrote:
On 1 April 2010 01:59, Charles R Harris
wrote:
On Wed, Mar 31, 2010 at 11:46 PM, Anne Archibald
wrote: On 1 April 2010 01:40, Charles R Harris
T J wrote:

So this is "expected" behavior?

In [1]: np.logaddexp2(1.5849625007211563, 53.584962500721154)
Out[1]: 1.5849625007211561

In [2]: np.logaddexp2(0.5849625007211563, 53.584962500721154)
Out[2]: nan

Warren Weckesser wrote:

Is anyone able to reproduce this? I don't get 'nan' in either 1.4.0 or 2.0.0.dev8313 (32-bit Mac OSX). In an earlier email T J reported using 1.5.0.dev8106.

>>> np.logaddexp2(0.5849625007211563, 53.584962500721154)
nan
>>> np.logaddexp2(1.5849625007211563, 53.584962500721154)
1.5849625007211561
>>> np.version.version
'1.4.0'

WindowsXP 32

Charles R Harris wrote:

What compiler? Mingw?

yes, mingw 3.4.5, official binaries release 1.4.0 by David. sse2 Pentium M.

Charles R Harris wrote:

Can you try the exp2/log2 functions with the problem data and see if something goes wrong?

Works fine for me.
If it helps clarify things, the difference between the two problem input values is exactly 53 (and that's what logaddexp2 does an exp2 of); so I can provide a simpler example:
In [23]: np.logaddexp2(0, 53) Out[23]: nan
Of course, for me it fails in both orders.
Ah, that's progress then ;) The effective number of bits in a double is 53 (52 + implicit bit). That shouldn't cause problems but it sure looks suspicious.
Indeed, that's what led me to the totally wrong suspicion that denormals have something to do with the problem. More data points:
In [38]: np.logaddexp2(63.999, 0) Out[38]: nan
In [39]: np.logaddexp2(64, 0) Out[39]: 64.0
In [42]: np.logaddexp2(52.999, 0) Out[42]: 52.999000000000002
In [43]: np.logaddexp2(53, 0) Out[43]: nan
It looks to me like perhaps the NaNs are appearing when the smaller term affects only the "extra" bits provided by the FPU's internal larger-than-double representation. Some such issue would explain why the problem seems to be hardware- and compiler-dependent.
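For reference, the NaN can be sidestepped entirely by factoring out the larger argument, so that the remaining term is log2(1 + 2**d) with d <= 0. This is a minimal Python sketch of that idea, not NumPy's actual implementation:

```python
import math

def logaddexp2(a, b):
    # Compute log2(2**a + 2**b) without overflow or spurious NaNs:
    # pull out the larger argument, so only log2(1 + 2**d), d <= 0, remains.
    hi, lo = (a, b) if a >= b else (b, a)
    d = lo - hi  # d <= 0, so 2**d lies in (0, 1]
    # log2(1 + t) via log1p to stay accurate when t is tiny
    return hi + math.log1p(2.0 ** d) / math.log(2.0)

print(logaddexp2(0.0, 53.0))  # ~53.0, not nan, in either argument order
```

By construction the result is symmetric in the arguments and never smaller than the larger input, which is the "behave nicely" property T J asked for.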
Hmm, that 63.999 is kinda strange. Here is a bit of code that might confuse a compiler working with different size mantissas:
@type@ npy_log2_1p@c@(@type@ x)
{
    @type@ u = 1 + x;
    if (u == 1) {
        return LOG2E * x;
    } else {
        return npy_log2@c@(u) * x / (u - 1);
    }
}
It might be that u != 1 does not imply u - 1 != 0.
That does indeed look highly suspicious. I'm not entirely sure how to work around it. GSL uses a volatile declaration:
http://www.google.ca/codesearch/p?hl=en#p9nGS4eQGUI/gnu/gsl/gsl1.8.tar.gz%7C8VCQSLJ5jR8/gsl1.8/sys/log1p.c&q=log1p

On the other hand, Boost declares itself defeated by optimizing compilers and uses a Taylor series:

http://www.google.ca/codesearch/p?hl=en#sdP2GRSfgKo/dcplusplus/trunk/boost/boost/math/special_functions/log1p.hpp&q=log1p&sa=N&cd=7&ct=rc

while R makes no mention of the corrected formula or optimizing compilers but takes the same approach, only with Chebyshev series.
Since, at least on my machine, ordinary log1p appears to work fine, is there any reason not to have log2_1p call it and scale the result? Or does the compiler make a hash of our log1p too?
Calling log1p and scaling looks like the right thing to do here. And our log1p needs improvement.
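The scaling Chuck describes is just a change of base: log2(1+x) = log1p(x)/ln(2). A minimal Python sketch (the name log2_1p mirrors the C helper discussed above):

```python
import math

LN2 = math.log(2.0)

def log2_1p(x):
    # Delegate the hard part (accuracy near x == 0) to the libm log1p,
    # then rescale from base e to base 2.
    return math.log1p(x) / LN2

# Accurate even where 1 + x rounds to exactly 1 in double precision:
print(log2_1p(2.0 ** -60))  # ~ 2**-60 * log2(e), not 0 and not nan
```

The division by ln(2) costs at most an extra rounding error, so the result inherits essentially all of log1p's accuracy.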
Tinkering a bit, I think we should implement the auxiliary function f(p) = log((1+p)/(1-p)), which is antisymmetric and has the expansion 2p*(1 + p^2/3 + p^4/5 + ...). The series in the parens is increasing, so it is easy to terminate. Note that for p = +/-1 it goes over to the harmonic series, so convergence is slow near the ends, but they can be handled using normal logs. Given 1 + x = (1+p)/(1-p) and solving for p gives p = x/(2 + x), so when x ranges from -1/2 to 1/2, p ranges from -1/3 to 1/5, hence achieving double precision should involve no more than about 17 terms. I think this is better than the expansion in R.

Chuck
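Chuck's auxiliary function is easy to check numerically: with p = x/(2+x) one has (1+p)/(1-p) = 1+x, so log1p(x) = 2p*(1 + p^2/3 + p^4/5 + ...). A rough Python illustration (function name and fixed term count are ad hoc, not the proposed numpy code):

```python
import math

def log1p_series(x, nterms=20):
    # log(1+x) = f(p) = log((1+p)/(1-p)) with p = x/(2+x),
    # via the odd (atanh-like) series 2p*(1 + p^2/3 + p^4/5 + ...).
    p = x / (2.0 + x)
    p2 = p * p
    total, term = 0.0, 1.0
    for k in range(nterms):
        total += term / (2 * k + 1)
        term *= p2
    return 2.0 * p * total

# For |x| <= 1/2, p stays within [-1/3, 1/5], so ~17 terms suffice
# for double precision, as claimed.
print(abs(log1p_series(0.5) - math.log1p(0.5)))  # a few ulp at most
```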
On 1 April 2010 13:38, Charles R Harris wrote:
First I guess we should check which systems don't have log1p; if glibc has it as an intrinsic, that should cover Linux (though I suppose we should check its quality). Do Windows and Mac have log1p? For testing I suppose we should make our own implementation somehow available even on systems where it's unnecessary.

Power series are certainly easy, but would some of the other available tricks (Chebyshev series or rational function approximations) be better? I notice R uses Chebyshev series, although maybe that's just because they have a good evaluator handy.

Anne
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Thu, Apr 1, 2010 at 3:51 PM, Anne Archibald wrote:
The Chebyshev series for 1 + x^2/3 + ... is just as bad as the one in R, i.e., stinky. Rational approximation works well: the ratio of two tenth-order polynomials is good to 1e-32 (quad precision) over the range of interest. I'd like to use continued fractions, though, so the approximation could terminate early for small values of x.

Chuck
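The early-termination idea can be sketched with the classical Gauss continued fraction for log(1+x); this is an illustration of the technique, not Chuck's actual coefficients, and it evaluates bottom-up at a fixed depth for simplicity (a production version would evaluate top-down, e.g. with Lentz's method, and stop once converged):

```python
import math

def log1p_cf(x, depth=30):
    # log(1+x) = x/(1 + 1^2 x/(2 + 1^2 x/(3 + 2^2 x/(4 + 2^2 x/(5 + ...)))))
    # Partial numerators are ceil(n/2)^2 * x; partial denominators are 1, 2, 3, ...
    d = float(depth + 1)
    for n in range(depth, 0, -1):
        c = ((n + 1) // 2) ** 2
        d = n + c * x / d
    return x / d

print(abs(log1p_cf(0.5) - math.log1p(0.5)))  # negligible at this depth
```

Truncating after the first partial quotient already gives the Pade-like 2x/(2+x), so small x need very few terms, which is exactly the appeal of the continued-fraction form.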
Anne Archibald wrote:
First I guess we should check which systems don't have log1p
This is already done: we do use the system log1p on Linux (but note that log2_1p is not standard AFAIK). I would guess few systems outside Windows don't have log1p, given that msun has an implementation.

David
On Thu, Apr 1, 2010 at 7:59 PM, David Cournapeau wrote:
I see that msun uses the same series I came to. However, my rational (Pade) approximation is a lot better than their polynomial. Chuck
On Thu, Apr 1, 2010 at 8:16 PM, Charles R Harris wrote:
In fact I get better than 119 bits using the same range as Sun and the ratio of two 7th-degree polynomials. I suspect it's better than that, but I only have mpmath set to that precision. Sun got 58 bits with a 14th-degree polynomial. So we can definitely improve on Sun.

Chuck
On Thu, Apr 1, 2010 at 9:01 PM, Charles R Harris wrote:
A few notes. The Sun routine is pretty good, but results of almost equal quality can be obtained using barycentric interpolation at the Chebyshev II points, i.e., one doesn't need the slight improvement that comes from the Remez algorithm. To get extended precision requires going from degree 14 to degree 16, and quad precision needs degree 30. Note that the function in question is even, so the effective degree is half of those quoted.

Next up: using Chebyshev points to interpolate the polynomials from the Pade representation. The Pade version doesn't actually gain any precision over the power series of equivalent degree, but it does allow one to use polynomials of half the degree in num/den.

Short term, however, I'm going to take Anne's advice and simply use log1p. That should fix the problem for most users. It's amusing that the "Remes" typo has persisted in the Sun documentation all these years ;)

Chuck
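Barycentric interpolation at Chebyshev points of the second kind, which Chuck refers to, can be sketched as follows; the degree and interval here are illustrative (log1p on [-1/2, 1/2]), not the ones used for numpy:

```python
import math

def cheb2_interp(f, a, b, n):
    # Nodes: Chebyshev points of the second kind, mapped from [-1, 1] to [a, b].
    xs = [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * k / n)
          for k in range(n + 1)]
    fs = [f(x) for x in xs]
    # Barycentric weights for these nodes: (-1)^k, halved at the two endpoints.
    ws = [(-1) ** k * (0.5 if k in (0, n) else 1.0) for k in range(n + 1)]

    def p(x):
        num = den = 0.0
        for xk, fk, wk in zip(xs, fs, ws):
            if x == xk:          # exactly at a node: return the data value
                return fk
            t = wk / (x - xk)
            num += t * fk
            den += t
        return num / den

    return p

approx = cheb2_interp(math.log1p, -0.5, 0.5, 20)
print(abs(approx(0.3) - math.log1p(0.3)))  # small interpolation error
```

The appeal is that no linear system or Remez iteration is needed: the weights are known in closed form, and for an analytic function like log1p the error decays geometrically with the degree.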
Charles R Harris wrote:

It might be that u != 1 does not imply u - 1 != 0.
I don't think that's true for floating point, since the spacing at 1 and at 0 are quite different, and that's what defines whether two floating point numbers are close or not.

But I think you're right about the problem. I played a bit on my netbook where I have the same problem (Ubuntu 10.04 beta, gcc 4.4.3, 32 bits). The problem is in npy_log2_1p (for example, try npy_log2_1p(1e-18) vs npy_log2_1p(1e-15)). If I compile with -ffloat-store, the problem also disappears.

I think the fix is to do:

@type@ npy_log2_1p@c@(@type@ x)
{
    @type@ u = 1 + x;
    if ((u - 1) == 0) {
        return LOG2E * x;
    } else {
        return npy_log2@c@(u) * x / (u - 1);
    }
}

This works for a separate C program, but for some reason, once I try this in numpy, it does not work, and I have no battery anymore.

cheers,

David
On 1 April 2010 02:49, David Cournapeau wrote:
I think the fix is to do:

@type@ npy_log2_1p@c@(@type@ x)
{
    @type@ u = 1 + x;
    if ((u - 1) == 0) {
        return LOG2E * x;
    } else {
        return npy_log2@c@(u) * x / (u - 1);
    }
}
Particularly given the comments in the boost source code, I'm leery of this fix; who knows what an optimizing compiler will do with it? What if, for example, u - 1 and u get treated differently: one gets truncated to double, but the other doesn't? Now the "correction" is completely bogus. Or, of course, the compiler might rewrite the expressions. At the least an explicit "volatile" variable or two is necessary, and even so making it compiler-proof seems like a real challenge. At the least it's letting us in for all sorts of subtle trouble down the road (though at least unit tests can catch it).

Since this is a subtle issue, I vote for delegating it (and log10_1p) to log1p. If *that* has trouble, well, at least it'll only be on non-C99 machines, and we can try compiler gymnastics.
this works for a separate C program, but for some reason, once I try this in numpy, it does not work, and I have no battery anymore,
Clearly the optimizing compiler is inserting the DRDB (drain David's battery) opcode. Anne
Anne Archibald wrote:
Particularly given the comments in the boost source code, I'm leery of this fix; who knows what an optimizing compiler will do with it?
But the current code *is* wrong: it is not true that u == 1 implies u - 1 == 0 (or that u - 1 != 0 implies u != 1), because the spacing between two consecutive floats is much bigger at 1 than at 0. And the current code relies on this wrong assumption: at least with the correction, we test for what we care about. So I would say the code correction I suggest is at least better in that respect.
Now the "correction" is completely bogus.
I am not sure I understand why? It is at least not worse than the current code.
Since this is a subtle issue, I vote for delegating it (and log10_1p) to log1p. If *that* has trouble, well, at least it'll only be on nonC99 machines, and we can try compiler gymnastics.
Yes, we could do that. I note that on glibc, the function called is an intrinsic for log1p (FYL2XP1) if x is sufficiently small.
Clearly the optimizing compiler is inserting the DRDB (drain David's battery) opcode.
:) David
On 1 April 2010 03:15, David Cournapeau wrote:
Anne Archibald wrote:
Particularly given the comments in the boost source code, I'm leery of this fix; who knows what an optimizing compiler will do with it?
But the current code *is* wrong: it is not true that u == 1 implies u - 1 == 0 (or that u - 1 != 0 implies u != 1), because the spacing between two consecutive floats is much bigger at 1 than at 0. And the current code relies on this wrong assumption: at least with the correction, we test for what we care about.
I don't think this is true for IEEE floats, at least in the case we're interested in where u is approximately 1. After all, the issue is representation: if you take an x very close to zero and add 1 to it, you get exactly 1 (which is what you're pointing out); but if you then subtract 1 again, you get exactly zero. After all, u == 1 means that the bit patterns for u and 1 are identical, so if you subtract 1 from u you get exactly zero. If u is not equal to 1 (and we're talking about IEEE floats) and you subtract 1, you will get something that is not zero.

The problem is that what we're dealing with is not IEEE floating-point: it resembles IEEE floating-point, but arithmetic is often done with extra digits, which are discarded or not at the whim of the compiler. If the extra digits were kept we'd still have (u - 1) == 0 iff u == 1, but if the compiler discards the extra digits in one expression but not the other (which is a question about FPU register scheduling) funny things can happen.
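Anne's point is easy to check in strict IEEE double arithmetic (which Python floats are, with no extended-precision registers involved): for u computed as 1 + x, the tests u == 1 and u - 1 == 0 agree. A quick illustration:

```python
# In strict IEEE doubles, u == 1.0 exactly when u - 1.0 == 0.0;
# the logaddexp2 bug needs extended-precision registers to appear.
for x in (1e-18, 2.0 ** -53, 1e-15, 1e-8):
    u = 1.0 + x
    assert (u == 1.0) == (u - 1.0 == 0.0)
    print(x, u == 1.0, u - 1.0)
```

Note that 1 + 2**-53 rounds to exactly 1.0 (round-to-nearest, ties-to-even), which is why the threshold in the original report sits right at 53.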
So I would say the code correction I suggest is at least better in that respect.
Now the "correction" is completely bogus.
I am not sure to understand why ? It is at least not worse than the current code.
Well, it would be hard to be worse than inappropriate NaNs. But the point of the correction is that, through some deviousness I don't fully understand, the roundoff error involved in the log is compensated for by the roundoff error in the calculation of (1+x) - 1. If the compiler uses a different number of digits to calculate the log than it uses to calculate (1+x) - 1, the "correction" will make a hash of things.
Since this is a subtle issue, I vote for delegating it (and log10_1p) to log1p. If *that* has trouble, well, at least it'll only be on nonC99 machines, and we can try compiler gymnastics.
Yes, we could do that. I note that on glibc, the function called is an intrinsic for log1p (FYL2XP1) if x is sufficiently small.
Even if we end up having to implement this function by hand, we should at least implement it only once  log1p  and not three times (log1p, log10_1p, log2_1p).
Clearly the optimizing compiler is inserting the DRDB (drain David's battery) opcode.
:)
That was tongue-in-cheek, but there was a real point: optimizing compilers do all sorts of peculiar things, particularly with floating-point. For one thing, many Intel CPUs have two (logically) separate units for doing double-precision calculations: the FPU, which keeps extra digits, and SSE2, which doesn't. Who knows which one gets used for which operations? -ffloat-store is supposed to force the constant discarding of extra digits, but we certainly don't want to apply that to all of numpy. So if we have to get log1p working ourselves it'll be an exercise in suppressing specific optimizations in all optimizing compilers that might compile numpy.

Anne
Anne Archibald wrote:

I don't think this is true for IEEE floats, at least in the case we're interested in where u is approximately 1.
Yes, sorry, you're right. For log1p, we can use the msun code: it is claimed to be such that the error is bounded by 1 ulp, it does not rely on ASM, and we already have all the necessary macros in npymath, so the code should be easy to integrate for single and double precision. I don't see code for the long double version, though. We can use the boost test data to see if we get something sensible there.
Who knows which one gets used for which operations? -ffloat-store is supposed to force the constant discarding of extra digits, but we certainly don't want to apply that to all of numpy.
I think relying on -ffloat-store is buggy anyway, and it would be maintenance hell. It is useful to know when it makes a difference, though. cheers, David
On Thu, Apr 1, 2010 at 3:52 AM, David Cournapeau
Anne Archibald wrote:
On 1 April 2010 03:15, David Cournapeau
wrote: Anne Archibald wrote:
Particularly given the comments in the boost source code, I'm leery of this fix; who knows what an optimizing compiler will do with it? But the current code *is* wrong: it is not true that u == 1 implies u - 1 == 0 (nor that (u-1) != 0 implies u != 1), because the spacing between two consecutive floats is much bigger at 1 than at 0. And the current code relies on this wrong assumption: at least with the correction, we test for what we care about.
I don't think this is true for IEEE floats, at least in the case we're interested in where u is approximately 1.
Yes, sorry, you're right.
For log1p, we can use the msun code: it is claimed to be such that the error is bounded by 1 ulp, it does not rely on ASM, and we already have all the necessary macros in npymath, so the code should be easy to integrate for single and double precision. I don't see code for the long double version, though.
We can use the boost test data to see if we get something sensible there.
I've had fixing these log_1p functions in the back of my mind since 1.3, mostly because I didn't trust their accuracy. I'm inclined to go with one of the series approaches; there are several out there. Note that the complex versions need to be fixed also. Chuck
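[One illustration of the series approach mentioned here (not necessarily the series msun or boost use): expand log(1+x) as 2*atanh(s) with s = x/(2+x), so that 1+x is never formed and no digits are lost for tiny x. A hedged Python sketch:]

```python
import math

def log1p_series(x, terms=10):
    # log(1+x) = 2*atanh(s) = 2*(s + s**3/3 + s**5/5 + ...),
    # where s = x/(2+x). s is computed directly from x, so the
    # x**2/2 correction survives even when |x| is far below eps.
    s = x / (2.0 + x)
    s2 = s * s
    total = 0.0
    term = s
    for k in range(terms):
        total += term / (2 * k + 1)
        term *= s2  # next odd power of s
    return 2.0 * total
```

For |x| <= 0.5 we have |s| <= 0.2, so ten terms already reach double precision; a real implementation would replace the loop with a fixed minimax polynomial in s2.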
On Wed, Mar 31, 2010 at 5:37 PM, Warren Weckesser
T J wrote:
On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
wrote: Looks like roundoff error.
So this is "expected" behavior?
In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154) Out[1]: -1.5849625007211561
In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154) Out[2]: nan
Is anyone able to reproduce this? I don't get 'nan' in either 1.4.0 or 2.0.0.dev8313 (32-bit Mac OS X). In an earlier email T J reported using 1.5.0.dev8106.
No luck here on Gentoo Linux: Python 2.6.4 (r264:75706, Mar 11 2010, 09:29:48) [GCC 4.3.4] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.logaddexp2(-0.5849625007211563, -53.584962500721154)
-0.58496250072115619
>>> np.logaddexp2(-1.5849625007211563, -53.584962500721154)
-1.5849625007211561
>>> np.version.version
'2.0.0.dev8313'
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma
Ryan May wrote:
On Wed, Mar 31, 2010 at 5:37 PM, Warren Weckesser
wrote: T J wrote:
On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
wrote: Looks like roundoff error.
So this is "expected" behavior?
In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154) Out[1]: -1.5849625007211561
In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154) Out[2]: nan
Is anyone able to reproduce this? I don't get 'nan' in either 1.4.0 or 2.0.0.dev8313 (32-bit Mac OS X). In an earlier email T J reported using 1.5.0.dev8106.
Having the config.h as well as the compilation options would be most useful, to determine which functions are coming from the system and which ones are the numpy ones. cheers, David
On 31 March 2010 16:21, Charles R Harris
On Wed, Mar 31, 2010 at 11:38 AM, T J
wrote: On Wed, Mar 31, 2010 at 10:30 AM, T J
wrote: Hi,
I'm getting some strange behavior with logaddexp2.reduce:
from itertools import permutations
import numpy as np

x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0,1,2]):
    print p, np.logaddexp2.reduce(x[list(p)])
Essentially, the result depends on the order of the array...and we get nans in the "bad" orders. Likely, this also affects logaddexp.
Sorry, forgot version information:
$ python -c "import numpy;print numpy.__version__" 1.5.0.dev8106
Looks like roundoff error.
No. The whole point of logaddexp and logaddexp2 is that they function correctly -- in particular, avoid unnecessary underflow and overflow -- in this sort of situation. This is a genuine problem. I see it using the stock numpy in Ubuntu karmic:

In [3]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
Out[3]: nan

In [4]: np.logaddexp2(-53.584962500721154, -0.5849625007211563)
Out[4]: nan

In [5]: np.log2(2**(-53.584962500721154) + 2**(-0.5849625007211563))
Out[5]: -0.58496250072115608

In [6]: numpy.__version__
Out[6]: '1.3.0'

In [8]: !uname -a
Linux octopus 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 05:23:09 UTC 2010 i686 GNU/Linux

(the machine is a 32-bit Pentium M laptop.) I do not see a similar problem with logaddexp. Anne
Chuck
_______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
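[Anne's point, that logaddexp2 exists precisely to sidestep this overflow/underflow, also dictates what the function should compute. A sketch of the standard shift-by-the-max formulation in plain Python, not numpy's C implementation:]

```python
import math

def logaddexp2(a, b):
    # log2(2**a + 2**b), computed by factoring out the larger argument
    # so the remaining exponential is always <= 1:
    #   log2(2**a + 2**b) = hi + log2(1 + 2**(lo - hi))
    hi, lo = (a, b) if a >= b else (b, a)
    if math.isinf(hi):
        return hi  # +inf dominates; (-inf, -inf) gives -inf
    # log1p keeps full precision when 2**(lo - hi) is tiny
    return hi + math.log1p(2.0 ** (lo - hi)) / math.log(2.0)
```

On T J's failing pair (-0.5849625007211563, -53.584962500721154) this returns roughly the larger argument in either order, never nan, which is the behavior the reduce should have.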
On Wed, Mar 31, 2010 at 11:03 PM, Anne Archibald
On 31 March 2010 16:21, Charles R Harris
wrote: On Wed, Mar 31, 2010 at 11:38 AM, T J
wrote: On Wed, Mar 31, 2010 at 10:30 AM, T J
wrote: Hi,
I'm getting some strange behavior with logaddexp2.reduce:
from itertools import permutations
import numpy as np

x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0,1,2]):
    print p, np.logaddexp2.reduce(x[list(p)])
Essentially, the result depends on the order of the array...and we get nans in the "bad" orders. Likely, this also affects logaddexp.
Sorry, forgot version information:
$ python -c "import numpy;print numpy.__version__" 1.5.0.dev8106
Looks like roundoff error.
No. The whole point of logaddexp and logaddexp2 is that they function correctly -- in particular, avoid unnecessary underflow and overflow -- in this sort of situation. This is a genuine problem.
Well, it did function correctly for me ;) Just some roundoff error.
I see it using the stock numpy in Ubuntu karmic: In [3]: np.logaddexp2(-0.5849625007211563, -53.584962500721154) Out[3]: nan
In [4]: np.logaddexp2(-53.584962500721154, -0.5849625007211563) Out[4]: nan
In [5]: np.log2(2**(-53.584962500721154) + 2**(-0.5849625007211563)) Out[5]: -0.58496250072115608
In [6]: numpy.__version__ Out[6]: '1.3.0'
In [8]: !uname -a Linux octopus 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 05:23:09 UTC 2010 i686 GNU/Linux
(the machine is a 32-bit Pentium M laptop.)
I think this is related to 32 bits somehow. The code for the logaddexp and logaddexp2 functions is almost identical. Maybe the compiler? Chuck
On 1 April 2010 01:11, Charles R Harris
On Wed, Mar 31, 2010 at 11:03 PM, Anne Archibald
wrote: On 31 March 2010 16:21, Charles R Harris
wrote: On Wed, Mar 31, 2010 at 11:38 AM, T J
wrote: On Wed, Mar 31, 2010 at 10:30 AM, T J
wrote: Hi,
I'm getting some strange behavior with logaddexp2.reduce:
from itertools import permutations
import numpy as np

x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0,1,2]):
    print p, np.logaddexp2.reduce(x[list(p)])
Essentially, the result depends on the order of the array...and we get nans in the "bad" orders. Likely, this also affects logaddexp.
Sorry, forgot version information:
$ python -c "import numpy;print numpy.__version__" 1.5.0.dev8106
Looks like roundoff error.
No. The whole point of logaddexp and logaddexp2 is that they function correctly -- in particular, avoid unnecessary underflow and overflow -- in this sort of situation. This is a genuine problem.
Well, it did function correctly for me ;) Just some roundoff error.
I see it using the stock numpy in Ubuntu karmic: In [3]: np.logaddexp2(-0.5849625007211563, -53.584962500721154) Out[3]: nan
In [4]: np.logaddexp2(-53.584962500721154, -0.5849625007211563) Out[4]: nan
In [5]: np.log2(2**(-53.584962500721154) + 2**(-0.5849625007211563)) Out[5]: -0.58496250072115608
In [6]: numpy.__version__ Out[6]: '1.3.0'
In [8]: !uname -a Linux octopus 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 05:23:09 UTC 2010 i686 GNU/Linux
(the machine is a 32-bit Pentium M laptop.)
I think this is related to 32 bits somehow. The code for the logaddexp and logaddexp2 functions is almost identical. Maybe the compiler?
Possibly, or possibly I just didn't find values that trigger the bug for logaddexp. I wonder if the problem is the handling of denormalized floats? Anne
Chuck
On Wed, Mar 31, 2010 at 11:17 PM, Anne Archibald
On 1 April 2010 01:11, Charles R Harris
wrote: On Wed, Mar 31, 2010 at 11:03 PM, Anne Archibald <
peridot.faceted@gmail.com>
wrote:
On 31 March 2010 16:21, Charles R Harris
wrote: On Wed, Mar 31, 2010 at 11:38 AM, T J
wrote: On Wed, Mar 31, 2010 at 10:30 AM, T J
wrote: Hi,
I'm getting some strange behavior with logaddexp2.reduce:
from itertools import permutations
import numpy as np

x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0,1,2]):
    print p, np.logaddexp2.reduce(x[list(p)])
Essentially, the result depends on the order of the array...and we get nans in the "bad" orders. Likely, this also affects logaddexp.
Sorry, forgot version information:
$ python -c "import numpy;print numpy.__version__" 1.5.0.dev8106
Looks like roundoff error.
No. The whole point of logaddexp and logaddexp2 is that they function correctly -- in particular, avoid unnecessary underflow and overflow -- in this sort of situation. This is a genuine problem.
Well, it did function correctly for me ;) Just some roundoff error.
I see it using the stock numpy in Ubuntu karmic: In [3]: np.logaddexp2(-0.5849625007211563, -53.584962500721154) Out[3]: nan
In [4]: np.logaddexp2(-53.584962500721154, -0.5849625007211563) Out[4]: nan
In [5]: np.log2(2**(-53.584962500721154) + 2**(-0.5849625007211563)) Out[5]: -0.58496250072115608
In [6]: numpy.__version__ Out[6]: '1.3.0'
In [8]: !uname -a Linux octopus 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 05:23:09 UTC 2010 i686 GNU/Linux
(the machine is a 32-bit Pentium M laptop.)
I think this is related to 32 bits somehow. The code for the logaddexp and logaddexp2 functions is almost identical. Maybe the compiler?
Possibly, or possibly I just didn't find values that trigger the bug for logaddexp. I wonder if the problem is the handling of denormalized floats?
I'm suspecting the log2/exp2 function in the 32-bit gcc library or possibly sse2. The exponents aren't large enough to denormalize the numbers. Chuck
participants (8)
- Anne Archibald
- Charles R Harris
- David Cournapeau
- David Warde-Farley
- josef.pktd@gmail.com
- Ryan May
- T J
- Warren Weckesser