[Numpy-discussion] OS X PPC problem with Numpy 1.3.0b1

Charles R Harris charlesr.harris at gmail.com
Mon Mar 23 15:22:29 EDT 2009


On Mon, Mar 23, 2009 at 12:34 PM, Robert Pyle <rpyle at post.harvard.edu> wrote:

> Hi all,
>
> This is a continuation of something I started last week, but with a
> more appropriate subject line.
>
> To recap, my machine is a dual G5 running OS X 10.5.6, my python is
>
>    Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, 15:28:32)
>
> and numpy 1.3.0b1 was installed from the source tarball in the
> straightforward way with
>
>    sudo python setup.py install
>
>
> On Mar 19, 2009, at 3:46 PM, Charles R Harris wrote:
>
> > On Mar 19, 2009, at 1:38 PM, Pauli Virtanen wrote:
> >
> > Thanks for tracking this! Can you check what your platform gives for:
> >
> > > import numpy as np
> > > info = np.finfo(np.longcomplex)
> > > print "eps:", info.eps, info.eps.dtype
> > > print "tiny:", info.tiny, info.tiny.dtype
> > > print "log10:", np.log10(info.tiny), np.log10(info.tiny/info.eps)
> >
> > eps: 1.3817869701e-76 float128
> > tiny: -1.08420217274e-19 float128
> > log10: nan nan
> >
> > The log of a negative number is nan, so part of the problem is the
> > value of tiny. The sizes of the values also look suspect to me. On my
> > machine
> >
> > In [8]: finfo(longcomplex).eps
> > Out[8]: 1.084202172485504434e-19
> >
> > In [9]: finfo(float128).tiny
> > Out[9]: array(3.3621031431120935063e-4932, dtype=float128)
> >
> > So at a minimum eps and tiny are reversed.
> >
> > I started to look at the code for this but my eyes rolled up in my
> > head and I passed out. It could use some improvements...
> >
> > Chuck
>
> I have chased this a bit (or perhaps 128 bits) further.
>
> The problem seems to be that float128 is screwed up in general.  I
> tracked the test error back to lines 95-107 in
>
> /PyModules/numpy-1.3.0b1/build/lib.macosx-10.3-ppc-2.5/numpy/lib/machar.py
>
> Here is a short program built from these lines that demonstrates what
> I believe to be at the root of the test failure.
>
> ######################################
> #! /usr/bin/env python
>
> import numpy as np
> import binascii as b
>
> def t(type="float"):
>     max_iterN = 10000
>     print "\ntesting %s" % type
>     a = np.array([1.0],type)
>     one = a
>     zero = one - one
>     for _ in xrange(max_iterN):
>         a = a + a
>         temp = a + one
>         temp1 = temp - a
>         print _+1, b.b2a_hex(temp[0]), temp1
>         if any(temp1 - one != zero):
>             break
>     return
>
> if __name__ == '__main__':
>     t(np.float32)
>     t(np.float64)
>     t(np.float128)
>
> ######################################
>
> This tries to find the number of bits in the significand by
> calculating ((2.0**n)+1.0) for increasing n, and stopping when the sum
> is indistinguishable from (2.0**n), that is, when the added 1.0 has
> fallen off the bottom of the significand.
>
> My print statement shows the power of 2.0, the hex representation of
> ((2.0**n)+1.0), and the difference ((2.0**n)+1.0) - (2.0**n), which
> one expects to be 1.0 up to the point where the added 1.0 is lost.
>
> Here are the last few lines printed for float32:
>
> 19 49000010 [ 1.]
> 20 49800008 [ 1.]
> 21 4a000004 [ 1.]
> 22 4a800002 [ 1.]
> 23 4b000001 [ 1.]
> 24 4b800000 [ 0.]
>
> You can see the added 1.0 marching to the right and off the edge at 24
> bits.
>
> Similarly, for float64:
>
> 48 42f0000000000010 [ 1.]
> 49 4300000000000008 [ 1.]
> 50 4310000000000004 [ 1.]
> 51 4320000000000002 [ 1.]
> 52 4330000000000001 [ 1.]
> 53 4340000000000000 [ 0.]
>
> There are 53 bits, just as IEEE 754 would lead us to hope.
>
> However, for float128:
>
> 48 42f00000000000100000000000000000 [1.0]
> 49 43000000000000080000000000000000 [1.0]
> 50 43100000000000040000000000000000 [1.0]
> 51 43200000000000020000000000000000 [1.0]
> 52 43300000000000010000000000000000 [1.0]
> 53 43400000000000003ff0000000000000 [1.0]
> 54 43500000000000003ff0000000000000 [1.0]
>
> Something weird happens as we pass 53 bits.  I think the output lines
> for 53 and 54 *should* be
>

PPC stores long doubles as a pair of doubles (the so-called double-double
format). I don't recall exactly how the two are combined, but the result is
that the numbers aren't in the form you would expect. Long doubles on the PPC
have always been iffy, so it is no surprise that machar fails. The failure on
SPARC quad precision bothers me more.
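
For what it's worth, the bit pattern Robert printed for 2.0**53 + 1.0 fits
that picture: if the long double really is a pair of doubles, the 128-bit
value should split into two ordinary big-endian IEEE-754 doubles, and it
does (a quick sketch, reusing the hex string from his output above):

    import struct
    import binascii as b

    bits = "43400000000000003ff0000000000000"   # 2.0**53 + 1.0 on the G5

    # Decode each 64-bit half as a plain big-endian IEEE-754 double.
    hi = struct.unpack(">d", b.a2b_hex(bits[:16]))[0]
    lo = struct.unpack(">d", b.a2b_hex(bits[16:]))[0]

    print "hi =", hi        # 9.00719925474e+15, i.e. 2.0**53
    print "lo =", lo        # 1.0
    # The long double value is hi + lo, so 2**53 + 1 is still represented
    # exactly and the probe in machar.py doesn't stop at 53 bits.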

I think the easy thing to do for the 1.3 release is to fix the precision
test to use a hardwired range of values; I don't think testing the extremely
small values is necessary to check the power series expansion. But I have
been leaving that fixup to Pauli.
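
The idea would be something like this (just a sketch, not the actual test;
the function and values here are stand-ins for whatever the failing test
exercises):

    import numpy as np

    def check_small_values(func, reference, dtype=np.longdouble):
        # Hardwired small-but-ordinary values: no need to probe anywhere
        # near finfo(dtype).tiny, which is what goes wrong on PPC.
        x = np.array([1e-10, 1e-6, 1e-3, 1e-1], dtype=dtype)
        return np.allclose(func(x), reference(x))

    # e.g. compare log1p against the naive formula over that fixed range
    print check_small_values(np.log1p, lambda x: np.log(1 + x))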

Longer term, I think the values in finfo could come from npy_cpu.h and be
hardwired in. We only support IEEE floats, and I don't think it should be
difficult to track extended precision (current Intel) vs. quad precision
(SPARC), although at some point I expect Intel will also go to quad precision
and then things might get sticky. Hmm, I wonder what some of the other
supported architectures do? Anyhow, PPC is an exception in the way it treats
long doubles, and I'm not even sure it hasn't changed in some of the more
recent models.


>
> 53 43400000000000008000000000000000 [1.0]
> 54 43500000000000004000000000000000 [1.0]
>
> etc., with the added 1.0 continuing to march rightwards to extinction,
> as before.
>
> The calculation eventually terminates with
>
> 1022 7fd00000000000003ff0000000000000 [1.0]
> 1023 7fe00000000000003ff0000000000000 [1.0]
> 1024 7ff00000000000000000000000000000 [NaN]
>
> (7ff00000000000000000000000000000 == Inf).
>

Interesting.


>
> This totally messes up the remaining parts of machar.py and leaves us
> with the Infs and NaNs (logs of negative numbers, etc.) that we saw last
> week.
>
> But wait, there's more!  I also have an Intel Mac (a MacBook Pro).
> This passes numpy.test(), but when I look in detail with the above
> code, I find that the float128 significand has only 64 bits, leading
> me to suspect that it is really the so-called 80-bit "extended
> precision".
>

That's right. On 32-bit machines extended precision is stored in 96 bits
(3*32) for alignment purposes; on 64-bit machines it is stored in 128 bits
(2*64).
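
The padding is easy to see from Python (a sketch; run it on the Intel side,
since finfo is what's broken on the G5):

    import numpy as np

    ld = np.finfo(np.longdouble)
    print "storage bits: ", np.dtype(np.longdouble).itemsize * 8  # 96 or 128
    print "mantissa bits:", ld.nmant + 1                          # 64 either way
    print "eps:          ", ld.eps                                # 2.0**-63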


>
> Is this true?  If so, should there be such large differences between
> architectures for the same nominal precision?  Is float128 just
> whatever the underlying C compiler thinks a "long double" is?
>

Yes. I've raised the question of using something more explicit than bit
length to track extended precision, not least because of the quad-precision
mix-up. But we can't really do anything unless C supports it.
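
To be concrete about the current situation (a sketch; the exact alias
depends on the build):

    import numpy as np

    # np.longdouble is the portable spelling of C "long double"; the
    # float96/float128 names are just aliases chosen by storage size,
    # not by a particular floating-point format.
    print np.longdouble                         # e.g. <type 'numpy.float128'>
    print np.dtype(np.longdouble).itemsize * 8  # the number in the alias name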


>
> I've spent way too much time on this, so I'm going to bow out here
> (unless someone can suggest something for me to try that won't take
> too much time).
>

Thanks for the effort you have made. It helps.

Chuck