struct doesn't handle NaN values?

Grant Edwards grante at visi.com
Thu May 13 21:07:25 EDT 2004


In article <mailman.538.1084489118.25742.python-list at python.org>, Tim Peters wrote:

>> but it never specifies which FP representation is used when.
> 
> The same as everything else: in native mode, whatever float
> and double representation the platform uses is what struct
> uses, just as in native mode struct uses whatever the platform
> uses for chars, shorts, ints and longs. In standard mode, the
> representation is forced to IEEE 754 float or double format.

My question was which mode "native" and "standard" refer to.  There
appear to be two different "modes": "byte order" and "size and
alignment".  Which of the two determines the floating point
representation used?  My reading of the doc was the latter: use
the native FP representation when the "size and alignment" column
says "native", and use IEEE when the "size and alignment" column
says "standard".
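To make that concrete (a sketch in a modern CPython, where standard mode does force IEEE 754): a '<' prefix selects standard size and alignment, so a double is always packed as an IEEE 754 binary64, while '@' (the default) uses whatever the platform uses.

```python
import struct

# Standard mode: '<' forces standard size/alignment, so a double is
# packed as an IEEE 754 binary64 regardless of the platform.
std = struct.pack('<d', 1.0)
print(std.hex())  # 000000000000f03f, i.e. 0x3FF0000000000000 little-endian

# Native mode: '@' uses the platform's own double representation and
# byte order; on a little-endian IEEE 754 host it matches standard mode.
native = struct.pack('@d', 1.0)
print(native.hex())
```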

> But it's still the case that all behavior wrt NaNs, Infs, and
> signed zeroes is an accident in standard mode. Indeed, it's
> precisely *because* standard mode tries to force the
> representation to a known format (and Python has no idea
> whether the platform it's running on uses 754 format natively
> or not) that these accidents occur.

In order to provide robust translation between native and IEEE
floating point formats, Python is going to have to know what
the native format is.

> C89 predates 754 adoption, and so offers no portable
> facilities even for recognizing whether a thing is a NaN, Inf,
> or signed 0.  "Standard" C tricks like
> 
>     if (x != x) { /* then x is a NaN */ }
> 
> don't actually work across platforms (although many with
> limited x-platform experience believe they do).

Recognizing and generating IEEE NaNs, infinities, 0's and
denormals is easy enough.
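That recognition can even be sketched from Python itself, given the packed bytes (an illustration, assuming an IEEE 754 binary64 layout: a NaN has an all-ones exponent and a nonzero fraction, an infinity an all-ones exponent and a zero fraction):

```python
import struct

def classify_ieee754(data):
    """Classify 8 little-endian bytes as an IEEE 754 binary64 value."""
    (bits,) = struct.unpack('<Q', data)
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    if exponent == 0x7FF:
        return 'nan' if fraction else 'inf'
    if exponent == 0:
        return 'zero' if fraction == 0 else 'denormal'
    return 'normal'

print(classify_ieee754(struct.pack('<d', float('nan'))))  # nan
print(classify_ieee754(struct.pack('<d', float('inf'))))  # inf
print(classify_ieee754(struct.pack('<d', 5e-324)))        # denormal
```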

Recognizing and generating native infinities, 0's and denormals
would require some compile-time configuration, but that's not
difficult either.  All the C compilers I've used in the past
dozen or two years provide pre-processor symbols to tell you
what architecture you're compiling for. If one doesn't want to
rely on that, compiling and running some simple test programs
à la autoconf should be able to determine pretty reliably if
the host is using IEEE representation or not.
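A runtime check in that autoconf spirit can be sketched in Python itself (an illustration only, not how struct actually configures itself): pack a value in native mode and compare it against the value's known IEEE 754 encoding.

```python
import struct

def native_doubles_are_ieee754():
    """Heuristic: does a native-mode double match IEEE 754 binary64?"""
    # pi as an IEEE 754 binary64, little-endian: 0x400921FB54442D18
    ieee_le = bytes.fromhex('182d4454fb210940')
    native = struct.pack('=d', 3.141592653589793)
    # Accept either byte order, since native mode follows host endianness.
    return native in (ieee_le, ieee_le[::-1])

print(native_doubles_are_ieee754())  # True on IEEE 754 hosts
```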

Since the vast majority of hosts out there use IEEE
representation, and the C compiler can tell you that at
compile-time, I see no reason why struct can't be made to work
better.  IIRC, the other FP representations I've worked with
(TI and DEC) were both minor variations on IEEE 754 and both
provided NaNs and infinities.  Why shouldn't we expect struct
to convert an IEEE NaN into a native NaN (and the reverse)?
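On IEEE hosts, later CPython releases do make exactly this round-trip work: a NaN or infinity survives a trip through standard mode intact (a demonstration in modern Python; in 2004 any such behavior was still accidental, as Tim notes).

```python
import math
import struct

# Pack a NaN in standard mode ('<d' = IEEE 754 binary64) and unpack it.
nan = float('nan')
(restored,) = struct.unpack('<d', struct.pack('<d', nan))
print(math.isnan(restored))  # True

# Infinity survives the same round trip.
inf = float('inf')
(restored_inf,) = struct.unpack('<d', struct.pack('<d', inf))
print(restored_inf == inf)   # True
```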

Are there architectures that support multiple floating point
representations that can only be determined at run-time?

>> I would guess that FP native vs. standard representation
>> matches the native vs. standard state of "size and alignment".
> 
> I'm not sure what that sentence said, but bet it's right <wink>.

I tried to state it more clearly above.

-- 
Grant Edwards                   grante             Yow!  I have a TINY BOWL in
                                  at               my HEAD
                               visi.com            


