[Python-Dev] Expert floats

Andrew Koenig ark-mlist at att.net
Tue Mar 30 19:38:06 EST 2004


> > But that rules out those hypothetical machines with greater precision
> > on which you are basing your argument.
> 
> Sorry, couldn't follow that one.

You argued against applying the Scheme rules because that would make
marshalling less accurate when the unmarshalling is done on a machine with
longer floats.  But on such a machine, 17 digits won't be good enough
anyway.

> > I thought that 754 requires input and output to be no more than 0.47
> > LSB away from exact.

> No, and no standard can ask for better than 0.5 ULP error (when the true
> result is halfway between two representable quantities, 0.5 ULP is the
> smallest possible error "even in theory").  It requires perfect rounding
> for "suitably small" inputs; outside that range, and excepting
> underflow/overflow:
> 
> - for nearest/even rounding, it requires no more than 0.47 ULP error
>   *beyond* that allowed for perfect nearest/even conversion (which
>   has a max error of 0.5 ULP on its own)

That's what I meant.  Rather than 0.47 ULP from exact, I meant 0.47 ULP
beyond the best possible.
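
To illustrate why 0.5 ULP is the floor: a value that sits exactly halfway
between two adjacent doubles forces any conversion to be off by exactly
half a unit in the last place, no matter how it is done.  A quick check
(my sketch, not anything from the standard):

    from fractions import Fraction

    # 1 + 2**-53 lies exactly halfway between 1.0 and the next double,
    # 1 + 2**-52.  Converting that exact value (or its finite decimal
    # string) must pick one of the two neighbours, so the error is 0.5 ULP.
    midpoint = Fraction(1) + Fraction(1, 2**53)
    rounded = float(midpoint)                   # nearest/even picks 1.0
    error = abs(Fraction(rounded) - midpoint)
    assert error == Fraction(1, 2**53)          # half of the 2**-52 ULP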

> > Surely the spirit of 754 would require more than 17 significant
> > digits on machines with more than 56-bit fractions.

> Yes, and the derivation of "17" for IEEE double format isn't hard.  Of
> course the 754 standard doesn't say anything about non-754 architectures;
> there are generalizations in the related 854 standard.

Yes.
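
For concreteness, the derivation amounts to this: D decimal digits are
enough to round-trip a p-bit binary significand whenever 10**(D-1) > 2**p,
and for p = 53 the smallest such D is 17.  A rough sketch in Python (the
helper name is just for illustration):

    def digits_for_roundtrip(p):
        # Smallest D with 10**(D-1) > 2**p: that many decimal digits can
        # distinguish every p-bit significand, so binary -> decimal -> binary
        # round-trips exactly, given correctly rounded conversions.
        d = 1
        while 10 ** (d - 1) <= 2 ** p:
            d += 1
        return d

    print(digits_for_roundtrip(53))   # 17 for IEEE double
    print(digits_for_roundtrip(56))   # 18 for a 56-bit fraction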

> > Understood.  What I meant when I started this thread was that I think
> > things would be better in some ways if Python did not rely on the
> > underlying C library for its floating-point conversions--especially
> > in light of the fact that not all C libraries meet the 754
> > requirements for conversions.

> No argument there.  In fact, it would be better in some ways if Python
> didn't rely on the platform C libraries for anything.

Hey, I know some people who write C programs that don't rely on the platform
C libraries for anything :-)

> > Naah - I also suggested it because I like the Scheme style of
> > conversions, and because I happen to know David Gay personally.  I
> > have no opinion about how easy his code is to maintain.
> 
> It's not "David Gay" at issue; it's that this code is trying to do an
> extremely delicate and exacting task in a language that offers no native
> support.  So here's a snippet of the #ifdef maze at the top:

<snip>

> There are over 3,000 lines of code "like that" in dtoa.c alone.
> "Obviously correct" isn't obvious, and some days I think I'd rather track
> down a bug in Unicode.

Understood.
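
For anyone following along, the "Scheme style" rule is the shortest-digits
one: print the fewest decimal digits that read back to the same float.  A
brute-force sketch of the idea (nothing like Gay's actual algorithm, which
produces the shortest correctly-rounded string directly):

    def shortest_repr(x):
        # Try 1..16 significant digits and return the first string that
        # converts back to exactly the same double; fall back to 17.
        for ndigits in range(1, 17):
            s = '%.*g' % (ndigits, x)
            if float(s) == x:
                return s
        return '%.17g' % x

    print(shortest_repr(0.1))        # '0.1'
    print(shortest_repr(1.0 / 3.0))  # '0.3333333333333333'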

> > I completely agree that if you're going to rely on the underlying C
> > implementation for floating-point conversions, there's little point in
> > trying to do anything really good--C implementations are just too
> > variable.


> Well, so far as marshal goes (storing floats in code objects), we could
> and should stop trying to use decimal strings at all -- Python's 8-byte
> binary pickle format for floats is portable and is exact for (finite) IEEE
> doubles.

Gee, then you could go back to rounding to 12 digits and make ?!ng happy :-)
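
To make the binary route concrete (a sketch of the idea, not the actual
marshal code): an 8-byte big-endian IEEE encoding via the struct module
round-trips every finite double exactly, with no decimal digits involved.

    import struct

    def pack_double(x):
        # Portable 8-byte big-endian IEEE-754 encoding -- the same idea as
        # the binary pickle format for floats.
        return struct.pack('>d', x)

    def unpack_double(data):
        return struct.unpack('>d', data)[0]

    x = 0.1
    assert unpack_double(pack_double(x)) == x   # bit-for-bit round trip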



