[Python-Dev] Expert floats

Tim Peters tim.one at comcast.net
Tue Mar 30 16:08:59 EST 2004


>> But you can't get away from that via any decimal rounding rule.  One
>> of the *objections* the 754 committee had to the Scheme rule is that
>> moving rounded shortest-possible decimal output to a platform with
>> greater precision could cause the latter platform to read in an
>> unnecessarily poor approximation to the actual number written on the
>> source platform.  It's simply a fact that decimal 1.1000000000000001
>> is a closer approximation to the number stored in an IEEE double
>> (given input "1.1" perfectly rounded to IEEE double format) than
>> decimal 1.1, and that has consequences too when moving to a wider
>> precision.

[Andrew Koenig]
> But if you're moving to a wider precision, surely there is an even
> better decimal approximation to the IEEE-rounded "1.1" than
> 1.1000000000000001 (with even more digits), so isn't the preceding
> paragraph a justification for using that approximation instead?

Like Ping, you're picturing typing in "1.1" by hand, so that you *know*
decimal 1.1 on-the-nose is the number you "really want".  But repr() can't
know that -- it's a general principle of 754 semantics for each operation to
take the bits it's fed at face value, because the implementation can't guess
intent, and it's likely to create more problems than it solves if it tries
to "improve" the bits it actually sees.  So far as reproducing observed
results as closely as possible goes, the wider machine will in fact do
better if it sees "1.1000000000000001" instead of "1.1", because the former
is in fact a closer approximation to the number the narrower machine
actually *uses*.
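
The gap is easy to check with Python's decimal module, which can show the
exact binary value an IEEE double actually stores (a quick illustrative
sketch; the variable names are mine, not part of the discussion above):

```python
from decimal import Decimal

# The exact value an IEEE-754 double stores for the literal 1.1:
exact = Decimal(1.1)
# -> Decimal('1.100000000000000088817841970012523233890533447265625')

# Distance of each decimal string from that stored value:
err_short = abs(Decimal("1.1") - exact)                 # ~8.88e-17
err_long  = abs(Decimal("1.1000000000000001") - exact)  # ~1.12e-17

# The 17-digit form really is the closer approximation:
assert err_long < err_short
```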

Suppose you had a binary float format with 3 bits of precision, and the
result of a computation on that box is .001 binary = 1/8 = 0.125 decimal.
The "shortest-possible reproducing decimal representation" on that box is
0.1.  Is it more accurate to move that result to a wider machine via the
string "0.1" or via the string "0.125"?  The former is off by 25%, but the
latter is exactly right.  repr() on the former machine has no way to guess
whether the 1/8 it's fed is the result of the user typing in "0.1" or the
result of dividing 1.0 by 8.0.  By taking the bits at face value, and
striving to communicate them as faithfully as possible, repr() stays
explainable, predictable, and as accurate as it can be.  "Looks pretty
too" isn't a
requirement for serious floating-point work.
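
The toy format can be modeled with exact fractions, reading "3 bits of
precision" as rounding to the nearest multiple of 1/8 (my simplification
of the example above; the helper name is hypothetical):

```python
from fractions import Fraction

def read_3bit(s):
    """Round a decimal string to the nearest multiple of 1/8 -- a toy
    stand-in for parsing into a 3-bit binary format."""
    return round(Fraction(s) * 8) / Fraction(8)

# Both strings read back as the same stored value, 1/8:
assert read_3bit("0.1") == Fraction(1, 8)
assert read_3bit("0.125") == Fraction(1, 8)

# But moved to a wider machine, "0.125" is exact while "0.1" is
# off by 25% relative to the decimal value it names:
rel_err = abs(Fraction(1, 8) - Fraction("0.1")) / Fraction("0.1")
assert rel_err == Fraction(1, 4)
```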
