Maths error
Tim Peters
tim.one at comcast.net
Tue Jan 9 13:03:45 EST 2007
[Rory Campbell-Lange]
>>> Is using the decimal module the best way around this? (I'm
>>> expecting the first sum to match the second). It seems
>>> anachronistic that decimal takes strings as input, though.
[Nick Maclaren]
>> As Dan Bishop says, probably not. The introduction to the decimal
>> module makes exaggerated claims of accuracy, amounting to propaganda.
>> It is numerically no better than binary, and has some advantages
>> and some disadvantages.
[Carsten Haese]
> Please elaborate. Which exaggerated claims are made,
Well, just about any technical statement can be misleading if not qualified
to such an extent that the only people who can still understand it knew it
to begin with <0.8 wink>. The most dubious statement here to my eyes is
the intro's "exactness carries over into arithmetic". It takes a world of
additional words to explain exactly what it is about the example given (0.1
+ 0.1 + 0.1 - 0.3 = 0 exactly in decimal fp, but not in binary fp) that
does, and does not, generalize. Roughly, it does generalize to one
important real-life use-case: adding and subtracting any number of decimal
quantities delivers the exact decimal result, /provided/ that precision is
set high enough that no rounding occurs.
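A quick sketch of that use-case (my example, in modern Python 3 syntax rather than the 2.x sessions below): sums and differences of decimal quantities come out exact as long as the context precision holds all the digits.

```python
from decimal import Decimal, getcontext

# Default precision (28 digits) is far more than these inputs need,
# so no rounding occurs and the sum is exactly zero.
getcontext().prec = 28
total = Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3")
print(total)  # 0.0
```

The same expression with binary floats (`0.1 + 0.1 + 0.1 - 0.3`) yields a small nonzero residue, because 0.1 and 0.3 have no exact binary representation.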
> and how is decimal no better than binary?
Basically, they both lose info when rounding does occur. For example,
>>> import decimal
>>> 1 / decimal.Decimal(3)
Decimal("0.3333333333333333333333333333")
>>> _ * 3
Decimal("0.9999999999999999999999999999")
That is, (1/3)*3 != 1 in decimal. The reason why is obvious "by eyeball",
but only because you have a lifetime of experience working in base 10. A
bit ironically, the rounding in binary just happens to be such that (1/3)*3
does equal 1:
>>> 1./3
0.33333333333333331
>>> _ * 3
1.0
It's not just * and /. The real thing at work in the 0.1 + 0.1 + 0.1 - 0.3
example is representation error, not sloppy +/-: 0.1 and 0.3 can't be
/represented/ exactly as binary floats to begin with. Much the same can
happen if you instead use inputs exactly representable in base 2 but
not in base 10 (and while there are none such if precision is infinite,
precision isn't infinite):
>>> x = decimal.Decimal(1) / 2**90
>>> print x
8.077935669463160887416100508E-28
>>> print x + x + x - 3*x # not exactly 0
1E-54
The same in binary f.p. is exact, because 1./2**90 is exactly representable
in binary fp:
>>> x = 1. / 2**90
>>> print x # this displays an inexact decimal approx. to 1./2**90
8.07793566946e-028
>>> print x + x + x - 3*x # but the binary arithmetic is exact
0.0
If you boost decimal's precision high enough, then this specific example is
also exact using decimal; but with the default precision of 28, 1./2**90
can't be represented exactly in decimal to begin with; e.g.,
>>> decimal.Decimal(1) / 2**90 * 2**90
Decimal("0.9999999999999999999999999999")
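To make the precision point concrete (my sketch, again in Python 3 syntax): 1/2**90 needs 63 significant decimal digits, so any precision at or above that makes this example exact in decimal too.

```python
from decimal import Decimal, localcontext

# With precision raised well past the 63 significant digits of 1/2**90,
# the division is exact, so the round trip and the sum are exact as well.
with localcontext() as ctx:
    ctx.prec = 100
    x = Decimal(1) / 2**90
    round_trip = x * 2**90       # exactly 1
    residue = x + x + x - 3 * x  # exactly 0
print(round_trip)  # 1
print(residue)     # 0E-127 (i.e., zero)
```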
All forms of fp are subject to representation and rounding errors. The
biggest practical difference here is that the `decimal` module is not
subject to representation error for "natural" decimal quantities, provided
precision is set high enough to retain all the input digits. That's worth
something to many apps, and is the whole ball of wax for some apps -- but
leaves a world of possible "surprises" nevertheless.
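As an aside on the string-constructor question that opened the thread (my illustration, using the float-accepting constructor that later Pythons grew): building a Decimal from a string captures the intended decimal quantity, while building one from a float captures that float's binary representation error.

```python
from decimal import Decimal

a = Decimal("0.1")  # exactly one tenth
b = Decimal(0.1)    # the exact decimal expansion of the binary float 0.1
print(a)            # 0.1
print(b)            # a long string of digits near, but not equal to, 0.1
```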