1-0.95
Steven D'Aprano
steve+comp.lang.python at pearwood.info
Wed Jul 2 20:34:03 CEST 2014
On Wed, 02 Jul 2014 19:59:25 +0300, Marko Rauhamaa wrote:
> Steven D'Aprano <steve+comp.lang.python at pearwood.info>:
>
>> This is a problem with the underlying C double floating point format.
>> Actually, it is not even a problem with the C format, since this
>> problem applies to ANY floating point format, consequently this sort of
>> thing plagues *every* programming language (unless they use
>> arbitrary-precision rationals, but they have their own problems).
>
> Actually, it is not a problem at all. Floating-point numbers are a
> wonderful thing.
No, *numbers* in the abstract mathematical sense are a wonderful thing.
Concrete floating point numbers are a *useful approximation* to
mathematical numbers. But they're messy, inexact, and fail to have the
properties we expect real numbers to have, e.g. any of these can fail
with IEEE-754 floating point numbers:
1/(1/x) == x
x*(y+z) == x*y + x*z
x + y - z == x - z + y
x + y == x implies y == 0
You think maths is hard? That's *nothing* compared to reasoning about
floating point numbers, where you cannot even expect x+1 to be different
from x.
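Both of those failures are easy to demonstrate in CPython with plain floats; a couple of quick examples (nothing assumed beyond IEEE-754 binary64 doubles, which CPython uses):

```python
# Associativity of addition fails: the grouping changes the rounding.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False

# x + 1 == x for a large enough float: near 1e16 the spacing between
# adjacent doubles is 2, so adding 1 rounds straight back to x.
x = 1e16
print(x + 1 == x)       # True, even though 1 != 0
```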
In the Bad Old Days before IEEE-754, things were even worse! I've heard
of CPUs where it was impossible to guard against DivideByZero errors:
    if x != 0:    # succeeds
        print 1/x # divide by zero
because the test for inequality tested more digits than the divider used.
Ouch.
>> This works because the Decimal type stores numbers in base 10, like you
>> learned about in school, and so numbers that are exact in base 10 are
>> (usually) exact in Decimal.
>
> Exactly, the problem is in our base 10 mind.
No no no no! The problem is that *no matter what base you pick* some
exact rational numbers cannot be represented in a finite number of digits.
(Not to mention the irrationals.)
> Note, however:
>
> >>> Decimal(1) / Decimal(3) * Decimal(3)
> Decimal('0.9999999999999999999999999999')
Yes! Because Decimal has a finite (albeit configurable) precision, while
1/3 requires an infinite number of decimal places. Consequently,
1/Decimal(3) is a little bit smaller than 1/3, and multiplying by 3 gives
you something a little bit smaller than 1.
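That truncation is easy to see with the stdlib decimal module; a quick sketch (the 28-digit default context is the only assumption here):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28           # the default precision; configurable
third = Decimal(1) / Decimal(3)  # cut off at 28 digits, slightly < 1/3
print(third * 3)                 # Decimal('0.9999999999999999999999999999')
print(third * 3 == 1)            # False

# Raising the precision narrows the gap but never closes it:
getcontext().prec = 50
print(Decimal(1) / Decimal(3) * 3 == 1)  # still False
```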
Ironically, in base 2, the errors in that calculation cancel out:
py> 1/3*3 == 1
True
and of course in base 3 the calculation would be exact.
> Even "arbitrary-precision" rationals would suffer from the same problem:
Not so.
py> from fractions import Fraction
py> Fraction(1, 3)*3 == 1
True
"Arbitrary precision" rationals like Fraction are capable of representing
*every rational number* exactly (provided you have enough memory).
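For contrast with the usual 0.1 + 0.2 float surprise, a minimal sketch using only the stdlib fractions module:

```python
from fractions import Fraction

# Binary floats cannot represent 0.1, 0.2 or 0.3 exactly:
print(0.1 + 0.2 == 0.3)                                      # False

# Fraction stores an exact integer numerator/denominator pair,
# so the same sum is exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# And the 1/3 example above round-trips exactly:
print(Fraction(1, 3) * 3 == 1)                               # True
```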
> >>> Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
> False
The square root of 2 is not a rational number, so Rational(2).sqrt() must
already return an approximation -- the exactness guarantee only covers
rationals.
--
Steven