steve+comp.lang.python at pearwood.info
Wed Jul 2 17:27:20 CEST 2014
On Tue, 01 Jul 2014 14:17:14 -0700, Pedro Izecksohn wrote:
> pedro at microboard:~$ /usr/bin/python3
> Python 3.3.2+ (default, Feb 28 2014, 00:52:16) [GCC 4.8.1] on linux
> Type "help", "copyright", "credits" or "license" for more information.
> How to get 0.05 as result?
Oh, this is a fantastic example of the trouble with floating point! Thank
you for finding it!
py> 1 - 0.95
py> 1 - 0.9 - 0.05
py> 1 - 0.5 - 0.45
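For the record, here is a small sketch showing that all three subtractions miss 0.05 by a tiny amount (the exact digits printed come from the IEEE-754 double format, not from Python itself), and that rounding to two decimal places recovers the answer you expect:

```python
# None of these hits 0.05 exactly, because 0.95, 0.9, 0.45 and 0.05
# cannot be stored exactly in base-2 floating point.
for expr, value in [("1 - 0.95", 1 - 0.95),
                    ("1 - 0.9 - 0.05", 1 - 0.9 - 0.05),
                    ("1 - 0.5 - 0.45", 1 - 0.5 - 0.45)]:
    print(expr, "->", value, "| equals 0.05?", value == 0.05)

# Rounding to 2 decimal places lands back on the float nearest 0.05:
print(round(1 - 0.95, 2) == 0.05)  # True
```

The errors are around 1e-17, which is why rounding (or formatting with something like "%.2f") makes them invisible again.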
*This is not a Python problem*
This is a problem with the underlying C double floating point format.
Actually, it is not even a problem with the C format: this problem applies
to ANY floating point format, and consequently this sort of thing plagues
*every* programming language (unless it uses arbitrary-precision
rationals, which have their own problems).
In this *specific* case, you can get better (but slower) results by using
the Decimal format:
py> from decimal import Decimal
py> 1 - Decimal("0.95")
Decimal('0.05')
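To see the contrast directly, here is a small sketch comparing the same kind of arithmetic done with floats and with Decimal (only the stdlib decimal module is used):

```python
from decimal import Decimal

# Base-2 floats: 0.1 * 3 does not give exactly 0.3.
print(0.1 * 3)            # a hair above 0.3
print(0.1 * 3 == 0.3)     # False

# Base-10 Decimal: the same arithmetic is exact.
print(Decimal("0.1") * 3)                    # Decimal('0.3')
print(Decimal("0.1") * 3 == Decimal("0.3"))  # True
```

Note the crucial detail: construct the Decimal from a *string*. Decimal(0.1), with a float argument, faithfully copies the float's base-2 rounding error instead of giving you an exact one-tenth.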
This works because the Decimal type stores numbers in base 10, like you
learned about in school, and so numbers that are exact in base 10 are
(usually) exact in Decimal. However, the built-in float stores numbers in
base 2, for speed and accuracy. Unfortunately many numbers which are
exact in base 10 are not exact in base 2. Let's look at a simple number
like 0.1 (in decimal), and try to calculate it in base 2:
0.1 in binary (0.1b) equals 1/2, which is too big.
0.01b equals 1/4, which is too big.
0.001b equals 1/8, which is too big.
0.0001b equals 1/16, which is too small, so the answer lies somewhere
between 0.0001b and 0.001b.
0.00011b equals 1/16 + 1/32 = 3/32, which is too small.
0.000111b equals 1/16 + 1/32 + 1/64 = 7/64, which is too big, so the
answer lies somewhere between 0.000111b and 0.00011b.
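The bisection above can be automated. Here is a sketch that uses the stdlib fractions module to emit the binary digits of 1/10 exactly (Fraction keeps the arithmetic exact while we extract digits; the helper name binary_digits is my own):

```python
from fractions import Fraction

def binary_digits(x, n):
    """Return the first n binary digits of a fraction 0 <= x < 1."""
    digits = []
    for _ in range(n):
        x *= 2
        bit = int(x)       # 1 if the doubled value reached 1, else 0
        digits.append(bit)
        x -= bit           # keep only the fractional part
    return digits

bits = binary_digits(Fraction(1, 10), 20)
print("".join(map(str, bits)))  # 00011001100110011001
```

After the leading 0001, the block 1001 (i.e. the "0011" pattern, offset by one digit) just repeats forever, exactly as the manual bisection predicts.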
If you keep going, you will eventually find that the decimal 0.1 written
in base 2 is 0.000110011001100110011001100110011... where the "0011"
repeats forever. (Just like in decimal, where 1/3 = 0.33333... repeating
forever.) Since floats don't use an infinite amount of memory, this
infinite sequence has to be rounded off somewhere. And so it is that 0.1
stored as a float is a *tiny* bit larger than the true decimal value 0.1.
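You can see that tiny excess directly: passing the float 0.1 to Decimal converts the stored base-2 value to base 10 exactly, digit for digit:

```python
from decimal import Decimal

# Decimal(float) shows the float's exact stored value in base 10.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

The stored value is about 5.5e-18 above one-tenth, and that sliver is where the 0.050000000000000044 in the original question comes from.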
All the troubles with floating point numbers start with this harsh
reality: numbers have to be rounded off somewhere, lest they use infinite
memory, and that rounding introduces errors into the calculation.
Sometimes those errors cancel, and sometimes they reinforce.
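A sketch of both outcomes, one where the rounding works out invisibly and one where it does not:

```python
# The rounding can work out: doubling 0.1 scales its stored error by an
# exact power of two, and the result is exactly the float nearest 0.2.
print(0.1 + 0.1 == 0.2)   # True

# Or it can show through: the errors in 0.1 and 0.2 combine, and the sum
# lands on a different float than the one nearest 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004
```

Which way any particular calculation goes depends on the exact bit patterns involved, which is why these errors look so capricious.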
To understand what is going on in more detail, you can start with the
standard references on floating point arithmetic.