[Tutor] How did this decimal error pop up?

eryksun eryksun at gmail.com
Wed Apr 17 01:06:48 CEST 2013


On Tue, Apr 16, 2013 at 6:10 PM, Hugo Arts <hugo.yoshi at gmail.com> wrote:
> >>> i = 0
> >>> for _ in range(1000):
> ...     i += 0.1
> ...
> >>> i
> 99.9999999999986
>
> That should, by all the laws of math, be 100. But it isn't, because
> the computer can't store 0.1 exactly in a float, so it stores the
> closest representable number instead. The value added to i above is
> not actually 0.1; it's exactly:
>
> 0.1000000000000000055511151231257827021181583404541015625
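
One quick way to see that stored value for yourself: Decimal(float)
converts the binary float exactly, digit for digit.

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')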

Now you have to explain how the summation ends up as *less* than
100. ;) Each stored 0.1 is slightly too large, yet the total falls
short, because every intermediate sum is itself rounded back to 53
bits, and here those roundings happen to go mostly downward, swamping
the tiny upward error in 0.1. That really gets to the heart of the
error introduced by intermediate rounding with fixed-precision
arithmetic, and it's a more important point than the fact that 0.1
has no exact representation in binary floating point. Python's floats
are limited to hardware precision: an IEEE 754 double carries a
53-bit significand, i.e. floor(53*log10(2)) == 15 decimal digits.
There's no option to extend the precision arbitrarily, as there is
with the Decimal type.
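
Here's a minimal sketch of the difference (assuming CPython;
math.fsum tracks the exact intermediate sum and rounds only once at
the end, while Decimal's precision is just a context setting):

>>> import math
>>> sum(0.1 for _ in range(1000))        # rounds after every addition
99.9999999999986
>>> math.fsum(0.1 for _ in range(1000))  # one correctly rounded result
100.0
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50               # extend working precision
>>> sum(Decimal('0.1') for _ in range(1000))
Decimal('100.0')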

BTW, did you know the decimal module got a C implementation in 3.3
(based on libmpdec)? It's a lot faster than the pure-Python
implementation of earlier versions, though still not as fast as
hardware floats.
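
If you want to measure the gap on your own machine (a rough sketch;
the numbers vary by build and hardware), timeit works fine:

>>> import timeit
>>> timeit.timeit('a + b', setup='a = 0.1; b = 0.2')  # seconds per 1,000,000 adds
>>> timeit.timeit('a + b', setup="from decimal import Decimal; "
...               "a = Decimal('0.1'); b = Decimal('0.2')")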

