Count Down to Zero
Steve Holden
sholden at holdenweb.com
Fri Sep 22 22:52:50 CEST 2000
Curtis Jensen wrote:
>
> How come Python can't count down to zero? Was it written by ancient
> Romans or something?
>
[Example in which 0.1 is repeatedly subtracted from 1.0 in
the hopes that an exact zero will be reached]
> 1.38777878078e-16
> >>>
>
> The last one should be 0.0
>
Well, the crux of your posting is that little phrase "should be". You
and I might agree that it should be, but unfortunately computers are
binary devices, and they tend (as Python does) to use a format called
"floating point" to store real numbers. Here's why you don't see what
you apparently expected:
Floating point numbers are made up of two parts: a mantissa (which
you can think of as the significant digits) and an exponent (which
you can think of as telling you how big the number actually is).
In fact, the number your little test printed is in *decimal* floating-
point notation: in this case the mantissa is shown as a value
between 1 and 10, (1.38777878078), and the exponent (-16) tells you
that to get it in that range the decimal point has been shifted (in
this case leftwards, as the exponent is negative) sixteen places.
So the number is really 0.000000000000000138777878078. Close, but
alas no cigar.
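You can make the shift concrete by printing the same value in plain
fixed-point form (a quick sketch; the "%.30f" format simply asks for
thirty decimal places):

```python
x = 1.38777878078e-16

# Printed without the exponent, the significant digits don't
# appear until the sixteenth decimal place.
print("%.30f" % x)
```

Tiny, but not zero.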
Now, what your test printed isn't even necessarily the EXACT value
which was computed, because to make sense for our decimal-oriented
brains the Python interpreter has kindly translated it from binary
floating point. But the real (no pun intended) problem is that the
binary floating point numbers have NO WAY to store 0.1 as an exact
number. Unlike decimal notation, binary floating-point formats usually
use a mantissa between 0.5 and 1 to make sure they have as many
significant digits as possible, and adjust the exponent to make the
size come out right.
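You can ask the interpreter for the exact value it actually stored
(assuming a Python recent enough to have the fractions module; this
is just a way of peeking, not part of the argument):

```python
from fractions import Fraction

# Fraction(0.1) converts the stored binary value exactly.
# Note the denominator is a power of two, not a power of ten.
exact = Fraction(0.1)
print(exact)                       # 3602879701896397/36028797018963968
print(exact == Fraction(1, 10))    # False: 0.1 was rounded on input
```

That denominator is 2**55, which is the giveaway: only fractions with
power-of-two denominators come out exact in binary.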
We can represent some numbers exactly this way (for example, 0.5,
0.75, 0.625 - any sum of powers of 2 whose exponents lie close enough
together to fit in the mantissa will do). Look:
>>> x = 10.0
>>> while x > 0:
...     x = x - 0.25
...
>>> x
0.0
But your 0.1 will actually be represented in binary as something close,
but not exactly equal, to 0.1. And these errors just accumulate
every time you do arithmetic. It could have been worse, of course:
you could have ended up using binary numbers where the binary was
actually a bit smaller than the decimal, and then your final result
would have been much closer to -0.1 than 0.0!
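You can watch the error accumulate directly; here ten additions of
"almost 0.1" fall just short of 1.0:

```python
total = 0.0
for _ in range(10):
    total = total + 0.1

# Each addition carries a tiny representation error along with it.
print(total == 1.0)    # False
print(repr(total))     # a hair under 1.0
```

The gap is only about one part in ten quadrillion, but a test like
`x > 0` doesn't care how small the gap is.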
Sadly, unless you want to go in for rather more esoteric options like
implementing fixed-point arithmetic, you might have to file this one
under "life's a bitch" and learn to live with it. Sorry the news isn't
better!
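For the record, the fixed-point idea can be as simple as counting in
integer tenths, since integers are exact (a sketch, not a full
fixed-point library):

```python
# Fixed-point sketch: represent tenths as integers.
x = 10           # stands for 1.0, i.e. ten tenths
while x > 0:
    x = x - 1    # subtract exactly one tenth
print(x / 10.0)  # 0.0 -- the countdown really reaches zero
```

All the arithmetic happens on integers, and the float conversion only
occurs once, at the end, where 0/10.0 is exactly representable.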
regards
Steve
--
Helping people meet their information needs with training and technology.
703 967 0887 sholden at bellatlantic.net http://www.holdenweb.com/