4 hundred quadrillionth?

Dave Angel davea at ieee.org
Fri May 22 04:13:45 CEST 2009


Rob Clewley wrote:
> On Thu, May 21, 2009 at 8:19 PM, Gary Herron <gherron at islandtraining.com> wrote:
>   
>> MRAB wrote:
>>     
>>> Grant Edwards wrote:
>>>       
>>>> On 2009-05-21, Christian Heimes <lists at cheimes.de> wrote:
>>>>         
>>>>> seanm.py at gmail.com schrieb:
>>>>>           
>>>>>> The explanation in my introductory Python book is not very
>>>>>> satisfying, and I am hoping someone can explain the following to me:
>>>>>>
>>>>>>             
>>>>>>>>> 4 / 5.0
>>>>>>>>>                   
>>>>>> 0.80000000000000004
>>>>>>
>>>>>> 4 / 5.0 is 0.8. No more, no less. So what's up with that 4 at the end.
>>>>>> It bothers me.
>>>>>>             
>>>>> Welcome to IEEE 754 floating point land! :)
>>>>>           
>
> FYI you can explore the various possible IEEE-style implementations
> with my python simulator of arbitrary floating or fixed precision
> numbers:
>
> http://www2.gsu.edu/~matrhc/binary.html
>
>   

It was over 40 years ago that I studied Fortran, with the McCracken book.  
There were big warnings in it about the hazards of binary floating 
point.  This was long before IEEE 754, Python, Java, or even C.

In any floating point system with finite precision, there will be some 
numbers that cannot be represented exactly.  Beginning programmers 
assume that if you can write it exactly, the computer should understand 
it exactly as well.  (That's one of the reasons the math package I 
microcoded a few decades ago was base 10).
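That original surprise is easy to dissect in today's Python: the stdlib 
decimal and fractions modules can expose the exact binary value a float 
actually holds (a sketch using only the standard library):

```python
from decimal import Decimal
from fractions import Fraction

x = 4 / 5.0

# Decimal(float) converts the exact stored binary value, digit for digit.
print(Decimal(x))   # 0.8000000000000000444089209850062616169452667236328125

# Fraction(float) shows the same stored value as an exact ratio.
print(Fraction(x))  # 3602879701896397/4503599627370496

# The stored value is close to, but not exactly, 4/5.
print(x == Fraction(4, 5))  # False
```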

If you try to write 1/3 in decimal notation, you either have to write 
forever or truncate it somewhere.  The only fractions that terminate 
are those whose denominator (in lowest terms) has no prime factors 
other than 2 and 5.  So 4/10 can be represented, and so can 379/625.  Any 
other fraction, like 1/7 or 3/91, will make a repeating decimal, 
sometimes taking many digits to repeat, but never settling into zeroes.
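That 2s-and-5s rule can be checked mechanically; here is a small sketch 
(the helper name is mine, not something from the thread):

```python
from fractions import Fraction

def terminates_in_base10(frac):
    """A fraction has a finite decimal expansion exactly when its
    reduced denominator has no prime factors other than 2 and 5."""
    d = frac.denominator  # Fraction is always kept in lowest terms
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(terminates_in_base10(Fraction(4, 10)))     # True:  0.4
print(terminates_in_base10(Fraction(379, 625)))  # True:  0.6064
print(terminates_in_base10(Fraction(1, 7)))      # False: 0.142857142857...
print(terminates_in_base10(Fraction(3, 91)))     # False: repeats as well
```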

In binary fractions, the rule is the same, but only powers of 2 are 
allowed.  If there's a factor of 5 in the denominator, the value cannot 
be represented exactly.
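The binary version of the rule is the same test with only the factor 2 
allowed; a sketch (again, the helper name is mine):

```python
from fractions import Fraction

def exact_in_binary(frac):
    # A reduced fraction has a finite binary expansion exactly when
    # its denominator is a power of two.
    d = frac.denominator
    return d & (d - 1) == 0  # power-of-two bit trick

print(exact_in_binary(Fraction(3, 8)))   # True:  0.011 in binary
print(exact_in_binary(Fraction(4, 10)))  # False: 4/10 = 2/5, a 5 remains
print(Fraction(0.1))  # the double nearest 1/10: 3602879701896397/36028797018963968
```

Note that 4/10, which terminates nicely in decimal, reduces to 2/5 and so 
can never be stored exactly as a binary float.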

So people learn to use integers, or rational numbers (fractions), or 
decimal representations, depending on what values they're willing to 
have be approximate.
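In Python those choices map directly onto stdlib types; a quick sketch of 
the trade-off:

```python
from decimal import Decimal
from fractions import Fraction

# Exact decimal arithmetic: Decimal('0.1') really is one tenth.
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
print(0.1 + 0.2 == 0.3)                                   # False with binary floats

# Exact rational arithmetic: no rounding at all.
print(Fraction(1, 3) + Fraction(1, 6))  # 1/2
```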

Something that escapes many people is that even when a value carries an 
error, converting it back to decimal can hide it.  So 0.4 might show an 
error at the right end, while 0.7 happens to look clean when printed.
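Python 3's repr rounds a float to the shortest decimal string that reads 
back as the same float, which is exactly this hiding effect; a sketch:

```python
from decimal import Decimal

# 0.7 carries a tiny binary error, but its printed form rounds clean:
print(0.7)           # 0.7 -- looks exact
print(Decimal(0.7))  # 0.6999999999999999555910790149937383830547332763671875

# Sometimes the error survives the round trip and shows up in print:
print(0.1 + 0.2)     # 0.30000000000000004
```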

Thanks for providing tools that let people play.

An anecdote from many years ago (1975) -- I had people complain about my 
math package, that cos(pi/2) was not zero.  It was something times 
10**-13, but still not zero.  And they wanted it to be zero.  If 
somebody set the math package to work in degrees, they'd see that 
cos(90) was in fact zero.  Why the discrepancy?  Well, you can't 
represent pi/2 exactly in any floating point or fraction system; it's 
irrational.  So if you had a perfect cos package but gave it a number 
that's off just a little from a right angle, you'd expect the answer to 
be off a little from zero.  It turns out that (in a 13 digit floating 
point package) the value was the next 13 digits of pi/2, and 12 of them 
were accurate.  I was pleased as punch.
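The same effect is easy to reproduce in today's Python with the stdlib 
math module: math.pi is only the double nearest the true pi, and cos 
faithfully reports the leftover angle.

```python
import math

# math.pi / 2 is a hair away from a true right angle, so the cosine
# is tiny but nonzero -- about 6.12e-17 for IEEE doubles.
print(math.cos(math.pi / 2))
```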




