
Hi,

I don't know if this is the right place for this somewhat metaphysical question... I have read the Python docs for each version, but I can't find the real answer, and I have been stuck on it for 2 days :( Here is the question. The Python 3.4 docs say: "Historically, the Python prompt and built-in repr() function would choose the one with 17 significant digits, 0.10000000000000001. Starting with Python 3.1, Python (on most systems) is now able to choose the shortest of these and simply display 0.1." The question is: HOW does Python choose it? Here are some examples I typed in the shell:
In [1]: 0.1
Out[1]: 0.1

In [2]: 0.2
Out[2]: 0.2

In [3]: 0.3
Out[3]: 0.3
No problem here... Now:
In [4]: 0.1+0.2
Out[4]: 0.30000000000000004
The problem is NOT the 4 that appears at the end. I know it comes from the IEEE 754 representation, and that, as the docs say, Python displays the first 17 significant digits. The problem is why the long form appears here and not for 0.2... Here is why it is a problem for me (Decimal comes from the decimal module):
In [10]: Decimal(0.1)
Out[10]: Decimal('0.1000000000000000055511151231257827021181583404541015625')

In [11]: Decimal(0.2)
Out[11]: Decimal('0.200000000000000011102230246251565404236316680908203125')

In [12]: Decimal(0.1+0.2)
Out[12]: Decimal('0.3000000000000000444089209850062616169452667236328125')

In [13]: Decimal(0.3)
Out[13]: Decimal('0.299999999999999988897769753748434595763683319091796875')
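To compare with the 17-significant-digit form the docs talk about, I also ran this quick check (just my own sketch, using the standard '.17g' format spec; I am not claiming this is what repr() does internally):

    # Each value rounded to 17 significant digits on the left,
    # what the prompt / repr() actually shows on the right.
    for x in (0.1, 0.2, 0.3, 0.1 + 0.2):
        print(format(x, '.17g'), '|', repr(x))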
So, rounded to 17 significant digits, those exact values give:

0.1      -> 0.10000000000000001
0.2      -> 0.20000000000000001
0.3      -> 0.29999999999999999
0.1+0.2  -> 0.30000000000000004

For 0.1+0.2, that 17-digit form is exactly what Python displays, so that one is fine. BUT for 0.1, 0.2 and 0.3, Python displays just 0.1, 0.2 and 0.3 instead of their 17-digit forms. So how does Python choose the display, please? Thank you so much!

Philippe MOREAU
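PS: The only rule I could guess at from the docs sentence about "the shortest of these" is: maybe Python picks the shortest decimal string that converts back to exactly the same float? The little helper below is just my own sketch (shortest_roundtrip is my own name, not anything from CPython), but it reproduces what I see at the prompt:

    # Just a guess, NOT the actual CPython algorithm: try 1, 2, ...
    # significant digits and keep the first string that converts back
    # to exactly the same float.
    def shortest_roundtrip(x):
        for ndigits in range(1, 18):
            s = format(x, '.{}g'.format(ndigits))
            if float(s) == x:
                return s
        return format(x, '.17g')  # 17 significant digits always round-trip for a double

    for x in (0.1, 0.2, 0.3, 0.1 + 0.2):
        print(shortest_roundtrip(x))

This prints 0.1, 0.2, 0.3 and 0.30000000000000004, the same as the prompt. Is that roughly the idea, or is the real rule something different?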