[Tutor] How does the interpreter determine how many decimal places to display for a float?
boB Stepp
robertvstepp at gmail.com
Fri May 14 23:29:39 EDT 2021
In "15. Floating Point Arithmetic: Issues and Limitations" at
https://docs.python.org/3/tutorial/floatingpoint.html it says:
<quote>
Interestingly, there are many different decimal numbers that share the
same nearest approximate binary fraction. For example, the numbers 0.1
and 0.10000000000000001 and
0.1000000000000000055511151231257827021181583404541015625 are all
approximated by 3602879701896397 / 2 ** 55. Since all of these decimal
values share the same approximation, any one of them could be
displayed while still preserving the invariant eval(repr(x)) == x.
Historically, the Python prompt and built-in repr() function would
choose the one with 17 significant digits, 0.10000000000000001.
Starting with Python 3.1, Python (on most systems) is now able to
choose the shortest of these and simply display 0.1.
</quote>
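
For reference, the quoted claims are easy to check at the interpreter (this
assumes CPython on a platform with IEEE 754 doubles, which is nearly every
platform):

    from decimal import Decimal

    x = 0.1
    # All three decimal literals round to the same binary64 value.
    print(x == 0.10000000000000001)        # True
    print(x == 0.1000000000000000055511151231257827021181583404541015625)  # True

    # The exact stored fraction is 3602879701896397 / 2**55.
    print(x.as_integer_ratio())            # (3602879701896397, 36028797018963968)
    print(36028797018963968 == 2 ** 55)    # True

    # Decimal shows the full exact decimal expansion of that fraction.
    print(Decimal(x))  # 0.1000000000000000055511151231257827021181583404541015625

    # repr() picks the shortest string that round-trips to the same float.
    print(repr(x))              # 0.1
    print(eval(repr(x)) == x)   # True
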
How does Python >= 3.1 determine the "shortest of these"?
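
My naive guess is a round-trip search along the lines of the sketch below.
The shortest_roundtrip() helper is purely illustrative, just my mental model
of the result, not necessarily how CPython actually implements repr():

    def shortest_roundtrip(x):
        """Return the candidate with the fewest significant digits
        that still satisfies float(candidate) == x.
        Illustrative only; not CPython's actual repr() code."""
        for digits in range(1, 18):  # 17 significant digits always suffice for a double
            candidate = format(x, ".{}g".format(digits))
            if float(candidate) == x:
                return candidate
        return repr(x)

    print(shortest_roundtrip(0.1))        # 0.1
    print(shortest_roundtrip(0.1 + 0.2))  # 0.30000000000000004

Is that at least the right mental model, or is something cleverer going on?
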
TIA!
boB Stepp