[Python-ideas] Why not picoseconds?
stephanh42 at gmail.com
Fri Oct 20 10:42:39 EDT 2017
Please excuse me for getting a bit off-topic, but I would like
to point out that except for bean-counters who need to be
bug-compatible with accounting standards, decimal
floating point is generally a bad idea.
That is because the worst-case bound on the relative rounding error
grows linearly with the base. So you really want to choose the base
as small as possible, i.e., 2.
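To make that concrete, here is a small sketch (the function name and the fixed 52-bit mantissa budget are my own illustration, not anything from the thread). At a fixed storage budget, a base-b format holds fewer digits, and the half-ulp bound on the relative rounding error works out to (b/2) * 2**-bits, i.e. linear in the base:

```python
import math

def worst_case_rel_error(base, storage_bits):
    """Half-ulp relative error bound at a fixed mantissa storage budget."""
    # number of base-`base` digits that fit in `storage_bits` bits
    digits = storage_bits / math.log2(base)
    # round-to-nearest worst case: 0.5 * base**(1 - digits)
    # which simplifies to (base / 2) * 2**-storage_bits
    return 0.5 * base ** (1 - digits)

binary  = worst_case_rel_error(2, 52)   # ~2.2e-16, like IEEE double
decimal = worst_case_rel_error(10, 52)  # 5x worse for the same storage
```

So with the same 52 bits of mantissa, a decimal format has a worst-case bound five times larger than a binary one.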
This is not really related to the fact that computers use base-2
arithmetic, that is just a happy coincidence.
If we used ternary logic for our computers, floating point should still
be base-2, and computer architects would complain about the costly
multiplication and division by powers of two (just as they have
historically complained about the costly implementation of denormals,
but we got those anyway, mostly thanks to Prof. Kahan convincing Intel).
Worse, a base larger than 2 also increases the spread of the relative
rounding error across magnitudes. This phenomenon is called "wobble"
and injects extra noise into calculations.
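A quick way to see the wobble (my own sketch, using the stdlib `decimal` module with a hypothetical 3-digit context): within one decade the ulp is constant, so the worst-case *relative* error just above a power of ten is about base times the one just below the next power of ten:

```python
from decimal import Context

# Round to 3 significant decimal digits. The ulp is constant within a
# decade, so the relative error wobbles by a factor of the base (10).
ctx = Context(prec=3)

def rel_err(x):
    # relative error after rounding x to the 3-digit decimal context
    rounded = float(ctx.create_decimal_from_float(x))
    return abs(rounded - x) / x

near_one = rel_err(1.005)  # worst case just above a power of ten
near_ten = rel_err(9.995)  # worst case just below the next power of ten
```

Here `near_one` comes out roughly ten times `near_ten`; in base 2 that spread is only a factor of 2.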
The ultimate problem is that the real number line contains
quite a few more elements than our puny computers can handle.
There is no 100% solution for this, but of all the possible
compromises, binary floating point sits at a near-optimal point
in the design space for a wide range of applications.
On 20 Oct. 2017 3:13 p.m., "Victor Stinner" <victor.stinner at gmail.com> wrote: