[Python-ideas] [Python Ideas] Python Float Update

Alexander Belopolsky alexander.belopolsky at gmail.com
Fri Jun 5 00:40:11 CEST 2015


On Wed, Jun 3, 2015 at 9:01 PM, Guido van Rossum <guido at python.org> wrote:

> But there was a big issue that we didn't anticipate. During the course of
> a simple program it was quite common for calculations to slow down
> dramatically, because numbers with ever-larger numerators and denominators
> were being computed (and rational arithmetic quickly slows down as those
> get bigger).


The problem of unlimited growth can be solved by rounding, but the result
is in many ways worse than floating point numbers.  One obvious problem is
that while essentially every bit pattern of a binary float represents a
distinct number (NaNs and signed zeros aside), only about 60% of fractions
with limited numerators and denominators represent unique values.  The rest
are reducible by dividing the numerator and denominator by their GCD.
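A quick empirical check of that figure (not part of the original argument;
the ~60% share is the standard 6/pi**2 density of coprime pairs) might look
like this:

    from math import gcd, pi

    N = 1000  # arbitrary cap on numerators and denominators for this experiment
    coprime = sum(1 for a in range(1, N + 1)
                    for b in range(1, N + 1)
                    if gcd(a, b) == 1)

    print(coprime / N**2)   # share of pairs already in lowest terms, ~0.608
    print(6 / pi ** 2)      # asymptotic density of coprime pairs, ~0.6079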

Furthermore, the fractions with limited denominators are distributed very
unevenly on the number line.  This problem is present in binary floats as
well: floats between 1 and 2 are twice as dense as floats between 2 and 4,
but with fractions it is much worse.  Since a/b - c/d = (ad-bc)/(bd), the
nearest distinct fraction c/d to a/b is at a distance of at least 1/(bd)
from it (because |ad-bc| >= 1).  So if the denominators are limited by D
(|b| <= D and |d| <= D), for small b the nearest fraction to a/b is at a
distance of ~ 1/D, but if b ~ D, it is at a distance of ~ 1/D^2.  For
example, if we limit denominators to 10 decimal digits, the gaps between
neighboring fractions can vary from ~ 10^(-10) to ~ 10^(-20) even if the
fractions are of similar magnitude - say between 1 and 2.
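
A small sketch of those gap sizes (not from the original post; it relies on
the standard fact that consecutive fractions a/b < c/d with denominators <= D
satisfy b*c - a*d = 1, and right_neighbor is just an illustrative helper):

    from fractions import Fraction

    def right_neighbor(frac, D):
        # Smallest fraction greater than `frac` whose denominator is <= D,
        # assuming `frac` is already in lowest terms with denominator <= D.
        # From b*c - a*d = 1 we get d = -a^{-1} (mod b); take the largest
        # such d <= D, then c = (1 + a*d) / b.  Needs Python 3.8+ for
        # pow(a, -1, b).
        a, b = frac.numerator, frac.denominator
        d0 = (-pow(a, -1, b)) % b if b > 1 else 0
        d = d0 + ((D - d0) // b) * b
        c = (1 + a * d) // b
        return Fraction(c, d)

    D = 10**10                       # cap denominators at ~10 decimal digits
    x = Fraction(3, 2)               # small denominator, between 1 and 2
    y = Fraction(D + 1, D)           # denominator equal to the cap, also near 1

    print(right_neighbor(x, D) - x)  # 1/(2*(D-1)), ~ 5e-11
    print(right_neighbor(y, D) - y)  # 1/(D*(D-1)), ~ 1e-20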

These two problems rule out the use of fractions as a general-purpose
number type.