Steven D'Aprano steve at REMOVE-THIS-cybersource.com.au
Sun Feb 17 04:19:27 CET 2008

```On Sat, 16 Feb 2008 18:21:40 -0800, Carl Banks wrote:

> Consider what happens when you add two fractions:
>
> 1/2 + 1/5
>
> To do that, you have to take the LCD of the denominator, in this case 10,
> so you get
>
> 5/10 + 2/10 = 7/10
>
> Now imagine that you're adding a lot of different numbers with a lot of
> different bases.  That LCD's going to be pretty big.

It *will* be pretty big, or it *could be* pretty big?

The worst case comes from something like this:

a/2 + b/3 + c/4 + d/5 + e/6 + ...

where the denominator could be as high as n! (for n = the number of
fractions you're adding). Since n! gets large quickly, you don't want
that.

But a less-naive implementation shouldn't be that bad: the lowest common
denominator of a/2 + c/4 is 4, not 8. Even so, the product of the
relatively-prime denominators gets large quickly, just not quite as
quickly as n!.
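To make that concrete, here's a sketch using the fractions module (new in
Python 2.6 as of this writing; the helper name harmonic_tail is mine, not
from any library). Fraction keeps results in lowest terms, so the running
denominator is at worst the LCM of the inputs' denominators, well below
the naive n! bound:

```python
# Sum 1/2 + 1/3 + ... + 1/n exactly and watch the denominator.
# Fraction reduces to lowest terms, so the denominator stays at
# the LCM of 2..n rather than the naive product (roughly n!).
from fractions import Fraction
from math import factorial

def harmonic_tail(n):
    """Return 1/2 + 1/3 + ... + 1/n as an exact Fraction."""
    total = Fraction(0)
    for d in range(2, n + 1):
        total += Fraction(1, d)
    return total

s = harmonic_tail(10)
print(s)              # 4861/2520
print(s.denominator)  # 2520 -- the LCM of 2..10
print(factorial(10))  # 3628800 -- the naive worst-case bound
```

So even for ten terms the reduced denominator is three orders of magnitude
smaller than the factorial bound, though it still grows fast.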

Which is where a good fraction implementation should allow you to specify
the largest denominator you wish to see. After all, the difference
between:

975328755018/6827301285133

and

1/7

is less than 2e-13, or a relative error of approximately 0.0000000001%.
If you care about a difference of that magnitude, you're probably already
using numpy.
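As it happens, the fractions module offers exactly this knob via
Fraction.limit_denominator, which returns the closest fraction whose
denominator doesn't exceed a given cap. A sketch with the numbers above:

```python
# Cap the denominator of an unwieldy fraction, trading exactness
# for a compact approximation.
from fractions import Fraction

ugly = Fraction(975328755018, 6827301285133)
neat = ugly.limit_denominator(10)

print(neat)                  # 1/7
print(abs(ugly - neat) < Fraction(2, 10**13))  # True
```

Note that 6827301285133 is exactly 7 * 975328755019, so the true error
here is 1/6827301285133, about 1.5e-13.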

An interesting question would be, how large a denominator would you need
to beat the precision of floats? My back-of-the-envelope calculation
suggests that limiting your denominator to 4503599627370496 is enough to
give you a precision as good as floats:

# on my PC, the machine accuracy is 2.2204460492503131e-16
# this is the smallest float which, when added to 1.0,
# doesn't give 1.0
>>> eps = 2.2204460492503131e-16
>>> 1/eps
4503599627370496.0

The advantage is that while 1000.0+eps gives 1000.0 (which it shouldn't),
(1000*4503599627370496 + 1)/4503599627370496 is a perfectly reasonable
fraction to work with. Ugly to the human eye perhaps, but if your PC is
grinding to a halt on such a fraction, maybe you should take this as a
sign that 64K *isn't* enough memory for everybody.

*wink*
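The contrast can be checked directly; note that 4503599627370496 is 2**52,
the reciprocal of the double-precision machine epsilon:

```python
# Floats silently drop an increment of eps at magnitude 1000;
# the equivalent fraction keeps it exactly.
from fractions import Fraction

D = 4503599627370496   # 2**52, i.e. 1/eps for IEEE-754 doubles
eps = 1.0 / D          # 2.220446049250313e-16

print(1000.0 + eps == 1000.0)      # True: the float addition loses eps

f = Fraction(1000 * D + 1, D)
print(f > 1000)                    # True: the fraction did not lose it
print(f - 1000 == Fraction(1, D))  # True: the difference is exact
```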

[snip]

> The thing I don't like about rationals is that they give a false sense
> of security.  They are performing reasonably, and then you make a slight
> change or some circumstance changes slightly and suddenly they blow up.

Or, to put it another way, rationals and floats both are dangerous, but
in different ways. The performance of rationals can fall drastically, and
floating point calculations can suddenly become inaccurate. You make your

--
Steven

```