inverse of a matrix with Fraction entries
casevh at gmail.com
Sun Nov 28 00:44:38 CET 2010
On Nov 27, 3:08 pm, Steven D'Aprano <steve+comp.lang.pyt... at pearwood.info> wrote:
> On Fri, 26 Nov 2010 19:21:47 -0800, casevh wrote:
> > On Nov 26, 2:11 pm, Steven D'Aprano <steve+comp.lang.pyt... at pearwood.info> wrote:
> >> On Fri, 26 Nov 2010 12:54:12 -0800, John Nagle wrote:
> >> > For ordinary number crunching,
> >> > rational arithmetic is completely inappropriate.
> >> Why?
> >> --
> >> Steven
> > As you perform repeated calculations with rationals, the size of the
> > values (usually) keeps growing. (Where size is measured as the lengths
> > of the numerator and denominator.) The speed and memory requirements
> > are no longer constant.
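The growth described above is easy to demonstrate with Python's fractions
module. A minimal sketch (the iteration — Newton's method for sqrt(2) — and
the digit counting are my own illustration, not from the thread):

```python
from fractions import Fraction

# Iterate x -> x/2 + 1/x (Newton's method for sqrt(2)) with exact rationals.
# The numerator and denominator sizes roughly double at every step, so each
# iteration costs more time and memory than the one before it.
x = Fraction(3, 2)
digits = []
for _ in range(6):
    x = x / 2 + 1 / x
    digits.append(len(str(x.numerator)))
print(digits)  # digit counts grow rapidly
```

Six exact Newton steps already produce numerators dozens of digits long,
even though the value itself never strays far from sqrt(2).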
> You're not comparing apples with apples. You're comparing arbitrary
> precision calculations with fixed precision calculations. If you want
> fixed memory requirements, you should use fixed-precision rationals. Most
> rational implementations I know of have a method for limiting the
> denominator to a maximum value (even if not necessarily convenient).
> On the other hand, if you want infinite precision, there are floating
> point implementations that offer that too. How well do you think they
> perform relative to rationals? (Hint: what are the memory requirements
> for an infinite precision binary float equal to fraction(1, 3)? *wink*)
> Forth originally didn't offer floats, because there is nothing you can do
> with floats that can't be done slightly less conveniently but more
> accurately with a pair of integers treated as a rational. Floats, after
> all, *are* rationals, where the denominator is implied rather than
> explicit.
> I suspect that if rational arithmetic had been given half the attention
> that floating point arithmetic has been given, most of the performance
> difficulties would be significantly reduced. Perhaps not entirely
> eliminated, but I would expect that for a fixed precision calculation, we
> should have equivalent big-Oh behaviour, differing only in the
> multiplicative constant.
> In any case, the real lesson of your benchmark is that infinite precision
> is quite costly, no matter how you implement it :)
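The fraction(1, 3) point above can be checked directly in Python; a small
sketch (variable names are mine):

```python
from fractions import Fraction

third = Fraction(1, 3)          # exact: just two small integers
assert third * 3 == 1           # no rounding error at all

# A binary float can only approximate 1/3: its denominator must be a
# power of two, and no power of two is divisible by 3.
approx = Fraction(1 / 3)        # the exact value of the float 0.333...
print(approx.denominator)       # a large power of two, not 3
assert approx != third
```

An exact binary representation of 1/3 would need an infinitely long
significand, which is the point of the wink above.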
I think most users are expecting infinite precision when they use
rationals. Trying to explain limited precision rational arithmetic
might be interesting.
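For what it's worth, Python's Fraction already exposes the
denominator-limiting operation mentioned above, via limit_denominator():

```python
from fractions import Fraction

# limit_denominator(n) returns the closest fraction whose denominator
# is at most n -- a simple form of limited-precision rational rounding.
pi = Fraction(3141592653589793, 1000000000000000)
print(pi.limit_denominator(10))      # 22/7
print(pi.limit_denominator(1000))    # 355/113
```

Note how the rounded results are the classic "simple" approximations of
pi, which is exactly the kind of behaviour users would need explained.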
Knuth described a "floating-slash" arithmetic that used a fixed number
of bits for both the numerator and the denominator, together with a
rounding algorithm that prefers "simple" fractions over more complex
ones. IIRC, the original paper was from the 1960s.
More information about the Python-list mailing list