[Python-Dev] C Decimal - is there any interest?

Mark Dickinson dickinsm at gmail.com
Tue Oct 16 18:24:35 CEST 2007

On 10/16/07, Mateusz Rukowicz <mateusz.rukowicz at vp.pl> wrote:
> Well, I am pretty sure that addition works in linear time in the Python
> version :>.

Unfortunately not.  Here are some timings from my laptop:

>>> from timeit import Timer
>>> Timer("x+x", "from decimal import Decimal; x = ...
[four Timer("x+x", "from decimal import Decimal; x = ...") timings at
increasing precisions; the lines are truncated in the archive]
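The truncated lines above were presumably timing additions at increasing precisions. A sketch of that kind of measurement (the digit counts and the `time_decimal_add` helper are my guesses, not the original code; note that on a modern CPython, where decimal is the C _decimal module, the quadratic growth described below will not reproduce):

```python
from timeit import Timer

def time_decimal_add(ndigits, number=10):
    # Time one x+x addition of two ndigits-digit Decimals, in seconds.
    setup = f"from decimal import Decimal; x = Decimal('9' * {ndigits})"
    return min(Timer("x+x", setup).repeat(3, number)) / number

for n in (5000, 10000, 20000, 40000):
    print(f"{n} digits: {time_decimal_add(n):.6f} s per addition")
```

With the pure-Python decimal.py of that era, doubling the digit count roughly quadruples the time per addition.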

I'm almost sure that adding 40000-digit numbers together is not what
Decimal was intended to be used for, but it still seems unreasonable
that it takes almost 5 seconds to do such an addition.  The reason for
the quadratic behaviour is that almost all the arithmetic routines in
decimal.py, at some point, convert the coefficient of their
argument(s) from a tuple of digits to a Python integer, and then do
the reverse conversion to get a Decimal result;  both of these
conversions (tuple of digits <-> integer) take time quadratic in the
size of the tuple/integer.   This means that multiplication of
Decimals is also quadratic time, even though it makes use of Python's
subquadratic Karatsuba multiplication.

The alternative would be to implement addition digit-by-digit in
decimal.py; this would be asymptotically linear, but much slower for
the low-precision (< 50 digits, say) decimals that almost everybody is
going to be using in practice---clearly not a good tradeoff.
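For concreteness, here is what such a digit-by-digit addition might look like, operating on tuples of digits (most significant first) the way decimal.py stores coefficients. This is a sketch of the rejected approach, not code from decimal.py: it runs in linear time, but every step is an interpreted Python bytecode, hence the large constant factor.

```python
# Digit-by-digit addition of two non-negative coefficients stored as
# tuples of digits, most significant digit first (a sketch, not
# decimal.py's actual code).

def add_digit_tuples(a, b):
    a, b = list(a)[::-1], list(b)[::-1]   # work least-significant-first
    if len(a) < len(b):
        a, b = b, a
    b += [0] * (len(a) - len(b))          # pad the shorter operand
    result, carry = [], 0
    for x, y in zip(a, b):
        carry, digit = divmod(x + y + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return tuple(result[::-1])

print(add_digit_tuples((9, 9, 9), (1,)))   # (1, 0, 0, 0)
```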

So one attraction of the C version of decimal is that with little
effort it gets the best of both worlds:  addition *is* just
digit-by-digit (well, okay, limb-by-limb) but since it's coded in C
it's fast enough for regular users.  So you get fast addition in the
usual case, with good asymptotics.
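The "limb-by-limb" idea can be illustrated even in Python: pack several decimal digits into each limb so that one machine addition handles many digits at once. The sketch below (hypothetical names; the C code would do this with C integer arrays, not Python lists) uses base 10**9, i.e. nine digits per limb:

```python
# Illustration of limb arithmetic: each limb is an int < 10**9, so one
# addition processes nine decimal digits -- the idea behind the C
# version's speed. Limbs are stored least significant first.

BASE = 10 ** 9  # nine decimal digits per limb

def to_limbs(n):
    limbs = []
    while True:
        n, r = divmod(n, BASE)
        limbs.append(r)
        if n == 0:
            return limbs

def add_limbs(a, b):
    if len(a) < len(b):
        a, b = b, a
    out, carry = [], 0
    for i, x in enumerate(a):
        s = x + (b[i] if i < len(b) else 0) + carry
        carry, limb = divmod(s, BASE)
        out.append(limb)
    if carry:
        out.append(carry)
    return out

print(add_limbs(to_limbs(999_999_999), to_limbs(1)))   # [0, 1]
```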

> Speaking of performance in high-precision computation - a year ago that
> was my aim, to make high-precision computation fast, but now I see it
> differently. That is - I am not really convinced that adding 2k+
> lines just to make multiplication fast (some FFT-like multiplication) is
> worth it - it depends on how many people would like to perform
> computations with prec around 100k+ digits ;).

I quite agree:  people who want high-speed high-precision arithmetic
should probably be using some binary floating-point package like GMP
or MPFR.  It's difficult to believe that there's much of a market for
high-precision decimal floating point.

