[Python-ideas] Decimal literal?

Terry Reedy tjreedy at udel.edu
Thu Dec 4 19:04:07 CET 2008


Chris Rebert wrote:

> It seems that decimal arithmetic is more intuitively correct than
> plain floating point and floating point's main (only?) advantage is
> speed, but it seems like premature optimization to favor speed over
> correctness by default at the language level.
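A quick illustration of the "intuitively correct" point (a minimal sketch using the stdlib decimal module):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so the sum drifts:
print(0.1 + 0.2)                 # 0.30000000000000004
print(0.1 + 0.2 == 0.3)          # False

# Decimal stores base-10 digits, so the expected identity holds:
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
```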

One could say the same about rational arithmetic, which has also been 
considered and so far rejected for fractional literals.  In fact, 
fractions are more accurate since there is never rounding unless one 
requests it.
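For example (a sketch with the stdlib fractions module): arithmetic on Fractions is exact at every step, and rounding occurs only when explicitly asked for, e.g. via limit_denominator:

```python
from fractions import Fraction

third = Fraction(1, 3)
# Exact rational arithmetic -- no rounding at any intermediate step:
print(third + third + third == 1)  # True

# Rounding happens only on request:
print(Fraction('3.141592653589793').limit_denominator(1000))  # 355/113
```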

There is an advantage of binary floats that you missed.  One can 
prototype float functions in Python and then translate them to C as 
necessary for real speed, and get the same results (using the same 
compiler on the same hardware).  But even prototypes need to run faster 
than molasses.  One can also use Python to glue together C (or Fortran) 
double routines without translating the numbers.  The Numeric module 
(now numpy) is over a decade old and was, I believe, Python's first 
killer app.

> Obviously, making decimal the default instead of float would be
> fraught with backward compatibility problems and thus is not presently
> feasible, but at the least for now Python could make it easier to use
> decimals and their associated nice arithmetic by having a literal
> syntax for them and making them built-in.

Ditto for fractions.

> So what do people think of:
> 1. making decimal.Decimal a built-in type, named "decimal" (or "dec"
> if that's too long?)
> 2. adding a literal syntax for decimals; I'd naively suggest a 'd'
> suffix to the float literal syntax (which was suggested in the brief
> aforementioned thread)

I would just as soon do the same for fractions.Fraction, perhaps 1 f/ 2 
or 1///2.  Even with decimal literals, the functions would remain in the 
importable module, just as with math and cmath.
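To make the proposal concrete: a hypothetical literal such as 1///2 (the syntax above is only a suggestion) would presumably be sugar for what one must spell out today with the constructor:

```python
from fractions import Fraction

# What a literal like 1///2 might desugar to (hypothetical syntax):
half = Fraction(1, 2)
print(half + Fraction(1, 3))  # 5/6
```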

> 3. (in Python 4.0/Python 4000) making decimal the default instead of
> float, with floats instead requiring a 'f' suffix

Decimal is not just a decimal arithmetic module.  It implements and will 
track a particular complex, specialized, possibly changeable standard 
controlled by IBM, which already has a few crazy quirks present for 
commercial rather than technical reasons.  This is fine for an add-on 
class but not, in my opinion, for Python's default fraction arithmetic. 
If Python's developers did consider replacing floats in that role, I 
would prefer either fractions or a much simplified decimal type designed 
by us for general purpose needs.

Terry Jan Reedy
