[Python-Dev] Adventures with Decimal

Guido van Rossum gvanrossum at gmail.com
Fri May 20 06:53:46 CEST 2005


I know I should stay out of here, but isn't Decimal() with a string
literal as argument a rare case (except in examples)? It's like
float() with a string argument -- while you *can* write float("1.01"),
nobody does that. What people do all the time is parse a number out of
some larger context into a string, and then convert the string to a
float by passing it to float(). I assume that most uses of the
Decimal() constructor will be similar. In that case, it makes total
sense to me that the context's precision should be used, and if the
parsed string contains an insane number of digits, it will be rounded.
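
For concreteness, a quick sketch against the decimal module's current
API: the Decimal() constructor keeps every digit of the string, while
building the value through the context (Context.create_decimal())
rounds it to the context's precision, which is the default behavior
I'm arguing for here:

    from decimal import Decimal, getcontext

    ctx = getcontext()
    ctx.prec = 28                    # the default precision

    s = "3." + "1" * 50              # far more digits than the context allows

    exact = Decimal(s)               # constructor: all 50 fractional digits survive
    rounded = ctx.create_decimal(s)  # via the context: rounded to 28 significant digits

    print(exact)    # 3.1111... with all 50 fractional digits intact
    print(rounded)  # 3.111111111111111111111111111 (28 significant digits)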

I guess the counter-argument is that because we don't have Decimal
literals, Decimal("12345") is used as a pseudo-literal, so it actually
occurs more frequently than float("12345"). Sure. But the same
argument applies: if I write a floating point literal in Python (or C,
or Java, or any other language) with an insane number of digits, it
will be rounded.
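
This is easy to see at the prompt; digits beyond what a binary double
can hold are silently dropped, whether they arrive in a string or in a
literal:

    >>> float("1.01000000000000000000000000000000000001") == float("1.01")
    True
    >>> 1.01000000000000000000000000000000000001 == 1.01
    True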

So, together with the 28-digit default precision, I'm fine with
changing the constructor to use the context by default. If you want
all the precision given in the string, even if it's a million digits,
set the precision to the length of the string before you start; that's
a decent upper bound. :-)
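
Something along these lines (shown with Context.create_decimal(),
which already honors the context today); the length of the string is
a sloppy but safe upper bound on the number of significant digits it
can contain:

    from decimal import localcontext

    s = "1." + "7" * 100                  # imagine a million digits here
    with localcontext() as ctx:
        ctx.prec = len(s)                 # generous upper bound on the digits
        d = ctx.create_decimal(s)
    assert str(d) == s                    # nothing was rounded away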

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)