[Python-Dev] Adventures with Decimal
Tim Peters
tim.peters at gmail.com
Fri May 20 07:15:02 CEST 2005
[Guido van Rossum]
> I know I should stay out of here,
Hey, it's still your language <wink>.
> but isn't Decimal() with a string literal as argument a rare
> case (except in examples)? It's like float() with a string
> argument -- while you *can* write float("1.01"), nobody does
> that. What people do all the time is parse a number out
> of some larger context into a string, and then convert the
> string to a float by passing it to float(). I assume that most
> uses of the Decimal() constructor will be similar.
I think that's right. For example, currency exchange rates and stock
prices are generally transmitted as decimal strings now, and those
will get fed to a Decimal constructor.
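For concreteness, a minimal sketch of that path (the quote dict and its
field names are just made up for illustration):

    from decimal import Decimal

    # A quote as it might arrive from a feed -- the rate is already a
    # decimal string, so it goes straight to the constructor.
    quote = {"pair": "EUR/USD", "rate": "1.2734"}

    rate = Decimal(quote["rate"])        # string -> Decimal, no float round-trip
    print(rate * Decimal("1000.00"))     # 1273.400000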
OTOH, in scientific computing it's common to specify literals to very
high precision (like 40 decimal digits). Things like pi, e, sqrt(2),
tables of canned numeric quadrature points, canned coefficients for
polynomial approximations of special functions, etc. The authors
don't expect "to get" all they specify; what they expect is that
various compilers on various platforms will give them as much
precision as they're capable of using efficiently. Rounding is
expected then, and indeed pragmatically necessary (carrying precision
beyond that natively supported comes with high runtime costs -- and
that can be equally true of Decimal literals carried with digits
beyond context precision: the standard requires that results be
computed "as if to infinite precision then rounded once" using _all_
digits in the inputs).
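To make that concrete, here's how the decimal module as shipped behaves
today: the constructor keeps every digit you give it (that's exactly the
behavior under discussion), but each arithmetic result is rounded once
to context precision, computed as if from the exact, unrounded inputs:

    from decimal import Decimal, getcontext

    getcontext().prec = 28   # the default context precision

    # A "scientific literal": pi to 40 significant digits.
    PI_40 = Decimal("3.141592653589793238462643383279502884197")

    print(PI_40)      # today, all 40 digits survive the constructor
    print(PI_40 * 2)  # 6.283185307179586476925286767 -- rounded once to 28 digits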
> In that case, it makes total sense to me that the context's
> precision should be used, and if the parsed string contains
> an insane number of digits, it will be rounded.
That's the IBM standard's intent (and mandatory in its string->float operation).
> I guess the counter-argument is that because we don't have
> Decimal literals, Decimal("12345") is used as a pseudo-literal,
> so it actually occurs more frequently than float("12345"). Sure.
> But the same argument applies: if I write a floating point literal
> in Python (or C, or Java, or any other language) with an insane
> number of digits, it will be rounded.
Or segfault <0.9 wink>.
> So, together with the 28-digit default precision, I'm fine with
> changing the constructor to use the context by default. If you
> want all the precision given in the string, even if it's a million
> digits, set the precision to the length of the string before you
> start; that's a decent upper bound. :-)
That is indeed the intended way to do it. Note that this also applies
to integers passed to a Decimal constructor.
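A sketch of that recipe; it uses Context.create_decimal, which already
applies context precision, so it shows the effect the proposed
constructor behavior would have (the 40-digit string is just an example
value):

    import decimal

    s = "3.141592653589793238462643383279502884197"   # 40 significant digits

    ctx = decimal.Context(prec=len(s))   # the string length is a decent upper bound
    print(ctx.create_decimal(s))         # rounds to ctx.prec -- here, nothing is lost

    n = 10**40 + 1                       # an integer with more digits than the default 28
    ctx.prec = len(str(n))
    print(ctx.create_decimal(n))         # again, every digit preserved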
Maybe it's time to talk about an unbounded rational type again <ducks>.