[Python-ideas] Decimal literal?
Raymond Hettinger
python at rcn.com
Fri Dec 5 05:59:54 CET 2008
>> The notion that decimal is more 'accurate' than float needs a lot of
>> qualification. Yes, it is intended to give *exactly* the answer to various
>> financial calculations that different jurisdictions mandate, but that is a
>> rather specialized meaning of 'accurate'.
>
> You've said what I mean better than I could. The float implementation
> is more than good enough for almost all applications, and it seems
> ridiculous to me to slow them down for the precious few that need more
> precision (and, at that, just don't want to type quite as much).
While we're mincing words, I would state the case differently.
Neither "precision" or "accuracy" captures the essential difference between
binary and decimal floating point. It is all about what is "exactly representable".
The main reason decimal is good for financial apps is that the numbers of interest
are exactly representable in decimal floating point but not in binary floating point.
In a financial app, it can matter that 1.10 is exact rather than the nearest
value representable in binary floating point, 0x1.199999999999ap+0.
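To make that concrete, here is a quick sketch with the standard decimal
module (the printed values are what current CPython produces):

    from decimal import Decimal

    # The binary float literal 1.1 is stored as the nearest binary
    # fraction; Decimal(float) exposes its exact stored value.
    print((1.1).hex())    # 0x1.199999999999ap+0
    print(Decimal(1.1))   # 1.100000000000000088817841970012523233890533447265625

    # A Decimal built from the string '1.10' is exactly 1.10.
    print(Decimal('1.10'))          # 1.10
    print(Decimal('1.10') * 3)      # 3.30
    print(0.1 + 0.1 + 0.1 == 0.3)   # False in binary floating point
    print(Decimal('0.1') * 3 == Decimal('0.3'))  # True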
Of course, there are other differences like control over rounding and
variable precision, but the main story is about what is exactly representable.
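For completeness, a small sketch of those other knobs, again using the
stdlib decimal module:

    from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN, localcontext

    # Variable precision: the active context controls how many
    # significant digits a result carries.
    with localcontext() as ctx:
        ctx.prec = 6
        print(Decimal(1) / Decimal(7))   # 0.142857
    with localcontext() as ctx:
        ctx.prec = 28
        print(Decimal(1) / Decimal(7))   # 0.1428571428571428571428571429

    # Control over rounding: quantize to cents with an explicit mode.
    amount = Decimal('2.675')
    print(amount.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN))  # 2.68
    print(amount.quantize(Decimal('0.01'), rounding=ROUND_DOWN))       # 2.67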
Raymond