Nick Maclaren wrote:
>> The decimal spec was designed to encompass both floating-point and
>> fixed-point arithmetic.
>
> Well, maybe. There are other approaches, too, and Decimal has its
> problems with that. In particular, when people need precisely
> defined decimal rounding, they ALWAYS need fixed-point and not
> floating-point.

Hogwash. The only issues with decimal are ease-of-use and speed.
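
The fixed-point rounding being argued about is easy to demonstrate with
Python's decimal module; a minimal sketch (the example values and the
choice of half-even rounding are mine, not from the thread):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Fixed-point: quantize pins the result to exactly two decimal places,
# with an explicitly chosen rounding rule.
price = Decimal("2.675")
rounded = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(rounded)            # 2.68

# Binary floating-point cannot represent 2.675 exactly, so the nearest
# binary value is slightly below the halfway point and rounds down.
print(round(2.675, 2))    # 2.67
```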

> (Note that I don't care all that much about round(), but I do think we
> want to canonicalize Decimal() for all purposes in standard Python where
> people care about accuracy. Additional float features can go into [...])

Really? That sounds like dogma, not science.
Decimal doesn't even help people who care about accuracy. At most
(and with the reservation mentioned above), it means that you can
map external decimal formats to internal ones without loss of
precision. Not a big deal, as there ARE no requirements for doing
that for floating-point, and there are plenty of other solutions for
that.
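
The lossless mapping in question is visible directly at the Python prompt;
a small sketch (0.1 is just a convenient example literal):

```python
from decimal import Decimal

# A decimal string read in through Decimal survives unchanged ...
exact = Decimal("0.1")
print(exact)        # 0.1

# ... while the same literal read in as a binary float first becomes the
# nearest representable binary value, which Decimal then exposes exactly.
via_float = Decimal(0.1)
print(via_float)    # 0.1000000000000000055511151231257827...
```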
People who care about the accuracy of calculations prefer binary,
as it is a more accurate model. That isn't a big deal, either.
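
One way to make the "more accurate model" claim concrete: for formats of
the same width, binary's worst-case relative representation error is
smaller. The sketch below compares IEEE 754 binary64 (53 significand bits)
against decimal64 (16 digits); the comparison, not the exact constants, is
the point being illustrated:

```python
# Worst-case relative representation error (unit roundoff) for a
# p-digit base-b floating-point format is 0.5 * b**(1 - p).
binary64_err = 0.5 * 2.0 ** (1 - 53)    # ~1.1e-16
decimal64_err = 0.5 * 10.0 ** (1 - 16)  # 5.0e-16

# For the same 64 bits of storage, binary64's worst case is tighter.
print(binary64_err < decimal64_err)     # True
```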