[Python-ideas] Python3.3 Decimal Library Released

Chris Angelico rosuav at gmail.com
Tue Mar 4 00:42:57 CET 2014


On Tue, Mar 4, 2014 at 7:44 AM, David Mertz <mertz at gnosis.cx> wrote:
> Taking a look at the documentation for cdecimal itself (now part of Python
> 3.3) at http://www.bytereef.org/mpdecimal/benchmarks.html, it looks like for
> basic add/mul operations that don't exceed the precision of floating point,
> FP is more than twice as fast as optimized decimals.  Of course, where the
> precision needed is more than FP handles at all, it's simply a choice
> between calculating a quantity and not doing so.
>
> This suggests to me a rather strong argument against making decimals the
> *default* numeric datatype.  However, it *does* still remain overly
> cumbersome to create decimals, and a literal notation for them would be very
> welcomed.
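
For concreteness, here's what the "cumbersome" part looks like today
(standard library only, nothing hypothetical):

from decimal import Decimal

x = 0.1 + 0.2                        # 0.30000000000000004 (binary rounding)
d = Decimal("0.1") + Decimal("0.2")  # Decimal('0.3'), exact
oops = Decimal(0.1)   # captures the float literal's error:
# Decimal('0.1000000000000000055511151231257827021181583404541015625')
# so the string form is effectively mandatory for exact values.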

You could probably make the same performance argument against making
Unicode the default string datatype. But a stronger argument is that
the default string should be the one that does the right thing with
text. As of Python 3, that's the case. And the default integer type
handles arbitrarily sized integers (although Py2 went most of the way
there by having automatic int-to-long promotion). It's reasonable to
suggest that
the default non-integer numeric type should also simply do the right
thing.
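
The int point is easy to demonstrate with today's behaviour (no
proposal involved here):

# Python 3 ints never overflow; they grow as needed.
big = 2 ** 100
print(big)        # 1267650600228229401496703205376
print(big * big)  # still exact -- no wraparound
# Py2 got most of the way there: machine ints auto-promoted to long.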

It's a trade-off, though, and for most people, float is sufficient. Is
it worth the cost of going decimal everywhere? I want to first see a
decimal literal notation (e.g. 1.0 == float("1.0") and 1.0d ==
Decimal("1.0"), as has been suggested a few times), and then a
taggable float marker (1.0f == float("1.0")); then there can be
consideration of a __future__ directive to change the default, which
would let people try it out and see how much performance suffers.
Maybe it'd be worth it for the accuracy. Maybe we'd lose less time
processing than we save answering the "why does this do weird things
sometimes" questions.
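
To make that concrete, here's a sketch of the intended semantics --
the d and f suffixes are hypothetical syntax, so the right-hand sides
have to be spelled out explicitly today:

from decimal import Decimal

# Hypothetical literal forms and their intended meanings:
#   1.0   -> float("1.0")     (today's default; Decimal after the switch)
#   1.0f  -> float("1.0")     (explicitly binary float)
#   1.0d  -> Decimal("1.0")   (explicitly exact decimal)

assert 0.1 + 0.2 != 0.3                    # the "weird things" question
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")  # exact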

ChrisA
