[Python-Dev] Decimal floats as default (was: discussion about PEP 239 and 240)

Fredrik Johansson fredrik.johansson at gmail.com
Wed Jun 22 13:41:39 CEST 2005


Hi all,

raymond.hettinger at verizon.net wrote on Fri Jun 17 10:36:01 2005:

> The future direction of the decimal module likely entails literals in
> the form of 123.45d with binary floats continuing to have the form
> 123.45.  This conflicts with the rational literal proposal of having
> 123.45 interpreted as 123 + 45/100.

Decimal literals are a wonderful idea, especially if that means
decimals and floats can be made to interact with each other directly.
But why not go one step further, making 123.45 decimal and 123.45b
binary? In fact, I think a good case can be made for replacing the
default float type with a decimal type.
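
To make the "direct interaction" point concrete, here is a minimal
sketch of the status quo (written in current Python syntax; the
string-based workaround at the end is just one illustrative fix):

    from decimal import Decimal

    d = Decimal("123.45")
    try:
        d + 0.1                       # Decimal and binary float don't mix
    except TypeError:
        print("no direct arithmetic between Decimal and float")

    # today's workaround: an explicit conversion through a string
    print(d + Decimal(str(0.1)))      # 123.55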

Decimal floats make life easier for humans accustomed to base 10, so
they should be the easiest type to reach for. This is particularly
relevant given Python's relatively large user base of
"non-programmers", but the point applies to many domains. Easy-to-use,
correct rounding is essential in many applications that process
human-readable data (round() would certainly be more meaningful if it
operated on decimals). Not to mention that arbitrary-precision
arithmetic simply makes the language more powerful.
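
A small sketch of the rounding point, using the decimal API as it
exists today (ROUND_HALF_UP is the "school" rounding people expect):

    from decimal import Decimal, ROUND_HALF_UP

    # 2.675 is stored in binary as 2.67499999999999982..., so the
    # binary float rounds the "wrong" way from a human point of view:
    print(round(2.675, 2))                    # 2.67, not 2.68

    # the decimal version rounds the way people expect:
    print(Decimal("2.675").quantize(Decimal("0.01"),
                                    rounding=ROUND_HALF_UP))   # 2.68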

Rationals are inappropriate except in highly specialized applications,
because their numerators and denominators can grow without bound, so
neither their size nor their processing time is constant; decimals
would only slow programs down by a (usually small) constant factor. I
suspect most Python programs neither demand the performance hardware
floats deliver nor require the limited precision and particular
behaviour of IEEE 754 binary floats (the need for machine-precision
integers might be greater -- I've written "& 0xffffffffL" many times).
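
Two small sketches of that contrast (the fractions module is used
purely for illustration -- it postdates this discussion):

    from fractions import Fraction

    # rational arithmetic: the denominator accumulates prime factors,
    # so the representation grows as the computation proceeds
    x = Fraction(0)
    for n in range(1, 50):
        x += Fraction(1, n)
    print(len(str(x.denominator)))    # about 20 digits already

    # machine-precision integers: the masking idiom emulates 32-bit
    # wraparound on top of arbitrary-precision ints
    print((0xffffffff + 1) & 0xffffffff)   # 0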

Changing to decimal would not significantly affect users who really
need good numeric performance, either. The C interface would convert
Python floats to C doubles as usual, and numarray would keep operating
on C doubles as it does today. Additionally, "hardware" could be a
special value for the precision in the decimal (future float) context;
decimal floats could then be phased in without breaking compatibility
by leaving hardware as the default precision.
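
The machinery for this is already in place: the decimal context
carries an adjustable precision, and "hardware" would simply be one
more setting (hypothetical, proposed here) alongside the numeric ones:

    from decimal import Decimal, getcontext

    getcontext().prec = 50            # ask for 50 significant digits
    print(Decimal(1) / Decimal(7))    # 0.14285714285714285714...

    getcontext().prec = 28            # restore the module default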

123.45d is better than Decimal("123.45"), but appending "d" to specify
a quantity with high precision is as illogical as appending "L" to an
integer value to bypass the machine word size limit. I think the step
from hardware floats to arbitrary-precision decimals would be as
natural as going from short to unlimited-size integers.
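
That precedent is already visible in the language: integers outgrow
the machine word automatically, with no suffix required:

    print(2 ** 31 - 1)    # 2147483647, fits in a 32-bit machine word
    print(2 ** 100)       # promoted to arbitrary precision, no "L" needed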

I've also thought about the implications for complex numbers and the
math library, but I'll stop writing here to listen for feedback, in
case there is some obvious technical flaw or people just don't like
the idea :-) Sorry if this has been discussed and/or rejected before
(this is my first post to python-dev, though I've occasionally read
the list since I started using Python extensively about two years
ago).

Fredrik Johansson

