Re: [Python-Dev] Decimal floats as default (was: discussion about PEP239 and 240)

On 6/22/05, Michael McLay <mclay@python.net> wrote:
This idea is dead on arrival. The change would break many applications and modules. A successful proposal cannot break backwards compatibility. Adding a dpython interpreter to the current code base is one possibility.
Is there actually much code around that relies on the particular precision of 32- or 64-bit binary floats for arithmetic, and ceases working when higher precision is available? Note that functions like struct.pack would be unaffected.

If compatibility is a problem, this could still be a possibility for Python 3.0. In either case, compatibility can be ensured by allowing both n-digit decimal and hardware binary precision for floats, settable via a float context. Then the backwards-compatible binary mode can be the default, and "decimal mode" can be set with one line of code. d-suffixed literals would create floats with decimal precision. (A rough sketch of what such a float context might look like follows below.)

There is the alternative of providing decimal literals by using separate decimal and binary float base types, but in my eyes this would be redundant. The primary use of binary floats is performance and compatibility, and both can be achieved with my proposal without sacrificing the simplicity and elegance of having a single type to represent non-integral numbers. It makes more sense to extend the float type with the power and versatility of the decimal module than to have a special type side by side with a default type that is less capable.

Fredrik
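A minimal sketch of how this could look from Python code. Only the decimal part runs today; the float-context and d-suffix lines are hypothetical syntax for the proposal and are shown only as comments:

    from decimal import Decimal, getcontext

    # The decimal module already has a settable arithmetic context:
    getcontext().prec = 50
    x = Decimal(1) / Decimal(7)        # carries 50 significant decimal digits

    # The proposal would put the same kind of knob on the built-in float
    # (the names and literal syntax below are hypothetical, not existing Python):
    #
    #   floatcontext().mode = "binary"    # default: hardware doubles, as today
    #   floatcontext().mode = "decimal"   # switch modes with one line of code
    #   floatcontext().prec = 50
    #   y = 1.5d                          # d-suffix: decimal-precision literal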

Fredrik> Is there actually much code around that relies on the
Fredrik> particular precision of 32- or 64-bit binary floats for
Fredrik> arithmetic, and ceases working when higher precision is
Fredrik> available?

Umm, yeah... The path you take from one or more string literals representing real numbers through a series of calculations and ending up in a hardware double-precision floating point number is probably going to be different at different precisions.

    >>> x = Decimal("1.0000000000001")
    >>> y = Decimal("1.000000000000024567")
    >>> x
    Decimal("1.0000000000001")
    >>> y
    Decimal("1.000000000000024567")
    >>> float(x)
    1.0000000000000999
    >>> float(y)
    1.0000000000000246
    >>> x/y
    Decimal("1.000000000000075432999999998")
    >>> float(x)/float(y)
    1.0000000000000753
    >>> float(x/y)
    1.0000000000000755

Performance matters too:

    % timeit.py -s 'from decimal import Decimal ; x = Decimal("1.0000000000001") ; y = Decimal("1.000000000000024567")' 'x/y'
    1000 loops, best of 3: 1.39e+03 usec per loop
    % timeit.py -s 'from decimal import Decimal ; x = float(Decimal("1.0000000000001")) ; y = float(Decimal("1.000000000000024567"))' 'x/y'
    1000000 loops, best of 3: 0.583 usec per loop

I imagine a lot of people would be very unhappy if their fp calculations suddenly began taking 2000x longer, even if their algorithms didn't break. (For all I know, Raymond might have a C version of Decimal waiting for an unsuspecting straight man to complain about Decimal's performance and give him a chance to announce it.)

If nothing else, extension module code that executes

    f = PyFloat_AsDouble(o);

or

    if (PyFloat_Check(o)) { ... }

would either have to change or those functions would have to be rewritten to accept Decimal objects and convert them to doubles (probably silently, because otherwise there would be so many warnings). For examples of packages that might make large use of these functions, take a look at Numeric, SciPy, ScientificPython, MayaVi, and any other package that does lots of floating point arithmetic.

Like Michael wrote, I think this idea is DOA.

Skip

On 6/22/05, Skip Montanaro <skip@pobox.com> wrote:
If nothing else, extension module code that executes
f = PyFloat_AsDouble(o);
or
if (PyFloat_Check(o)) { ... }
would either have to change or those functions would have to be rewritten to accept Decimal objects and convert them to doubles (probably silently, because otherwise there would be so many warnings).
Silent conversion was the idea.
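Concretely, a rough Python-level model of what that silent conversion would mean; as_double here is a hypothetical stand-in for a PyFloat_AsDouble that also accepted Decimal, not an actual API:

    from decimal import Decimal

    def as_double(obj):
        # Hypothetical model: anything that is already a binary float passes
        # through; a Decimal is rounded (silently, possibly lossily) to the
        # nearest C double; everything else is rejected as before.
        if isinstance(obj, float):
            return obj
        if isinstance(obj, Decimal):
            return float(obj)
        raise TypeError("a float is required")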
Like Michael wrote, I think this idea is DOA.
Granted, then. However, keeping binary as the default does not kill the other idea in my proposal, which is to extend the float type to cover decimals instead of having a separate decimal type. I consider this a more important issue (contradicting the thread title :-) than whether "d" should be needed to specify decimal precision.

Fredrik
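One way to see the friction of having a special type side by side with the default type: with today's decimal module the two kinds of non-integral numbers deliberately do not mix in arithmetic, so the programmer has to convert explicitly. A small sketch of the current behaviour:

    from decimal import Decimal

    a = Decimal("1.1")
    b = 2.2

    # Decimal and float refuse to mix in arithmetic:
    try:
        a + b
    except TypeError:
        # the explicit workaround, round-tripping the float through a string
        c = a + Decimal(str(b))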