
With Python 3.0 being released, and going over its many changes, I was reminded that decimal numbers (decimal.Decimal) are still relegated to a library and aren't built-in. Has there been any thought to adding decimal literals and making decimal a built-in type? I googled but was unable to locate any discussion of this exact issue. The closest I could find was a suggestion about making decimal the default instead of float: http://mail.python.org/pipermail/python-ideas/2008-May/001565.html

It seems that decimal arithmetic is more intuitively correct than plain floating point, and floating point's main (only?) advantage is speed, but it seems like premature optimization to favor speed over correctness by default at the language level. Obviously, making decimal the default instead of float would be fraught with backward-compatibility problems and thus is not presently feasible, but at the least, for now, Python could make it easier to use decimals and their associated nice arithmetic by having a literal syntax for them and making them built-in.

So what do people think of:
1. making decimal.Decimal a built-in type, named "decimal" (or "dec" if that's too long?)
2. adding a literal syntax for decimals; I'd naively suggest a 'd' suffix to the float literal syntax (which was suggested in the brief aforementioned thread)
3. (in Python 4.0/Python 4000) making decimal the default instead of float, with floats instead requiring an 'f' suffix

Obviously #1 & #2 would be shooting for Python 3.1 or later.

Cheers, Chris

P.S. Yay for the long-awaited release of Python 3.0! Better than can be said for Perl 6.

-- Follow the path of the Iguana... http://rebertia.com

From: "Chris Rebert" <clp@rebertia.com>
With Python 3.0 being released, and going over its many changes, I was reminded that decimal numbers (decimal.Decimal) are still relegated to a library and aren't built-in.
Has there been any thought to adding decimal literals and making decimal a built-in type?
It's a non-starter until there is a fast, clean C implementation of decimal. The current module is hundreds of times slower than binary floats. Raymond

On Thu, Dec 4, 2008 at 12:00 AM, Raymond Hettinger <python@rcn.com> wrote:
From: "Chris Rebert" <clp@rebertia.com>
With Python 3.0 being released, and going over its many changes, I was reminded that decimal numbers (decimal.Decimal) are still relegated to a library and aren't built-in.
Has there been any thought to adding decimal literals and making decimal a built-in type?
It's a non-starter until there is a fast, clean C implementation of decimal. The current module is hundreds of times slower than binary floats.
Does performance matter quite *that* critically in most everyday programs? If people need such ruthless speed, they can use floats and accept the consequences, or use another language entirely (e.g. C, C++, OCaml), as Python would be too slow for them even as it currently is. We're talking about giving people the option to explicitly, in a less cumbersome way, make that choice of correctness over performance.

If slowing interpreter startup is what worries you, a 'from __future__ import' directive could be required and the timeline for full built-in-ness pushed back. Also, by "built-in" I didn't mean to necessarily imply "written in C", but rather "present in the builtin namespace and available by default". That said, there appears to be decNumber (http://speleotrove.com/decimal/#decNumber), an ANSI C implementation of the General Decimal Arithmetic spec to which decimal.Decimal adheres. At least there's a place to start.

Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com
Raymond

Chris Rebert writes:
Does performance matter quite *that* critically in most everyday programs?
Of course not. But that's the wrong question. Python is a *general-purpose* programming language, not an "everyday application where performance isn't critical programming language". There are plenty of applications that just cry out<wink> for a Python implementation where it does matter.

On Thu, Dec 4, 2008 at 12:52 AM, Stephen J. Turnbull <stephen@xemacs.org> wrote:
Chris Rebert writes:
Does performance matter quite *that* critically in most everyday programs?
Of course not. But that's the wrong question. Python is a *general-purpose* programming language, not an "everyday application where performance isn't critical programming language". There are plenty of applications that just cry out<wink> for a Python implementation where it does matter.
We're talking about adding a feature, not taking speed away. If anything, this would increase adoption of Python as people writing programs that use decimals extensively would be able to use decimals with greater ease. Speed freaks could still use floats; there's no change as far as they're concerned. Yes, people who need BOTH decimals AND maximum speed would still be left out, but let's take this one step at a time, and in a later step maybe we can fully satisfy such people. We wouldn't want the perfect long term (speedy built-in decimals) getting in the way of the pretty good near term (built-in decimals). Additionally, your argument can be turned on its head ;-) Consider:
Does perfect accuracy matter quite *that* critically in most everyday programs? Of course not. But that's the wrong question. Python is a *general-purpose* programming language, not an "everyday application where accuracy isn't critical programming language". There are plenty of applications that just cry out<wink> for a Python implementation where it does matter.
<grin> Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com

Chris Rebert writes:
We're talking about adding a feature, not taking speed away.
OK, that's reasonable. But adding features is expensive. BTW, don't listen to me, I've never done it. Listen to Raymond.
If anything, this would increase adoption of Python as people writing programs that use decimals extensively would be able to use decimals with greater ease.
Maybe. I don't see a huge advantage of a decimal literal over a plain 'import decimal'. I also think that most of the (easy) advantage to Decimal will accrue to people who *never* have to deal with measurement error: accountants. But oops! they don't need Decimal per se; they're perfectly happy with big integers. People who really *do* need Decimal are not going to be deterred by 16 characters (counting the newline<wink>); they're already into real pain.
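To spell out the big-integer approach, here is a minimal sketch (the amounts, tax rate, and helper name are made up purely for illustration): money kept as integer cents never touches binary fractions at all.

    # money as integer cents: ordinary int arithmetic stays exact
    price_cents = 1999                            # $19.99
    quantity = 3
    subtotal_cents = price_cents * quantity       # 5997
    tax_cents = (subtotal_cents * 8 + 50) // 100  # 8% tax, rounded to the nearest cent
    total_cents = subtotal_cents + tax_cents

    def fmt(cents):
        # format a non-negative integer number of cents as a dollar string
        return "$%d.%02d" % divmod(cents, 100)

    print(fmt(total_cents))                       # $64.77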
Additionally, your argument can be turned on its head ;-) Consider: Does perfect accuracy matter quite *that* critically in most everyday programs? Of course not. But that's the wrong question. Python is a *general-purpose* programming language, not an "everyday application where accuracy isn't critical programming language". There are plenty of applications that just cry out<wink> for a Python implementation where it does matter.
I think you've misspelled "precision".<wink> Improved accuracy cannot be achieved simply by adding a new number type.

Raymond Hettinger wrote:
It's a non-starter until there is a fast, clean C implementation of decimal. The current module is hundreds of times slower than binary floats.
If we are ever going to consider Cython for core development, the decimal module could be the first module that uses Cython. IMHO it's the perfect candidate for a proof of concept. Christian

From: "Aahz"
That's half-true. Most applications IME that manipulate numbers need to express zero frequently as initializers.
No doubt that's true. I was just pointing out that much of the utility of the decimal module is independent of whether literals are built into the parser. Also noted that it is a non-trivial exercise to get decimals fully integrated into the language. I would like to see both things happen, but it won't be easy. FWIW, when I write decimal code, I use a brief form for the constructor:

    from decimal import Decimal as D
    . . .
    balance = D(0)

From: "Christian Heimes" <lists@cheimes.de>
If we ever going to consider Cython for core development, the decimal module could be the first module that uses Cython. IMHO it's the perfect candidate for a proof of concept.
Certainly, Cython would be helpful. That being said, the decimal module is likely a poor candidate to show off Cython's capabilities. The current code is not set up in a way that translates well. Much better speed-ups could be had from Cython if the module were rewritten to use alternate data structures for decimal numbers and for contexts, and to let temporary numbers (accumulators) be mutated in-place. From: "Facundo Batista":
The best we can do *now* with Decimal, if we want it to be included as a literal *somewhen*, is to get it in C.
Well said. From: "Facundo Batista":
There're already some first steps in that direction, but *please* investigate that other path you're suggesting.
IMO, those efforts have been somewhat misdirected. They were going down the path of direct translation. Instead, there needs to be a pure implementation of the spec, using better data structures, with Python wrappers added separately. The first component needs to have its own efficient context objects and fast, temporary accumulators. The latter should match the current API. Raymond

2008/12/4 Raymond Hettinger <python@rcn.com>:
There're already some first steps in that direction, but *please* investigate that other path you're suggesting.
IMO, those efforts have been somewhat misdirected. They were going down the path of direct translation. Instead, there needs to be a pure implementation of the spec, using better data structures, with Python wrappers added separately. The first component needs to have its own efficient context objects and fast, temporary accumulators. The latter should match the current API.
I actually was talking about issue 2486, which is the first step to "slowly, but steadily, replace parts of Decimal from Python to C as needed." I should have been more explicit, sorry for the confusion. Regards, -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/

Coming in to the thread _way_ late, here's my $0.015:

Sure, it would be great to have an accurate and fast implementation of decimal/floating-point numbers active by default in the language. We don't have that yet. We have a fast implementation and we have an accurate one, and until we have both, there is a decision to be made: which one is easy to use (in builtins, has literals, etc.?), and which one is the "opt-in" implementation (needs a module import, needs a constructor)?

We've been dealing with roughly the same fast and sometimes-inaccurate floating-point implementation for what, almost 40 years of C programming so far. Given that there exist accurate implementations of decimal numbers (GMP, MAPM), why hasn't C moved to make one of these the "default" implementation? Whatever the answer, it seems to me that this sets a sort of precedent in programming that fast floating-point numbers are favored over accurate floating-point numbers. GMP is blindingly fast, and it isn't C's default. Decimal is, I think I saw someone mention, "hundreds of times slower" than the current float implementation. I think, until the decimal implementation approaches something like GMP's speed, there really isn't much point in even considering making it a default.

Now, to the question of a 'decimal literal': including support for something like '1.1d' requires that we include the decimal module in builtins. Now, I don't know that there's no way around this, but it seems like a slowdown for everyone just to let a few people type a bit less. -1

-- Cheers, Leif

On Thu, Dec 4, 2008 at 1:18 PM, Leif Walsh <leif.walsh@gmail.com> wrote:
Coming in to the thread _way_ late, here's my $0.015:
Sure, it would be great to have an accurate and fast implementation of decimal/floating point numbers active by default in the language. We don't have that yet. We have a fast implementation, and we have an accurate one, and until we have both, there is a decision to be made: which one is easy to use (in builtins, has literals, (etc.?)), and which one is the "opt-in" implementation (needs a module import, needs a constructor)?
We've been dealing with roughly the same fast and sometimes-inaccurate floating-point implementation for what, almost 40 years of C programming so far. Given that there exist accurate implementations of decimal numbers (GMP, MAPM), why hasn't C moved to make one of these the "default" implementation?
Whatever the answer, it seems to me that this sets a sort of precedent in programming that fast floating-point numbers are favored over accurate floating-point numbers. GMP is blindingly fast, and it isn't C's default. Decimal is, I think I saw someone mention "hundreds of times slower" than the current float implementation.
GMP may be blindingly fast for an arbitrary precision floating point implementation, but it's quite slow compared to hardware floating point. Even in hardware there's a temptation to optimize for single-precision and skip various IEEE 754 special cases that would slow things down. Performance really does count.

You're not going to find a broad solution here. Decimal is mildly more precise, but substantially slower. It's also less convenient for interacting with C code. Given the importance of C extensions to Python, interacting with C is the strongest argument here. It's not an elegant reason, but it's very practical.

Besides, any user WILL have to learn what floats do to their numbers, so you might as well make it obvious. If you really want to avoid it you should be using a symbolic math library instead. Personally, if I need a calculator I usually use Qalculate, rather than an interactive interpreter.

-- Adam Olsen, aka Rhamphoryncus

Leif Walsh wrote:
Coming in to the thread _way_ late, here's my $0.015:
Sure, it would be great to have an accurate and fast implementation of decimal/floating point numbers active by default in the language.
We have one by many definitions of 'accurate'. Being off by a few or even a hundred parts per quintillion is pretty good by some standards.
We don't have that yet.
I disagree.
We have a fast implementation, and we have an accurate one, and until we have both, there is a decision to be made:
The notion that decimal is more 'accurate' than float needs a lot of qualification. Yes, it is intended to give *exactly* the answer to various financial calculations that various jurisdictions mandate, but that is a rather specialized meaning of 'accurate'. tjr

On Thu, Dec 4, 2008 at 5:09 PM, Terry Reedy <tjreedy@udel.edu> wrote:
We have one by many definitions of 'accurate'. Being off by a few or even a hundred parts per quintillion is pretty good by some standards.
I agree. That's why I don't think the decimal module should be the "default implementation".
I disagree.
Okay. "Perfectly accurate" then.
The notion that decimal is more 'accurate' than float needs a lot of qualification. Yes, it is intended to give *exactly* the answer to various financial calculations that various jurisdictions mandate, but that is a rather specialized meaning of 'accurate'.
You've said what I mean better than I could. The float implementation is more than good enough for almost all applications, and it seems ridiculous to me to slow them down for the precious few that need more precision (and, at that, just don't want to type quite as much). -- Cheers, Leif

The notion that decimal is more 'accurate' than float needs a lot of qualification. Yes, it is intended to give *exactly* the answer to various financial calculations that various jurisdictions mandate, but that is a rather specialized meaning of 'accurate'.
You've said what I mean better than I could. The float implementation is more than good enough for almost all applications, and it seems ridiculous to me to slow them down for the precious few that need more precision (and, at that, just don't want to type quite as much).
While we're mincing words, I would state the case differently. Neither "precision" nor "accuracy" captures the essential difference between binary and decimal floating point. It is all about what is "exactly representable". The main reason decimal is good for financial apps is that the numbers of interest are exactly representable in decimal floating point but not in binary floating point. In a financial app, it can matter that 1.10 is exact rather than some nearby value representable in binary floating point, 0x1.199999999999ap+0. Of course, there are other differences, like control over rounding and variable precision, but the main story is about what is exactly representable. Raymond
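To make the representability point concrete, a short interpreter session (shown purely for illustration; the repr digits are what CPython prints for the nearest binary double):

    >>> from decimal import Decimal
    >>> 1.10 * 3                  # binary float: 1.10 is really a nearby binary value
    3.3000000000000003
    >>> Decimal('1.10') * 3       # decimal floating point: 1.10 is exact
    Decimal('3.30')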

On Thu, Dec 4, 2008 at 12:51 AM, Chris Rebert <clp@rebertia.com> wrote:
With Python 3.0 being released, and going over its many changes, I was reminded that decimal numbers (decimal.Decimal) are still relegated to a library and aren't built-in.
Has there been any thought to adding decimal literals and making decimal a built-in type? I googled but was unable to locate any discussion of the exact issue. The closest I could find was a suggestion about making decimal the default instead of float: http://mail.python.org/pipermail/python-ideas/2008-May/001565.html It seems that decimal arithmetic is more intuitively correct than plain floating point and floating point's main (only?) advantage is speed, but it seems like premature optimization to favor speed over correctness by default at the language level.
Intuitively, you'd think it's more correct, but for non-trivial usage I see no reason for it to be. The strongest arguments on [1] seem to be controllable precision and stricter standards. Controllable precision works just as well in a library. Stricter standards (i.e. very portable semantics) could be done with base-2 floats via software emulation on all platforms (and throwing performance out the window).

Do you have some use cases that are (completely!) correct in decimal, and not in base-2 floating point? Something not trivial (emulating a schoolbook, writing a calculator, etc.)

I see Decimal as a modest investment for a mild return. Not worth the effort to switch. -- Adam Olsen, aka Rhamphoryncus
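One small, frequently cited case, offered here purely as an illustration rather than as an answer from the thread: accumulating a decimal cash amount by repeated addition.

    >>> total = 0.0
    >>> for _ in range(10):
    ...     total += 0.1
    ...
    >>> total == 1.0              # binary float: rounding error accumulates
    False
    >>> from decimal import Decimal
    >>> total = Decimal('0')
    >>> for _ in range(10):
    ...     total += Decimal('0.1')
    ...
    >>> total == Decimal('1.0')   # each 0.1 is exact, so the sum is exact
    True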

On 04 December 2008 at 10:37 AM, Adam Olsen <rhamph@gmail.com> wrote:
Intuitively, you'd think it's more correct, but for non-trivial usage I see no reason for it to be. The strongest arguments on [1] seem to be controllable precision and stricter standards. Controllable precision works just as well in a library. Stricter standards (ie very portable semantics) could be done with base-2 floats via software emulating on all platforms (and throwing performance out the window).
Do you have some use cases that are (completely!) correct in decimal, and not in base-2 floating point? Something not trivial (emulating a schoolbook, writing a calculator, etc.)
I see Decimal as a modest investment for a mild return. Not worth the effort to switch.
But at least it will be more usable to have a short-hand for decimal declaration:

    a = 1234.5678d

is simpler than:

    import decimal
    a = decimal.Decimal('1234.5678')

or:

    from decimal import Decimal
    a = Decimal('1234.5678')

Cheers Cesare

From: "Cesare Di Mauro" <cesare.dimauro@a-tono.com>
But at least it will be more usable to have a short-hand for decimal declaration:
a = 1234.5678d
How often do you put non-integer constants in real programs? Don't you find that most real decimal apps start with external data sources instead of all the data values being hard-coded in your program?

On Thu, Dec 4, 2008 at 1:56 AM, Raymond Hettinger <python@rcn.com> wrote:
From: "Cesare Di Mauro" <cesare.dimauro@a-tono.com>
But at least it will be more usable to have a short-hand for decimal declaration:
a = 1234.5678d
How often do you put non-integer constants in real programs? Don't you find that most real decimal apps start with external data sources instead of all the data values being hard-coded in your program?
In all fairness, by that same argument we shouldn't have float literals, yet we do despite that. They're useful in scripts where things are hardcoded. Later, the scripts grow and we do end up reading the numbers in from external sources. That doesn't mean the initial script version wasn't useful. Literals help when writing proofs-of-concept and rapid prototypes, areas where Python has historically done well. Java's designers probably used similar arguments against hard-coding when deciding not to include collection literals; meanwhile Python does have such literals and they appear to be much cherished as language features go. The parallels to the decimal situation are striking. Having decimal literals as well would at least keep things consistent. Sets are less common, yet they now have literals; why not decimals too? Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com

On Thu, Dec 4, 2008 at 11:10 AM, Chris Rebert <clp@rebertia.com> wrote:
How often do you put non-integer constants in real programs? Don't you find that most real decimal apps start with external data sources instead of all the data values being hard-coded in your program?
In all fairness, by that same argument we shouldn't have float literals, yet we do despite that. They're useful in scripts where things are hardcoded. Later, the scripts grow and we do end up reading the numbers in from external sources. That doesn't mean the initial script version wasn't useful. Literals help when writing proofs-of-concept and rapid prototypes, areas where Python has historically done well. Java's designers probably used similar arguments against hard-coding when deciding not to include collection literals; meanwhile Python does have such literals and they appear to be much cherished as language features go. The parallels to the decimal situation are striking. Having decimal literals as well would at least keep things consistent. Sets are less common, yet they now have literals; why not decimals too?
Cheers, Chris
I absolutely agree. Literals can also help improve language speed. Cheers, Cesare

On 04 dec 2008 at 10:56 AM, Raymond Hettinger <python@rcn.com> wrote:
But at least it will be more usable to have a short-hand for decimal declaration:
a = 1234.5678d
How often do you put non-integer constants in real programs?
A few times, indeed (except for strings). So why are we allowing float literals?
Don't you find that most real decimal apps start with external data sources instead of all the data values being hard-coded in your program?
The same happens with any kind of application: except for very common cases (like integers and strings), constant definitions are rare. But in financial applications, using decimal numerics is a very common practice. Even if the implementation is slow, we prefer exact results over speed: there must be no possibility of failing calculations when we are manipulating money.

If you take a look at other languages / IDEs, like Delphi or CBuilder, there's support for a BCD-like type, but I never appreciated having to import its library to use it in my applications.

Also keep in mind that having the possibility to define literals for a set of types can help a lot in generating more optimized bytecode. That's because we can do more aggressive static analysis (a field where a lot of work can be done to improve the performance of the language).

Cheers Cesare

On Thu, Dec 4, 2008 at 3:19 AM, Cesare Di Mauro <cesare.dimauro@a-tono.com> wrote:
But working with financial applications, using decimal numerics is a very common practice. Even if implementation is slow, we prefer exact results over speed: there must be no possibility on failing calculations when we are manipulating moneys.
This has always bothered me: the suggestion that decimal *floats* are suitable for financial calculations, when fixed point is what you want. However, I now see some FAQ entries in http://docs.python.org/library/decimal.html that show how to get fixed-point behaviour out of it, including a wrapper around multiply and divide for ease of use, heh. Regardless, although financial use cases are important, their behaviour is not universal. The next country over, or a few years down the road, may have different rules, different proportions, etc. Not something we want to hardcode. -- Adam Olsen, aka Rhamphoryncus
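The fixed-point idiom in question boils down to quantizing back to cents after every multiply or divide; a minimal sketch along those lines (the helper names and values are illustrative, not taken from the FAQ):

    from decimal import Decimal, ROUND_HALF_UP

    CENT = Decimal('0.01')

    def mul(a, b):
        # fixed-point multiply: compute exactly, then round back to cents
        return (a * b).quantize(CENT, rounding=ROUND_HALF_UP)

    def div(a, b):
        # fixed-point divide: round the quotient back to cents
        return (a / b).quantize(CENT, rounding=ROUND_HALF_UP)

    price = Decimal('19.99')
    print(mul(price, Decimal('1.08')))    # 21.59 -- tax applied, rounded to a cent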

On Thu, Dec 4, 2008 at 4:45 AM, Cesare Di Mauro <cesare.dimauro@a-tono.com> wrote:
But at least it will be more usable to have a short-hand for decimal declaration:
In isolation, a decimal literal sounds nice. But it may not be used often enough to justify the extra mental complexity. What should the following mean?
a = 123X
It isn't obvious, which means that either it gets used all the time (decimal won't) or people will have to look it up -- or just guess, and sometimes get it wrong.
a = 1234.567d
To someone who hasn't programmed much with decimal floating point, what does the "d" mean? Could it indicate "use double-precision"? Could it just mean that the written representation is "decimal" as opposed to "octal" or "hexadecimal", but that the internal form is still binary?
a = 1234.567d
is simpler than:
[reworded to be even shorter per use]
from decimal import Decimal as d
a = d('1234.5678')
but if you really have enough Decimal literals for the difference to matter, you could always write your own helper function.
# pretend to be using the European decimal point
a = d(1234,5678)
# maps easily to the tuple-format constructor
a = d(12345678, -4)
My own hunch is that until Decimal is used enough that people start putting this sort of constructor into their personal libraries, it probably doesn't need a literal. -jJ
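For concreteness, one possible shape for such a helper (purely illustrative; the name d and its signature are assumptions mirroring the examples above), mapping the two-argument form onto decimal's documented tuple-format constructor:

    from decimal import Decimal

    def d(coefficient, exponent=0):
        # build a Decimal from an integer coefficient and a power-of-ten exponent,
        # e.g. d(12345678, -4) -> Decimal('1234.5678')
        sign = 0 if coefficient >= 0 else 1
        digits = tuple(int(ch) for ch in str(abs(coefficient)))
        return Decimal((sign, digits, exponent))

    a = d(12345678, -4)    # Decimal('1234.5678')
    b = d(1234)            # Decimal('1234')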

C# uses m (or M) as decimal suffix. Mnemonic: money. -- Marcin Kowalczyk qrczak@knm.org.pl http://qrnik.knm.org.pl/~qrczak/

There is a representation for decimal literals that nicely avoids the problem of remembering that 0d is decimal and 0m is meters etc.:
>>> import decimal
>>> decimal.Decimal(3)
Decimal("3")
>>> Decimal("3")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'Decimal' is not defined
The error points out that I really need to do both:
import decimal
from decimal import Decimal
and I'd prefer the single import do both. Note that this anomaly of repr is not limited to decimal as I think this is a bit worse:
>>> float('nan')
nan
>>> float('inf')
inf
--- Bruce

On Fri, Dec 5, 2008 at 2:02 PM, Bruce Leban <bruce@leapyear.org> wrote:
There is a representation for decimal literals that nicely avoids the problem of remembering that 0d is decimal and 0m is meters etc.:
>>> import decimal
>>> decimal.Decimal(3)
Decimal("3")
>>> Decimal("3")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'Decimal' is not defined
The error points out that I really need to do both:
import decimal
from decimal import Decimal
You only need the second line there. The first line is unnecessary and does not affect the second. Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com

On Thu, Dec 4, 2008 at 1:37 AM, Adam Olsen <rhamph@gmail.com> wrote:
On Thu, Dec 4, 2008 at 12:51 AM, Chris Rebert <clp@rebertia.com> wrote:
With Python 3.0 being released, and going over its many changes, I was reminded that decimal numbers (decimal.Decimal) are still relegated to a library and aren't built-in.
Has there been any thought to adding decimal literals and making decimal a built-in type? I googled but was unable to locate any discussion of the exact issue. The closest I could find was a suggestion about making decimal the default instead of float: http://mail.python.org/pipermail/python-ideas/2008-May/001565.html It seems that decimal arithmetic is more intuitively correct than plain floating point and floating point's main (only?) advantage is speed, but it seems like premature optimization to favor speed over correctness by default at the language level.
Intuitively, you'd think it's more correct, but for non-trivial usage I see no reason for it to be. The strongest arguments on [1] seem to be controllable precision and stricter standards. Controllable precision works just as well in a library. Stricter standards (ie very portable semantics) could be done with base-2 floats via software emulating on all platforms (and throwing performance out the window).
Do you have some use cases that are (completely!) correct in decimal, and not in base-2 floating point? Something not trivial (emulating a schoolbook, writing a calculator, etc.)
No, not personally, but I assume there must be, or the decimal module would never have been added in the first place. PEP 327 suggests that accurate financial calculations benefit from decimal. Someone must have had (a) sufficiently compelling use case(s) to get the BDFL to say yes. GvR doesn't approve PEPs indiscriminately. Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com

If decimals are to become built-in, there are a number of things that need to happen, and one of them includes a C implementation, not just for speed, but also to integrate with the parser and the rest of the language. Last time I looked, the existing C implementations out there were license compatible with Python. Also, there are other integration issues to be solved, including that of contexts (which are an integral part of the spec). None of this is a trivial exercise or I would have already done it. I do want to move decimal towards being a builtin, but don't underestimate the difficulty of doing so.

Also, there are other API issues. As it stands, the decimal module is not friendly to newbies and presents challenges even for expert users. And don't underestimate the significance of performance -- it is a top reason that people currently avoid the decimal module, and it is an issue for the language itself (lots of companies avoid Python because of its speed disadvantage).

One other thought: decimal literals are likely not very helpful in real programs. Most apps that have specific numeric requirements will have code that manipulates numbers read in from external sources and written back out -- the scripts themselves typically contain very few constants (and those are typically integers), so you don't get much help from a decimal literal.

Raymond Hettinger
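For readers who haven't bumped into contexts yet, a minimal illustration (using only the module's documented context API) of why they matter for integration: precision and rounding live in an ambient context rather than on the values themselves.

    from decimal import Decimal, getcontext, localcontext

    print(getcontext().prec)              # 28 by default
    print(Decimal(1) / Decimal(7))        # 0.1428571428571428571428571429

    with localcontext() as ctx:           # temporarily swap in a different context
        ctx.prec = 6
        print(Decimal(1) / Decimal(7))    # 0.142857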

On Thu, Dec 4, 2008 at 2:50 AM, Raymond Hettinger <python@rcn.com> wrote:
From: "Raymond Hettinger"
Last time I looked, the existing C implementations out there were license compatible with Python.
That should have said "incompatible".
decNumber is available under the ICU License, which seems to be a variant of the original BSD license. Depending on exactly how the acknowledgement clause is interpreted (IANAL), it seems like it might be compatible. If not, IBM, which has copyright on decNumber, seems to have a fairly pro-open-source stance historically; perhaps if asked nicely by the community, they would be willing to relicense decNumber under the revised BSD license (a very minor change vs. the ICU License), which would certainly be compatible with Python's licensing policy. Or maybe there exists another library that's already compatible. Perhaps I'll investigate. But the key here is we should first determine whether people want decimal to be built-in and have a literal. Once that's established, then the details as to implementing that should be investigated. But yes, practicality and feasibility certainly are factors in all this. Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com

2008/12/4 Chris Rebert <clp@rebertia.com>:
Or maybe there exists another library that's already compatible. Perhaps I'll investigate.
But the key here is we should first determine whether people want decimal to be built-in and have a literal. Once that's established, then the details as to implementing that should be investigated. But
I'd put it the other way around. The best we can do *now* with Decimal, if we want it to be included as a literal *somewhen*, is to get it in C. There're already some first steps in that direction, but *please* investigate that other path you're suggesting. Thanks! -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/

Ok, so just to summarize, should anyone bring up this same issue again and come upon this thread:

* Decimal literals are a possibly good idea, but a fast C implementation with a Python-compatible license (which as of this writing does not yet exist) would be a necessary prerequisite
* Making decimals the default instead of floats would be controversial, to say the least, and would definitely require further analysis and discussion

Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com

On Thu, Dec 04, 2008, Raymond Hettinger wrote:
One other thought, decimal literals are likely not very helpful in real programs. Most apps that have specific numeric requirements, will have code that manipulates numbers read-in from external sources and written back out -- the scripts themselves typically contain very few constants (and those are typically integers), so you don't get much help from a decimal literal.
That's half-true. Most applications IME that manipulate numbers need to express zero frequently as initializers. So yeah, it's easy to just write things like::

    total = dzero
    balance = dzero

but I think there's definitely some utility from writing::

    total = 0.0d
    balance = 0.0d

How much utility (especially from the readability side) is of course subject to debate, but please don't ignore it altogether. -- Aahz (aahz@pythoncraft.com) <*> http://www.pythoncraft.com/ "It is easier to optimize correct code than to correct optimized code." --Bill Harlan

Chris Rebert wrote:
It seems that decimal arithmetic is more intuitively correct that plain floating point and floating point's main (only?) advantage is speed, but it seems like premature optimization to favor speed over correctness by default at the language level.
One could say the same about rational arithmetic, which has also been considered and so far rejected for fractional literals. In fact, fractions are more accurate, since there is never rounding unless one requests it. There is an advantage of binary floats that you missed: one can prototype float functions in Python and then translate them to C as necessary for real speed, and get the same results (using the same compiler on the same hardware). But even prototypes need to run faster than molasses. One can also use Python to glue together C (or Fortran) double routines without translating the numbers. The Numeric module (now numpy) is over a decade old and was, I believe, Python's first killer app.
Obviously, making decimal the default instead of float would be fraught with backward compatibility problems and thus is not presently feasible, but at the least for now Python could make it easier to use decimals and their associated nice arithmetic by having a literal syntax for them and making them built-in.
Ditto for fractions.
So what do people think of: 1. making decimal.Decimal a built-in type, named "decimal" (or "dec" if that's too long?) 2. adding a literal syntax for decimals; I'd naively suggest a 'd' suffix to the float literal syntax (which was suggested in the brief aforementioned thread)
I would just as soon do the same for fractions.Fraction, perhaps 1 f/ 2 or 1///2. Even with decimal literals, the functions would remain in the importable module, just as with math and cmath.
3. (in Python 4.0/Python 4000) making decimal the default instead of float, with floats instead requiring a 'f' suffix
Decimal is not just a decimal arithmetic module. It implements and will track a particular complex, specialized, possibly changeable standard controlled by IBM, which already has a few crazy quirks present for commercial rather than technical reasons. This is fine for an add-on class but not, in my opinion, for Python's default fraction arithmetic. If Python's developers did consider replacing floats in that role, I would prefer either fractions or a much simplified decimal type designed by us for general purpose needs. Terry Jan Reedy

On Thu, Dec 4, 2008 at 10:04 AM, Terry Reedy <tjreedy@udel.edu> wrote:
Chris Rebert wrote: <snip>
3. (in Python 4.0/Python 4000) making decimal the default instead of float, with floats instead requiring a 'f' suffix
Decimal is not just a decimal arithmetic module. It implements and will track a particular complex, specialized, possibly changeable standard controlled by IBM, which already has a few crazy quirks present for commercial rather than technical reasons. This is fine for an add-on class but not, in my opinion, for Python's default fraction arithmetic. If Python's developers did consider replacing floats in that role, I would prefer either fractions or a much simplified decimal type designed by us for general purpose needs.
I'll just point out that GvR seemed to favor the general idea (along with a transition mechanism) in the old thread I mentioned in my original post; otherwise I'd have been much more wary of including #3. I can't speak to how good the standard is comparatively except that the Python devs must have chosen it over others or a custom one for good reason, and at least it's better than plain floats. The PEP mentions it being almost completely ANSI/IEEE-compliant and that it has already taken into account the evil corner cases. Cheers, Chris -- Follow the path of the Iguana... http://rebertia.com
Participants (13):
- Aahz
- Adam Olsen
- Bruce Leban
- Cesare Di Mauro
- Chris Rebert
- Christian Heimes
- Facundo Batista
- Jim Jewett
- Leif Walsh
- Marcin 'Qrczak' Kowalczyk
- Raymond Hettinger
- Stephen J. Turnbull
- Terry Reedy