Guido van Rossum email@example.com wrote:
> That is why I think we should seriously consider moving to IEEE semantics for a decimal literal. I think that would make a *lot* of sense.
> What's the proposal? Just using decimal64 for decimal literals, or introducing decimal64 as a new builtin type? (I could get behind either one.)
IEEE 754-2008 is in a certain sense "arbitrary precision", since it allows multiple discrete contexts: Decimal32, Decimal64, Decimal128, ...
In theory this goes on ad infinitum, but the precision grows much more slowly than the exponent range. Mike Cowlishaw's spec fills in the gaps by allowing arbitrary contexts. There are some other minor differences as well, but if we make moderate changes to Decimal, we get strict IEEE behavior.
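To see why, note that each interchange format decimal{k} (k a multiple of 32) has a precision of 9*k/32 - 2 digits but an Emax of 3 * 2**(k/16 + 3), so the precision grows linearly while the exponent range grows exponentially. Here is a minimal sketch of what such contexts look like on top of today's decimal module -- the ieee_context() helper is purely illustrative, not an existing API, and only approximates the IEEEContext() I propose below:

    from decimal import Context

    def ieee_context(bits):
        """Illustrative helper: a Context approximating the IEEE 754-2008
        interchange format decimal{bits} (bits = 32, 64, 128, ...)."""
        if bits < 32 or bits % 32 != 0:
            raise ValueError("bits must be a positive multiple of 32")
        prec = 9 * bits // 32 - 2          # 7, 16, 34, 70, ...
        emax = 3 * 2 ** (bits // 16 + 3)   # 96, 384, 6144, 1572864, ...
        return Context(prec=prec, Emax=emax, Emin=1 - emax, clamp=1)

    for k in (32, 64, 128, 256):
        c = ieee_context(k)
        print(k, c.prec, c.Emax)
    # 32 7 96
    # 64 16 384
    # 128 34 6144
    # 256 70 1572864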
So my specific proposal was:
1) Make those changes to Decimal (we can call the module decimal2 if backwards compatibility rules it out). The change most relevant to this subthread is that literals are rounded to the context's precision on input, while exact construction remains available via Decimal(value); see the sketch after this list.
2) Keep the arbitrary context facility.
3) Set IEEEContext(Decimal64) as the default. Users will most likely stick to Decimal32, Decimal64 and Decimal128, and many will probably never change the context at all.
4) Optional: Also use the function names from IEEE 754-2008.
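To illustrate 1) with today's module, where construction is always exact and only an operation (here unary plus) applies the context: under the proposal a decimal literal would behave like the rounded value below, while Decimal(value) would keep the exact behavior.

    from decimal import Decimal, getcontext

    getcontext().prec = 16   # roughly the Decimal64 precision

    x = Decimal("1.23456789012345678901")   # exact construction, all 21 digits kept
    y = +x                                  # unary plus rounds to the current context

    print(x)   # 1.23456789012345678901
    print(y)   # 1.234567890123457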
With these changes, most users would /think/ that Decimal is a fixed-width Decimal64 type. Advanced users can still change the context, and I know for a fact that some users really like the option of increasing the precision temporarily.
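For reference, that pattern is already idiomatic with localcontext() in the current module; a quick sketch:

    from decimal import Decimal, getcontext, localcontext

    getcontext().prec = 16

    with localcontext() as ctx:
        ctx.prec = 50                                  # work with 50 digits temporarily
        s = sum(Decimal(1) / n for n in range(1, 11))

    print(+s)   # re-rounded to the outer context's 16 digits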
I have to think some more about the other solution (a decimal64-only builtin). At first glance it seems too restrictive, since I imagine users would at least want Decimal128 as well, and there is no speed benefit either.