On 10 March 2014 18:09, Stefan Krah <email@example.com> wrote:
> Guido van Rossum <firstname.lastname@example.org> wrote:
>> That is why I think we should seriously consider moving to IEEE semantics for a decimal literal. I think that would make a *lot* of sense.
> What's the proposal? Just using decimal64 for decimal literals, or introducing decimal64 as a new builtin type? (I could get behind either one.)
> IEEE 754-2008 is in a certain sense "arbitrary precision", since it allows multiple discrete contexts: Decimal32, Decimal64, Decimal128, ...
> In theory this goes on ad infinitum, but the precision increases much more slowly than the exponents do. Mike Cowlishaw's spec fills in the gaps by allowing arbitrary contexts. There are also some minor differences, but if we make moderate changes to Decimal, we get strict IEEE behavior.
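For concreteness, each interchange format's parameters follow from its width in bits, so an IEEE-style context can be built with today's Context class. A rough sketch (ieee_context is a made-up helper name, not an existing decimal API; the formulas are the ones from IEEE 754-2008):

    from decimal import Context, Decimal

    def ieee_context(bits):
        # Hypothetical helper: build a Context matching the IEEE 754-2008
        # decimal interchange format of the given width (32, 64, 128, ...).
        # Per the standard: precision = 9k/32 - 2 and Emax = 3 * 2**(k/16 + 3).
        if bits % 32 != 0 or bits < 32:
            raise ValueError("width must be a positive multiple of 32")
        prec = 9 * bits // 32 - 2
        emax = 3 * 2 ** (bits // 16 + 3)
        return Context(prec=prec, Emax=emax, Emin=1 - emax, clamp=1)

    ctx = ieee_context(64)            # prec=16, Emax=384, Emin=-383
    print(ctx.sqrt(Decimal(2)))       # 1.414213562373095 (16 digits)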
> So my specific proposal was:
> 1) Make those changes to Decimal (we can call the module decimal2 if backwards compatibility rules it out). The most important change for this subthread is rounding all literals on input while preserving exact construction via Decimal(value); the distinction is sketched just after this list.
> 2) Keep the arbitrary context facility.
> 3) Set IEEEContext(Decimal64) as the default. Users will mostly stick to Decimal32, Decimal64 and Decimal128, and many will never change the context at all.
> 4) Optional: Also use the function names from IEEE 754-2008.
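Regarding (1): both behaviours already exist in today's decimal module, which makes the proposed change easy to illustrate. The constructor is exact regardless of the context, while Context.create_decimal rounds on input, which is what (1) proposes for literals:

    from decimal import Decimal, getcontext

    getcontext().prec = 5

    # Today: the Decimal constructor is always exact ...
    print(Decimal('1.23456789'))                       # 1.23456789

    # ... while create_decimal rounds to the active context on input.
    print(getcontext().create_decimal('1.23456789'))   # 1.2346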
I generally agree with the above except that...
> With these changes most users would /think/ that Decimal is a fixed-width Decimal64 type. Advanced users can still change the context.
This is the problem I have with this particular proposal. Users would think that it's a fixed-width type and then write code that naively makes that assumption. Then the code blows up when someone else changes the arithmetic context. I don't think we should encourage non-expert users to think that they can safely rely on this behaviour without actually making it safe to rely on.
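A toy example of that failure mode (the function name is just for illustration):

    from decimal import Decimal, localcontext

    def third():
        # Looks self-contained, but silently depends on whatever
        # arithmetic context the caller happens to have active.
        return Decimal(1) / Decimal(3)

    print(third())        # 0.3333333333333333333333333333 (default prec=28)

    with localcontext() as ctx:
        ctx.prec = 3
        print(third())    # 0.333: same code, different answer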
> I know for a fact that some users really like the option of increasing the precision temporarily.
Agreed. I do this often. But I think it's the kind of thing that belongs in a specialised library like Mark's pcdeclib rather than in a casual user's application code. The option will always be there to promote your decimal64s to the Big Decimal type and do all your calculations in whichever precision you want.
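For reference, the idiom in question looks something like this (essentially the pattern the decimal documentation recommends): do the intermediate arithmetic with extra digits, then round once on the way out.

    from decimal import Decimal, localcontext

    def hypot(x, y):
        with localcontext() as ctx:
            ctx.prec += 10                  # extra digits for intermediates
            result = (x * x + y * y).sqrt()
        return +result  # unary plus rounds back to the caller's precision

    print(hypot(Decimal(3), Decimal(4)))    # 5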
> I have to think about the other solution (decimal64 only). At first glance it seems too restrictive, since I imagine users would at least want Decimal128, too. Additionally there is no speed benefit.
Decimal128 seems fine to me. I just think it should be a true fixed-width type. The benefits of this are:

1) Naive code doesn't get broken by different contexts.
2) I can tell by looking at a snippet exactly what it does, without needing to wonder (or ask) whether the context has been fiddled with.
3) I can show code that uses decimal literals and the decimal128 constructor and guarantee that it works without caveats.
4) Since the meaning of any expression is known at compile time, it is amenable to constant folding.
5) Unary + is a no-op and unary - is exact (as users would expect), so negative literals have the same meaning regardless of the current context.
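Point 5 is easy to demonstrate with the current module, where even negation rounds to the active context:

    from decimal import Decimal, localcontext

    x = Decimal('1.234567890123456789012345678901')   # 31 digits

    with localcontext() as ctx:
        ctx.prec = 5
        print(-x)                 # -1.2346: unary minus rounds
        print(x.copy_negate())    # exact negation, no rounding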