On 10 March 2014 13:53, Stefan Krah <firstname.lastname@example.org> wrote:
[My apologies for being terse, I don't have much time to follow this discussion right now.]
Nick Coghlan <email@example.com> wrote:
I think users of decimal literals will just need to deal with the risk of unexpected rounding, as the alternatives are even more problematic.
That is why I think we should seriously consider moving to IEEE semantics for a decimal literal. Among other things:

- Always round the literal inputs.
- Supply IEEE contexts.
- Make Decimal64 the default.
- Add the missing rounding modes to sqrt() and exp().
- Keep the ability to create exact Decimals through the constructor when no context is passed.
- Make the Decimal constructor use the context for rounding if one is passed.

The existing specification is largely compatible with IEEE 754-2008:

- We can still support setting irregular (non-IEEE) contexts.
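For reference, a Decimal64-style context can already be built by hand with today's decimal module (the Emin/Emax values below are the IEEE 754-2008 decimal64 parameters), and the contrast between the always-exact constructor and context-rounded creation is current stdlib behaviour:

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

# Hand-built context matching the IEEE 754-2008 decimal64 format:
# 16 significant digits, exponent range per the standard.
D64 = Context(prec=16, rounding=ROUND_HALF_EVEN, Emin=-383, Emax=384)

# Today the constructor is always exact, regardless of any context ...
exact = Decimal("1.23456789012345678901")
print(exact)    # 1.23456789012345678901 (all 21 digits kept)

# ... while Context.create_decimal rounds the input to the context,
# which is what "always round the literal inputs" would mean.
rounded = D64.create_decimal("1.23456789012345678901")
print(rounded)  # 1.234567890123457 (16 digits, round-half-even)
```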
This is what I've been thinking about. Most non-expert users will be very happy with Decimal64 and a single fixed context. There could be a separate type called decimal64 in builtins that always used the same standard context. Literals could create decimal64 instances.
The semantics of code that uses this type and decimal literals would be independent of any arithmetic context, which is good not just for constant folding but also for understanding. There would be no need to explain what an arithmetic context is to new users. You could just say: "here's a type that represents decimal values with 16 digits. It sometimes needs to round if the result of a calculation exceeds 16 digits, so it uses the standard decimal rounding mode XXX."
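A minimal sketch of what such a fixed-context type could look like (the name decimal64 and this wrapper are purely illustrative, not an existing API): every construction and operation funnels through one module-private Decimal64 context, so results never depend on the thread-local context.

```python
from decimal import Context, ROUND_HALF_EVEN

# Illustrative fixed Decimal64 context; users would never see or change it.
_CTX = Context(prec=16, rounding=ROUND_HALF_EVEN, Emin=-383, Emax=384)

class decimal64:
    """Hypothetical fixed-context 16-digit decimal type (sketch only)."""

    def __init__(self, value="0"):
        # Inputs are always rounded to 16 digits, as a literal would be.
        self._d = _CTX.create_decimal(value)

    def __add__(self, other):
        # Arithmetic uses the fixed context, never the thread-local one.
        return decimal64(_CTX.add(self._d, other._d))

    def __mul__(self, other):
        return decimal64(_CTX.multiply(self._d, other._d))

    def __repr__(self):
        return f"decimal64('{self._d}')"

print(decimal64("0.1") + decimal64("0.2"))  # decimal64('0.3')
```

Unlike binary floats, 0.1 + 0.2 is exact here; rounding only kicks in once a value or result exceeds 16 significant digits.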
Where this would get complicated is for people who also use the Decimal type. They'd need to keep track of which objects were of which type, and so decimal literals might seem more annoying than useful.