On Mon, Mar 10, 2014 at 02:05:22PM +0000, Oscar Benjamin wrote:
> This is what I've been thinking about. Most non-expert users will be very happy with Decimal64 and a single fixed context. There could be a separate type called decimal64 in builtins that always used the same standard context. Literals could create decimal64 instances.

> The semantics of code that uses this type and decimal literals would be independent of any arithmetic context, which is good not just for constant folding but for understanding. There would be no need to explain what an arithmetic context is to new users. You can just say: "here's a type that represents decimal values with 16 digits. It sometimes needs to round if the result of a calculation exceeds 16 digits, so it uses the standard decimal rounding mode XXX."
All this sounds quite good to me. What's the catch? :-)
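To make that concrete, here's a rough sketch of how decimal64-style semantics could be emulated today on top of the decimal module. The class name, the fixed 16-digit context and the operator support are all assumptions about the proposal, not an existing API:

    from decimal import Context, ROUND_HALF_EVEN

    # Fixed 16-digit context mirroring IEEE 754 decimal64 precision.
    _D64_CONTEXT = Context(prec=16, rounding=ROUND_HALF_EVEN)

    class decimal64:
        """A decimal value tied to one fixed context, so results never
        depend on the thread-local context from decimal.getcontext()."""

        def __init__(self, value="0"):
            # Rounding happens at construction: inputs with more than
            # 16 significant digits are rounded immediately.
            self._d = _D64_CONTEXT.create_decimal(value)

        def __add__(self, other):
            if isinstance(other, decimal64):
                return decimal64(_D64_CONTEXT.add(self._d, other._d))
            return NotImplemented

        def __repr__(self):
            return "decimal64('%s')" % self._d

    print(decimal64("1.234567890123456789"))   # decimal64('1.234567890123457')
    print(decimal64("1e16") + decimal64("1"))  # decimal64('1.000000000000000E+16')

Because the context is baked into the type, those results do not change if some other part of the program adjusts decimal.getcontext().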
> Where this would get complicated is for people who also use the Decimal type. They'd need to keep track of which objects were of which type and so decimal literals might seem more annoying than useful.
Hmmm. I don't think this should necessarily be complicated, or at least no more complicated than dealing with any other mixed numeric types. If you don't want to mix them, don't mix them :-)
(Perhaps there could be a Decimal context flag to allow/disallow mixed Decimal/decimal64 operations?)
I would expect that decimal64 and decimal.Decimal would be separate types. They might share parts of the implementation under the hood, but I don't think it would be necessary or useful to have decimal64 inherit from Decimal, or vice versa.
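As a rough illustration of that separation, the wrapper below shares the decimal machinery through composition rather than inheritance, and refuses to mix with Decimal unless the caller explicitly opts in. The allow_mixed switch stands in for the hypothetical context flag mentioned above; none of this is an existing decimal-module API:

    from decimal import Decimal, Context, ROUND_HALF_EVEN

    _D64_CONTEXT = Context(prec=16, rounding=ROUND_HALF_EVEN)
    allow_mixed = False   # flip to True to permit Decimal <-> decimal64 mixing

    class decimal64:
        __slots__ = ("_d",)            # wraps a Decimal; does not subclass it

        def __init__(self, value="0"):
            self._d = _D64_CONTEXT.create_decimal(value)

        def __mul__(self, other):
            if isinstance(other, decimal64):
                return decimal64(_D64_CONTEXT.multiply(self._d, other._d))
            if isinstance(other, Decimal) and allow_mixed:
                # Explicit opt-in: coerce the Decimal into the fixed context.
                return decimal64(_D64_CONTEXT.multiply(self._d, other))
            return NotImplemented          # otherwise mixing is rejected

        def __repr__(self):
            return "decimal64('%s')" % self._d

    print(decimal64("1.5") * decimal64("2"))    # decimal64('3.0')
    try:
        decimal64("1.5") * Decimal("2")         # disallowed by default
    except TypeError as exc:
        print("mixing rejected:", exc)

Keeping the types separate this way means mixed operations fail loudly with TypeError by default, rather than silently picking one type's precision over the other.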