On 8 March 2014 00:10, Guido van Rossum email@example.com wrote:
> On Fri, Mar 7, 2014 at 3:42 PM, Oscar Benjamin firstname.lastname@example.org wrote:
>> The current Decimal constructor comes with a guarantee of exactness.
> But Decimal(<float>) is relatively new (it was an error in 2.6). I know it's annoying to change it again, but I also have the feeling that there's not much use in producing a Decimal with 53 (or more?) decimal digits of precision from a float with 53 bits -- at least not by default. Maybe we can relent on this guarantee and make the constructor by default use fewer digits (e.g. using the same algorithm used by repr(<float>)) and have an optional argument to produce the full precision? Or reserve Decimal.from_float() for that?
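The difference between the two conversions Guido describes can be seen directly; the long digit string below is the exact binary value of the double closest to 0.1:

```python
from decimal import Decimal

# Exact conversion: preserves every bit of the IEEE 754 double.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# repr()-style conversion: the shortest decimal string that
# round-trips to the same double -- what a heuristic default
# constructor would presumably return.
print(Decimal(repr(0.1)))
# 0.1
```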
I'd rather have a TypeError than an inexact conversion. The current behaviour makes it possible to control exactly how and when rounding occurs. (I don't use the decimal module unless this kind of thing is important.)
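A minimal sketch of the control this gives (the precision and rounding mode chosen here are just illustrative): convert exactly first, then round once, deliberately, under an explicit context.

```python
from decimal import Decimal, localcontext, ROUND_DOWN

exact = Decimal(0.1)          # exact conversion; no rounding has happened yet
with localcontext() as ctx:
    ctx.prec = 4              # the one deliberate rounding step
    ctx.rounding = ROUND_DOWN
    rounded = +exact          # unary plus applies the context to the value
print(rounded)                # 0.1000
```

If the constructor itself rounded, that first step would already have discarded information before the caller had any say in the rounding mode.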
>> It accepts int, str and float (and tuple) and creates a Decimal object having the exact value of the int or float or the exact value that results from a decimal interpretation of the str. This exactness is not mandated by any of the standards on which the decimal module is based but I think it is a good thing. I strongly oppose a change that would have it use heuristics for interpreting other numeric types.
> Well, given that most floating point values are the result of operations that inherently rounded to 53 bits anyway, I'm not sure that this guarantee is useful. Sure, I agree that there should be a way to construct the exact Decimal for any IEEE 754 double (since it is always possible, 10 being a multiple of 2), but I'm not sure how important it is that this is the default constructor.
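Guido's parenthetical is why an exact conversion always exists: every finite double is a dyadic rational p/2**n, and since 1/2 = 5/10, each such value has a finite decimal expansion. A quick check of the exactness (the value of x is arbitrary):

```python
from decimal import Decimal
from fractions import Fraction

x = 3.14159  # any finite double would do
# Both conversions are exact, so they denote the same rational number.
assert Fraction(Decimal(x)) == Fraction(x)
```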
It's hard to reason about the accuracy of calculations that use the decimal module. It's harder than with a fixed-width type because higher-than-context-precision Decimals may be present. The invariant that the constructor is exact (or raises a TypeError) is of significant benefit when using Decimal in code that accepts inputs of different numeric types.
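The benefit for mixed-type input can be sketched like this (a hypothetical helper, not from the thread): because Decimal(value) is exact for int, float, str and Decimal alike, the only rounding anywhere in the function is the single explicit quantize() call, which makes the accuracy easy to reason about.

```python
from decimal import Decimal

def to_cents(value):
    """Accept an int, float, str or Decimal and round it to two
    decimal places, exactly once, at the end.

    Decimal(value) is exact for all of these input types, so no
    information is lost before the deliberate quantize() step.
    (Hypothetical helper for illustration.)
    """
    return Decimal(value).quantize(Decimal('0.01'))

print(to_cents(5))      # 5.00
print(to_cents(0.1))    # 0.10
```

If the constructor silently rounded floats, the same function would round twice for float inputs and once for the others, with no indication at the call site.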