On Fri, Mar 7, 2014 at 4:27 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
> On 8 March 2014 00:10, Guido van Rossum <guido@python.org> wrote:
>> On Fri, Mar 7, 2014 at 3:42 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
>>>
>>> The current Decimal constructor comes with a guarantee of exactness.
>>
>> But Decimal(<float>) is relatively new (it was an error in 2.6). I know it's
>> annoying to change it again, but I also have the feeling that there's not
>> much use in producing a Decimal with 53 (or more?) *decimal digits* of
>> precision from a float with 53 *bits* -- at least not by default. Maybe we
>> can relent on this guarantee and make the constructor by default use fewer
>> digits (e.g. using the same algorithm used by repr(<float>)) and have an
>> optional argument to produce the full precision? Or reserve
>> Decimal.from_float() for that?

> I'd rather a TypeError than an inexact conversion. The current
> behaviour allows you to control how rounding occurs. (I don't use the
> decimal module unless this kind of thing is important).

My proposal still gives you control (e.g. through a keyword argument, or by using Decimal.from_float() instead of the constructor). It just changes the default conversion.
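For example, with 1.1 (the exact spelling of the option is of course still open; this is only meant to illustrate the difference):

    >>> from decimal import Decimal
    >>> Decimal(1.1)                  # today: the exact value of the double
    Decimal('1.100000000000000088817841970012523233890533447265625')
    >>> Decimal(repr(1.1))            # roughly what the proposed default would give
    Decimal('1.1')
    >>> Decimal.from_float(1.1)       # full precision would still be available here
    Decimal('1.100000000000000088817841970012523233890533447265625')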

I have to admit I don't recall the discussion that led to accepting float arguments to Decimal(), so I don't know if the exactness guarantee was ever challenged or even discussed. For all other argument types (str, tuple of digits, Decimal) it makes intuitive sense to guarantee exactness, because the input is already in decimal form, so why would the caller pass in digits they don't care about? But for a float the caller has no such control -- I can call round(math.pi, 2) but the resulting float (taken at face value) is still equal to 3.140000000000000124344978758017532527446746826171875, even though just '3.14' is enough as input to float() to produce that exact value.
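Concretely, in a 3.x session:

    >>> import math
    >>> from decimal import Decimal
    >>> round(math.pi, 2)
    3.14
    >>> Decimal(round(math.pi, 2))    # the exact value of that float
    Decimal('3.140000000000000124344978758017532527446746826171875')
    >>> float('3.14') == round(math.pi, 2)
    True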

And unfortunately the default context precision for Decimal (28 significant digits) is much larger than the precision of an IEEE double, so using unary + does not get rid of those extraneous digits -- it produces Decimal('3.140000000000000124344978758'). So you really have to work hard to produce the intended Decimal('3.14') (or Decimal('3.140000000000000')).
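That is, with the default context:

    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec
    28
    >>> +Decimal(3.14)                           # rounds to 28 digits, not to '3.14'
    Decimal('3.140000000000000124344978758')
    >>> Decimal(repr(3.14))                      # one way to get the intended value
    Decimal('3.14')
    >>> Decimal(3.14).quantize(Decimal('0.01'))  # another way
    Decimal('3.14')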

>>> It accepts int, str and float (and tuple) and creates a Decimal object
>>> having the exact value of the int or float or the exact value that
>>> results from a decimal interpretation of the str. This exactness is
>>> not mandated by any of the standards on which the decimal module is
>>> based but I think it is a good thing. I strongly oppose a change that
>>> would have it use heuristics for interpreting other numeric types.
>>
>> Well, given that most floating point values are the result of operations
>> that inherently rounded to 53 bits anyway, I'm not sure that this guarantee
>> is useful. Sure, I agree that there should be *a* way to construct the exact
>> Decimal for any IEEE 754 double (since it is always possible, 10 being a
>> multiple of 2), but I'm not sure how important it is that this is the
>> default constructor.

> It's hard to reason about the accuracy of calculations that use the
> decimal module. It's harder than with a fixed-width type because of
> the possible presence of higher-than-context precision Decimals. The
> invariant that the constructor is exact (or a TypeError) is of
> significant benefit when using Decimal for code that accepts inputs of
> different numeric types.

Can you provide an example of this benefit? I can't quite tell whether you are talking about passing Decimal instances into code that then just uses +, * etc., or about code that uses Decimal internally and takes arguments of different types (e.g. int, float, string).
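To make the question concrete (the function names here are made up, of course), I mean the difference between something like

    from decimal import Decimal

    def add_tax_a(price):
        # expects to be handed a Decimal and just does arithmetic on it
        return price * Decimal('1.15')

and something like

    def add_tax_b(price):
        # converts whatever it is given (int, float, str) itself,
        # so the constructor's behaviour for floats matters here
        return Decimal(price) * Decimal('1.15')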

--
--Guido van Rossum (python.org/~guido)