On Fri, Mar 7, 2014 at 3:42 PM, Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
> On 7 March 2014 19:20, Guido van Rossum <guido@python.org> wrote:
>>
>> Maybe we can fix the conversion between Decimal and float (if this is really
>> all that matters to Mark, as it appears to be from his last email -- I'd
>> already written most of the above before it arrived). Could it be as simple
>> as converting the float to a string using repr()? Given the smarts in the
>> float repr() that should fix the examples Mark complained about. Are there
>> any roadblocks in the implementation or assumptions of the Decimal type
>> here?

> The current Decimal constructor comes with a guarantee of exactness.

But Decimal(<float>) is relatively new (it was an error in 2.6). I know it's annoying to change it again, but I also have the feeling that there's not much use in producing a Decimal with 53 (or more?) *decimal digits* of precision from a float with 53 *bits* -- at least not by default. Maybe we can relent on this guarantee and make the constructor by default use fewer digits (e.g. using the same algorithm used by repr(<float>)) and have an optional argument to produce the full precision? Or reserve Decimal.from_float() for that?
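To make that concrete, here's roughly what the difference looks like today, going through repr() by hand versus the exact conversion the current constructor (and Decimal.from_float()) performs:

>>> from decimal import Decimal
>>> Decimal(repr(0.1))        # shortest repr that round-trips -- the proposed default
Decimal('0.1')
>>> Decimal.from_float(0.1)   # the exact binary value, kept in full
Decimal('0.1000000000000000055511151231257827021181583404541015625')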
 
> It accepts int, str and float (and tuple) and creates a Decimal object
> having the exact value of the int or float or the exact value that
> results from a decimal interpretation of the str. This exactness is
> not mandated by any of the standards on which the decimal module is
> based but I think it is a good thing. I strongly oppose a change that
> would have it use heuristics for interpreting other numeric types.
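(For illustration, the exactness holds for every accepted input type, not just float -- ints and the sign/digits/exponent tuple also come through untouched by the context:)

>>> from decimal import Decimal
>>> Decimal(10**30)                    # 31 digits survive even though the context precision is 28
Decimal('1000000000000000000000000000000')
>>> Decimal((0, (1, 4, 1, 4), -3))     # tuple form: (sign, digits, exponent)
Decimal('1.414')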

Well, given that most floating point values are the result of operations that inherently rounded to 53 bits anyway, I'm not sure that this guarantee is useful. Sure, I agree that there should be *a* way to construct the exact Decimal for any IEEE 754 double (since it is always possible, 10 being a multiple of 2), but I'm not sure how important it is that this is the default constructor.
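(To spell out why it's always possible: the stored double is an integer over a power of two, and multiplying numerator and denominator by the matching power of five turns that into an integer over a power of ten -- exactly the digit string Decimal produces below:)

>>> from fractions import Fraction
>>> Fraction(0.1)              # the exact stored value: an odd integer over 2**55
Fraction(3602879701896397, 36028797018963968)
>>> 3602879701896397 * 5**55   # the same value as an integer over 10**55, i.e. Decimal(0.1)'s digits
1000000000000000055511151231257827021181583404541015625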
 
>> Perhaps the default Decimal context being set for a much higher
>> precision makes it philosophically unacceptable?

> The context is ignored by the Decimal constructor which will always
> create an exact value. If you want the created value rounded to
> context use unary +:
>
> >>> from decimal import Decimal as D
> >>> D(0.1)
> Decimal('0.1000000000000000055511151231257827021181583404541015625')
> >>> +D(0.1)
> Decimal('0.1000000000000000055511151231')
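(And the context precision is whatever you set it to, so the rounded result can be as short as you like -- a quick illustration with a local context:)

>>> import decimal
>>> from decimal import Decimal as D
>>> with decimal.localcontext() as ctx:
...     ctx.prec = 12          # round to 12 significant digits inside this block
...     print(+D(0.1))
...
0.100000000000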

I guess that's too late.

--
--Guido van Rossum (python.org/~guido)