On Fri, Mar 07, 2014 at 05:02:15PM -0800, Guido van Rossum wrote:
On Fri, Mar 7, 2014 at 4:27 PM, Oscar Benjamin
On 8 March 2014 00:10, Guido van Rossum firstname.lastname@example.org wrote: [...]
I also have the feeling that there's not much use in producing a Decimal with 53 (or more?) decimal digits of precision from a float with 53 bits -- at least not by default. Maybe we can relent on this guarantee and make the constructor by default use fewer digits (e.g. using the same algorithm used by repr(<float>)) and have an optional argument to produce the full precision? Or reserve Decimal.from_float() for that?
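(For concreteness, the repr-based conversion Guido describes is available today by going through a string; this is my illustration, not code from the thread:)

```python
from decimal import Decimal

# Today's exact default keeps every bit of the binary float,
# while converting via repr() keeps only the shortest round-trip digits,
# which is roughly what the proposed new default would do.
exact = Decimal(2.01)        # the full binary expansion, 2.00999999...
short = Decimal(repr(2.01))  # Decimal('2.01')
print(exact)
print(short)
```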
I'd rather have a TypeError than an inexact conversion. The current behaviour lets you control how rounding occurs. (I don't use the decimal module unless this kind of thing is important.)
Like Oscar, I would not like to see this change.
My proposal still gives you control (e.g. through a keyword argument, or by using Decimal.from_float() instead of the constructor). It just changes the default conversion.
Who is this default conversion meant to benefit?
I think that anyone passing floats to Decimal ought to know what they are doing, and be quite capable of controlling the rounding via the decimal context, or via an intermediate string:
value = Decimal("%.3f" % some_float)
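(The context-based route mentioned above can look like this; my sketch, using quantize with an explicit rounding mode:)

```python
from decimal import Decimal, ROUND_HALF_UP

some_float = 2.675
# Convert exactly first, then round in a separate, explicit step,
# choosing the rounding mode rather than relying on a default.
value = Decimal(some_float).quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
print(value)
```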
I believe that Mark wants to use Python as an interactive calculator, and wants decimal-by-default semantics without having to type quotes around his decimal numbers. I suspect that for the amount of time and effort put into this discussion, he probably could have written an interactive calculator using Decimals with the cmd module in half the time :-)
It seems to me that Decimal should not try to guess what precision the user wants for any particular number. Mark is fond of the example of 2.01, and wants to see precisely 2.01, but that assumes that some human being typed in those three digits. If it's the result of some intermediate calculation, there is no reason to believe that a decimal with the value 2.01 exactly is better than one with the value 2.00999999999999978...
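(A quick interpreter check of the 2.01 example, mine rather than from the thread:)

```python
from decimal import Decimal

# 2.01 has no exact binary floating-point representation, so the
# exact conversion exposes the value the float actually stores:
print(Decimal("2.01"))  # what a human typed: 2.01
print(Decimal(2.01))    # what the float holds: 2.00999999999999978...
```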
Note that the issue is more than just Decimal:
py> from fractions import Fraction
py> Fraction(2.01)
Fraction(1131529406376837, 562949953421312)
Even more so than for Decimal, I will go to the barricades to defend this exact conversion. Although perhaps Fraction could grow an extra argument to limit the denominator as a short-cut for this:
py> Fraction(2.01).limit_denominator(10000)
Fraction(201, 100)
I think it is important that Decimal and Fraction perform conversions as exactly as possible by default. You can always throw precision away afterwards, but you can't recover what was never saved in the first place.
And unfortunately the default precision for Decimal is much larger than for IEEE double, so using unary + does not get rid of those extraneous digits -- it produces Decimal('3.140000000000000124344978758'). So you really have to work hard to produce the intended Decimal('3.14') (or Decimal('3.140000000000000')).
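(A sketch of what that "working hard" looks like with a local context; my example:)

```python
from decimal import Decimal, localcontext

# With the default 28-digit context, unary + keeps most of the noise:
print(+Decimal(3.14))  # 3.140000000000000124344978758

# Shrinking the context precision does recover the short form:
with localcontext() as ctx:
    ctx.prec = 3       # three significant digits
    short = +Decimal(3.14)
print(short)           # 3.14
```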
I wouldn't object to Decimal gaining a keyword argument for precision, although it is easy to use an intermediate string. In other words,

Decimal(math.pi, prec=2)

could be the same as

Decimal("%.2f" % math.pi)
But what would prec do for non-float arguments? Perhaps we should leave Decimal() alone and put any extra bells and whistles that apply only to floats into the from_float() method.
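(If the bells and whistles did go on from_float(), one hypothetical shape, purely my sketch and not a real decimal-module API, might be:)

```python
import math
from decimal import Decimal


def from_float(f, places=None):
    """Hypothetical helper: exact conversion by default, with an
    optional rounding step to a fixed number of decimal places.
    This is an illustration only, not an existing API."""
    d = Decimal(f)  # exact, as Decimal(float) behaves today
    if places is None:
        return d
    # scaleb(-places) builds the quantum, e.g. Decimal('0.01') for places=2
    return d.quantize(Decimal(1).scaleb(-places))


print(from_float(math.pi))     # the full exact binary expansion
print(from_float(math.pi, 2))  # 3.14
```

Keeping this on from_float() would sidestep the question above, since the argument would only ever meet floats.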