On Fri, Mar 7, 2014 at 5:36 PM, Steven D'Aprano <steve@pearwood.info> wrote:
> On Fri, Mar 07, 2014 at 05:02:15PM -0800, Guido van Rossum wrote:
> > On Fri, Mar 7, 2014 at 4:27 PM, Oscar Benjamin
> > <oscar.j.benjamin@gmail.com> wrote:
>
> > > On 8 March 2014 00:10, Guido van Rossum <guido@python.org> wrote:
[...]

> > > > I also have the feeling that there's not
> > > > much use in producing a Decimal with 53 (or more?) *decimal digits* of
> > > > precision from a float with 53 *bits* -- at least not by default. Maybe
> > > > we can relent on this guarantee and make the constructor by default use
> > > > fewer digits (e.g. using the same algorithm used by repr(<float>)) and
> > > > have an optional argument to produce the full precision? Or reserve
> > > > Decimal.from_float() for that?
> >
> > > I'd rather a TypeError than an inexact conversion. The current
> > > behaviour allows you to control how rounding occurs. (I don't use the
> > > decimal module unless this kind of thing is important).

> Like Oscar, I would not like to see this change.

Can you argue your position?
 
> > My proposal still gives you control (e.g. through a keyword argument, or by
> > using Decimal.from_float() instead of the constructor). It just changes the
> > default conversion.

> Who is this default conversion meant to benefit?
>
> I think that anyone passing floats to Decimal ought to know what they
> are doing,

I think that is provably not the case. (That they don't know -- we can still argue about whether they ought to know.)
 
> and be quite capable of controlling the rounding via the
> decimal context, or via an intermediate string:
>
> value = Decimal("%.3f" % some_float)

Or, Decimal(repr(some_float)), which DWIMs.
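
To make the contrast concrete (a quick interactive sketch, not typed into a real session in exactly this form):

py> import math
py> from decimal import Decimal
py> Decimal("%.3f" % math.pi)    # caller has to pick the number of places
Decimal('3.142')
py> Decimal(repr(math.pi))       # repr() picks the shortest string that round-trips
Decimal('3.141592653589793')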
 
> I believe that Mark wants to use Python as an interactive calculator,
> and wants decimal-by-default semantics without having to type quotes
> around his decimal numbers. I suspect that for the amount of time and
> effort put into this discussion, he probably could have written an
> interactive calculator using Decimals with the cmd module in half the
> time :-)

Mark doesn't like it when we try to guess what he means or believes, but my guess is that he's actually trying to argue on behalf of Python users who would benefit from using Decimal but don't want to be bothered with a lot of rules. Telling such people "use the Decimal class" is much easier than telling them "use the Decimal class, and always put quotes around your numbers" (plus the latter advice is still very approximate, so the real version would be even longer and more confusing).
 
> It seems to me that Decimal should not try to guess what precision the
> user wants for any particular number. Mark is fond of the example of
> 2.01, and wants to see precisely 2.01, but that assumes that some human
> being typed in those three digits. If it's the result of some
> intermediate calculation, there is no reason to believe that a decimal
> with the value 2.01 exactly is better than one with the value
> 2.00999999999999978...

Given the effort that went into making repr(<float>) do the right thing, I think you're dismissing a strawman here. The real proposal does not require guessing intent -- it just uses an existing, superior conversion from float to decimal that is already an integral part of the language. And it uses some static information that Decimal currently ignores completely, namely that all Python floats (being IEEE doubles) are created with 53 bits of precision.
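
Concretely (sketch; the long digit string is elided here):

py> from decimal import Decimal
py> Decimal(2.01)          # current behaviour: the exact value of the double
Decimal('2.00999999999999978...')
py> Decimal(repr(2.01))    # what the repr()-based default would give
Decimal('2.01')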
 
> Note that the issue is more than just Decimal:
>
> py> from fractions import Fraction
> py> Fraction(2.01)
> Fraction(1131529406376837, 562949953421312)
>
> Even more so than for Decimal, I will go to the barricades to defend
> this exact conversion. Although perhaps Fraction could grow an extra
> argument to limit the denominator as a short-cut for this:
>
> py> Fraction(2.01).limit_denominator(10000)
> Fraction(201, 100)

An inferior solution. Anyway, nobody cares about Fraction.
 
> I think it is important that Decimal and Fraction perform conversions as
> exactly as possible by default.

Again, why produce 56 *digits* of precision by default for something that only intrinsically has 53 *bits*?
 
> You can always throw precision away
> afterwards, but you can't recover what was never saved in the first
> place.

There should definitely be *some* API to convert a float to the exact mathematical Decimal. But I don't think it necessarily needs to be the default constructor. (It would be different if Decimal was a fixed-point type. But it isn't.)
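
For what it's worth, Decimal.from_float() already exists and does the exact conversion today, so it could simply keep that role. Sketch:

py> import math
py> from decimal import Decimal
py> Decimal.from_float(math.pi)    # exact, all 53 bits' worth of digits
Decimal('3.141592653589793115997963468544185161590576171875')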
 
[...]
> > And unfortunately the default precision for Decimal is much larger than for
> > IEEE double, so using unary + does not get rid of those extraneous digits
> > -- it produces Decimal('3.140000000000000124344978758'). So you really have
> > to work hard to recover the intended Decimal('3.14') (or
> > Decimal('3.140000000000000')).

> I wouldn't object to Decimal gaining a keyword argument for precision,
> although it is easy to use an intermediate string. In other words,
>
>     Decimal(math.pi, prec=2)
>
> could be the same as
>
>     Decimal("%.2f" % math.pi)

This is actually another strawman. Nobody in this thread has asked for a way to truncate significant digits like that. It's the *insignificant* digits that bother some. The question is: what should Decimal(math.pi) do? It currently returns Decimal('3.141592653589793115997963468544185161590576171875'), which helps no one except people interested in how floats are represented internally.
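
For the record, here is roughly what getting rid of those digits takes today, versus what a repr()-based default would do (sketch):

py> from decimal import Decimal, getcontext
py> getcontext().prec = 3    # has to be set to match the digits you want to keep
py> +Decimal(3.14)           # unary plus rounds to the current context
Decimal('3.14')
py> Decimal(repr(3.14))      # the proposed default needs no context fiddling
Decimal('3.14')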
 
> But what would prec do for non-float arguments? Perhaps we should leave
> Decimal() alone and put any extra bells and whistles that apply only to
> floats into the from_float() method.
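
To be concrete about the division of labour I have in mind, a purely hypothetical sketch (the helper and its keyword are made up for illustration; Decimal(repr(x)) is the key trick):

    # Hypothetical sketch only -- names invented for illustration.
    from decimal import Decimal

    def decimal_from(x, exact=False):
        if exact:
            return Decimal.from_float(x)   # all 53 bits, as the constructor gives today
        return Decimal(repr(x))            # shortest string that round-trips the float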

--
--Guido van Rossum (python.org/~guido)