On Sat, Mar 8, 2014 at 4:54 PM, Guido van Rossum <guido@python.org> wrote:

> When a non-expert writes Decimal(1.1), each of the three outcomes above surprises. We know that (1) was unpopular; that's why we changed it. We now know that (3) is unpopular at least in some circles (Mark Harris can't be the only one who doesn't like it). Changing to (2) wouldn't do much to address this, because the default context has way more precision than float, so it still shows a lot of extraneous digits.

That at least could be addressed by making the default context one that corresponds to IEEE's decimal64, which has a precision of 16 decimal digits; there doesn't seem to be any particularly good rationale for the current choice of default context. And that would address your point (which I agree with) that `Decimal(x)` currently shows way more digits than are useful. It wouldn't address Mark Harris's needs, though, and would in some sense make things worse for non-expert users: for *most* floats x, Decimal(x) would then give what the user expects purely because 16 digits is a good match for the binary float precision, but for a small fraction of inputs it would give a surprising result.

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 16
>>> +Decimal(1.1)  # + to simulate rounding to 16 digits
Decimal('1.100000000000000')
>>> +Decimal(1.2)
Decimal('1.200000000000000')
>>> +Decimal(9.7)
Decimal('9.699999999999999')

> This feels like a strawman: Decimal(2**1000 + 1) and Decimal(2.0**1000 + 1) produce different outcomes (the latter gives a lot of digits but is one too small), and similar examples with a large exponent (10000) differ dramatically (the float version raises an exception).

> For most purposes it's a fluke that 2.0**1000 can be represented exactly as a float, and the argument doesn't convince me at all. There are just too many odd examples like that, and it will always remain a minefield.
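For concreteness, here's a quick session (CPython, today's exact-conversion semantics) showing both discrepancies:

```python
from decimal import Decimal

# The +1 is absorbed: floats carry only 53 bits of precision, so
# 2.0**1000 + 1 == 2.0**1000 exactly, and the exact conversion to
# Decimal comes out one smaller than the int version.
assert Decimal(2**1000 + 1) - Decimal(2.0**1000 + 1) == 1

# With a larger exponent the int version still works, but the float
# computation overflows before Decimal is ever involved.
Decimal(2**10000 + 1)               # fine: Python ints are unbounded
try:
    Decimal(2.0**10000 + 1)
except OverflowError as exc:
    print("float version raises:", exc)
```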

Accepted, but I believe the proposal would break a number of other expectations too with respect to ints and floats. For one, we currently have the unsurprising, and I believe desirable, property that conversion to Decimal is monotonic: for any finite numbers (int, float or Decimal) x and y, if x <= y then Decimal(x) <= Decimal(y). The proposal would break that property, too: you could find examples of an integer x and float y such that `x < y` and `Decimal(x) > Decimal(y)` would be simultaneously True.

>>> x = 49534541648432951
>>> y = x + 2.0
>>> x < y
True
>>> Decimal(x) > Decimal(repr(y))  # simulating proposed meaning of Decimal(y)
True

I'm still struggling a bit to express exactly what it is that bothers me so much about the proposal. It's a combination of the above with:

- there's an obvious *correct* way to convert any value to Decimal: round to the context precision using the context rounding mode, as is done almost universally for Decimal arithmetic operations. It feels wrong for something as direct as `Decimal(x)` to do anything else. (The current exact conversions actually *do* rankle with me a bit, but context-rounded conversion is what I'd replace them with.)

- this proposal would not play well with other binary formats if any were ever added: if we introduced a float128 (or float32) type, it would be awkward to reconcile the proposed conversion with a conversion from float128 to Decimal. That's mostly because the semantics of the proposed conversion depend on the *format* of the input type, which is unusual for floating-point operations and floating-point standards: operations tend to be based purely on the *values* of the inputs, disregarding their formats.

- if we're aiming to eliminate surprises, the 'fix' doesn't go far enough: Decimal(1.1 + 2.2) will still surprise, as will Decimal(0.12345123451234512345)

- conversely, the fix goes too far: it feels wrong to change the semantics of float-to-Decimal conversion when the real problem is that the user wants or expects floating-point literals to denote decimal numbers.
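A sketch of the first and third points above, using today's stdlib (`Context.create_decimal_from_float` already does the round-to-context conversion I have in mind):

```python
from decimal import Context, Decimal, ROUND_HALF_EVEN

# Convert by rounding the float's exact value to the context
# precision with the context rounding mode -- what I'd consider
# the "correct" conversion.
ctx = Context(prec=16, rounding=ROUND_HALF_EVEN)
print(ctx.create_decimal_from_float(9.7))    # 9.699999999999999

# And the proposal's repr-based conversion still surprises once any
# binary arithmetic has happened before the conversion:
print(Decimal(repr(1.1 + 2.2)))              # 3.3000000000000003
```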

If the proposal goes forward, I'll live with it, and will simply avoid using the `Decimal` type to convert from floats or ints. But I'd really prefer to keep the short Decimal(x) spelling as something simple and non-magic, and find more complicated ways (quotes!) of spelling the magic stuff.
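By "quotes" I mean the existing string-constructor spelling, which already gives the decimal value the user presumably intended:

```python
from decimal import Decimal

# String input: no binary float ever enters the picture.
print(Decimal('1.1'))   # 1.1

# Float input today: the exact binary value, in all its glory.
print(Decimal(1.1))     # 1.100000000000000088817841970012523233890533447265625
```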

> I wonder what Cowlishaw would say about our current discussion. He is also the father of Rexx...

I wonder that, too.

I've said too much already; beyond registering my strong -1 on this proposal, I'm going to keep out of further discussion.

Mark