I'll try to respond to Mark Dickinson's second message (and nothing else that has happened in the thread since last night), because (a) it concisely summarizes his position and (b) it brings up a new strawman.

On Sat, Mar 8, 2014 at 2:01 AM, Mark Dickinson <dickinsm@gmail.com> wrote:
On Sat, Mar 8, 2014 at 9:43 AM, Mark Dickinson <dickinsm@gmail.com> wrote:

I see three sane options for float to Decimal conversion:

1. Raise an exception.
2. Round to the nearest Decimal value using the current context for that round operation.
3. Do what we're currently doing, and do an exact conversion.

I think you're writing this entirely from the POV of an expert in floating point. And I'm glad we have experts like you around! I don't consider myself an expert at all, but I do think I have something to bring to the table -- insight into the experience of non-expert users.

When a non-expert writes Decimal(1.1), each of the three outcomes above is surprising. We know that (1) was unpopular; that's why we changed it. We now know that (3) is unpopular at least in some circles (Mark Harrison can't be the only one who doesn't like it). Changing to (2) wouldn't do much to address this, because the default context has way more precision than float, so it still shows a lot of extraneous digits.
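
To make the (2)-vs-(3) comparison concrete, here's a quick interpreter sketch, using unary plus as a stand-in for "round to the current context" (roughly what (2) would give under the default 28-digit context):

>>> from decimal import Decimal
>>> Decimal(1.1)    # (3): exact conversion, what we do today
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> +Decimal(1.1)   # roughly (2): rounded to the default 28-digit context
Decimal('1.100000000000000088817841970')

Still plenty of digits of noise, so the non-expert is hardly better off.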

Yes, it's trivial to get rid of those extra digits -- just use quotes. But if *my* proposal were adopted, it would be just as trivial for numerical experts to get the extra digits -- just use from_float(). At this point, my claim is that we're essentially talking about what the better experience is for most users, and while I am not much of a user of Decimal myself, I believe that my proposal benefits more people and situations than it has downsides.
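
For comparison, both spellings I'm contrasting already exist in today's module:

>>> from decimal import Decimal
>>> Decimal('1.1')               # the quoted form most users reach for
Decimal('1.1')
>>> Decimal.from_float(1.1)      # the explicit form experts could keep using
Decimal('1.100000000000000088817841970012523233890533447265625')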
 
Proposals for change should also take into consideration that Decimal already does *exact* conversions for integers (and I believe has done since it first existed).  It would be quite surprising for `Decimal(2**1000)` and `Decimal(2.0**1000)` to be different numbers.

This feels like a strawman, since Decimal(2**1000 + 1) and Decimal(2.0**1000 + 1) produce different outcomes (the latter gives a lot of digits but is one too small), and the similar examples with a large exponent (10000) differ dramatically (the float version raises an exception).
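
A quick check of those claims (note the OverflowError comes from evaluating 2.0**10000 itself, before Decimal is even involved, and the exact message is platform-dependent):

>>> from decimal import Decimal
>>> Decimal(2**1000 + 1) == Decimal(2.0**1000 + 1)
False
>>> Decimal(2**1000 + 1) - Decimal(2.0**1000 + 1)
Decimal('1')
>>> Decimal(2.0**10000)
Traceback (most recent call last):
  ...
OverflowError: (34, 'Numerical result out of range')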

For most purposes it's a fluke that 2.0**1000 can be represented exactly as a float, and the argument doesn't convince me at all. There are just too many odd examples like that, and it will always remain a minefield.
 
If we change the float behaviour, we might also want to change conversion from int to round to the nearest Decimal using the current context.  Again, that's closer to what IEEE 754 specifies for the "convertFromInt" operation.

I don't think that's on the table. Python's int doesn't lose precision, and users know this and depend on it. (In fact, I find ignoring the current context in the constructor a totally reasonable approach -- it does this uniformly, regardless of the argument type. My proposal doesn't change this: the context would *still* not be used to decide how a float is converted. Only the fact that it's a float would.)
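
To illustrate the distinction I mean -- constructor exact, arithmetic honoring the default 28-digit context:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec
28
>>> Decimal(10**30)     # constructor: exact, context not consulted
Decimal('1000000000000000000000000000000')
>>> +Decimal(10**30)    # arithmetic: context applied, rounded to 28 digits
Decimal('1.000000000000000000000000000E+30')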
 

On the other hand, we probably shouldn't lend *too* much weight to IEEE 754, especially when talking about choice of precision.  IEEE 754 isn't a perfect fit for Decimal:  the IEEE standard is mostly concerned with fixed width decimal formats, which is subtly different from Mike Cowlishaw's approach of "extensible precision" where the precision is not so closely tied to the format.  Python's decimal module is based on Cowlishaw's standard, not on IEEE 754.

I wonder what Cowlishaw would say about our current discussion. He is also the father of Rexx...

--
--Guido van Rossum (python.org/~guido)