On Mar 7, 2014, at 20:44, Guido van Rossum email@example.com wrote:
I agree. But what do you think of my main proposal? I would retract it if you advised me so.
I think this may be a good time to raise my points from off-list, in hopes that someone can say "those potential problems are not real problems anyone would ever care about".
Today, given any real r, Decimal(float(r)) always gives you the value of the closest binary float to r. With this change, it will sometimes give you the value of the second-closest repr-of-a-binary-float to r. This means the error across any range of reals increases. It's still below the rule-of-thumb cutoff that everyone uses for converting through floats, but it is higher by a nonzero amount that doesn't cancel out.
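To make the distinction concrete, here's a minimal sketch (plain Python 3, nothing beyond the stdlib decimal module) of the two conversions being compared:

```python
from decimal import Decimal

f = 0.1
# Today's behavior: the exact value of the nearest binary float.
exact = Decimal(f)
# The proposed behavior: parse the shortest repr that round-trips.
short = Decimal(repr(f))

print(exact)  # 0.1000000000000000055511151231257827021181583404541015625
print(short)  # 0.1
```

The two results differ by the (tiny) gap between 0.1 and the binary float nearest to it, which is the per-value error the paragraph above is summing over.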
Similarly, today, the distribution of float values across the real number line is... not uniform (because of exponents; I don't know the right technical term, but you know what it looks like). The distribution of repr-of-float values is different.
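A quick way to see the non-uniform spacing (this sketch uses math.ulp, which requires Python 3.9+):

```python
import math

# The gap between consecutive binary floats doubles at every power
# of two, so floats are dense near zero and sparse far from it.
print(math.ulp(1.0))   # 2**-52
print(math.ulp(2.0))   # 2**-51, twice as wide
print(math.ulp(1e16))  # 2.0: above 2**53 even integers get skipped
```

Repr-of-float values, by contrast, are spaced by whatever the shortest round-tripping decimal strings happen to be, which is a different (and lumpier) pattern.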
Do users of Decimal (or, rather, users of Decimal(float) conversion) care about either of those issues? I don't know. I've never written any code that converted floats to Decimals and then used them as anything other than low-precision fixed-point values (like dollars and cents).
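For what it's worth, the low-precision fixed-point use I mean looks something like this (a sketch; the 19.99 price is just an illustrative value):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Round a float price to cents; any sub-cent noise from the binary
# representation is thrown away by the quantize step anyway.
price = Decimal(repr(19.99)).quantize(Decimal('0.01'),
                                      rounding=ROUND_HALF_EVEN)
print(price)  # 19.99
```

In this kind of code, both conversion rules give the same answer after quantizing, so neither of the issues above would be visible.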
By the way, to dismiss any potential performance worries: with both 3.3 and trunk, Decimal(repr(f)) is actually faster than Decimal.from_float(f) by a factor of 1.3 to 3.2 in a variety of quick tests. If that weren't true, you might have to define Decimal(f) as if it were building and then parsing a string but then implement more complicated code that feeds digits from one algorithm to the other; fortunately, it looks like this is a three-line change.
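The quick tests were along these lines (a sketch using timeit; the actual ratio you see will of course depend on the interpreter build and machine):

```python
import timeit
from decimal import Decimal

f = 1234.5678
n = 100_000
t_repr = timeit.timeit(lambda: Decimal(repr(f)), number=n)
t_from = timeit.timeit(lambda: Decimal.from_float(f), number=n)
print(f"Decimal(repr(f)): {t_repr:.3f}s  "
      f"Decimal.from_float(f): {t_from:.3f}s")
```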