On Mar 7, 2014, at 14:18, "Mark H. Harris"
Andrew, you are deliberately side-stepping the point. You apparently disagree with Steven D, and others, who have already stated repeatedly on this list that on 2.7+ binary float literals are copied "exactly" to decimal float representation! That is what is causing this entire problem, sir.
What's causing the problem is that 0.1 and float(0.1) can both be represented exactly in Decimal--and that they are different. You've lost information in converting to float (I may have stated that the wrong way around last time; if so, apologies), and you cannot regain that information when converting back. If you assume people will only ever try to work with numbers that are representable in a few decimal digits, you can use that additional information to retrieve the lost data. But that assumption clearly doesn't make sense in general, and if you treat all float values that way, you'll be adding more error on average.
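(For context, that "use the additional information" trick is essentially what Python's float repr already does on 2.7+/3.1+: it prints the shortest decimal string that round-trips to the same float. A quick sketch of the distinction:)

```python
# repr shows the shortest decimal string that round-trips to this float,
# not the exact value actually stored:
print(repr(0.1))        # '0.1'

# Formatting with more digits reveals the stored binary approximation:
print(f'{0.1:.25f}')    # 0.1000000000000000055511151
```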
That is the reason for this:
    >>> from decimal import Decimal
    >>> Decimal(.1)
    Decimal('0.1000000000000000055511151231257827021181583404541015625')   <====== this is not a rounding problem
Yes it is, it's a rounding problem: converting the ".1" literal to a float. There is no exact decimal->float->decimal conversion, which is why the right way to solve this is to avoid the float conversion--e.g., with your decimal literal syntax, or just constructing decimals from strings.
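(To make that concrete, here is a minimal sketch comparing the two construction paths:)

```python
from decimal import Decimal

# Going through a binary float first bakes the rounding error into the Decimal:
via_float = Decimal(0.1)
# Constructing from a string skips the float step and keeps the exact value:
via_string = Decimal('0.1')

print(via_float)                 # 0.1000000000000000055511151231257827021181583404541015625
print(via_string)                # 0.1
print(via_float == via_string)   # False -- the information was lost at float-conversion time
```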