On Thu, Mar 06, 2014 at 06:19:56PM -0800, Mark H. Harris wrote:
> py> from decimal import Decimal
> py> a=Decimal(1)
> py> b=Decimal(.1)
> py> a+b
> Decimal('1.100000000000000005551115123')
>
> <==== does this not bother you at all ?
Of course it bothers people a little. It bothers me. It also bothers me when
I'm too hot wearing a coat and too cold when I take it off, but sometimes
that's how the universe works.

For the first few releases of Decimal, it prohibited direct conversion of
floats specifically to avoid that issue:

# Python 2.5
py> decimal.Decimal(2.01)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.5/decimal.py", line 648, in __new__
    "First convert the float to a string")
TypeError: Cannot convert float to Decimal.  First convert the float to a string

But that turned out to be more of a nuisance than the problem it was trying
to protect against, so starting in Python 2.7 Decimal supports direct and
exact conversion from float.

The problem is, you are focused on one application of numeric computing to
the exclusion of all else, specifically using Python as an interactive
calculator. But in practice, for many uses, nobody typed in 2.01 and nobody
has any expectation that it will be exactly the value 2.01. Rather, the
value will be the result of some calculation, and there is *absolutely no
reason to think* that the decimal number 2.01 will be more accurate than
the binary number 10.000000101000111101011100001010001111010111000010100,
or for that matter, the base-7 number 2.0033003300330033. The difference
between those three representations is usually minor compared to the actual
measurement errors of the initial data and the rounding errors from the
calculation.

-- Steven
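For what it's worth, here is a quick sketch of the two conversion routes in
Python 3. The exact digit strings shown assume IEEE-754 double-precision
floats and the default 28-digit Decimal context:

```python
from decimal import Decimal

# Direct conversion (allowed since Python 2.7): you get the *exact*
# value of the binary float nearest to 0.1, warts and all.
d_float = Decimal(0.1)

# String conversion: you get exactly the decimal literal you typed.
d_str = Decimal('0.1')

# Going through repr() of the float yields the shortest string that
# round-trips to the same float, which matches the literal here.
d_repr = Decimal(repr(0.1))

print(d_float)  # 0.1000000000000000055511151231257827021181583404541015625
print(d_str)    # 0.1
print(d_repr)   # 0.1

# The sum from the quoted session: the exact float value of 0.1 is added
# to 1 and then rounded to the context's default 28 significant digits.
print(Decimal(1) + Decimal(0.1))  # 1.100000000000000005551115123
```

Which route is "right" depends on intent: Decimal(0.1) faithfully preserves
the float you already have, while Decimal('0.1') preserves the number the
human meant.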