On Mar 22, 2010, at 11:54 AM, Guido van Rossum wrote:

> So now we have a second-order decision to make -- whether
> Decimal+float should convert the float to Decimal using the current
> context's precision, or do it exactly. I think we should just follow
> Decimal.from_float() here, which AFAIK does the conversion exactly --
> the operation will already round properly if needed, and I don't think
> it is a good idea to round twice in the same operation. (Even though
> this is what long+float does -- but the situation is different there
> since float has no variable precision.)

I concur. That is consistent with the basic design of the decimal
module, which treats inputs as exact and only applies rounding
to the results of operations.

FWIW, Mark and I have both been bitten severely by double
rounding in the world of binary floats. Avoiding double rounding
is a darned good idea.


Raymond
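
P.S. For anyone following along, here is a small sketch (mine, not
from the thread) of the behavior being proposed: Decimal.from_float()
keeps every bit of the binary float, and the context's precision only
comes into play when an operation produces a result.

    from decimal import Decimal, getcontext

    getcontext().prec = 6

    # Exact conversion: every bit of the double's value survives,
    # however many decimal digits that takes.
    exact = Decimal.from_float(1.1)
    print(exact)
    # 1.100000000000000088817841970012523233890533447265625

    # Rounding happens once, when an operation produces a result:
    print(exact + Decimal(0))   # 1.10000

    # Converting through the context first would instead round
    # *before* the operation -- i.e. twice in total:
    print(getcontext().create_decimal(exact))   # 1.10000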
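
P.P.S. And the double-rounding hazard itself, shown with decimal
digits for readability (the same effect bites binary floats at
half-ulp boundaries). Again my illustration, not from the thread:

    from decimal import Decimal

    x = Decimal("1.49")

    # Rounding once, straight to an integer (default half-even):
    print(x.quantize(Decimal("1")))      # 1

    # Rounding twice: first to one decimal place, then to an integer.
    # The intermediate step nudges 1.49 up to 1.5, which then rounds
    # to 2 -- a different answer than rounding once.
    step1 = x.quantize(Decimal("0.1"))   # 1.5
    print(step1.quantize(Decimal("1")))  # 2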