<div dir="ltr">On Sat, Mar 8, 2014 at 12:01 PM, Antoine Pitrou <span dir="ltr"><<a href="mailto:solipsis@pitrou.net" target="_blank">solipsis@pitrou.net</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class="">> problems of working with base 10 slide rules. For binary, the largest<br>
> ratio between differences is 2 rather than 10.<br>
<br>
</div>Well, can you explain what difference it does in practice?<br></blockquote><div><br></div><div>Probably not much that the average user would care about, but there are a whole host of 'nice' properties that work in binary floating-point but not in decimal. For example, in binary, assuming IEEE 754, round-to-nearest, etc., you're guaranteed that for any two representable floats x and y, the "average" (x+y)/2 lies in the interval [x, y] (even strictly between x and y provided that x and y are at least 2 ulps apart), so a naively written floating-point bisection search will converge. In decimal that's not true: you can lose a whole digit of precision when adding x and y and end up with a result that's outside [x, y].</div>
>>> from decimal import *
>>> getcontext().prec = 3
>>> x = Decimal('0.516')
>>> y = Decimal('0.518')
>>> (x + y) / 2
Decimal('0.515')    # ouch!

Then if you're doing numerical analysis and error computations, the
"wobble" (the variation in scale of the ratio between the mathematical
relative error and the error expressed in ulps) is 2 for binary, 10 for
decimal. That makes for weaker error bounds and faster-growing errors for
operations done in decimal floating-point rather than binary. (And it's
even worse for hexadecimal floating-point, which is why IEEE 754 is a big
improvement over IBM's hex float format.)
Binary is just better for serious numerical work. :-)

Mark