[Steven D'Aprano <steve@pearwood.info>]
> Thanks Tim!

You're welcome ;-)

> Reading your digressions on the minutiae of floating point maths is
> certainly an education. It makes algebra and real-valued mathematics
> seem easy in comparison.

Hard to say, really. The problem with floating point is that it's so
God-awful lumpy - special cases all over the place. Signaling and
quiet NaNs; signed infinities; signed zeroes; normal finites all with
the same number of bits, but where the gap between numbers changes
abruptly at power-of-2 boundaries; subnormals where the gap remains
the same across power-of-2 boundaries, but the number of _bits_
changes abruptly; all "the rules" break down when you get too close
to overflow or underflow; four rounding modes to worry about; and a
whole pile of technically defined exceptional conditions and related
traps & flags.

Ignoring all that, though, it's pretty easy ;-) 754 was dead serious
about requiring that results act as if a single rounding is done to
the infinitely precise result, and that actually allows great
simplification in reasoning.

The trend these days appears to be using automated theorem-proving
systems to keep track of the mountain of interacting special cases.
Those have advanced enough that we may even be on the edge of getting
provably-correctly-rounded transcendental functions with reasonable
speed. Although it's not clear people will be able to understand the
proofs ;-)
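
You can watch the single-rounding rule pay off from Python. A quick
sketch (the names here are mine, and it leans on fractions.Fraction
to recover the infinitely precise sum of the two doubles):

>>> from fractions import Fraction
>>> x, y = 0.1, 0.2
>>> exact = Fraction(x) + Fraction(y)  # the infinitely precise sum
>>> x + y == float(exact)  # x + y is that sum rounded just once
True
>>> x + y  # "wrong" only if you expected the _decimal_ 0.3
0.30000000000000004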

> I still haven't got over Mark Dickinson's demonstration a few years
> back that under Decimal floating point, but not binary, it is possible
> for the ordinary arithmetic average (x+y)/2 to be outside of the
> range [x, y]:
>
> py> from decimal import getcontext, Decimal
> py> getcontext().prec = 3
> py> x = Decimal('0.516')
> py> y = Decimal('0.518')
> py> (x + y) / 2
> Decimal('0.515')

Ya, decimal fp doesn't really solve anything except the shallow
surprise that decimal fractions generally aren't exactly representable
as binary fractions. That's worth a whole lot to casual users, but it
doesn't address any of the deep problems (to the contrary, it makes
those a bit worse).

I like to illustrate the above with 1-digit decimal fp, because it
makes it immediately apparent that - unlike in binary fp -
multiplication and division by 2 may _not_ be exact in decimal fp. We
can't even average a number "with itself" reliably:

>>> import decimal
>>> decimal.getcontext().prec = 1
>>> x = y = decimal.Decimal(8); (x+y)/2  # 10 is much bigger than 8
Decimal('1E+1')
>>> x = y = decimal.Decimal(7); (x+y)/2  # 5 is much smaller than 7
Decimal('5')

But related things _can_ happen in binary fp too! You have to be near
the edge of representable non-zero finites, though:

>>> x = y = 1e308
>>> x
1e+308
>>> (x+y)/2
inf

Oops. So rewrite it:

>>> x/2 + y/2
1e+308

Better! But then:

>>> x = y = float.fromhex("3p-1074")
>>> x
1.5e-323
>>> x/2 + y/2
2e-323

Oops. A math library has to deal with everything "correctly". Believe
it or not, this paper

    "How do you compute the midpoint of an interval?"
    https://hal.archives-ouvertes.fr/hal-00576641v1/document

is solely concerned with computing the average of two IEEE doubles,
yet runs to 29(!) pages. Almost everything you try fails for _some_
goofy cases.

I personally write it as (x+y)/2 anyway ;-)
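
But if you do need to dodge the overflow case, one common trick is to
fall back on halving-first only when the straightforward sum blows up.
A sketch (my code, not the paper's recipe - it assumes the default
round-to-nearest mode and finite inputs, and the paper is what it
takes to prove or break schemes like this):

>>> import math
>>> def midpoint(x, y):
...     # Sketch only: round-to-nearest, finite inputs assumed.
...     mid = (x + y) / 2        # one rounding - fine unless x + y overflows
...     if math.isinf(mid):      # overflowed: halve first; inputs that big
...         mid = x / 2 + y / 2  # are nowhere near subnormal, so /2 is exact
...     return mid
...
>>> midpoint(1e308, 1e308)
1e+308
>>> midpoint(float.fromhex("3p-1074"), float.fromhex("3p-1074"))
1.5e-323

The fallback is safe precisely because it's only taken when x + y
overflowed, and operands that large lose nothing to halving.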