rosuav at gmail.com
Fri Oct 26 18:45:46 CEST 2012
On Sat, Oct 27, 2012 at 3:23 AM, Steven D'Aprano
<steve+comp.lang.python at pearwood.info> wrote:
> In real life, you are *much* more likely to run into these examples of
> "insanity" of floats than to be troubled by NANs:
> - associativity of addition is lost
> - distributivity of multiplication is lost
> - commutativity of addition is lost
> - not all floats have an inverse
> (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)
> 1e6*(1.1 + 2.2) != 1e6*1.1 + 1e6*2.2
> 1e10 + 0.1 + -1e10 != 1e10 + -1e10 + 0.1
> 1/(1/49.0) != 49.0
> Such violations of the rules of real arithmetic aren't even hard to find.
> They're everywhere.
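Every one of the quoted inequalities can be checked directly at a Python prompt; each comparison below prints False:

```python
# Associativity of addition is lost:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))       # False

# Distributivity of multiplication over addition is lost:
print(1e6 * (1.1 + 2.2) == 1e6 * 1.1 + 1e6 * 2.2)   # False

# Reordering the terms of a sum changes the result:
print(1e10 + 0.1 + -1e10 == 1e10 + -1e10 + 0.1)     # False

# Not every float has an exact multiplicative inverse:
print(1 / (1 / 49.0) == 49.0)                       # False
```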
Actually, as I see it, there's only one principle to take note of: the
"HMS Pinafore Floating Point Rule"...
** Floating point expressions should never be tested for equality **
** What, never? **
** Well, hardly ever! **
The problem isn't with the associativity; it's with the equality
comparison. Replace "x == y" with "abs(x - y) < epsilon" for some
suitable epsilon and all of those statements fulfill people's
expectations. (Possibly with the exception of "1e10 + 0.1 + -1e10",
as it's going to be hard for an automated algorithm to pick a useful
epsilon. But it still works.)
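A minimal sketch of that rule (the helper name `near` and the epsilon values are illustrative, not from the original post). Note that a fixed absolute epsilon is exactly where the "1e10" example bites: the error there is around 4e-7, so a default epsilon of 1e-9 is too tight, while a *relative* tolerance such as `math.isclose` (in the standard library since Python 3.5) scales with the operands:

```python
import math

def near(x, y, epsilon=1e-9):
    """Absolute-tolerance equality test, as described above."""
    return abs(x - y) < epsilon

print(near((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3)))   # True

big = 1e10 + 0.1 + -1e10   # error ~4e-7 from the huge intermediate
print(near(big, 0.1))                # False: default epsilon too tight
print(near(big, 0.1, epsilon=1e-6))  # True: a suitably chosen epsilon
print(math.isclose(big, 0.1, rel_tol=1e-5))  # relative tolerance
```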
Ultimately, it's the old problem of significant digits. Usually it
only comes up with measured quantities, but floating point raises the
same issue. Doing calculations to greater precision than the answer
warrants is fine, but when you come to compare, you effectively need
to round both values off to their actual precision.
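The significant-digits view can be sketched like so; the choice of six fractional digits is purely illustrative, standing in for "the precision the answer actually warrants":

```python
digits = 6  # illustrative: the precision the computation warrants

lhs = 1e10 + 0.1 + -1e10   # picks up ~4e-7 of error via the large intermediate
rhs = 1e10 + -1e10 + 0.1

print(lhs == rhs)                                # False: raw comparison
print(round(lhs, digits) == round(rhs, digits))  # True: compare at real precision
```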