steve+comp.lang.python at pearwood.info
Fri Oct 26 18:23:51 CEST 2012
On Fri, 26 Oct 2012 04:00:03 -0400, Terry Reedy wrote:
> On 10/25/2012 10:19 PM, MRAB wrote:
>> In summary, .index() looks for an item which is equal to its argument,
>> but it's a feature of NaN (as defined by the standard) that it doesn't
>> equal NaN, therefore .index() will never find it.
> Except that it *does* find the particular nan object that is in the
> collection. So nan in collection and list.index(nan) look for the nan by
> identity, not equality.
So it does. I made the same mistake as MRAB, thank you for the correction.
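Terry's point is easy to check interactively; a minimal sketch of the identity-before-equality behaviour of Python's containers:

```python
nan = float("nan")
seq = [1.0, nan, 3.0]

# Equality: a NaN compares unequal to everything, itself included.
print(nan == nan)      # False

# Membership and .index() check identity ("is") before equality,
# so the very same NaN object is still found.
print(nan in seq)      # True
print(seq.index(nan))  # 1

# A *different* NaN object fails both checks and is not found.
print(float("nan") in seq)  # False
```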
> This inconsistency is an intentional decision to
> not propagate the insanity of nan != nan to Python collections.
That's a value judgement about NANs which is not shared by everyone.
Quite frankly, I consider it an ignorant opinion about NANs, despite what
Bertrand Meyer thinks. Reflexivity is an important property, but it is
not the only important property and it is not even the most important
property of numbers. There are far worse problems with floats than the
non-reflexivity of NANs.
Since it is impossible to have a fixed-size numeric type that satisfies
*all* of the properties of real numbers, some properties must be broken.
I can only imagine that the reason Meyer, and presumably you, think that
the loss of reflexivity is more "insane" than the other violations
committed by floating point numbers is unfamiliarity. (And note that I
said *numbers*, not NANs.)
Anyone who has used a pocket calculator will be used to floating point
calculations being wrong, so much so that most people don't even think
about it. They just expect numeric calculations to be off by a little,
and don't give it any further thought. But NANs freak them out because
they are unfamiliar.
In real life, you are *much* more likely to run into these examples of
"insanity" of floats than to be troubled by NANs:
- associativity of addition is lost
- distributivity of multiplication is lost
- commutativity of addition is lost
- not all floats have an inverse
(0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)
1e6*(1.1 + 2.2) != 1e6*1.1 + 1e6*2.2
1e10 + 0.1 + -1e10 != 1e10 + -1e10 + 0.1
1/(1/49.0) != 49.0
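Each of those identities is easy to verify at the Python prompt (assuming the usual IEEE-754 doubles):

```python
# Associativity of addition fails:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False

# Order of addition matters (the 0.1 is mangled when added to 1e10):
print(1e10 + 0.1 + -1e10 == 1e10 + -1e10 + 0.1)  # False

# Distributivity of multiplication can fail as well:
print(1e6*(1.1 + 2.2) == 1e6*1.1 + 1e6*2.2)

# Not every float has an exact multiplicative inverse:
print(1/(1/49.0) == 49.0)                        # False
```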
Such violations of the rules of real arithmetic aren't even hard to find.
In practical terms, those sorts of errors are *far* more significant in
computational mathematics than the loss of reflexivity. I can't think of
the last time I've cared that x is not necessarily equal to x in a
floating point calculation, but the types of errors shown above are
*constantly* affecting computations and leading to loss of precision or
even completely wrong answers.
Once NANs were introduced, keeping reflexivity would lead to even worse
situations than x != x. It would lead to nonsense identities like
log(-1) == log(-2), hence -1 == -2, and so 1 == 2.
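To illustrate: in IEEE-754 arithmetic every invalid operation yields a NAN, and the non-reflexive equality is precisely what stops those unrelated results from comparing equal. A small sketch (Python's math.log(-1) raises an exception rather than returning a NAN, so inf - inf and inf * 0 stand in here for two different invalid operations):

```python
import math

nan1 = float("inf") - float("inf")  # one invalid operation -> NAN
nan2 = float("inf") * 0.0           # a different invalid operation -> NAN

print(math.isnan(nan1), math.isnan(nan2))  # True True

# If NAN == NAN were forced to be true, these two unrelated
# "answers" would be declared equal -- the nonsense described above.
print(nan1 == nan2)  # False
```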