On 11 October 2012 02:20, Steven D'Aprano wrote:
On 11/10/12 09:05, Joshua Landau wrote:

After re-re-reading this thread, it turns out one *(1)* post and two
replies to that post have covered a topic very similar to the one I have
raised. All of the others, to my understanding, do not dwell on the fact
that *float("nan") is not float("nan")* .

That's no different from any other float.

py> float('nan') is float('nan')
False
py> float('1.5') is float('1.5')
False

Floats are not interned or cached, although of course interning is
implementation dependent and this is subject to change without notice.

For that matter, it's true of *nearly all builtins* in Python. The
exceptions are bool(obj), which returns one of two fixed instances,
and int() and str(), where *some* but not all instances are cached.
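A quick way to see those caching exceptions in action (a sketch; the exact cache ranges are CPython implementation details, not language guarantees):

```python
# CPython caches small ints and interns some short strings; these are
# implementation details of CPython, not guarantees of the language.
print(bool(1) is True)                 # True: bools are singletons
print(int("100") is int("100"))        # True in CPython (small-int cache)
print(int("10000") is int("10000"))    # typically False: outside the cache
print(float("1.5") is float("1.5"))    # False: floats are never cached
```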

>>> float(1.5) is float(1.5)
True
>>> float("1.5") is float("1.5")
False

Confusing re-use of identity strikes again. Would anyone care to explain what causes this? I understand that float(1.5) likely returns the input float unchanged, but that's as far as I can reason.
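For what it's worth, the True result above seems to come from two CPython details combining (a sketch, not a language guarantee): equal literal constants within one code object are stored only once, and float() returns an exact-float argument unchanged:

```python
x = 1.5
print(float(x) is x)   # True in CPython: float() returns an exact float as-is

# In "float(1.5) is float(1.5)", the compiler also stores the literal
# 1.5 only once among the code object's constants, so both calls
# receive the very same float object.
code = compile("float(1.5) is float(1.5)", "<test>", "eval")
print([c for c in code.co_consts if isinstance(c, float)])  # one 1.5
print(eval(code))      # True, in CPython at least
```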

What I was saying, though, is that all the other posts assumed that equality between two different NaNs should behave the same as identity between a NaN and itself. That is what I'm really asking about, I guess.
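To make the distinction concrete (a sketch of current CPython behaviour):

```python
import math

nan = float("nan")
print(nan == nan)       # False: IEEE 754 says a NaN compares unequal to everything
print(nan is nan)       # True: it is still one and the same object
print(nan in [nan])     # True: containment checks identity before equality
print(float("nan") == float("nan"))  # False: two distinct NaNs, unequal as well
print(math.isnan(nan))  # True: the reliable way to test for NaN
```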

Response 1:
This implies that you want to differentiate between -0.0 and +0.0. That is

My response:
Why would I want to do that?

If you are doing numeric work, you *should* differentiate between -0.0
and 0.0. That's why the IEEE 754 standard mandates a -0.0.

Both -0.0 and 0.0 compare equal, but they can be distinguished (although
doing so is tricky in Python). The reason for keeping them distinct is to
tell whether a value underflowed to zero from the positive or the negative side.
E.g. log(x) should return -infinity if x underflows from a positive value,
and a NaN if x underflows from a negative.
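Distinguishing the two zeros from Python is indeed tricky, since == can't see the difference; math.copysign and math.atan2 can (a sketch):

```python
import math

neg, pos = -0.0, 0.0
print(neg == pos)                        # True: the two zeros compare equal
print(math.copysign(1.0, neg))           # -1.0: the sign bit survives
print(math.copysign(1.0, pos))           # 1.0
print(math.atan2(0.0, neg) == math.pi)   # True: atan2 respects the sign of zero
print(math.atan2(0.0, pos))              # 0.0
```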

Interesting.

Can you give me a more explicit example? When would you not *want* f(-0.0) to always return the result of f(0.0)? [aka, for -0.0 to warp into 0.0 on creation]
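One concrete case (not from this thread, just an illustration) is branch cuts in complex functions: CPython's cmath follows the C99 convention, where the sign of a zero imaginary part picks the side of the cut, so f(-0.0) and f(0.0) genuinely differ:

```python
import cmath

# sqrt has a branch cut along the negative real axis; the sign of the
# zero imaginary part decides which side of the cut we are on.
print(cmath.sqrt(complex(-1.0, 0.0)))    # 1j
print(cmath.sqrt(complex(-1.0, -0.0)))   # -1j
```

If -0.0 were warped into 0.0 on creation, both calls would return 1j and the lower side of the branch cut would be unreachable.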