I think it might be worth reiterating the value of NaN & Infinity – there are times when you are performing calculations and need to flag that something problematic happened but __not__ stop. Examples include real-time control, where you can discard a bad result but still need to move on to the next one, and operations on large chunks of data, where you need to perform the calculation on every element and then discard/ignore the invalid results (this is almost certainly why Pandas makes such heavy use of NaN).
The value of NaN is that it is a valid input to any floating-point calculation but will almost always produce a NaN as output. Therefore, if something goes wrong at any point in your calculation, you can log that a problem occurred and carry on without losing the fact that the result is invalid. (This is much liked in the functional programming world.)
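As a minimal sketch of that propagation (the function name and data here are invented for illustration):

```python
import math

def mean_of_batch(samples):
    """Average a batch; any NaN in the inputs poisons the result,
    so the caller can test with math.isnan() instead of aborting."""
    total = 0.0
    for s in samples:
        total += s          # NaN propagates through every addition
    return total / len(samples)

good = mean_of_batch([1.0, 2.0, 3.0])          # 2.0
bad = mean_of_batch([1.0, float('nan'), 3.0])  # nan
print(good, math.isnan(bad))                   # 2.0 True
```

The whole batch still gets processed; only the poisoned result is flagged.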
One of the many things that I like about Python is that it is easy to create a NaN with `float('nan')` if you wish to generate and return an invalid value (it is a nightmare to do in C – e.g. when a sensor hasn't replied but you still have to complete the read & reply process).
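A hedged sketch of that sensor case – the "sensor" API here is made up, with `None` standing in for a missing reply:

```python
import math

def read_sensor(raw):
    """Return the reading, or NaN when the sensor did not reply,
    so the read & reply cycle can still run to completion."""
    if raw is None:              # hypothetical "no reply" condition
        return float('nan')
    return float(raw)

readings = [read_sensor(r) for r in (21.5, None, 22.1)]
valid = [r for r in readings if not math.isnan(r)]
print(valid)                     # [21.5, 22.1]
```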
Personally I also like the IEEE INF & -INF values with their guarantee that, regardless of bit length, they can never be an actual value – I have had a lot of issues with code that returns a clipped valid value when it really means overflow. I have even argued that it would be great to have `int('nan')` & `int('inf')` values to mirror these useful properties.
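That guarantee is easy to check in Python: `inf` compares strictly greater than even the largest finite double, so it can never be confused with a clipped-but-valid reading:

```python
import math
import sys

# inf is strictly greater than every finite float, including the
# largest representable double, so it cannot collide with real data.
print(math.inf > sys.float_info.max)    # True
print(-math.inf < -sys.float_info.max)  # True
print(math.isinf(sys.float_info.max))   # False: max is still finite
```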
I am ambivalent on the need to have constants for these special values, but I would argue strongly against any construct such as `infinity = 1e300`: would that still be an overflow on a machine that supported decimal128 (overflow at about 1e6145) or octuple-precision 256-bit floats (overflow at about 1.62e78913), let alone future 512/1024-bit floats? I love the cross-platform support of Python.
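For what it's worth, the stdlib has shipped such constants since Python 3.5 (`math.inf` and `math.nan`), and they are genuine IEEE values rather than a magic literal, so they stay correct whatever the float width:

```python
import math

print(math.isinf(math.inf))   # True: a real IEEE infinity
print(math.isinf(1e300))      # False: 1e300 is just a big finite double
print(math.inf > 1e300)       # True on any platform
print(math.isnan(math.nan))   # True
```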
Sorry to be teaching grandma to suck eggs.