64-bit binary floating point (what Python uses as float) has 1 sign bit, 11 exponent bits, and 52 mantissa bits. NaNs are the values whose exponent bits are all ones and whose mantissa bits are not all zeros; the pattern with exponent bits all ones and mantissa bits all zeros is infinity. Within NaNs, the MSB of the mantissa indicates whether the NaN is signalling or quiet, so we have 51 mantissa bits plus the sign bit (52 bits total) to make distinct NaNs with.

On 12/30/19 12:09 AM, David Mertz wrote:
What is it, 2**48-2 signaling NaNs and 2**48 quiet NaNs? Is my quick count correct (in 64-bit)? Great opportunity for steganography, I reckon.
On Sun, Dec 29, 2019 at 11:51 PM Tim Peters <tim.peters@gmail.com> wrote:
[David]
> Has anyone actually ever used those available bits for the zillions of
> NaNs for anything good?
Yes: in Python, many sample programs I've posted cleverly use NaN bits to hide ASCII encodings of delightful puns ;-)
Seriously? Not that I've seen. The _intent_ was that, e.g., quiet NaNs could encode diagnostic information, such as the source code line number of the operation that produced a qNaN. But I don't know that anyone ever exploited that.
Signaling NaNs were even more quixotic. For example, in theory, an implementation _could_ reserve some range of sNaN bit patterns to mean "the lower 20 bits are an index into a table of extended precision values", and a trap handler could catch the signal when the sNaN was used, and do extended-precision calculation in software, store the result in the table, and return an sNaN containing the result's index (or a regular double if the result fit in the format).
In short, the kinds of things hardware designers think software would love ;-)
-- Richard Damon
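The bit layout described above can be checked directly from Python with the struct module. A minimal sketch (the constant names EXP_ALL_ONES and QUIET_BIT are mine, not standard identifiers), assuming CPython on an IEEE-754 platform, where float('nan') is a quiet NaN:

```python
import math
import struct

def bits(x: float) -> int:
    """Return the raw 64-bit pattern of a Python float."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def from_bits(n: int) -> float:
    """Reinterpret a 64-bit pattern as a Python float."""
    return struct.unpack("<d", struct.pack("<Q", n))[0]

EXP_ALL_ONES = 0x7FF << 52   # the 11 exponent bits, all set
QUIET_BIT = 1 << 51          # MSB of the 52-bit mantissa

# Infinity: exponent all ones, mantissa all zeros.
assert bits(math.inf) == EXP_ALL_ONES

# NaN: exponent all ones, mantissa nonzero. CPython's float('nan')
# is a quiet NaN, so the mantissa MSB is set.
n = bits(float("nan"))
assert n & EXP_ALL_ONES == EXP_ALL_ONES
assert n & QUIET_BIT

# Build a quiet NaN carrying an arbitrary payload in the low 51 bits.
payload = 0x123456789ABCD
q = from_bits(EXP_ALL_ONES | QUIET_BIT | payload)
assert math.isnan(q)
assert bits(q) & ((1 << 51) - 1) == payload  # payload round-trips

# With 52 free bits (sign + 51 low mantissa bits), there are 2**52
# quiet NaN patterns, and 2**52 - 2 signalling ones (the two all-zero
# mantissa patterns are +inf and -inf, not NaNs).
```

Note that merely constructing and copying a quiet NaN like this raises no exception; the payload survives as long as the value only moves through memory, which is what makes the steganography joke above workable in principle.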