Why `divmod(float('inf'), 1) == (float('nan'), float('nan'))`
steve+comp.lang.python at pearwood.info
Thu Sep 18 11:25:57 CEST 2014
Marko Rauhamaa wrote:
> Maybe IEEE had some specific numeric algorithms in mind when it
> introduced inf and nan. However, I have a feeling an exception would be
> a sounder response whenever the arithmetics leaves the solid ground.
I'm afraid that you're missing the essential point of INF and quiet NANs,
namely that they *don't* cause an exception. That is their point.
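You can see the quiet path directly in today's Python; since a NAN never compares equal to anything (including itself), we check with math.isnan:

```python
import math

# The example from the subject line: divmod with an infinity quietly
# returns a pair of NANs rather than raising an exception.
q, r = divmod(float('inf'), 1)
print(math.isnan(q), math.isnan(r))  # True True

# Quiet NANs then propagate silently through further arithmetic.
print(math.isnan(q + 1000.0))  # True
```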
Back in the Dark Ages of numeric computing, prior to IEEE-754, all was
chaos. Numeric computing was a *mess*. To give an example of how bad it
was, there were well-known computers where:
x != 0
would pass, but then:
1/x
would fail with a Division By Zero error (which could mean a segfault).
Another machine could have 1.0*x overflow; a whole class of IBM machines
had 1.0*x lop off one bit of precision each time you called it, so that
multiplying by one would gradually and irreversibly change the number. Chip
designers had a cavalier attitude towards the accuracy of floating point
arithmetic, preferring to optimise for speed even when the result was
wrong. Writing correct, platform-independent floating point code was next
to impossible.
When IEEE-754 was designed, the target was low-level languages similar to C,
Pascal, Algol, Lisp, etc. There were no exceptions in the Python sense, but
many platforms provided signals, where certain operations could signal an
exceptional case and cause an interrupt. IEEE-754 standardised those
hardware-based signals and required any compliant system to provide them.
But it also provided a mechanism for *not* interrupting a long running
calculation just because an exception occurred. Remember that not all
exceptions are necessarily fatal. You can choose whether exceptions in a
calculation will cause a signal, or quietly continue. It even defines two
different kinds of NANs, signalling and quiet NANs: signalling NANs are
supposed to signal, always, and quiet NANs are supposed to either silently
propagate or signal, whichever you choose.
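Python's binary floats always take the quiet path, but the decimal module makes the choice explicit: each context carries a traps table that turns individual conditions into exceptions or lets them pass quietly. A small sketch:

```python
from decimal import Decimal, InvalidOperation, localcontext

# With the trap disabled, the invalid operation 0/0 quietly yields a NaN.
with localcontext() as ctx:
    ctx.traps[InvalidOperation] = False
    print(Decimal(0) / Decimal(0))  # NaN

# With the trap enabled (the default), the same operation signals,
# which in Python means raising an exception.
with localcontext() as ctx:
    ctx.traps[InvalidOperation] = True
    try:
        Decimal(0) / Decimal(0)
    except InvalidOperation:
        print("signalled")
```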
Instead of peppering your code with dozens, even hundreds of Look Before You
Leap checks for error conditions, or installing a single signal handler
which will catch exceptions from anywhere in your application, you have the
choice of also allowing calculations to continue to the end even if they
reach an exceptional case. You can then inspect the result and decide what
to do: report an error, re-do the calculation with different values, skip
that iteration, whatever is appropriate.
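Python's floats only half-support this style (true division by zero still raises), but NAN propagation covers the rest. A toy sketch, with made-up sample data containing infinities:

```python
import math

inf = float('inf')
samples = [3.0, inf, 5.0, -inf]

# No checks inside the calculation: inf + (-inf) quietly becomes a NAN
# and the whole pass runs to completion.
mean = sum(samples) / len(samples)
deviations = [x - mean for x in samples]

# Inspect the result afterwards and decide what to do.
if any(math.isnan(d) for d in deviations):
    print("exceptional case reached; clean the data and re-run")
else:
    print(deviations)
```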
The standard even gives NANs a payload, so that they can carry diagnostic
information. For instance, one NAN might mean 0/0, while another might mean
INF-INF. The Standard Apple Numerics Environment (SANE) in the 1980s and
90s supported that, and it worked really well. Alas, I don't know any other
language or library that even offers a way to inspect the NAN payload, let
alone promises to set it consistently.
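In CPython you can at least reach the payload bits by hand through the struct module. The helper names below are mine, this assumes an IEEE-754 double, and whether a payload survives arithmetic is entirely up to the platform:

```python
import struct

PAYLOAD_MASK = (1 << 51) - 1  # low 51 mantissa bits of an IEEE-754 double

def nan_with_payload(payload: int) -> float:
    # 0x7FF8... = exponent all ones plus the quiet bit; OR in the payload.
    bits = 0x7FF8000000000000 | (payload & PAYLOAD_MASK)
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

def nan_payload(x: float) -> int:
    # Reinterpret the float's bits and mask off the payload field.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return bits & PAYLOAD_MASK

x = nan_with_payload(42)
print(x != x, nan_payload(x))  # True 42
```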
In any case, other error handling strategies continue to work, or at least
they are supposed to work.
A good way to understand how the IEEE-754 standard is supposed to work is to
read and use the decimal.py module. (Strictly speaking, decimal doesn't
implement IEEE-754, but another, similar, standard.) Python's binary
floats, which are a thin wrapper around the platform's C libraries, are
sad and impoverished compared to what IEEE-754 offers.
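For instance, decimal exposes both halves of the standard's machinery: per-condition traps, and sticky status flags you can inspect once the calculation is done. A short sketch:

```python
from decimal import Decimal, DivisionByZero, localcontext

with localcontext() as ctx:
    # Non-stop mode: division by zero returns a quiet Infinity...
    ctx.traps[DivisionByZero] = False
    result = Decimal(1) / Decimal(0)
    print(result)                           # Infinity
    # ...but the sticky flag still records that the condition occurred,
    # so the error is detectable after the fact.
    print(bool(ctx.flags[DivisionByZero]))  # True
```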