On Feb 2, 2015, at 10:22, Ed Kellett firstname.lastname@example.org wrote:
On Mon Feb 02 2015 at 4:52:49 PM Chris Barker email@example.com wrote:
On Mon, Feb 2, 2015 at 7:27 AM, Thomas Güttler firstname.lastname@example.org wrote:
Well, inf is supported in floats because it is supported in the native machine double.
There's also the appeal to mathematics: integers and infinities are mutually exclusive.
An affinely-extended integer set is just as easy to define as an affinely-extended real set, and it preserves the laws of integer arithmetic with the obvious extensions to undefined results (e.g., either a+(b+c) == (a+b)+c or both sides are undefined). In particular, undefined results can be safely modeled with either a quiet nan or with exceptions.
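To make that concrete, here's a minimal sketch of such a type (the class and names are hypothetical, just for illustration): ints plus +inf, -inf, and a quiet-nan-style "undefined" value that propagates, so the extended associativity law above holds.

```python
class XInt:
    """An int, +inf, -inf, or undefined -- a sketch of affinely-extended
    integers, with undefined modeled as a quiet-nan-like value."""

    def __init__(self, value=0, kind='finite'):
        self.value = value   # only meaningful when kind == 'finite'
        self.kind = kind     # 'finite', 'inf', '-inf', or 'nan'

    def __add__(self, other):
        a, b = self.kind, other.kind
        if 'nan' in (a, b):
            return XInt(kind='nan')              # undefined propagates
        if a == 'finite' and b == 'finite':
            return XInt(self.value + other.value)
        if 'inf' in (a, b) and '-inf' in (a, b):
            return XInt(kind='nan')              # inf + -inf is undefined
        # otherwise an infinity absorbs any finite addend
        return XInt(kind=a if a != 'finite' else b)

INF, NEG_INF = XInt(kind='inf'), XInt(kind='-inf')
```

With these rules, (1 + inf) + (-inf) and 1 + (inf + (-inf)) are both undefined, so associativity holds in the extended sense.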
(Also, Python integers don't actually quite model the integers, because for large enough values you get a memory error.)
By contrast, floats aren't numbers (by most definitions) to begin with, so there's no axiomatic basis to worry about breaking.
I suppose it depends how you define "number". It's true that floats don't obey the laws of real arithmetic, but they do have a well-defined model (well, each implementation has a different one, but most people only care about the IEEE 754 double/binary64 implementation) that's closed under analogs of the same operations as the reals, even if those analogs aren't actually the same operations.
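A quick illustration of that distinction: float addition is not associative, so the laws of real arithmetic fail, but the results are still completely determined by binary64, not arbitrary.

```python
# Associativity fails for floats, but deterministically -- every
# IEEE 754 binary64 implementation gives exactly these results.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)   # 0.6000000000000001
print(a + (b + c))   # 0.6
```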
But, more importantly, the floats with inf and -inf model the affinely extended real numbers the same way the floats without infinities model the reals, so there is definitely an axiomatic basis to the design, not just some haphazard choice Intel made for no good reason.
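You can see that model directly from Python: arithmetic on float infinities follows the affinely extended reals, with the undefined cases coming back as a quiet nan.

```python
import math

inf = float('inf')
print(inf + 1)                # inf: an infinity absorbs finite addends
print(1.0 / inf)              # 0.0: finite / inf is zero
print(-inf < 0.0 < inf)       # True: the infinities bound every finite value
print(math.isnan(inf - inf))  # True: inf - inf is undefined, so quiet nan
```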