On Tue, Sep 15, 2020 at 3:45 AM Richard Damon

On 9/14/20 12:34 PM, Stephen J. Turnbull wrote:

That's fine, but Python doesn't give you that. In floats, 0.0 is not true 0, it's the set of all underflow results plus true 0. So by your argument, in float arithmetic, we should not have ZeroDivisionErrors. But we do raise them.

Actually, with IEEE, 0.0 should be all numbers that, when rounded to the nearest representable value, give the value 0.

When we get to very small numbers, the 'subnormals', we get numbers that are really some integral value times 2 to some negative power (for the standard 64-bit floats it is 2 to the -1074). This says that as we approach 0, we have a sequence of evenly spaced representable values: 3*2**-1074, 2*2**-1074, 1*2**-1074, 0*2**-1074.
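That even spacing can be checked directly with the standard library (a sketch; `math.ulp` needs Python 3.9+):

```python
import math

# Smallest positive subnormal double: 1 * 2**-1074.
tiny = 2.0 ** -1074
assert tiny > 0.0

# Subnormals are evenly spaced multiples of 2**-1074, so the gap
# between consecutive representable values near zero is tiny itself:
assert 3 * tiny - 2 * tiny == tiny

# math.ulp(0.0) reports the spacing at zero (Python 3.9+):
assert math.ulp(0.0) == tiny

# Halving the smallest subnormal underflows to 0.0 -- one of the
# "underflow results" that 0.0 stands in for:
assert tiny / 2 == 0.0
```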

Thus the concept of "Zero" makes sense as the nearest representable value.

Now, as has been mentioned, "Infinity" doesn't match this concept, unless you do something like define it as representing all values just above the highest representable value, but that doesn't match the name.
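The mismatch shows up in Python: overflow past the largest finite float does produce inf, but inf doesn't behave like a value sitting just above that maximum (a small sketch using the standard library):

```python
import math
import sys

# Overflowing past the largest finite double yields infinity:
assert sys.float_info.max * 2 == math.inf

# But inf is not "just above the max": subtracting the entire
# finite range from it changes nothing.
assert math.inf - sys.float_info.max == math.inf
```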

In mathematics, "infinity" isn't a value. One cannot actually perform arithmetic on it. But in mathematics, it's perfectly acceptable to talk about stupidly big numbers and do arithmetic that wouldn't ever be possible with full precision (e.g. calculating the last digits of Graham's Number). Mathematicians are also perfectly happy to work with transcendental numbers as actual values, and, again, to perform arithmetic on them even though we don't know the exact value.

Computers can't do that. So we have a set of rules for approximating mathematics in ways that are useful, even though they're not true representations of real numbers. (Something something "practicality beats purity" yada yada.) That's why we have to deal with rounding errors, that's why we have to work with trig functions that aren't perfectly precise, etc, etc. They're *good enough* for real-world usage.

And that's why we need "Infinity" to be a value, of sorts. If you wish, every IEEE float can be taken to represent a range of values (all those that would round to it), but that's not often useful (although AIUI that's how Python chooses to give a shorter display for many numbers - it finds some short number that would have the same representation and prints that). Much more practical to treat them as actual numbers, including that 0.0 really does mean the additive identity (and is the sole representation of it), even though this leads to contradictions of sorts:

x = 1.0
x != 0.0  # True
y = float(1<<53)
x + y == y  # True
# Subtract y from both sides, proving 1.0 == 0.0

If you want to define infinity as any particular value, you're going to get into a contradiction somewhere sooner or later. But its behaviour in IEEE arithmetic is well defined and *useful* even without trying to make everything make sense.

Let's stick to discussing whether "inf" (or "Infinity") should become a new builtin name, and let the IEEE experts figure out whether infinity is a thing or not :)

ChrisA
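For reference, a minimal sketch of that well-defined (if not always intuitive) IEEE behaviour of infinity in Python floats, including the point from earlier in the thread that Python raises ZeroDivisionError where raw IEEE division would return inf:

```python
import math

inf = math.inf  # equivalent to float('inf')

# Ordinary arithmetic with infinity is well defined:
assert inf + 1 == inf
assert 1 / inf == 0.0
assert -inf < 0.0 < inf

# Operations with no sensible answer produce NaN:
assert math.isnan(inf - inf)
assert math.isnan(inf * 0)

# Python departs from raw IEEE here: division by 0.0 raises
# rather than returning inf.
try:
    1 / 0.0
except ZeroDivisionError:
    pass
```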