Paul Moore writes:
On Tue, 15 Sep 2020 at 08:12, Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
Ben Rudiak-Gould writes:
1./0. is not a true infinity.
Granted, I was imprecise. To be precise, 1.0 / 0.0 *could* be a true infinity, and in some cases (1.0 / float(0)) *provably* is, while 1e1000 *provably* is not a true infinity.
I think we're getting to a point where the argument is getting way too theoretical to make sense any more. I'm genuinely not clear from the fragment quoted here,
1. What a "true infinity" is. Are we talking solely about IEEE-754?
Not solely. We're talking about mathematical infinities, such as the count of all integers or the length of the real line, and how they are modeled in IEEE-754 and Python.
Because mathematically, different disciplines have different views, and many of them don't even include infinity in the set of numbers.
But in IEEE-754, inf behaves like the positive infinity of the extended real number line: inf + inf = inf, x > 0 => inf * x = inf, inf - inf = nan, and so on. We need to be talking about a mathematics that at least has defined the extended reals.
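Those identities are easy to check from Python, since math.inf is exactly the IEEE-754 positive infinity (a minimal sketch; nothing here beyond the stdlib math module):

```python
import math

inf = math.inf  # IEEE-754 positive infinity, modeling the extended-real +inf

# inf behaves like the positive infinity of the extended real line:
print(inf + inf)              # inf
print(inf * 2.0)              # inf  (inf * x = inf for x > 0)
print(math.isnan(inf - inf))  # True: inf - inf is undefined, hence nan
```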
2. What do 1.0 and 0.0 mean? The literals in Python translate to specific bit patterns, so we can apply IEEE-754 rules to those bit patterns. There's nothing to discuss here, just the application of a particular set of rules.
Except that application is not consistent. That was the point of Ben's post.
(Unless we're discussing which set of rules to apply, but I thought IEEE-754 was assumed). So Ben's statement seems to imply he's not talking just about IEEE-754 bit patterns.
He's talking about the arithmetic of floating point numbers, which involves rounding rules intended to model the extended real line.
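To make "rounding rules intended to model the extended real line" concrete, here is a small sketch at the top of the finite range (sys.float_info.max is the largest finite double):

```python
import sys

big = sys.float_info.max   # largest finite double, about 1.798e308

# Round-to-nearest at the overflow boundary: a one-ulp relative bump is
# already enough for the exact result to round to inf ...
print(big * 2)                    # inf
print(big * 1.0000000000000002)   # inf: the exact product lies past the last double

# ... while an absolute bump of 1.0 is far below one ulp at this magnitude
# and rounds straight back to big:
print(big + 1.0 == big)           # True
```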
3. Can you give an example of 1.0/0.0 *not* being a "true infinity"? I have a feeling you're going to point to denormalised numbers close to zero, but why are you expressing them as "0.0"?
Not subnormal numbers, but non-zero numbers that round to zero.
>>> 1e200
1e+200
>>> 1e-200
1e-200
>>> 1e200 == 1.0 / 1e-200
True
>>> 1e-200 * 1e-200 == 0.0
True
>>> 1.0 / (1e-200 * 1e-200)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: float division by zero
>>> 1e200 * 1e200
inf
4. Can you provide the proofs you claim exist?
Proofs of what? All of this is just straightforward calculation: in the extended real line, on the one hand; in IEEE 754 floats, on the other hand; and in Python, on the gripping hand.[1]

For numbers with magnitudes "near" 1 (i.e., less than about 100 orders of magnitude bigger or smaller), the three arithmetics agree tolerably well for practical purposes (and Python and IEEE 754 agree exactly). If you need trigonometric or exponential functions, things get quite a bit more restricted -- things break down at pi/2.

For numbers with magnitudes "near" infinity, or nearly infinitesimal, things break down in Python. Whether you consider that breakdown to be problematic or not depends on how you view the (long) list of Curious Things The Dog Did (Not Do) In The Night in Ben's post. I think they're pretty disastrous. I'd be much happier with *either* consistently raising on non-representable finite results, *or* consistently returning -inf, 0.0, inf, or nan. That's not the Python we have.
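A short sketch of both breakdowns, checked against CPython's current behavior (math.pi / 2 is only the nearest double to the real pi/2, which is why tan of it stays finite):

```python
import math

# "Things break down at pi/2": tan of the nearest double to pi/2 is a
# huge finite number, not an overflow to inf:
print(math.tan(math.pi / 2))       # about 1.6e16

# Near infinity, Python is not consistent about non-representable results:
print(1e200 * 1e200)               # inf: * overflows to inf and continues
try:
    1e200 ** 2                     # the same value via ** ...
except OverflowError:
    print("** raised OverflowError")
try:
    1.0 / 0.0                      # ... and / by zero raises as well
except ZeroDivisionError:
    print("/ raised ZeroDivisionError")
```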
(Note: this has drifted a long way from anything that has real relevance to Python - arguments this theoretical will almost certainly fall foul of "practicality beats purity" when we get to arguing what the language should do. I'm just curious to see how the theoretical debate pans out).
I disagree. This is entirely on the practical side of things! Calculations on the extended real line are "just calculations", as are calculations with IEEE 754 floats. Any attentive high school student can learn to do them (at least with the help of a computer for correct rounding in the case of IEEE 754 floats :-). There's no theory to discuss (except for terminology like "true infinity").

The questions under discussion here are whether Python diverges from IEEE 754 (which turns on whether IEEE 754 recommends that you raise exceptions or return inf/nan, but not both), and whether the IEEE 754 model of "overflow as infinity" is intuitive/useful to users of Python who *don't* use NumPy.

Now, to be fair, there is a practicality-beats-purity argument. It can be argued (I think Ben or maybe Chris A alluded to this) that overflows "usually" just happen, but division by zero "usually" is a logic error. Therefore returning inf in the case of overflow (and continuing the computation), while raising on zero division (stopping the computation and getting the attention of the user), is the pragmatic sweet spot. I disagree, but it's a viable argument.

Footnotes:
[1] If you haven't read The Mote in God's Eye, that's the third hand of the aliens, which is big, powerful, clumsy, and potentially deadly. Pretty much the way I think of "inf"!