On Tue, 15 Sep 2020 at 08:12, Stephen J. Turnbull wrote:
> Ben Rudiak-Gould writes:
>  > 1./0. is not a true infinity.
>
> Granted, I was imprecise. To be precise, 1.0 / 0.0 *could* be a true
> infinity, and in some cases (1.0 / float(0)) *provably* is, while
> 1e1000 *provably* is not a true infinity.
I think we're getting to a point where the argument is getting way too
theoretical to make sense any more. I'm genuinely not clear, from the
fragment quoted here:

1. What a "true infinity" is. Are we talking solely about IEEE-754?
Because mathematically, different disciplines have different views, and
many of them don't even include infinity in the set of numbers.

2. What 1.0 and 0.0 mean. The literals in Python translate to specific
bit patterns, so we can apply IEEE-754 rules to those bit patterns.
There's nothing to discuss here, just the application of a particular
set of rules. (Unless we're discussing which set of rules to apply, but
I thought IEEE-754 was assumed.) So Ben's statement seems to imply he's
not talking just about IEEE-754 bit patterns.

3. Can you give an example of 1.0/0.0 *not* being a "true infinity"? I
have a feeling you're going to point to denormalised numbers close to
zero, but why are you expressing them as "0.0"?

4. Can you provide the proofs you claim exist? I'm not actually that
interested in the proofs themselves, I'm just trying to determine what
your axioms and underlying model are.

(Note: this has drifted a long way from anything that has real
relevance to Python - arguments this theoretical will almost certainly
fall foul of "practicality beats purity" when we get to arguing what
the language should do. I'm just curious to see how the theoretical
debate pans out.)

Paul
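[Editor's note: for concreteness, here is a small sketch of what CPython
actually does with the expressions under discussion. The 1e-400 literal
is my own illustrative choice, picked to show how a written "0.0" bit
pattern can arise from underflow of a nonzero value - the situation
point 3 above seems to be probing at.]

```python
import math

# The literal 1e1000 exceeds the IEEE-754 double range (max ~1.8e308),
# so the parser rounds it to the bit pattern for positive infinity,
# even though the number it *denotes* is finite.
assert math.isinf(1e1000)

# Python refuses float division by zero at the language level,
# even though applying IEEE-754 to the bit patterns would give +inf.
try:
    1.0 / 0.0
except ZeroDivisionError:
    pass  # raised by CPython

# The literal 1e-400 underflows below even the subnormal range
# (smallest positive double is ~5e-324), so it becomes the bit
# pattern 0.0 despite denoting a nonzero value. A "0.0" operand can
# therefore stand in for a small finite number, not a mathematical zero.
assert 1e-400 == 0.0
```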