On Sun, Sep 13, 2020 at 8:10 AM Stephen J. Turnbull < turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:

> As Steven points out, it's an overflow, and IEEE *but not Python* is clear about that. In fact, none of the actual infinities I've tried (1.0 / 0.0 and math.tan(math.pi / 2.0)) result in values of inf. The former raises ZeroDivisionError and the latter gives the finite value 1.633123935319537e+16.

Probably because math.pi is not exactly representable, either -- and I'm pretty sure Python's math.tan just wraps the C library's tan, so that's not a Python issue.

And I don't think that IEEE 754 requires that all overflows go to inf. Apparently it does for literals (thanks Steven), but you can, at the language/library level, choose to raise exceptions on overflow.
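In fact, plain CPython already makes both choices, depending on the operation -- a quick sketch:

```python
import math

# float multiplication overflows silently to inf...
print(1e300 * 1e300)  # inf

# ...but the math module's functions raise OverflowError instead
try:
    math.exp(1000.0)
except OverflowError as e:
    print("OverflowError:", e)
```

So "overflow goes to inf" is already a per-operation decision, not a blanket rule.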

In the Python world, numpy lets the user control the error handling:

https://numpy.org/doc/stable/reference/generated/numpy.seterr.html

By default, overflow warns, but still results in an infinity:

In [47]: arr = np.array([1e300])

In [48]: arr * 1e300
<ipython-input-48-598041e50dbb>:1: RuntimeWarning: overflow encountered in multiply
  arr * 1e300
Out[48]: array([inf])

So you get a warning, but still the infinity.

But if you change the error handling, you can get an Exception.

In [49]: np.seterr(over='raise')

Out[49]: {'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'}

In [50]: arr * 1e300
---------------------------------------------------------------------------
FloatingPointError                        Traceback (most recent call last)
<ipython-input-50-598041e50dbb> in <module>
----> 1 arr * 1e300

FloatingPointError: overflow encountered in multiply
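numpy also provides np.errstate, a context manager that scopes the same settings to a block instead of changing them process-wide -- a sketch:

```python
import numpy as np

arr = np.array([1e300])

# Raise on overflow only inside this block; prior settings are restored on exit
with np.errstate(over='raise'):
    try:
        arr * 1e300
    except FloatingPointError as e:
        print("caught:", e)

# Outside the block, the default applies again: warn, but produce inf
print(arr * 1e300)
```

That scoped form is closer to what a hypothetical Python-level error-handling context might look like.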

Python used to have an interface to the fpectl system, presumably to do things like this, but it was removed in py3.7:

""" The fpectl module has been removed. It was never enabled by default, never worked correctly on x86-64, and it changed the Python ABI in ways that caused unexpected breakage of C extensions. (Contributed by Nathaniel J. Smith in bpo-29137.) """

As you've noticed, Python itself has limited use for inf and nan values, which is probably why it got as far as it did with the really bad support before 2.6. But these special values are produced by external code called from Python (including numpy), and so it's helpful to have good support for them in Python anyway.

Which brings up a point about the singleton idea (yes, I know that no one is proposing making singletons of inf and nan at this point anyway): while the Python float type *could* enforce that they be singletons (essentially interning the special values), you can get these values in other types, from other sources, so I think you wouldn't want them to be singletons. Example: numpy's float32 type:

In [52]: np32inf = np.float32('inf')

In [53]: type(np32inf)
Out[53]: numpy.float32

In [54]: math.inf == np32inf
Out[54]: True

An "is" check would never work there.
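The same caveat holds even between plain floats: equality is well defined, but identity is not guaranteed. A quick sketch (the `is` results below are CPython implementation details, not language guarantees):

```python
import math

a = float('inf')
b = float('inf')

print(a == b)         # True -- equality on inf is well defined
print(a is b)         # False in CPython: each call builds a new float object
print(math.inf == a)  # True
print(math.inf is a)  # False in CPython
```

So code should always compare with == (or math.isinf / math.isnan), never with `is`.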

> that mathematical infinities map to inf, mathematical undefineds map to nan, overflows map to inf, and underflows map to zero. But, as a large set of values, perhaps we'd be better off with overflow treated as nan?

IEEE 754 is a very practical standard -- it was well designed, and is widely used and successful. It is not perfect, and in certain use cases, it may not be the best choice. But it's a really good idea to keep to that standard by default.

If we wanted Python to allow users more control, we could add an fp error handling context, maybe like numpy's -- but I don't think there's been much demand for it.

Note that one of the key things about numpy, and why its defaults are different, is that it does array-oriented math. Most people do not want their entire computation stopped just because one value in an array under- or overflows.

That may apply in pure Python to things like comprehensions, but most folks doing heavy-duty number crunching are using numpy anyway.
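For example, in a pure-Python comprehension a single overflowing element aborts the whole computation, and you have to opt back in to IEEE-style behavior by hand -- a sketch (exp_or_inf is a made-up helper name):

```python
import math

values = [1.0, 10.0, 1000.0]  # math.exp(1000.0) overflows

# One bad element kills the entire comprehension:
try:
    results = [math.exp(v) for v in values]
except OverflowError:
    results = None
print(results)  # None -- nothing was computed

# Mapping overflow to inf by hand recovers numpy's per-element default:
def exp_or_inf(v):
    try:
        return math.exp(v)
    except OverflowError:
        return math.inf

print([exp_or_inf(v) for v in values])  # last element is inf
```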

Anyway, the only thing on the table now is putting a couple names in the builtin namespace.

-CHB