On Mon, Feb 21, 2022 at 1:39 PM Tim Peters <tim.peters@gmail.com> wrote:
[Mark Dickinson <dickinsm@gmail.com>]
It Would Be Nice If `-1 * complex(inf, 0.0)` gave `complex(-inf, -0.0)` instead of the current result of `complex(-inf, nan)`.
Except replacing -1 with "-1.0" or "complex(-1)" would presumably _still_ return complex(-inf, nan), despite that
>>> -1 == -1.0 == complex(-1)
True
I think Python should do what C99 and C++ do, which is to define complex(a, b) * c to mean complex(a * c, b * c). Python's complex constructor accepts two complex arguments, so that definition works even in the complex * complex case, though in practice you'd want to optimize for common argument types and defer to __rmul__ if the second argument is of unknown type.
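Concretely, here's a rough sketch of that rule (cmul is just an illustrative helper name, not an existing API; a real implementation would live in complex.__mul__, with fast paths for common argument types and the __rmul__ fallback mentioned above):

import math

def cmul(z, c):
    # C99/C++-style rule: z * c == complex(z.real * c, z.imag * c).
    # Because complex() accepts complex arguments, this one formula covers
    # both complex * real (componentwise scaling) and complex * complex
    # (the ordinary product).
    return complex(z.real * c, z.imag * c)

# Today, -1 is promoted to complex(-1.0, 0.0) first, and the imaginary
# part of the product becomes -1 * 0.0 + 0.0 * inf = nan:
print(-1 * complex(math.inf, 0.0))         # (-inf+nanj)

# Under the componentwise rule the real factor never meets the infinity
# in the other component, so no nan appears:
print(cmul(complex(math.inf, 0.0), -1))    # (-inf-0j)

# complex * complex still gives the usual product:
print(cmul(complex(1, 2), complex(3, 4)))  # (-5+10j)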
It does have the consequence that values that compare equal can have different arithmetic behavior. That already happens in IEEE arithmetic with ±0, and it's a good thing in certain cases: IEEE arithmetic deliberately preserves the sign of 0 where it can; it's not an accident of the representation. There is a difference between a float and a complex with a ±0 imaginary part that justifies the different answers in this case: the former is real by construction, while the latter may have underflowed. It's unfortunate that underflowed values compare equal to true zero, but there's a reason they say you shouldn't compare floating-point numbers for equality.
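To see why preserving the sign of a zero is worth it (a standard example, not specific to this proposal): the sign of a zero imaginary part selects which side of sqrt's branch cut you're on, even though the two inputs compare equal:

import cmath

print(cmath.sqrt(complex(-1.0, 0.0)))    # 1j   -- approaching the cut from above
print(cmath.sqrt(complex(-1.0, -0.0)))   # -1j  -- approaching it from below
print(complex(-1.0, 0.0) == complex(-1.0, -0.0))   # True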
[Tim Peters <tim.peters@gmail.com>]
If that's wanted, better for complex.__mul__ to detect on its own whether component parts are 0, and use a simpler multiplication implementation if so.

I think it's better not to do that, for the reason given above. For example, here is a similar surprise that has nothing to do with type promotion:
>>> 1j * complex(math.inf, -0.0)
(nan+infj)
1j is, in effect, being prematurely promoted to complex because Python lacks an imaginary type. C99 has _Imaginary for exactly this reason. Another consequence of the missing imaginary type is that you can't write negative zeros in complex literals consistently. C++ doesn't have std::imaginary, possibly because it has no imaginary literals.
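For instance, spelling a value with a negative-zero real part as a float-plus-imaginary-literal expression loses the sign, because the addition promotes the float to complex and -0.0 + 0.0 rounds to +0.0; only the two-argument constructor keeps it:

from math import copysign

lit = -0.0 + 1j               # promoted to complex and added: real part becomes +0.0
ctor = complex(-0.0, 1.0)     # the constructor preserves the sign

print(lit == ctor)                 # True  -- they compare equal...
print(copysign(1.0, lit.real))     # 1.0   -- ...but the literal spelling lost the sign
print(copysign(1.0, ctor.real))    # -1.0  -- while the constructor kept it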
[Tim Peters <tim.peters@gmail.com>]
That, plus I'm still waiting for a plausible use case ;-)

Why are complex numbers in core Python in the first place? I'm not sure, but I think it's for the same reason as Ellipsis, binary @, and the third argument to slice: to support numpy, since numpy can't define its own syntax. The core devs wouldn't normally add syntax just for some third-party library, but numpy is so important that they bend the rules. People use numpy to do heavy-duty, real-world number crunching. The weird IEEE corner cases genuinely affect the stability of those calculations; that's why the IEEE standard tried to pin down their behavior. I think that improving Python's built-in numerics would benefit numpy users in the form of fewer failed computations (mysteriously failed, for the many who don't have the numerical-analysis chops to work out what went wrong). I think it would have strategic value. It's less noticeable than adding syntax, but also easier.