On Thu, May 7, 2020 at 12:26 PM Oscar Benjamin wrote:
> On Thu, 7 May 2020 at 02:07, David Mertz wrote:
>> That's the point though. For *most* functions, the substitution principle is fine in Python. A whole lot of the time, numeric functions can take either an int or a float that are equal to each other and produce results that are equal to each other. Yes, I can write something that will sometimes overflow for floats but not ints. Yes, I can write something where a rounding error will pop up differently between the types. But generally, numeric functions are "mostly the same most of the time" with float vs. int arguments.
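
To make both of David's failure modes concrete, here is a minimal sketch (illustrative only, not part of the quoted mail; standard CPython behavior):

    # Rounding: int arithmetic is exact; float rounds past 2**53.
    n = 2**53
    print(n + 1 == n)                   # False -- ints stay exact
    print(float(n) + 1.0 == float(n))   # True -- 2**53 + 1.0 rounds back down

    # Overflow: the float version blows up where the int version doesn't.
    print(10**400)                      # a 401-digit int, computed happily
    try:
        10.0**400
    except OverflowError:
        print("float overflow")

Both calls "work" with either type most of the time; the divergence only shows up at the edges, which is exactly the "mostly the same most of the time" caveat.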
> The question is whether you (or Chris) care about calculating things accurately with floats or ints. If you do try to write careful code that calculates things for one or the other, you'll realise that there is no way to duck-type anything nontrivial, because the algorithms for exact vs inexact or bounded vs unbounded arithmetic are very different (e.g. sum vs fsum). If you are not so concerned about that then you might say that 1 and 1.0 are "acceptably interchangeable".
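
The sum-vs-fsum contrast Oscar names, sketched:

    import math

    xs = [1e16, 1.0, -1e16]
    print(sum(xs))        # 0.0 -- naive left-to-right addition loses the 1.0
    print(math.fsum(xs))  # 1.0 -- fsum tracks partial sums without rounding loss

    # For ints, plain sum() is already exact, so the "right" algorithm
    # genuinely differs by type -- which is Oscar's point about duck-typing.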
I most certainly DO care about accurate integer calculations, which is one of the reasons I'm very glad to have separate int and float types (ahem, ECMAScript, are you eavesdropping here?). In any situation where I would consider them equivalent, it's actually the float that I want (it's absolutely okay if I have to explicitly truncate a float to int if I want to use it in that context), so the only way they'd not be equivalent is if the number I'm trying to represent actually isn't representable. Having to explicitly say "n + 0.0" to force it to be a float isn't going to change that, so there's no reason to make that explicit.

For the situations where things like fsum are important, it's great to be able to grab them. For situations where you have an integer number of seconds and want to say "delay this action by N seconds" and it wants a float? It should be fine accepting an integer.
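For instance (delay_action here is a hypothetical wrapper; time.sleep itself genuinely accepts either type):

    import time

    def delay_action(seconds: float) -> None:
        # Annotated as float, but an int works fine -- no "n + 0.0" needed.
        time.sleep(seconds)

    delay_action(2)    # int: accepted as-is
    delay_action(2.5)  # float: accepted too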
> Please understand though that I am not proposing that 1 == 1.0 should be changed. It is supposed to be a simple example of the knock-on effect of defining __eq__ between non-equivalent objects.
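
One such knock-on effect, for the record (a sketch of standard CPython behavior): because 1 == 1.0, hash() must agree too, and containers then conflate the two values.

    print(1 == 1.0)                   # True
    print(hash(1) == hash(1.0))       # True -- required for dict/set consistency
    print({1, 1.0})                   # {1} -- the set keeps only one element
    print({1: "int", 1.0: "float"})   # {1: 'float'} -- same key, value replaced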
Definitely not. I'm just arguing against your notion that equality should ONLY be between utterly equivalent things. It's far more useful to allow more things to be equal.

ChrisA