# [Python-Dev] Comparing heterogeneous types

Andrew Koenig ark-mlist at att.net
Fri Jun 18 11:30:45 EDT 2004

Michael Chermside cites Armin Rigo:

> Armin Rigo writes:
> > Is it really what we want?  It seems to me that the precision
> > of a float gives roughly the magnitude of "uncertainty" of the
> > value that it represents.  If a float is large enough, then the
> > same float can represent several long integer values.  It should
> > then probably be considered equal to any of these integers,
> > instead of just equal to the one arbitrarily returned by
> > casting it to long.

> I think that what you describe is NOT the behavior that we
> want, and I'll try to give an explanation of why. But please
> bear with me... I'm still a beginner at understanding some
> of the theory behind numerical computing, so I may make some
> mistakes. Hopefully those who understand it better will
> correct me.

I'm not going to correct you--rather, I'd like to amplify what you're saying
by noting that you don't even need to resort to numerical computing to make
the argument.

There's a much simpler line of reasoning: sorting and related algorithms
can be made to work reliably only if they have recourse to a total order
relation over the elements being sorted.  A total order relation is one for
which exactly one of x<y, x==y, and y<x holds for any x and y, and for which
<, ==, and > are transitive, == is reflexive and symmetric, and < and > are
irreflexive and asymmetric.  In other words, x==y if and only if y==x;
x==y and y==z imply x==z; x<y and y<z imply x<z; and x>y and y>z imply
x>z.
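As a minimal sketch of why this matters (my illustration, not from the original post), here is a hypothetical number class whose == treats nearby values as equal, in the spirit of the "uncertainty" interpretation.  The class name and the 0.5 tolerance are invented for illustration:

```python
# Hypothetical sketch: a number type whose == treats nearby values as
# equal (the "uncertainty" interpretation).  Such an == is not
# transitive, which is exactly what sorting cannot tolerate.

class Fuzzy:
    """Compares equal to anything within 0.5 (hypothetical tolerance)."""
    def __init__(self, v):
        self.v = v
    def __eq__(self, other):
        return abs(self.v - other.v) <= 0.5
    def __lt__(self, other):
        return self.v < other.v and not self == other

a, b, c = Fuzzy(0.0), Fuzzy(0.4), Fuzzy(0.8)
assert a == b and b == c     # each neighbor is "equal"...
assert not a == c            # ...but equality is not transitive
```

With an intransitive ==, two "equal" runs of elements can straddle a third element unequal to both, so no consistent sorted order exists.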

If a float is considered equal to more than one integer, then these
requirements fail, because if x is such a float, there are two integers m
and n such that x==m, x==n, and m!=n.  In other words, equality is not
transitive.  You can't sort numbers reliably under such circumstances.
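A concrete Python sketch of the point above (my example, not from the original post): two distinct integers can round to the same float, so a float that compared equal to every integer it "covers" would break transitivity.  Python instead compares the exact mathematical values:

```python
# Two distinct integers that round to the same float.  Near 1e16 the
# spacing between adjacent floats is 2, so 10**16 + 1 rounds to 1e16.
m = 10**16          # exactly representable as a float
n = 10**16 + 1      # rounds to the same float as m
x = float(n)        # x is the float 1e16

assert float(m) == float(n)   # both ints round to the same float...
assert m != n                 # ...but the ints themselves differ

# Python's actual rule: an int and a float are equal only when their
# exact mathematical values are equal, so x equals m but not n.
assert x == m
assert x != n
```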

> There are (at least) two different mental models we can use to
> deal with this limitation of finite precision. One (which you
> are espousing above) is to treat each limited-precision
> number as if it had an inherent uncertainty, and it represented
> all those numbers which are closer to that representable
> number than any other. The other mental model is to suppose
> that each limited-precision number represents just one
> specific value and that the other values simply cannot be
> represented.

Yes.  And as I understand it, that is the school of thought that
computational numerical analysts are more likely to adopt these days.  Such
thinking, for example, gives rise to the requirement in IEEE floating-point
arithmetic that the primitive operations (+, -, *, /, and square root) give
results exactly equal to what you would get by computing the
infinite-precision result and then rounding it once.
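That requirement can be checked directly in Python (my sketch, not from the original post), using `fractions.Fraction` to compute the infinite-precision sum:

```python
# Sketch of the IEEE-754 requirement mentioned above: a primitive
# operation behaves as if computed exactly and then rounded once.
from fractions import Fraction

a, b = 0.1, 0.2
exact = Fraction(a) + Fraction(b)   # infinite-precision sum of the
                                    # exact binary values of a and b
rounded = float(exact)              # one rounding back to a float
assert a + b == rounded             # the hardware + agrees exactly
```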
