[Python-ideas] Way to check for floating point "closeness"?

Chris Barker chris.barker at noaa.gov
Thu Jan 15 17:52:36 CET 2015

On Wed, Jan 14, 2015 at 11:29 PM, Steven D'Aprano <steve at pearwood.info> wrote:

> The question of which to use as the denominator is more subtle. Like
> you, I used to think that you should choose ahead of time which value
> was expected and which was actual, and divide by the actual. Or should
> that be the expected? I could never decide which I wanted: error
> relative to the expected, or error relative to the actual? And then I
> could never remember which order the two arguments went.

I'm on the fence about this -- it seems clear to me that if the user has
specified an "expected" value, the tolerance should be based on that
magnitude. If nothing else, that's because you would more likely have many
computed values for each expected value than the other way around.

And I was thinking that calling the arguments something like "actual" and
"expected" would make that clear in the API, and would certainly help
document the intent.

But the fact that it was never clear to you, even as you were writing the
code, is good evidence that it wouldn't be clear to everyone ;-)

> calculations should be symmetrical, so that
>     error(a, b) == error(b, a)

That does make it simpler to think about and reason about, and makes the
use case more universal (or at least appear more universal): "are these two
values close?" is a simple question to ask, if not to answer.

regardless of whether you have absolute or relative error. Furthermore,
> for safety you normally want the larger estimate of error, not the
> smaller: given the choice between
>     (abs(a - b))/abs(a)
> versus
>     (abs(a - b))/abs(b)
> you want the *larger* error estimate, which means the *smaller*
> denominator. That's the conservative way of doing it.

Which is what the Boost "strong" method does -- rather than computing the
max and using that, it computes both and does an "and" check -- but it's
the same result.
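In code, the "strong" check can be sketched like this (my own sketch of the idea, not Boost's actual implementation; the function and parameter names are made up):

```python
def is_close_strong(a, b, rel_tol=1e-9):
    """True if a and b are within rel_tol relative to *both* values.

    Requiring both checks to pass is equivalent to dividing by the
    smaller magnitude, i.e. taking the larger (more conservative)
    relative error.
    """
    diff = abs(a - b)
    return diff <= rel_tol * abs(a) and diff <= rel_tol * abs(b)
```

Note that swapping a and b can't change the answer, so the check is symmetric for free.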

A concrete example: given a=5 and b=7, we have:
> absolute error = 2
> relative error (calculated relative to a) = 0.4
> relative error (calculated relative to b) = 0.286
> That is, b is off by 40% relative to a; or a is off by 28.6% relative to
> b. Or another way to put it, given that a is the "true" value, b is 40%
> too big; or if you prefer, 28.6% of b is in error.

The thing is, in general we use this to test for small errors, with a low
tolerance. Which value you use to scale only makes a big difference if the
values are far apart, in which case the error will be larger than the
tolerance anyway.

In your example above, if the tolerance is, say, 1%, then it makes no
difference which you use -- you are way off either way. And in the common
use cases -- comparing double precision floating point calculations --
tolerances are more likely to be around 1e-12 than 1e-2 anyway!

So I think that which relative tolerance you use makes little difference in
practice, but it might as well be robust and symmetrical.
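A quick way to see that: for two values that actually are close, all three candidate denominators give essentially the same relative error, so against a sane tolerance the choice is irrelevant:

```python
a = 1.0
b = a * (1.0 + 1e-12)   # b differs from a by one part in 10**12
diff = abs(a - b)

rel_to_a = diff / abs(a)
rel_to_b = diff / abs(b)
rel_to_avg = diff / ((abs(a) + abs(b)) / 2)

# all three are ~1e-12; against a 1e-9 tolerance the choice is irrelevant
for rel in (rel_to_a, rel_to_b, rel_to_avg):
    assert rel < 1e-9
```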

(another option is to use the average of the two values to scale the
tolerance, but why bother?)

> Note that you would never compare to an expected value of zero.
> You *cannot* compare to an expected value of zero, but you certainly can
> be in a situation where you would like to: math.sin(math.pi) should
> return 0.0, but doesn't, it returns 1.2246063538223773e-16 instead. What
> is the relative error of the sin function at x = math.pi?

there isn't one -- that's the whole point -- but there is an absolute
error, so that's what you should check.
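For the sin(pi) case, that just means checking the absolute error directly, e.g.:

```python
import math

# sin(pi) can't be exactly 0.0, since math.pi itself is rounded; but the
# absolute error is tiny, so an absolute-tolerance check works fine:
err = abs(math.sin(math.pi) - 0.0)
assert err < 1e-12   # err is about 1.2e-16
```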

We all agree a relative error involving zero is not defined / possible. So
the question is what to do?

1) Raise a ValueError
2) Let it return "not close" regardless of the other input -- that's
mathematically correct, nothing is relatively close to zero.
3) Automagically switch to an absolute tolerance near zero -- with the user
specifying what it should be.

It seems the implementations I've seen (Boost's, for instance) simply do
(2). But if the point of putting this in the standard library is that
people will have something that can be used for common use cases without
thinking about it, I think maybe (1) or (3) would be better. Probably (3),
as raising an exception would make a mess of this if it were inside a
comprehension or something.
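Option (3) could be sketched like this (a hypothetical signature, just to illustrate the idea -- with abs_tol defaulting to zero, the user has to opt in to an absolute tolerance, and nothing is "relatively close" to zero otherwise):

```python
def is_close(a, b, rel_tol=1e-9, abs_tol=0.0):
    """Relative closeness check with an absolute-tolerance floor.

    Uses the smaller magnitude as the denominator (the conservative,
    "strong" choice discussed above). With the default abs_tol=0.0,
    comparisons against zero always fail, unless the user supplies an
    absolute tolerance appropriate to their problem.
    """
    diff = abs(a - b)
    return diff <= rel_tol * min(abs(a), abs(b)) or diff <= abs_tol
```

Usage: `is_close(sin_of_pi, 0.0)` is False, but `is_close(sin_of_pi, 0.0, abs_tol=1e-12)` is True.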

> What Chris is looking for is a way to get a closeness function that works
> > most of the time. (He posted while I'm writing this.)
> I think the function I have in the statistics test suite is that
> function.

I'll take a look -- it does sound like you've already done pretty much what
I have in mind.

> I would like to see ULP calculations offered as well, but
> Mark thinks that's unnecessary and I'm not going to go to the
> battlements to fight for ULPs.

I suppose it could be added later -- I agree that it could be pretty
useful, but that it's also much harder to wrap your brain around, and
really for a different use-case.

> * you provide two values, and at least one of an absolute error
>   tolerance and a relative error;
> * if the error is less than the error(s) you provided, the test
>   passes, otherwise it fails;
> * NANs and INFs are handled appropriately.

Is this different from the numpy implementation?


In that (according to the docs, I haven't looked at the code):

The relative difference (*rtol* * abs(*b*)) and the absolute difference
*atol* are added together to compare against the absolute difference
between *a* and *b*.
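Spelled out for scalars, that documented rule is (my own re-implementation from the docs, not numpy's code):

```python
def numpy_style_close(a, b, rtol=1e-5, atol=1e-8):
    # note the asymmetry: only b's magnitude scales the relative part,
    # and the two tolerances are summed rather than kept separate
    return abs(a - b) <= atol + rtol * abs(b)
```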

I think it should either be:

- you specify atol or rtol, but only one is used; or

- some way to transition from a relative tolerance to an absolute one near
zero -- I haven't figured out if that can be done smoothly yet.

[Also, it looks like numpy computes the tolerance from the second input,
rather than looking at both, resulting in an asymmetric result -- as
discussed above.]
I've always thought the numpy approach is weird, but now that I think about
it, it would be really horrible (with the defaults) for small numbers:

rtol defaults to 1e-5, atol to 1e-8 -- too big, I think, but that's not the
point here:

In [23]: a, b = 1.1, 1.2

In [24]: np.allclose(a,b)
Out[24]: False
## that's good -- they are pretty far apart

In [27]: a, b = 1.1e15, 1.2e15

In [28]: np.allclose(a,b)
Out[28]: False

# same thing for large values -- still good.

In [25]: a, b = 1.1e-15, 1.2e-15

In [26]: np.allclose(a,b)
Out[26]: True

OOPS! This is NOT what most people would expect!!

In [30]: np.allclose(a,b, atol=0.0)
Out[30]: False

There we go. But with a default atol as large as 1e-8, this is a really
bad default.

I can only imagine whoever wrote this was thinking about really large
values, but not really small values...

(I think this has been brought up in the numpy community, but I'll make
sure to raise it again.)

> >      is_close(218.345, 220, 1, .05)   # OHMs
> >      is_close(a, b, ULP, 2)     # ULPs
> >      is_close(a, b, AU, .001)   # astronomical units
> >
> >
> I don't see any way to generalise those with just a function.
> Generalise in what way?
> > By using objects we can do a bit more.  I seem to recall coming across
> > measurement objects some place.  They keep a bit more context with them.
> A full system of <value + unit> arithmetic is a *much* bigger problem
> than just calculating error estimates correctly, and should be a
> third-party library before even considering it for the std lib.
> --
> Steve
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
> https://mail.python.org/mailman/listinfo/python-ideas
> Code of Conduct: http://python.org/psf/codeofconduct/


Christopher Barker, Ph.D.

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov