[Python-ideas] PEP 485: A Function for testing approximate equality
steve at pearwood.info
Mon Jan 26 07:39:41 CET 2015
On Sun, Jan 25, 2015 at 05:21:53PM -0800, Chris Barker wrote:
> But adding a relative tolerance to unittest makes a lot of sense -- would
> "assertCloseTo" sound entirely too much like assertAlmostEqual? I think it
> may be OK if the docs for each pointed to the other.
CloseTo assumes an asymmetric test, which isn't a given :-)
I prefer ApproxEqual, although given that it is confusingly similar to
AlmostEqual, IsClose would be my second preference.
> > The actual fuzzy comparison itself is handled by a function
> > approx_equal(x, y, tol, rel).
> NOTE: the difference between this and my current PEP version is that
> the absolute tolerance defaults to something other than zero (though it
> looks like it does default to zero for the assert method), and it is a
> symmetric test (what Boost calls the "strong" test)
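The asymmetric/symmetric distinction being discussed can be sketched in a few lines. The function names here are illustrative, not the PEP's API: the asymmetric test scales the tolerance by one designated operand (so the result can change when you swap the arguments), while the symmetric "strong" test scales by the larger magnitude:

```python
def close_to(actual, expected, rel=1e-9):
    # asymmetric: tolerance is relative to `expected` only,
    # so close_to(x, y) may disagree with close_to(y, x)
    return abs(actual - expected) <= rel * abs(expected)

def is_close(x, y, rel=1e-9):
    # symmetric ("strong"): tolerance is relative to the larger
    # magnitude, so the result is the same in either order
    return abs(x - y) <= rel * max(abs(x), abs(y))
```

For example, with rel=0.05, close_to(95.0, 100.0) passes (5 <= 5.0) but close_to(100.0, 95.0) fails (5 > 4.75), while is_close gives the same answer both ways.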
> > - somebody other than me should review NumericTestCase.assertApproxEqual
> > and check that it does nothing unreasonable;
> Well it requires the tolerance values to be set on the instance, and they
> default to zero. So if we were to add this to unittest.TestCase, would you
> make those instance attributes of TestCase?
No, I would modify it to do something like this:

    if tol is None:
        tol = getattr(self, "tol", 0.0)  # or some other default

and similar for rel.
I recommend using short names for the two error tolerances, tol and rel,
because if people are going to be writing a lot of tests, having to write

    self.assertIsClose(x, y, absolute_tolerance=0.001)

will get tiresome.
> > - since there are considerable disagreements about the right way to
> > handle a fuzzy comparison when *both* an absolute and relative error
> > are given, people who disagree with the default definition can simply
> > subclass TestCase and redefine the approx_equal method.
> > (Which is much simpler than having to write the whole assertApproxEqual
> > method from scratch.)
> what assertApproxEqual does is add the ability to test a whole sequence of
> values -- much like numpy's allclose. Do any of the other TestCase
> assertions provide that?
I was motivated by assertEqual and the various sequence/list methods. I
wanted to compare two lists element-wise using an approximate comparison:

    # func1 and func2 are alternate implementations of the same thing
    a = [func1(x) for x in values]
    b = [func2(x) for x in values]
It wouldn't be meaningful to compare the two lists for approximate
equality as lists, but it does make sense to do an element-by-element
comparison. In the event of failure, you want an error message that is
more specific than just the two lists being unequal.
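One way to sketch that element-by-element comparison, with a failure report that names the offending index (first_mismatch is an illustrative helper, not the actual assertApproxEqual implementation):

```python
def first_mismatch(a, b, tol=1e-12, rel=1e-9):
    """Return (index, x, y) for the first pair of elements that is not
    approximately equal, or None if every pair is close enough."""
    assert len(a) == len(b), "sequences must be the same length"
    for i, (x, y) in enumerate(zip(a, b)):
        if abs(x - y) > max(tol, rel * max(abs(x), abs(y))):
            return (i, x, y)
    return None
```

A test method can then report "index 3: 0.5 != 0.5000004" instead of dumping both lists wholesale.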
> But you could also add an optional parameter to pass in an alternate
> comparison function, rather than have it be a method of TestCase. As I
> said, I think it's better to have it available, and discoverable, for use
> outside of unitest.
That's an alternative too. I guess it boils down to whether you prefer
inheritance or the strategy design pattern :-)
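The two customization styles can coexist, as in this sketch (names are illustrative): subclasses override approx_equal to change the default behavior, while a per-call comparison function takes precedence when supplied.

```python
class ApproxTestCase:
    # inheritance: subclasses redefine approx_equal to change behavior
    def approx_equal(self, x, y, rel=1e-9):
        return abs(x - y) <= rel * max(abs(x), abs(y))

    def assert_close(self, x, y, compare=None):
        # strategy: an explicitly passed comparison function wins
        compare = compare or self.approx_equal
        if not compare(x, y):
            raise AssertionError("%r is not close to %r" % (x, y))
```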
I do think there are two distinct use-cases that should be included in
the standard library:
(1) Unit testing, and a better alternative to assertAlmostEqual.
(2) Approximate equality comparisons, as per Guido's example.
Note that those two are slightly different: in the unit testing case,
you usually have a known expected value (not necessarily mathematically
exact, but at least known), while in Guido's example neither value is
necessarily better than the other, you just want to stop when they are
close enough.
Like Nick, I think the first is the more important one. In the second
case, anyone writing a numeric algorithm is probably copying an
algorithm which already incorporates a fuzzy comparison, or they know
enough to write their own. The benefits of a standard solution are
convenience and correctness. Assuming unittest provides a well-tested
is_close/approx_equal function, why not use it?
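Guido's example is not reproduced in this message, but the use-case is the standard iterate-until-converged loop: keep refining a guess and stop once successive guesses are approximately equal. A minimal sketch, with illustrative names and a symmetric comparison since neither guess is the "correct" one:

```python
def is_close(x, y, rel=1e-9):
    # symmetric fuzzy comparison, as discussed in this thread
    return abs(x - y) <= rel * max(abs(x), abs(y))

def sqrt_newton(n):
    """Newton's method for sqrt(n), n > 0, stopping when successive
    guesses agree to within the relative tolerance.  (A purely
    relative test never fires for values converging to zero, so a
    real version would also want an absolute tolerance.)"""
    guess = n / 2.0
    while True:
        new_guess = (guess + n / guess) / 2.0
        if is_close(guess, new_guess):
            return new_guess
        guess = new_guess
```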
> > Note that in this case, at least, we do want a symmetric version of
> > "is_close", since neither guess nor new_guess is "correct", they are
> > both approximations.
> true, but you are also asking the question -- is the new_guess much
> different from guess? Which points to an asymmetric test -- but either would
> work.
I can see we're going to have to argue about the "Close To" versus
"Close" distinction :-)
I suggest that in the interest of not flooding everyone's inboxes, we
take that off-list until we have either a consensus or at least
agreement that we cannot reach consensus.