[Python-ideas] PEP 485: A Function for testing approximate equality
chris.barker at noaa.gov
Mon Jan 26 01:52:41 CET 2015
I've gone through all the messages in this thread since I posted the draft
PEP. I have updated the code and PEP (on GitHub) with changes that were
no-brainers or seemed to have clear consensus. The PEP also needs some better
motivational text -- I'll try to get to that soon.
So I think we're left with only a few key questions:
1) Do we put anything in the stdlib at all? (the big one), which is closely
tied to:
2) Do we explicitly call this a testing utility? That would get reflected
in the PEP, and would mean that we'd probably want to add a unittest.TestCase
assert that uses it.
3) Do we use an asymmetric or symmetric test? Most people seemed to be fine
with the asymmetric test, but Steven just proposed the symmetric again.
I'll comment on that later.
4) What do we do about tolerance relative to zero? Do we define a specific
zero_tolerance parameter, or expect people to set abs_tolerance when they
need to test against zero? And what should the default be?
Here is my take on those issues:
1) Yes, we put something in. It's quite clear that there is no one
solution that works best for every case (without a lot of parameters to
set, anyway), but I'm quite sure that what I've proposed, modified with any
solutions to the issues above, would be useful in the majority of cases.
Let's keep in mind Nick's comment: "Note that the key requirement here
should be "provide a binary float comparison function that is significantly
less wrong than the current 'a == b'"."
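Nick's point is easy to demonstrate: plain equality fails for values that
differ only by representation error, while even a simple relative-tolerance
test handles them. (This is just a sketch of the asymmetric test I've
proposed; the parameter names are illustrative, not final.)

```python
def isclose(actual, expected, rel_tol=1e-8, abs_tol=0.0):
    # close if the difference is within rel_tol of the expected value,
    # or within the absolute floor abs_tol (useful near zero)
    return abs(actual - expected) <= max(rel_tol * abs(expected), abs_tol)

print(0.1 + 0.2 == 0.3)          # False: binary representation error
print(isclose(0.1 + 0.2, 0.3))   # True
```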
2) I do agree that the most frequent use case would be for testing, but
that doesn't always mean formal unit testing (it could be a quick check on
the command line, or in an IPython notebook, or ...), and it certainly doesn't
mean the unittest package. So while I think it's a fine idea to add an
assertion to TestCase that uses this, I'd much rather see it as a stand-alone
function (maybe in the math module). A static method of TestCase would be a
compromise -- it's just some extra typing in an import line -- but I think it
would get lost to folks not using unittest.
I note that Guido wrote: "To some, that means "unit test" or some other
way of testing software. But I hope that's not the main use case."
While Nick wrote: "I would personally find the PEP more persuasive if it
was framed in terms of providing an improved definition of
assertAlmostEqual that better handles the limitations of binary floating
point dynamic range."
So I'm not sure what to make of that.
3) I prefer the asymmetric test -- I've already given my reasons. But I'm
pretty well convinced that, particularly for use in testing, it really
doesn't matter much:
- relative tolerances tend to be small -- on the order of 1e-8 or so. The 10%
example I used in the PEP was to keep the math easy -- but it's not a
common use case (for tests, anyway).
- folks tend to specify relative tolerance to an order of magnitude: i.e.
1e-8, not 1.323463e-8 -- if the magnitude of the tolerance is much smaller
than its precision, then any of the definitions under consideration are
effectively the same.
So any of these are worth putting in the stdlib.
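For concreteness, here is a rough sketch of the two definitions under
discussion (the names and signatures are illustrative, not the proposed
API):

```python
def isclose_asym(actual, expected, rel_tol):
    # asymmetric: tolerance is measured relative to "expected" only
    return abs(actual - expected) <= rel_tol * abs(expected)

def isclose_sym(a, b, rel_tol):
    # symmetric ("weak"): tolerance is relative to the larger value
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

# With a typical small testing tolerance, the two agree:
print(isclose_asym(1.0, 1.0 + 1e-9, 1e-8))  # True
print(isclose_sym(1.0, 1.0 + 1e-9, 1e-8))   # True

# Only with a large tolerance (like the 10% example) does the
# asymmetry become visible -- the result depends on argument order:
print(isclose_asym(9.0, 10.0, 0.1))  # True:  1.0 <= 0.1 * 10.0
print(isclose_asym(10.0, 9.0, 0.1))  # False: 1.0 >  0.1 * 9.0
```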
4) In my thinking and research, I decided that the (essentially optional)
abs_tolerance parameter is the way to handle zero. But if Nathaniel or
anyone else has use cases in mind where that wouldn't work, we could add
the zero_tol parameter to handle it instead. I'm not sure what the default
should be -- if we think there is something special enough about order of
magnitude 1, then something like 1e-12 would be good, but I'm not so sure.
In any case, it would be better to set such a default for zero_tolerance
than for abs_tolerance.
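To illustrate why zero needs special handling: a purely relative test can
never match zero, because the tolerance scales with the value being compared
against. An absolute floor fixes that. (Sketch only; parameter names are
illustrative.)

```python
def isclose(actual, expected, rel_tol=1e-8, abs_tol=0.0):
    # relative tolerance scales with expected; abs_tol is a fixed floor
    return abs(actual - expected) <= max(rel_tol * abs(expected), abs_tol)

# With expected == 0.0 the relative tolerance is rel_tol * 0 == 0,
# so nothing but exact zero passes:
print(isclose(1e-300, 0.0))                 # False
# An explicit absolute tolerance handles it:
print(isclose(1e-300, 0.0, abs_tol=1e-12))  # True
# ...but any default is a guess about the scale of your numbers:
print(isclose(1e-9, 0.0, abs_tol=1e-12))    # False
```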
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker at noaa.gov