[Python-ideas] PEP 485: A Function for testing approximate equality
chris.barker at noaa.gov
Tue Jan 27 18:07:16 CET 2015
On Tue, Jan 27, 2015 at 8:20 AM, Guido van Rossum <guido at python.org> wrote:
> A) Which test do we use:
>> 1) The asymmetric test
>> 2) The "strong" test (minimum relative tolerance)
>> 3) The "weak" test (maximum relative tolerance)
> The problem with this question is that, while it's easy to come up with
> examples where it may matter (e.g. 95 is within 5% of 100, but 100 is not
> within 5% of 95), in practice the tolerance is more likely to be 1e-8, in
> which case it doesn't matter.
Exactly why I'm happy with any of them. I'm trying to suss out whether
anyone else has a reason to reject any of them. If no one does, then
we can just pick one.
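To make the three options concrete, here's a rough sketch (the function
names and the rel_tol default are mine, just for illustration -- not
anyone's proposed API):

    def is_close_asymmetric(actual, expected, rel_tol=1e-8):
        # tolerance scaled by the "expected" value only, so the
        # answer can change if you swap the arguments
        return abs(actual - expected) <= rel_tol * abs(expected)

    def is_close_strong(a, b, rel_tol=1e-8):
        # symmetric "strong" test: the difference must be within
        # tolerance of *both* values (scaled by the smaller one)
        return abs(a - b) <= rel_tol * min(abs(a), abs(b))

    def is_close_weak(a, b, rel_tol=1e-8):
        # symmetric "weak" test: the difference must be within
        # tolerance of *either* value (scaled by the larger one)
        return abs(a - b) <= rel_tol * max(abs(a), abs(b))

With Guido's example and rel_tol=0.05: is_close_asymmetric(95, 100) is True
but is_close_asymmetric(100, 95) is False, while the symmetric versions give
the same answer regardless of argument order.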
> B) Do we provide a non-zero default for the absolute tolerance? If so what
>> should the value be? Remember that this serves primarily to provide a check
>> against zero.
> It feels like absolute tolerance is a completely different test. And it is
> a much simpler test for which we don't need a helper function -- it's just
> abs(x) < tolerance.
> When does a program need *both* absolute and relative tolerance in a
> single test?
Because we want it to be able to do something sane when comparing to zero
-- the abs_tolerance lets you set a minimum tolerance that will do
something reasonable near zero. (We could use a zero_tolerance instead, as
Nathaniel has suggested, but that creates the discontinuity that, for
example, 1e-12 is considered close to zero but is not close to 1e-100 -- I
think that's a bad idea for a call with the same arguments.) I spent a good
while thinking about this and playing with it, and it became clear to me
that this is the best way to go for a not-too-surprising result. And it's
consistent with what numpy and Steven's statistics test code do.
Still TBD is what the default should be, though.
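For illustration, here's roughly what combining the two tolerances looks
like (using the weak/symmetric relative test here just as an example --
which relative test we pick and what the defaults should be are exactly the
open questions):

    def is_close(a, b, rel_tol=1e-8, abs_tol=0.0):
        # pass if either criterion is met: the relative test covers
        # values of ordinary magnitude, while abs_tol provides a floor
        # so that comparisons against (or very near) zero can still
        # succeed -- with abs_tol=0.0, nothing but exactly 0.0 is ever
        # "close" to 0.0 under a purely relative test
        return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

For example, is_close(1e-13, 0.0, abs_tol=1e-12) is True, whereas
is_close(1e-13, 0.0) with abs_tol left at 0.0 is False.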
> I still think this is better off as a recipe than as a helper function.
Are you prepared to reject the PEP? I'd prefer to give it this one last
shot at determining if there really is no way to get consensus on a
good-enough solution. I suspect there is a lot of bike shedding here --
people have ideas about what is best, and want to understand and talk about
it, but that doesn't mean they wouldn't rather see something than nothing --
that's certainly the case for me (both the bike shedding and the desire to
see something ;-) ).
Evidence: The numpy version has its faults -- but it's widely used.
assertAlmostEqual has even more faults (limitations, anyway), but it's also
widely used. Boost has something for this, even though it's a one-liner.
Clearly this is useful functionality to have available.
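A couple of the limitations alluded to above, for reference (these examples
assume numpy's documented defaults of rtol=1e-5 and atol=1e-8):

    import numpy as np

    # numpy.isclose scales its tolerance by the second argument only,
    # and its non-zero default atol means tiny values count as "close"
    # to zero whether you want that or not:
    np.isclose(1e-10, 0.0)             # True -- the default atol dominates

    # unittest's assertAlmostEqual is purely absolute by default
    # (it checks round(a - b, 7) == 0), so values that agree to many
    # significant digits still fail at large magnitudes:
    round(1e10 - (1e10 + 1), 7) == 0   # False -- assertAlmostEqual fails here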
ChrisA is right -- I have not done a good job at steering the group toward a
decision.
Christopher Barker, Ph.D.
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris.Barker at noaa.gov