[Python-ideas] PEP 485: A Function for testing approximate equality

Chris Barker chris.barker at noaa.gov
Thu Feb 12 17:41:52 CET 2015


This has been very thoroughly hashed out, so I'll comment on the bits
that are new(ish):

> If you're saying:
>
>  >>> z = 1.0 - sum([0.1]*10)
>  >>> z == 0
>  False
>  >>> is_close(0.0, z)
>  True
>
> your "reference" value is probably really "1.0" or "0.1" since those are
> the values you're working with, but neither of those values are derivable
> from the arguments provided to is_close().
>

this is simple -- don't do that!

isclose(1.0, sum([0.1]*10))

is the right thing to do here, and it will work with any of the methods
that were ever on the table. If you really want to check for something
close to zero, you need an absolute tolerance, not really a reference value
(they are kind of the same thing, but an absolute tolerance is easier to
reason about):

>>> z = 1.0 - sum([0.1]*10)
>>> is_close(0.0, z, abs_tol=1e-12)
True

> def is_close(a, b=None, tol=1e-8, ref=None):
>

I think a b=None default would be really confusing to people!

Setting an optional reference value (I'd probably call it 'scale_val' or
something like that) might make sense:

def isclose(a, b, rel_tol=1e-9, abs_tol=0.0, scale_val=None):

Then we use scale_val if it's given, and max(a, b) if it's not --
something like:

if scale_val is not None:
    return abs(a-b) <= abs(rel_tol * scale_val) or abs(a-b) <= abs_tol
else:
    return (abs(a-b) <= abs(rel_tol * a) or
            abs(a-b) <= abs(rel_tol * b) or
            abs(a-b) <= abs_tol)

This would let users do it pretty much any way they want, while still
allowing the most common use case to use all defaults -- if you don't know
what the heck scale_val means, then simply ignore it.

However, I think using scale_val would require enough thought that users
might as well simply write that one line of code themselves.
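
To make that concrete, here is a rough, runnable version of the sketch
above with a couple of example calls (scale_val is just the name floated in
this message, not anything from the PEP draft):

def isclose(a, b, rel_tol=1e-9, abs_tol=0.0, scale_val=None):
    # Scale the relative tolerance by the caller-supplied reference value
    # if one was given; otherwise scale it by a and b themselves.
    if scale_val is not None:
        return abs(a - b) <= abs(rel_tol * scale_val) or abs(a - b) <= abs_tol
    return (abs(a - b) <= abs(rel_tol * a) or
            abs(a - b) <= abs(rel_tol * b) or
            abs(a - b) <= abs_tol)

z = 1.0 - sum([0.1] * 10)
print(isclose(1.0, sum([0.1] * 10)))     # True: plain relative comparison
print(isclose(0.0, z, abs_tol=1e-12))    # True: near zero, absolute tolerance
print(isclose(0.0, z, scale_val=1.0))    # True: near zero, explicit reference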

> and get reasonable looking results, I think? (If you want to use an
> absolute tolerance, you just specify ref=1, tol=abs_tol).
>

way too much thought required there -- see my version.


> An alternative thought: rather than a single "is_close" function, maybe it
> would make sense for is_close to always be relative,
>

yes, that came up...

> If you had a sequence of numbers and wanted to do both relative comparisons
> (first n significant digits match) and absolute comparisons you'd just have
> to say:
>
>   for a in nums:
>      assert is_close(a, b) or is_close_abs(a, b)
>
> which doesn't seem that onerous.
>

maybe not, but still a bit more onerous than:

for a in nums:
    assert is_close(a, b, abs_tol=1e-100)

(And note that your is_close_abs() would require a tolerance value.)

But more to the point, if you want to wrap that up in a function (which I'm
hoping someone will do for unittest), then that function would need the
relative and absolute tolerance levels anyway, and so would have a
different API -- less than ideal.
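
Just to illustrate the different-API point: a minimal sketch of what such a
unittest helper might look like (assertIsClose and the mixin are
hypothetical names, and the imported isclose is only a stand-in for
whatever the PEP's function ends up being):

import unittest
from math import isclose   # stand-in for the proposed isclose()

class CloseAssertionMixin(object):
    # Hypothetical helper: note that it has to expose both tolerances itself.
    def assertIsClose(self, a, b, rel_tol=1e-9, abs_tol=0.0, msg=None):
        if not isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol):
            standard_msg = "%r and %r are not close (rel_tol=%g, abs_tol=%g)" % (
                a, b, rel_tol, abs_tol)
            self.fail(self._formatMessage(msg, standard_msg))

class TestSums(CloseAssertionMixin, unittest.TestCase):
    def test_relative(self):
        self.assertIsClose(1.0, sum([0.1] * 10))

    def test_near_zero(self):
        self.assertIsClose(0.0, 1.0 - sum([0.1] * 10), abs_tol=1e-12)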

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov