On Jan 15, 2015, at 3:47 PM, Neil Girdhar <mistersheik@gmail.com> wrote:



On Thu, Jan 15, 2015 at 6:36 PM, Chris Barker <chris.barker@noaa.gov> wrote:
On Thu, Jan 15, 2015 at 3:31 PM, Neil Girdhar <mistersheik@gmail.com> wrote:
You can always disable atol by setting atol to zero.  I really don't see what's wrong with their implementation.

1) The default should be zero in that case; having a default close to the rtol default is asking for trouble.

It's not that close: rtol defaults to 1000 times bigger than atol.

That's pretty huge if your values are on the order of 1e-100 ;-)

Which is my point -- this approach only makes sense for values of order 1 or greater. There the scaled relative tolerance (rtol * |b|) is a lot larger than atol, so atol just sets a sort of lower bound on the error. But if the magnitude of your values is small, the scaled tolerance becomes small, and atol overwhelms it.
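
For example (a quick illustration; the values 1e-10 and 2e-10 are made up, and rtol=1e-5 / atol=1e-8 are numpy's documented defaults): two values that differ by a factor of two are still reported as close, because at that scale atol dwarfs rtol * |b|. Disable atol and the relative test correctly rejects them:

>>> import numpy as np
>>> np.isclose(1e-10, 2e-10)            # defaults: rtol=1e-5, atol=1e-8
True
>>> np.isclose(1e-10, 2e-10, atol=0.0)  # relative test alone
False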

This kind of "switch to an absolute tolerance when close to zero" behavior is what I've been looking for, but I don't like how numpy does it.


-Chris


-CHB



2) If the user has both large and small numbers, there IS no appropriate value of atol for all of them.

The relative error is not symmetric, so it's not about having "large and small numbers".  Your estimate should not affect the error you tolerate.
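
To make that asymmetry concrete (a deliberately exaggerated rtol with atol disabled, purely to make the effect visible): because the relative term is scaled by |b| only, swapping the arguments can change the answer.

>>> import numpy as np
>>> np.isclose(2.0, 1.0, rtol=0.5, atol=0.0)  # |a - b| <= rtol * |b|, with b = 1.0
False
>>> np.isclose(1.0, 2.0, rtol=0.5, atol=0.0)  # same pair, but now b = 2.0
True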

They simply should not be mixed in that way.

-Chris



 

On Thu, Jan 15, 2015 at 5:50 PM, Chris Barker <chris.barker@noaa.gov> wrote:
On Thu, Jan 15, 2015 at 1:52 PM, Neil Girdhar <mistersheik@gmail.com> wrote:
The point is that this function is already in Python

I don't think something being in an external package means that we have to do it the same way in the stdlib -- even a widely used and well-regarded package like numpy. And I say this as someone who has "import numpy" in maybe 90% of my Python files.

Maybe we should be careful to give it a very distinct name, however, to avoid confusion.
 
and if you want to do something different, you should have a really good reason to do it differently.

I'm not sure I agree, but we do in this case anyway. The truth is, while really smart people wrote numpy, many of the algorithms in there did not go through nearly the level of review currently required for the Python standard library.
 
  If you were to add a function to math, say math.close, it should work like numpy.allclose in my opinion.

For reference, numpy does this:

absolute(a - b) <= (atol + rtol * absolute(b))

where atol is an absolute tolerance and rtol is a relative tolerance (relative to the actual value b).  This subsumes most of the proposals here.
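
As a rough scalar sketch of that test in plain Python (the name is_close_to is hypothetical, and the defaults are copied from numpy's documentation):

def is_close_to(a, b, rtol=1e-5, atol=1e-8):
    # numpy-style test: an absolute term plus a term scaled relative to b
    return abs(a - b) <= atol + rtol * abs(b)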

adding atol in there "takes care of" the near-zero and straddling-zero issue (I suspect that's why it's done that way), but it is fatally wrong for values much less than 1.0 -- the atol totally overwhelms the rtol.

See my post earlier today.

-Chris


--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker@noaa.gov



