On Mon, Jan 26, 2015 at 4:42 PM, Nathaniel Smith <njs@pobox.com> wrote:
>> > I really think that having three tolerances, one of which is nearly
>> > always ignored, is poor API design. The user usually knows when they are
>> > comparing against an expected value of zero and can set an absolute
>> > error tolerance.
>>
>> Agreed.
>
>
> also agreed -- Nathaniel -- can you live with this?

> I can live with it, but I'd ignore the function and use allclose instead :-).

why not just set an appropriate abs_tolerance?
 
> In your workflow above, I'm guessing that with allclose then it's <1%
> of the time that you have a failure due to too-restrictive default
> tolerances, where you then have to switch from thinking about your
> problem to thinking about floating point details. With no absolute
> tolerance, this rises to ~30% (assuming my super quick and dirty
> github statistics are typical). That's a lot of workflow disruption.

Well, they by definition aren't typical: assertAlmostEqual isn't the same function -- it only provides an absolute tolerance, which is useful mainly for values near magnitude 1 -- so anyone who needed a relative tolerance (which is what I'm trying to provide) wouldn't have used it.
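To make that concrete, here's a quick sketch (assertAlmostEqual's default rounds the difference to places=7 decimal places, so the check is purely absolute, and only acts like a relative check near magnitude 1):

    import unittest

    class Magnitudes(unittest.TestCase):
        def test_near_magnitude_one(self):
            # Passes: |1.00000001 - 1.0| = 1e-8, which rounds to
            # zero at 7 decimal places.
            self.assertAlmostEqual(1.00000001, 1.0)

        def test_large_magnitude(self):
            # The same 1e-8 relative error at magnitude 1e9 is an
            # absolute difference of 10, so the default check fails.
            with self.assertRaises(AssertionError):
                self.assertAlmostEqual(1.00000001e9, 1.0e9)

    if __name__ == "__main__":
        unittest.main()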

But your point about numpy.allclose is taken -- it sets a non-zero default abs_tol (atol=1e-08) -- and apparently that has worked well for you and others.
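For example (atol and rtol are allclose's actual keyword arguments; this is just a sketch of how that default plays out near zero):

    import numpy as np

    # With the default atol=1e-08, anything smaller than ~1e-8 compares
    # "close" to 0.0, whether or not that is meaningful for the problem.
    print(np.allclose(1e-9, 0.0))            # True -- hidden by atol
    # With atol=0 the test is purely relative, and nothing (except zero
    # itself) is relatively close to zero.
    print(np.allclose(1e-9, 0.0, atol=0.0))  # False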

Honestly, I've been using numpy's allclose for ages without thinking about abs_tol -- but after all this, I need to go back and look at all my tests -- I'm not at all sure that was always appropriate!

I guess the trade-offs for a default are:

What are the most common cases? 
  - so here -- do people only rarely work with numbers much smaller than 1? (i.e. less than 1e-4 or so?)

How disastrous is it to have an inappropriate default?
  - Here is where I think it's a bad idea to have a default:

A) If there is a default:
   Someone writes a test comparing their very small value to 0.0. With a non-zero default abs_tolerance, the test will pass even if the very small value really is far from zero for their use-case. The test passes, they are happy -- even though the result they are testing is totally bogus, and they very well might never notice.

B) If there is no (non-zero) default:
   Someone writes a test comparing their very small value to 0.0. The test will fail, even if the very small value really is close to zero for their use case. Because the test has failed, they will examine the results, determine that it's as good as it can get, set an appropriate abs_tolerance in the test, and away they go.

Maybe A is rare -- I have no idea, but it does seem a substantially worse outcome -- the attractive nuisance referred to earlier.
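To make the two scenarios concrete, here's a minimal sketch using the symmetric comparison discussed in this thread (rel_tol and abs_tol are the proposed names; the one-liner below is just a stand-in, not the final implementation):

    # Minimal stand-in for the proposed function:
    def isclose(a, b, rel_tol=1e-9, abs_tol=0.0):
        return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

    result = 1e-12  # a "very small value" from some computation

    # A) A non-zero default (say abs_tol=1e-8) lets the test pass
    #    silently, even if 1e-12 is far from zero for this use-case:
    print(isclose(result, 0.0, abs_tol=1e-8))   # True

    # B) With a zero default, the test fails, so the user has to stop
    #    and choose an abs_tol appropriate to their problem:
    print(isclose(result, 0.0))                 # False
    print(isclose(result, 0.0, abs_tol=1e-11))  # True -- chosen on purpose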

-Chris

--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker@noaa.gov