On Jan 13, 2015, at 1:29 AM, Mark Dickinson <dickinsm@gmail.com> wrote:
Comparing by ulps was what I needed for testing library-quality functions for the math and cmath modules; I doubt that it's what's needed for most comparison tasks.
That's the conclusion I was coming to. Ulps are likely to be the right way to do it if you're trying to understand or test the accuracy of an algorithm, but not for a general "did I get a close enough result?" check. And ulps would be a lot harder for most of us to understand.

As for comparing to zero -- in reading about this, it seems there simply is no general solution; only the user knows what they want. So the only thing to do is put a big warning in the docs about it and provide an absolute-tolerance option. Should that be a separate function or a flag?

This is actually a harder problem for numpy, since it's an array function: you need the same function and parameters to work for every value in the array, some of which may be near zero. I haven't thought it out yet, but maybe we could specify an absolute tolerance near zero and a relative tolerance elsewhere, both at once. Tricky to document, even if possible.
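For the array case, one way to get "absolute tolerance near zero, relative tolerance elsewhere, both at once" is the additive test NumPy uses in its own `isclose`: `|a - b| <= atol + rtol * |b|`. A minimal pure-Python sketch (the helper name `isclose_array` is hypothetical, chosen here for illustration):

```python
def isclose_array(a, b, rtol=1e-8, atol=1e-8):
    """Elementwise closeness test combining absolute and relative tolerance.

    Mirrors the additive formula NumPy's isclose uses:
        |a - b| <= atol + rtol * |b|
    so atol dominates for values near zero and rtol dominates
    for values of large magnitude.
    """
    return [abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b)]
```

Note that the additive form is asymmetric (`b` is treated as the reference value), which is one of the documented quirks of NumPy's approach.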
I'd expect the suggested combination of relative error and absolute error to be more appropriate most of the time.
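A symmetric scalar version of that combination might look like the following sketch, which takes whichever of the relative and absolute thresholds is larger (this is the shape that was eventually standardized as `math.isclose` in PEP 485; the defaults here are illustrative assumptions):

```python
def close_enough(a, b, rel_tol=1e-9, abs_tol=0.0):
    """Return True if a and b are close, by relative OR absolute tolerance.

    The relative test scales with the larger magnitude of the two
    operands, so the result is symmetric in a and b; abs_tol acts
    as a floor so comparisons against zero can still succeed.
    """
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
```

With the default `abs_tol=0.0`, nothing is ever "close" to exactly zero under a purely relative test, which is why the absolute-tolerance floor matters for the comparing-to-zero case discussed above.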
And most of the time is what we are going for. -Chris