OK,
I FINALLY got a chance to look at Steven's code in the statistic module tests.
Not much code there; this really isn't that big a deal.
It does check for NaN and inf and all that, so that's good.
It is also symmetric with respect to x and y -- using the maximum of the two to compute the relative error -- which I think is good. (This is essentially the same as Boost's "strong" method, though implemented a tiny bit differently.)
Here is the key definition:
def approx_equal(x, y, tol=1e-12, rel=1e-7):
...
x is approximately equal to y if the difference between them is less than
an absolute error tol or a relative error rel, whichever is bigger.
...
This is a lot like the numpy code, actually, except it does a max test rather than adding the absolute and relative tolerances together. I think this is a better way to go than numpy's, but there is little practical difference.
However, it suffers from the same issue -- "tol" is essentially a minimum error that is considered acceptable. This is nice, as it allows zero to be passed in: if the other input is within tol of zero, the two will be considered approximately equal. However, very small numbers (less than the absolute tolerance) will always be considered approximately equal:
In [18]: approx_equal(1.0e-14, 2.0e-14)
Out[18]: True
off by a factor of 2
In [19]: approx_equal(1.0e-20, 2.0e-25)
Out[19]: True
oops! way off!
This is with the defaults, of course, and all you need to do is set the tol much lower:
In [20]: approx_equal(1.0e-20, 2.0e-25, tol=1e-25)
Out[20]: False
This is less fatal than with numpy, since with numpy you are processing a whole array of numbers with the same tolerances, and they may not all be of the same magnitude. But I still think it's a trap for users.
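The trap is easy to see with plain arithmetic -- any two values whose magnitudes fall below the default tol compare "approximately equal" no matter how far apart they are in relative terms:

```python
tol = 1e-12                 # the default absolute tolerance
x, y = 1.0e-20, 2.0e-25

diff = abs(x - y)           # about 1e-20
print(diff < tol)           # True: the absolute branch passes...
print(x / y)                # ...even though x is ~50,000 times y
```

So the default tol effectively sets a floor: below roughly 1e-12, the function can't distinguish anything, which is exactly why it has to be dialed down for small-magnitude comparisons.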