[Python-ideas] PEP 485: A Function for testing approximate equality
steve at pearwood.info
Thu Jan 29 13:46:02 CET 2015
On Tue, Jan 27, 2015 at 02:35:50PM -0800, Guido van Rossum wrote:
> I'm still confused. We're talking about a function that compares two simple
> values, right? Where does the sequence of values come from? Numpy users do
> everything by the array, but they already have an isclose(). Or perhaps
> you're talking about assertApproxEqual in test_statistics.py? That has so
> many ways to specify the relative and absolute tolerance that I give up
> understanding it.
Those ways evolved from my actual use of the function. What I found in
practice is that within each TestCase, most of the tests used the same
values for error tolerances. At the very least, I was aiming for all the
tests to use the same error tolerance, but in practice I never quite
achieved that. From time to time I would have a particularly difficult
calculation that just wouldn't meet the desired tolerance, and I had to
accept a lower tolerance. Most individual test methods within a single
TestCase used the same error tolerances, which I set in the setUp method
as self.rel for relative error (or self.tol for absolute), and wrote:
    self.assertApproxEqual(x, y, rel=self.rel)
over and over again, occasionally overriding that value:
    self.assertApproxEqual(x, y, rel=1e-6)
Since most of the time the method was taking the tolerances from self, I
reasoned that I should just make the default "take the tolerances from
self" and be done with it. So that's how the method evolved.
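The pattern described above can be sketched roughly like this. This is a
hypothetical illustration, not the actual assertApproxEqual from
test_statistics.py; the class name, the default values, and the
"pass if either tolerance is satisfied" rule are my assumptions:

```python
import unittest

class ApproxTestCase(unittest.TestCase):
    """Sketch of a TestCase whose per-class default tolerances
    live as instance attributes set in setUp."""

    def setUp(self):
        # Class-wide defaults; individual tests can override them
        # by passing rel= or tol= explicitly.
        self.rel = 1e-9   # default relative error tolerance
        self.tol = 0.0    # default absolute error tolerance

    def assertApproxEqual(self, x, y, rel=None, tol=None):
        # Fall back to the instance attributes when no explicit
        # tolerance is given -- "take the tolerances from self".
        if rel is None:
            rel = self.rel
        if tol is None:
            tol = self.tol
        err = abs(x - y)
        # Accept if the error is within either the absolute
        # tolerance or the relative tolerance (scaled by the
        # larger magnitude of the two values).
        allowed = max(tol, rel * max(abs(x), abs(y)))
        self.assertLessEqual(
            err, allowed,
            msg="%r != %r within rel=%r, tol=%r" % (x, y, rel, tol))
```

A test method then reads self.assertApproxEqual(x, y) and picks up the
class-wide defaults, while a particularly difficult calculation can
still override them with self.assertApproxEqual(x, y, rel=1e-6).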
Most of those tests were for functions that didn't end up in the std
lib, so I'm not surprised that this was not so clear.
If I remember correctly, at the time I was also reading a lot about OOP
design principles (as well as stats text books) and I think the use of
instance attributes for the default error tolerances was probably
influenced by that.