On Fri, Jul 18, 2014 at 2:44 PM, Nathaniel Smith <njs@pobox.com> wrote:
On 18 Jul 2014 19:31, <josef.pktd@gmail.com> wrote:
Making the behavior of assert_allclose depend on whether desired is exactly zero or 1e-20 looks too difficult to remember, and which desired I use would depend on what I get out of R or Stata.
I thought your whole point here was that 1e-20 and zero are qualitatively different values that you would not want to accidentally confuse? Surely R and Stata aren't returning exact zeros for small non-zero values like probability tails?
I was thinking of the case where we only see "pvalue < 1e-16" or something like it, and we replace that with an assertion that the value is close to zero, which would translate to `assert_allclose(pvalue, 0, atol=1e-16)`, with maybe an additional rtol=1e-11 if we have an array of pvalues where some are "large" (>0.5).
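A minimal sketch of the case described above, with hypothetical p-values (the array contents and tolerance values are illustrative, not from the thread): one entry that R/Stata would report only as "pvalue < 1e-16", and one "large" entry that an rtol can cover.

```python
import numpy as np
from numpy.testing import assert_allclose

# Hypothetical p-values: the first is effectively zero (a tool might
# report it only as "pvalue < 1e-16"), the second is "large".
pvalues = np.array([3e-17, 0.52])
desired = np.array([0.0, 0.52])

# assert_allclose passes when |actual - desired| <= atol + rtol * |desired|.
# rtol alone cannot handle the zero entry (rtol * 0 == 0), so an atol
# is needed for values that are only "close to zero".
assert_allclose(pvalues, desired, rtol=1e-11, atol=1e-16)
```

With atol=0 (the default) the first comparison would fail, since any nonzero actual is infinitely far from zero in relative terms.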
This example is also handled correctly by my proposal :-)
That depends on the details of your proposal. Alternative: desired exactly zero means assert_equal.

    (Pdb) self.res_reg.params[m:]
    array([ 0.,  0.,  0.])
    (Pdb) assert_allclose(0, self.res_reg.params[m:])
    (Pdb) assert_allclose(0, self.res_reg.params[m:], rtol=0, atol=0)
    (Pdb)

This test currently uses assert_almost_equal with decimal=4 :(

Regularized estimation with hard thresholding: the first m values are estimates not equal to zero; the elements from m to the end are "exactly zero". This is discrete models' fit_regularized, which predates numpy's assert_allclose. I haven't checked what the unit tests for Kerby's current additions to fit_regularized look like.

Unit testing is serious business: I'd rather have good unit tests in SciPy-related packages than convince a few more newbies that they can use the defaults for everything.

Josef
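The pdb session above can be reproduced with a standalone sketch (the parameter values and m are made up for illustration; the real array comes from statsmodels' fit_regularized results): with rtol=0 and atol=0, assert_allclose becomes an exact comparison, which is what the "exactly zero" tail of a hard-thresholded estimate calls for.

```python
import numpy as np
from numpy.testing import assert_allclose, assert_array_equal

# Hypothetical hard-thresholded parameter vector: the first m entries
# are nonzero estimates, the remaining entries should be exactly zero.
m = 2
params = np.array([1.3, -0.7, 0.0, 0.0, 0.0])

# With both tolerances set to zero, assert_allclose is an exact
# comparison; it raises AssertionError for any nonzero tail entry.
assert_allclose(params[m:], np.zeros(len(params) - m), rtol=0, atol=0)

# assert_array_equal states the same intent more directly
# (0 broadcasts against the sliced array).
assert_array_equal(params[m:], 0)
```

Either form would catch a thresholding bug that leaves a tiny residual like 1e-20 in the tail, which a nonzero atol would silently accept.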
-n
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion