On 07/17/2009 12:26 PM, Robert Kern wrote:
On Fri, Jul 17, 2009 at 12:18, Skipper Seabold <jsseabold@gmail.com> wrote:
Hello all,

I am polishing up the generalized linear models for the stats.models
project right now, and I have a question about using decorators with
my tests.  The GLM framework has a central model with shared
properties and then several variations on that model, so as a
simplified example my tests look like this:

from numpy.testing import *

DECIMAL = 4

class check_glm(object):
    '''
    res2 results will be obtained from R or the RModelwrap
    '''

    def test_params(self):
        assert_almost_equal(self.res1.params, self.res2.params, DECIMAL)

    def test_resids(self):
        assert_almost_equal(self.res1.resids, self.res2.resids, DECIMAL)

class test_glm_gamma(check_glm):
    def __init__(self):
        # Preprocessing to set up the two result sets
        self.res1 = ResultsFromGLM
        self.res2 = R_Results

if __name__ == "__main__":
    run_module_suite()

My question is whether I can skip, for argument's sake, test_resids
depending, for example, on the class of self.res2, or because I defined
the test condition as True in the test_<> class.  I tried putting the
following in the check_glm class:

@dec.skipif(TestCondition, "Skipping this test because of ...")
def test_resids(self):
    ...

TestCondition should be None by default, but how can I get
TestCondition to evaluate to True when appropriate?  I have tried a
few different ways, but I am a little stumped.  Does this make sense,
and is it possible?  I'm sure I'm missing something obvious, but any
insights would be appreciated.

I don't think you can do that with the decorator: the condition handed
to dec.skipif is evaluated without access to the instance, so it cannot
depend on self. Just do the check in code inside the method and raise
nose.SkipTest explicitly:

import nose

def test_resids(self):
    if not isinstance(self.res1, GLMResults):
        raise nose.SkipTest("Not a GLM test")
    # otherwise run the usual comparison
    assert_almost_equal(self.res1.resids, self.res2.resids, DECIMAL)
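
Concretely, that drops into the class layout from the original message
as follows. This is a minimal self-contained sketch of the same idea,
using a class attribute on each subclass to drive the skip instead of
the isinstance check; skip_resids and the Dummy results object are
illustrative placeholders, not part of stats.models:

import nose
from numpy.testing import assert_almost_equal

DECIMAL = 4

class Dummy(object):
    '''Stand-in for a fitted-model results object so the sketch runs alone.'''
    params = [1.0, 2.0]
    resids = [0.0, 0.0]

class check_glm(object):
    # subclasses flip this when res2 has nothing to compare against
    skip_resids = False

    def test_params(self):
        assert_almost_equal(self.res1.params, self.res2.params, DECIMAL)

    def test_resids(self):
        if self.skip_resids:
            raise nose.SkipTest("res2 does not provide residuals")
        assert_almost_equal(self.res1.resids, self.res2.resids, DECIMAL)

class test_glm_gamma(check_glm):
    skip_resids = True  # e.g. the R results lack residuals for this family

    def __init__(self):
        self.res1 = Dummy()
        self.res2 = Dummy()

Because the flag is read inside the method at run time, each subclass
decides for itself whether the comparison applies, which is exactly
what the decorator cannot do at class-definition time.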

I think you should follow (or do something similar to) the approach given in the section 'Creating many similar tests' (under Tips & Tricks) in the NumPy testing guidelines:
http://projects.scipy.org/numpy/wiki/TestingGuidelines

This gives the option to group tests from related models together.
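
If I remember that page correctly, the recipe there is built on nose's
test generators: a test function yields (check_function, args) tuples,
and nose runs each yielded tuple as a separate test. A minimal
self-contained sketch of the pattern, with _Dummy and RESULT_PAIRS as
illustrative placeholders for the fitted GLM variants and their R
counterparts:

from numpy.testing import assert_almost_equal

DECIMAL = 4

class _Dummy(object):
    '''Stand-in for a fitted-model results object.'''
    def __init__(self, params, resids=None):
        self.params = params
        if resids is not None:
            self.resids = resids

# Illustrative pairs; in practice res1 comes from the GLM fit and res2
# from R or the RModelwrap.
RESULT_PAIRS = [
    (_Dummy([1.0, 2.0], [0.0, 0.0]), _Dummy([1.0, 2.0], [0.0, 0.0])),
    (_Dummy([3.0]), _Dummy([3.0])),  # this pair has no residuals to compare
]

def check_params(res1, res2):
    assert_almost_equal(res1.params, res2.params, DECIMAL)

def check_resids(res1, res2):
    assert_almost_equal(res1.resids, res2.resids, DECIMAL)

def test_glm_variants():
    # nose turns each yielded (function, arg, ...) tuple into its own test
    for res1, res2 in RESULT_PAIRS:
        yield check_params, res1, res2
        if hasattr(res2, 'resids'):
            yield check_resids, res1, res2

Grouping the yields per model pair keeps the related comparisons
together, and a check can be yielded conditionally, which also covers
the skipping question above.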

Bruce