
On Wed, Jan 20, 2010 at 9:04 PM, Jonathan Lange <jml@mumak.net> wrote:
Mark Roddy <markroddy@...> writes:
Earlier this week on the Testing In Python list, there was a discussion on how to execute a setup and/or teardown for a single test class instead of for each test fixture on the class (see the 'setUp and tearDown behavior' thread). I have had to deal with this situation myself before, and I am obviously not the only one (since I did not initiate the thread). As such, I'd like to propose adding class-level setup and teardown methods to the unittest TestCase class.
Rationale: Test cases can at times require the setup of expensive resources. This is often the case when implementing integration testing. Having to perform this setup for each fixture can be prohibitive for a large number of fixtures and/or for resources that are expensive to set up. For example, I have several hundred integration tests that hit a live database. If I create the connection object for each fixture, most of the tests fail due to the maximum connection limit being reached. As a workaround, I create a connection object once for each test case. Without such functionality built in, the common idiom runs along these lines:
class MyTest(TestCase):
    ClassIsSetup = False

    def setUp(self):
        if not self.ClassIsSetup:
            self.setupClass()
            # set the flag on the class so later instances see it
            MyTest.ClassIsSetup = True
While this achieves the desired functionality, it is unclear because of the conditional setup code, and it is also error-prone, as the same code segment would need to appear in every TestCase requiring the functionality.
I agree that this is a common problem, and that the Python community would benefit from a well-known, well-understood and widely applicable solution.
Having a class-wide setup and teardown function that implementers of test cases can override would make the code and intent clearer and alleviate the need to implement such plumbing in each test case when the user should be focusing on writing tests.
I'd take issue with the argument that this makes the intent clearer. In your motivating example, what you mean is "the test needs a connection", not "I want to do a setup that spans the scope of this class". A better approach would be to _declare_ the dependency in the test and let something else figure out how best to provide the dependency. I think having the class setup is clearer than the 'if not self.ClassIsSetup' idiom, but I wouldn't claim that it's definitely the clearest way to achieve such functionality.
I like the class setup semantics, as they closely follow the existing fixture setup semantics. Having to declare and configure dependencies (while possibly being clearer as far as expressing intent) introduces a completely different set of semantics that have to be grasped in order to be used. I'm going to withhold forming an opinion as to whether or not this presents any meaningful barrier to entry until after I've had a chance to review Rob's resources library.
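To make the comparison concrete, here is a rough sketch of the kind of thing I understand "declaring the dependency" to mean. The DatabaseResource class and its get() method are made up purely for illustration (sqlite3 stands in for the expensive live database); a real library such as testresources presumably does considerably more, e.g. letting the runner decide when resources are built and torn down:

import sqlite3
import unittest

class DatabaseResource(object):
    """Hypothetical resource provider: builds the connection once, on demand."""
    _connection = None

    @classmethod
    def get(cls):
        # The test states what it needs by asking for it; the provider
        # decides whether to reuse an existing connection or build one.
        if cls._connection is None:
            cls._connection = sqlite3.connect(":memory:")
        return cls._connection

class MyIntegrationTest(unittest.TestCase):
    def test_query(self):
        connection = DatabaseResource.get()
        self.assertEqual(connection.execute("SELECT 1").fetchone(), (1,))

The test only says what it needs; when and how often the connection gets created is someone else's problem.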
Also, setUpClass / tearDownClass is probably a bad idea. We implemented such behaviour in Twisted's testing framework some time ago, and have only recently managed to undo the damage.[1]
Thanks, I was not aware of this. Do you have any references as to the particular problems it caused? The ticket only seems to describe it being removed (with much excitement :), but doesn't seem to mention the motivation.
If you do this in such a way that setUpClass methods are encouraged to set attributes on 'self', then you are compelled to share the TestCase instance between tests. This runs contrary to the design of unittest, and breaks code that relies on a test object representing a single test.
I was actually thinking that a setUpClass method would be a classmethod, since the use case is sharing across fixtures within a class. Wouldn't that be enough to cover this issue?
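Roughly what I have in mind, as a minimal sketch (sqlite3 stands in for the real database, and the method names simply follow the proposal; they would of course need support from the runner):

import sqlite3
import unittest

class MyIntegrationTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Run once before any test in this class; the shared resource is
        # stored on the class, not on a per-test instance.
        cls.connection = sqlite3.connect(":memory:")

    @classmethod
    def tearDownClass(cls):
        # Run once after the last test in this class has completed.
        cls.connection.close()

    def test_query(self):
        self.assertEqual(self.connection.execute("SELECT 1").fetchone(), (1,))

Since everything shared lives on the class, the runner would still be free to hand each test method its own TestCase instance.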
It turns out that if you are building extensions to testing frameworks (like any big app does), it helps a lot to have an object per runnable test. In particular, it makes distributing tests across multiple processors & multiple computers much, much easier.
True, and clearly the case for highly focused unit tests. However, for integration tests (or whatever we could call tests that are explicitly designed to work with an expensive resource) it can be cost-prohibitive to have this type of isolation (or flat out impossible in the case that I gave). I'll look into the implementation of some of the testing frameworks that support distributed testing and see if there isn't a way that this can be supported in both contexts (is it possible to implement this in a way that the case setup would get run in each process/machine?).
It also poses a difficult challenge for test runners that provide features such as running the tests in a random order. It's very hard to know if the class is actually "done" and ready to be torn down.
My initial (off the top of my head) thinking was to count the number of test fixtures to be run via a class attribute set in the test case constructor; the test runner would then decrement this after each test completes and call the teardown method once the counter reaches zero. This doesn't seem like it would be affected by randomized test ordering, but I'll look into some existing implementations to see how this could be affected. Any obvious issues I'm missing?
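Very roughly, the bookkeeping I'm picturing looks like this (just an illustration of the counting, not proposed API):

class ClassTeardownTracker(object):
    """Illustration of the counting only -- not proposed API."""

    def __init__(self, tests):
        # Count how many tests from each class are scheduled to run.
        self.remaining = {}
        for test in tests:
            cls = type(test)
            self.remaining[cls] = self.remaining.get(cls, 0) + 1

    def test_finished(self, test):
        # Called by the runner after each test completes, in whatever
        # order the tests happen to run.
        cls = type(test)
        self.remaining[cls] -= 1
        if self.remaining[cls] == 0 and hasattr(cls, "tearDownClass"):
            cls.tearDownClass()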
Finally, we found that its use often led to hard-to-debug failures due to test isolation issues.
I think there's a distinction between "can lead to bad situations" and "encourages bad situations". The former is almost impossible to avoid without becoming Java :). The latter is much subtler, but can be addressed. Do you have any suggestions for altering the semantics to discourage abuse without reducing flexibility? With a similar feature I use, we have a rule not to use the case setup unless explicitly writing integration tests, though there is no functional way to enforce this, only communicating the idea (via documentation and code reviews).
There are already alternatives for this in the Python unit testing world. zope.testing provides a facility called 'layers' which solves this problem. I don't like it[2], but if we are talking about changing the standard library then we should consult existing practice.
Thanks, I will look into this and try to enumerate some pros and cons. Are there any specifics about it that you don't like?
Another solution is testresources[3]. It takes a declarative approach and works with all the code that's already in the Python standard library.
Will be looking into this as well as previously stated.
I'm not deeply familiar with xUnit implementations in other languages, but the problem isn't unique to Python. I imagine it would be worth doing some research on what Nunit, JUnit etc do.
Both JUnit and NUnit have class setup and teardown functionality:
http://www.junit.org/apidocs/org/junit/BeforeClass.html
http://www.junit.org/apidocs/org/junit/AfterClass.html
http://www.nunit.org/index.php?p=fixtureSetup&r=2.5.3
http://www.nunit.org/index.php?p=fixtureTeardown&r=2.5.3
(I found the NUnit implementation a little confusing, as they apparently refer to a TestCase as a Fixture.)
I emailed Michael Foord about some of his comments in the TIP thread and to ask if he would be interested in a patch adding this functionality, and I have included his response below. I would like to hear people's comments/suggestions/ideas before I start working on said patch.
Replies below.
...
Michael Foord's Email:
=======================================
I would certainly be interested in adding this to unittest.
It needs a discussion of the API and the semantics:
* What should the methods be called? setup_class and teardown_class or setupClass and teardownClass? For consistency with existing methods the camelCase should probably be used.
In Twisted, we called them setUpClass and tearDownClass.
* If the setupClass fails, how should the error be reported? The *easiest* way is for the failure to be reported as part of the first test.
Of course, it actually represents a failure in all of the tests. Another way of doing it is to construct a test-like object representing the entire class and record it as a failure in that.
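Something along these lines, purely as an illustration (this is not existing unittest machinery):

import unittest

class ClassSetUpFailure(unittest.TestCase):
    """Stand-in 'test' used to report a setUpClass failure for a whole class."""

    def __init__(self, failed_class, exception):
        super(ClassSetUpFailure, self).__init__("report")
        self.failed_class = failed_class
        self.exception = exception

    def report(self):
        self.fail("setUpClass failed for %s: %r"
                  % (self.failed_class.__name__, self.exception))

A runner that catches a setUpClass error could add one of these to the result instead of blaming the first test.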
* Ditto for teardownClass - again the easiest way is for it to be reported as a failure in the last test
Ditto.
* If setupClass fails then should all the tests in that class be skipped? I think yes.
They should all be failed.
There are also details like ensuring that even if just a single test method from a class is run, the setupClass and teardownClass are still run. It probably needs to go to python-dev or python-ideas for discussion.
That's really really hard, and not a detail at all. See above.
I know this is a mostly negative response, but I really do hope it helps.
jml
[1] http://twistedmatrix.com/trac/ticket/4175
[2] http://code.mumak.net/2009/09/layers-are-terrible.html
[3] http://pypi.python.org/pypi/testresources/