Re: [pypy-dev] Improved Unit Test Framework

In a message of Sat, 18 Oct 2003 01:55:36 +0200, holger krekel writes: <snip>
It might be nice to get hold of other people and see what they would like to have in a unittest module. That way, even if we don't write it that way ourselves, we can make sure that it will be easy for them to add their heart's desire later.

Laura

It's probably worth breaking this into two categories:
- improvements to unittest itself
- a test runner (e.g. Zope3's test.py, twisted's trial)

The latter is what's _really_ needed - it'd be nice if something could be done for Python 2.4.

Anthony

--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.

[Anthony Baxter Sat, Oct 18, 2003 at 05:52:40PM +1000]
Yes, the distinction makes sense. AFAIK most test frameworks provide a much better test runner but don't care that much about the actual unittest machinery (because it's hard to change, anyway :-).

As a side note, I think that the distinction between unit tests and functional tests is not always practical. We certainly do a lot of more-or-less functional testing in PyPy. Actually, a new testing module shouldn't need to worry about the unit/functional distinction too much.

cheers,
holger

holger krekel wrote:
Just what would those be? I still haven't heard a single useful improvement. There must be some (at least, that's obviously what a lot of people feel), but what are they?
- a test runner (e.g. Zope3's test.py, twisted's trial)
This was the area where we focused as well. We needed the recursive invocation feature, the change in structure (tests in ./unit and ./accept subfolders rather than in the same folder as the source), the automatic handling of .pth files on a per-project basis, etc. unittest.py was fine for us as-is, and we're even using some bits exposed in its own TestRunner. On the XP or TDD mailing list, "Phlip" reported on a system whereby merely by saving his files, his test suite would automatically be re-run and the results displayed in the test runner's window; he could then double-click on an error and it would jump him directly back into the editor at the failing line.
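For a rough idea of the recursive-collection feature described here, a minimal sketch is possible with the discover() API that unittest later grew in Python 2.7. The ./unit and ./accept folder names follow the layout above; collect_tests is an illustrative name, not part of any real framework:

    import os
    import unittest

    def collect_tests(root, subdirs=("unit", "accept")):
        # Walk the whole project tree and gather test_*.py modules from
        # every ./unit and ./accept folder into a single suite.
        loader = unittest.TestLoader()
        suite = unittest.TestSuite()
        for dirpath, dirnames, filenames in os.walk(root):
            if os.path.basename(dirpath) in subdirs:
                suite.addTests(loader.discover(dirpath, pattern="test_*.py"))
        return suite

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(collect_tests(os.getcwd()))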
Agreed on that. Although we separate the two for various reasons into unit and acceptance test folders, in some cases it's arguable which is which, and in any case the testing framework doesn't distinguish at all. For us, the only thing that really distinguishes an acceptance test, other than the inherent level of abstraction of the test, is the degree of mocking and "borging" that we have to do. (At Kaval, a "borg" object is like a mock object in that it simulates the real thing, but a borg object is a fake shell around the real code of something the program-under-test must talk to, such as a server. Rather than run the server code on a real (say, Linux) server during the test, we make a "borg" version that thinks it's on a real server but is just running in a separate process on the Win32 test machine.)

-Peter
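The "borg" idea can be sketched in a few lines: unlike a mock, which fakes the behavior, the borg runs the real code in a fake environment. In this hypothetical sketch, realserver and its main() entry point are made-up stand-ins for the production code being borged:

    import multiprocessing

    def _borg_entry(port):
        import realserver              # hypothetical: the real production code
        realserver.main(port=port)     # hypothetical entry point, run locally

    def start_borg_server(port=9999):
        # Real code, fake environment: the server believes it is deployed,
        # but it is just a child process on the test machine.
        proc = multiprocessing.Process(target=_borg_entry, args=(port,))
        proc.daemon = True             # never outlive the test run
        proc.start()
        return proc                    # tests now talk to localhost:<port>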

The unittest improvements I have in my local tree are all pretty minor - I've made the output of an assertRaises failure more meaningful, that sort of thing. At some point I'll go through them, figure out which ones should be in the stdlib, and submit them as an SF patch. While I'm not _entirely_ happy with the standard unittest module (it seems to me like I spend a lot of time writing boilerplate code - this is something Python's supposed to make better...), it's hard for me to point to a solution that is obviously "better".

Anthony

--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
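A more informative assertRaises is easy to sketch. This is only an illustration of the idea, not the actual patch described above: on failure, report what the call really raised or returned instead of a bare assertion error.

    import unittest

    class BetterAssertions(unittest.TestCase):
        def assertRaisesMsg(self, exc_class, func, *args, **kwargs):
            # Like assertRaises, but the failure says what actually happened.
            try:
                result = func(*args, **kwargs)
            except exc_class:
                return                 # the expected exception: test passes
            except Exception as e:
                self.fail("expected %s, got %s: %s"
                          % (exc_class.__name__, type(e).__name__, e))
            self.fail("expected %s, but the call returned %r"
                      % (exc_class.__name__, result))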

[Peter Hansen Sun, Oct 19, 2003 at 09:14:43AM -0400]
OK, here are a few possible improvements for starters (I have a lot more in mind :-)

- being able to provide a function that executes a test method (today you have to replicate all the boilerplate in TestCase.__call__ etc.)
- being able to raise e.g. a TestSkip error to indicate that a test should not be run with the current setup - you can't always tell that offhand in setUp (a sketch of this follows below)
- allow providing extra options to the cmdline interface
- plan for/do other user interfaces (web-driven, pygame, whatever :-)
- provide means (a TestConsole) to easily get a list of failed tests (interactively), keep the testing process alive and repeatedly redo the failed tests. Or save off the info to some file and reload it on rerunning the tests.

cheers,
holger
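The TestSkip item is the one the stdlib eventually adopted: Python 2.7's unittest added unittest.SkipTest, which any test method can raise once the condition becomes visible. A minimal sketch, with port_open as an illustrative helper (the host and port are placeholders):

    import socket
    import unittest

    def port_open(host, port):
        # Best-effort probe for an external resource the test depends on.
        try:
            socket.create_connection((host, port), timeout=0.5).close()
            return True
        except OSError:
            return False

    class DatabaseTest(unittest.TestCase):
        def test_query(self):
            # The condition may only become checkable here, not in setUp().
            if not port_open("localhost", 5432):
                raise unittest.SkipTest("no test database available")
            ...  # the real test body would go here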

The Zope3 testrunner distinguishes between them - the unit tests live in subdirectories called 'test', while the functional tests live in 'ftest'. The main problem with separating them in this way is that people forget to run the functional tests. On the other hand, functional tests probably need to derive from a different TestCase class, one that defines a bunch of extra 'setup' (not setUp) methods. For instance, in quetzalcoatl, our ETL tool, functional tests have a bunch of machinery for establishing database connections and the like. You don't want/need these in the unit tests.

Anthony

--
Anthony Baxter <anthony@interlink.com.au>
It's never too late to have a happy childhood.
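Such a functional-only base class might look like the following sketch. FunctionalTestCase, setup_database, and the sqlite3 stand-in are all illustrative, not the actual quetzalcoatl machinery; addCleanup is a later (Python 2.7) unittest addition used here for teardown:

    import sqlite3                 # illustrative stand-in for the real driver
    import unittest

    class FunctionalTestCase(unittest.TestCase):
        # Base class for functional tests only; unit tests never inherit
        # from it, so they never pay for the heavyweight machinery.
        def setup_database(self, dsn=":memory:"):
            conn = sqlite3.connect(dsn)
            self.addCleanup(conn.close)    # closed again after each test
            return conn

    class OrderExportTest(FunctionalTestCase):
        def setUp(self):
            self.conn = self.setup_database()
            self.conn.execute("create table orders (id integer primary key)")

        def test_starts_empty(self):
            count = self.conn.execute(
                "select count(*) from orders").fetchone()[0]
            self.assertEqual(count, 0)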

participants (4)
- Anthony Baxter
- holger krekel
- Laura Creighton
- Peter Hansen