
Hello all,

I'm starting to put together a list of cleanups (with documentation changes) for the unittest module. I thought someone had already done this, but the issue with the most comments I could find was 2578, which doesn't go into much detail: http://bugs.python.org/issue2578

The thing most people would like is test discovery - but that probably requires some discussion before anything can be added to unittest.

What I would like to do is PEP-8'ify the method names (widening the API rather than shrinking it!):

    assert_true
    assert_false
    assert_equals
    assert_not_equals
    assert_raises
    set_up
    tear_down
    failure_exception (? - class attribute)
    TestSuite.add_test (etc)

Documenting that these are to be preferred to 'assertEquals' and 'failUnlessEquals' (etc) and that the 'assert' statement should be used instead of 'assert_'.

Adding the following new asserts:

    assert_in (member, container, msg=None)
    assert_not_in (member, container, msg=None)
    assert_is (first, second, msg=None)
    assert_not_is (first, second, msg=None)
    assert_raises_with_message (exc_class, message, callable, *args, **keywargs)

A decorator to turn a test function into a TestCase ("as_test_case"?).

A simple 'RunTests' function that takes a collection of modules, test suites or test cases and runs them with TextTestRunner, passing on keyword arguments to the test runner. This makes running a test suite easier - once you have collected all your test cases together you can just pass them to this function, so long as you are happy with the default runner (possibly allowing an alternative runner class to be provided as a keyword argument). I would provide an implementation for discussion of course.

I would also like to make the error messages for "assert_equals" and "assert_not_equals" more useful - showing the objects that compare incorrectly even if an explicit message is passed in. Additionally, when comparing lists and tuples that are the same length, show the members (and indices?) that were different.

I've copied Steve Purcell into this email, but his comments on issue 2578 indicate that he is happy for 'us' to make changes and he no longer has a strong sense of "ownership" of this module.

Michael Foord
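P.S. For discussion, a minimal sketch of two of the proposed asserts (the failure messages are just illustrative):

    def assert_in(self, member, container, msg=None):
        if member not in container:
            raise self.failureException(
                msg or '%r not found in %r' % (member, container))

    def assert_not_in(self, member, container, msg=None):
        if member in container:
            raise self.failureException(
                msg or '%r unexpectedly found in %r' % (member, container))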

I'm worried that a mass renaming would do nothing but inconvenience users during the already stressful 2->3 transition. I'm more in favor of the original proposal of reducing the redundancy post-3.0.

If you're looking for useful features, Google has a set of extensions to unittest.py that I find useful:

- module-level setUp and tearDown
- routines for comparing large lists and strings that produce useful output indicating exactly where the inputs differ
- assertLess etc.
- assertSameElements (sort of like assert(set(x) == set(y)))

--Guido

On Thu, Apr 17, 2008 at 6:49 AM, Michael Foord <fuzzyman@voidspace.org.uk> wrote:
-- --Guido van Rossum (home page: http://www.python.org/~guido/)

Guido van Rossum wrote:
So nix the PEP-8'ifying until after 3.0. So new methods should follow the current naming scheme (assertIn, assertNotIn etc).
So when a suite is made from a module the setUp should be called before the first test and tearDown after the last. I can look at that.
By etc. I assume you mean:

    assertLessThan
    assertGreaterThan
    assertLessThanOrEquals
    assertGreaterThanOrEquals

Would 'not' variants be useful as well? It seems not, as the 'not' of one is always another... (I think 'assertLessThan' reads better than 'assertLess' but will do what I'm told...)
- assertSameElements (sort of like assert(set(x) == set(y)))
Sounds good. Did you look at the other proposals?

* Decorator to make a function a TestCase
* Convenience RunTests functions taking modules, suites and TestCases and running them
* Improved messages for assertEquals and assertNotEquals when an explicit message is passed in
* Improved message when comparing lists/tuples with assertEquals
* The additional asserts that I suggested (In/NotIn, RaisesWithMessage, Is/NotIs)

I think that there is still work I can do on the docs even before any grand renaming...

Michael Foord
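P.S. For the decorator, a minimal sketch reusing the existing unittest.FunctionTestCase (the name 'as_test_case' is still provisional):

    import unittest

    def as_test_case(func):
        # Wrap a plain function in the stdlib's FunctionTestCase.
        return unittest.FunctionTestCase(func)

    @as_test_case
    def test_addition():
        # test_addition is now a TestCase instance a suite can hold
        if 1 + 1 != 2:
            raise AssertionError('arithmetic is broken')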

On Thu, Apr 17, 2008 at 10:59 AM, Michael Foord <fuzzyman@voidspace.org.uk> wrote:
+1 on your changes Michael - I really like the decorator idea. Let me know if you want help on adding the new stuff or updating the documentation. What about aiming this at 2.6 instead of 3.0? -jesse

Michael Foord schrieb:
Most of the etc. could be simplified with a function assertOp which takes an operator name as its first argument:

    import operator

    def assertOp(self, op, a, b, msg=None):
        func = getattr(operator, op)
        self.assert_(func(a, b), msg)

    # assertOp("gt", a, b) is equivalent to: assert a > b

I'd also like to have some asserts for is, type, isinstance, issubclass and contains.

Christian

On Thu, Apr 17, 2008 at 8:27 AM, Christian Heimes <lists@cheimes.de> wrote:
-1 on this; it requires more thinking and has more opportunities for mistakes (e.g. why "gt" and not ">"?).
I also like to have some assert for is, type, isinstance, issubclass and contains.
Yes. Michael had In/NotIn. I have needed all of the others too at various times! -- --Guido van Rossum (home page: http://www.python.org/~guido/)

Great that you're taking this on, Michael! On 17 Apr 2008, at 16:59, Michael Foord wrote:
Note that suites are just clumps of tests, and test runners can choose to run tests in any order, which might complicate the above.

In any case, I'd advise against per-module setUp/tearDown because it breaks the fundamental rule that any instance of TestCase can be safely run in isolation, secure in the knowledge that it will set itself up and tear itself down as required. There are usually (always?) superior alternatives to module-level setUp, so I wouldn't suggest encouraging it by providing a sanctioned mechanism.

Also, I'd note that assert_() or similar may still be needed: it is provided as a preferred alternative to the assert statement because the latter stops working if you run Python with the -O switch, which might be a reasonable thing to do with a test suite.

Aside from these points, everything else looks great to me. Better "diff-style" output for assertEquals of strings etc. has been lacking for ages, as well as collection-oriented assertions.

-Steve

P.S. Hi all!
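P.P.S. For anyone who hasn't seen the -O behaviour first-hand, a quick demonstration (output from a 2.x interpreter; details may vary slightly by version):

    $ python -c "assert False, 'oops'"
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    AssertionError: oops
    $ python -O -c "assert False, 'oops'"
    $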

Michael Foord wrote:
Something I found useful to improve messages is to have the tests catch and then report errors with their own unique exceptions to indicate the failures. What that does is make a very clear distinction between an error found in a module being tested vs the test case. An error in the test case results in a regular Python exception and a traceback of the test case. An error in the module being tested results in a specific unittest exception indicating what type of error was found in the module, followed by the original exception and traceback from the module being tested.

Ron

    import sys
    import unittest

    class TestCase(unittest.TestCase):
        failureException = unittest.TestCase.failureException

        class Unexpected_Exception_Raised(failureException):
            def __init__(self, exc, ref):
                self.exc = exc
                self.ref = ref
            def __str__(self):
                return '\n'.join([repr(self.exc), '\nReference:', self.ref])

        class Wrong_Exception_Raised(Unexpected_Exception_Raised):
            pass

        class No_Exception_Raised(failureException):
            def __init__(self, result, ref=""):
                self.result = repr(result)
                self.ref = ref
            def __str__(self):
                return "returned -> " + '\n'.join([self.result,
                                                   '\nReference:', self.ref])

        class Wrong_Result_Returned(No_Exception_Raised):
            def __str__(self):
                return '\n'.join([self.result, '\nReference:', self.ref])

        def assertTestReturns(self, test, expect, ref=""):
            try:
                result = test()
            except Exception, e:
                e0, e1, e2 = sys.exc_info()
                raise self.Unexpected_Exception_Raised, (e, ref), e2
            if result != expect:
                raise self.Wrong_Result_Returned(result, ref)

        def assertTestRaises(self, test, expect, ref=""):
            try:
                result = test()
            except Exception, e:
                if isinstance(e, expect):
                    return e
                else:
                    e0, e1, e2 = sys.exc_info()
                    raise self.Wrong_Exception_Raised, (e, ref), e2
            raise self.No_Exception_Raised(result, ref)
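Usage in a test might then look something like this (my illustrative example, not from the original code):

    class MyTests(TestCase):
        def test_int_parsing(self):
            self.assertTestReturns(lambda: int('42'), 42,
                                   ref="int() on a decimal string")
            self.assertTestRaises(lambda: int('forty-two'), ValueError,
                                  ref="int() on a non-numeric string")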

On Thu, Apr 17, 2008 at 7:59 AM, Michael Foord <fuzzyman@voidspace.org.uk> wrote:
These names are used here:

    assertListEqual(self, list1, list2, msg=None)
    assertIn(self, a, b, msg=None)
    assertNotIn(self, a, b, msg=None)
    assertDictEqual(self, d1, d2, msg=None)
    assertSameElements(self, expected_seq, actual_seq, msg=None)
    assertMultiLineEqual(self, first, second, msg=None)
    assertLess(self, a, b, msg=None)
    assertLessEqual(self, a, b, msg=None)
    assertGreater(self, a, b, msg=None)
    assertGreaterEqual(self, a, b, msg=None)
    assertCommandSucceeds(self, command)
    assertCommandFails(self, command, regexes)

I can look into making those open source, but it'd probably be a more efficient use of everyone's time to just reimplement them; most are only a few lines. You can skip the assertCommand* ones; they're for testing shell commands.
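For instance, a couple of them might be reimplemented in roughly this many lines (my sketch, not the actual Google code):

    def assertLess(self, a, b, msg=None):
        if not a < b:
            self.fail(msg or '%r not less than %r' % (a, b))

    def assertSameElements(self, expected_seq, actual_seq, msg=None):
        # Sorting (rather than set()) also catches differing duplicate counts.
        if sorted(expected_seq) != sorted(actual_seq):
            self.fail(msg or '%r and %r do not contain the same elements'
                      % (expected_seq, actual_seq))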
Did you look at the other proposals?
Not yet.
* Decorator to make a function a TestCase
But what about TOOWTDI?
* Convenience RunTests functions taking modules, suites and TestCases and running them
If it addresses the most common reason why people have to hack this stuff, by all means.
* Improved messages for assertEquals and assertNotEquals when an explicit message is passed in
Sure.
* Improved message when comparing lists/tuples with assertEquals
I'd say do that in assertListEqual
* The additional asserts that I suggested (In/NotIn, RaisesWithMessage, Is/NotIs)
Sure. Google has In/NotIn
I think that there is still work I can do on the docs even before any grand renaming...
Go ahead! -- --Guido van Rossum (home page: http://www.python.org/~guido/)

"Michael Foord" <fuzzyman@voidspace.org.uk> wrote in message news:480765B5.2010904@voidspace.org.uk... | I think that there is still work I can do on the docs even before any | grand renaming... In a related thread, I proposed and Guido approved that the 2.6/3.0 docs be changed to 1. specify that the Fail... names will disappear in the future 2. list the preferred Assert names first for each pair 3. use the preferred Assert names and only those names in the explanatory text and examples. Have you or are you going to make these changes or should I open a separate tracker issue? Terry

-On [20080417 19:46], Terry Reedy (tjreedy@udel.edu) wrote:
Have you or are you going to make these changes or should I open a separate tracker issue?
You can always put it on my account in the tracker (asmodai) and I'll get to it in the coming days. -- Jeroen Ruigrok van der Werven <asmodai(-at-)in-nomine.org> / asmodai イェルーン ラウフロック ヴァン デル ウェルヴェン http://www.in-nomine.org/ | http://www.rangaku.org/ The administration of justice is the firmest pillar of government...

[Michael working on cleaning up the unittest module]

It seems like most of the good ideas have been captured already. I'll throw two more (low priority) ideas out there:

1) Randomized test runner/option that runs tests in a random order (like regrtest.py -r, but for methods)
2) decorator to verify a test method is supposed to fail

#2 is useful for getting test cases into the code sooner rather than later. I'm pretty sure I have a patch that implements this (http://bugs.python.org/issue1399935). It didn't fit in well with the old unittest structure, but seems closer to the direction you are headed.

One other idea that probably ought not be done just yet: add a way of failing with the test continuing. We use this at work (not in Python though) and, when used appropriately, it works quite well. It provides more information about the failure. It looks something like this:

    def testMethod(self):
        # setup
        self.assertTrue(precondition)
        self.expectTrue(value)
        self.expectEqual(expected_result, other_value)

All the expect methods duplicate the assert methods. Asserts cause the test to fail immediately; expects don't fail immediately, allowing the test to continue. All the expect failures are collected and printed at the end of the method run. I was a little skeptical about assert vs expect at first, but it has proven useful in the long run. As I said, I don't think this should be done now, maybe later.

n
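P.S. In unittest terms the bookkeeping might look roughly like this (illustrative only - not our actual implementation, and a real version would need runner support so that the collected failures are reported as failures rather than as tearDown errors):

    import unittest

    class ExpectTestCase(unittest.TestCase):
        def setUp(self):
            self._expect_failures = []

        def expectTrue(self, value):
            if not value:
                self._expect_failures.append('%r is not true' % (value,))

        def expectEqual(self, expected, actual):
            if expected != actual:
                self._expect_failures.append('%r != %r' % (expected, actual))

        def tearDown(self):
            # Report everything the expect methods collected, in one go.
            if self._expect_failures:
                self.fail('\n'.join(self._expect_failures))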

Neal Norwitz wrote:
I'd agree these kinds of things can be very useful, particularly for tests which run the same test for multiple inputs - when those tests die on the first set of input values that do the wrong thing, it greatly slows down the test-fix-retest cycle, since you have to go through it multiple times. If the test is written to test all the input values, even if some of them fail, and then report at the end "test failed, here are the (input, expected_output, actual_output) triples that didn't work", it can make tests far more efficient.

At the moment I do it manually by collecting a list of failed triples, then reporting a test failure at the end if the list isn't empty, but something like "expect" methods would avoid the need to recreate that infrastructure every time I need it.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
---------------------------------------------------------------
http://www.boredomandlaziness.org

On Fri, Apr 18, 2008 at 12:54 AM, Guido van Rossum <guido@python.org> wrote:
Hi,

Some things that Bazaar, Twisted and Launchpad have found helpful:

- assertRaises returning the exception object that it catches. This allows for easy testing of exception strings.
- Assertion methods for 'in', 'is' and 'isinstance' and their negations.
- TestCase.addCleanup. This method takes a function, args and kwargs and pushes them onto a stack of cleanups. Before tearDown is run, each of these cleanups is popped off the stack and then run. This makes it easier to acquire certain resources in tests, e.g.:

      def make_temp_dir(self):
          dir = tempfile.mkdtemp()
          self.addCleanup(shutil.rmtree, dir)
          return dir

Luckily, I have patches (with tests!) for these that I will be filing on the tracker as soon as I get the opportunity.

jml
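P.S. addCleanup itself is only a few lines. A bare-bones sketch (the real patch fires cleanups before tearDown as described above; this toy version uses tearDown itself, so subclasses would need to call up to it):

    import unittest

    class CleanupTestCase(unittest.TestCase):
        def setUp(self):
            self._cleanups = []

        def addCleanup(self, function, *args, **kwargs):
            self._cleanups.append((function, args, kwargs))

        def tearDown(self):
            # Pop and run the cleanups in LIFO order.
            while self._cleanups:
                function, args, kwargs = self._cleanups.pop()
                function(*args, **kwargs)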

On Thu, Apr 17, 2008 at 11:49 PM, Michael Foord <fuzzyman@voidspace.org.uk> wrote:
assert_raises_with_message (exc_class, message, callable, *args, **keywargs)
I don't think this one should go in. I think it would be better if assertRaises just returned the exception object that it catches. That way, you can test properties of the exception other than its message. jml
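P.S. The change itself is tiny - a sketch of the idea (not our exact patch):

    def assertRaises(self, exc_class, callable_obj, *args, **kwargs):
        try:
            callable_obj(*args, **kwargs)
        except exc_class, e:
            return e    # hand the caught exception back for inspection
        self.fail('%s was not raised' % exc_class.__name__)

    # allowing tests like:
    #   e = self.assertRaises(ValueError, int, 'spam')
    #   self.assertTrue('spam' in str(e))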

On Thu, Apr 17, 2008 at 3:31 PM, Jonathan Lange <jml@mumak.net> wrote:
Hm. I've got to say that returning the exception object is, um, an odd API in the set of unittest APIs. I can see how it's sometimes more powerful, but I'd say that in many cases assertRaisesWithMessage will be easier to write and read. (And making it a regex match would be even cooler.) -- --Guido van Rossum (home page: http://www.python.org/~guido/)

On Fri, Apr 18, 2008 at 8:34 AM, Guido van Rossum <guido@python.org> wrote:
I don't know about odd. It works and it's not obviously terrible. Not having it in the unittest API simply means that people who do want to test non-message properties will rewrite assertRaises. Which is, in fact, what we've already done.

jml

Guido van Rossum wrote:
In which case assertRaisesMatching (and then eventually assert_raises_matching) might be a better name for it? regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/

Michael Foord wrote:
that the 'assert' statement should be used instead of 'assert_'.
assert statements are actually a bad idea in tests - they get skipped when the test suite is run with optimisation switched on. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org

participants (12)

- Christian Heimes
- Guido van Rossum
- Jeroen Ruigrok van der Werven
- Jesse Noller
- Jonathan Lange
- Michael Foord
- Neal Norwitz
- Nick Coghlan
- Ron Adam
- Steve Holden
- Steve Purcell
- Terry Reedy