[Tutor] Questions as to how to run the same unit test multiple times on varying input data.

boB Stepp robertvstepp at gmail.com
Sun Sep 25 01:38:07 EDT 2016


On Sat, Sep 24, 2016 at 11:05 PM, Steven D'Aprano <steve at pearwood.info> wrote:
> On Sat, Sep 24, 2016 at 12:55:28AM -0500, boB Stepp wrote:

>> def right_justify(a_string):
>>     '''This function will take the string, "a_string", and left justify it by
>
> Left justify?

Oops!  Typo.

[snip]

>     def test_random_string(self):
>         input = []
>         for i in range(random.randint(2, 69)):
>             input.append(random.choice('abcdefghijklmnopqrstuvwxyz'))
>         input = ''.join(input)
>         if random.random() > 0.5:
>             input = input.upper()
>         expected = ' '*(70-len(input)) + input
>         assert len(expected) == 70
>         self.assertEqual(right_justify(input), expected)

Hmm.  I like this one.
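(For context: the function under test never appears in full in the thread.
A minimal implementation consistent with the test above -- assuming a fixed
70-column line, which is my reading of the exercise -- would be:)

```python
def right_justify(a_string):
    """Right-justify a_string in a 70-column line by left-padding with
    spaces.  Strings already 70 characters or longer come back unchanged,
    which is why over-long inputs make the length test fail."""
    return a_string.rjust(70)
```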

>> class TestRightJustify(unittest.TestCase):
> [...]
>>         self.test_strings = [
>>                 "Monty Python",
>>                 "She's a witch!",
>>                 "Beware of the rabbit!!!  She is a vicious beast who will rip"
>>                 " your throat out!"]
>
> I'm not sure that you should name your data "test something". unittest
> takes methods starting with "test" as test cases. I'm reasonably sure
> that it can cope with non-methods starting with "test", and won't get
> confused, but I'm not sure that it won't confuse *me*. So I would prefer
> to use a different name.

It didn't get confused, but I take your point.

>
>>     def test_returned_len_is_70(self):
>>         '''Check that the string returned by "right_justify(a_string)" is the
>>         length of the entire line, i.e., 70 columns.'''
>>
>>         for test_string in self.test_strings:
>>             with self.subTest(test_string = test_string):
>>                 length_of_returned_string = (
>>                     len(right_justify.right_justify(test_string)))
>>                 print('The current test string is: ', test_string)
>>                 self.assertEqual(length_of_returned_string, 70)
>
>
> It is probably not a good idea to use print in the middle of
> unit testing. The unittest module prints a neat progress display, and it
> would be rude to muck that up with a print. If you really need to
> monitor the internal state of your test cases, write to a log file and
> watch that.
>
> Of course, you can *temporarily* insert a print into the test suite just
> to see what's going on. Test code is code, which means it too can be
> buggy, and sometimes you need to *debug your tests*. And we know that a
> simple but effective way to debug code is using temporary calls to
> print.

The print function is an artifact of multiple efforts to write
something that would do what I wanted, that is, an individual test
result for each string input into the test method.  It was strictly
for debugging those (failed) efforts.

> In this specific case, you probably don't really care about the test
> cases that pass. You only want to see *failed* tests. That's what the
> subTest() call does: if the test fails, it will print out the value of
> "test_string" so you can see which input strings failed. No need for
> print.

Actually, the persnickety part of me wanted to see the little dot for
the first test, which did pass, and I was puzzled why subTest() does
not make that happen.

>> Anyway, running the tests gave me this result:
>>
>> c:\thinkpython2\ch3\ex_3-1>py -m unittest
>> The current test string is:  Monty Python
>> The current test string is:  She's a witch!
>> The current test string is:  Beware of the rabbit!!!  She is a vicious
>> beast who will rip your throat out!
>
>
> /me looks carefully at that output...
>
> I'm not seeing the unittest progress display, which I would expect to
> contain at least one F (for Failed test). Normally unittest prints out a
> dot . for each passed test, F for failed tests, E for errors, and S
> for tests which are skipped. I do not see them.

Nor did I, and I was (and still am) puzzled by this.  But the docs for subTest() at
https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests
give an example which also does not show the dots or F's.  So I
suppose this is the intended behavior.


>> Question #1:  Why is the test output NOT showing the first, passed
>> test?  The one where the string is "Monty Python"?
>
> By default, unittest shows a progress bar of dots like this:
>
> .....F....E....FF.....S........F.......F....
>
> That shows 37 passing tests, 1 error (the test itself is buggy), 1
> skipped test, and five failed tests. If everything passes, you just see
> the series of dots.
>
> To see passing tests listed as well as failures, you need to pass the
> verbose flag to unittest:
>
> python -m unittest --verbose

No, that does not seem to work when using subTest().  I had already
tried this.  I get:

c:\thinkpython2\ch3\ex_3-1>py -m unittest --verbose
test_returned_len_is_70 (test_right_justify.TestRightJustify)
Check that the string returned by "right_justify(a_string)" is the ...
The current test string is:  Monty Python
The current test string is:  She's a witch!
The current test string is:  Beware of the rabbit!!!  She is a vicious
beast who will rip your throat out!

======================================================================
FAIL: test_returned_len_is_70 (test_right_justify.TestRightJustify)
(test_string="She's a witch!")
Check that the string returned by "right_justify(a_string)" is the
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\thinkpython2\ch3\ex_3-1\test_right_justify.py", line 32, in
test_returned_len_is_70
    self.assertEqual(length_of_returned_string, 70)
AssertionError: 72 != 70

======================================================================
FAIL: test_returned_len_is_70 (test_right_justify.TestRightJustify)
(test_string='Beware of the rabbit!!!  She is a vicious beast who will
rip your throat out!')
Check that the string returned by "right_justify(a_string)" is the
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\thinkpython2\ch3\ex_3-1\test_right_justify.py", line 32, in
test_returned_len_is_70
    self.assertEqual(length_of_returned_string, 70)
AssertionError: 135 != 70

----------------------------------------------------------------------
Ran 1 test in 0.013s

FAILED (failures=2)


>> Question #3:  In "with self.subTest(test_string = test_string):" why
>> is "test_string = test_string" necessary?  Is it that using "with"
>> sets up a new local namespace within this context manager?
>
> You need to tell subTest which variables you care about, and what their
> value is.
>
> Your test case might have any number of local or global variables. It
> might care about attributes of the test suite. Who knows what
> information you want to see if a test fails? So rather than using some
> sort of dirty hack to try to guess which variables are important,
> subTest requires you to tell it what you want to know.
>
> Which could be anything you like:
>
>     with subTest(now=time.time(), spam=spam, seed=random.seed()):
>         ...
>
>
> you're not limited to just your own local variables. subTest doesn't
> care what they are or what you call them -- it just collects them as
> keyword arguments, then prints them at the end if the test fails.

Thanks for this info.  It was not clear to me from the documentation.

subTest() does allow ALL failures to show up.  It is a pity it does
not allow the passed tests to be listed as well.  But, as you point
out, for evaluating test results, it is the failures that are
critical.  I will have to decide if it is worth my time to investigate
testscenarios as Ben suggests, or perhaps go with PyTest.  I spent a
lot of time looking for examples using testscenarios this evening and
did not find much, but I think I finally understand its basic usage.
I may just put off my persnickety nature till later and plow on with
learning TDD and testing more generally while working on extending my
Python and computer science knowledge.

-- 
boB
