How to break long method name into more than one line?

Steven D'Aprano steve+comp.lang.python at
Thu Mar 15 01:07:52 CET 2012

On Wed, 14 Mar 2012 13:53:27 -0700, Herman wrote:

> I followed the rule because it was a very good advice. For example, def
> test_plus_1Plus1_2(self):
> If this test fails, you immediately know that it's testing the "plus"
> method, with 1  and 1 as the arguments, and expect to return 2.

That is hardly a representative example, or you would not be asking for 
advice on splitting long method names over multiple lines.

A more relevant example might be


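Something in this style, say (an invented name, but typical of what the convention produces once the data is realistic):

```python
# A hypothetical test whose name encodes inputs and expected output,
# as the convention requires -- with only four data points it is
# already unwieldy:
def test_pvariance_4point0_7point0_13point0_16point0_returns_22point5():
    data = [4.0, 7.0, 13.0, 16.0]
    mean = sum(data) / len(data)
    # Population variance: mean of squared deviations from the mean.
    pvar = sum((x - mean) ** 2 for x in data) / len(data)
    assert pvar == 22.5
```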
For utterly trivial examples such as testing that 1+1 == 2 nearly any 
naming scheme would be suitable. But for realistic test cases, the rule 
fails utterly.

Here is a real test case from one of my own projects: I have a project 
that defines dozens of related statistical functions, with hundreds of 
tests. Here is one test class for one function, the population variance:

class PVarianceTest(NumericTestCase, UnivariateMixin):
    # Test population variance.
    # This will be subclassed by variance and [p]stdev.
    tol = 1e-11

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.func = calcstats.pvariance
        # Test data for test_main, test_shift:
        self.data = [4.0, 7.0, 13.0, 16.0]
        self.expected = 22.5  # Exact population variance of self.data.
        # If you duplicate each data point, the variance will scale by
        # this value:
        self.dup_scale_factor = 1.0

    def setUp(self):
        super().setUp()

    def get_allowed_kinds(self):
        kinds = super().get_allowed_kinds()
        return [kind for kind in kinds if hasattr(kind, '__len__')]

    def test_main(self):
        # Test that pvariance calculates the correct result.
        self.assertEqual(self.func(self.data), self.expected)

    def test_shift(self):
        # Shifting the data by a constant amount should not affect
        # the variance.
        for shift in (1e2, 1e6, 1e9):
            data = [x + shift for x in self.data]
            self.assertEqual(self.func(data), self.expected)

    def test_equal_data(self):
        # If the data is constant, the variance should be zero.
        self.assertEqual(self.func([42]*10), 0)

    def testDuplicate(self):
        # Test that the variance behaves as expected when you duplicate
        # each data point: [a,b,c,...] -> [a,a,b,b,c,c,...] (done here
        # as data*2; ordering does not affect the variance).
        data = [random.uniform(-100, 500) for _ in range(20)]
        expected = self.func(data)*self.dup_scale_factor
        actual = self.func(data*2)
        self.assertApproxEqual(actual, expected)

    def testDomainError(self):
        # Domain error exception reported by Geremy Condra.
        data = [0.123456789012345]*10000
        # All the items are identical, so variance should be exactly zero.
        # We allow some small round-off error.
        self.assertApproxEqual(self.func(data), 0.0, tol=5e-17)

    def testSingleton(self):
        # Population variance of a single value is always zero.
        for x in (0.0, 1.0, -2.5, 1e6):  # arbitrary sample values
            self.assertEqual(self.func([x]), 0)

    def testMeanArgument(self):
        # Variance calculated with the given mean should be the same
        # as that calculated without the mean.
        data = [random.random() for _ in range(15)]
        m = calcstats.mean(data)
        expected = self.func(data, m=None)
        self.assertEqual(self.func(data, m=m), expected)

(I'm a little inconsistent when choosing between camelCase and 
under_score names in my tests. My bad.)

Even with the limited examples shown there, the naming convention you 
give is utterly impractical. Most of the inputs are random (e.g. 
testDuplicate uses 20 randomly selected floats) or implementation 
details (e.g. test_equal_data takes a list of ten 42s, and returns 0, but 
that could have been thirty-five 7s, or six 1.29345e-9s, or nearly any 
other value).

Some of the tests don't even test a *specific* input and output, but 
compare that the variance of one set of data matches the variance of a 
different set of data.

> Sticking
> this rule also means your test cases are small enough, so you clearly
> know what you are testing on. 

Sticking to this rule means that you are probably missing tests that 
don't fit into the simple "arguments -> result" framework, and 
compromising the quality of the test suite.

If you only test the simple cases, you aren't testing the cases that are 
most likely to fail.
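
As an aside on the original question: the name in a def statement is a 
single token, so it literally cannot be split over multiple lines. All 
you can do is wrap the *surrounding* code, which Python allows anywhere 
inside parentheses. A small sketch (the class and method names here are 
made up for illustration):

```python
import unittest

class PlusTest(unittest.TestCase):
    # The method name itself is one unbreakable token; if it is too
    # long, the only real options are a shorter name or wrapping the
    # lines that *use* it.
    def test_plus_with_one_and_one_returns_two(self):
        self.assertEqual(1 + 1, 2)

# A long expression involving the name can wrap freely inside the
# parentheses of the enclosing call:
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PlusTest)
)
assert result.wasSuccessful()
```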


More information about the Python-list mailing list