On Thu, Jun 30, 2016 at 6:25 PM, Glyph Lefkowitz <glyph@twistedmatrix.com> wrote:

On Jun 30, 2016, at 04:13, Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:

On Thu, Jun 30, 2016 at 6:43 AM, Adi Roiban <adi@roiban.ro> wrote:
Hi,

Recently we have introduced a hard check of 100% coverage for all changes.
This is done via coverage + Codecov + GitHub protected branches.

Now, if your patch is not 100% covered, GitHub will not let you merge it.


The errback is there to help with test failures ... but the test should never fail, so that errback is never called... and that line is not covered.
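
The pattern in question looks roughly like this (a sketch only, since the original snippet isn't quoted here; doSomething is a stand-in for whatever returns the Deferred under test):

def test_something(self):
    d = doSomething()
    d.addCallback(self.assertEqual, "expected result")
    # Only runs if something goes wrong, so a passing run never executes this line:
    d.addErrback(self.fail)
    return d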


It doesn't always make sense to require 100% execution of all test code.  It's not at all uncommon to have code in a test suite that only runs when a test fails.  Historically, Twisted has never had a requirement of 100% execution of test code.  The only test suite coverage requirement that has commonly been requested or enforced is coverage of implementation code.

I'd suggest removing the coverage enforcement for test suite code.

I am inclined to disagree, albeit mildly.

When one is writing a deliberately un-covered path in test code, presumably, one is writing either a test helper - a mock, fake, or utility for setting up a real implementation - or an assertion method.  Historically, I believe that when we've invested more heavily in making these utilities "real" themselves, and not just throwaway stuff inline in a test method or module, the benefits have far outweighed the costs.  In fact, the availability of proto_helpers is one of the selling points of Twisted over competing engines.
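
As a concrete illustration of the kind of helper meant here, a minimal sketch using proto_helpers.StringTransport, the fake transport Twisted itself ships (the Echo protocol is just an example):

from twisted.internet.protocol import Protocol
from twisted.test import proto_helpers

class Echo(Protocol):
    def dataReceived(self, data):
        # Echo whatever arrives back out over the transport.
        self.transport.write(data)

transport = proto_helpers.StringTransport()
proto = Echo()
proto.makeConnection(transport)
proto.dataReceived(b"hello")
assert transport.value() == b"hello"   # the fake records everything written to it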

I mostly agree with this.  However, I was thinking of a slightly different pattern when I wrote my earlier email.  Here are a couple of (fictional) examples of that pattern one might find in unit tests for application code (and there's nothing Twisted-specific here):

if foo:
    self.fail("Foo!")

try:
    foo()
except:
    pass  # handle the expected exception
else:
    self.fail("Foo :(")

It's not exactly that code like this can't be executed in a passing run of the test suite.  It's more a question of what the right balance point is.  If someone wants to generalize logic like this (and, fortunately, someone did generalize these particular examples - they're assertFalse and assertRaises, respectively) then that's great, and a successful run of the test suite gives a correspondingly higher level of confidence.  I'd suggest that if tests like these exercise all of the implementation code (on a successful run), though, then you've still achieved a pretty high level of test coverage, and maybe further efforts are more productively directed elsewhere (increasing the coverage level of other implementation code in Twisted, for example :).
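
For reference, the generalized forms of those two snippets look like this (foo and SomeError are placeholders carried over from the examples; the bare except above corresponds to naming the expected exception here):

self.assertFalse(foo)              # replaces: if foo: self.fail("Foo!")
self.assertRaises(SomeError, foo)  # replaces the try/except/else version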

If folks want a higher bar than this, I'm not going to argue (at least not much, at least not now).  The bar hasn't been this high in the past, though (there are many, many such cases to be found in Twisted's test suite right now, and I don't have the impression this has ever been much of a source of problems).


Therefore, I think that asking folks to add independent test coverage to verify their fakes, and to ensure that the failure reporting of their assertion methods is helpful in the event a test fails, is a generally good idea, and we should keep the requirement for 100% coverage on both test and implementation code.
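
In concrete terms, "independent coverage for an assertion helper" might look something like the following sketch, where assertFooish is a hypothetical custom assertion and trial's assertRaises is used for its return-the-exception behaviour:

def test_assertFooishFailureMessage(self):
    # Verify the helper actually fails, and that its message is useful.
    exc = self.assertRaises(
        self.failureException, self.assertFooish, "not foo-ish at all")
    self.assertIn("foo-ish", str(exc))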

However, if there is contention around this, I'd much rather get a reliable ratchet in place for implementation code that everyone is happy with, so I'm OK with disabling coverage reporting for our *.test.* packages as a step towards that.


I completely agree that fakes should be verified.  So much so that I'm not even sure I believe in fakes in general anymore.  Instead, you should just have easy-to-use interfaces and ship inexpensive implementations alongside whatever other implementations you also need.  And all those implementations should have great test coverage.  I also completely agree that when tests fail, they should do so in a meaningful way.  I'm slightly suspicious of the implication that automated test coverage of the failure case demonstrates the failure is reported meaningfully, though. :)  I think we're still stuck with relying on humans (authors, reviewers) to verify that property.
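
A rough sketch of the "inexpensive implementations alongside the real ones" idea, using zope.interface as Twisted does (IMailbox and InMemoryMailbox are made-up names):

from zope.interface import Interface, implementer
from zope.interface.verify import verifyObject

class IMailbox(Interface):
    def deliver(message):
        """Deliver a message."""

@implementer(IMailbox)
class InMemoryMailbox(object):
    """Cheap, fully supported implementation; convenient in tests."""
    def __init__(self):
        self.messages = []

    def deliver(self, message):
        self.messages.append(message)

# The cheap implementation gets the same verification and test coverage
# as any "real" one:
verifyObject(IMailbox, InMemoryMailbox())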

Jean-Paul
 
How should we proceed with these changes?

Maybe this is not the best example and that code could be refactored... but I think the question of how to handle intentionally missing coverage still stands.

I suggest introducing `  # pragma: no cover`

and update the coverage config with

[report]
exclude_lines =
    pragma: no cover
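
An individual line could then be excluded like this (a hypothetical example, reusing the errback case from earlier in the thread):

d.addErrback(self.fail)  # pragma: no cover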


This seems like the wrong solution to me.  It forces contributors to do extra work to mark their test code as an exception and provides a mechanism for incorrectly bypassing the check by using a no-cover pragma in implementation code.

In any case I totally agree with this.  If we have a categorical difference in types of code (test vs. non-test) then let's make that distinction, but we should not be adding one-off exceptions as an exercise of non-uniform reviewer judgement on every review.

-glyph


_______________________________________________
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python