While I use code coverage to improve automated unit testing, I am opposed to turning a usable but limited and sometimes faulty tool into a blind robotic master that blocks improvements. The prospect of this being done has discouraged me from learning the new system. (More on 'faulty tool' later.)
The temptation to write artificial tests to satisfy an artificial goal is real. Doing so can eat valuable time better spent on something else. For instance:
def meth(self, arg): mod.inst.meth(arg, True, ob=self, kw='cut')
Mocking mod.inst.meth, calling meth, and checking that the mock is called will satisfy the robot, but it does not contribute much to the goal of providing a language that people can use to solve problems.
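For illustration, here is roughly what such a coverage-driven test could look like. The mod, _Inst, and Thing objects below are hypothetical stand-ins I invented just to make the sketch self-contained:

import unittest
from unittest import mock

# Hypothetical stand-ins for the one-line method above.
class _Inst:
    def meth(self, arg, flag, ob=None, kw=None):
        pass

class _Mod:
    inst = _Inst()

mod = _Mod()

class Thing:
    def meth(self, arg):
        mod.inst.meth(arg, True, ob=self, kw='cut')

class TestMeth(unittest.TestCase):
    def test_meth_delegates(self):
        # Replace mod.inst with a mock so the call is recorded.
        with mock.patch.object(mod, 'inst') as fake_inst:
            ob = Thing()
            ob.meth('x')
            # The line is now "covered", but the assertion merely
            # restates the source line it is supposed to test.
            fake_inst.meth.assert_called_once_with('x', True, ob=ob, kw='cut')

if __name__ == '__main__':
    unittest.main()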
Victor, can you explain 'tested indirectly' and perhaps give an example?
Here is one example where I merged an "untested line of code": https://github.com/python/cpython/pull/162/files#diff-0ad86c44e7866421ecaa5a...
file_object = builtins.open(f, 'wb')
try:
    self.initfp(file_object)
except:
    file_object.close()
    raise
self.initfp() is very unlikely to raise an exception, but MemoryError, KeyboardInterrupt, or other rare exceptions may still happen.
The tests didn't cover this except clause, but I merged it anyway because:
- this code is simple enough.
- I could write a test for it by mocking self.initfp() to raise an exception (see the sketch after this list), but such test code has a significant maintenance cost. I don't think this except clause is important enough to justify maintaining that code.
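To make the trade-off concrete, here is a rough sketch of what that test could look like. I use a minimal stand-in Writer class instead of the real module, so the example runs on its own:

import builtins
import unittest
from unittest import mock

class Writer:
    """Minimal stand-in for the class in the snippet above."""
    def __init__(self, f):
        file_object = builtins.open(f, 'wb')
        try:
            self.initfp(file_object)
        except:
            file_object.close()
            raise

    def initfp(self, file_object):
        self._file = file_object

class TestInitfpFailure(unittest.TestCase):
    def test_file_closed_when_initfp_raises(self):
        fake_file = mock.MagicMock()
        # Patch open() so no real file is created, and force
        # initfp() to fail with a rare exception.
        with mock.patch.object(builtins, 'open', return_value=fake_file):
            with mock.patch.object(Writer, 'initfp', side_effect=MemoryError):
                with self.assertRaises(MemoryError):
                    Writer('dummy.bin')
        # The except clause must close the file before re-raising.
        fake_file.close.assert_called_once()

if __name__ == '__main__':
    unittest.main()

Even this small amount of patching is brittle: it encodes the private call structure of __init__, so a harmless refactoring would break the test.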
If I removed the except clause, all lines would be tested, but there would be a (very unlikely) leak of an unclosed file. Which version is "broken"?
Coverage is very useful information for finding code that should be tested but is not. But I don't think "all code must be tested"; requiring that may make it harder to improve Python.
So I agree with Victor and Terry.
Regards,