[Python-Dev] Question on a seemingly useless doctest

Erik Bray erik.m.bray at gmail.com
Tue Dec 19 06:12:01 EST 2017


Hi all,

I have a ticket [1] that's hung up on a failure in one doctest,
sage.doctest.sources.FileDocTestSource._test_enough_doctests.

This test has been there, it seems, for as long as the current
doctest framework has been in place, and nobody seems to have
questioned it.  Its expected output is generated from the Sage sources
themselves, and can change whenever tests are added to or removed from
any module (if any of those tests should be "skipped").  Over the
years the expected output of this test has simply been updated as
necessary.

But taking a closer look at the test--and I could be mistaken--it's
not even a useful test.  It's *attempting* to validate that the
doctest parser skips tests when it's supposed to.  But it performs
this validation by...implementing its own, less robust doctest parser,
and comparing that parser's results to the results of the real doctest
parser.  Sometimes--in fact often--the comparison is wrong (as the
test itself acknowledges).
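
To make that concrete, here's a deliberately oversimplified,
hypothetical sketch of the pattern (the names are made up and this is
not Sage's actual code; it just shows the shape of the problem):

    import doctest

    def naive_example_count(source):
        # Ad-hoc "parser": count lines that look like doctest prompts.
        # It knows nothing about directives, optional tags, or any of
        # the real skipping rules, so it disagrees with the real
        # parser in exactly the cases that matter.
        return sum(1 for line in source.splitlines()
                   if line.lstrip().startswith('>>> '))

    def enough_doctests(source):
        # The "validation": compare the real parser's count against
        # the ad-hoc count.
        real = len(doctest.DocTestParser().get_examples(source))
        return real == naive_example_count(source)

Whenever the two disagree, you can't tell whether the real parser or
the ad-hoc reimplementation is at fault.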

This doesn't seem to me like a correct or useful way to validate the
doctest parser.  If there are cases that the real doctest parser
should be tested against, then unit tests or regression tests should
be written that run the real doctest parser on those cases directly
and check the results.  Maintaining, in effect, the real doctest
parser alongside a "fake" one that's known to be incorrect doesn't
make sense to me, unless there's something about this I'm
misunderstanding.

I would propose to just remove the test.  If there are any actual
regressions it's responsible for catching, then more focused
regression tests should be written for those cases.
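
For instance, a focused test of the real parser could look something
like the following.  This is just a minimal sketch using the stdlib
doctest.DocTestParser and its +SKIP directive for illustration;
Sage's own parser and its skipping rules (optional tags and the like)
would be substituted, and the test name is made up:

    import doctest

    def test_skip_directive_is_recorded():
        source = '''
        >>> 1 + 1  # doctest: +SKIP
        2
        '''
        parser = doctest.DocTestParser()
        examples = parser.get_examples(source)
        # The example is still collected, but it carries the SKIP
        # flag, so the runner will not execute it.
        assert len(examples) == 1
        assert examples[0].options.get(doctest.SKIP) is True

A handful of tests like that, one per skipping rule, would pin down
the parser's behavior without a second parser in the loop.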

Erik


[1] https://trac.sagemath.org/ticket/24261#comment:24

