On 27 February 2012 23:23, Mark Janssen <dreamingforward@gmail.com> wrote:
On Mon, Feb 27, 2012 at 3:59 PM, Michael Foord <fuzzyman@gmail.com> wrote:
As well as fundamental problems, the particular implementation of doctest suffers from these potentially resolvable problems:
Execution of an individual testing section continues after a failure. So a single failure results in the *reporting* of potentially many failures.

Hmm, perhaps I don't understand you.  doctest reports how many failures occur, without blocking on any single failure.

Right. But you typically group a bunch of actions into a single "test". If a doctest fails at an early step then every line after it will probably fail too - a single test failure causes multiple *reported* failures.
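To make the cascade concrete, here's a small sketch using doctest's DocTestParser/DocTestRunner API (not anything from the original mail). One mistaken assumption on the first line - the author expected a 4-element list - is enough to make every subsequent expectation fail:

```python
import doctest

# One underlying mistake: list(range(3)) has three elements, not four.
# Every later line inherits that mistake and is reported as a failure.
snippet = """
>>> items = list(range(3))
>>> items
[0, 1, 2, 3]
>>> len(items)
4
>>> items[-1]
3
"""

test = doctest.DocTestParser().get_doctest(snippet, {}, "cascade", None, 0)
results = doctest.DocTestRunner(verbose=False).run(test)
print(results.failed, results.attempted)  # 3 failures from 1 mistake
```

So a report of "3 failures" here really describes a single bug, which is the complaint above.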
The problem of being dependent on order of unorderable types (actually very difficult to solve).

Well, a crude solution is just to lift any output text that denotes an unordered type and pass it through an "eval" operation.

Not a general solution - not all reprs are reversible (in fact, as a proportion of all objects, very few are).
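A quick sketch of both halves of this exchange: eval() does rescue literal-style reprs (the unordered-set case), but for an arbitrary object - the hypothetical Widget class below is just an illustration - the repr isn't valid Python at all:

```python
# The crude fix works when the repr is a literal: eval() rebuilds the
# value, and comparison no longer depends on display order.
expected_text, actual = "{3, 1, 2}", {1, 2, 3}
print(eval(expected_text) == actual)  # True

# But most objects' reprs are not reversible.
class Widget:  # hypothetical user-defined class
    pass

try:
    eval(repr(Widget()))  # repr is like '<__main__.Widget object at 0x...>'
    reversible = True
except SyntaxError:
    reversible = False
print(reversible)  # False
```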
Things like shared fixtures and mocking become *harder* (although by no means impossible) in a doctest environment.

This, I think, is what I was suggesting with doctest "scoping": the execution environment is a matter of how nested the docstring is in relation to the "python semantic environment", with a final scope of "globs" that can be passed into the test environment for anything with global scope.
Another thing I dislike is that it encourages a "test last" approach, as by far the easiest way of generating doctests is to copy and paste from the interactive interpreter. The alternative is lots of annoying typing of '>>>' and '...', and as you're editing text and not code IDE support tends to be worse (although this is a tooling issue and not a problem with doctest itself).

This is where I think the idea of having a test() built-in, like help(), would really be nice.  One could run test(myClass.mymethod) iteratively while one codes, encouraging TDD and writing tests *along with* your code.  My TDD sense says it couldn't get any better.
More fundamental-ish problems:

    Putting debugging prints into a function can break a myriad of tests (because they're output based).

That's a good point.  But then it's a fairly simple matter of directing the output elsewhere: 'print >> sys.stderr, "here I am"'.  Another possibility, if TDD were to become more a part of the language, is a special debug exception: 'raise Debug("Am at the test point, ", x)'.  Such special exceptions could be caught and ignored by doctest.
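The stderr suggestion does work today, since doctest only matches what goes to stdout. A sketch (using Python 3's print function; the 'print >> sys.stderr' above is the Python 2 spelling of the same thing):

```python
import sys
import doctest

def add(a, b):
    """
    >>> add(1, 2)
    3
    """
    # Diagnostic chatter routed to stderr: doctest only captures and
    # matches stdout, so this line cannot break the expected output.
    print("debug: add called", file=sys.stderr)
    return a + b

finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for t in finder.find(add, "add", module=False, globs={"add": add}):
    results = runner.run(t)
print(results.failed)  # 0 - the stderr output is invisible to doctest
```

An ordinary stdout print in add() would, by contrast, land in the captured output and break the match.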
    With multiple doctest blocks in a test file, running an individual test can be difficult (impossible?).

This again is solved with the test() built-in and making TDD a feature of the language itself.

I don't fully follow you, but it shouldn't be hard to add this to doctest and see if it is really useful.
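The pieces are arguably there already: DocTestFinder hands back named DocTest objects, so a "run only this one" switch could be sketched like this (the wanted/f/g names are invented for illustration):

```python
import doctest

def f():
    """
    >>> f()
    'f'
    """
    return 'f'

def g():
    """
    >>> g()
    'g'
    """
    return 'g'

# Collect every doctest by name, then run only the requested one -
# roughly what a "run a single test" option could look like.
wanted = "g"  # hypothetical command-line selection
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
globs = {"f": f, "g": g}
for func in (f, g):
    for t in finder.find(func, func.__name__, module=False, globs=globs):
        if t.name == wanted:
            results = runner.run(t)
print(results.attempted)  # 1 - only g's example ran
```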
    I may be misremembering, but I think debugging support is also problematic because of the stdout redirection.

Interesting - I try to pre-conceive tests well enough that I never need to invoke the debugger.

Heh. When I'm adding new features to existing code it is very common for me to write a test that drops into the debugger after setting up some state - potentially using the test infrastructure (fixtures, the Django test client perhaps, etc.). So not being able to run a single test or drop into a debugger puts the kibosh on that.

So yeah. Not a huge fan.

That's good feedback.  Thanks.



May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html