
On Sun, Sep 30, 2018 at 10:29:50AM -0400, David Mertz wrote:
> I think Steven's is backwards in its own way.
> - Contracts test the space of arguments *actually used during testing period* (or during initial production if the performance hit is acceptable).
> - Unit tests test the space of arguments *thought of by the developers*.
> *A priori,* either one of those can cover cases not addressed by the other.
Fair point.
But given that unit tests generally exercise only a handful of values (have a look at the tests in the Python stdlib), I think it is fair to say that in practice unit tests typically have nowhere near the coverage of live data used during alpha and beta testing.
> If unit tests use the hypothesis library or similar approaches, unit tests might very well examine arguments unlikely to be encountered in real-world (or test phase) use...
Indeed.
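For instance, a property-based test written with the third-party hypothesis library might look like this (a minimal sketch; the function under test and the property are just illustrative):

from hypothesis import given
from hypothesis import strategies as st

@given(st.lists(st.integers()))
def test_sorted_is_ordered(xs):
    # hypothesis generates many inputs, including corner cases
    # (empty lists, duplicates, extreme values) that a hand-written
    # test with a handful of canned values would likely miss
    result = sorted(xs)
    assert all(a <= b for a, b in zip(result, result[1:]))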
We can consider all of these things as complementary:
- doctests give us confidence that the documentation hasn't rotted (a sketch follows this list);
- unit tests give us confidence that corner cases are tested;
- contracts give us confidence that regular and common cases are tested;
- regression tests give us confidence that bugs aren't re-introduced;
- smoke tests give us confidence that the software at least will run;
- static type checking allows us to drop type checks from our unit tests and contracts;
but of course there can be overlap. And that's perfectly fine.
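(To make the first point concrete: a doctest is just an example embedded in the docstring, checked with the stdlib doctest module. A minimal sketch, with a made-up function:)

def double(x):
    """Return x doubled.

    >>> double(2)
    4
    """
    return 2*x

Running `python -m doctest module.py` re-runs the example and complains if the documented output no longer matches.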
[...]
> > - Half of the checks are very far away, in a separate file, assuming I even remembered or bothered to write the test.
>
> To me, this is the GREATEST VIRTUE of unit tests over DbC. It puts the tests far away where they don't distract from reading and understanding the function itself. I rarely want my checks proximate since I wear a very different hat when thinking about checks than when writing functionality (ideally, a lot of the time, I wear the unit test hat *before* I write the implementation; TDD is usually good practice).
I'm curious. When you write a function or method, do you include input checks? Here's an example from the Python stdlib (docstring removed for brevity):
# bisect.py
def insort_right(a, x, lo=0, hi=None):
    if lo < 0:
        raise ValueError('lo must be non-negative')
    if hi is None:
        hi = len(a)
    while lo < hi:
        mid = (lo+hi)//2
        if x < a[mid]:
            hi = mid
        else:
            lo = mid+1
    a.insert(lo, x)
Do you consider that check for lo < 0 to be disruptive? How would you put that in a unit test?
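(One way, for concreteness -- a sketch using the stdlib unittest module:)

import unittest
from bisect import insort_right

class TestInsortRight(unittest.TestCase):
    def test_negative_lo_raises(self):
        # the lo < 0 guard should reject this call
        with self.assertRaises(ValueError):
            insort_right([1, 2, 3], 2, lo=-1)

if __name__ == '__main__':
    unittest.main()

But note that this only exercises the check for the single canned value -1, while the check in the function body guards every call.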
That check is effectively a pre-condition. Putting aside the question of which exception should be raised (AssertionError or ValueError), we could re-write that as a contract:
def insort_right(a, x, lo=0, hi=None):
    require:
        lo >= 0
    # implementation follows, as above
    if hi is None:
        hi = len(a)
    while lo < hi:
        mid = (lo+hi)//2
        if x < a[mid]:
            hi = mid
        else:
            lo = mid+1
    a.insert(lo, x)
Do you consider that precondition check for lo >= 0 to be disruptive? More or less disruptive than when it was in the body of the function implementation?
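(As an aside, that require: block is the proposed syntax under discussion; something very close to it can be written today with the third-party icontract library -- a sketch, assuming icontract is installed:)

import icontract

@icontract.require(lambda lo: lo >= 0)
def insort_right(a, x, lo=0, hi=None):
    # implementation as above
    if hi is None:
        hi = len(a)
    while lo < hi:
        mid = (lo+hi)//2
        if x < a[mid]:
            hi = mid
        else:
            lo = mid+1
    a.insert(lo, x)

A violated precondition then raises icontract's ViolationError rather than ValueError.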
> > - The post-conditions aren't checked unless I run my test suite, and then they only check the canned input in the test suite.
>
> Yes, this is a great advantage of unit tests. No cost until you explicitly run them.
If you're worried about the cost of verifying your program does the right thing during testing and development, I think you're doing something wrong :-)
If there are specific functions/classes where the tests are insanely expensive, that's one thing. I have some code that wants to verify that a number is prime as part of an informal post-condition check, but if it is a *big* prime that check is too costly, so I skip it.
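Something along these lines (a sketch, with a naive trial-division test and an arbitrary cut-off standing in for my real code):

def is_prime(n):
    # naive O(sqrt(n)) trial division; fine for smallish n
    if n < 2:
        return False
    i = 2
    while i*i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def check_result(n, limit=10**12):
    # informal post-condition: only affordable for smallish n,
    # so big primes go unchecked
    if n < limit:
        assert is_prime(n), '%d is not prime' % n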
But in general, if I'm testing or under active development, what do I care if the program takes 3 seconds to run instead of 2.5 seconds? Either way, it's finished by the time I come back from making my coffee :-)
But more seriously, fine, if a particular contract is too expensive to run, disable it or remove it and add some unit tests. And then your devs will complain that the unit tests are too slow, and stop running them, and that's why we can't have nice things... *wink*
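(For what it's worth, checks written as plain asserts already have an all-or-nothing switch: running Python with -O sets __debug__ to False and strips assert statements entirely. A sketch:)

def fraction(a, b):
    # this check vanishes under `python -O`
    assert b != 0, 'denominator must be non-zero'
    return a / b

That gives assert-based contracts zero cost in production, at the price of losing the checks everywhere at once.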