When to use assert

Steven D'Aprano steve+comp.lang.python at pearwood.info
Sun Oct 26 10:39:43 CET 2014

Chris Angelico wrote:

> On Fri, Oct 24, 2014 at 6:49 PM, Steven D'Aprano
> <steve+comp.lang.python at pearwood.info> wrote:
>> addresses = [get_address(name) for name in database]
>> assert all(address for address in addresses)
>> # ... much later on ...
>> for i, address in enumerate(addresses):
>>     if some_condition():
>>         addresses[i] = modify(address)
>>         assert addresses[i]
>> will either identify the culprit, or at least prove that neither
>> get_address() nor modify() are to blame. Because you're using an
>> assertion, it's easy to leave the asserts in place forever, and disable
>> them by passing -O to the Python interpreter in production.
> The first assertion is fine, assuming that the emptiness of your
> address corresponds to falsiness as defined in Python. (This could be
> safe to assume, if the address is an object that knows how to boolify
> itself.) The second assertion then proves that modify() isn't
> returning nothing, but that might be better done by sticking the
> assertion into modify itself.

Sure. If you can modify the function, getting it to check its own
post-condition is a good strategy. But you might not be able to do that.
Perhaps you don't have the source code, or it's from a third-party library
and you don't want to keep hacking your copy every time it's updated, or
you're calling an external library written in Java.

(And yes, there are work-arounds for those scenarios too. Aren't choices
wonderful?)
> And that's what I'm talking about: checking a function's postcondition
> with an assert implies putting that assertion after every call, 

There's no rule that you have to use an assertion after *every* call to a
function. If you can check the post-condition inside the function, that's
great. If you can't, then use your judgement. More asserts are not
necessarily better, just as more unit tests are not necessarily better.

You have to make a value judgement based on how much you trust the code, how
much you trust your own skills, and how much you trust the colleagues who
will maintain the code in your absence, weighed against the cost of
cluttering every other line of code with asserts.

> and 
> anything that you have to do every time you call a function belongs
> inside that function. Imagine writing this kind of defensive code, and
> then having lots of places that call modify()... and missing the
> assert in some of them. Can you trust what you're getting back?

Well, if the function is perfect, then adding more assertions won't uncover
bugs that aren't there. And if the function is buggy, then adding more
assertions will increase the likelihood that you'll discover those bugs
earlier rather than later. How much do you trust the function?

One test is better than no tests at all, but a thousand tests is not
necessarily better than ten. The same applies to assertions. One might say
that assertions are tests which run automatically every time you run your
code (in debugging mode), with no need to remember to run a separate test
program.
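As a concrete sketch (the function is mine, not from the thread): under
plain `python` the assert fires on bad input, while `python -O` compiles it
away, so production code pays nothing for it.

```python
def mean(values):
    # Runs under "python script.py"; stripped out under "python -O script.py".
    assert len(values) > 0, "mean() of empty sequence"
    return sum(values) / len(values)

print(mean([1, 2, 3]))  # -> 2.0
print(__debug__)        # False under -O, i.e. the assert above is gone
```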

In the absence of correctness proofs for your code, anything you do (unit
tests, assertions, regression tests, etc.) is just a statistical sampling
of all the potential paths your code might take, hoping to capture bugs. As
any statistician will tell you, eventually you will reach the point of
diminishing returns, where adding more samples isn't worth the cost. That
point will depend on the cost in programmer effort, the performance
implications, and your confidence in the code.

>> [Aside: it would be nice if Python did it the other way around, and
>> require a --debugging switch to turn assertions on. Oh well.]
> Maybe. But unless someone actually tests that their assertions are
> being run, there's the risk that they're flying blind and assuming
> that it's all happening. There'll be all these lovely "checked
> comments"... or so people think. Nobody ever runs the app with
> --debugging, so nobody ever sees anything.

Absolutely. And if you write unit tests but never actually run them, the
same thing applies. You need a process that actually runs the code in
debugging mode, and/or runs the tests.
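One cheap safeguard against flying blind is to have the process verify that
assertions are actually executing, rather than assuming it. A sketch:

```python
def assertions_enabled():
    """True if assert statements actually execute (i.e. no -O flag)."""
    enabled = False
    try:
        assert False  # removed entirely under python -O
    except AssertionError:
        enabled = True
    return enabled

# A test runner or CI job could fail fast instead of silently skipping
# all those lovely "checked comments":
if not assertions_enabled():
    print("warning: assertions are disabled and are not checking anything")
```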

Or what if the unit tests all pass but don't actually test anything useful?

    def test_critical_code_path(self):
        # Make sure application works correctly.
        pass

Do I have to link to the DailyWTF showing that some people actually do write
tests like this? Sure, why not :-) 


Bad programmers can screw up anything.

>> Assertions and test suites are complementary, not in opposition, like
>> belt and braces. Assertions ensure that the code branch will be tested if
>> it is ever exercised, something test suites can't in general promise.
>> Here's a toy example:
>> def some_function(value):
>>     import random
>>     random.seed(value)
>>     if random.random() == 0.25000375:
>>         assert some_condition
>>     else:
>>         pass
>> Try writing a unit test that guarantees to test the some_condition
>> branch :-)
>> [Actually, it's not that hard, if you're willing to monkey-patch the
>> random module. But you may have reasons for wanting to avoid such drastic
>> measures.]
> Easy. You craft a test case that passes the right argument, and then
> have the test case test itself. You will want to have a script that
> figures out what seed value to use:

The Mersenne Twister isn't cryptographically strong, but even so, I'm pretty
sure that finding a seed which produces a specific value is computationally
infeasible unless you get amazingly lucky. Given 624 sequential outputs, it
is possible to predict all subsequent values, but working backwards from one
value to a seed is probably ... difficult. But feel free to try it:

> def find_seed(value):
>     import random
>     for seed in range(1000000): # Restrict to not-too-many tries
>         random.seed(seed)
>         if random.random() == value: return seed
> I didn't say it'd be efficient to run, but hey, it's easy. 

Using the same logic, it's "easy" to break sha512 checksums. Just iterate
over all possible files the same size as the target file until you find one
which has the same checksum.

> I've no 
> idea how many bits of internal state the default Python RNGs use, 

A *lot*.

import random
print(len(random.getstate()[1]))  # 625 words of Mersenne Twister state

Enjoy :-)

> But actually, it would be really simple to monkey-patch. 

Like I said :-)

But just because you can monkey-patch, doesn't mean you will. Perhaps you
have a policy not to, or just don't like doing so because you're still
recovering from the pain of working with Ruby developers.
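For the record, the monkey-patching is only a few lines with unittest.mock;
some_function() here is a simplified stand-in for the toy example quoted
above:

```python
import random
from unittest import mock

def some_function(value):
    # Simplified stand-in for the toy example in the thread.
    random.seed(value)
    if random.random() == 0.25000375:
        return "rare branch"
    return "common branch"

# Patch random.random so the test reaches the rare branch without
# hunting for a magic seed value:
with mock.patch("random.random", return_value=0.25000375):
    print(some_function(0))  # -> rare branch
```

The patch is undone automatically when the with-block exits, which is one
reason mock.patch is less painful than hand-rolled monkey-patching.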


> And in any 
> non-toy situation, there's probably something more significant being
> tested here... unless you really are probing a random number generator
> or something, in which case you probably know more about its
> internals.
>> I like this style:
>> assert x in (a, b, c)
>> if x == a:
>>     do_this()
>> elif x == b:
>>     do_that()
>> else:
>>     assert x == c
>>     do_something_else()
> If all your branches are simple function calls, I'd be happy with a
> KeyError instead of an AssertionError or RuntimeError.
> {a:do_this, b:do_that, c:do_something_else}[x]()

Nah, can't do that, because the "that's an old hack and an obsolete Python
idiom, you have to use nested ternary ifs" crowd will get angry at you.


Translating a chain of if...elif to a dict lookup is a good, hopefully not
obsolete, Python idiom. But it doesn't work so well when the blocks are
blocks of code, and it fails completely if any of the blocks contains a
return, break or continue.
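A sketch of the dict dispatch, with a KeyError playing the role of the
assert (all the names and values here are illustrative):

```python
def do_this():
    return "this"

def do_that():
    return "that"

def do_something_else():
    return "other"

a, b, c = "a", "b", "c"   # illustrative sentinel values
dispatch = {a: do_this, b: do_that, c: do_something_else}

x = b
try:
    result = dispatch[x]()  # unknown x raises KeyError, like a failed assert
except KeyError:
    raise AssertionError("unexpected value: %r" % (x,))
print(result)  # -> that
```

Note that each dispatch target runs in its own frame: a return, break or
continue inside do_that() cannot affect the caller's loop or function, which
is exactly the limitation described above.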

>>>> Or is that insufficiently paranoid?
>>> With good tests, you're probably fine.
>> Is it possible to be too paranoid when it comes to tests?
> Yeah, it is. 

I was actually trying to be funny. Sorry for the fail.

