On Apr 28, 2011, at 12:59 PM, Guido van Rossum wrote:
On Thu, Apr 28, 2011 at 12:54 AM, Tarek Ziadé <firstname.lastname@example.org> wrote:
In my opinion assert should be avoided completely anywhere else than
in the tests. If this is a wrong statement, please let me know why :)
I would turn that around. The assert statement should not be used in
unit tests; unit tests should use self.assertXyzzy() always. In
regular code, assert should be about detecting buggy code. It should
not be used to test for error conditions in input data. (Both these
can be summarized as "if you still want the test to happen with -O,
don't use assert.")
You're both right! :) My take on "assert" is "don't use it, ever".
assert is supposed to be about conditions that never happen. So consider the few cases where I might be tempted to use it, and why each one goes wrong:
If I use it to enforce a precondition, it's wrong because under -OO my preconditions won't be checked and my input might be invalid.
If I use it to enforce a postcondition, then my API's consumers have to occasionally handle this weird error, except it won't be checked under -OO so they won't be able to handle it consistently.
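Here's a small sketch of that first failure mode, using a hypothetical set_timeout function just for illustration. The same precondition check silently disappears when the interpreter runs with -O:

```python
import subprocess
import sys

# A hypothetical function whose precondition is "enforced" with assert.
code = '''
def set_timeout(seconds):
    assert seconds > 0, "timeout must be positive"
    return seconds

try:
    print("accepted:", set_timeout(-5))
except AssertionError:
    print("rejected")
'''

# Run the identical snippet with and without -O.
normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True).stdout.strip()
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True).stdout.strip()

print(normal)     # the assert fires: "rejected"
print(optimized)  # under -O the assert is compiled away: "accepted: -5"
```

So the invalid input is rejected or accepted depending on a command-line flag the library author doesn't control.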
If I use it to make assertions about internal state during a computation, then I introduce an additional code path that is untested (at the very least, untested under -OO) and probably undocumented (did I remember to say "and raises AssertionError when..." in the docstring?), where I get an exception instead of a result when this "bad" thing happens.
If that's an important failure mode, then there ought to be a documented exception, which the computation's consumers can deal with.
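A minimal sketch of that alternative, with hypothetical names: raise a documented exception, which survives -O and -OO and which callers can actually catch:

```python
class NegativeTotalError(ValueError):
    """Raised when the running total goes negative (a documented failure mode)."""

def running_total(values):
    """Sum values, failing loudly on a negative intermediate total.

    Raises:
        NegativeTotalError: if any intermediate sum is negative.
    """
    total = 0
    for v in values:
        total += v
        # An explicit if/raise is checked under every optimization level,
        # unlike assert, and it appears in the documented API.
        if total < 0:
            raise NegativeTotalError("total went negative: %d" % total)
    return total

print(running_total([3, 2, -1]))  # 4
```

Consumers can now write `except NegativeTotalError:` and rely on it regardless of how the interpreter was started.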
If it really should "never happen", then I really should have just written some unit tests verifying that it doesn't happen in any case I can think of. And I shouldn't be writing code to handle cases I can't come up with any way to exercise, because how do I know that it's going to do the right thing? (If I had a dollar for every 'assert' message that didn't have the right number of arguments to its format string, etc.)
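That format-string failure mode is easy to reproduce, because the message expression of an assert is only evaluated once the condition has already failed (frob is a hypothetical name):

```python
def frob(x):
    # Bug: two %d placeholders but only one argument. It goes unnoticed
    # as long as the assertion passes, because the message expression is
    # only evaluated on the failure path.
    assert x > 0, "bad x: %d (limit was %d)" % (x,)
    return x * 2

print(frob(3))  # 6 -- the broken message is never evaluated

try:
    frob(-1)
except TypeError as e:
    # Instead of the intended AssertionError, the message itself blows up:
    print("TypeError:", e)  # "not enough arguments for format string"
```

The "never happens" path is exactly the one nobody ever ran, so the bug ships.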
Also, when things that should "never happen" do actually happen in real life, is a random exception that interrupts the process actually an improvement over just continuing on with some potentially bad data? In most cases, no, it really isn't, because by blowing up you've removed the ability of the user to take corrective action or do a workaround. (In the cases where blowing up is better because you're about to do something destructive, again, a test seems in order.)
My Python code is very well documented, which means that docstrings impose a significant memory overhead at runtime. That's really my only interest in -OO: reducing the memory footprint of Python processes by dropping dozens of megabytes of library documentation from each process. The fact that it also changes the semantics of 'assert' is an unfortunate distraction.
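For anyone who hasn't watched it happen, here is the -OO docstring-stripping in action (a toy snippet, not real library code):

```python
import subprocess
import sys

# A toy function with a docstring; under -OO docstrings are discarded
# at compile time, so __doc__ becomes None.
code = '''
def documented():
    """Imagine several kilobytes of prose living here."""
print(documented.__doc__ is None)
'''

normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True).stdout.strip()
stripped = subprocess.run([sys.executable, "-OO", "-c", code],
                          capture_output=True, text=True).stdout.strip()

print(normal)    # "False" -- the docstring is present
print(stripped)  # "True"  -- -OO removed it, and the memory with it
```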
So the only time I'd even consider using 'assert' is in a throwaway script which might be run once, that I'm not going to write any tests for and I'm not going to maintain, but I might care about just enough to want to blow up instead of calling 'os.unlink' if certain conditions are not met.
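A sketch of that throwaway pattern (cleanup is a hypothetical helper), which is also exactly the kind of check that vanishes under -O:

```python
import os

def cleanup(path):
    # Throwaway-script guard: blow up rather than unlink something
    # precious. Note that under -O this line is compiled away and the
    # function will happily delete anything it is handed.
    assert path.startswith("/tmp/"), "refusing to unlink %r" % (path,)
    os.unlink(path)
```

In a normal (non -O) run, `cleanup("/etc/passwd")` raises AssertionError before `os.unlink` is reached.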
(But then every time I actually use it that way, I realize that I should have dealt with the error sanely and I probably have to go back and fix it anyway.)