Cost of raising exceptions.

Alex Martelli aleaxit at yahoo.com
Tue May 29 17:29:24 CEST 2001


"Allan Crooks" <amc1 at dcs.qmw.ac.uk> wrote in message
news:mailman.991144423.30414.python-list at python.org...
    ...
>  Are raising exceptions a costly operation?

Up to a point.

> But since that has a high overhead, the hasNext() method is a way of
> detecting the end. My query is, if developing something similar for
> Python, is raising an exception a good way to halt or not?

Just consider that THIS is how Python's for statement ALWAYS
works internally: it exits when it gets an IndexError from
the sequence it's indexing...
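To see that concretely (a minimal sketch; the class name is made up): any object whose __getitem__ raises IndexError works directly in a for loop, because the loop swallows that exception as its normal termination:

```python
class Squares:
    """Legacy sequence protocol: the for statement calls
    __getitem__ with 0, 1, 2, ... and stops at the first
    IndexError."""
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if i >= self.n:
            raise IndexError(i)   # the for loop catches this and stops
        return i * i

print([x for x in Squares(5)])   # prints [0, 1, 4, 9, 16]
```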

Let's take a concrete example.  For some peculiar reason
I want to compute summations of numbers of the form 17/d,
where d = i modulo X, for i from 0 to N-1 included, but
simply skipping the cases where the divisor d would be 0
(which will be one in every X numbers, of course).  How
slow would it be to do it the simplest obvious way:

def with_exceptions(N,x):
    result = 0L
    for i in range(N):
        try:
            result += 17/(i%x)    # raises ZeroDivisionError when i%x == 0
        except ZeroDivisionError:
            pass
    return result

versus a way that checks carefully instead:

def without_them(N,x):
    result = 0L
    for i in range(N):
        divisor = i%x
        if divisor != 0:
            result += 17/divisor
    return result

???  It's simpler to measure (with care:-) than to worry
and/or wonder...:

import time
def dotimes(N,x):
    start1 = time.clock(); r1=with_exceptions(N,x); stend1 = time.clock()
    start2 = time.clock(); r2=without_them(N,x); stend2 = time.clock()
    print "%d/%d: %d==%d, with ex:%.2f without:%.2f" % (
        N, x, r1, r2, stend1-start1, stend2-start2)

if __name__=='__main__':
    dotimes(1000*1000, 17)
    dotimes(1000*1000, 7)
    dotimes(1000*1000, 3)

D:\py21>python execos.py
1000000/17: 3000016==3000016, with ex:11.64 without:9.14
1000000/7: 5571423==5571423, with ex:16.14 without:8.62
1000000/3: 8333325==8333325, with ex:26.15 without:7.40
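On today's Pythons the same measurement needs a couple of adjustments (a sketch, not the original script: time.clock was removed in Python 3.8, so time.perf_counter stands in for it, and // gives the integer division that plain / gave in 2.1):

```python
import time

def with_exceptions(N, x):
    result = 0
    for i in range(N):
        try:
            result += 17 // (i % x)   # ZeroDivisionError once every x numbers
        except ZeroDivisionError:
            pass
    return result

def without_them(N, x):
    result = 0
    for i in range(N):
        divisor = i % x
        if divisor != 0:
            result += 17 // divisor
    return result

def dotimes(N, x):
    t0 = time.perf_counter(); r1 = with_exceptions(N, x); t1 = time.perf_counter()
    t2 = time.perf_counter(); r2 = without_them(N, x); t3 = time.perf_counter()
    print("%d/%d: %d==%d, with ex:%.2f without:%.2f"
          % (N, x, r1, r2, t1 - t0, t3 - t2))

if __name__ == '__main__':
    dotimes(1000 * 1000, 17)
```

The absolute times will differ wildly from the 2001 numbers above, of course; only the ratio between the two columns is of interest.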

So: if we're having an exception about one time in 17, the
cost of the exceptions-in-loop approach is roughly 25-30%.  If
one time in 7, exceptions are about twice as costly as
checks in this case.  If one in 3, exceptions cost over
three times as much.

But this is for reasonably light other-operations and rather
frequent 'exceptions'.  In most cases, the real costs are
not so terrible -- particularly if the checking approach
should incur substantial overhead at each pass through
the loop while the exception approach only pays (albeit
a higher cost) on rare occasions... you might even end
up with exceptions being faster!-)
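A common illustration of that trade-off (function names made up for the sketch) is counting items in a dictionary, where the "check" costs a lookup on every pass but the exception fires only once per distinct key:

```python
def count_checking(words):
    # "Look before you leap": test membership on EVERY word
    counts = {}
    for w in words:
        if w in counts:
            counts[w] += 1
        else:
            counts[w] = 1
    return counts

def count_with_exceptions(words):
    # "Easier to ask forgiveness": pay for a KeyError only on
    # the FIRST occurrence of each word
    counts = {}
    for w in words:
        try:
            counts[w] += 1
        except KeyError:
            counts[w] = 1
    return counts
```

When the stream is long and the vocabulary small, the KeyError is rare, and the try/except version can come out ahead of the version that checks every time.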

In the end, you're better off programming with the single
main goal of SIMPLICITY.  If somebody may want to check
"is there more stuff?" WITHOUT consuming a next-item, then
giving them a .hasNext() method to call will be a good
thing, saving the contortions of trying to get-next, and
undo, within a try/except/else.  That is generally a better
goal to keep in mind than marginal performance issues...
*KEEP IT SIMPLE* and be happy.  Exceptions will be good
if they enable for-loop usage, too... client-code for
iteration doesn't get much simpler than that:-).  But for
that you need specifically to raise IndexError within
__getitem__ when you have no more items (it gets better
in 2.2 with iterators, though... you will no longer
have to fake being random-access when you aren't:-).
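With the 2.2-and-later iterator protocol (spelled __next__ from Python 3 on; the class here is a made-up sketch), termination is still signalled by an exception, StopIteration, but there's no pretence of random access:

```python
class CountDown:
    """Iterator protocol: __iter__ returns self, and __next__
    raises StopIteration when exhausted -- the for statement
    catches that exception as its normal end-of-loop."""
    def __init__(self, start):
        self.current = start
    def __iter__(self):
        return self
    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

print(list(CountDown(3)))   # prints [3, 2, 1]
```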


Alex
