Question: How efficient is using generators for coroutine-like problems?

Carlos Ribeiro carribeiro at gmail.com
Sun Sep 26 19:50:18 CEST 2004


On Sun, 26 Sep 2004 11:47:03 +0200, Alex Martelli <aleaxit at yahoo.com> wrote:
> i.e. you can get result.append into a local once at the start and end up
> almost 3 times faster than with your approach, but even without this
> you're still gaining handsomely with good old result.append vs your
> preferred approach (where a singleton list gets created each time).

Thanks for the info -- especially the "a = result.append"
trick. It's a good one, and it also helps keep the code less
cluttered.
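For anyone curious, a minimal sketch of the trick (my own illustration, not Alex's exact code): binding the bound method result.append to a local name does the attribute lookup once, instead of on every iteration of the loop.

```python
def build_lines(n):
    # Hypothetical helper showing the "a = result.append" trick:
    # the bound method is looked up once, outside the loop, rather
    # than via result.append(...) on each pass.
    result = []
    append = result.append
    for x in range(n):
        append("%d: this is a test string." % x)
    return "\n".join(result)
```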

> > Now, just because I can do it does not mean it's a good idea :-) For
> > particular cases, a measurement can be done. But I'm curious about the
> > generic case. What is the performance penalty of using generators in
> > situations as the ones shown above?
> 
> Sorry, there's no "generic case" that I can think of.  Since
> implementations of generators, list appends, etc, are allowed to change
> and get optimized at any transition 2.3 -> 2.4 -> 2.5 -> ... I see even
> conceptually no way to compare performance except on a specific case.
> 
> "Generally" I would expect: if you're just looping on the result, a
> generator should _gain_ performance wrt making a list.  Consider cr.py:

That's my basic assumption. BTW, I had an alternative idea: I really
don't need to use "\n".join() at all. All I need is to concatenate the
strings yielded by the generator, with the strings themselves
including the necessary line breaks. I'm no timeit wizard, and a slow
Windows machine is not the most reliable platform for timing tight
loops, but anyway, here are my results:

---- timegen.py ----
def mygen():
    for x in range(100):
        yield "%d: this is a test string.\n" % x

def test_genlist():
    return "".join(list(mygen()))

def test_addstr():
    result = ""
    for x in range(100):
        # there is no append method for strings :(
        result = result + "%d: this is a test string.\n" % x
    return result

def test_gen_addstr():
    result = ""
    for x in mygen(): result = result + x
    return result

I've added the "%" operator because most of the time we will be
working with generated strings, not constants, and I thought it would
give a better idea of the timing.

>python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_genlist()"
1000 loops, best of 3: 698 usec per loop

>python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_addstr()"
1000 loops, best of 3: 766 usec per loop

>python c:\python23\lib\timeit.py -s"import timegen" "timegen.test_gen_addstr()"
1000 loops, best of 3: 854 usec per loop
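For reference, the same kind of measurement can be scripted with the timeit module directly instead of the command line (the figures above are from Python 2.3 on a Windows box, so any numbers you get will differ):

```python
import timeit

def mygen():
    for x in range(100):
        yield "%d: this is a test string.\n" % x

def test_genlist():
    return "".join(list(mygen()))

# Time 1000 calls, best of 3 repeats, matching the command-line runs.
best = min(timeit.Timer(test_genlist).repeat(repeat=3, number=1000))
print("best of 3: %.0f usec per loop" % (best * 1000))
```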

The test has shown that adding strings manually is just a tad slower
than joining. But then I've decided to simplify my tests, and to add
small constant strings. The results have surprised me:

---- timegen2.py ----
def mygen():
    for x in range(100):
        yield "."

def test_genlist():
    return "".join(list(mygen()))

def test_addstr():
    result = ""
    for x in range(100):
        # there is no append method for strings :(
        result = result + "."
    return result

def test_gen_addstr():
    result = ""
    for x in mygen(): result = result + x
    return result

>python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_genlist()"
1000 loops, best of 3: 368 usec per loop

>python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_addstr()"
1000 loops, best of 3: 263 usec per loop

>python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_gen_addstr()"
1000 loops, best of 3: 385 usec per loop

Now, it turns out that for this case, adding strings was *faster* than
joining lines. But in all cases, the generator was slower than adding
strings.
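One caveat (my own observation, not something the measurements above show): repeated concatenation copies the partial result on each step, so in the worst case it is quadratic in the number of pieces, while join is linear. For 100 short strings the per-operation overhead dominates, which is why concatenation wins here, but the picture can flip as the input grows. The two approaches are at least interchangeable output-wise:

```python
def concat(parts):
    # O(n^2) worst case: each + may copy the accumulated string.
    result = ""
    for p in parts:
        result = result + p
    return result

def joined(parts):
    # O(n): the total size is computed first, then copied once.
    return "".join(parts)

parts = ["%d." % x for x in range(100)]
assert concat(parts) == joined(parts)  # identical output either way
```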

Up to this point, the answer to my question is: generators _have_ a
measurable performance penalty, and using a generator to return text
line by line for later concatenation is slower than appending to the
result string line by line. But the difference between testbeds 1 and
2 also shows that the generator is not the dominant factor in
performance -- simply adding some string formatting made the tests run
much slower. Note that I *haven't* done any testing on 2.4, though,
and generators are supposed to perform better in 2.4.

... and I was about to finish it here, but I decided to check another thing.

There's still a catch. Generators were slower because there is an
implicit function call whenever a new value is requested. So I've
added a new test:

---- timegen2.py (continued) ----
def teststr():
    return "."

def test_addstr_funccall():
    result = ""
    for x in range(100):
        result = result + teststr()
    return result

>python c:\python23\lib\timeit.py -s"import timegen2" "timegen2.test_addstr_funccall()"
1000 loops, best of 3: 436 usec per loop

In this case, the code was measurably _slower_ than the generator
version (436 vs 385 usec), even though both add strings. It only
shows how hard it is to optimize this kind of thing -- depending on
the details, the answers can be totally different.
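A way to see why (my reading of the numbers, not a measured fact): each call to teststr() creates and tears down a fresh frame, while the generator's frame is created once and merely resumed per item. Functionally the two are equivalent:

```python
def teststr():
    return "."

def dotgen(n):
    # One frame, resumed n times, instead of n fresh function calls.
    for _ in range(n):
        yield "."

assert "".join(teststr() for _ in range(100)) == "".join(dotgen(100))
```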

My conclusion, as of now, is that using generators in templating
mechanisms is a valuable tool as far as readability is concerned, but
it should not be done solely because of performance concerns. It may
be faster in some cases, but it's slower in the simplest situations.
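To make the templating angle concrete, here is the kind of generator-based rendering I have in mind (a made-up sketch; the tag names and the helper are mine):

```python
def render_list(items):
    # Each yielded piece carries its own newline, so the caller only
    # has to concatenate -- no "\n".join() needed.
    yield "<ul>\n"
    for item in items:
        yield "  <li>%s</li>\n" % item
    yield "</ul>\n"

html = "".join(render_list(["first", "second"]))
print(html)
```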

-- 
Carlos Ribeiro
Consultoria em Projetos
blog: http://rascunhosrotos.blogspot.com
blog: http://pythonnotes.blogspot.com
mail: carribeiro at gmail.com
mail: carribeiro at yahoo.com
