[docs] [issue20727] Improved roundrobin itertools recipe

Gareth Rees report at bugs.python.org
Tue Feb 25 11:55:04 CET 2014


Gareth Rees added the comment:

I suspect I messed up the timing I did yesterday, because today I find that 100 isn't large enough. Here's what I found today (in Python 3.3):

    >>> from timeit import timeit
    >>> test = [tuple(range(300))] + [()] * 100
    >>> timeit(lambda:list(roundrobin1(*test)), number=10000) # old recipe
    8.386148632998811
    >>> timeit(lambda:list(roundrobin2(*test)), number=10000) # new recipe
    16.757110453007044
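
For readers following along without the patch, here is roughly what's being compared — roundrobin1 stands in for the cycle/islice recipe in the current itertools docs, and roundrobin2 is a sketch of a zip_longest-based rewrite. The exact code under discussion is attached to the issue, so treat both of these as approximations:

    from itertools import chain, cycle, islice, zip_longest

    def roundrobin1(*iterables):
        # Old docs recipe (credited to George Sakkis): exhausted inputs are
        # dropped from the cycle, so total work is proportional to the
        # number of items actually yielded.
        pending = len(iterables)
        nexts = cycle(iter(it).__next__ for it in iterables)
        while pending:
            try:
                for nxt in nexts:
                    yield nxt()
            except StopIteration:
                pending -= 1
                nexts = cycle(islice(nexts, pending))

    def roundrobin2(*iterables):
        # Sketch of the proposed recipe: interleave with zip_longest and
        # filter out the padding; every round visits every input, even the
        # exhausted ones.
        sentinel = object()
        return (x for x in chain.from_iterable(
                    zip_longest(*iterables, fillvalue=sentinel))
                if x is not sentinel)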

The new recipe is about twice as slow as the old in this case, and its relative performance gets worse as you increase the length of the long iterable (the 300 above).

I should add that I do recognise the new recipe is better for nearly all cases (it's simpler as well as faster), but I want to point out an important feature of the old recipe: it discards iterables as they are exhausted, giving it worst-case O(n) performance (albeit slow), whereas the new recipe is worst-case O(n^2). As we found out with hash tables, worst-case O(n^2) behaviour can be a problem when inputs are untrusted, so there are use cases where people might legitimately prefer an O(n) solution even if it's a bit slower in common cases.
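
To make the worst case concrete: with a zip_longest-style recipe like the sketch above, exhausted inputs still contribute a fill value on every round, so the work done is proportional to (number of iterables) * (length of the longest one) rather than to the number of items yielded. A quick way to see this (again assuming the sketched roundrobin2):

    from itertools import chain, zip_longest

    def skipped_padding(*iterables):
        # Count the fill values a zip_longest-based round robin has to
        # generate and then throw away.
        sentinel = object()
        flat = chain.from_iterable(zip_longest(*iterables, fillvalue=sentinel))
        return sum(1 for x in flat if x is sentinel)

    test = [tuple(range(300))] + [()] * 100
    print(skipped_padding(*test))  # 30000 discarded fill values for 300 real items

Scale both the number of empty iterables and the length of the long one with n and the discarded work grows as n^2, which is the quadratic behaviour described above.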

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue20727>
_______________________________________

