I was reading the summary of python-dev for August and saw the exchange about combinatorics. I emailed a few people in the discussion (Guido et al.), who suggested I post to python-dev directly instead.
http://probstat.sourceforge.net "Probability and Statistics Utils for Python"
Running code I released in early August that does combinations, permutations, and Cartesian products over Python lists. It has Python object wrappers around fast C algorithms. It supports iterating over objects, slices, len(), and random access. It was 10+ times faster than the same algorithms done in Python, but I wrote those without generators and I haven't benchmarked since. It uses standard algorithms, largely from the GNU Scientific Library. It lazily produces the cycles in lexicographic order, so the memory it consumes is about twice the size of a shallow copy of the list. A slice of a Permutation object is another Permutation object with smaller internal start/end bounds. I could write more about the implementation, but I'll save my keystrokes unless people actually want to know.
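For anyone who wants to see the expected output without installing probstat, the standard library's itertools produces the same lexicographic, lazily generated streams. This is only a rough pure-Python stand-in for comparison, not probstat's API:

```python
from itertools import combinations, permutations, product

items = [1, 2, 3]

# All generated lazily, in lexicographic order, like probstat's objects.
combos = list(combinations(items, 2))   # 3 choose 2
perms = list(permutations(items))       # all 3! orderings
pairs = list(product(items, [4, 5]))    # Cartesian product

print(combos)  # [(1, 2), (1, 3), (2, 3)]
print(perms)
print(pairs)
```

The itertools iterators, like probstat's C loops, avoid materializing the whole result set; `list()` is used here only to show the output.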
A Power class that lazily evaluates the output would be easy to add. Here is one:

    from __future__ import generators
    from probstat import Combination

    class Power:
        def __init__(self, set):
            self.__set = set

        def __iter__(self):
            return self.setup_iter()

        def setup_iter(self):
            # Yield subsets in order of size: the empty set first,
            # then every k-combination for k = 1 .. len(set).
            for k in range(len(self.__set) + 1):
                if k == 0:
                    # Combination() doesn't allow N choose zero
                    yield []
                else:
                    for c in Combination(self.__set, k):
                        yield c
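The same power-set idea can be sketched in a few lines with today's itertools (this uses the standard library rather than probstat, so it is an equivalent sketch, not the class above):

```python
from itertools import chain, combinations

def power(seq):
    # Lazily yield every subset, smallest first: the empty tuple,
    # then all 1-subsets, 2-subsets, ... up to the full sequence.
    return chain.from_iterable(
        combinations(seq, k) for k in range(len(seq) + 1)
    )

subsets = list(power([1, 2, 3]))
print(subsets)  # 2**3 == 8 subsets, from () to (1, 2, 3)
```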
Errata: I tried using the Cartesian class to mimic nested for loops. It is 3 times slower than depth-3 nested for loops (i, j, k) written in plain Python. That's probably the overhead of the Cartesian class allocating a tuple and unpacking it on each iteration of the loop.
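The suspected overhead is easy to reproduce. Below is a small timing sketch that stands in itertools.product for the Cartesian class (an assumption for illustration; probstat itself is not imported), so the cost of building and unpacking a 3-tuple per step can be compared against plain nested loops:

```python
import timeit
from itertools import product

xs = range(30)

def nested():
    # Plain depth-3 nested loops: no per-step tuple is built.
    total = 0
    for i in xs:
        for j in xs:
            for k in xs:
                total += 1
    return total

def via_product():
    # Each step news up a 3-tuple and unpacks it, like iterating
    # a Cartesian object.
    total = 0
    for i, j, k in product(xs, xs, xs):
        total += 1
    return total

assert nested() == via_product() == 30 ** 3
print("nested loops:", timeit.timeit(nested, number=10))
print("via product: ", timeit.timeit(via_product, number=10))
```

The relative timings will vary by interpreter and version; the point is only that both spellings visit the same 27,000 triples while doing different per-step work.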