Guido> It's highly likely that the implementation will have to create a
Guido> generator function under the hood, so they will be safely
Guido> contained in that frame.
Which suggests they aren't likely to be a major performance win over list comprehensions. If nothing else, they would push the crossover point between list comprehensions and iterator comprehensions toward much longer lists.
Is performance the main reason this addition is being considered? They don't seem any more expressive than list comprehensions to me.
They are more expressive in one respect: you can't use a list comprehension to express an infinite sequence (that's truncated by the consumer).
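A small sketch of that point (the generator and names here are illustrative, not part of the proposal): a generator can describe an unbounded sequence and let the consumer decide where to stop, whereas a list comprehension must materialize the whole list first.

```python
from itertools import count, islice

def squares():
    # Yields 0, 1, 4, 9, ... with no end; the consumer truncates it.
    for x in count():
        yield x * x

# Take only the first five values. The equivalent list comprehension,
# [x*x for x in count()], would never terminate.
first_five = list(islice(squares(), 5))
# first_five == [0, 1, 4, 9, 16]
```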
They are more efficient in a related situation: a list comprehension buffers all its items before the next processing step begins, while an iterator comprehension doesn't need to do any buffering. So iterator comprehensions win when you're pipelining operations, just as Unix pipes are a huge win over temporary files in some situations. This is particularly important when the consumer is some accumulator like 'average' or 'sum'. Whether there is an actual gain in speed depends on how large the list is. You should be able to time examples like
sum([x*x for x in R])
def gen(R):
    for x in R:
        yield x*x

sum(gen(R))
for various lengths of R. (The latter would be a good indication of how fast an iterator generator could run.)
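One way to run that comparison (a rough sketch; the sizes and repeat counts are arbitrary choices, not from the discussion above) is with the standard timeit module:

```python
import timeit

def gen(R):
    # Generator version: yields squares one at a time, no buffering.
    for x in R:
        yield x * x

for n in (10, 1000, 100000):
    R = range(n)
    # List comprehension buffers the whole list before sum() starts.
    t_list = timeit.timeit(lambda: sum([x * x for x in R]), number=100)
    # Generator feeds sum() one value at a time.
    t_gen = timeit.timeit(lambda: sum(gen(R)), number=100)
    print(n, t_list, t_gen)
```

Both versions compute the same result; the interesting part is where the crossover point falls as n grows.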
--Guido van Rossum (home page: http://www.python.org/%7Eguido/)