
On Friday 17 October 2003 11:58 pm, Phillip J. Eby wrote: ...
> Hm. What if list comprehensions returned a "lazy list", such that if you took an iterator of it, you'd get a generator-iterator, but if you tried to use it as a list, it would populate itself? Then there'd be no need to ever *not* use a listcomp, and only one syntax would be necessary.
> More specifically, if all you did with the list was iterate over it, and then throw it away, it would never actually populate itself. The principal drawback to this idea from a semantic viewpoint is that listcomps can be done over expressions that have side-effects. :(
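[A rough sketch of what such a "lazy list" might look like, for concreteness. The name LazyList and its interface are invented for illustration; nothing like it exists in the stdlib. It stays lazy under pure iteration and only materializes under list-like access -- note that the factory may be called more than once, which is exactly the side-effect worry raised above:]

```python
class LazyList:
    """Hypothetical lazy list: iterates lazily, populates on demand."""

    def __init__(self, make_iter):
        self._make_iter = make_iter  # no-arg factory returning a fresh iterator
        self._data = None            # real list, created only when forced

    def _force(self):
        # List-like access (len, indexing) populates the real list.
        if self._data is None:
            self._data = list(self._make_iter())
        return self._data

    def __iter__(self):
        if self._data is not None:
            return iter(self._data)
        # Pure iteration never populates -- like a generator expression.
        # Caveat: re-running the factory re-evaluates any side effects.
        return self._make_iter()

    def __len__(self):
        return len(self._force())

    def __getitem__(self, i):
        return self._force()[i]


lc = LazyList(lambda: (x * x for x in range(4)))
print(list(lc))   # lazy iteration: [0, 1, 4, 9]
print(len(lc))    # forces population: 4
```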
The big problem I see is e.g. as follows:

    l1 = range(6)
    lc = [x for x in l1]
    for a in lc:
        l1.append(a)

(or insert the LC inline in the for, same thing either way I'd sure hope). Today, this is perfectly well-defined, since the LC "takes a snapshot" when evaluated -- l1 becomes a 12-element list, as if I had done l1 *= 2. But if lc _WASN'T_ "populated"... shudder... it would be as nonterminating as "for a in l1:" with the same loop body.

Unfortunately, it seems to me that turning semantics from strict to lazy is generally unfeasible because of such worries (even if one could somehow ignore side effects). Defining semantics as lazy in the first place is fine: e.g. "for a in iter(l1):" has always produced a nonterminating loop for that body (iter has always been lazy), and people just don't use it. But once the semantics have been defined as strict, going lazy is probably unfeasible. Pity...

Alex
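[The strict/lazy difference above can be demonstrated directly. This uses Python 3 syntax, so range() is wrapped in list(); the iteration cap in the second half is my addition, standing in for the loop that would otherwise never terminate:]

```python
# Strict semantics (today's listcomp): lc is a snapshot of l1 taken
# when the comprehension is evaluated, so the loop terminates and l1
# ends up doubled, exactly like l1 *= 2.
l1 = list(range(6))
lc = [x for x in l1]
for a in lc:
    l1.append(a)
print(l1)  # [0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5]

# Lazy semantics: a list iterator is a live view of l2, so it sees
# every element appended during the loop and never runs out. A cap
# of 20 iterations stands in for "nonterminating".
l2 = list(range(6))
it = iter(l2)
appended = 0
for a in it:
    if appended >= 20:
        break              # without this cap, the loop runs forever
    l2.append(a)
    appended += 1
print(len(l2))  # 26: the iterator never caught up with the appends
```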