[Python-Dev] An issue recently brought up in patch
#872326 (generator expression)
Tim Peters
tim.one at comcast.net
Mon Mar 22 22:54:14 EST 2004
[Greg Ewing]
>> ...
>> I'm disturbed by the number of special rules we seem to be needing to
>> make up in the name of getting generator expressions to DWIM.
Yup.
>> First we have free variables getting captured, which is unprecedented
>> anywhere else;
Not sure that's it.  In some sense it's also arbitrary that Python decides in

    def f(x=DEFAULT_X_VALUE):
        ...

to capture the binding of DEFAULT_X_VALUE (which is a free variable so far
as the definition of f is concerned) at the time f is defined and reuse it
on each call; it would have been possible to define the language to use
whatever binding is current each time f is called.
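For concreteness, a small sketch of that capture (my illustration; the
rebinding of DEFAULT_X_VALUE is made up for demonstration):

    DEFAULT_X_VALUE = 1

    def f(x=DEFAULT_X_VALUE):
        return x

    DEFAULT_X_VALUE = 2
    print(f())  # prints 1: the default captured the binding in effect
                # when f was defined, not the one in effect at call time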
Outside of explicitly burying something inside a lambda body, there's no
precedent in Python for "delaying" evaluation of anything in a Python
expression either. So generator expressions aren't like any non-lambda
expressions in Python either: the time at which their guts get evaluated
can be arbitrarily far removed from the time the statement holding the guts
gets executed.
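A small sketch of that delay (mine, not from the thread), using a
hypothetical noisy() helper to show when evaluation actually happens:

    def noisy(i):
        print("evaluating %d" % i)
        return i

    gen = (noisy(i) for i in range(3))  # nothing printed yet: the guts
                                        # haven't been evaluated
    results = list(gen)                 # all three "evaluating" lines print
                                        # here, when gen is finally consumed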
>> now we have some iterators being treated more equally
>> than others.
>>
>> I'm getting an "architecture smell" here. Something is wrong
>> somewhere, and I don't think we're tinkering in the right place to
>> fix it properly.
When scopes and lifetimes get intricate, it's easier to say what you mean in
Scheme (which is a good argument for not letting scopes and lifetimes get
intricate <wink>).
[Guido]
> I'm not disagreeing -- I was originally vehemently against the idea of
> capturing free variables, but Tim gave an overwhelming argument that
> whenever it made a difference that was the desired semantics.
Nah, and that's a big part of the problem we're having here: I didn't make
an argument, I just trotted out non-insane examples. They all turned out to
suck under delayed-evaluation semantics, to such an extent that wholly
delayed evaluation appeared to be a downright foolish choice. But decisions
driven *purely* by use cases can be mysterious. I was hoping to see many
more examples, on the chance that a clarifying principle would reveal
itself.
> But I always assumed that the toplevel iterable would be different.
Jiwon's example certainly suggests that it must be. But why <0.7 wink>?
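A sketch of the kind of case where the timing matters (my own example, not
Jiwon's):

    seq = [1, 2, 3]
    gen = (x * 2 for x in seq)  # if the outermost iterable is evaluated now,
                                # gen holds on to this particular list object
    seq = []                    # rebind the name afterward
    print(list(gen))            # [2, 4, 6] if seq was evaluated when gen was
                                # created; [] if seq is looked up only when
                                # gen is first iterated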