[Python-porting] Strange behavior of eval() in Python 3.1

Bo Peng ben.bob at gmail.com
Sun Aug 29 16:14:32 CEST 2010


>>>>> dd = {'a': {1: 0.1, 2: 0.2}}
>>>>> print(eval("[a[x] for x in a.keys()]", {}, dd))
>> Traceback (most recent call last):
>>  File "<stdin>", line 1, in <module>
>>  File "<string>", line 1, in <module>
>>  File "<string>", line 1, in <listcomp>
>> NameError: global name 'a' is not defined

> List comprehensions are now properly scoped, so having one in the
> global scope, like you do in this case, will look up the names in
> the global namespace.
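
If I read that correctly, only the outermost iterable is evaluated
where eval() applies my locals dictionary; names used inside the
comprehension body skip that mapping and fall through to globals. A
minimal sketch of what that seems to mean:

dd = {'a': {1: 0.1, 2: 0.2}}

# 'a.keys()' is evaluated at the top level of the eval'd code, so it
# finds 'a' in the locals dict (dd); 'a[x]' runs inside the implicit
# comprehension scope, where the lookup goes straight to the globals
# dict ({}) and fails with the NameError shown above.
try:
    eval("[a[x] for x in a.keys()]", {}, dd)
except NameError as err:
    print(err)

# Passing dd as the globals mapping instead lets the body find 'a'.
print(eval("[a[x] for x in a.keys()]", dd))    # [0.1, 0.2]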

Could you please provide more details (like a link to the relevant
documentation) about what is going on here? From what I can see, I am
evaluating an expression in a dictionary with 'a' defined (and x is an
iterator variable), so why does simuPOP have to look for it in the
global namespace? I mean, if 'a[1]', 'a[2]', and '[a[1], a[2]]' are
all valid expressions, why not '[a[x] for x in [1,2]]'? Should not
"[a[1], a[2]]" always return the same result as "[a[x] for x in
[1,2]]"? The fact that

dd = {'a': {1:0.1, 2:0.2}}
dd1 = {'a': {1:1.1, 2:1.2}}
eval('a[1], a[2], list(a.keys())', dd1, dd)

returns (0.1, 0.2, [1, 2]) is REALLY puzzling.
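
Even stranger, if that nested-scope reading is right, a comprehension
evaluated against both dictionaries should take its keys from the
locals (dd) but its values from the globals (dd1). A minimal sketch:

dd = {'a': {1: 0.1, 2: 0.2}}
dd1 = {'a': {1: 1.1, 2: 1.2}}

# 'a[1]', 'a[2]' and 'a.keys()' are all top-level lookups, so the
# locals dict (dd) shadows the globals dict (dd1) completely.
print(eval('a[1], a[2], list(a.keys())', dd1, dd))   # (0.1, 0.2, [1, 2])

# Inside the comprehension body 'a' is resolved against the globals
# dict (dd1), while the outer 'a.keys()' still comes from dd.
print(eval('[a[x] for x in a.keys()]', dd1, dd))     # [1.1, 1.2]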

Bo

