
We agreed yesterday that the dictionary() constructor would accept a list of two-tuples (strictly speaking, an iterable object of iterable objects of length 2). That plus list comprehensions pretty much covers the territory of dict comprehensions:
    print dictionary([(i, chr(65 + i)) for i in range(4)])
    {0: 'A', 1: 'B', 2: 'C', 3: 'D'}
Jeremy

[Jeremy Hylton]
FYI, this is checked in now.
Wow -- that's *exactly* what it prints. You got your own time machine now?

While it covers the semantics, the pragmatics may be off, since listcomps produce genuine lists, and so e.g.

    dictionary([(key, f(key)) for key in file('huge')])

may require constructing an unboundedly large list of twoples before dictionary() sees the first pair. dictionary() per se doesn't require materializing a giant list in one gulp.
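The materialization concern above can be sketched in today's Python, where dict() plays the role of dictionary() and generator expressions (a later addition) hand the constructor one pair at a time instead of building the whole list first. A minimal sketch, with an in-memory stand-in for the huge file:

```python
def pairs(lines):
    """Yield (key, value) pairs lazily, one item at a time."""
    for i, line in enumerate(lines):
        yield i, line.strip()

# A list comprehension builds the full list of tuples before dict() runs:
eager = dict([(i, s) for i, s in pairs(["a", "b", "c"])])

# A generator expression lets dict() consume pairs as they are produced,
# so no intermediate list of twoples is ever materialized:
lazy = dict((i, s) for i, s in pairs(["a", "b", "c"]))

assert eager == lazy == {0: "a", 1: "b", 2: "c"}
```

Both spellings build the same dict; only the peak memory behavior differs.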

Tim Peters wrote:
Cool!
One way or another, you'll use up a giant chunk or two of data on the heap... I'd suggest adding a new builtin huge_file_as_mapping_apply('file', f) ;-)

Seriously, this goes down the path of lazy evaluation of expressions. Not sure whether this is the right path to follow, though (can cause brain damage due to overloaded builtin debuggers, IMHO).

BTW, looks like I can finally get rid of the dict() builtin I have in mxTools, which is Good News!

--
Marc-Andre Lemburg, CEO eGenix.com Software GmbH
Consulting & Company: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/

[M.-A. Lemburg]
... Seriously, this goes down the path of lazy evaluation of expressions.
Lazy sequences specifically, but that's been a tension in Python all along (it started with the for/__getitem__ protocol, and nothing we do from here on in will ever be as much of a hack as xrange() was for that <0.9 wink>).
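The legacy for/__getitem__ protocol mentioned above can be sketched: before the iterator protocol existed, a for loop simply called __getitem__(0), __getitem__(1), ... until IndexError, which is exactly what made lazy sequences like xrange() possible. CPython still supports this fallback, so the sketch runs today:

```python
class Squares:
    """A lazy sequence via the legacy for/__getitem__ protocol:
    'for' calls __getitem__(0), __getitem__(1), ... until IndexError."""
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if i >= self.n:
            raise IndexError(i)  # signals end of the sequence to 'for'
        return i * i

# No __iter__ defined, yet iteration works via the old protocol:
result = [x for x in Squares(4)]
assert result == [0, 1, 4, 9]
```

Nothing is computed until the loop asks for it, which is the same laziness xrange() bolted onto that protocol.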
Not sure whether this is the right path to follow though (can cause brain damage due to overloaded builtin debuggers, IMHO).
We can introduce "L[...]" for explicitly lazy list comprehensions <wink>.
BTW, looks like I can finally get rid of the dict() builtin I have in mxTools, which is Good News!
It's not quite the same in the details: CVS dictionary(s) works just like

    d = {}
    for k, v in s:
        d[k] = v

In particular, it demands that the elements of s each produce exactly 2 objects, where IIRC the mxTools dict() requires at least 2 (ignoring any after the second). Was ignoring excess objects "a feature", or just exposing an implementation that didn't want to bother to check?
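The exactly-2 requirement is easy to see in today's dict(), which kept this behavior: an item with excess objects raises rather than being silently truncated the way the mxTools dict() reportedly allowed. A small sketch:

```python
# The equivalent loop from above, applied to well-formed pairs:
ok = {}
for k, v in [("a", 1), ("b", 2)]:
    ok[k] = v
assert ok == dict([("a", 1), ("b", 2)])

# An element producing three objects is rejected, not truncated:
try:
    dict([("a", 1, "extra")])
except ValueError:
    pass  # e.g. "dictionary update sequence element #0 has length 3; 2 is required"
else:
    raise AssertionError("excess elements were not rejected")
```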

Tim Peters wrote:
... and add a lazy list comprehension object (e.g. xlistcompobj)? Cool :-)
It was a feature; it's just that I forgot what I needed it for ;-)

There are quite a few things in mxTools which I've never *really* needed. The single most used API in the package was and still is irange(), which returns a sequence of (i, obj[i]) pairs. dict() and invdict() are also rather popular ones. Most of the others tend to solve performance problems in some inner loops of applications I wrote.
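The irange() described above can be approximated in plain Python. This is a hypothetical sketch of its behavior as stated in the message (pairing each index with obj[i]), not the actual mxTools implementation, which is written in C; the optional indices argument is an assumption about its signature:

```python
def irange(obj, indices=None):
    """Hypothetical sketch of mxTools irange(): return a list of
    (i, obj[i]) pairs, optionally restricted to the given indices."""
    if indices is None:
        indices = range(len(obj))
    return [(i, obj[i]) for i in indices]

assert irange("abc") == [(0, "a"), (1, "b"), (2, "c")]
assert irange([10, 20, 30], [2, 0]) == [(2, 30), (0, 10)]
```

In modern Python the common case is covered by list(enumerate(obj)).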

participants (4)

- Jeremy Hylton
- M.-A. Lemburg
- Paul Svensson
- Tim Peters