
Stefan Pochmann writes:
 > Stephen J. Turnbull wrote:
 > > But you didn't, really.
 >
 > Yes I did, really. Compare (in fixed-width font):
I had no trouble reading the Python as originally written. Obviously you wrote a comprehension that gets the right answer, and uses the bodies of the lambdas verbatim. The point is that you focus on the lambdas, but what I'm interested in is the dataflows (the implicitly constructed iterators). In fact you also created a whole new subordinate data flow that doesn't exist in the original (the [x+1]). I haven't checked, but I bet that a complex comprehension in your style will need to create a singleton iterable per source element for every mapping except the first. One point in favor of doing this calculation with chained iterators is to avoid creating that garbage.

The nested genexp I proposed creates the same iterators as the proposed method chain, and iterates them the same way, implicitly composing the functions in a one-pass algorithm.
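The exact code from the thread isn't quoted here, but the two shapes being compared can be sketched with a small increment-then-keep-evens pipeline (the data and steps are illustrative, not the original example):

```python
# Illustrative data; the thread's actual example is not quoted here.
data = [3, 4, 5, 6]

# Comprehension in the style described above: each mapping step after
# the first is threaded through a singleton iterable ([x + 1]),
# creating one small throwaway list per source element.
comp = [y for x in data for y in [x + 1] if y % 2 == 0]

# Nested genexp: the same iterators a fluent method chain would build,
# composed implicitly into a single lazy pass with no per-element
# garbage.
gen = list(y for y in (x + 1 for x in data) if y % 2 == 0)

assert comp == gen == [4, 6]
```

Both spell out the same dataflow; the difference is only in how many intermediate objects get allocated along the way.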
You do have the same steps (increment and keep evens) as the original and execute them in the same order, but you *wrote* the first transformation *before* the source, nested instead of flat, which reads inside-out in zig-zag fashion.
We are well-used to reading parenthesized expressions, though.

Without real-world examples, I don't believe the fluent idiom has enough advantages over comprehensions and genexps to justify support in the stdlib, especially given that it's easy to create your own dataflow objects. We don't have a complete specification for a generic facility to be put into the stdlib, except the OP's most limited proposal to add iter, map, filter, and to_list methods to iterators (the first and last of which are actually pointless). And I don't think even that would get support from the core devs.

It's also not obvious to me that the often-awkward comprehension syntax, which puts the element-wise transformation first, isn't frequently optimal. In "log(gdp) for gdp in gdpseries", economists don't really care about the dataflow, as it's the same in many, many cases. We care about the log transformation, as that's what differentiates this model from others. So putting the transformation before the data source makes a lot of sense for readability (in what is admittedly a case I chose to make the point).

I'll grant that putting the source ("x in iterable") between the mapping ("f(x) for") and the filter ("if g(x)") does create readability issues for nested genexps like the one I suggested, if there are more than one or two such filters.
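For concreteness, a roll-your-own dataflow object of the kind alluded to above takes only a few lines. The class and method names here (Flow, map, filter, to_list) merely echo the OP's proposal and are illustrative, not an actual or proposed API:

```python
class Flow:
    """A minimal, lazy fluent-chaining wrapper (illustrative sketch)."""

    def __init__(self, iterable):
        self._it = iter(iterable)

    def map(self, f):
        # Wrap a genexp so the chain stays lazy and one-pass.
        return Flow(f(x) for x in self._it)

    def filter(self, pred):
        return Flow(x for x in self._it if pred(x))

    def to_list(self):
        # Consume the pipeline in a single pass.
        return list(self._it)

evens = Flow([3, 4, 5, 6]).map(lambda x: x + 1).filter(lambda x: x % 2 == 0).to_list()
assert evens == [4, 6]
```

Under the hood this builds exactly the iterators a nested genexp builds, which is part of the argument that the fluent syntax buys little that comprehensions don't already provide.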
 > Not that bad with so few transformations, but if we want to do a few
 > more, it'll likely get messy.
If it were up to me (it isn't, but I represent at least a few others in this), "likely" doesn't cut it. I mean, I already admitted that as a *possibility*. We want to see a specification, and real applications that benefit *more* from a generic facility like the one proposed than they would from application-specific dataflow objects.

Steve