The proposal is mostly about scan/accumulate; reduce/fold is a corollary, since a fold is just the last value of a scan. The idea is to have a way of using the previous iteration's output inside a list comprehension (and similar constructs). That is, to make them recursive.

last([abs(prev - x) for x in range(100000) from prev = 2])
> Why not [abs(prev - x) for x in range(100000) from prev = 2][-1]?
> How about list(some_iterable)[-1]?

A "last" function would avoid both: indexing with [-1] only works after materializing the whole list, which wastes memory on 100000 items you never use.
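Such a helper is indeed short. A minimal sketch (the name "last" and its error behavior are my assumptions here, not part of the proposal):

```python
from collections import deque

def last(iterable):
    # Hypothetical "last" helper (not in the stdlib): a deque with
    # maxlen=1 keeps only the final item, in O(1) memory, instead of
    # materializing the whole sequence just to index it with [-1].
    d = deque(iterable, maxlen=1)
    if not d:
        raise ValueError("last() arg is an empty iterable")
    return d[0]

print(last(x * x for x in range(100000)))  # final square, no full list built
```

This works on any iterable, including generators, which plain `[-1]` indexing does not.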

But "last" is a pretty easy function to write. This proposal is about the list comprehension syntax (and similar constructs). The "last" and "scan" functions can be seen as secondary proposals; the main point is a syntax to access the previous iteration's output value inside a list comprehension. For example, a running product:

>>> [prev * k for k in [5, 2, 4, 3] from prev = 1]
[1, 5, 10, 40, 120]

That makes sense to me, and seems simpler than:

>>> from itertools import accumulate, chain
>>> list(accumulate(chain([1], [5, 2, 4, 3]), lambda prev, k: prev * k))
[1, 5, 10, 40, 120]

Which is still simpler than using reduce:

>>> from functools import reduce
>>> list(reduce(lambda hist, k: hist + [hist[-1] * k], [5, 2, 4, 3], [1]))
[1, 5, 10, 40, 120]

The first is explicit. The imperative approach would look much more like the reduce than the scan, since "hist" accumulates the whole result:

>>> hist = [1]
>>> for k in [5, 2, 4, 3]:
...     prev = hist[-1]
...     hist.append(prev * k)
...
>>> hist
[1, 5, 10, 40, 120]

The very idea of preferring those approaches over the proposal sounds strange to me. In what sense are they simpler: McCabe complexity? Number of tokens? Number of repeated tokens? AST height?

AFAIK, GvR prefers list comprehension syntax over the map/filter higher-order functions. He even said somewhere that a reduce can be written as a list comprehension, but it wasn't obvious to me that a "3-for-sections" list comprehension repeating a target variable name would be valid Python code, and that's what is required to get a recursive list comprehension. What I'm proposing is to let list comprehension syntax behave like itertools.accumulate without the "3-for-sections" kludge.
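For reference, here is my reconstruction of that "3-for-sections" kludge in today's Python (a sketch, not GvR's exact code): the first for-section seeds the accumulator, and the last one rebinds it on every step.

```python
data = [5, 2, 4, 3]

# "3-for-sections" kludge: "for prev in [1]" binds the start value,
# "for prev in [prev * k]" rebinds the same target once per element.
products = [prev for prev in [1]
                 for k in data
                 for prev in [prev * k]]

print(products)        # [5, 10, 40, 120]
print([1] + products)  # prepend the seed to match scanl's output
```

It is valid Python, but the repeated target name is exactly the kind of trick the proposed `from prev = 1` syntax is meant to replace.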

The rationale for the proposal is here:

In Haskell, the above example would be:

Prelude> scanl (*) 1 [5, 2, 4, 3]
[1,5,10,40,120]

And that's what I'm trying to write as a list comprehension. Some months ago, while thinking about how to write this proposal, I started writing PyScanPrev. Among the examples I wrote for PyScanPrev, there are use cases in:
- maths
- physics
- economics
- string processing
- signal processing
- control engineering
- dynamic / time-varying model simulation
- gray code generation

So I can say scan is not a niche/specific construct. The most sophisticated scan example I wrote is one that plots the trajectory of a "leaking bucket-spring-damper" system:
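That model is too long to reproduce here, but a minimal first-order relative of it, a discrete leaky integrator, shows the same shape where each output depends on the previous output (the coefficient and input below are purely illustrative):

```python
from itertools import accumulate

# y[n] = a * y[n-1] + x[n], a first-order "leaky" recurrence;
# accumulate seeds the scan with the first input sample.
a = 0.5
x = [1.0, 0.0, 0.0, 0.0]
y = list(accumulate(x, lambda prev, xn: a * prev + xn))
print(y)  # [1.0, 0.5, 0.25, 0.125]
```

Under the proposed syntax this would read as something like `[a * prev + xn for xn in x from prev = 0]`, keeping the recurrence visible instead of hiding it in a lambda.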

Lag and windowing seem unrelated: they would be solved with itertools.tee, zip, or perhaps a collections.deque and a function (I remember doing so in AudioLazy).

2016-10-23 22:38 GMT-02:00 Chris Angelico <>:
On Mon, Oct 24, 2016 at 11:29 AM, Steven D'Aprano <> wrote:
> In fairness I am sick and if I were well I may have been able to keep
> this straight in my head, but calling the variable "prev" is actively
> misleading. I was mislead, and (I think) Chris who just suggested this
> was similar to the SQL "lag" function may have been mislead as well. (Or
> perhaps he was just mislead by me, in which case, sorry Chris!)

All of the above are possible. I'm skimming the thread, not reading it
in full, and I'm a bit lost as to the point of the proposal, so it's
entirely possible that lag() is unrelated.

But if it is indeed just reduce(), then it's even simpler.

Python-ideas mailing list

Danilo J. S. Bellini
"It is not our business to set up prohibitions, but to arrive at conventions." (R. Carnap)