Ah, that's nice; I didn't know that itertools.accumulate now has an optional "func" parameter.  Although to get the exact same behaviour (an output the same length as the input), you'd actually have to do:

    smooth_signal = itertools.islice(itertools.accumulate([initial_average] + signal, compute_avg), 1, None)

And you'd also have to use itertools.chain to prepend initial_average to the rest if "signal" were a generator instead of a list, so the fully general version would be:

    smooth_signal = itertools.islice(itertools.accumulate(itertools.chain([initial_average], signal), compute_avg), 1, None)
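
To make that concrete, here is a minimal runnable sketch of the fully general version (the decay and initial_average values are just illustrative assumptions, and the input is deliberately a generator to show why itertools.chain is needed):

    import itertools
    import math
    import random

    decay = 0.05
    initial_average = 0.

    def compute_avg(avg, x):
        return (1 - decay)*avg + decay*x

    # A generator input: "+" list concatenation would not work here.
    signal = (math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in range(1000))

    smooth_signal = itertools.islice(
        itertools.accumulate(itertools.chain([initial_average], signal), compute_avg),
        1, None)

    assert len(list(smooth_signal)) == 1000  # same length as the input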

I find this a bit awkward, and maintain that it would be nice to have a built-in language construct that does this natively.  You have to admit:

    smooth_signal = [average = (1-decay)*average + decay*x for x in signal from average=0.]

is a lot cleaner and more intuitive than:

    def compute_avg(avg, x):
        return (1 - decay)*avg + decay * x

    smooth_signal = itertools.islice(itertools.accumulate(itertools.chain([initial_average], signal), compute_avg), 1, None)

Moreover, if combined with the "last" builtin proposed in the linked proposal, it could also kill the need for reduce, as you could instead use:

    last_smooth_signal = last(average = (1-decay)*average + decay*x for x in signal from average=0.)
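
Of course, "last" doesn't exist yet either.  Just to show the reduce-equivalence with today's tools, a rough stand-in could be:

    from collections import deque

    def last(iterable):
        # Keep only the final element (the iterable is assumed non-empty).
        return deque(iterable, maxlen=1)[0]

with which last(itertools.accumulate(itertools.chain([initial_average], signal), compute_avg)) computes the same value as functools.reduce(compute_avg, signal, initial_average).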

On Thu, Apr 5, 2018 at 1:48 PM, Clint Hepner <clint.hepner@gmail.com> wrote:

> On 2018 Apr 5, at 12:52 PM, Peter O'Connor <peter.ed.oconnor@gmail.com> wrote:
>
> Dear all,
>
> In Python, I often find myself building lists where each element depends on the last.  This generally means making a for-loop, creating an initial list, and appending to it in the loop, or creating a generator-function.  Both of these feel more verbose than necessary.
>
> I was thinking it would be nice to be able to encapsulate this common type of operation into a more compact comprehension.
>
> I propose a new "Reduce-Map" comprehension that allows us to write:
>
>     signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in range(1000)]
>     smooth_signal = [average = (1-decay)*average + decay*x for x in signal from average=0.]
>
> Instead of:
>
>     def exponential_moving_average(signal: Iterable[float], decay: float, initial_value: float=0.):
>         average = initial_value
>         for xt in signal:
>             average = (1-decay)*average + decay*xt
>             yield average
>
>     signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in range(1000)]
>     smooth_signal = list(exponential_moving_average(signal, decay=0.05))
> I've created a complete proposal at https://github.com/petered/peps/blob/master/pep-9999.rst (and a pull request), and I'd be interested to hear what people think of this idea.
>
> Combined with the new "last" builtin discussed in the proposal, this would allow us to replace "reduce" with a more Pythonic comprehension-style syntax.


See itertools.accumulate, comparing the rough implementation in the docs to your exponential_moving_average function:

    signal = [math.sin(i*0.01) + random.normalvariate(0, 0.1) for i in range(1000)]

    def compute_avg(avg, x):
        return (1 - decay)*avg + decay * x

    smooth_signal = accumulate([initial_average] + signal, compute_avg)

--
Clint