Also, Tim Peters' one-line example of:

print(list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y))))

makes it clear, I think, that itertools.accumulate is not the right vehicle for this change - we should make a new itertools function with a required "initial" argument.
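For reference, running that one-liner shows the problem directly: because accumulate has no seed, the first input element is emitted untouched, so the output mixes types (an int followed by strings) and there is no way to supply a string starting value:

```python
import itertools

result = list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y)))
# The first element passes through unchanged (stays an int);
# every later element is produced by the str-concatenating func.
print(result)  # [1, '12', '123']
```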

On Mon, Apr 9, 2018 at 1:44 PM, Peter O'Connor <> wrote:
It seems clear that the name "accumulate" has been kind of antiquated since the "func" argument was added and "sum" became just a default.

And people seem to disagree about whether the result should have a length N or length N+1 (where N is the number of elements in the input iterable).

The behaviour where the first element of the return is the same as the first element of the input can be weird and confusing.  E.g. compare:

>>> list(itertools.accumulate([2, 3, 4], lambda accum, val: accum-val))
[2, -1, -5]
>>> list(itertools.accumulate([2, 3, 4], lambda accum, val: val-accum))
[2, 1, 3]

One might expect that, since the second function returns the negative of the first and both are linear, the second result would be the negative of the first - but that is not the case, because accumulate passes the first element through unchanged rather than applying the function to it.

Maybe we can instead let "accumulate" fall into deprecation, and instead add a new more general itertools "reducemap" method:

def reducemap(iterable: Iterable[Any], func: Callable[[Any, Any], Any], initial: Any, include_initial_in_return: bool = False) -> Iterator[Any]: ...

- The name is more descriptive of the operation (a reduce operation where we keep values at each step, like a map)
- The existence of include_initial_in_return=False makes it somewhat clear that the initial value will by default NOT be included in the returned generator
- The mandatory initial argument forces you to think about initial conditions.

- The most common use cases (summation, product) have a "natural" initial element (0 and 1, respectively) that you'd now be required to write out.  (But we could just leave accumulate for sum.)
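A minimal sketch of what such a reducemap could look like (hypothetical - this function does not exist in itertools):

```python
from typing import Any, Callable, Iterable, Iterator

def reducemap(iterable: Iterable[Any], func: Callable[[Any, Any], Any],
              initial: Any, include_initial_in_return: bool = False) -> Iterator[Any]:
    # A reduce that yields every intermediate value, seeded by a
    # mandatory initial value.  The initial value itself is only
    # yielded when explicitly requested.
    total = initial
    if include_initial_in_return:
        yield total
    for item in iterable:
        total = func(total, item)
        yield total
```

With the subtraction example above, list(reducemap([2, 3, 4], lambda acc, v: acc - v, initial=0)) gives [-2, -5, -9] - every element has actually been through func, unlike accumulate's pass-through of the first element.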

I still prefer a built-in language comprehension syntax for this like: (y := f(y, x) for x in x_vals from y=0), but for a huge discussion on that see the other thread.  

------- More Examples (using "accumulate" as the name for now)  -------

# Kalman filters
def kalman_filter_update(state, measurement):
    return state

online_trajectory_estimate = accumulate(measurement_generator, func=kalman_filter_update, initial=initial_state)


# Bayesian stats
def update_model(prior, evidence):
    return posterior

model_history = accumulate(evidence_generator, func=update_model, initial=prior_distribution)


# Recurrent Neural networks: 
def recurrent_network_layer_step(last_hidden, current_input):
    new_hidden = ....
    return new_hidden

hidden_state_generator = accumulate(input_sequence, func=recurrent_network_layer_step, initial=initial_hidden_state)
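For a fully runnable instance of the same pattern, here is an exponential moving average written this way (accumulate_with_initial is a hypothetical stand-in for the proposed function):

```python
def accumulate_with_initial(iterable, func, initial):
    # Stand-in for the proposed function: yields the running state
    # after each element, seeded by a mandatory initial value.
    state = initial
    for item in iterable:
        state = func(state, item)
        yield state

def ema_step(avg, x, decay=0.5):
    # One step of an exponential moving average:
    # blend the previous average with the new sample.
    return decay * avg + (1 - decay) * x

smoothed = list(accumulate_with_initial([4.0, 8.0, 2.0], ema_step, initial=0.0))
print(smoothed)  # [2.0, 5.0, 3.5]
```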

On Mon, Apr 9, 2018 at 7:14 AM, Nick Coghlan <> wrote:
On 9 April 2018 at 14:38, Raymond Hettinger <> wrote:
>> On Apr 8, 2018, at 6:43 PM, Tim Peters <> wrote:
>> In short, for _general_ use `accumulate()` needs `initial` for exactly
>> the same reasons `reduce()` needed it.
> The reduce() function had been much derided, so I've had it mentally filed in the anti-pattern category.  But yes, there may be wisdom there.

Weirdly (or perhaps not so weirdly, given my tendency to model
computational concepts procedurally), I find the operation of reduce()
easier to understand when it's framed as "last(accumulate(iterable,
binop, initial=value))".
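On Python 3.8+, where accumulate did eventually grow an initial parameter, that framing can be checked directly (the last() helper is my own, not stdlib):

```python
import itertools
import functools
from collections import deque

def last(iterator):
    # A deque with maxlen=1 retains only the final item of the iterator.
    return deque(iterator, maxlen=1)[0]

binop = lambda acc, x: acc - x
data = [2, 3, 4]

# reduce(binop, data, init) == last(accumulate(data, binop, initial=init))
via_reduce = functools.reduce(binop, data, 10)          # ((10-2)-3)-4 = 1
via_accumulate = last(itertools.accumulate(data, binop, initial=10))
print(via_reduce, via_accumulate)  # 1 1
```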


Nick Coghlan   |   |   Brisbane, Australia
Python-ideas mailing list