But even I find your use of dysphemisms like "freak show" for non-FP
solutions quite off-putting.

Ah, I'm sorry, "freak show" was not meant to be disparaging to the authors or even the code itself, but to describe the variety of strange solutions (my own included) to this simple problem.

Indeed. But it seems to me that itertools.accumulate() with an initial value probably will solve that issue.

Kyle Lahnakoski made a pretty good case for not using itertools.accumulate() earlier in this thread, and Tim Peters made the point that its behaviour when no initial value is given can be extremely unintuitive (try "print(list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y))))").  These convinced me that itertools.accumulate should be avoided altogether.
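
To spell out the unintuitive part: accumulate() passes the first element through untouched and only applies the function from the second element onward, so the result mixes types:

    import itertools

    # The first element (1) is emitted as-is; the lambda is only applied
    # from the second element onward, so ints and strings get mixed.
    print(list(itertools.accumulate([1, 2, 3], lambda x, y: str(x) + str(y))))
    # -> [1, '12', '123']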

Alternatively, if anyone has a proposed syntax that does the same thing as Serhiy Storchaka's:

    smooth_signal = [average for average in [0] for x in signal for average in [(1-decay)*average + decay*x]]

but in a way that more intuitively expresses the intent of the code, it would be great to have more options on the market.
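
For anyone puzzling over the trick itself, the comprehension is roughly equivalent to this expanded loop (a sketch; signal and decay are assumed to be defined elsewhere):

    # Rough expansion of Serhiy's comprehension.  "for average in [0]"
    # seeds the state; "for average in [...]" rebinds it on each step.
    smooth_signal = []
    average = 0
    for x in signal:
        average = (1 - decay) * average + decay * x
        smooth_signal.append(average)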

On Tue, Apr 10, 2018 at 1:32 PM, Steven D'Aprano <steve@pearwood.info> wrote:
On Tue, Apr 10, 2018 at 12:18:27PM -0400, Peter O'Connor wrote:

> I added your coroutine to the freak show:

Peter, I realise that you're a fan of functional programming idioms, and
I'm very sympathetic to that. I'm a fan of judicious use of FP too, and
while I'm not keen on your specific syntax, I am interested in the
general concept and would like it to have the best possible case made
for it.

But even I find your use of dysphemisms like "freak show" for non-FP
solutions quite off-putting. (I think this is the second time you've
used the term.)

Python is not a functional programming language like Haskell, it is a
multi-paradigm language with strong support for OO and procedural
idioms. Notwithstanding the problems with OO idioms that you describe,
many Python programmers find OO "better", simpler to understand, learn
and maintain than FP. Or at least more familiar.

The rejection or approval of features into Python is not a popularity
contest, ultimately it only requires one person (Guido) to either reject
or approve a new feature. But popular opinion is not irrelevant either:
like all benevolent dictators, Guido has a good sense of what's popular,
and takes it into account in his considerations. If you put people
off-side, you hurt your chances of having this feature approved.

> I *almost* like the coroutine thing but find it unusable because the
> peculiarity of having to initialize the generator when you use it (you do
> it with next(processor)) is pretty much guaranteed to lead to errors when
> people forget to do it.  Earlier in the thread Steven D'Aprano showed how a
> @coroutine decorator can get around this:

I agree that the (old-style, pre-async) coroutine idiom is little known,
in part because of the awkwardness needed to make it work. Nevertheless,
I think your argument about it leading to errors is overstated: if you
forget to initialize the coroutine, you get a clear and obvious failure:

py> def co():
...     x = (yield 1)
py> a = co()
py> a.send(99)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't send non-None value to a just-started generator
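
For reference, the priming decorator mentioned above is usually written along these lines (an illustrative sketch only, not a stdlib helper):

    import functools

    def coroutine(func):
        """Prime a generator-based coroutine so callers can .send() immediately."""
        @functools.wraps(func)
        def primed(*args, **kwargs):
            gen = func(*args, **kwargs)
            next(gen)   # advance the generator to its first yield
            return gen
        return primed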

> - Still, the whole coroutine thing still feels a bit magical, hacky and
> "clever".  Also the use of generator.send will probably confuse around 90%
> of programmers.

In my experience, heavy use of FP idioms will probably confuse about the
same percentage. Including me: I like FP in moderation, I wouldn't want
to use a strict 100% functional language, and if someone even says the
word "Monad" I break out in hives.

> > If you have that much of a complex workflow, you really should not make
> > that a one-liner.
> It's not a complex workflow, it's a moving average.  It just seems complex
> because we don't have a nice, compact way to describe it.

Indeed. But it seems to me that itertools.accumulate() with an initial
value probably will solve that issue.
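
For example, assuming accumulate() grows such an initial parameter (purely a sketch of a feature that doesn't exist yet), the moving average might read:

    from itertools import accumulate

    # Sketch only: assumes accumulate() accepts an initial= keyword and
    # yields the seed value first, hence the [1:] to drop it.
    smooth_signal = list(accumulate(
        signal,
        lambda average, x: (1 - decay) * average + decay * x,
        initial=0,
    ))[1:]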

Besides... moving averages aren't that common that they *necessarily*
need syntactic support. Wrapping the complexity in a function, then
calling the function, may be an acceptable solution instead of putting
the complexity directly into the language itself.
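
For instance, a small helper along these lines (names purely illustrative) keeps the state handling out of sight of the caller:

    def moving_average(signal, decay, initial=0.0):
        # Yield the exponentially weighted moving average of signal.
        average = initial
        for x in signal:
            average = (1 - decay) * average + decay * x
            yield average

    smooth_signal = list(moving_average(signal, decay))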

The Conservation Of Complexity Principle suggests that complexity cannot
be created or destroyed, only moved around. If we reduce the complexity
of the Python code needed to write a moving average, we invariably
increase the complexity of the language, the interpreter, and the amount
of syntax people need to learn in order to be productive with Python.
