[Python-ideas] (PEP 555 subtopic) Propagation of context in async code

Yury Selivanov yselivanov.ml at gmail.com
Fri Oct 13 12:38:41 EDT 2017


On Fri, Oct 13, 2017 at 11:49 AM, Koos Zevenhoven <k7hoven at gmail.com> wrote:
[..]
> This was my starting point 2.5 years ago, when Yury was drafting this status
> quo (PEP 492). It looked a lot like PEP 492 was inevitable, but that there
> would be a problem, where each API that uses "blocking IO" somewhere under
> the hood would need a duplicate version for asyncio (and one for each
> third-party async framework!). I felt it was necessary to think about a
> solution before PEP 492 was accepted, and this became a fairly short-lived
> thread here on python-ideas:

Well, it's obvious why the thread was "short-lived".  Don't mix
non-blocking and blocking code and don't nest asyncio loops.  But I
believe this new subtopic is a distraction.  You should start a new
thread on Python-ideas if you want to discuss PEP 492's acceptance
2.5 years ago.

[..]
> The bigger question is, what should happen when a coroutine awaits on
> another coroutine directly, without giving the framework a chance to
> interfere:
>
>
> async def inner():
>     do_context_aware_stuff()
>
> async def outer():
>     with first_context():
>         coro = inner()
>
>     with second_context():
>         await coro
>
> The big question is: In the above, which context should the coroutine be run
> in?

The real big question is how people usually write code.  And the
answer is that they *don't write it like that* at all.  Many context
managers in many frameworks (aiohttp, tornado, and even asyncio)
require you to wrap your await expressions in them, not the
coroutine instantiation.
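
To illustrate that pattern, here's a minimal sketch; 'request_context'
is a hypothetical stand-in for the context managers those frameworks
actually provide:

import asyncio
import contextlib

@contextlib.contextmanager
def request_context():
    # Hypothetical context manager standing in for a framework one.
    print('enter context')
    try:
        yield
    finally:
        print('exit context')

async def inner():
    await asyncio.sleep(0)
    return 42

async def outer():
    with request_context():     # the 'with' wraps the await expression...
        result = await inner()  # ...so inner() runs inside the context
    return result

asyncio.get_event_loop().run_until_complete(outer())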

A more important point is that existing context solutions for async
frameworks can only support a 'with' statement around an await
expression.  And people who use such solutions know that 'with ...:
coro = inner()' isn't going to work at all.
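
To spell out why (reusing the hypothetical 'request_context' and
'inner' from the sketch above):

async def broken():
    with request_context():
        coro = inner()   # merely *creates* the coroutine object;
                         # nothing in inner()'s body has run yet
    # request_context() has already exited here, so...
    return await coro    # ...inner() executes outside the context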

Therefore wrapping coroutine instantiation in a 'with' statement is
not a pattern.  It can only become a pattern if whatever execution
context PEP gets accepted in Python 3.7 encourages people to use it.

[..]
> Both of these would have their own stack of (argument, value) assignment
> pairs, explained in the implementation part of the first PEP 555 draft.
> While this is a complication, the performance overhead of these is so
> small that doubling the overhead should not be a performance concern.

Please stop hand-waving about performance.  In big-O notation:

PEP 555, worst-case complexity for an uncached lookup: O(N), where 'N'
is the total number of context values for all context keys in the
current frame stack.  With a recursive function you can easily get a
situation where the cache is invalidated often, and the code starts to
run slower and slower.
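
To make the O(N) claim concrete, here's a toy model (an
assumption-laden sketch, not PEP 555's actual implementation) in
which values live in a stack of (key, value) pairs and an uncached
lookup scans it from the top:

assignments = []  # stack of (key, value) assignment pairs

def lookup(key):
    for k, v in reversed(assignments):  # scans newest to oldest
        if k == key:
            return v
    raise LookupError(key)

def recurse(depth, n):
    # Each level pushes an assignment for a *different* key, so at
    # recursion depth n the stack holds n+1 entries and an uncached
    # lookup of the outermost key has to scan all of them: O(N).
    assignments.append(('level%d' % depth, depth))
    try:
        if depth < n:
            recurse(depth + 1, n)
        return lookup('level0')  # 'level0' sits at the bottom
    finally:
        assignments.pop()

recurse(0, 100)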

PEP 550 v1, worst-case complexity for an uncached lookup: O(1), see [1].

PEP 550 v2+, worst-case complexity for an uncached lookup: O(k), where
'k' is the number of nested generators for the current frame.  Usually
k=1.
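
Again as a rough sketch (not the actual HAMT-based implementation):
under PEP 550 v2+ each of the k nested logical contexts holds an
immutable mapping, and an uncached lookup probes one mapping per
level, modeled here with plain dicts:

def lookup_550(contexts, key):
    # 'contexts' is a list of mappings, one per nested generator;
    # in CPython these would be HAMTs, so each probe is fast.
    for mapping in reversed(contexts):  # k levels, usually k == 1
        if key in mapping:
            return mapping[key]
    raise LookupError(key)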

While caching will mitigate PEP 555's bad performance characteristics
in *tight loops*, the performance of the uncached path must not be
ignored.

Yury

[1] https://www.python.org/dev/peps/pep-0550/#appendix-hamt-performance-analysis

