[Async-sig] Inadvertent layering of synchronous code as frameworks adopt asyncio
Guido van Rossum
guido at python.org
Wed Mar 27 16:49:06 EDT 2019
On Wed, Mar 27, 2019 at 1:23 PM Nathaniel Smith <njs at pobox.com> wrote:
> On Wed, Mar 27, 2019 at 10:44 AM Daniel Nugent <nugend at gmail.com> wrote:
> >
> > FWIW, the asyncio_run_encapsulated approach does not work with the
> transport/protocol APIs because the loop needs to stay alive concurrently
> with the connection so that the awaitables are all on the same loop.
>
> Yeah, there are two basic approaches being discussed here: using two
> different loops, versus re-entering an existing loop.
> asyncio_run_encapsulated is specifically for the two-loops approach.
>
> In this version, the outer loop, and everything running on it, stop
> entirely while the inner loop is running – which is exactly what
> happens with any other synchronous, blocking API. Using
> asyncio_run_encapsulated(aiohttp.get(...)) in Jupyter is exactly like
> using requests.get(...), no better or worse.
>
And Yury's followup suggests that it's hard to achieve total isolation
between loops, due to subprocess management and signal handling (which are
global states in the OS, or at least per-thread -- the OS doesn't know
about event loops).
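For concreteness, here is a rough sketch of the two-loops approach (an illustration only: it leans on asyncio's private _get_running_loop/_set_running_loop helpers, and a real asyncio_run_encapsulated would need more care around the policy's current loop, subprocess watchers, and signal handlers, as Yury points out):

```python
import asyncio
from asyncio import events

def asyncio_run_encapsulated(coro):
    """Run coro to completion on a fresh inner loop, blocking the outer
    loop entirely while it runs.  Sketch only; uses private helpers."""
    outer = events._get_running_loop()   # None if no loop is running here
    events._set_running_loop(None)       # hide the outer loop from asyncio.run
    try:
        return asyncio.run(coro)         # creates, runs, and closes the inner loop
    finally:
        events._set_running_loop(outer)  # make the outer loop current again
```

While the inner loop runs, everything on the outer loop is simply paused, exactly like any other synchronous, blocking call.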
I just had another silly idea. What if the magical decorator that can be
used to create a sync version of an async def (somewhat like tworoutines)
made the async version hand off control to a thread pool? Could be a tad
slower, but the tenor of the discussion seems to be that performance is not
that much of an issue.
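Something like this, roughly (names made up for illustration; tworoutines itself works differently):

```python
import asyncio
import concurrent.futures
import functools

_pool = concurrent.futures.ThreadPoolExecutor()

def syncable(async_fn):
    """Hypothetical decorator: expose a sync version of an async def by
    handing the coroutine off to a pool thread with its own event loop."""
    @functools.wraps(async_fn)
    def sync_wrapper(*args, **kwargs):
        # asyncio.run is safe on the pool thread, which has no running
        # loop -- even if the *calling* thread is inside one (that loop
        # just blocks until .result() returns).
        return _pool.submit(asyncio.run, async_fn(*args, **kwargs)).result()
    sync_wrapper.async_version = async_fn  # the original stays reachable
    return sync_wrapper
```

The sync version then works both from plain synchronous code and from within a running loop (blocking it, as expected); the "tad slower" cost is the thread handoff per call.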
--
--Guido van Rossum (python.org/~guido)