[Python-ideas] async objects

Nick Coghlan ncoghlan at gmail.com
Tue Oct 4 00:05:38 EDT 2016


On 4 October 2016 at 10:48, C Anthony Risinger <anthony at xtfx.me> wrote:
> In Go I can spawn a new control state (goroutine) at any time against any
> function. This is clear in the code. In Erlang I can spawn a new control
> state (Erlang process) at any time and it's also clear. Erlang is a little
> different because it will preempt me, but the point is I am simply choosing
> a target function to run in a new context. Gevent and even the
> threading module are other examples of this pattern.

Right, this thread is more about "imperative shell, asynchronous
execution" than it is about event driven servers.
http://www.curiousefficiency.org/posts/2015/07/asyncio-background-calls.html
and the code at
https://bitbucket.org/ncoghlan/misc/src/default/tinkering/background_tasks.py
give an example of doing that with "schedule_coroutine",
"run_in_foreground" and "call_in_background" helpers to drive the
event loop.
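
For reference, those helpers are only a few lines each. Paraphrasing
the linked code from memory, they're along these lines (a sketch, not
necessarily the exact code at that URL):

    import asyncio

    def schedule_coroutine(target, *, loop=None):
        """Schedule a coroutine in the given (or current thread's) event loop."""
        if asyncio.iscoroutine(target):
            return asyncio.ensure_future(target, loop=loop)
        raise TypeError("target must be a coroutine, not {!r}".format(type(target)))

    def run_in_foreground(task, *, loop=None):
        """Run the event loop in the current thread until *task* completes."""
        if loop is None:
            loop = asyncio.get_event_loop()
        return loop.run_until_complete(asyncio.ensure_future(task, loop=loop))

    def call_in_background(target, *, loop=None, executor=None):
        """Run a blocking callable in a background thread (or process) via an executor."""
        if loop is None:
            loop = asyncio.get_event_loop()
        if callable(target):
            return loop.run_in_executor(executor, target)
        raise TypeError("target must be a callable, not {!r}".format(type(target)))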

> In all reality you don't typically need many suspension points other than
> around I/O, and occasionally heavy CPU, so I think folks are struggling to
> understand (I admit, myself included) why the runtime doesn't want to be
> more help and instead punts back to the developer.

Because the asynchronous features are mostly developed by folks
working on event driven servers, and the existing synchronous APIs are
generally fine if you're running from a synchronous shell.

That leads to the following calling models being reasonably well-standardised:

- non-blocking synchronous from anywhere: just call it
- blocking synchronous from synchronous: just call it
- asynchronous from asynchronous: use await
- blocking synchronous from asynchronous: use "loop.run_in_executor()"
on the event loop

The main arguable aspect there is "loop.run_in_executor()" being part
of the main user-facing API, rather than offering a module level
`asyncio.call_in_background` helper function (a quick sketch of that
last calling model is below).
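
That last calling model (blocking synchronous from asynchronous)
currently looks roughly like this; the blocking_io and main names here
are just placeholders:

    import asyncio
    import time

    def blocking_io():
        # stand-in for any blocking synchronous call
        time.sleep(1)
        return "done"

    async def main():
        loop = asyncio.get_event_loop()
        # hand the blocking call off to the loop's default thread pool executor
        result = await loop.run_in_executor(None, blocking_io)
        print(result)

    asyncio.get_event_loop().run_until_complete(main())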

What's not well-defined are the interfaces for calling into
asynchronous code from synchronous code.

The most transparent interface for that is gevent and the underlying
greenlet support, which implement that at the C stack layer, allowing
arbitrary threads to be suspended at arbitrary points. This doesn't
give you any programming model benefits; it's just a lighter weight
form of operating system level pre-emptive threading (see
http://python-notes.curiousefficiency.org/en/latest/pep_ideas/async_programming.html#a-bit-of-background-info
for more on that point).

The next most transparent approach would be to offer a more POSIX-like shell
experience, with the concepts of foreground and background jobs, and
the constraint that the background jobs scheduled in the current
thread only run while a foreground task is active.
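
To make that constraint concrete, here's a hypothetical usage of the
helpers sketched above - the background job only makes progress while
something is being run in the foreground:

    async def ticker():
        # a "background job": it only makes progress while the loop is running
        for i in range(10):
            print("tick", i)
            await asyncio.sleep(0.1)

    async def foreground():
        await asyncio.sleep(0.35)
        return "foreground done"

    background_task = schedule_coroutine(ticker())  # scheduled, but nothing runs yet
    print(run_in_foreground(foreground()))          # ticker() ticks while this drives the loop
    print(background_task.done())                   # False: the background job is paused again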

As far as I know, the main problem that can currently arise with that
latter approach is attempting to run something in the foreground when
the event loop in the current thread is already running.
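
For example (again using the hypothetical helpers above), a nested
attempt to run something in the foreground fails because asyncio won't
re-enter a loop that is already running:

    async def nested():
        try:
            # run_in_foreground() ends up calling loop.run_until_complete(),
            # and asyncio refuses to do that while the loop is already running
            run_in_foreground(asyncio.sleep(0))
        except RuntimeError as exc:
            print("Can't re-enter the running event loop:", exc)

    run_in_foreground(nested())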

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

