[Python-ideas] async objects

Guido van Rossum guido at python.org
Tue Oct 4 11:31:12 EDT 2016


On Mon, Oct 3, 2016 at 10:37 PM, Rene Nejsum <rene at stranden.com> wrote:
> Ideally I think a language (would love it to be Python) should
> permit many (millions) of what we know as coroutines and then have as many
> threads as the CPU has cores to execute these coroutines, but I do not think
> you as a programmer should be especially aware of this as you code. (Just
> like GC handles your alloc/free, the runtime should handle your
> “concurrency”)

There's a problem with this model (of using all CPUs to run
coroutines): when you have two coroutines that can run in
unspecified order but update the same data structure, the current
coroutine model *promises* that they will not run in parallel -- they
may only alternate running, and only at an `await`. This promise implies
that you can update the data structure without worrying about locking
as long as you don't use `await` in the middle. (IOW it's
non-pre-emptive scheduling.)
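
To make that promise concrete, here's a minimal sketch (the counter and
coroutine names are just made up for illustration) of a lock-free update
that is safe precisely because there is no `await` in the middle:

    import asyncio

    # Hypothetical shared state, for illustration only.
    counter = {"hits": 0}

    async def bump():
        # Safe without a lock: there is no `await` between the read and
        # the write, so no other coroutine can be scheduled in between.
        counter["hits"] = counter["hits"] + 1

        # If we awaited between the read and the write, another coroutine
        # could run in that window and updates could be lost:
        #   old = counter["hits"]
        #   await asyncio.sleep(0)      # another coroutine may run here
        #   counter["hits"] = old + 1   # may overwrite a concurrent update

    async def main():
        await asyncio.gather(*(bump() for _ in range(1000)))
        print(counter["hits"])  # always 1000: the loop is single-threaded

    asyncio.get_event_loop().run_until_complete(main())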

If you were to change the model to allow multiple coroutines to be
executed in parallel on multiple CPUs, such coroutines would have to
use locks, and then you have all the problems of threading back
in your coroutines! (There might be other problems too, but there's no
way to avoid a fundamental change in the concurrency model.)
Basically you're asking for Go's concurrency model -- it's nice in
some ways, but asyncio wasn't made to do that, and I'm not planning to
change it (let's wait for a GIL-free Python 4 first).
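
For contrast, a rough sketch of what the same update looks like once the
pieces really can run in parallel -- this one uses threads and an explicit
Lock, but the point is the same for any parallel scheduling of coroutines
(again, the names are made up):

    import threading

    counter = {"hits": 0}
    lock = threading.Lock()

    def bump():
        # With true parallelism, the read-modify-write must be guarded,
        # or concurrent updates can be lost.
        with lock:
            counter["hits"] = counter["hits"] + 1

    threads = [threading.Thread(target=bump) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter["hits"])  # 100, but only because of the lock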

I'm still trying to figure out my position on the other points of
discussion here -- keep discussing!

-- 
--Guido van Rossum (python.org/~guido)
