[Python-Dev] Pythonic concurrency - cooperative MT

Martin Blais blais at furius.ca
Sat Oct 1 23:50:25 CEST 2005


I hear a confusion that is annoying me a bit in some of the
discussions on concurrency, and I thought I'd flush my thoughts
here to help me clarify some of that stuff.  Some people on the
list appear to discuss generators as a concurrency scheme, and as
far as I know (and please correct me if I'm wrong) they really
are not addressing that at all (full explanation below).
Before I go on, I must say that I am not in any way an authority
on concurrent programming, I'm just a guy who happens to have
done a fair amount of threaded programming, so if any of the
smart people on the list notice something completely stupid and
off the mark that I might be saying here, please feel free to
bang on it with the thousand-pound hammer of your hacker-fu and
put me to shame (I love to learn).

As far as I understand, generators are just a convenient way to
program apparently "independent" control flows (which are not the
same as "concurrent" control flows) in a constrained, structured
way, a way that is more powerful than what is allowed by using a
stack.  By giving up using the stack concept as a fast way to
allocate local function variables, it becomes possible to exit
and enter chunks of code multiple times, at specific points,
within an automatically restored local context (i.e. the local
variables, stored on the heap).  Generators make it more
convenient to do just that:  enter and re-enter some code that is
expressed as if it would be running in a single execution
flow (with explicit points of exit/re-entry, "yields").
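
A minimal sketch of that exit/re-entry behavior (hypothetical
names): the local variable n lives on the generator's frame on
the heap, so it is restored automatically every time the code is
re-entered at the yield point:

```python
def countdown(n):
    # The local state (n) lives on the generator object's frame,
    # not on a call stack, so it survives each exit/re-entry.
    while n > 0:
        yield n      # explicit exit point; resumes here on next()
        n -= 1

gen = countdown(3)
print(next(gen))     # 3
print(next(gen))     # 2  (local n was restored automatically)
```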

The full monty version of that, is what you get when you write
assembly code (*memories of adolescent assembly programming on
the C=64 abound here now*): you can JMP anywhere anytime, and a
chunk of code (a function) can be reentered anywhere anytime as
well, maybe even reentered somewhere else than where it left off.
The price to pay for this is one of complexity: in assembly you
have to manage restoring the local context yourself (i.e in
assembly code this just means restoring the values of some
registers which are assumed set and used by the code, like the
local variables of a function), and there is no clear grouping of
the local scope that is saved.  Generators give you that for
free: they automatically organize all that local context as
belonging to the generator object, and they express clear points
of exit/re-entry with the yield statements.  They are really just
a fancy goto, with some convenient assumptions about how control
should flow.  This happens to be good enough to simplify a whole
class of problems, and I suppose the Python and Ruby communities
are all learning to love them and use them more and more.
(I think the more fundamental consequence of generators is to
raise questions about the definition of what "a function" is:  if
I have a single chunk of code in which different parts use two
disjoint sets of variables, and it can be entered via a few
entry/exit points, is it really one, two, or
multiple "functions"?  What if different parts share only some of
the local scope?  Where does the function begin and end?  And
more importantly, is there a more complex yet still manageable
abstraction that would allow even more flexible control flow than
generators allow, straddling the boundaries of what a function
is?)

You could easily implement something very similar to generators
by encapsulating the local scope explicitly in the form of a
class, with instance attributes, and having a normal
method "step()" that would be careful about saving state in the
object's attributes every time it returns and restoring state from
those attributes every time it gets called.  This is what
iterators do.  Whenever you want to "schedule" your object to be
running, you call the step() method.  So just in that sense
generators really aren't all that exciting or "new".  The main
problem that generators solve is that they make this save/restore
mechanism automatic, thus allowing you to write a single flow of
execution as a normal function with explicit exit points (yield).
It's much nicer having that in the language than having to write
code that can be restored (especially when you have to write a
loop with complex conditions/flow which must run and return only
one iteration every time they become runnable).
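
A sketch of both versions side by side, with a hypothetical
Countdown class whose step() method does the save/restore by hand
in instance attributes, next to the generator that gets the same
mechanism for free:

```python
class Countdown:
    """Hand-rolled equivalent of a generator: all 'local' state is
    saved in instance attributes between step() calls."""
    def __init__(self, n):
        self.n = n                 # "local variable" kept on the heap

    def step(self):
        # Restore state from the attributes, run one iteration,
        # save state back by leaving the attributes updated.
        if self.n <= 0:
            raise StopIteration
        value = self.n
        self.n -= 1
        return value

def countdown(n):
    # The generator version: the save/restore is automatic.
    while n > 0:
        yield n
        n -= 1

c = Countdown(3)
assert [c.step(), c.step(), c.step()] == [3, 2, 1]
assert list(countdown(3)) == [3, 2, 1]
```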

Therefore, as far as I understand it, generators themselves DO
NOT implement any form of concurrency.

I feel that the point where generators and concurrency come
together is often a source of confusion in the discussions I see
about them, but maybe that's just me.  I see two aspects that
allow generators to participate in the elaboration of a
concurrency scheme:

1. The convenience of expression of a single execution flow (with
   explicit interruption points) makes it easy to implement
   pseudo-concurrency IF AND ONLY IF you consider a generator as
   an independent unit of control flow (i.e. a task).  Whether
   those generators can run asynchronously is yet undefined and
   depends on who calls them.

2. In a more advanced framework/language, perhaps some generators
   could be assumed to always be runnable asynchronously,
   governed by a system of true concurrency with some kind of
   scheduling algorithm that oversees that process.
   Whether this has been implemented by many is still a mystery
   to me, but I can see how a low-level library that provides
   asynchronously running execution vehicles for each CPU could
   be used to manage and run a pool of shared generator objects
   in a way that is better (for a specific application) than the
   relatively uninformed scheduling provided by the threads
   abstraction (more at the end).
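
As a minimal sketch of point 1 (hypothetical names throughout):
a cooperative round-robin scheduler that treats each generator as
a task, where every yield is a voluntary interruption point that
hands control back to the scheduler:

```python
from collections import deque

def scheduler(tasks):
    """Run a pool of generators cooperatively, round-robin."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)           # run the task up to its next yield
        except StopIteration:
            continue             # task finished; drop it
        queue.append(task)       # otherwise put it back in the queue

def worker(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield                    # explicit interruption point

log = []
scheduler([worker("a", 2, log), worker("b", 2, log)])
print(log)   # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Note that whether the tasks could ever run truly asynchronously
is entirely up to this scheduler; the generators themselves say
nothing about it.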

Pseudo or cooperative concurrency is not the same as true
asynchronous concurrency.  You can ONLY avoid having to deal with
issues of mutual exclusion if you DO NOT have true asynchronous
concurrency (i.e. two real CPUs running at the same time)--unless
you have some special scheduling system that implements very
specific assumptions about data access vs the code that it
schedules, in which case that scheduling algorithm itself will
have to deal with potential mutual exclusion problems: YOU DON'T
GET OUT OF IT.  If you have two real, concurrent processing units
making calculations, you have to deal with the possibility that
they access the same piece of data at the same time.

I suppose that the essence of what I want to say with this
diatribe is that everyone who talks about generators as a way to
avoid the hard problems of concurrent programming should really
explicitly frame the discussion in the context of a single
process cooperative scheme that runs on a single processor (at
any one time).  It does not hold outside of that context, outside
of that context you HAVE to deal with mutex issues, and that's
always where it gets messy (even with generators).

Now, IMO where it gets interesting is when you consider that what
you're doing when you are executing multiple asynchronous control
flows with explicit code in your process, is that you're
essentially bringing "up" the scheduler from the kernel layer
into your own code.  This is very cool.  This may allow you to
specialize that scheduler with assumptions which may ultimately
simplify the implementation of your independent control flows.
For example, if you have two sets of generators that access
disjoint sets of data, and two processing units, your scheduler
could make sure that no two generators from the same set get
scheduled at the same time.  If you do that then you might not
have to lock access to your data structures at all.  You can
imagine more complex variants on this theme.
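
A sketch of that scheme, with hypothetical names: each group of
generators shares one dict, and each group is pinned to its own
execution vehicle (an OS thread here), so no two generators from
the same group ever run at the same time and the shared dict
needs no lock.  (In CPython the GIL would serialize the bytecode
anyway; this only illustrates the scheduling idea.)

```python
import threading
from collections import deque

def run_group(generators):
    """Round-robin one group of generators on a single thread.
    Generators within a group share data, but since the whole
    group is pinned to this thread, their accesses never overlap."""
    queue = deque(generators)
    while queue:
        gen = queue.popleft()
        try:
            next(gen)
        except StopIteration:
            continue
        queue.append(gen)

def accumulate(counts, key, times):
    for _ in range(times):
        counts[key] = counts.get(key, 0) + 1   # unlocked access
        yield

counts_a, counts_b = {}, {}    # disjoint data sets, one per group
group_a = [accumulate(counts_a, "x", 100), accumulate(counts_a, "x", 100)]
group_b = [accumulate(counts_b, "y", 100)]

threads = [threading.Thread(target=run_group, args=(g,))
           for g in (group_a, group_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counts_a["x"] == 200 and counts_b["y"] == 100
```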

One of the problems that you have with using generators like
this is that a "yield" on resource access does not occur
automatically, like it does in threading.  With threads, the
kernel is invoked when access to a low-level resource is
requested, and may decide to put your process in the wait queue
when it judges necessary.  I don't know how you would do that
with generators.  To implement that explicitly, you would need an
asynchronous version of all the functions that may block on
resources (e.g. file open, socket write, etc.), in order to be
able to insert a yield statement at that point, after the async
call, and there should be a way for the scheduler to check if the
resource is "ready" to be able to put your generator back in the
runnable queue.
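
A sketch of that idea, with hypothetical names: instead of
blocking, the generator yields a request describing the resource
it needs, and the scheduler uses select() to check when the
resource is ready before putting the generator back in the
runnable queue:

```python
import select
import socket
from collections import deque

READ = "read"   # request tag: "wake me up when this socket is readable"

def echo_once(conn):
    # A hypothetical "async version" of a blocking read: instead
    # of calling recv() right away and possibly blocking, yield a
    # request and let the scheduler resume us once data is there.
    yield (READ, conn)
    data = conn.recv(1024)      # guaranteed not to block here
    conn.sendall(data)

def scheduler(tasks):
    runnable = deque(tasks)
    waiting = {}                # socket -> generator blocked on it
    while runnable or waiting:
        while runnable:
            task = runnable.popleft()
            try:
                op, sock = task.send(None)   # run to next yield
            except StopIteration:
                continue                     # task finished
            waiting[sock] = task             # park it on its resource
        if not waiting:
            continue
        # Check resource readiness and move unblocked generators
        # back into the runnable queue.
        ready, _, _ = select.select(list(waiting), [], [])
        for sock in ready:
            runnable.append(waiting.pop(sock))

# Demonstration with a connected socket pair:
a, b = socket.socketpair()
a.sendall(b"ping")
scheduler([echo_once(b)])
assert a.recv(1024) == b"ping"
```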

(A question comes to mind here: Twisted must be doing something
like this with their "deferred objects", no?  I figure they would
need to do something like this too.  I will have to check.)

Any comment welcome.
