[Python-ideas] Fwd: [Python-Dev] PyParallel: alternate async I/O and GIL removal

Armin Rigo arigo at tunes.org
Sun Nov 17 16:52:22 CET 2013


Hi Trent,

On Sat, Nov 16, 2013 at 7:13 PM, Trent Nelson <trent at snakebite.org> wrote:
>     Slides are here: https://speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores-1

Please stop me if I'm wrong.  This allows the Python programmer to run
a bunch of new threads in parallel; each new thread has read-only
access to all pre-existing objects; all objects created by this new
thread must die at the end.

Disregarding issues of performance, this seems to be exactly the same
model as "multiprocessing": the new thread (or process) cannot have
any direct impact on the objects seen by the parent thread (or
process).  In fact, multiprocessing's capabilities are a superset of
PyParallel's: e.g. you can mutate existing objects.  The change is not
reflected in the parent process, but it remains visible to any later
code in the same child process.  This seems like a very useful thing
to do in some cases.
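To make that point concrete, here is a minimal sketch using plain
multiprocessing (not PyParallel): the child process mutates a list it
inherited from the parent, the mutation stays visible inside the
child, and the parent never sees it.

```python
import multiprocessing

data = [1, 2, 3]

def worker(q):
    # The child runs in its own address space (a fork()ed copy on
    # Unix, a freshly spawned interpreter on Windows), so this
    # mutation is purely local to the child process.
    data.append(4)
    q.put(list(data))  # report the child's view back to the parent

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    child_view = q.get()
    p.join()
    # The child saw its own change...
    assert child_view == [1, 2, 3, 4]
    # ...but the parent's copy of the list is untouched.
    assert data == [1, 2, 3]
```

This is exactly the "mutate existing objects, invisible to the
parent" behaviour described above, which PyParallel's read-only rule
forbids outright.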

The performance benefits of PyParallel are probably only relevant on
Windows (which lacks fork(), so multiprocessing must spawn fresh
interpreters), but I agree it's interesting if you're on Windows.

However, the main issue I have with the whole approach of PyParallel
is that it offers only a subset of "multiprocessing" in terms of
programming model.  I already hate multiprocessing for giving the
programmer a large set of constraints to work around; I suppose I
don't have to explain my opinion of PyParallel...  But more
importantly, why would we want to hack at the source code of CPython
like you did in order to get a result that multiprocessing already
provides?

Please tell me where I'm wrong.


See you soon,

Armin.
