[pypy-dev] pre-emptive micro-threads utilizing shared memory message passing?

Maciej Fijalkowski fijall at gmail.com
Tue Jul 27 11:48:57 CEST 2010

On Tue, Jul 27, 2010 at 4:09 AM, Kevin Ar18 <kevinar18 at hotmail.com> wrote:
> Might as well warn you: This is going to be a rather long post.
> I'm not sure if this is appropriate to post here or if it would fit right in with the mailing list.  Sorry if it is the wrong place to post about this.

This is a relevant list for some of the questions below. I'll try to answer them.

> Quick Question: Do queues from the multiprocessing module use shared memory?  If the answer is YES, you can just skip this section, because that would solve this particular problem.

PyPy has no multiprocessing module so far (besides, I think it's an
ugly hack, but that's another issue).
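For reference, on CPython the answer is "no": multiprocessing queues pickle objects and send them over a pipe, while true shared memory is what Value/Array provide. A minimal CPython sketch contrasting the two (not applicable to PyPy, per the above):

```python
from multiprocessing import Process, Queue, Value

def worker(q, counter):
    q.put("hello from child")      # pickled and sent over a pipe, NOT shared memory
    with counter.get_lock():
        counter.value += 1         # Value lives in a real shared-memory segment

def main():
    q = Queue()
    counter = Value("i", 0)        # a C int in shared memory
    p = Process(target=worker, args=(q, counter))
    p.start()
    msg = q.get()
    p.join()
    return msg, counter.value

if __name__ == "__main__":
    print(main())
```

So a Queue costs a serialize/copy/deserialize round-trip per message, which is exactly why the shared-memory question matters for performance.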

> Does PyPy have any other options for me?

Right now, no. But there are ways in which you can experiment. Truly
concurrent threads (depending on implicit vs explicit shared memory)
might require a truly concurrent GC to achieve performance. This is
work (although not as big as, say, removing refcounting from CPython).
> True Pre-emptive scheduling?
> ----------------------------
> Any way to get pre-emptive micro-threads?  Stackless (the real
> Stackless, not the one in PyPy) has the ability to suspend them after a
> certain number of interpreter instructions; however, this is prone to
> problems because it can run much longer than expected.  Ideally, I would
> like to have true pre-emptive scheduling using
> hardware interrupts based on timing or CPU cycles (like the OS does for
> real threads).
> I am currently not aware of any way to achieve this in CPython, PyPy, Unladen Swallow, Stackless, etc....

Sounds relatively easy, but you would need to write this part in
RPython (however, that does not mean you get rid of GIL).
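To illustrate why this is hard at the pure-Python level: a wall-clock timer can only interrupt a task between bytecodes, so a single long C-level call still overruns its slice, which is the "can run much longer than expected" problem described above. A hedged CPython sketch using signal.setitimer (main thread only; the names here are illustrative, not an existing API):

```python
import signal

class Preempted(Exception):
    """Raised by the timer handler to yank control away from a task."""

def _on_tick(signum, frame):
    raise Preempted

def run_with_timeslice(task, seconds):
    """Run `task`, preempting it after `seconds` of wall-clock time.

    The handler only fires between bytecodes, so a blocking C call
    would still overrun its slice.
    """
    old = signal.signal(signal.SIGALRM, _on_tick)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    try:
        task()
        return True                # finished within its slice
    except Preempted:
        return False               # preempted mid-run
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)   # cancel the timer
        signal.signal(signal.SIGALRM, old)

def busy():
    while True:
        pass                       # pure-Python loop: preemptible
```

Doing this in RPython would move the check below the bytecode level, which is why it "sounds relatively easy" there but not in plain Python.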

> Are there detailed docs on why the Python GIL exists?
> -----------------------------------------------------
> I don't mean trivial statements like "because of C extensions" or "because the interpreter can't handle it".
> It may be possible that my particular usage would not require the GIL.  However, I won't know this until I can understand what threading problems the Python interpreter has that the GIL was meant to protect against.  Is there detailed documentation about this anywhere that covers all the threading issues that the GIL was meant to solve?

The short answer is "yes". The long answer is that it's much easier to
write an interpreter assuming the GIL is around. For fine-grained
locking to work and be efficient, you would need:

* Some sort of concurrent GC (not necessarily running in a separate
thread, but having different pools of memory to allocate from)
* Possibly a JIT optimization that would remove some locking.
* The aforementioned locking itself, designed so that it's not that
easy to screw things up.

So, in short, "work".
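A toy illustration of the kind of race the GIL papers over: CPython's reference counting is a plain read-modify-write on every object, which loses updates if two threads interleave. The Obj class below is hypothetical, and the "race" is simulated deterministically rather than with real threads:

```python
import threading

class Obj:
    """Toy object with an explicit refcount, mimicking what the
    interpreter does implicitly for every object."""
    def __init__(self):
        self.refcount = 0
        self._lock = threading.Lock()

    def incref_unsafe(self):
        # Non-atomic read-modify-write: without the GIL, another
        # thread could interleave between the read and the write.
        tmp = self.refcount
        self.refcount = tmp + 1

    def incref_locked(self):
        # Fine-grained locking: correct, but pays a lock acquisition
        # on every single refcount operation.
        with self._lock:
            self.refcount += 1

# Simulate the bad interleaving by hand:
o = Obj()
t1 = o.refcount        # "thread 1" reads 0
t2 = o.refcount        # "thread 2" reads 0
o.refcount = t1 + 1    # thread 1 writes 1
o.refcount = t2 + 1    # thread 2 also writes 1 -- one incref lost
```

Taking a lock on every refcount touch is the efficiency problem the bullet points above are getting at; hence "work".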
