[pypy-dev] pre-emptive micro-threads utilizing shared memory message passing?

Gabriel Lavoie glavoie at gmail.com
Wed Jul 28 21:32:38 CEST 2010


Hello Kevin,
     I don't know if it can be a solution to your problem, but for my Master's
thesis I'm working on making Stackless Python distributed. What I have is
working but not complete, and I'm currently in the process of writing the
thesis (in French, unfortunately). My code currently works only with PyPy's
"stackless" module and uses some PyPy-specific features. Here's what I
added to Stackless:

- Possibility to move tasklets easily (ref_tasklet.move(node_id)). A node is
an instance of an interpreter.
- Each tasklet has its own global namespace (to avoid sharing of data). The
state is also easier to move to another interpreter this way.
- Distributed channels: all requests are known by all nodes using the
channel.
- Distributed objects: when a reference is sent to a remote node, the object
is not copied; a reference is created using PyPy's proxy object space.
- Automated dependency recovery when an object or a tasklet is loaded on
another interpreter.
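To make the ideas above concrete, here is a minimal, hypothetical sketch (not the thesis code, and not the real distributed API): each "node" is simulated by a worker thread running a tasklet with its own private namespace, and a shared channel registry stands in for the distributed channels, where every node resolves the same channel name to the same channel. All names here (get_channel, node, etc.) are invented for illustration.

```python
import queue
import threading

# Shared registry standing in for the distributed channels: every node can
# look up a channel by name and sees the same channel object.
CHANNELS = {}

def get_channel(name):
    # every node resolves the same name to the same channel
    return CHANNELS.setdefault(name, queue.Queue())

def node(node_id, tasklet):
    # each tasklet runs with its own private namespace dict, mimicking the
    # per-tasklet global namespace described above
    namespace = {"node_id": node_id}
    tasklet(namespace)

def producer(ns):
    # sends a message over the shared channel "work"
    get_channel("work").put(("hello from node", ns["node_id"]))

def consumer(ns, results):
    # any node can receive on the same channel by name
    results.append(get_channel("work").get())

results = []
t1 = threading.Thread(target=node, args=(0, producer))
t2 = threading.Thread(target=node, args=(1, lambda ns: consumer(ns, results)))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # [('hello from node', 0)]
```

In the real system the registry would be replicated across interpreters rather than shared in one process, but the lookup-by-name pattern is the same.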

With a proper scheduler, many tasklets could be automatically spread across
multiple interpreters to use multiple cores, or across multiple computers. It is
a bit like the N:M threading model, where N lightweight threads/coroutines are
executed on M threads.
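The N:M idea can be sketched in plain Python, assuming generator-based "tasklets": N cooperative tasklets (generators that yield to give up the CPU) are multiplexed over M worker threads via a shared run queue. This is only an illustration of the model, not the Stackless scheduler itself.

```python
import queue
import threading

def worker(runq, done):
    # M of these run concurrently, each picking the next runnable tasklet
    while True:
        task = runq.get()
        if task is None:              # shutdown sentinel
            return
        try:
            next(task)                # run the tasklet up to its next yield
            runq.put(task)            # reschedule it for another step
        except StopIteration:
            done.release()            # tasklet finished

def tasklet(name, results, steps=3):
    # a lightweight "thread": does a bit of work, then yields cooperatively
    for i in range(steps):
        results.append((name, i))
        yield

M, N = 2, 4                           # M worker threads, N tasklets
runq, done = queue.Queue(), threading.Semaphore(0)
results = []
threads = [threading.Thread(target=worker, args=(runq, done)) for _ in range(M)]
for t in threads:
    t.start()
for n in range(N):
    runq.put(tasklet("t%d" % n, results))
for _ in range(N):
    done.acquire()                    # wait for all N tasklets to finish
for _ in range(M):
    runq.put(None)                    # stop the workers
for t in threads:
    t.join()
print(len(results))                   # 12: 4 tasklets x 3 steps each
```

A tasklet sits in the run queue only between steps, so at most one worker runs it at a time; the interleaving of different tasklets across the M threads is nondeterministic, which is exactly the point of the model.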

The API is described here in French, but it's pretty straightforward:
https://w3.mutehq.net/wiki/maitrise/API_DStackless

The code is available here (Just click on the Download link next to the
trunk folder):
https://w3.mutehq.net/websvn/wildchild/dstackless/trunk/

You need pypy-c built with --stackless. The code is a bit buggy right now
though...

