Hello Kevin,
I don't know if it can be a solution to your problem, but for my Master's thesis I'm working on making Stackless Python distributed. What I have works but is not complete, and I'm currently in the process of writing the thesis (in French, unfortunately). My code currently works only with PyPy's "stackless" module and uses some PyPy-specific features. Here's what I added to Stackless:
- Possibility to move tasklets easily (ref_tasklet.move(node_id)). A node is an instance of an interpreter.
- Each tasklet has its own global namespace (to avoid sharing data). This also makes a tasklet's state easier to move to another interpreter.
- Distributed channels: All requests are known by all nodes using the channel.
- Distributed objects: when a reference is sent to a remote node, the object is not copied; instead, a reference is created using PyPy's proxy object space.
- Automated dependency recovery when an object or a tasklet is loaded on another interpreter.
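To illustrate the second point, here is a minimal plain-Python sketch (not the thesis code) of the per-tasklet namespace idea: each task runs its source against its own globals dict, so module-level names are never shared between tasks and a task's entire state lives in one dict that could be pickled and shipped elsewhere.

```python
# Illustrative sketch only: isolating "global" state per task by giving
# each task its own globals dict instead of the shared module namespace.
code = "counter = counter + 1\nresult = counter"

def run_task(source, initial_globals):
    ns = dict(initial_globals)   # this task's private global namespace
    exec(source, ns)             # names resolve against ns, not the module
    return ns                    # the whole task state, easy to serialize

ns_a = run_task(code, {"counter": 0})
ns_b = run_task(code, {"counter": 100})

print(ns_a["result"])  # 1
print(ns_b["result"])  # 101 -- the two tasks never shared 'counter'
```

The same run with a genuinely shared namespace would have made the second task see the first task's increment; the separate dicts are what keep the tasks independent.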
With a proper scheduler, many tasklets could be spread automatically across multiple interpreters to use multiple cores or multiple computers, a bit like the N:M threading model, where N lightweight threads/coroutines are executed on M OS threads.
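A toy version of that N:M idea can be sketched in plain Python (this is an illustration, not the thesis scheduler): N generator-based "tasklets" are placed on a shared run queue and drained by M worker threads.

```python
# Illustrative N:M sketch: N lightweight tasklets (generators) executed
# by M OS threads pulling from a shared run queue.
import threading
import queue

def tasklet(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"          # each yield is one unit of work

runnable = queue.Queue()
for n in range(6):                   # N = 6 tasklets
    runnable.put(tasklet(f"t{n}", 3))

results = []
results_lock = threading.Lock()

def worker():
    # Each worker claims tasklets from the queue and runs them to completion.
    while True:
        try:
            t = runnable.get_nowait()
        except queue.Empty:
            return                   # no more tasklets to claim
        for step in t:
            with results_lock:
                results.append(step)

threads = [threading.Thread(target=worker) for _ in range(2)]  # M = 2 threads
for th in threads:
    th.start()
for th in threads:
    th.join()

print(len(results))  # 18 -- all 6 tasklets ran their 3 steps
```

A real scheduler would of course preempt or reschedule tasklets at switch points rather than run each to completion, but the structure (many cheap tasks multiplexed over a few OS threads) is the same.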
The API is described here (in French), but it's pretty straightforward:
The code is available here (just click on the Download link next to the trunk folder):
You need a pypy-c built with --stackless. The code is a bit buggy right now, though...