Hello Kevin, I don't know if this solves your problem, but for my Master's thesis I'm working on making Stackless Python distributed. What I did works but is not complete, and I'm currently in the process of writing the thesis (in French, unfortunately). My code currently works only with PyPy's "stackless" module and uses some PyPy-specific features. Here's what I added to Stackless:
- Possibility to move tasklets easily (ref_tasklet.move(node_id)). A node is an instance of an interpreter.
- Each tasklet has its own global namespace (to avoid sharing of data). This also makes the state easier to move to another interpreter.
- Distributed channels: all requests are known by all nodes using the channel.
- Distributed objects: when a reference is sent to a remote node, the object is not copied; a reference is created using PyPy's proxy object space.
- Automated dependency recovery when an object or a tasklet is loaded on another interpreter.
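The second point above, a private global namespace per tasklet, can be sketched in plain CPython. This is only an illustration of the idea, not the thesis implementation; the names (run_tasklet, counter) are invented for the example.

```python
# Sketch: give each "tasklet" its own globals dict so tasklets never
# share module-level state. Illustrative only, not the thesis API.

def run_tasklet(source, entry="main"):
    """Execute tasklet code in a private globals dict and call its entry point."""
    private_globals = {"__builtins__": __builtins__}
    exec(source, private_globals)      # definitions land in private_globals only
    return private_globals[entry]()    # run the tasklet's entry function

code = """
counter = 0                            # module-level state, private to this run

def main():
    global counter
    counter += 1
    return counter
"""

# Each run starts from a fresh namespace, so the counter never leaks between runs.
print(run_tasklet(code), run_tasklet(code))  # → 1 1
```

Because the whole state of a tasklet lives in one dict, it is also a natural unit to serialize and ship to another interpreter, which is the point made above.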
With a proper scheduler, many tasklets could automatically be spread across multiple interpreters to use multiple cores, or across multiple computers. It is a bit like the N:M threading model, where N lightweight threads/coroutines are executed on M threads.
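The N:M idea can be sketched with plain generators and threads: N cooperative "tasklets" are drained by M worker threads pulling from a shared run queue. This is a toy round-robin scheduler under my own assumptions, not the thesis scheduler.

```python
# Minimal N:M sketch: N generator-based "tasklets" scheduled onto M threads.
import queue
import threading

def tasklet(name, steps, results):
    for i in range(steps):
        results.append((name, i))      # do one unit of work
        yield                          # cooperative yield point

def run_n_on_m(tasklets, m):
    runq = queue.Queue()
    for t in tasklets:
        runq.put(t)

    def worker():
        while True:
            try:
                t = runq.get_nowait()
            except queue.Empty:
                return                 # no runnable tasklet held: worker exits
            try:
                next(t)                # run until the next yield
                runq.put(t)            # reschedule round-robin
            except StopIteration:
                pass                   # tasklet finished

    threads = [threading.Thread(target=worker) for _ in range(m)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

results = []
run_n_on_m([tasklet(f"t{i}", 3, results) for i in range(5)], m=2)
print(len(results))  # → 15  (5 tasklets x 3 steps each)
```

Each tasklet is in the queue at most once, so no generator is ever resumed from two threads at the same time; that invariant is what makes this simple scheme safe.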
I was able to have a look at the API... If others don't mind my asking this on the mailing list:

* .send() and .receive(): What type of data can you send and receive between the tasklets? Can you pass entire Python objects?
* .send() and .receive() memory model: When you send data between tasklets (pass messages, or whatever you want to call it), how is this implemented under the hood? Does it use shared memory, or does it involve a more costly copy of the data? I realize that if it is on another machine you have to copy the data, but what about between two threads?

You mentioned PyPy's proxy object... guess I'll need to read up on that.
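The two regimes the memory-model question distinguishes can be demonstrated with stand-ins: a plain queue.Queue in place of a channel for the in-process case, and pickle for the cross-machine case. These are my assumed semantics for illustration, not a statement of what the distributed Stackless code actually does.

```python
# In one process, "sending" an object passes a reference (shared memory, no copy).
# Across processes or machines, the object must be serialized, which yields a copy.
import pickle
import queue

ch = queue.Queue()                 # stand-in for a channel
payload = {"answer": 42}

ch.put(payload)                    # in-process send: no copy is made
received = ch.get()
print(received is payload)         # → True  (the very same object)

wire = pickle.dumps(payload)       # cross-machine send: serialize to bytes...
remote = pickle.loads(wire)        # ...and rebuild on the other side
print(remote == payload, remote is payload)  # → True False (equal, but a copy)
```

PyPy's proxy object space adds a third option mentioned earlier in the thread: instead of copying, the remote side gets a transparent proxy that forwards operations back to the owning node.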