Well, in functional languages (like Erlang), variables tend to be immutable. This is a bonus in a concurrent system: it makes the system easier to reason about and helps avoid various race conditions. As for shared memory, I think there is a difference between whether things are shared at the application-programmer level or under the hood, controlled by the system. Programmers tend to be bad at the former.
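To make the immutability point concrete, here is a minimal Python sketch of the Erlang-style immutable-update idiom (the `Account`/`deposit` names are purely illustrative, not from any library):

```python
from collections import namedtuple

# namedtuple instances are immutable: fields cannot be reassigned in place
Account = namedtuple("Account", ["owner", "balance"])

def deposit(acct, amount):
    # Build a NEW Account rather than mutating the old one; a concurrent
    # reader holding `acct` can never observe a half-updated record.
    return acct._replace(balance=acct.balance + amount)

old = Account("alice", 100)
new = deposit(old, 50)
print(old.balance, new.balance)  # 100 150 -- the original is untouched
```

Because no value is ever modified after creation, there is simply nothing for two threads to race on; they can only exchange or construct fresh values.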
You're right... and I am actually talking about non-shared memory from the programmer's perspective; under the hood, it MUST use shared memory for the implementation. The problem I am running into is that there is no way to implement it under the hood, because there is no way to do shared memory in Python. Thanks for bringing that up; maybe that will clarify what I was going on about. :)
I took a quick look. Maybe I am biased, but Stackless Python gives you most of that. Also, tasklets and channels can do everything a generator can and more (a generator is more specialised than a coroutine). It is also easy to mimic asynchrony with a CSP-style messaging system where microthreads and channels are cheap. A line from the book "Actors: A Model of Concurrent Computation in Distributed Systems" by Gul A. Agha comes to mind: "synchrony is mere buffered asynchrony."
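The tasklet/channel idea can be sketched with nothing but the stdlib, using a thread as a stand-in for a tasklet and a bounded `queue.Queue` as the channel (this is only an approximation of Stackless semantics, and `channel_range` is a made-up name):

```python
import queue
import threading

def channel_range(n, maxsize=1):
    """Reproduce what a generator does, but via a producer 'tasklet'
    (here: a thread) writing into a channel (here: a bounded queue)."""
    ch = queue.Queue(maxsize=maxsize)
    done = object()  # unique sentinel marking end of stream

    def producer():
        for i in range(n):
            ch.put(i)      # blocks until the consumer is ready (maxsize=1)
        ch.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = ch.get()
        if item is done:
            return
        yield item

print(list(channel_range(5)))  # [0, 1, 2, 3, 4]
```

With `maxsize=1` the producer and consumer proceed in near-lockstep, which is roughly the "synchrony is mere buffered asynchrony" observation: a rendezvous is just an asynchronous send over a very small buffer.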
Agreed. Stuff like the stackless module in PyPy, greenlets, Twisted, and others do offer some useful options that are even better than generators... I could definitely make use of them for some of the broader implementation details. However, the problem is always that there is no way to make them parallel within Python itself, because there is no shared memory that I can use for the "under the hood" implementation. Now, if there is a true parallel implementation of Stackless, greenlets, Twisted, etc., maybe it could fit my purposes... but I'd have to check. I did some basic searching on various Python threading implementations in the past and didn't really find one that did, but, like you suggested, maybe there is one out there somewhere.
> The process of connecting the boxes together was actually designed to be programmed visually, as you can see from the examples in the book (I have no idea if it works well, as I am merely starting to experiment with it).
What brought me to Stackless Python and PyPy was work concerning WS-BPEL. Allegedly, WS-BPEL/XLang/WSFL (Web Services Flow Language) are based on formalisms like the pi-calculus.
Since I don't own a multi-core machine and I am not doing CPU-intensive stuff, I never really cared. However, I have been doing things where I needed to impose logical orderings upon processes (i.e., process C can only run after processes A and B are finished). My initial naive uses of Stackless resulted in deadlocking the system (easy to do in any system based on CSP). So I found understanding deadlock to be very important.
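That "C only after A and B" ordering can be sketched in plain stdlib `threading` (the `task_*` names are just illustrative); the same shape would apply to tasklets blocking on channels:

```python
import threading

done_a, done_b = threading.Event(), threading.Event()
log = []

def task_a():
    log.append("A")
    done_a.set()   # announce A's completion

def task_b():
    log.append("B")
    done_b.set()   # announce B's completion

def task_c():
    # C blocks until BOTH A and B have finished.
    # If either event were never set (or two tasks each waited on the
    # other), this wait would never return -- that is the deadlock.
    done_a.wait()
    done_b.wait()
    log.append("C")

threads = [threading.Thread(target=f) for f in (task_c, task_a, task_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)  # A and B in either order, but 'C' is always last
```

Note that C is deliberately started first: correctness comes from the waits, not from launch order.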
Thanks... and, uh, about all I can do is bookmark this for later. Really, thanks for the links; I may very well want to research each and every one of these at some point and see what I can learn from each one. If you have more stuff like that, feel free to let me know. :)
My advice: get stuff properly working under a single-threaded model first, so you understand the machinery. That said, I think Carlos Eduardo de Paula played with adapting Stackless for multi-processing a few years ago.

Yeah, I've been considering that. Maybe I'll just go ahead with a single-threaded implementation... and if I feel like it, I could always try to edit PyPy or one of the other implementations later (although I probably never will due to time constraints :) ). Still, I figured I might as well ask around and see if a parallel implementation was possible sooner.
Or... what I may end up doing is using the multiprocessing module and queues. Granted, it will probably be slow, since it doesn't use shared memory "under the hood", but it would be parallel.
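A minimal sketch of that multiprocessing-plus-queues approach might look like this (the worker and `square_in_parallel` are hypothetical names; data crossing the queues is copied by pickling, which is the overhead mentioned above):

```python
import multiprocessing as mp

def _worker(inbox, outbox):
    """Consume numbers from inbox until a None sentinel; send back squares."""
    while True:
        item = inbox.get()
        if item is None:
            break
        outbox.put(item * item)

def square_in_parallel(values, n_workers=2):
    # "fork" keeps this self-contained on Unix; "spawn" would require the
    # worker to live in an importable module.
    ctx = mp.get_context("fork")
    inbox, outbox = ctx.Queue(), ctx.Queue()
    procs = [ctx.Process(target=_worker, args=(inbox, outbox))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    for v in values:
        inbox.put(v)
    for _ in procs:          # one sentinel per worker
        inbox.put(None)
    results = [outbox.get() for _ in values]
    for p in procs:
        p.join()
    return sorted(results)   # queue order is nondeterministic

print(square_in_parallel([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

Each worker is a real OS process, so the squares genuinely run on separate cores; the price is that every value is serialized through a queue instead of being read from shared memory.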
Second piece of advice: start looking at how Go does things. Stackless Python and Go share a common ancestor, but Go does much more on the multi-core front.

I have looked at Go's goroutines... albeit briefly. I noticed that they are co-operative like Stackless, and, based on your comments, I'm guessing they work across multiple cores? I was really disappointed that they were not pre-emptive, though. I haven't looked into it much beyond that, but maybe I'll give it another look; using it would mean not using Python, however. :(