
I'm sure that there's a lot more like that out there. However, there is also a lot of stuff out there that is *superficially* similar to what I am talking about, and I want to make the distinction clear. For example, any discussion of concurrency in Python will naturally raise the topic of both IPython and Stackless. However, IPython (from what I understand) is really about distributed computing and not so much about fine-grained concurrency; and Stackless (from what I understand) is really about coroutines or continuations, which is a different kind of concurrency. Unless I am mistaken (and I very well could be), neither of these is immediately applicable to the problem of authoring Python programs for multi-core CPUs, but I think that both of them contain valuable ideas that are worth discussing.
From what I understand, the main contribution of the Stackless approach to concurrency is microthreads: the ability to have lots and lots of cheap threads. If you want to program for some huge number of cores, you will need even more threads than you have cores today.
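To make the idea concrete, here is a minimal sketch of what "lots of cheap threads" can look like in plain Python, using generators as cooperative microthreads driven by a tiny round-robin scheduler. This is my own illustration, not the Stackless API (the names `task`, `run`, and the step counts are made up), and unlike real threads these tasks don't run in parallel; the point is only that tens of thousands of them cost almost nothing, while my box taps out at a few hundred OS threads.

```python
from collections import deque

def task(name, steps):
    """A cooperative 'microthread': does a bit of work, then yields control."""
    for i in range(steps):
        yield (name, i)  # hand control back to the scheduler

def run(tasks):
    """Round-robin scheduler: resumes each task in turn until all finish."""
    ready = deque(tasks)
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))  # resume the task until its next yield
        except StopIteration:
            continue               # task finished; drop it
        else:
            ready.append(t)        # still alive; put it back in the queue
    return trace

# 10,000 "threads" like this are no problem at all -- each is just a
# suspended generator frame, far cheaper than an OS thread.
tasks = [task("t%d" % n, 2) for n in range(10000)]
trace = run(tasks)
print(len(trace))  # 20000 resumptions: each of the 10000 tasks yielded twice
```

Stackless tasklets (and real OS threads) of course do more than this, but the cost profile is the interesting part: the scheduler above happily juggles far more tasks than any thread limit would allow.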
The problem is that right now Python (on my Linux box) will only let me have 381 threads. And if we want concurrent programming to be made easy, the user should not have to start their program with "how many threads can I create? 500? OK, so I should partition my app like this". This leads to either: a) applications that won't get faster once the user can create more than 500 threads, or b) a lot of complicated logic to partition the software at runtime. The method should go the other way around: make all the threads you can think of, and if there are enough cores, they will run in parallel.

jm2c.

Regards,
Lucio.