ianb at colorstudy.com
Tue Jun 5 21:34:45 CEST 2007
Atul Varma wrote:
> On 6/5/07, Chris McAvoy <chris.mcavoy at gmail.com> wrote:
>> This Stackless tutorial has a blurb about "concurrency is the new
>> thing" http://members.verizon.net/olsongt/stackless/why_stackless.html#concurrency-might-just-might-be-the-next-big-programming-paradigm
>> which I agree with...however...if you make a bunch of microthreads in
>> Stackless...they're not going to take advantage of multiple cores or
>> cpu's, right? Or am I wrong about that?
> I believe you are correct. From what I understand, a lot of people
> consider the idea of a single address space being shared by two
> processors (or threads) as a recipe for disaster: code readability
> is complicated by all the locking mechanisms, and a whole new class
> of extremely hard-to-debug problems (deadlocks, starvation, race
> conditions, etc.) crops up as a result.
> This post by Guido may help shed more light on it:
> The bottom line, though, is that it appears as though Python isn't
> really going to improve its support for threads; rather, the
> assumption is that if a Python solution needs to take advantage of
> multiple processors, it should use multiple processes instead of
> multiple threads.
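[For concreteness, a minimal sketch of the multiple-process approach, using the stdlib multiprocessing module; note that module postdates this thread, which is why people here were still discussing third-party options. Each worker runs in its own interpreter with its own GIL, so CPU-bound work can actually use multiple cores:]

```python
# Minimal sketch: fan CPU-bound work out to separate processes.
# Each worker is a full OS process with its own interpreter and GIL.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(4) as pool:                    # four worker processes
        results = pool.map(square, range(10))
    print(results)                            # [0, 1, 4, 9, 16, ...]
```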
I really wish PyPy would implement green thread/processes, with
non-shared memory. Then you'd get really light processes that acted
like traditional processes. You'd need some clever copy-on-write stuff
so that you didn't have a complete memory copy when two processes used
the same module; otherwise the processes wouldn't be all that light.
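[A hedged sketch of the non-shared-memory, message-passing style described above, approximated with OS processes and a Queue from the stdlib multiprocessing module; the green processes imagined here would be far lighter than this, but the programming model is the same: no shared state, only copied messages:]

```python
# Sketch of share-nothing workers communicating only by message passing.
# Real "green processes" would be much cheaper than OS processes.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # The worker shares no objects with its parent; every item it
    # receives or sends is serialized and copied across the Queue.
    for item in iter(inbox.get, None):        # None is the stop sentinel
        outbox.put(item.upper())

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for word in ["spam", "eggs"]:
        inbox.put(word)
    inbox.put(None)                           # tell the worker to finish
    print([outbox.get() for _ in range(2)])   # ['SPAM', 'EGGS']
    p.join()
```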
I've suggested this a few times, and PyPy people always say that sure,
that would be easy; another kind of object space, I guess. But of
course "would be easy" and "exists" aren't the same thing, so the answer
is never that satisfying.
I'm intrigued by this module, but haven't tried it:
Ian Bicking | ianb at colorstudy.com | http://blog.ianbicking.org
| Write code, do good | http://topp.openplans.org/careers