ANN: Stackless Python 0.2
cwebster at math.tamu.edu
Tue Jun 29 03:06:49 CEST 1999
In article <000c01bec11f$8dfbf800$e19e2299 at tim>,
Tim Peters <tim_one at email.msn.com> wrote:
>[Corran Webster, commenting on Christian Tismer's "stackless Python"]
>> I wonder whether there's the potential here for more than coroutines
>> and to write an implementation of threading within Python.
>It's been mentioned a few times. So far Guido isn't keen on any of this,
>but he's been known to buckle after a few short years of incessant whining
><wink>. BTW, a coroutine pretty much *is* a thread in a time-sliced world,
>just lacking a "transfer" function that invokes implicitly whenever it
>bloody well feels like it <wink>.
Indeed - and it was all this talk of coroutines that made me wonder
why not go that extra step.
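The distinction Tim draws - a coroutine is a thread minus the implicit transfers - can be sketched with Python generators (a feature that arrived well after this thread, so this is a modern illustration, not anything in stackless Python 0.2). Control moves only at the explicit yield points, never "whenever it bloody well feels like it":

```python
# A sketch of coroutines with *explicit* transfer, using generators.
# Control changes hands only at yield / next(), never behind our back.

def producer(items):
    for item in items:
        yield item              # explicit transfer out to the caller

def consume(items):
    results = []
    for item in producer(items):  # each loop step is an explicit transfer in
        results.append(item * 2)
    return results

print(consume([1, 2, 3]))       # -> [2, 4, 6]
```

A preemptive thread would be the same thing with the scheduler inserting those transfers for you at arbitrary points.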
>> Each thread would presumably need its own Python stack,
>Nope! It's a beauty of the implementation already that each code object
>knows exactly how much "Python stack space" it needs, and (just) that much
>is allocated directly into the code object's runtime frame object. IOW,
>there isn't "a Python stack" as such, so there's nothing to change here --
>the stack is implicit in the way frames link up to each other.
Ah - my impression of how the stack works was wrong (I hadn't
gotten that far in my reading of the source). So each code object
has a little block of stack space set aside (precalculated to be
enough), together with a pointer to its own top of stack inside
that block?
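That precalculated per-code-object stack requirement is visible from Python itself: CPython stores it on the code object as `co_stacksize`. A quick check (exact numbers vary by CPython version, so none are asserted here):

```python
# Each code object records, at compile time, the maximum value-stack
# depth it can ever need; CPython exposes this as co_stacksize.
shallow = compile("a + b", "<expr>", "eval")
deep = compile("((a + b) + (c + d)) + ((e + f) + (g + h))", "<expr>", "eval")

print(shallow.co_stacksize, deep.co_stacksize)
# The more deeply nested expression needs more stack slots, and each
# frame allocates exactly that much - there is no shared "Python stack".
```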
>> and a queueing and locking system would need to be added somehow,
>Yes, and that would require some changes to the core. Sounds doable,
More work than coroutines, I suspect, and I don't think it can
easily be made to use the same framework as the current C threads.
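To make the "queueing" part concrete, here is a minimal round-robin scheduler for fake threads, again sketched with modern generators rather than anything in the stackless patch: each task runs until it voluntarily yields, then rejoins the back of the ready queue. No OS threads, no locks.

```python
from collections import deque

def worker(name, steps, log):
    """A 'fake thread': runs one step, then gives up control."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                   # voluntary transfer back to the scheduler

def run(tasks):
    """Round-robin over the ready queue until every task finishes."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run the task up to its next yield
            ready.append(task)  # still alive: back of the queue
        except StopIteration:
            pass                # task finished; drop it

log = []
run([worker("a", 2, log), worker("b", 2, log)])
print(log)                      # -> ['a:0', 'b:0', 'a:1', 'b:1']
```

Real preemption would need the interpreter loop to force that `yield` periodically, which is where the extra core changes come in.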
>> but because the Python and C stacks are no longer intertangled, switching
>> between threads should be easy (as opposed to impossible <wink>).
>I think Christian's approach moves lots of crazy ideas from impossible to
Indeed. A very nice piece of code.
>> This wouldn't be quite as flexible as the current implementations of
>> threads at the C level - C extensions would be called in a single block
>> with no way to swap threads until they return.
>That's mostly true today too: Python threads run serially now, one at a
>time, and if a thread calling out to C doesn't release the global lock no
>other thread will run until it returns.
OK. And presumably most extensions don't bother to release the lock
unless they specifically want to take advantage of threading somehow.
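The effect of a C call that does release the lock is easy to observe from Python: `time.sleep` is a C-level call that releases the global lock while it blocks, so two sleeping threads overlap in time instead of taking strict turns. (An extension that held the lock for the whole call would serialize them.)

```python
import threading
import time

def snooze():
    # time.sleep releases the global interpreter lock while blocking,
    # so concurrent sleeps overlap rather than run back to back.
    time.sleep(0.3)

start = time.monotonic()
threads = [threading.Thread(target=snooze) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.2f}s")   # close to 0.3s, not 0.6s
```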
>> On the other hand, this would allow every platform to have some sort
>> of threading available, which would be a boon.
>It's even quite possible that "fake threads" would-- for programs that
>aren't doing true multiprocessing --run significantly more efficiently than
>today's scheme of creating OS-level threads and then choking them into
>taking strict turns; e.g., because "the Python stack" grows only to the
>exact size it needs, there's almost certainly much less memory overhead that
>way than by letting the OS allocate a mostly unused Mb (whatever) to each
>real thread's stack. This opens the possibility to create thousands &
>thousands of fake threads.
Although any machine that is being used that intensively probably already
has some sort of C-level threading available.
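The memory argument still holds, though, and generators give a rough modern feel for it: a generator-based fake thread costs one heap-allocated frame sized to its code object's needs, so creating ten thousand of them is cheap, whereas ten thousand OS threads would each get their own C stack from the operating system.

```python
def fake_thread(n):
    """One 'fake thread' = one frame, sized exactly to this code object."""
    yield n * n

# Ten thousand suspended fake threads: just ten thousand small frames.
tasks = [fake_thread(n) for n in range(10_000)]
total = sum(next(t) for t in tasks)
print(len(tasks), total)
```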
>> Unfortunately I'm not familiar enough with threads to go out there and
>> implement it right away, but I thought I'd at least raise it as a
>> possibility and see what people think and what the pros and cons are.
>It's sure worth pondering!
>write-some-code-anyway-&-get-famous-one-way-or-another<wink>-ly y'rs - tim
Well, I won't be writing it this week <wink>, but I have a better idea of
the ins and outs now.