[Python-Dev] Stackless Python

Phillip J. Eby pje at telecommunity.com
Mon May 31 22:51:39 EDT 2004


At 08:49 PM 5/31/04 -0400, Bob Ippolito wrote:
>On May 31, 2004, at 8:37 PM, Phillip J. Eby wrote:
>
>>At 02:09 AM 6/1/04 +0200, Christian Tismer wrote:
>>>Anyway, I don't really get the point.
>>>95% of Stackless is doing soft-switched stackless calls.
>>>Behavior is completely controllable. We can easily avoid
>>>any special C stack operation, by setting a flag that
>>>disallows it (easy to implement) or by excluding the hard
>>>switching stuff, completely (not an option now, but easy, too).
>>
>>If soft-switching is portable (i.e. pure C, no assembly), and is exposed 
>>as a library module (so that Jython et al can avoid supporting it), then 
>>perhaps a PEP for adding that functionality to mainstream Python would be 
>>meaningful.
>
>Soft switching needs to be implemented in a few key places of the 
>interpreter itself or else Stackless would surely have been maintained as 
>an extension module.

I'm aware of this, which is why I said "exposed" as a library module, not 
"implemented" as one.  :)


>   It is already pure C, no assembly or platform specific 
> code.  Supporting the interface from Jython or IronPython should be 
> possible, though at worst case each tasklet might actually be a new 
> thread so it might not be terribly efficient... though it would work.

Yes, I suppose in the simplest case one could implement the switching 
primitives with a lock or event per tasklet: signal the target's when 
resuming it, and block on your own when yielding.  If the PEP described the 
semantics of tasklets in terms of threads, then a heavyweight, thread-backed 
implementation would at least be achievable.
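
For example, a crude thread-per-tasklet emulation might look roughly like 
this (just a sketch in present-day Python, off the top of my head -- the 
Tasklet, resume() and switch_to() names are mine and have nothing to do 
with Stackless's actual API):

import threading
import time

class Tasklet:
    """One OS thread per tasklet; a per-tasklet Event keeps only one runnable."""

    def __init__(self, func, *args):
        self._event = threading.Event()
        self._thread = threading.Thread(target=self._run, args=(func,) + args)
        self._thread.daemon = True    # let the process exit if a tasklet stays parked
        self._thread.start()          # the thread blocks immediately in _run()

    def _run(self, func, *args):
        self._event.wait()            # don't start running until resumed
        func(self, *args)

    def resume(self):
        self._event.set()

    def switch_to(self, other):
        """Cooperative switch: wake `other`, then block until resumed again."""
        self._event.clear()
        other.resume()
        self._event.wait()

def worker(me, name, peers):
    for i in range(3):
        print(name, i)
        me.switch_to(peers[name])     # hand control to our peer

peers = {}
a = Tasklet(worker, "A", peers)
b = Tasklet(worker, "B", peers)
peers["A"], peers["B"] = b, a         # A's peer is B, and vice versa
a.resume()                            # kick off the ping-pong
time.sleep(1)                         # give the daemon threads time to finish

Horribly heavyweight, of course -- every tasklet costs a whole OS thread 
and two context switches per switch -- but it would give an alternate 
implementation something to fall back on.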

However, it seems to me that any Python implementation that supports 
generators should in principle be able to support co-operative multitasking 
anyway, so long as there is no C code in the call chain.  Since microthreads 
are basically doable in pure Python 2.2 as long as you write *everything* as 
a generator and simulate your own control stack, it seems the interpreter 
itself ought to be able to offer the same thing natively.
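
Something like this toy trampoline shows what I mean (again my own 
illustration, in present-day Python rather than 2.2 syntax): the scheduler 
keeps an explicit stack of generators per task, so a generator "calls" 
another simply by yielding it:

from collections import deque
import types

def subtask(n):
    for i in range(n):
        yield ("step", i)             # a plain value: just a cooperative yield

def task(name):
    yield ("hello", name)
    yield subtask(2)                  # "calling" another generator

def run(*tasks):
    # One entry per microthread: a stack of active generators (its "call stack").
    ready = deque([gen] for gen in tasks)
    while ready:
        stack = ready.popleft()
        try:
            result = next(stack[-1])  # run the innermost "frame" for one step
        except StopIteration:
            stack.pop()               # that "frame" returned
            if stack:
                ready.append(stack)   # the caller becomes runnable again
            continue
        if isinstance(result, types.GeneratorType):
            stack.append(result)      # simulated call: push the new "frame"
        else:
            print(result)             # an ordinary yielded value
        ready.append(stack)           # round-robin reschedule

run(task("A"), task("B"))

A real version would also have to route return values and exceptions back 
down the simulated stack, which is exactly the bookkeeping that makes the 
write-everything-as-a-generator style so tedious without interpreter support.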

Actually, it seems to me that if there were a way to resume execution of a 
frame that has thrown a special "task switching" exception (sort of a 
non-local 'yield' operation), you could implement lightweight threading 
*without* having to de-stackify Python.  Of course, it's possible that the 
implementation of such a strategy might be just as complex as de-stackifying 
Python.

To make it work, I think you'd need a way to tell a nested execution of 
Python code that it's being called from Python code (as opposed to C), so 
that it knows where to "resume to".  A chain of such frames could then be 
resumed by a special function.  Then, upon catching an exception that didn't 
unwind through any C code, you could call a function to resume the 
interrupted frame (and its parents).  Hm, actually, you'd have to copy the 
frames in order to be able to resume them, so you'd need a designated 
exception type for resumable exceptions, so as not to waste time copying 
frames for other kinds of errors.

Okay, I think I just convinced myself that it would be better to just make 
the interpreter core stackless instead of trying to design all that other 
crud.  :)

(To clarify, that's not an unqualified +1 to some hypothetical 
stacklessizing.  I'm just saying that if supporting ultra-light threading 
in Python is desirable, stacklessness seems at first glance like a 
promising way to do it, compared to trying to deal with unwinding the C 
stack.  And, to top it off, I think you'll end up doing half of what 
Stackless already does in order to "resume" the interrupted frames anyhow.)



