Stackless Python, eventual merge?

Oren Tirosh oren-py-l at hishome.net
Wed Sep 18 11:18:34 EDT 2002


On Wed, Sep 18, 2002 at 03:33:32PM +0200, Christian Tismer wrote:
> Oren Tirosh wrote:
> > They may be dangerous in some circumstances. IIRC, if a C extension calls
> > the Python interpreter recursively with pointers to things on its stack
> > it can cause memory corruption. 
> 
> Then don't switch stacks in that context.
> The switching occurs on your demand.
> But the theoretical possibility of a crash
> is no reason to dispense with the benefits.

Definitely not. Don't get me wrong - I'm very excited about stackless. 

> My only chance to do this magic with almost no crash
> possibility would be to allocate a new stack for
> every new tasklet. 

Yes, that's a possible solution. I wouldn't say that it's the only
chance, though. There's always more than one way to skin a cat.

> A benefit would be reasonably faster switching, while
> the drawback would be the reasonably high memory
> requirement for a full machine stack.
> I might consider a hybrid solution, putting certain
> "dangerous" extension stuff into a real new stack
> all the time, together with the quite fast and very
> cheap other safe tasklets.
> 
> Just-need-an-idea-how-to-determine-that - ly y'rs - chris

Who says you have to determine that in advance? Just try. If the
extension actually calls the interpreter recursively, that means it
needs its own stack. Let it keep exclusive ownership of the current OS
thread and its associated stack, and allocate another thread from a
dynamic thread pool to continue tasklet scheduling. When the extension
returns to nesting level 0, its thread is released back into the pool.
This way a relatively small pool of OS threads can juggle a much larger
pool of tasklets.
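
(A rough sketch of the idea in plain Python, with generators standing
in for the tasklets and ordinary threads standing in for the pool; all
the names here are made up for illustration, none of this is Stackless
API:)

import threading
import queue

class Scheduler:
    def __init__(self, pool_size=4):
        self.ready = queue.Queue()                  # runnable tasklets
        self.pool = threading.Semaphore(pool_size)  # spare OS threads

    def add(self, tasklet):
        self.ready.put(tasklet)

    def run(self):
        while not self.ready.empty():
            tasklet = self.ready.get()
            try:
                request = next(tasklet)             # run to its next switch
            except StopIteration:
                continue                            # tasklet finished
            if request == "needs-own-stack":
                self._hand_off(tasklet)             # extension re-entered us
            else:
                self.ready.put(tasklet)             # ordinary cheap switch

    def _hand_off(self, tasklet):
        self.pool.acquire()                         # claim a pool thread

        def worker():
            try:
                for _ in tasklet:                   # runs on its own stack
                    pass                            # until nesting level 0
            finally:
                self.pool.release()                 # thread back to the pool

        threading.Thread(target=worker).start()

Only the tasklets that actually re-enter the interpreter from C pay for
a real stack; the rest stay cheap.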

This could also be a good way to handle extensions that make blocking
I/O calls, such as database engines: a relatively small pool of OS
threads serves requests from a much bigger pool of tasklets. It can be
transparent, too: when the number of concurrent requests exceeds the
number of threads, new request frames are simply queued until a thread
becomes available. The level of concurrency for each resource can then
be tuned dynamically to optimize performance without affecting the
overall structure of the program.
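
(Again only a sketch with made-up names, not real API: a bounded pool
of worker threads drains a per-resource request queue, so no matter how
many tasklets issue calls, only as many OS threads block as the pool
holds; changing the pool size tunes the concurrency for that resource
without touching the callers.)

import threading
import queue

class BlockingResource:
    def __init__(self, call, threads=4):
        self.call = call                    # e.g. a database client function
        self.requests = queue.Queue()       # pending (args, reply) pairs
        for _ in range(threads):            # the tunable concurrency level
            threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            args, reply = self.requests.get()
            reply.put(self.call(*args))     # the only place that blocks

    def submit(self, *args):
        reply = queue.Queue(maxsize=1)
        self.requests.put((args, reply))    # queued if all threads are busy
        return reply                        # the caller waits on this

A tasklet-aware version would suspend the caller on the reply channel
instead of blocking on it, but the queueing and the tuning knob are the
same.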

	Oren