Weird pointer errors... Stackless problem? PyString_InternInPlace Fatal error

Christian Tismer tismer at appliedbiometrics.com
Sat Sep 16 12:15:01 CEST 2000


Mike Fletcher wrote:
> 
> Background: Chris's fix stopped one particular manifestation, but the
> general problem still exists...
> 
> I'm beginning to suspect something:
> In atomic (the most commonly used function in uthread), we have a try
> finally clause...
> 
> continuation_uthread_lock = continuation.uthread_lock
> def atomic(func, *args, **kwargs):
>         """Perform a function call as a microthread-safe operation."""
>         tmp = continuation_uthread_lock(1)
>         try:
>                 return apply(func, args, kwargs)
>         finally:
>                 continuation_uthread_lock(tmp)

Just and others have already pointed me to some leak
problems with uthreads and exceptions. I haven't been able
to nail them down yet, but there probably is something there.

> Now, I have a (large) number of fairly new (i.e. added in the last few weeks
> to fix other bugs that had emerged) cases in my code where I need atomic
> action right up to the point where a thread blocks (and then after as well).
> I've implemented this as:
> 
> atomic( thecall )
> 
> With the call blocking the thread as normal.  Now, this seems to work, the
> system keeps running afterward (i.e. the atomic nature doesn't prevent other
> threads from running once we block the thread), but I'm thinking the problem
> may be that continuation_uthread_lock(tmp) is getting called twice on the
> same "tmp" (once when blocked, once when un-blocked).  Thinking about it,
> the lock likely isn't getting re-acquired after blocking when the
> continuation is called, so I may have to alter uthreads to fix this stuff
> anyway (not, of course, that I can see an elegant way to do that.)  I had
> (almost certainly wrongly) assumed the lock was stored as part of the
> continuation and was getting restored when the continuation was called.

Clarification on the lock:
No, it is much, much simpler.
The thread_state has a single flag which tells whether
automatic context switching is allowed or not.
Every new interpreter incarnation starts with this flag
set, so by default no uthreads are running.
After a recursive interpreter finishes, it restores the
lock value that was there before it ran. So much for
recursive behavior. (Nightingale, do I hear you whispering?)

So let's assume we are running at the same interpreter level
all the time (ignoring recursions, since they are locked).
uthread_lock does nothing but write a new value into the
thread_state lock and return the old value. Proper handling
of multiple calls to it is up to you.

Yes, this is very simple; maybe it is too simple, I don't know.

Anyway, right or wrong, the system must not crash under any
circumstances, and I'm sure I'm missing some cases here.
Now, can it be that you are unlocking while you are inside
an interpreter recursion?
This should normally work until your frame chain returns to
the caller of the recursion via a continuation. The frame
would then be hit in an insane state, which is prevented:
nobody can create a continuation of a frame that is waiting
for the completion of a ceval recursion. Instead, an
exception is thrown.
But I realize that you are catching every exception.
Can you try removing the try..except, or being more specific?
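As an illustration of "be more specific" (the function and
exception types here are hypothetical, not taken from your
code): a bare except also swallows the error Stackless raises
when a frame waiting on a ceval recursion is captured, so
catch only what you expect and let everything else propagate:

```python
def run_task(thecall):
    """Run a callable, absorbing only anticipated failures."""
    try:
        return thecall()
    except (ValueError, IOError):
        # only the failures this task is expected to produce
        return None
    # Anything else -- including a "frame is waiting for a
    # recursion" error -- now propagates and becomes visible
    # instead of being silently eaten by a bare except.
```

With this shape, the exception thrown by the continuation
machinery would surface at the caller and point you at the
offending frame.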

I can imagine that I wind up in a situation where I cannot
run the current frame, but I don't handle the above situation
correctly, causing refcount problems.

Please be aware that you cause an interpreter recursion on
every import, every execfile, and every __xxx__ method call
on an instance. These are the (hard to solve) remaining
stackful troubles of Stackless :-)

cheers - chris

-- 
Christian Tismer             :^)   <mailto:tismer at appliedbiometrics.com>
Applied Biometrics GmbH      :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com



