Python or OS forking/threading problem?

nsiam at yahoo.com nsiam at yahoo.com
Sat Mar 18 01:41:30 CET 2000


Running the Python program below results in either:
i) Python segfaulting (in the parent process; see below), or
ii) one or more processes hogging all of the CPU.
I have tried this on different setups:
a) Uni-processor Linux RH6.0 running kernel 2.2.9
b) Dual-processor Linux RH6.0 running kernel 2.2.5-15smp
Both running Python 1.5.2
On setup a) it runs for some time and then fails.
Setup b) fails almost immediately.
I know there are issues with forking from a thread (I assume
pthreads are used in the implementation), but as far as I can
see they should only affect the child process (deadlocks, etc.,
if the child process inherits locks from the parent). The parent
process should not be affected by this.
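For reference, the deadlock concern can be sketched in a few lines (modern Python shown, not 1.5.2). Any lock held by another thread at fork time is copied into the child in its locked state, so the child should restrict itself to simple system calls and leave via os._exit rather than touch interpreter machinery:

```python
import os
import threading

def fork_from_thread():
    """Fork from a worker thread; the child does minimal work and
    exits via os._exit, never re-entering the threading machinery."""
    pid = os.fork()
    if pid == 0:
        # Child: locks held by other threads at fork time were copied
        # in a locked state, so stick to raw system calls only.
        os.write(1, b"child alive\n")
        os._exit(0)
    # Parent: reap the child and return its wait status.
    _, status = os.waitpid(pid, 0)
    return status

t = threading.Thread(target=fork_from_thread)
t.start()
t.join()
```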
I would appreciate any suggestions about what is wrong here and
what I can do to fix it. I have a fairly large multi-threaded
application that calls some backend scripts, and my Zope server
crashes once in a while because of this problem.
Since I don't really need to keep running Python in the child
process (I only want to exec), I am going to implement my own
fork/exec function in C. What I hope to achieve with that is to
avoid the Python VM locks. I suspect they have something to do
with this, because when my parent process segfaults it happens
in PyThread_acquire_lock (I don't remember the exact name of
the function).
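Before dropping to C, the same fork-then-exec-only pattern can be sketched in Python itself (a minimal sketch: the helper name fork_exec and the /bin/echo path are illustrative assumptions). The child calls os.execv immediately after the fork, so it never returns to the interpreter and cannot deadlock on an inherited lock:

```python
import os

def fork_exec(path, args):
    """Hypothetical helper: fork and immediately exec in the child,
    so no Python code runs in the child after the fork."""
    pid = os.fork()
    if pid == 0:
        try:
            os.execv(path, args)
        finally:
            # execv only returns on failure; bail out without any
            # interpreter cleanup, which could touch inherited locks.
            os._exit(127)
    return pid

pid = fork_exec("/bin/echo", ["/bin/echo", "hello mom"])
_, status = os.waitpid(pid, 0)
```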
Thanks
--- THE EVIL PROGRAM ---

import threading
import os

class MyThread(threading.Thread):
    def run(self):
        for i in range(100):
            self.once()

    def once(self):
        pid = os.fork()
        if pid == 0:
            print " hello mom"
            os._exit(0)
        pid2, sts = os.waitpid(pid, 0)
        print "bye baby"

threads = []
for i in range(10):
    threads.append(MyThread())
map(lambda x: x.start(), threads)

import time
time.sleep(1000)
print "DONE"




