Python Threading and sys.exit() codes

Bob Dickenson bdickenson at serviceware.com
Sat Apr 1 10:56:50 EST 2000


These are good suggestions.  A couple of questions/comments.

I thought about using the Queue model, but I was experimenting with MxStack
at the time.  It seemed to work fine, so...

I think I need the global lock to get proper thread sync because I have to
pop values off two stacks for each "job" - the file to be ftp'd and its
time stamp on the source machine (for resetting the local time stamp after
the ftp.retrbinary so that the next time the job runs it does valid date
compares).  If I'm missing something with this logic, I'd appreciate a
clarification.  (I suppose I could compose a tuple and push/pop that
instead, but otherwise....)
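
Roughly what I mean by composing a tuple (using a Queue instead of the two
stacks; the names here are made up, not my real code):

import Queue

jobqueue = Queue.Queue(0)    # one queue of (filename, mtime) tuples

def worker(jobqueue):
    while 1:
        # a single get() hands back both values together, so no global
        # lock is needed to keep a file paired with its time stamp
        filename, remote_mtime = jobqueue.get()
        # ftp.retrbinary the file here, then reset the local time stamp:
        # os.utime(filename, (remote_mtime, remote_mtime))

# the producer side would just do jobqueue.put((filename, remote_mtime))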

I have read the thread doc and know that the sys.exit() in the FTPthread
class def is bogus on my part--part of the "prettifying" the code needs (mea
culpa).  

It's the sys.exit() (explicit or implied) from the __main__ thread, after the
iterative thread.join of the ftp worker threads finishes, that seems to be
causing me problems.  The process isn't "finishing" from the perspective of
the job scheduler.  The single-threaded version surfaces an exit code
(implied or explicit) to the OS; the multi-threaded version does not surface
anything.  The SMTP message gets sent (the last item before the sys.exit() in
__main__) AND the command shell window closes, but something seems to be at
loose ends on the process termination cleanup side.
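
For reference, the overall shape of __main__ is roughly this (paraphrased
with stub names, not the actual code):

import sys
import threading

def ftp_worker():
    # stand-in for the real ftp job loop
    pass

def main():
    workers = []
    for i in range(12):
        t = threading.Thread(target=ftp_worker)
        t.start()
        workers.append(t)
    for t in workers:    # the iterative join of the ftp worker threads
        t.join()
    # SMTP notification goes here, then hand an exit code to the scheduler
    sys.exit(0)

if __name__ == '__main__':
    main()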

(BTW - the single-threaded version of this program had about 60 files per
minute throughput on the ftp; with 12 worker threads I'm getting between
800 and 1000 files per minute.  The flux rate on the target site is several
thousand files per day, so it's really worth it to use the multi-threaded
version.)

Bob
"For a list of the ways which technology has failed to improve our
quality of life, press 3"



-----Original Message-----
From: David Fisher [mailto:python at rose164.wuh.wustl.edu]
Sent: Friday, March 31, 2000 11:52 PM
To: dickenson at serviceware.com
Cc: python-list at python.org
Subject: Re: Python Threading and sys.exit() codes


----- Original Message -----
From: <dickenson at serviceware.com>
Newsgroups: comp.lang.python
To: <python-list at python.org>
Sent: Friday, March 31, 2000 1:26 PM
Subject: Python Threading and sys.exit() codes

> My problem is this: I want to run this program as a scheduled task
> under a job scheduler (currently testing cronDsys from #ifdef). The
> single-threaded version of the program posts an appropriate code to the
> environment via sys.exit(). The job scheduler traps this and takes
> conditional action. So far, so good.
> In the multi-threaded version of the program, the exit codes are not
> getting posted and the main process is not terminating cleanly, at
> least from the perspective of the job scheduler -- it appears to run
> until manually terminated from the scheduler control console although
> the NT command window disappears from the target desktop as expected.

Well, for one thing, sys.exit() doesn't do what you think it does.
sys.exit() raises a SystemExit exception which, if not caught, will cause the
calling thread to exit silently.  Read the docs for the thread module, which
threading is built on.  I don't know of a way to catch an exception that
another thread raises.  Doesn't mean there isn't one, just that I don't know
one.
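
Here's a tiny illustration of what I mean (just a sketch; the exit code 3 is
made up):

import sys
import threading

def worker():
    sys.exit(3)    # raises SystemExit; only this thread ends, silently
    print 'never reached'

t = threading.Thread(target=worker)
t.start()
t.join()
print 'main thread still running'    # the 3 never makes it to the OS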

Another thing you could look at is the Queue module.  That's what got my
attention, because you are setting a global lock and popping stuff off of a
stack, and Queue does all that for you.  Here's some code:

import Queue
import thread    # yes, I know I'm supposed to use threading

def worker(jobqueue):
    while 1:
        job = jobqueue.get()    # this blocks until a job is available
        print job    # or whatever

def main(joblist):
    jq = Queue.Queue(0)    # 0 means no limit on the queue's size
    for i in range(10):
        thread.start_new_thread(worker, (jq,))
    for job in joblist:
        jq.put(job)
    # a real version would wait here for the jobs to finish, since the
    # workers are killed as soon as the main thread exits

There are lots of things that could be prettified in this code, but I hope
it gives you some ideas.  You might try a second Queue to gather return
information.
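Something like this, off the top of my head (untested, and the 'ok' status is
just a placeholder for whatever you want to report back):

import Queue
import thread

def worker(jobqueue, resultqueue):
    while 1:
        job = jobqueue.get()
        # do the ftp work here, then report back to the main thread
        resultqueue.put((job, 'ok'))

def main(joblist):
    jq = Queue.Queue(0)
    rq = Queue.Queue(0)
    for i in range(10):
        thread.start_new_thread(worker, (jq, rq))
    for job in joblist:
        jq.put(job)
    for i in range(len(joblist)):
        job, status = rq.get()    # blocks until every job has reported back
        print job, status

Waiting on the result queue also keeps the main thread alive until all the
work is done.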

Good luck,
David



