Python Threading and sys.exit() codes

David Fisher python at rose164.wuh.wustl.edu
Sat Apr 1 20:40:16 CEST 2000


----- Original Message -----
From: "Bob Dickenson" <bdickenson at serviceware.com>
To: "'David Fisher'" <python at rose164.wuh.wustl.edu>; "Bob Dickenson"
<bdickenson at serviceware.com>
Cc: <python-list at python.org>
Sent: Saturday, April 01, 2000 9:56 AM
Subject: RE: Python Threading and sys.exit() codes


> These are good suggestions.  A couple of questions/comments.

You got lucky, normally my suggestions are pretty lame. :)

>
> I thought about using the Queue model, but I was experimenting with
> MxStack at the time.  It seemed to work fine, so...
>
> I think I need the global lock to get proper thread sync because I have to
> pop values off two stacks for each "job" - the file to be ftp'd and its
> time stamp on the source machine (for resetting it locally after the
> ftp.retrbinary so that the next time the job runs it does valid date
> compares.)  If I'm missing something with this logic, I'd appreciate a
> clarification.  (I suppose I could compose a tuple and push/pop that, but
> otherwise....)
>
>

Yes, that was exactly what I was thinking of, pushing a tuple or list.  Hey,
if you've already done the work and it's doing the job, don't mess with it.
I just think Queues are cool.  I'll attach a program that was inspired by
your first post.
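Here's a minimal sketch of what I mean by pushing a tuple.  It uses today's
module names (`queue`, `threading`; at the time they were `Queue` and
`thread`), and the filenames and timestamps are made up for illustration.
Because each get() hands the worker the whole (filename, timestamp) pair in
one atomic step, you don't need a separate global lock to keep the two
values in sync:

```python
import queue      # was the Queue module back then
import threading

jobq = queue.Queue()
done = []

def worker():
    # Each get() returns one complete (filename, timestamp) job.
    while 1:
        item = jobq.get()
        if item is None:          # sentinel: no more jobs
            break
        fname, mtime = item
        # ... ftp.retrbinary() the file and reset mtime locally here ...
        done.append((fname, mtime))
        jobq.task_done()

# hypothetical job data, just for the example
jobq.put(('report.txt', 954950000))
jobq.put(('prices.csv', 954950100))

t = threading.Thread(target=worker)
t.start()
jobq.join()        # block until both jobs are marked done
jobq.put(None)     # tell the worker to quit
t.join()
print(done)
```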

>
> I have read the thread doc and know that the sys.exit() in the FTPthread
> class def is bogus on my part--part of the "prettifying" the code needs
> (mea culpa).
>
> It's the sys.exit() (explicit or implied) from the __main__ thread after
> the iterative thread.join of the ftp worker threads finishes that seems
> to be causing me problems.  The process isn't "finishing" from the
> perspective of the job scheduler.  The single-threaded version surfaces
> both an implied or explicit exit code to the OS, the multi-threaded
> version does not surface anything.  The SMTP message gets sent (last item
> before the sys.exit() in __main__) AND the command shell window closes,
> but something seems to be at loose ends on the process termination
> cleanup side.

Well, one thing you might try is to call setDaemon(1) for the worker threads
before start().  In threading.py sys.exitfunc() is overridden for the main
thread.  sys.exitfunc() is called by the interpreter for cleanup after a
sys.exit().  It sounds like your program is hanging inside there.  Setting
all the threads to daemonic should bypass the logic in the new
sys.exitfunc().  I'm still a little confused though.  The only thing
sys.exitfunc() seems to do is join() all the non-daemon threads, and
you've already done that, so it shouldn't hang.  But it's worth a try.  If
it were me, I would hack threading.py to not override sys.exitfunc() to see
if that fixed the problem.
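To make that concrete, here's a small sketch (written against today's
threading module, where the attribute is spelled `daemon`; `setDaemon(1)`
was the call of the day).  The worker loops forever, but because it is
marked daemonic before start(), threading's exit handler won't try to
join() it, and the process can terminate cleanly:

```python
import threading
import time

def worker():
    # Long-running worker that never returns; because the thread is
    # daemonic, it won't keep the interpreter alive at exit.
    while 1:
        time.sleep(0.1)

t = threading.Thread(target=worker)
t.daemon = True   # must be set BEFORE start(); was t.setDaemon(1) back then
t.start()
# threading's exit cleanup only join()s the non-daemon threads, so a
# sys.exit() in the main thread doesn't block on worker().
print(t.daemon, t.is_alive())
```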

>
> (BTW - the single threaded version of this program had about 60 files per
> minute throughput on the ftp; with 12 worker threads I'm getting between
> 800-1000 files per minute.  The flux rate on the target site is several
> thousand files per day, so it's really worth it to use the multi-threaded
> version)

Yeah, threads kick butt.
After your first post, I got to thinkin' about Queue and threads and wrote
this program to scan the internet randomly for NNTP servers that allow
public access.  So thanks, I got some inspiration from you.

Good luck,
David

from nntplib import NNTP
from Queue import Queue
import thread
import socket     # needed by randurl's gethostbyaddr call
import whrandom
##import daemonize

def checkserv(servq,repq):
    # Worker: pull candidate hosts off servq, report results on repq.
    while 1:
        serv = servq.get()
        try:
            nn = NNTP(serv)
        except:
            repq.put((serv,''))
        else:
            nn.quit()
            repq.put((serv,'connect'))

def randurl(servq):
    # Build a random dotted-quad IP and queue its hostname (or the raw
    # IP if the reverse lookup fails).
    rip = str(whrandom.randint(0,255))
    for i in range(3):
        rip = rip + '.' + str(whrandom.randint(0,255))
    try:
        rurl = socket.gethostbyaddr(rip)[0]
    except:
        servq.put(rip)
    else:
        servq.put(rurl)

def main():
    N_THREADS = 100
##    daemonize.become_daemon()
    f = open('openserv.txt','w')
    servq = Queue(0)
    repq = Queue(0)
    for i in range(N_THREADS * 2):
        thread.start_new_thread(randurl,(servq,))
    for i in range(N_THREADS):
        thread.start_new_thread(checkserv,(servq,repq))
    while 1:
        resp = repq.get()
        if resp[1]:
            f.write(resp[0] + '\n')
            f.flush()
        # feed another random candidate in for each one we consumed
        thread.start_new_thread(randurl,(servq,))

if __name__ == '__main__':
    main()
