os.system(), HTTPServer, and finishing HTTP requests

Erik Johnson ej
Mon Aug 16 04:42:15 CEST 2004


  I am trying to spawn a daemon-type program to go off on its own and do
some work (asynchronously) from within an HTTPServer, but I am running into
a problem where the web browser (actually a Perl LWP program) still seems to
be waiting for more input until the child that is forked in the server
finishes (grandchild, actually). I don't understand this. I need to be able
to have the POST to the server complete and go on even though there is a
(grand)child process still running.

    So... that's the short summary; more specific code follows:

   worker.py is a program that does the standard double-fork paradigm,
allowing the original parent to return and leaving an orphaned child behind
to do its own thing. If you call this program directly from the shell, it
returns you to your prompt immediately, which is just what I want and
expect:

#! /usr/bin/python

import sys, os, time

t = 30

pid = os.fork()
if (pid == 0):
    # child: fork() again and exit so that the original
    # parent is not left waiting on us.
    pid = os.fork()
    if (pid > 0):
        # child
        print "child (PID %d): forked grandchild %d. I'm exiting." %\
            (os.getpid(), pid)
        sys.exit() # exit so the original parent's wait() returns

    # grandchild continues here, drops out bottom of if-block

elif (pid > 0):
    # parent: exit so caller can continue
    p_pid = os.getpid()
    print "parent (PID %d): forked child %d. ." % (p_pid, pid)
    (c_pid, status) = os.wait()
    print "parent (PID %d): child %d is done. I'm exiting." % (p_pid, c_pid)
    sys.exit() # normal exit

# grandchild continues here - my worker process
g_pid = os.getpid()
s = "grandchild (PID %d): going to sleep for %d seconds..." % (g_pid, t)
print s

# pretend to do some work
time.sleep(t)

# done
print "grandchild (PID %d) waking up... exiting." % g_pid

Here is sample output from running worker.py:
ej at sand:~/src/python/problem> worker.py
grandchild (PID 9697): going to sleep for 30 seconds...
child (PID 9696): forked grandchild 9697. I'm exiting.
parent (PID 9695): forked child 9696. .
parent (PID 9695): child 9696 is done. I'm exiting.
ej at sand:~/src/python/problem>

and then about 30 seconds later, well after getting my shell back, my
console shows:

grandchild (PID 9697) waking up... exiting.

This is all fine and dandy.  One step removed from that is to call this
program from another Python program via os.system(). It too exits
immediately and returns me to the shell even though my (grand)child process
is still running. Again, this is just what I want and expect. Here is the
code for that program:

#! /usr/bin/python

import os, sys

PROG_NAME = sys.argv[0]

print PROG_NAME + ": calling worker.py..."
os.system("./worker.py")
print PROG_NAME + ": all done. exiting."

Here is the output from running it (again, all fine and dandy):

ej at sand:~/src/python/problem> call_worker.py
./call_worker.py: calling worker.py...
grandchild (PID 9710): going to sleep for 30 seconds...
child (PID 9709): forked grandchild 9710. I'm exiting.
parent (PID 9708): forked child 9709. .
parent (PID 9708): child 9709 is done. I'm exiting.
./call_worker.py: all done. exiting.
ej at sand:~/src/python/problem>

and about 30 seconds later, the again-active shell window prints:

grandchild (PID 9710) waking up... exiting.

     But if I make this same os.system() call from within my own HTTPServer,
the browser that is making the request is left hanging until that sleeping
grandchild is done. It's as if there were still a socket connection between
it and my browser?!? But the grandchild is not forked from the server - it's
started via an os.system() call! It should not be inheriting any file
descriptors and such, right? Even though do_GET() has returned - in fact,
you can see in the shell that started the server that the server process
has already exited - the browser is apparently still waiting for the web
page to finish loading.
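(One thing I plan to test: maybe os.system()'s fork()+exec child does
inherit the accepted socket after all - exec'd children keep all open
descriptors by default - and it stays open in my grandchild. If so, having
the grandchild close its inherited descriptors right after the second
fork() ought to release the browser. A sketch of what I'd add to worker.py;
the function name and the 1024 ceiling are my own guesses:)

```python
import os

def close_inherited_fds(maxfd=1024):
    # Close everything above stderr first (the server's accepted
    # socket, if we inherited one, is somewhere up here) ...
    for fd in range(3, maxfd):
        try:
            os.close(fd)
        except OSError:
            pass  # fd was not open
    # ... then point stdin/stdout/stderr at /dev/null so nothing
    # downstream blocks on, or writes to, the old descriptors.
    devnull = os.open("/dev/null", os.O_RDWR)
    os.dup2(devnull, 0)
    os.dup2(devnull, 1)
    os.dup2(devnull, 2)
    if devnull > 2:
        os.close(devnull)
```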

    I don't get it!  I have a long task I need to instantiate in my Python
server based on info in the HTTP POST, and the program that is making that
POST needs to be able to go on and do other stuff before my task is over.
Why am I not getting a clean exit in my POSTing process? What do I need to
do to get my HTTP request to my server to finish essentially "immediately"?

Thanks for taking the time to read my post. Any help greatly appreciated!
:) -ej

    Below is the code for a simple server you can run. It is currently coded
to listen on port 5238 and only serve one request before the server exits.
It serves a little static chunk of HTML on a GET request regardless of what
PATH you call (I talked about POST above, but the browser-hanging behaviour
is the same).

#! /usr/bin/python


from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
import os, sys

SERVER_PORT = 5238    # arbitrary port not in use (open on firewall?)
WORKER = 'worker.py'    # name of the program to call via os.system()
PROG = sys.argv[0]           # name of this program

class RequestHandler(BaseHTTPRequestHandler):

    def do_GET(self):

        # call my "daemon" to get some stuff done in the background
        print "%s: about to call os.system(%s)" % (PROG, WORKER)
        os.system("./" + WORKER)
        print "%s: back from os.system(%s)" % (PROG, WORKER)

        # send HTTP headers
        self.send_response(200)
        self.send_header("Content-type", 'text/html')
        self.end_headers()

        # send a little static chunk of HTML
        html = """\
<html>
  <body><h2>All done!</h2></body>
</html>
"""
        self.wfile.write(html)


# BEGIN main
if (__name__ == '__main__'):

    # create a new HTTP server and handle a single request, then exit
    server = HTTPServer(('', SERVER_PORT), RequestHandler)
    server.handle_request()
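(And if it does turn out to be descriptor inheritance, another thing I may
try - assuming the new-in-2.4 subprocess module is available here - is
launching the worker with close_fds=True, so the child never sees the
server's socket in the first place. A sketch, with a stand-in command in
place of ./worker.py:)

```python
import subprocess, sys

# Stand-in command for './worker.py'; any program works for the sketch.
cmd = [sys.executable, "-c", "print('worker ran')"]

# close_fds=True closes every descriptor above stdin/stdout/stderr in
# the child before it execs, so an accepted HTTP socket would not be
# passed down to the worker at all.
proc = subprocess.Popen(cmd, close_fds=True)
rc = proc.wait()  # worker.py's own double-fork would return quickly anyway
```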
