Fetching the stdout & stderr as it flows from a unix command.

Donn Cave donn at u.washington.edu
Fri Jun 4 22:04:44 CEST 2004

In article <c9qi42$908$1 at info4.fnal.gov>,
 Matt Leslie <matthewleslie at hotmail.com> wrote:

> Hans Deragon wrote:
> > Greetings.
> > 
> > 
> > I am performing:
> > 
> >   commands.getstatusoutput("rsync <params>");
> > 
> > rsync takes a long time and prints out a steady stream of lines
> > showing which file it is currently working on.
> > 
> > Unfortunately, commands.getstatusoutput() fetches everything and
> > there is no way to ask it to copy the output of the command to
> > stdout/stderr.  Thus when rsync takes an hour, I have no clue where it
> > is at.
> > 
> > Does anybody have a suggestion for how to let stdout/stderr print to
> > the console when calling a unix command?
> > 
> Perhaps you could use the popen2 module. popen2.popen3() returns file 
> objects for the child's stdin, stdout and stderr. You could read from 
> these and print the output to the screen while the process runs.

Carefully, though.  If there's no particular reason to keep
the two streams distinct, then it might be a good idea to
merge them, so there is only one file to read.
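[Editor's note: a minimal sketch of the merged-stream approach, using the
subprocess module that has since replaced popen2 and commands; the
"sh -c ..." command is a stand-in for the rsync invocation in the
original post.]

```python
import subprocess

# Stand-in for the long-running rsync command: writes one line to
# stdout and one to stderr.
cmd = ["sh", "-c", "echo out; echo err 1>&2"]

# stderr=subprocess.STDOUT merges the two streams, so there is only
# one file to read and no risk of deadlocking on a full pipe.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)
lines = []
for line in proc.stdout:           # lines arrive as the child produces them
    print(line, end="")            # echo progress to the console
    lines.append(line.rstrip("\n"))
status = proc.wait()
```

Both streams come back interleaved on proc.stdout, so a plain for-loop
over the file is all the bookkeeping required.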

Otherwise, to read two files in parallel, see select.select.

With select, I would not use the file objects that Popen3 creates,
but rather the pipe file descriptors directly (the integer returned
by the file object's fileno() method, which can be used with core
UNIX/POSIX I/O functions like os.read).  File objects are buffered,
which makes select more or less useless; you can turn off buffering,
but that makes functions like readline() horribly inefficient.  It
isn't very hard to live without the file object here.
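[Editor's note: a sketch of reading both pipes in parallel with
select.select and os.read on the raw descriptors, as described above;
subprocess stands in for Popen3, and "sh -c ..." for rsync.]

```python
import os
import select
import subprocess

# Stand-in child process with distinct stdout and stderr output.
cmd = ["sh", "-c", "echo out; echo err 1>&2"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)

# Work with the integer pipe descriptors, not the buffered file objects.
fds = {proc.stdout.fileno(): [], proc.stderr.fileno(): []}
open_fds = set(fds)
while open_fds:
    ready, _, _ = select.select(list(open_fds), [], [])
    for fd in ready:
        chunk = os.read(fd, 4096)   # unbuffered core POSIX read
        if chunk:
            fds[fd].append(chunk)
        else:                       # empty read means EOF on this pipe
            open_fds.discard(fd)
proc.wait()
out = b"".join(fds[proc.stdout.fileno()])
err = b"".join(fds[proc.stderr.fileno()])
```

Because os.read never blocks past what the pipe actually holds, select
wakes up exactly when data is available on either stream.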

And then note that rsync may not deliver its output as promptly
when talking to a pipe.  That can be solved too, but it's
usually not worth the trouble.
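[Editor's note: the usual fix for that buffering behaviour is to give
the child a pseudo-terminal, so programs that line-buffer only on a tty
flush promptly.  A sketch with the pty module, assuming a Unix platform;
"echo hello" stands in for rsync.]

```python
import os
import pty
import subprocess

# Allocate a pseudo-terminal pair; the child writes to the slave end,
# the parent reads from the master end.
master, slave = pty.openpty()
cmd = ["sh", "-c", "echo hello"]
proc = subprocess.Popen(cmd, stdout=slave, stderr=slave)
os.close(slave)                    # parent keeps only the master end

chunks = []
while True:
    try:
        data = os.read(master, 4096)
    except OSError:                # EIO on Linux once the slave closes
        break
    if not data:
        break
    chunks.append(data)
proc.wait()
os.close(master)
output = b"".join(chunks)
```

A terminal translates newlines to CR/LF, so the parent sees "\r\n"
line endings; that is part of the trouble mentioned above.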

   Donn Cave, donn at u.washington.edu
