[issue10482] subprocess and deadlock avoidance

Glenn Linderman report at bugs.python.org
Wed Nov 24 08:56:21 CET 2010


Glenn Linderman <v+python at g.nevcal.com> added the comment:

So I've experimented a bit, and it looks like simply exposing ._readerthread as an external API would handle the buffered case for stdout or stderr.  For http.server CGI scripts, I think it is fine to buffer stderr, since it should not be a high-volume channel... but not both stderr and stdout, since stdout can be huge.  And not stdin, which can be huge as well.
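
As a rough sketch of what I mean (my own illustration, not code from subprocess; drain_stderr and 'some-cgi-script' are made-up names), a background reader thread drains stderr into a buffer while the main thread streams stdout, so neither pipe can fill up and deadlock the child:

    import subprocess, sys, threading

    def drain_stderr(pipe, chunks):
        # Read stderr to EOF in the background so the child never blocks
        # on a full stderr pipe while we consume stdout in the caller.
        for line in pipe:
            chunks.append(line)
        pipe.close()

    proc = subprocess.Popen(['some-cgi-script'],          # hypothetical command
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    stderr_chunks = []
    t = threading.Thread(target=drain_stderr,
                         args=(proc.stderr, stderr_chunks))
    t.start()
    # Stream stdout in chunks instead of buffering it all in memory.
    for block in iter(lambda: proc.stdout.read(8192), b''):
        sys.stdout.buffer.write(block)
    t.join()
    proc.wait()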

For stdin, something like the following might work nicely for some cases, including http.server (with revisions):

    def _writerthread(self, fhr, fhw, length):
        # Copy up to `length` bytes from fhr to the child's pipe fhw in
        # chunks, then close the pipe so the child sees EOF.
        while length > 0:
            buf = fhr.read(min(8192, length))
            if not buf:
                break  # premature EOF on the source; stop rather than spin
            fhw.write(buf)
            length -= len(buf)
        fhw.close()

When the stdin data is already buffered in memory, but the application wishes to be stdout-centric rather than stdin-centric (like the current ._communicate code), a variation could replace fhr with a data buffer and write it gradually (or all at once) to the pipe, but from a secondary thread.
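
As a sketch of that variation (again my own illustration; feed_stdin and 'some-cgi-script' are made-up names, and the chunk size is arbitrary), the writer thread walks an in-memory buffer instead of reading from a file handle:

    import subprocess, threading

    def feed_stdin(pipe, data, chunk=8192):
        # Write the buffered request body to the child in pieces from a
        # secondary thread, leaving the main thread free to read stdout.
        view = memoryview(data)
        for start in range(0, len(view), chunk):
            pipe.write(view[start:start + chunk])
        pipe.close()  # signal EOF to the child

    proc = subprocess.Popen(['some-cgi-script'],          # hypothetical command
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    threading.Thread(target=feed_stdin,
                     args=(proc.stdin, b'POST body bytes...')).start()
    output = proc.stdout.read()   # stdout-centric: read in the main thread
    proc.wait()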

Happily, this sort of code (the above is extracted from a test version of http.server) can be implemented in the server, but would be more usefully provided by subprocess, in my opinion.

To include the above code inside subprocess would just be a matter of tweaking it to reference class members instead of parameters.
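
To make that concrete (a sketch only; _stdin_source and _stdin_length are hypothetical attributes, not real Popen members), the same loop written against instance state rather than parameters might look like:

    import subprocess

    class PopenWithWriter(subprocess.Popen):
        def _writerthread(self):
            # Same copy loop, but the handles and length come from the
            # instance instead of being passed in as parameters.
            remaining = self._stdin_length
            while remaining > 0:
                buf = self._stdin_source.read(min(8192, remaining))
                if not buf:
                    break
                self.stdin.write(buf)
                remaining -= len(buf)
            self.stdin.close()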

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue10482>
_______________________________________


More information about the Python-bugs-list mailing list