Python deadlock using subprocess.popen and communicate
Atherun
atherun at gmail.com
Fri Sep 23 13:59:27 EDT 2011
On Sep 23, 10:47 am, Nobody <nob... at nowhere.com> wrote:
> On Fri, 23 Sep 2011 06:59:12 +0100, Nobody wrote:
> >> kernel32.dll!WaitForSingleObject+0x12
> >> python26.dll!_Py_svnversion+0xcf8
>
> > I haven't a clue how this happens. _Py_svnversion just returns a string:
>
> In retrospect, I think that's a red herring. 0xcf8 seems like too large an
> offset for such a small function. I think that it's more likely to be in a
> non-exported function, and _Py_svnversion just happens to be the last
> exported symbol prior to that point in the code.
I have the call stacks for each Python thread running up until the
deadlock:
# ThreadID: 992
    out, err = proc.communicate("change: new\ndescription: %s\n"%changelistDesc)
File: "c:\src\extern\python\lib\subprocess.py", line 689, in communicate
    return self._communicate(input)
File: "c:\src\extern\python\lib\subprocess.py", line 903, in _communicate
    stdout_thread.join()
File: "c:\src\extern\python\lib\threading.py", line 637, in join
    self.__block.wait()
File: "c:\src\extern\python\lib\threading.py", line 237, in wait
    waiter.acquire()
# ThreadID: 5516
File: "c:\src\extern\python\lib\threading.py", line 497, in __bootstrap
    self.__bootstrap_inner()
File: "c:\src\extern\python\lib\threading.py", line 525, in __bootstrap_inner
    self.run()
File: "c:\src\extern\python\lib\threading.py", line 477, in run
    self.__target(*self.__args, **self.__kwargs)
File: "c:\src\extern\python\lib\subprocess.py", line 877, in _readerthread
    buffer.append(fh.read())
# ThreadID: 2668
File: "c:\src\extern\python\lib\threading.py", line 497, in __bootstrap
    self.__bootstrap_inner()
File: "c:\src\extern\python\lib\threading.py", line 525, in __bootstrap_inner
    self.run()
File: "c:\src\scripts\auto\Autobuilder\StackTracer.py", line 69, in run
    self.stacktraces()
File: "c:\src\scripts\auto\Autobuilder\StackTracer.py", line 86, in stacktraces
    fout.write(stacktraces())
File: "c:\src\scripts\auto\Autobuilder\StackTracer.py", line 26, in stacktraces
    for filename, lineno, name, line in traceback.extract_stack(stack):
# ThreadID: 3248
    out, err = proc.communicate("change: new\ndescription: %s\n"%changelistDesc)
File: "c:\src\extern\python\lib\subprocess.py", line 689, in communicate
    return self._communicate(input)
File: "c:\src\extern\python\lib\subprocess.py", line 903, in _communicate
    stdout_thread.join()
File: "c:\src\extern\python\lib\threading.py", line 637, in join
    self.__block.wait()
File: "c:\src\extern\python\lib\threading.py", line 237, in wait
    waiter.acquire()
# ThreadID: 7700
File: "c:\src\extern\python\lib\threading.py", line 497, in __bootstrap
    self.__bootstrap_inner()
File: "c:\src\extern\python\lib\threading.py", line 525, in __bootstrap_inner
    self.run()
File: "c:\src\extern\python\lib\threading.py", line 477, in run
    self.__target(*self.__args, **self.__kwargs)
File: "c:\src\extern\python\lib\subprocess.py", line 877, in _readerthread
    buffer.append(fh.read())
# ThreadID: 8020
    out, err = proc.communicate("change: new\ndescription: %s\n"%changelistDesc)
File: "c:\src\extern\python\lib\subprocess.py", line 689, in communicate
    return self._communicate(input)
File: "c:\src\extern\python\lib\subprocess.py", line 903, in _communicate
    stdout_thread.join()
File: "c:\src\extern\python\lib\threading.py", line 637, in join
    self.__block.wait()
File: "c:\src\extern\python\lib\threading.py", line 237, in wait
    waiter.acquire()
# ThreadID: 4252
File: "c:\src\extern\python\lib\threading.py", line 497, in __bootstrap
    self.__bootstrap_inner()
File: "c:\src\extern\python\lib\threading.py", line 525, in __bootstrap_inner
    self.run()
File: "c:\src\extern\python\lib\threading.py", line 477, in run
    self.__target(*self.__args, **self.__kwargs)
File: "c:\src\extern\python\lib\subprocess.py", line 877, in _readerthread
    buffer.append(fh.read())
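For reference, this is roughly the pattern each worker thread runs
(simplified; the real command line is assembled elsewhere in the build
script, so treat the "p4 change -i" arguments below as a stand-in):

import subprocess
import threading

def create_changelist(changelistDesc):
    # Stand-in for the real call site: the child reads a changelist
    # spec from stdin and we capture its stdout/stderr over pipes.
    proc = subprocess.Popen(["p4", "change", "-i"],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate("change: new\ndescription: %s\n" % changelistDesc)
    return out, err

# Three of these run at once, one per build task.
workers = [threading.Thread(target=create_changelist, args=("task %d" % i,))
           for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()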
The StackTracer thread freezes trying to update my output file, and
yes, I'm running 3 tasks in parallel, each of which starts by creating
a changelist in Perforce. This is just an easy repro case for me; it
happens with commands other than p4. This looks more like a threading
issue than an output deadlock.
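One thing I'm going to try, purely as a guess, is serializing just the
Popen() construction with a lock, in case a pipe handle from one child
is being inherited by a sibling spawned at the same moment (which
would keep the reader thread's fh.read() from ever seeing EOF):

import subprocess
import threading

_popen_lock = threading.Lock()

def locked_popen(*args, **kwargs):
    # Guess at a workaround: only process creation is serialized, so one
    # child's pipe handles can't leak into a sibling spawned at the same
    # time; communicate() still runs in parallel in each worker thread.
    with _popen_lock:
        return subprocess.Popen(*args, **kwargs)

If that makes the hang go away, it at least narrows it down to process
creation rather than communicate() itself.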