[New-bugs-announce] [issue14404] multiprocessing with maxtasksperchild: bug in state machine?
report at bugs.python.org
Sun Mar 25 14:12:24 CEST 2012
New submission from ranga <r_pybugs at curdrice.com>:
I asked this on Stack Overflow and learned from the discussion there that it might be a Python bug.
HOW TO REPRODUCE
This seemingly simple program doesn't work for me unless I remove the maxtasksperchild parameter. What am I doing wrong?
import os
from multiprocessing import Pool

def f(x):
    print "pid: ", os.getpid(), " got: ", x
    return [x, x+1]

def cb(r):
    print "got result: ", r

if __name__ == '__main__':
    pool = Pool(processes=1, maxtasksperchild=9)
    keys = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    result = pool.map_async(f, keys, chunksize=1, callback=cb)
    pool.close()
    pool.join()
When I run it, I get:
$ python doit.py
pid: 6409 got: 1
pid: 6409 got: 2
pid: 6409 got: 3
pid: 6409 got: 4
pid: 6409 got: 5
pid: 6409 got: 6
pid: 6409 got: 7
pid: 6409 got: 8
pid: 6409 got: 9
And then it hangs. That is, the replacement worker that should process the 10th element is never spawned.
In another terminal, I see:
$ ps -C python
PID TTY TIME CMD
6408 pts/11 00:00:00 python
6409 pts/11 00:00:00 python <defunct>
This is on Ubuntu 11.10 running Python 2.7.2+ (installed from the Ubuntu packages).
WHAT I THINK IS HAPPENING
This is based on skimming the code and turning on logging.
The call to pool.close() (which the docs say I should make before calling pool.join()) sets the flag pool._state to CLOSE. But Pool._handle_workers only starts replacement worker processes while that flag is RUN. So once close() has been called, the worker that exits after hitting maxtasksperchild is never replaced, and everything gets stuck.
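To make the race concrete, here is a toy simulation of that state machine (my own sketch, not the real multiprocessing code): with the RUN-only loop condition the maintenance loop exits while a task is still queued, whereas a condition that also keeps running while results are outstanding drains everything. The names `drain`, `state_after`, and `loop_condition` are hypothetical.

```python
RUN, CLOSE = 0, 1

def drain(tasks, state_after, loop_condition):
    """Simulate the worker-handler loop: each iteration 'spawns' a worker
    that handles one task; after `state_after` tasks, close() flips the state."""
    state = RUN
    done = 0
    while loop_condition(state, len(tasks) - done):
        if done < len(tasks):
            done += 1          # a (replacement) worker processes one task
        if done >= state_after:
            state = CLOSE      # pool.close() was called mid-run
        if done == len(tasks) and state == CLOSE:
            break
    return done

tasks = list(range(10))

# Buggy condition: maintain workers only while the state is RUN.
buggy = drain(tasks, state_after=9,
              loop_condition=lambda state, pending: state == RUN)

# Alternative condition: also keep going while work is still outstanding.
fixed = drain(tasks, state_after=9,
              loop_condition=lambda state, pending: state == RUN or pending > 0)

print("buggy handler completed:", buggy)   # stops at 9; the 10th task is stranded
print("fixed handler completed:", fixed)   # drains all 10
```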
One workaround for the bug is to sleep for about 10 seconds after the map_async call, before pool.close() is called. Then it works as it should, because pool._state doesn't get set to CLOSE until all the jobs have finished.
Sorry if I missed something, didn't RTFM etc.
components: Library (Lib)
title: multiprocessing with maxtasksperchild: bug in state machine?
versions: Python 2.7