multiprocessing deadlock
larudwer
larudwer at freenet.de
Sat Oct 24 06:37:53 EDT 2009
"Brian Quinlan" <brian at sweetapp.com> schrieb im Newsbeitrag
news:mailman.1895.1256264717.2807.python-list at python.org...
>
> Any ideas why this is happening?
>
> Cheers,
> Brian
IMHO your code is buggy. You are running into a typical race condition.
Consider the following part of your code:
> def _make_some_processes(q):
>     processes = []
>     for _ in range(10):
>         p = multiprocessing.Process(target=_process_worker, args=(q,))
>         p.start()
>         processes.append(p)
>     return processes
p.start() may start a process right now, in 5 seconds, or a week later,
depending on how the scheduler of your OS works.
Since all your processes are working on the same queue, it is -- very --
likely that the first process gets started, processes all the input and
finishes before the others have even been started. So your first
process exits, and your main process also exits, because the queue is empty
now ;).
> while not q.empty():
>     pass
If you were using p.join(), your main process would terminate when the last
worker process terminates!
That's a different exit condition!
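Roughly like this -- just a sketch, not your exact program; I'm assuming the
main process fills the queue itself before starting the workers, and the item
count is made up:

import multiprocessing
import queue


def _process_worker(q):
    while True:
        try:
            something = q.get(block=True, timeout=0.1)
        except queue.Empty:
            return
        else:
            print('Grabbed item from queue:', something)


def main():
    q = multiprocessing.Queue()
    for i in range(100):          # made-up workload
        q.put(i)

    processes = []
    for _ in range(10):
        p = multiprocessing.Process(target=_process_worker, args=(q,))
        p.start()
        processes.append(p)

    # Wait for every worker instead of polling q.empty(); the main
    # process, and with it the queue, now outlives all the children.
    for p in processes:
        p.join()


if __name__ == '__main__':
    main()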
When the main process terminates, all the garbage collection fun happens. I
hope you aren't surprised that your Queue and the underlying pipe get closed
and collected!
Well, now that all the work has been done, your OS may remember that someone,
sometime in the past, told it to start a process.
> def _process_worker(q):
>     while True:
>         try:
>             something = q.get(block=True, timeout=0.1)
>         except queue.Empty:
>             return
>         else:
>             print('Grabbed item from queue:', something)
The line
    something = q.get(block=True, timeout=0.1)
should cause some kind of runtime error because q has already been collected
by that time.
Depending on your luck and the OS, this bug may or may not be handled. Obviously
you are not lucky on OSX ;)
That's what I think happens.
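For what it's worth -- this is not from your code, just the usual way around
both problems -- I would not rely on q.empty() or a get() timeout at all, but
have the main process put one sentinel per worker on the queue and then join
the workers. A sketch (the None sentinel and the workload are my own choices):

import multiprocessing

_SENTINEL = None  # any marker the workers can test for works


def _process_worker(q):
    # Exit when the sentinel arrives instead of guessing via a timeout.
    while True:
        something = q.get()
        if something is _SENTINEL:
            return
        print('Grabbed item from queue:', something)


def main():
    q = multiprocessing.Queue()
    for i in range(100):          # made-up workload
        q.put(i)

    processes = []
    for _ in range(10):
        p = multiprocessing.Process(target=_process_worker, args=(q,))
        p.start()
        processes.append(p)

    for _ in processes:           # one sentinel per worker
        q.put(_SENTINEL)
    for p in processes:
        p.join()


if __name__ == '__main__':
    main()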