multiprocessing.Queue deadlock

Felix Schlesinger schlesin at cshl.edu
Thu Oct 8 15:59:53 EDT 2009


On Oct 8, 3:21 am, Dennis Lee Bieber <wlfr... at ix.netcom.com> wrote:
> On Wed, 7 Oct 2009 10:24:08 -0700 (PDT), Felix Schlesinger
> > A bunch of workers push an unknown number of results into a queue. The
> > main process needs to collect all those results.
>
> > What is the right way to implement that with multiprocessing? I tried
> > joining the workers and then reading everything available, but
> > obviously (see above) that does not seem to work.
>
>         The cleanest solution that I can think of is to have the processes
> return a special token which identifies WHICH process is terminating, so
> you can join just that one, and go back and continue looking for data
> from the others.

I implemented the lazy version of this, namely waiting until all
workers signal that they are done: the main process reads results
until it has seen the right number of 'done' tokens, and only then
joins all the workers. I think this is stable, but I am not an expert
on the issue; a sketch of the pattern is below.
The 'done' token is always the last item a worker puts on the queue,
i.e. its final call to queue.put(). Does that guarantee that the
worker will not block after 'done' has been read by the main process?

Felix


