Ending data exchange through multiprocessing pipe

Jesse Noller jnoller at gmail.com
Thu Apr 23 07:18:51 EDT 2009


On Thu, Apr 23, 2009 at 5:15 AM, Michal Chruszcz <mchruszcz at gmail.com> wrote:
> On Apr 22, 10:30 pm, Scott David Daniels <Scott.Dani... at Acm.Org>
> wrote:
>> Michal Chruszcz wrote:
>> > ... The first idea that came to my mind was using a queue. I've got
>> > many producers (all of the workers) and one consumer. It seems quite
>> > simple, but it isn't, at least for me. I presumed that each worker
>> > would put() its results to the queue and finally close() it, while
>> > the parent process would get() them as long as there is an active
>> > subprocess....
>>
>> Well, if the protocol for a worker is:
>>      <someloop>:
>>           <calculate>
>>           queue.put(result)
>>      queue.put(<worker_end_sentinel>)
>>      queue.close()
>>
>> Then you can keep count of how many have finished in the consumer.
>
> Yes, I could, but I don't like the idea of using a sentinel if I
> still need to close the queue. I mean, if I mark the queue closed or
> close a connection through a pipe, why do I still have to "mark" it
> closed using a sentinel? From my point of view it's a duplication.
> Thus I dare to say the multiprocessing module is missing something
> quite important.
>
> It is probably possible to track the pipe's state using a
> multiprocessing manager, thereby avoiding the duplicated state
> exchange, but I haven't tried it yet.
>
> Best regards,
> Michal Chruszcz

Using a sentinel or looping on the get/Empty pattern are both valid,
correct suggestions.
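
For the record, here is a minimal sketch of the sentinel-counting
approach. The worker() function and the squared-number "calculation"
are just placeholders for whatever your real workers do; the only
point is the protocol of one sentinel per producer:

    import multiprocessing

    SENTINEL = None   # per-worker "I'm finished" marker

    def worker(queue, items):
        # Stand-in for the real calculation.
        for item in items:
            queue.put(item * item)
        queue.put(SENTINEL)       # signal that this producer is done
        queue.close()

    if __name__ == '__main__':
        queue = multiprocessing.Queue()
        chunks = [range(0, 5), range(5, 10), range(10, 15)]
        procs = [multiprocessing.Process(target=worker, args=(queue, chunk))
                 for chunk in chunks]
        for p in procs:
            p.start()

        finished = 0
        results = []
        while finished < len(procs):
            item = queue.get()    # blocks until a result or sentinel arrives
            if item is SENTINEL:
                finished += 1     # one more producer has drained its output
            else:
                results.append(item)

        for p in procs:
            p.join()
        print(sorted(results))

The get/Empty variant is much the same loop: call queue.get() with a
timeout, catch the Empty exception (Queue.Empty on 2.x, queue.Empty on
3.x), and check whether any of the worker processes are still alive
before deciding you're done.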

If you think it's a bug, or you want a new feature, post it to
bugs.python.org, preferably with a patch. Add me to the nosy list, or
assign it to me if you can.

Jesse


