[Chicago] Best practices for this concurrent problem?

Daniel Griffin dgriff1 at gmail.com
Mon Sep 14 01:50:17 CEST 2009


I didn't even see the multiprocessing Pool class. I think that will work
perfectly. The big overhead with processes is that you need to
re-initialize certain things in each process, which is a lot of work. Since
Windows doesn't have fork, multiprocessing actually re-launches your program
and runs it up to the point where you create the process, which is pretty rough.

Dan

On Sun, Sep 13, 2009 at 6:24 PM, Pete <pfein at pobox.com> wrote:

> On Sep 13, 2009, at 1:51 PM, Daniel Griffin wrote:
>
>  processes - I don't think it's a good idea to make processes that are short
>> lived; it seems too expensive. This is even worse on Windows.
>>
>
> I don't know diddly about Windows, but process creation overhead on
> Linux is pretty minimal - the differences between threads and processes are
> slight.
>
>  async - I would have to re-factor to make this work and haven't tried yet.
>>
>> summary - What is the best way to deal with (sometimes) large numbers of
>> "threads" that do a small amount of processing and a large amount of socket
>> io?
>>
>
> Sounds like the typical case for async, actually, but since I don't know
> anything about your problem, I can't really say.  Async is also a bit of a
> pain to write/read/maintain, as you mention.
>
> Have you thought about refactoring to use long-lived processes/threads?  A
> better design is often to create workers which receive tasks (typically off
> a queue) and process them one at a time.  This usually gives better results
> than spawning a new thread for each task.
>
> --Pete
> _______________________________________________
> Chicago mailing list
> Chicago at python.org
> http://mail.python.org/mailman/listinfo/chicago
>
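The long-lived worker pattern Pete describes can be sketched with stdlib threads and a queue (names like `worker` and the doubling "task" are illustrative stand-ins, not anything from the original discussion). Since the workload is mostly socket I/O, threads blocked on the queue or on sockets release the GIL, so a small fixed pool like this avoids the cost of spawning a thread per task:

```python
import queue
import threading

def worker(tasks, results):
    # each long-lived worker loops, pulling tasks until it sees the sentinel
    while True:
        item = tasks.get()
        if item is None:       # sentinel: time to shut down
            break
        results.put(item * 2)  # stand-in for the real socket work

tasks = queue.Queue()
results = queue.Queue()
workers = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]
for t in workers:
    t.start()

for n in range(10):
    tasks.put(n)
for _ in workers:
    tasks.put(None)            # one shutdown sentinel per worker
for t in workers:
    t.join()

out = sorted(results.get() for _ in range(10))
print(out)
```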
