nagle at animats.com
Fri Aug 27 21:57:42 CEST 2010
On 8/11/2010 1:26 PM, EW wrote:
> On Aug 11, 2:52 pm, Paul Rubin<no.em... at nospam.invalid> wrote:
>> EW<ericwoodwo... at gmail.com> writes:
>>> Well I cared because I thought garbage collection would only happen
>>> when the script ended - the entire script. Since I plan on running
>>> this as a service it'll run for months at a time without ending. So I
>>> thought I was going to have heaps of Queues hanging out in memory,
>>> unreferenced and unloved. It seemed like bad practice so I wanted to
>>> get out ahead of it.
>> Even if GC worked that way it wouldn't matter, if you use just one queue
>> per type of task. That number should be a small constant so the memory
>> consumption is small.
> Well I can't really explain it but 1 Queue per task for what I'm
> designing just doesn't feel right to me. It feels like it will lack
> future flexibility. I like having 1 Queue per producer thread object
> and the person instantiating that object can do whatever he wants with
> that Queue. I can't prove I'll need that level of flexibility but I
> don't see why it's bad to have. It's still a small number of Queues,
> it's just a small, variable, number of Queues.
That's backwards. Usually, you want one queue per unique consumer.
That is, if you have a queue that contains one kind of request,
there's one thread reading the queue, blocked until some other
thread puts something on the queue. No polling is needed.
One consumer reading multiple queues, on the other hand, is
difficult to implement and best avoided.
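A minimal sketch of that one-queue-per-consumer pattern (written
against the Python 3 "queue" module; in the Python of this era the
module was spelled "Queue"). The worker blocks in get(), so there is
no polling loop, and a None sentinel is one common convention for
shutting it down:

```python
import queue
import threading

# One queue per consumer: the worker thread blocks in get() until a
# producer puts an item, so no polling is needed.
task_queue = queue.Queue()
results = []

def consumer(q):
    while True:
        item = q.get()        # blocks until something arrives
        if item is None:      # sentinel telling the worker to exit
            q.task_done()
            break
        results.append(item * 2)   # stand-in for real work
        q.task_done()

worker = threading.Thread(target=consumer, args=(task_queue,))
worker.start()

for n in range(3):
    task_queue.put(n)
task_queue.put(None)          # shut the consumer down
worker.join()
```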
Note, by the way, that CPython isn't really concurrent. Only
one thread executes Python bytecode at a time, because of the
global interpreter lock (GIL). So if your threads are
compute-bound, threading will not help even on a multicore CPU.
There's a "multiprocessing" module which allows spreading work
over several processes instead of threads. That can be a helpful
workaround, since each process has its own interpreter and GIL.
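A minimal sketch of that workaround using multiprocessing.Pool
(the with-statement form shown here is a later Python 3 convenience;
the same map() call works without it). CPU-bound work in the worker
function can use multiple cores because each worker is a separate
process:

```python
from multiprocessing import Pool

def square(n):
    # Runs in a worker process with its own interpreter and GIL,
    # so CPU-bound work here is not serialized by the parent.
    return n * n

if __name__ == "__main__":
    # The __main__ guard is required so worker processes can safely
    # re-import this module on platforms that spawn rather than fork.
    with Pool(processes=2) as pool:
        squares = pool.map(square, range(5))
    print(squares)
```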