threads or queue for this task

Alex Martelli aleax at
Tue Sep 17 02:40:09 EDT 2002

James J. Besemer wrote:
>>>Generally, Queues never get "full" and you generally don't care if
>>>they're empty.
>>You can define Queues that can hold a maximum number of items; [...]

And I proceed to give a very detailed and rather realistic example.
How come you don't address it at all?

>>Setting size is just an optional embellishment, though
>>sometimes it does pay dividends.
> Setting size has its place but otherwise introduces unnecessary
> complications.  

If it's not necessary (or at least helpful), it's unnecessary.  Monsieur
de La Palice would be proud of you.

> Among other things, it forces the queue writer to deal
> with the situation of what to do if the queue is full.  Signal error
> back to its client?   Sleep and try again later?  Silently discard the
> request?  Abort?

Almost invariably, just as I said (and you snipped) and in the
example I have, the reason you set a Queue to hold a maximum
number of items is to have threads that post to it implicitly
wait if it's full when they try to post.  The situation is
strongly parallel to dealing with an _empty_ queue -- normally,
you just get items, and implicitly wait if it's empty when
you try to get.
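
That symmetry is built right into Python's queue module (spelled
Queue in older Pythons); here is a minimal sketch, with a sleep
standing in for real processing time:

```python
import queue
import threading
import time

q = queue.Queue(maxsize=2)   # bounded: put() blocks once 2 items are waiting

def producer():
    for i in range(6):
        q.put(i)             # implicitly waits whenever the queue is full

def consumer():
    for _ in range(6):
        item = q.get()       # implicitly waits whenever the queue is empty
        time.sleep(0.01)     # processing is slower than producing

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(q.empty())             # every item produced was consumed
```

Neither thread tests "is it full?" or "is it empty?" -- the
blocking put and get handle both ends identically.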

> "Generally," NOT having a limit is by far the more elegant solution.

I dispute the generality you assert, and, again, I gave a
specific example: where the time to produce work-requests
(posted to a queue) is always far less than the time to
consume/process a work-request taken from the queue.  This
is hardly a RARE situation: on the contrary, it's quite a
normal state of affairs that processing a work-request is
much more time-consuming than producing it.
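
That 1:10 scenario is easy to demonstrate; here is a sketch (again
using the modern queue spelling, with the consumer's sleep standing
in for real processing time) showing how a bound caps the backlog
that an unbounded queue lets pile up:

```python
import queue
import threading
import time

def backlog_after_producing(q, n=50):
    """Return how many items are still queued the moment the producer finishes."""
    def producer():
        for i in range(n):
            q.put(i)          # on a bounded queue, this throttles the producer
    def consumer():
        for _ in range(n):
            q.get()
            time.sleep(0.002) # processing takes far longer than producing
    c = threading.Thread(target=consumer)
    c.start()
    p = threading.Thread(target=producer)
    p.start()
    p.join()                  # producer done: how much work piled up?
    pending = q.qsize()
    c.join()
    return pending

unbounded = backlog_after_producing(queue.Queue())
bounded = backlog_after_producing(queue.Queue(maxsize=5))
print(unbounded, bounded)     # the bounded backlog never exceeds maxsize
```

With no bound, nearly all fifty requests sit in memory at once;
with maxsize=5, the producer is implicitly paced to the consumer
and the backlog never exceeds five.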

Therefore, I consider your assertion "Generally, Queues never
get *full*" a dangerous over-simplification: they get full
if you specifically define them to have a maximum size (and
post more items than you peel off), and it's not a rare case
to want that ("an optional embellishment", as I said, but one
that comes up quite commonly in practice).

> Even in the general case, the queue may eventually grow to consume all
> available memory if one doesn't have the CPU power to keep up with
> requests.  But, like most Python errors, this eventually will raise a
> specific exception.

If the number of work requests is large but not unbounded, you
may thrash your computer into disgraceful performance rather
than having your program fail with an exception -- wasting
huge amounts of memory on an unbounded queue, where a
single, simple call to make it bounded would solve it all
like magic.  It's not a matter of how much CPU power you
have: it's strictly the ratio between how much time it takes
to produce a work-request and how much time it takes to
process it.  If that ratio, which may not depend on CPU power
at all, is low enough (it was 1:10 in my example, certainly
anything but an extreme case), using a queue of unbounded size
will degrade performance horrendously once enough work-requests
pile up (thrashing can slow your whole machine down almost
without limit, if virtual memory is sufficiently larger than
physical memory).

> Anyway, I felt the "general" case was more applicable to this novice's
> line of questioning.  He/she already was confounded by some
> non-essential details.

We disagree on didactics.  I think over-simplification is a
bad way to teach: everything must be kept as simple as possible,
*but no simpler than that*.  The parallels between a queue
becoming empty, and one becoming full (when set to have a max
size), are very strong, and may indeed HELP a bright novice
grasp things better than if one tried to sweep half of the
issue under the carpet.
