threads or queue for this task

James J. Besemer jb at
Tue Sep 17 12:54:36 CEST 2002

Alex Martelli wrote:

>James J. Besemer wrote:
>        ...
>>>>Generally, Queues never get "full" and you generally don't care if
>>>>they're empty.
>>>You can define Queues that can hold a maximum number of items; [...]
>And I proceed to give a very detailed and rather realistic example.
>How come you don't address it at all?

I certainly meant no disrespect. Nor did I mean to contradict anything 
material that you said.

I took your point to be that a bounded queue also is sometimes useful, 
which I do not and did not dispute.

Your example and discussion seemed complete and thorough.  I read it as 
an extension to what I had offered, and I had nothing to add or to 
dispute.

In summary, you expressly classed it as an "optional embellishment" 
which seemed consistent with my own view.

It goes without saying that if the designer knows in advance that he 
doesn't have the throughput to handle incoming requests, then a 
significantly more complicated solution is called for.  I alluded to 
more complex topologies and scenarios but I lacked the time and patience 
to spell out the solutions.  Attempting to do so seemed doubly pointless 
as you seemed to be taking the lead on more elaborate scenarios.

So at a high level I am at a loss, as you seem to be picking a fight I 
did not intend to participate in.  Like I said, I thought we were 
largely in agreement, while expanding upon different scenarios.

>>"Generally," NOT having a limit is by far the more elegant solution.
>I dispute the generality you assert, 

I agree there are situations where unbounded queues might be 
inappropriate.  I only meant to suggest to this particular novice that 
it was a complication that I would avoid in a first cut solution.  

In this, I am "coming from" the standpoint of Extreme Programming -- you 
may recall -- that dictates you start by writing the simplest solution 
you don't know won't work.  Performance optimization and other 
"embellishments" are always deferred unless/until you can't avoid it.

That simplest, initial solution, IMHO, is an unbounded queue.  I further 
characterize it as "elegant" because it's fully functional for many 
applications and it's as simple as it gets.  To me, real-world 
complications, however realistic, tend to result in less "elegant" 
solutions, but perhaps you disagree.
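
As a minimal sketch of that first-cut shape -- using today's standard 
"queue" and "threading" modules, with all names purely illustrative:

```python
import queue
import threading

# First-cut sketch: one worker thread draining an unbounded Queue.
tasks = queue.Queue()      # no maxsize: the queue is never "full"
results = []

def worker():
    while True:
        item = tasks.get()
        if item is None:   # sentinel tells the worker to shut down
            break
        results.append(item * 2)

t = threading.Thread(target=worker)
t.start()
for n in range(5):
    tasks.put(n)           # producer never blocks on an unbounded queue
tasks.put(None)
t.join()
print(results)             # [0, 2, 4, 6, 8]
```

With a single worker the results come back in order; the whole thing is 
about a dozen lines, which is the point.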

>and, again, I gave a
>specific example: where the time to produce work requests
>(posted to a queue) is always far less than the time to
>consume/process a work-request taken from the queue.  
>is hardly a RARE situation: on the contrary, it's quite a
>normal state of affairs that processing a work-request is
>much more time-consuming than producing it.

Incidentally, the ratio of the "time to produce work requests" vs. the 
time to process them is immaterial (since the work requests may be 
created on a different machine).  In queuing theory it is the "arrival 
rate" of the requests vs. the average processing time that matters.  
Little's result tells us that in a steady-state system, if the average 
arrival rate is R (arrivals per unit time) and the average processing 
time is T (time units), then the average queue length N will be:

    N = R * T

Of course, Little's result only holds for steady state systems, where 
the arrival rates are relatively low or the processing times are 
relatively short.  It may be more useful to consider inter-arrival times 
(the inverse of arrival rates), I:

    N = T / I.

Then it's easy to see that as the inter-arrival times shrink, the 
queue length can become arbitrarily large.  Of course, this all 
assumes stochastic processes, with randomly varying arrival rates and 
processing times.
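
Plugging in some made-up steady-state figures shows how the two forms 
agree:

```python
# Hypothetical steady-state figures, purely for illustration.
R = 5.0        # average arrival rate: 5 requests per second
T = 0.4        # average processing time: 0.4 seconds per request
N = R * T      # Little's result: average number of queued requests
I = 1.0 / R    # mean inter-arrival time (inverse of the arrival rate)

assert abs(N - T / I) < 1e-9   # the forms N = R*T and N = T/I agree
print(N)                        # 2.0
```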

If you're looking at a static system with a fixed processing time and 
known arrival rates, then it's a matter of simple arithmetic.  The 
bottom line is that if the requests exceed the capacity of the machine, 
your queue length grows without bound and eventually you're hosed.  
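
For instance, with made-up fixed figures, the arithmetic looks like 
this:

```python
# Hypothetical static system: fixed processing time, known arrival rate.
I = 0.5        # a new request arrives every 0.5 s
T = 0.6        # each request takes 0.6 s to process
growth = 1.0 / I - 1.0 / T   # net requests added to the queue per second

# growth > 0 means the queue grows without bound -- here by ~1/3 req/s.
print(round(growth, 3))      # 0.333
```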

Frankly, any time you do NOT have enough capacity to handle the expected 
traffic then your system design is flawed on a fundamental level.  You 
can perhaps mitigate the problem with bounded queues or what not but 
that still leaves you with possible data overruns or other problems 
upstream from the writer end of the queue.  Bounded queues only push the 
problem off to some other point in the system, since the writer is 
suspended and not responding to the requests.  Ultimately, at some 
level, the designer is going to have to deal with the problem of 
requests that cannot be fulfilled.  Conversely, if it is given that 
there is ample CPU power then he can dispense with all the complexity 
and use unbounded queues or something even simpler.
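
In today's Python the standard-library module is "queue" (it was 
"Queue" back then), and the bound is just a constructor argument:

```python
import queue

bounded = queue.Queue(maxsize=2)   # holds at most 2 items
bounded.put("a")
bounded.put("b")
try:
    bounded.put_nowait("c")        # full: non-blocking put raises
except queue.Full:
    overflowed = True

unbounded = queue.Queue()          # maxsize <= 0 means never "full"
for i in range(1000):
    unbounded.put(i)               # never blocks the producer
```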

In my experience, computer queuing systems first and foremost have to be 
designed so that they can handle the max throughput without fail.  In a 
few embedded systems, throughput considerations drove the choice and 
speed of the CPU(s) and other hardware "assist."  More commonly, the CPU 
speed was fixed and max throughput drove constraints on request rates or 
processing times, usually both.  In a number of systems I worked on, 
queues always only had 0 or 1 items in them.  In one system like this, 
the queues actually were implemented in hardware and writer CPUs 
actually locked up until the reader emptied the previous request.  In 
another, the CPUs themselves did not lock but all further communication 
was effectively blocked until the "queue" cleared.  Blocking for any 
length of time was considered a bug and the system was designed never to 
do that.  In other systems, queues had fixed sizes but this was in cases 
where the RTOS required the developer to declare a fixed amount of 
space.  In this case, the queue size was determined by measuring the 
average arrival rates and processing times (plus a fudge factor), but 
the limits were chosen to conserve RAM; like the CPU's stack they were 
designed never to be exceeded in normal operation.  Then there were 
many, many cases where throughput was known to be sufficiently low that 
queue length was never a consideration.  Usually arrivals were low 
enough and processing time fast enough that the average queue length was 
virtually zero.

As far as producers outperforming consumers being the "normal state of 
affairs" -- while I agree it was once a common problem, I find it is 
increasingly not the case, now that we have computers with multiple 
gigahertz of CPU and hundreds of megabytes of RAM.  Hell, my present 
desktop machine's RAM is 50X larger than my first hard drive.  This 
results in a qualitative difference in programming.  One of the really 
cool things about Python on these modern machines is you can do all 
sorts of things that previously were virtually impossible.  E.g., 
reading and writing multi megabyte files as a unit and treating them 
internally as a big string, where previously only line at a time 
processing was possible.  Used to be you'd never think of using 
"unbounded" strings -- or lists or other large, dynamic structures.  Now 
they're the norm.  Or -- in this case -- Python's unbounded queues are 
useful over a much wider range of conditions than previously thought. 
Used to be all your elegant little CS algorithms never worked on a real 
computer because you had to translate them from their ideal world to the 
real world of extremely limited memory, disk and CPU power.  While we 
may always be machine limited for some applications, I find it 
delightful that increasingly we can ignore those nasty details, merely 
write cosmic-simple Python code and have it all work.
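
For example, the whole-file style mentioned above is essentially one 
line (the file and its contents here are just stand-ins):

```python
import os
import tempfile

# Write a small stand-in file, then read it back as one big string.
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write("line1\nline2\nline3\n")

with open(path) as f:
    text = f.read()            # whole file as a single string
lines = text.splitlines()      # vs. old line-at-a-time processing
print(len(lines))              # 3
```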

Really and truly, unbounded Queues seem to me to be the Pythonetic way 
to go.  ;o)

>Therefore, I consider your assertion "Generally, Queues never
>get *full*" a dangerous over-simplification: they get full
>if you specifically define them to have a maximum size (and
>post more items than you peel off), 

Yes.  If you make them too small they will overflow.  

I was saying DO NOT do this.

>and it's not a rare case
>to want that ("an optional embellishment", as I said, but one
>that does happen quite normally in general cases).

I don't know how to react to your shift in emphasis on this point from 
mere "optional embellishment" to "dangerous over-simplification".   I 
think you're exaggerating the "danger."  What is the "dangerous" outcome 
you're striving to avoid?

I think you were right the first time.

>If the number of work requests is not unbounded, 

The total number of requests is immaterial.  You mean if the arrival 
rate is sufficiently high or the processing time too long.

>you may be
>thrashing your computer into disgraceful performance rather
>than having your program fail with an exception -- wasting
>unbounded amounts of memory on an unbounded queue, 

Yes, I acknowledged that your system can thrash or fail some other way 
if you don't have enough power to process the requests.

>where a
>single, simple call to make it bounded would solve it all
>like magic.  

Magic sounds good but what about the time-critical requests that don't 
get serviced in time?

Bounded queues are A solution to some problems but not The solution for 
all situations.  

And as I said, they push part of the problem to some other place in the 
design, which may or may not be a good thing.
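
A small sketch of that push-back, using the modern "queue" module (the 
size and timeout are arbitrary, chosen only to make it visible):

```python
import queue

q = queue.Queue(maxsize=1)     # deliberately tiny bound, for illustration
q.put("urgent-1")
try:
    # Without a timeout the writer would simply hang here; with one it
    # fails instead -- either way "urgent-2" is not serviced in time.
    q.put("urgent-2", timeout=0.1)
    delivered = True
except queue.Full:
    delivered = False
```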

>>Anyway, I felt the "general" case was more applicable to this novice's
>>line of questioning.  He/she already was confounded by some
>>non-essential details.
>We disagree on didactics.  I think over-simplification is a
>bad way to teach: everything must be kept as simple as possible,
>*but no simpler than that*.  

Is it my turn to retort with a Monsieur de La Palice allusion?

Actually, it sounds like we agree on didactics but disagree on exactly 
where to draw the line.  And I don't see that we're all that far apart.

I'd be curious to hear Robin's opinion on the matter.

I also sense that I've somehow done something to piss you off.  I'm 
sorry if I did but I can't think what it would have been.



James J. Besemer		503-280-0838 voice
2727 NE Skidmore St.		503-280-0375 fax
Portland, Oregon 97211-6557	mailto:jb at	
