Simple thread pools

Josiah Carlson jcarlson at uci.edu
Thu Nov 11 20:55:47 CET 2004


Steve Holden <steve at holdenweb.com> wrote:
> 
> Josiah Carlson wrote:
> 
> > Steve Holden <steve at holdenweb.com> wrote:

[another snip]
> > I had initially planned to create a listening socket, and generate a
> > bunch of local sockets, then I remembered os.pipe and said to myself, "to
> > hell with it, pipes should be faster, they bypass the network stack".
> > 
> Of course the example is then all running on the one machine, which 
> might influence elapsed times in a way that having servers remote 
> wouldn't, but it was a neat example just the same.

Certainly, it all but removes the latency between sending and receiving
data, and it keeps the TCP/IP stack out of it.  The main goal was to
test the threads-plus-system-call slowdown theory, and the results
corroborated my slowdown claim.
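For anyone following along, the os.pipe plumbing mentioned above really
is only a few lines.  A minimal sketch (the pipe_roundtrip helper is a
hypothetical name, just to show the mechanics):

```python
import os

def pipe_roundtrip(payload):
    """Push bytes through an os.pipe() pair -- kernel plumbing that
    bypasses the TCP/IP stack entirely, unlike a localhost socket."""
    r, w = os.pipe()              # (read_fd, write_fd)
    os.write(w, payload)          # producer side
    data = os.read(r, len(payload))  # consumer side
    os.close(r)
    os.close(w)
    return data
```

Hand the write end to the producer thread and the read end to the
consumer thread and you have a queue with no network stack in the way.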


> > As they sometimes say, "there is more than one way to skin a cat",
> > though let us hope that there isn't any cat skinning.
> > 
> > If your processor spends time maxed out by your script, then you may do
> > better by reducing threads (processor limited, and not bandwidth/latency
> > limited).  As thread count increases, you spend more processor handling
> > overhead.  If it isn't maxed out, and you are running at the file handle
> > limit and/or the bandwidth limit, congrats.
> > 
> Well I'm happy to say I don't appear to be running at *any* limit just 
> at the moment. But clearly when you are CPU-limited then there's a 
> balance to be struck between thread-handling overhead and applications 
> processing.

Considering that your application pushes emails out as fast as
possible, and mine pulls them in as fast as possible, anything can be
the bottleneck.  The real question to ask oneself when writing a
process that handles (arguably) too much data is "why is it finishing
in X minutes instead of X/2?"  Then again, as long as the client is
happy, it doesn't much matter.


> > Now, just because you are using fewer threads, doesn't mean that you
> > can't get equivalent throughput.  Heck, using a heavily modified variant
> > of asyncore, we've been able to handle 50,000 POP3 account checks (login,
> > stat, and if necessary: list, uidl, download email, delete email,
> > disconnect) every 15 minutes from a laptop.  Our biggest issue is
> > latency of our connection, but even then, we do well considering that
> > this is all with a single thread.
> > 
> The owner of that laptop needs to be given some work. Even *I* don't 
> check my mail every second :-)

It is an automated system, meant to check hundreds of thousands of email
accounts every 15 minutes (multiple processors/machines check 50k
accounts each processor every 15 minutes or so).


> I see we were indeed at cross purposes. The load I'm talking about is 
> bursty in the extreme - I am communicating with the MX hosts for each of 
> the receiving domains, and it takes less than 100ms to send some mails.

We don't do MX lookups, and our traffic can be bursty, though we use
dynamic scheduling to reduce or remove burstiness.  Some of our
accounts do complete quickly, but others are on machines across the
world with low bandwidth and high latency.


> For others I have to try several MX hosts to get any response at all 
> (each one with a 20-second timeout), and the reason I went to the 
> multi-threaded solution in the first place was to avoid having the whole 
> process wait on a single recalcitrant MX host.

Such things are not necessary.  If you keep records of socket
instantiation time, last data received time, last data sent time, data
transferred, etc., you can poll and disconnect on poor quality of
service.  We check every 5 seconds for timed-out connections, and have
found the results to be quite satisfactory.  There is no need to block
on a single socket; use select to multiplex.


> The client is very happy, as we've seen an increase in reliability and a 
> 48-fold speed up in elapsed time as a result of the (rather painful) 
> transition to multi-threading. This is clearly justification for using 
> 200 parallel threads, as on many threads the dominant factor in the 
> elapsed time is the remote server response (or absence thereof).

I am glad that your client is happy and that threading has been a good
solution for you.

All I'm saying is that you can get the same kinds of concurrency, same
kinds of throughput, same kinds of elapsed times, etc., with an
asynchronous approach and a single thread.  I suppose 'whatever floats 
[our] boats' and 'whatever implementation presents itself in the most
timely manner'.
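To make that concrete, here is a rough single-thread sketch in the
spirit of our modified asyncore loop.  The start_check/run names and
the selectors-based loop are my own illustration, not our production
code: every connection is opened non-blocking, and one event loop
services whichever socket is ready.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def start_check(host, port):
    """Begin one non-blocking connection; the event loop below drives
    all pending connections concurrently in a single thread."""
    s = socket.socket()
    s.setblocking(False)
    s.connect_ex((host, port))    # returns immediately, no waiting
    sel.register(s, selectors.EVENT_READ, data=host)

def run():
    """Collect one greeting banner (e.g. '+OK POP3 ready') per host,
    multiplexing every registered socket from a single thread."""
    results = {}
    while sel.get_map():          # until every socket is handled
        for key, _ in sel.select(timeout=1.0):
            s, host = key.fileobj, key.data
            results[host] = s.recv(4096)
            sel.unregister(s)
            s.close()
    return results
```

Call start_check once per account, then call run(); the elapsed time is
governed by the slowest server, not by the sum of the round trips.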


There is one really nice thing about client programming...you don't need
to worry as much about a denial of service attack, as you control the
number of threads/forks/sockets/etc.  Well...except in the case of a
cruel server.


> The cases you are taking about may well require a more careful 
> consideration of threading overhead.

There are various reasons we went with an asynchronous approach, among
them being gross scalability (thousands of sockets with a Python
recompile).


 - Josiah
