ThreadPoolingMixIn

pavel.uvarov at gmail.com
Mon Jun 2 17:19:05 CEST 2008


On Jun 2, 7:09 pm, pavel.uva... at gmail.com wrote:
> On May 31, 9:13 pm, Rhamphoryncus <rha... at gmail.com> wrote:
>
> > On May 30, 2:40 pm, pavel.uva... at gmail.com wrote:
>
> > > Hi, everybody!
>
> > > I wrote a useful class ThreadPoolingMixIn which can be used to create
> > > fast thread-based servers. This mix-in works much faster than
> > > ThreadingMixIn because it doesn't create a new thread on each request.
>
> > Do you have any benchmarks demonstrating the performance difference?
>
> To benchmark this I used a simple TCP server which writes a small
> (16k) string to the client and closes the connection.
>
> I started 100 remote clients and got 500 replies/s for ThreadingMixIn
> and more than 1500 replies/s for ThreadPoolingMixIn. I tested it on
> FreeBSD 6.2 amd64.
>
> I'm very curious about the suspicious consistency of the 500
> replies/s figure for ThreadingMixIn: it seems to stay the same for
> various packet sizes. I suspect there is some OS limit on the thread
> creation rate.
>
> Below I include a bug-fixed ThreadPoolingMixIn and the benchmarking
> utility. The utility can also start clients on localhost, though the
> reply rate will then be lower (around 1000 replies/s).
>
> To start the benchmarking server with localhost clients use:
> python ./TestServer.py --server=threading --n-clients=100
> or
> python ./TestServer.py --server=threadpooling --n-clients=100

I've just tested it on a Linux box and got 240 replies/s vs. 2000
replies/s, i.e. roughly an 8x performance improvement.


