[Python-Dev] SocketServer issues

Kristján Valur Jónsson kristjan at ccpgames.com
Wed Mar 14 19:37:05 CET 2012


A different implementation (e.g. one using Windows IOCP) can do timeouts without using select (and must, since select does not work with IOCP).  The same goes for a gevent-based implementation: it will time out the accept() on each socket individually rather than calling select() on each of them.
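As a sketch of what a per-socket timeout looks like (the port number and the 5-second value are made up for illustration), the timeout lives on the listening socket itself and accept() honours it directly:

    import socket

    # The listening socket carries its own timeout, so accept()
    # returns control periodically without any select() call.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", 8000))
    listener.listen(5)
    listener.settimeout(5.0)

    while True:
        try:
            conn, addr = listener.accept()  # blocks at most 5 seconds
        except socket.timeout:
            continue  # periodic wakeup, e.g. to check a shutdown flag
        conn.close()  # a real server would hand conn to a handler here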

The reason I'm fretting is latency.  There is only one thread accepting connections.  If it has to do an extra event-loop dance for every socket it accepts, that adds to the delay in getting a response from the server.  So accept() is indeed critical for socket server performance.

Maybe this is all just nonsense, but it still seems odd to jump through extra hoops to emulate functionality that is already supported by the socket spec and can be done in the most appropriate way for each implementation.
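For contrast, the select-based path under discussion does roughly the following (a paraphrase of the era's SocketServer.handle_request, not the exact code; listener setup repeated for completeness):

    import select
    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", 8000))
    listener.listen(5)

    # Wait up to 5 seconds for the socket to become readable,
    # then accept -- the extra hoop before every accept().
    ready, _, _ = select.select([listener], [], [], 5.0)
    if ready:
        conn, addr = listener.accept()
        conn.close()
    else:
        pass  # SocketServer calls handle_timeout() here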

K

-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames.com at python.org On Behalf Of Antoine Pitrou
Sent: 14 March 2012 10:23
To: python-dev at python.org
Subject: Re: [Python-Dev] SocketServer issues

On Wed, 14 Mar 2012 16:59:47 +0000
Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
> 
> It just seems odd to me that it was designed to use the "select" api
> to do timeouts, where timeouts are already part of the socket protocol
> and can be implemented more efficiently there.

How is it more efficient if it uses the exact same system calls?
And why are you worrying exactly? I don't understand why accept() would be critical for performance.

Thanks

Antoine.
