[Python-ideas] Async API: some code to review
Richard Oudkerk
shibturn at gmail.com
Tue Oct 30 23:01:19 CET 2012
On 30/10/2012 8:24pm, Guido van Rossum wrote:
> Here's another unscientific benchmark: I wrote a stupid "http" server
> (stupider than echosvr.py actually) that accepts HTTP requests and
> responds with the shortest possible "200 Ok" response. This should
> provide an adequate benchmark of how fast the event loop, scheduler,
> and transport are at accepting and closing connections (and reading
> and writing small amounts). On my linux box at work, over localhost,
> it seems I can handle 10K requests (sent using 'ab' over localhost) in
> 1.6 seconds. Is that good or bad? The box has insane amounts of memory
> and 12 cores (?) and rates at around 115K pystones.
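(10K requests in 1.6 seconds works out to roughly 6250 requests/sec, which is in
the same ballpark as my Linux numbers below.)

I haven't seen Guido's server, but from the description I picture something like
the blocking-socket sketch below -- just to make concrete what is being measured;
the port and buffer size are made up, and this is not his actual tulip code:

    import socket

    # Minimal "stupid" HTTP responder: accept a connection, read whatever
    # request arrives, send the shortest possible 200 response, and close.
    RESPONSE = b"HTTP/1.0 200 Ok\r\nContent-Length: 0\r\n\r\n"

    l = socket.socket()
    l.bind(('127.0.0.1', 8080))       # arbitrary port for the sketch
    l.listen(100)
    while True:
        a, _ = l.accept()
        a.recv(1024)                  # read (and ignore) the request
        a.sendall(RESPONSE)
        a.close()

(The 10K requests would then presumably have been sent with something like
"ab -n 10000 http://127.0.0.1:8080/".)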
I tried the simple single-threaded benchmark below on my laptop and got the
following rates:
Platform                                | Connections/sec
----------------------------------------+-----------------
Linux                                   | 6000-11000
Linux in a VM (with 1 cpu assigned)     | 4600
Windows                                 | 1400
On Windows this sometimes failed with:
    OSError: [WinError 10055] An operation on a socket could not
    be performed because the system lacked sufficient buffer
    space or because a queue was full
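My guess is that opening and closing connections this quickly exhausts Windows'
socket buffer space (error 10055 is WSAENOBUFS); backing off briefly and
retrying the connect would probably paper over it. An untested sketch, with
arbitrary retry counts and delays:

    import socket, time

    def connect_with_retry(addr, retries=5, delay=0.05):
        # Retry connect() a few times when Windows reports error 10055,
        # giving the OS a moment to free buffer space.
        for attempt in range(retries):
            c = socket.socket()
            try:
                c.connect(addr)
                return c
            except OSError as e:
                c.close()
                if getattr(e, 'winerror', None) != 10055 or attempt == retries - 1:
                    raise
                time.sleep(delay)

Anyway, here is the benchmark: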
import socket, time, sys, argparse

N = 10000

def server():
    # Accept connections one at a time and echo back the upper-cased data.
    l = socket.socket()
    l.bind(('127.0.0.1', 0))
    l.listen(100)
    print('listening on port', l.getsockname()[1])
    while True:
        a, _ = l.accept()
        data = a.recv(20)
        a.sendall(data.upper())
        a.close()

def client(port):
    # Open N connections sequentially and report connections/sec.
    start = time.time()
    for i in range(N):
        with socket.socket() as c:
            c.connect(('127.0.0.1', port))
            c.sendall(b'foo')
            res = c.recv(20)
            assert res == b'FOO'
            c.close()
    elapsed = time.time() - start
    print("elapsed=%s, connections/sec=%s" % (elapsed, N/elapsed))

parser = argparse.ArgumentParser()
parser.add_argument('--port', type=int, default=None,
                    help='port to connect to')
args = parser.parse_args()
if args.port is not None:
    client(args.port)
else:
    server()
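(To try it: run the script with no arguments in one terminal -- it prints the
port it is listening on -- then run it again in another terminal with --port set
to that number, e.g. "python bench.py" and then "python bench.py --port 54321";
the filename and port number here are just placeholders.)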
--
Richard