datagram queue length

Jonathan Ellis jbellis at gmail.com
Thu Aug 11 06:37:50 CEST 2005


I seem to be running into a limit of 64 queued datagrams.  This isn't a
data buffer size limit; varying the size of the datagram makes no
difference to the observed queue length.  If more datagrams arrive
before some are read, they are silently dropped.  (By "silently," I
mean, "tcpdump doesn't record these as dropped packets.")  This is a
problem because while my consumer can handle the overall load easily,
the requests often arrive in large bursts.

This only happens when the sending and receiving processes are on
different machines, btw.

Can anyone tell me where this magic 64 number comes from, so I can
increase it?
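
My best guess so far: on Linux the UDP receive queue is bounded by the
socket receive buffer (SO_RCVBUF, defaulting to net.core.rmem_default),
which is charged in bytes of kernel buffer space per packet rather than
in datagrams -- for small datagrams the fixed per-packet overhead
dominates, which would explain why the count looks constant regardless
of payload size.  A sketch of querying and raising it on the receiving
socket (the 1 MB request is an arbitrary value I picked; the kernel
silently caps it at net.core.rmem_max):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Current receive buffer size in bytes (net.core.rmem_default on Linux).
default_rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Ask for a larger buffer; the kernel caps the result at net.core.rmem_max,
# so read the option back to see what was actually granted.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024 * 1024)
new_rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print(default_rcvbuf, new_rcvbuf)
sock.close()
```

If that guess is right, raising rmem_max and asking for a bigger
SO_RCVBUF on the receiver should deepen the queue.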

Illustration follows.

-Jonathan

# <receive udp requests>
# start this, then immediately start the other
# _on another machine_
import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 3001))

time.sleep(5)

while True:
    data, client_addr = sock.recvfrom(8192)
    print data

# <separate process to send stuff>
import socket

for i in range(200):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto('a' * 100, 0, ('***other machine ip***', 3001))
    sock.close()
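
As an aside, the sender doesn't need a fresh socket per datagram; one
socket can be reused for the whole burst.  A sketch, with 127.0.0.1
standing in for the other machine's address:

```python
import socket

# Stand-in destination; substitute the receiving machine's address.
dest = ('127.0.0.1', 3001)

# One socket reused for every datagram in the burst.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 0
for i in range(200):
    sock.sendto(b'a' * 100, dest)
    sent += 1
sock.close()
print(sent)
```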



