[Python-Dev] Socket timeout: reset timeout at each successful syscall?

Victor Stinner victor.stinner at gmail.com
Fri Apr 3 13:56:44 CEST 2015


I reworked the socket and ssl modules to better handle signals (to
implement PEP 475, retry on EINTR). These changes require recomputing
the timeout, because syscalls are now called in a loop until they stop
failing with EINTR (or EWOULDBLOCK or EAGAIN). Most socket methods exit
as soon as the underlying syscall succeeds.

The problem is that the socket.sendall() method may require multiple
syscalls. In this case, does the timeout apply to the total time or
only to a single syscall? Asked differently: should we reset the
timeout each time a syscall succeeds?
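To make the "total duration" interpretation concrete, here is a sketch
in which the deadline is fixed once for the whole operation
(sendall_total is a hypothetical helper, not an existing API):

```python
import time

def sendall_total(sock, data, timeout):
    """Send all of data, enforcing timeout over the *total* operation.

    Each individual send() gets only the time remaining until the
    deadline, so a slow peer triggers a timeout even if every single
    send() makes some progress.
    """
    deadline = time.monotonic() + timeout
    view = memoryview(data)
    while view:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("sendall timed out")
        sock.settimeout(remaining)  # shrink the per-call timeout
        sent = sock.send(view)
        view = view[sent:]
```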

Let's say that a server limits the bandwidth to 10 MB per second per
client (or per connection). If I want to send 1000 MB, the request
will take 100 seconds. Do you expect a timeout exception when calling
sendall() with a timeout of 60 seconds? Each individual send() may
succeed in less than 60 seconds; for example, each send() may send
1 MB in one second.

My reading of the socket documentation is that the socket timeout is
the total duration of an operation, not the maximum duration of a
single syscall:

"In timeout mode, operations fail if they cannot be completed within
the timeout specified for the socket (they raise a timeout exception)
or if the system returns an error."

In Python 2.7, 3.4 and 3.5, socket.sendall() resets the timeout after
each send() success.
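That reset-after-each-success behaviour amounts to the following (a
simplified sketch, not CPython's actual C code; sendall_reset is a
hypothetical name):

```python
def sendall_reset(sock, data, timeout):
    """Send all of data, granting the *full* timeout to every send().

    The clock effectively restarts after each successful syscall, so
    the whole operation may take far longer than `timeout` as long as
    each individual send() keeps making progress in time.
    """
    sock.settimeout(timeout)
    view = memoryview(data)
    while view:
        sent = sock.send(view)  # raises socket.timeout if one call stalls
        view = view[sent:]
```

With the 10 MB/s example above, this version sends all 1000 MB without
raising, because each send() completes well inside the 60 seconds.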

We had similar questions in the asyncio module, especially for
StreamReader methods which can require multiple reads. I propose a
patch to add a timeout parameter to StreamReader: it resets the
timeout after each successful read. It's already possible to put a
global timeout on any asyncio operation. StreamReader timeout patch:
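For comparison, a global timeout on a single StreamReader operation
can already be expressed with asyncio.wait_for(), which bounds the
total duration rather than resetting after each read (a minimal
sketch):

```python
import asyncio

async def readline_with_total_timeout(reader, timeout):
    # wait_for() cancels the read if it does not complete within
    # `timeout` seconds, regardless of how many partial reads the
    # StreamReader performs internally: a *total* timeout.
    return await asyncio.wait_for(reader.readline(), timeout)
```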

Note: there is also an open issue to add a socket.recvall() method,
which would raise a similar question about timeouts:
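A hypothetical recvall() (not an existing socket method; sketched here
with the same reset-per-syscall semantics as sendall()) might look
like:

```python
def recvall(sock, n, timeout):
    """Receive exactly n bytes, granting the full timeout to each recv().

    Hypothetical sketch of the proposed socket.recvall(); the timeout
    question is the same as for sendall(): reset per syscall (as here)
    or enforce a total deadline?
    """
    sock.settimeout(timeout)
    chunks = []
    while n > 0:
        chunk = sock.recv(min(n, 65536))
        if not chunk:
            raise ConnectionError("peer closed before all bytes arrived")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)
```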

