On Sat, Jan 19, 2013 at 8:49 PM, Peter Portante email@example.com wrote:
I noticed while stracing a process that sock.setblocking() calls always result in pairs of fcntl() calls on Linux. Checking Modules/socketmodule.c in 2.6.8, 2.7.3, and 3.3.0, the code seems to use the following (unless I have missed something):
    delay_flag = fcntl(s->sock_fd, F_GETFL, 0);
    if (block)
        delay_flag &= (~O_NONBLOCK);
    else
        delay_flag |= O_NONBLOCK;
    fcntl(s->sock_fd, F_SETFL, delay_flag);
Perhaps a check to see whether the flags actually changed might be worth making?
    int orig_delay_flag = fcntl(s->sock_fd, F_GETFL, 0);
    if (block)
        delay_flag = orig_delay_flag & (~O_NONBLOCK);
    else
        delay_flag = orig_delay_flag | O_NONBLOCK;
    if (delay_flag != orig_delay_flag)
        fcntl(s->sock_fd, F_SETFL, delay_flag);
OpenStack Swift uses the Eventlet module, which sets each accepted socket to non-blocking, resulting in twice the number of fcntl() calls. Not a killer on performance, but it seems simple enough to save a system call here.
This would seem to be a simple enough fix, but note that it only saves a call when a *redundant* call to setblocking() is made (i.e. one that attempts to set the flag to the value it already has). Why would that be a common pattern? And even if it were, is the cost of one extra fcntl() call really worth making the code more complex?
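For reference, the same F_GETFL/F_SETFL dance (with the proposed skip-if-unchanged guard) can be reproduced from userspace with Python's stdlib fcntl module. This is only an illustrative sketch of the logic under discussion, not the CPython C implementation; the helper name set_blocking is made up for the example. On Linux/Unix:

```python
import fcntl
import os
import socket

def set_blocking(fd, block):
    """Mimic the setblocking() flag dance, with the proposed guard:
    only issue the second fcntl() when the flags actually change."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL, 0)        # first syscall
    if block:
        new_flags = flags & ~os.O_NONBLOCK
    else:
        new_flags = flags | os.O_NONBLOCK
    if new_flags != flags:                           # proposed check
        fcntl.fcntl(fd, fcntl.F_SETFL, new_flags)    # second syscall, only when needed

s = socket.socket()
set_blocking(s.fileno(), False)   # flips O_NONBLOCK: two fcntl() calls
set_blocking(s.fileno(), False)   # redundant: guard skips the F_SETFL
```

Running the script under strace shows one fcntl() for the redundant call instead of two, which is exactly the saving being debated above.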