I've been prototyping a client that connects to thousands of servers and calls a method on each. At this stage it's not really important to me whether that's via XML-RPC, Perspective Broker, or something else.

What seems to happen on the client machine is that each network connection that gets opened and then closed goes into a TIME_WAIT state, and eventually there are so many connections in that state that it's impossible to create any more. I'm keeping an eye on the output of `netstat -an | wc -l`. Initially I've got 569 entries there. When I run my test client, that ramps up really quickly and peaks at about 2824. At that point, the client reports a callRemoteFailure:

```
callRemoteFailure [Failure instance: Traceback (failure with no frames):
 <class 'twisted.internet.error.ConnectionLost'>:
 Connection to the other side was lost in a non-clean fashion: Connection lost.
```

Increasing the file descriptor limits doesn't seem to have any effect on this.

Is there an established, Twisted-sanctioned canonical way to free up this resource? Or am I doing something wrong? I'm looking into tweaking SO_REUSEADDR and SO_LINGER - does that sound sane? Just tapping the lazyweb to see if anyone's already seen this in the wild.

Thanks guys,
Donal
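For the record, here's the kind of thing I mean by tweaking those options - an untested sketch using the plain `socket` module, not Twisted itself. The `make_tunable_socket` name and the choice of a zero linger are just my experiment; from what I've read, `SO_LINGER` with `l_onoff=1, l_linger=0` makes `close()` send an RST instead of a FIN, so the socket never enters TIME_WAIT at all (at the cost of aborting the connection and discarding unsent data), and `SO_REUSEADDR` lets you rebind a local address that's still sitting in TIME_WAIT:

```python
import socket
import struct

def make_tunable_socket():
    """Sketch: a TCP socket configured to avoid piling up TIME_WAIT
    entries. Hypothetical helper, not part of any library."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Allow rebinding a local address that is still in TIME_WAIT.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

    # struct linger { int l_onoff; int l_linger; }
    # l_onoff=1, l_linger=0: close() aborts the connection with an RST,
    # so it skips TIME_WAIT entirely -- but any unsent data is lost.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                 struct.pack('ii', 1, 0))
    return s
```

In Twisted I'd presumably have to reach through the transport to get at the underlying file descriptor to apply this, which is part of why I'm asking whether there's a sanctioned way instead.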