[Twisted-Python] Limiting number of concurrent client connections
I'm finding that Win32Reactor raises an exception on every iteration of the main loop once I exceed the 64-object limit of WaitForMultipleObjects. I would prefer to avoid this fairly obvious denial-of-service problem by limiting the number of concurrent client connections. Is there a standard solution for this? Thanks in advance, -- Toby Dickenson
This isn't the best solution, but I've been using a semaphore to limit the number of "tasks" that take place at a time. I've also been doing some research on IOCP and plan on adding some missing functionality to the IOCP reactor so I don't have to use the win32eventreactor anymore.

```python
from twisted.internet import defer


class Semaphore:
    """A semaphore for event driven systems."""

    def __init__(self, tokens):
        self.waiting = []
        self.tokens = tokens
        self.limit = tokens

    def acquire(self):
        """Attempt to acquire the token.

        @return Deferred which returns on token acquisition.
        """
        assert self.tokens >= 0
        d = defer.Deferred()
        if not self.tokens:
            self.waiting.append(d)
        else:
            self.tokens = self.tokens - 1
            d.callback(self)
        return d

    def release(self):
        """Release the token.

        Should be called by whoever did the acquire() when the shared
        resource is free.
        """
        assert self.tokens < self.limit
        self.tokens = self.tokens + 1
        if self.waiting:
            # someone is waiting to acquire token
            self.tokens = self.tokens - 1
            d = self.waiting.pop(0)
            d.callback(self)

    def _releaseAndReturn(self, r):
        self.release()
        return r

    def run(self, f, *args, **kwargs):
        """Acquire token, run function, release token.

        @return Deferred of function result.
        """
        d = self.acquire()
        d.addCallback(lambda r: defer.maybeDeferred(
            f, *args, **kwargs).addBoth(self._releaseAndReturn))
        return d
```

On 6/28/05, Toby Dickenson <tdickenson@devmail.geminidataloggers.co.uk> wrote:
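A minimal usage sketch of the Semaphore above (the doWork task and the limits here are hypothetical stand-ins): run() acquires a token, calls the function, and releases the token when the resulting Deferred fires, so at most that many tasks are ever in flight.

```python
from twisted.internet import defer, reactor

sem = Semaphore(10)  # hypothetical limit: at most 10 tasks in flight

def doWork(n):
    # hypothetical task: a Deferred that fires one second later
    d = defer.Deferred()
    reactor.callLater(1, d.callback, n)
    return d

# run() queues the call until a token is free; the addBoth in run()
# means the token is released even if doWork's Deferred fails.
ds = [sem.run(doWork, n) for n in range(100)]
defer.DeferredList(ds).addCallback(lambda _: reactor.stop())
reactor.run()
```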
On Tue, 28 Jun 2005 10:47:04 +0100, Toby Dickenson <tdickenson@devmail.geminidataloggers.co.uk> wrote:
Count the number of connections you have accepted. When you get up to 62 or 63 or so, stop accepting new ones. If ServerFactory.buildProtocol() returns None, Twisted immediately closes the accepted connection. If you do this (perhaps in conjunction with calling stopListening() on the port returned by listenXYZ()), you'll never overrun the 64 object limit. Jp
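A minimal sketch of that approach, assuming hypothetical names (CountingProtocol, LimitingFactory) and a limit of 62; the count is decremented whenever a connection closes, which Jp's description implies but does not spell out:

```python
from twisted.internet import protocol

class CountingProtocol(protocol.Protocol):
    """Tells its factory when a connection goes away (hypothetical name)."""

    def connectionLost(self, reason):
        self.factory.connectionCount -= 1

class LimitingFactory(protocol.ServerFactory):
    """Refuses connections past a fixed limit (hypothetical name)."""

    protocol = CountingProtocol  # substitute your real protocol class
    maxConnections = 62  # stay safely under the 64-object limit

    def __init__(self):
        self.connectionCount = 0

    def buildProtocol(self, addr):
        if self.connectionCount >= self.maxConnections:
            return None  # Twisted immediately closes the accepted connection
        self.connectionCount += 1
        return protocol.ServerFactory.buildProtocol(self, addr)
```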
On Tue, 28 Jun 2005 09:50:49 -0500, Justin Johnson <justinjohnson@gmail.com> wrote:
From my experience, the problem has to do with more than just the number of connections. I've reached the limit from spawning too many processes.
Right. You need to take into account everything that is happening in the process: client connections you create, processes you spawn, _and_ incoming connections you accept. The buildProtocol() approach above lets you limit one of those factors. A solution like the semaphore you posted (a version of which is in Twisted 2.0, btw, for those who aren't aware: twisted.internet.defer.DeferredSemaphore) would let you limit the others. Jp
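For reference, a minimal sketch using the stock twisted.internet.defer.DeferredSemaphore to throttle spawned processes; the command list and the limit of 8 are hypothetical:

```python
from twisted.internet import defer, reactor, utils

# Allow at most 8 child processes at a time, well under the 64-object cap.
sem = defer.DeferredSemaphore(8)

def runCommand(executable):
    # getProcessOutput returns a Deferred that fires with the process's stdout
    return utils.getProcessOutput(executable)

# run() waits for a token, spawns the process, and releases the token
# when the Deferred fires.
commands = ["/bin/hostname"] * 20  # hypothetical work list
ds = [sem.run(runCommand, cmd) for cmd in commands]
defer.DeferredList(ds).addCallback(lambda _: reactor.stop())
reactor.run()
```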
participants (3)

- Jp Calderone
- Justin Johnson
- Toby Dickenson