[Twisted-Python] robust listenUDP with N clients?

hi folks,

I am new to twisted and new to server programming. Nonetheless, I would like to build a robust client-server looping mechanism in which a server listens for input from N clients, say 10, and reacts immediately when it receives data from them. I have a sample (see below) that appears to run well, but I am not sure whether it will scale under real network load and larger N. (I am parsing the input data received from the clients and using it to calculate a Fibonacci series, just for testing. The reactor can be stopped with keyboard input; I am on a win32 platform.) If you see any problems with this approach, please let me know!

Thanks,
marc

CODE:

import os, sys, time, msvcrt
from twisted.internet import reactor, defer, task
from twisted.internet.protocol import DatagramProtocol

ports = [1007, 2007, 3007, 4007, 5007, 6007, 7007, 8007, 9007]

#----------------------------
def check_keyboard(result):
    try:
        if msvcrt.kbhit():
            result = "hit"
    except:
        pass
    return result

#---------------------------
def mstop(result):
    if result == "hit":
        print "got hit - closing task and reactor"
        mtask.stop()
        reactor.stop()

#---------------------------
def print_time(result):
    print time.asctime(time.localtime())

#---------------------------
def fibonnaci(limit):
    first, new, second = 0, 0, 1
    for i in xrange(limit - 1):
        new = first + second
        first = second
        second = new
    return new

#---------------------------
def mrun():
    d = defer.Deferred()
    d.addCallback(check_keyboard)
    d.addCallback(mstop)
    d.addCallback(print_time)
    d.callback("blabla")

#---------------------------
class server(DatagramProtocol):
    def datagramReceived(self, data, (host, port)):
        print "server received: %r from %s:%d" % (data, host, port)
        self.transport.write(data, (host, port))
        limit = int(data.split(": ")[1])
        num = fibonnaci(limit)
        print num

#---------------------------
if __name__ == '__main__':
    print "listening on 9 ports - hit key to quit"
    mrun()
    mtask = task.LoopingCall(mrun)
    mtask.start(1.0)
    for i in range(len(ports)):
        reactor.listenUDP(ports[i], server())
    reactor.run()
#--------------------------
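A minimal test client for this server could look roughly like the following (a hypothetical sketch, not the client I actually use; the "limit: 10" payload is only an assumption based on the data.split(": ")[1] parsing above, and 1007 is just the first of the ports listed):

from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

class TestClient(DatagramProtocol):
    def startProtocol(self):
        # send one request to the first server port; the "limit: 10"
        # format is just an assumption that matches data.split(": ")[1]
        self.transport.connect("127.0.0.1", 1007)
        self.transport.write("limit: 10")

    def datagramReceived(self, data, (host, port)):
        print "client got echo: %r from %s:%d" % (data, host, port)

reactor.listenUDP(0, TestClient())
reactor.run()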

On Mon, 04 Dec 2006 10:28:57 -0500, marc bohlen <marcbohlen@acm.org> wrote:
You probably need to be more specific to get many useful responses. What do you expect N to go to? Why do you use multiple ports? Why are you using UDP? What do you mean by "robust"? What does the Fibonacci calculation have to do with the application? Is the keyboard input handling code relevant or just part of the example? Is win32 your development platform or your deployment platform or both?

Jean-Paul

Jean-Paul Calderone wrote:
Hi Jean-Paul,

I would want N no larger than 30. I am testing only on localhost and have N ports instead of N computers for now.

Fibonacci calculation: just a CPU-cycle-eating app for testing; it will change.

Keyboard handling is important. Win32 is the development and deployment platform.

UDP choice: I would like the faster of the two protocols (UDP vs. TCP), although TCP has congestion control. Is TCP the better choice here?

Robust: will it work even under heavy network load?

marc

On Mon, 04 Dec 2006 12:05:29 -0500, marc bohlen <marcbohlen@acm.org> wrote:
> I would want N no larger than 30.
30 is quite small. There should be no performance issues related to the network layer.
> I am testing only on localhost and have N ports instead of N computers for now.
Okay. So ultimately it will be 1 port communicating with N computers, instead of the current N ports all communicating with 1 computer?
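In that case a single listenUDP call is enough; something like this untested sketch (port 8007 and the echo behaviour are just placeholders) handles datagrams from any number of clients on one socket, keyed by the (host, port) they arrive from:

from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

class OnePortServer(DatagramProtocol):
    # every datagram arrives tagged with the sender's (host, port),
    # so N clients can share this one socket
    def datagramReceived(self, data, (host, port)):
        print "received %r from %s:%d" % (data, host, port)
        self.transport.write(data, (host, port))

reactor.listenUDP(8007, OnePortServer())
reactor.run()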
> Fibonacci calculation: just a CPU-cycle-eating app for testing; it will change.
I don't think your original example realistically represented CPU load. It seems more likely that you will have some work to do in response to each request, which corresponds to datagramReceived.
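If that work turns out to be CPU-heavy, one option (just an untested sketch, with expensive_work standing in for whatever the real computation is) is to push it into the reactor's thread pool with deferToThread so datagramReceived returns quickly, though pure-Python CPU work will still contend for the GIL:

from twisted.internet import reactor, threads
from twisted.internet.protocol import DatagramProtocol

def expensive_work(data):
    # placeholder for the real per-request computation
    return data.upper()

class WorkServer(DatagramProtocol):
    def datagramReceived(self, data, (host, port)):
        # run the heavy part in a thread-pool thread and write the
        # result back to the sender when it finishes
        d = threads.deferToThread(expensive_work, data)
        d.addCallback(self.transport.write, (host, port))

reactor.listenUDP(8007, WorkServer())
reactor.run()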
> Keyboard handling is important. Win32 is the development and deployment platform.
Win32 is pretty crummy. I have no idea how well anything will work on it. I generally assume that it won't, at all.
> UDP choice: I would like the faster of the two protocols (UDP vs. TCP), although TCP has congestion control. Is TCP the better choice here?
If that's the primary metric, TCP is a better choice.
> Robust: will it work even under heavy network load?
If _that's_ the primary metric, TCP is still a better choice. :)

Jean-Paul
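P.S. For comparison, the TCP shape of the same idea would be roughly this untested sketch (port 8007 and the line-echo behaviour are placeholders): each of the N clients gets its own connection, and TCP supplies retransmission and congestion control.

from twisted.internet import reactor
from twisted.internet.protocol import ServerFactory
from twisted.protocols.basic import LineReceiver

class RequestHandler(LineReceiver):
    # one instance per connected client; lineReceived is the TCP
    # analogue of datagramReceived for newline-terminated requests
    def lineReceived(self, line):
        self.sendLine(line)

factory = ServerFactory()
factory.protocol = RequestHandler
reactor.listenTCP(8007, factory)
reactor.run()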

participants (2)
- Jean-Paul Calderone
- marc bohlen