
Itamar Shtull-Trauring wrote:
Rather than doing queuing in the transport, do it in your DatagramProtocol code - instead of doing transport.write, add stuff to a queue that schedules writes fairly for different machines.
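
Just to check I'm reading you right - something along these lines, with transport.write hidden behind per-machine queues and a round-robin drain? (The names and structure below are mine, just a sketch, not anyone's existing code.)

    from twisted.internet import reactor
    from twisted.internet.protocol import DatagramProtocol

    class FairUDP(DatagramProtocol):
        def __init__(self):
            self.queues = {}       # address -> list of waiting datagrams
            self.addresses = []    # round-robin order of peers
            self.position = 0

        def send(self, data, address):
            # queue the datagram instead of calling self.transport.write directly
            if address not in self.queues:
                self.queues[address] = []
                self.addresses.append(address)
            self.queues[address].append(data)
            reactor.callLater(0, self._drain)

        def _drain(self):
            # one datagram per pass through the reactor, taken from the next
            # peer (round-robin) that has something waiting
            for _ in range(len(self.addresses)):
                address = self.addresses[self.position]
                self.position = (self.position + 1) % len(self.addresses)
                queue = self.queues[address]
                if queue:
                    self.transport.write(queue.pop(0), address)
                    return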
If I do this, I'll need:

    def xmit(self):
        while True:
            txqueue = self.txcurrent
            self.txcurrent = txqueue.next   # step to the next per-machine queue in the ring
            if txqueue:
                self.write(txqueue.pop(0))
                break
        reactor.callLater(0, self.xmit)

    def write(self, data, address):
        self.getqueue(address).append(data)

    def rcv(self):
        while True:
            rxqueue = self.rxcurrent
            self.rxcurrent = rxqueue.next
            if rxqueue:
                pdu = rxqueue.pop(0)
                pdu.callback()
                break

    def datagramReceived(self, data, addr):
        pdu, queue = self.parse(data, addr)
        queue.append(pdu)
        reactor.callLater(0, self.rcv)

I'm concerned about all those reactor.callLater calls. Since one of the main problems is the UDP socket queue overflowing, every time I xmit I need to get *out* of the protocol code ASAP and back into the select() loop; however, one of the problems with the reactors (problems for me, at any rate) is that they do pending calls and thread stuff before I/O, which IMHO is not quite the right way round. I'm also slightly concerned about the number of function calls involved in jumping in and out of the reactor that many times a second (several thousand, if I can get it to go as fast as my previous code), given how expensive function calls are under Python. It would certainly be quicker to implement this inside reactor.mainLoop.

Anyway, I'll give it a go, but I thought I saw something recently about the scalability of callLater; and I've certainly had problems with large numbers of callLaters (I used to use them for the timeouts before going for a simpler expiry scan) - maybe callLater will go faster in Python 2.4 with the C bisect.insort?

Thanks for the tips
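
P.S. For what it's worth, the expiry scan I mentioned is roughly the shape below - one periodic sweep over the outstanding PDUs instead of a callLater per timeout. The names (self.pending, pdu.deadline, pdu.timedOut) are made up for the sketch, not my actual code:

    import time

    from twisted.internet import reactor

    def scanExpiry(self):
        # one timer for all timeouts: walk the outstanding PDUs, expire any
        # whose deadline has passed, then reschedule the sweep
        now = time.time()
        for key in list(self.pending.keys()):
            pdu = self.pending[key]
            if pdu.deadline <= now:
                del self.pending[key]
                pdu.timedOut()
        reactor.callLater(1.0, self.scanExpiry)   # sweep interval is arbitrary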