
Have a look at htb.py; you should be able to rate-limit the UDP throughput rather easily. For me, doing web stuff, it was (in its entirety) like this:

    def RateBucket(parent=None):
        bucket = htb.Bucket(parent)
        bucket.rate = 4000
        bucket.maxburst = 12000
        return bucket

    bucketFilter = htb.FilterByHost()
    bucketFilter.bucketFactory = RateBucket

    application = service.Application("simple")
    site_factory = server.Site(Directory())
    internet.TCPServer(SERVER_PORT, site_factory).setServiceParent(application)
    site_factory.protocol = htb.ShapedProtocolFactory(
        site_factory.protocol, bucketFilter)

Hope that helps.

Stephen Thorne

On Mon, 2004-02-09 at 20:51, Mike C. Fletcher wrote:
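(For reference, the rate and maxburst attributes on htb.Bucket are token-bucket parameters: rate is the sustained bytes-per-second and maxburst the burst depth in bytes. A stand-alone sketch of that idea, with made-up names for illustration rather than Twisted's actual implementation:

    import time

    class TokenBucket:
        """Token bucket: tokens refill at `rate` per second, capped at `maxburst`."""
        def __init__(self, rate=4000, maxburst=12000):
            self.rate = rate            # sustained bytes/second
            self.maxburst = maxburst    # burst capacity in bytes
            self.tokens = maxburst      # start full, so an initial burst is allowed
            self.last = time.monotonic()

        def consume(self, nbytes):
            """Return True if nbytes may be sent now, debiting the bucket."""
            now = time.monotonic()
            self.tokens = min(self.maxburst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if nbytes <= self.tokens:
                self.tokens -= nbytes
                return True
            return False

A shaped transport would hold back writes whenever consume() returns False and retry once enough tokens have refilled.)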
I've written a UDP-based protocol adaptor (TwistedSNMP) where one of the key requirements is the ability to scan thousands of SNMP Agents "simultaneously" (i.e. the requests should be asynchronously sent and retired as the Agents respond).
http://members.rogers.com/mcfletch/programming/index.htm#TwistedSNMP
Writing a simple asynchronous loop myself (poll on a simple socket, send a message from the queue when writable, read one into the other queue when readable) allowed for doing a few thousand queries simultaneously, with only a few dozen dropped messages. However, with the Twisted equivalent (UDPTransport with my simple protocol object), I was seeing huge drop rates. Gathering that Twisted isn't queueing up the UDP requests, I wrote a (byzantine) query-throttling mechanism with Twisted deferreds.
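(The sort of hand-rolled poll loop described above might be sketched with the stdlib like this; the function name and queue shapes here are assumptions for illustration, not TwistedSNMP's actual code:

    import select
    from collections import deque

    def pump_once(sock, outgoing, incoming):
        """One pass of the loop: if the socket is writable and datagrams are
        queued, send one; if it is readable, pull one datagram into the
        incoming queue for later processing."""
        readable, writable, _ = select.select(
            [sock], [sock] if outgoing else [], [], 0.1)
        if writable and outgoing:
            data, addr = outgoing.popleft()
            sock.sendto(data, addr)
        if readable:
            data, addr = sock.recvfrom(65535)
            incoming.append((data, addr))

Run repeatedly, this drains the outgoing deque at whatever pace the socket accepts and buffers replies instead of dropping them.)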
Problem is, it's a byzantine, fragile (and *slow*) solution to what would *seem* to be one of the most common requirements in networked development. Worse yet, because I am seeing such high drop rates, I wind up having to batch in very small groups, serially (instead of in parallel), so the primary purpose of the system (fast querying of thousands of Agents) is lost. (Instead of taking 1 or 2 minutes to query 800 or so Agents, it will take on the order of 10 minutes.)
So, the question: is there a simple way to turn on a buffered mode in UDP transports so that they can deal with queueing up a few thousand messages to send, sending them, then having a few thousand computers send a reply (within a few seconds of one another)? Or is Twisted doing this queueing via some mechanism I haven't discovered yet? Even if I do find a decent queueing mechanism, I'm still left with the problem that timeouts and the like are going to wind up being measured from queueing time, rather than sending time... not an issue if everything gets sent in a half-second or so, but a real problem if it takes 8 or 9 seconds just to send the original messages out.
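(One possible shape for the queueing-time-vs-sending-time problem, just a sketch rather than an existing Twisted facility: stamp each datagram at the moment it actually leaves the queue, and measure the timeout from that stamp:

    import time
    from collections import deque

    class SendQueue:
        """Queue outgoing datagrams; stamp each with its *send* time so the
        per-request timeout starts when the datagram goes out, not when it
        was enqueued."""
        def __init__(self, timeout=5.0):
            self.timeout = timeout
            self.pending = deque()   # (data, addr) waiting to be sent
            self.in_flight = {}      # addr -> send timestamp

        def enqueue(self, data, addr):
            self.pending.append((data, addr))

        def send_some(self, sendto, n=1):
            """Send up to n queued datagrams via sendto(data, addr)."""
            while self.pending and n > 0:
                data, addr = self.pending.popleft()
                sendto(data, addr)
                self.in_flight[addr] = time.monotonic()  # clock starts here
                n -= 1

        def timed_out(self):
            """Addresses whose reply window expired, measured from send time."""
            now = time.monotonic()
            return [a for a, t in self.in_flight.items()
                    if now - t > self.timeout]

That way an 8- or 9-second drain of the queue doesn't eat into any individual request's timeout.)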
Looking at the udp.Port class, I'm not seeing anything providing a queue; it seems as though there's a non-blocking write or read, but nothing to handle overflows of sends or receives, AFAICT. It does look as though a protocol could do some queueing of incoming datagrams in its datagramReceived... I'm just not sure how that would work.
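(Queueing in datagramReceived, as speculated above, might look roughly like this; a stand-alone sketch where the drain method is hypothetical, and in Twisted the class would subclass protocol.DatagramProtocol:

    from collections import deque

    class QueueingProtocol:
        """Buffer incoming datagrams instead of processing them inline, so a
        burst of replies isn't lost while earlier ones are being handled."""
        def __init__(self, maxlen=10000):
            # Bounded queue: oldest datagrams are dropped on overflow.
            self.incoming = deque(maxlen=maxlen)

        def datagramReceived(self, data, addr):
            # Do as little as possible here; just queue for later processing.
            self.incoming.append((data, addr))

        def drain(self, handle, limit=100):
            """Process up to `limit` queued datagrams with handle(data, addr);
            return how many were processed."""
            n = 0
            while self.incoming and n < limit:
                handle(*self.incoming.popleft())
                n += 1
            return n

drain() would then be called periodically (e.g. from reactor.callLater) so receive processing never starves the socket read.)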
Thoughts appreciated, Mike
_______________________________________ Mike C. Fletcher Designer, VR Plumber, Coder http://members.rogers.com/mcfletch/
_______________________________________________ Twisted-Python mailing list Twisted-Python@twistedmatrix.com http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python