On 06/20/2011 10:04 AM, Orestis Markou wrote:
> I have a question about "best practices" for adding callbacks while making
> sure you don't hog the reactor. So you pass a Deferred to a client, who
> attaches a chain of callbacks

Who is "you" and who is the "client"?

> that might do some CPU-intensive work. How should one guard against that?
> The obvious solution to me, for the server part, would be to do
AIUI, callbacks/errbacks (or indeed any code run by Twisted) should not block, which means that if they do any significant amount of work they should use deferToThread or a task.Cooperator to break the work into chunks. You can return a Deferred from a callback to pause processing of the chain, so this is easy to implement:

    from twisted.internet.threads import deferToThread

    def my_callback(data):
        # run the CPU-heavy work in the reactor's thread pool
        d = deferToThread(factor_some_prime, data['number'])
        d.addCallback(lambda x: 'prime factors are ' + repr(x))
        return d  # the outer chain pauses until this Deferred fires
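To make the chunking idea concrete, here's a minimal sketch of the generator style that twisted.internet.task.cooperate() consumes. factor_in_chunks is a made-up example task, not Twisted API: it does a bounded slice of work between yields, and each yield is a point where the reactor could service other events. The plain for-loop stands in for the Cooperator, which would instead run one step per reactor iteration.

```python
def factor_in_chunks(n, out):
    """Trial-division factoring, yielding between candidate divisors."""
    candidate = 2
    while candidate * candidate <= n:
        while n % candidate == 0:
            out.append(candidate)
            n //= candidate
        candidate += 1
        yield  # chunk boundary: other reactor events may run here
    if n > 1:
        out.append(n)  # whatever is left is itself prime

factors = []
for _ in factor_in_chunks(360, factors):
    pass  # under Twisted: task.cooperate(factor_in_chunks(360, factors))
print(factors)  # [2, 2, 2, 3, 3, 5]
```

The same generator works unchanged whether you drive it with a loop, the Cooperator, or task.coiterate; only the driver decides how often a chunk runs.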
> reactor.callLater(0, d.callback, arg)
That can help in some cases. Specifically, if you're receiving datagrams, you might want to service the read() loop as much as possible before packets start to get dropped. But if d.callback is going to do a lot of work, that doesn't solve the problem, it just delays it. Callbacks and errbacks should not do a lot of work.
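A toy illustration of that point, with a hand-rolled queue standing in for the reactor (all of the names here are hypothetical): pushing one big callable onto the queue, as callLater(0, ...) effectively does, still blocks everything queued behind it, whereas a job that reschedules itself chunk by chunk lets other events interleave.

```python
from collections import deque

queue = deque()  # toy reactor: callables run in FIFO order
log = []

def run():
    while queue:
        queue.popleft()()

def read_packet():
    log.append('packet')

def big_job_all_at_once():
    for i in range(3):
        log.append('work-%d' % i)

def chunked_job(i=0):
    log.append('work-%d' % i)
    if i < 2:
        # reschedule the remainder, like reactor.callLater(0, ...)
        queue.append(lambda: chunked_job(i + 1))

# Deferring the whole job merely moves the block later in time:
queue.extend([big_job_all_at_once, read_packet])
run()
unchunked_order = list(log)  # ['work-0', 'work-1', 'work-2', 'packet']

log.clear()
queue.extend([lambda: chunked_job(0), read_packet])
run()
chunked_order = list(log)    # ['work-0', 'packet', 'work-1', 'work-2']
```

In the unchunked run the packet waits behind all three units of work; in the chunked run it is serviced after the first unit, which is the whole argument for breaking work up rather than merely deferring it.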
> What about the client part? What would be the best way to have a
I don't really understand what you mean by the client and server parts. A Deferred is just a Deferred; they don't even have to be used in a network context.