[Twisted-Python] Sequential use of asynchronous calls

Hi,

Sometimes I want to use several asynchronous calls in a fixed sequence. For example, a web application might:
- authenticate the user
- fetch info from a database
- present the result

Implementing this using Deferreds and separate callback+errback functions has the disadvantage that the sequence itself is no longer easy to recognise, as it gets spread out over multiple functions.

So I got creative with the new generator features of Python 2.5 and came up with a decorator named "sequential", which can be applied to generator functions. It consumes Deferreds that are yielded by the generator and sends back the result when it becomes available, or raises an Exception in the generator if the deferred action fails.

The decorated function returns a Deferred itself, which is fired upon completion of the sequence. In particular, this allows nesting sequences inside sequences.

This is an example program using it; it is an elaborated version of the first example from the Deferred Reference:

===
from twisted.internet import defer, reactor
from twisted.python import log

from sequential import sequential

def getDummyData(x):
    d = defer.Deferred()
    if x < 0:
        reactor.callLater(1, d.errback, ValueError('negative value: %d' % x))
    else:
        reactor.callLater(1, d.callback, x * 3)
    return d

@sequential
def work():
    print (yield getDummyData(3))
    print (yield getDummyData(4))
    print (yield 'immediate')
    print (yield getDummyData(6))
    try:
        print (yield getDummyData(-7))
    except ValueError, e:
        print 'failed:', e

@sequential
def main(message):
    print message, 'once...'
    yield work()
    print message, 'twice...'
    yield work()

def done(result):
    reactor.stop()

def failed(fail):
    log.err(fail)
    reactor.stop()

d = main('going')
d.addCallback(done)
d.addErrback(failed)

reactor.run()
===

And here is the implementation of the "sequential" module:

===
from twisted.internet import defer
from twisted.python import failure

from functools import wraps
from compiler.consts import CO_GENERATOR

class _SequentialCaller(object):
    '''Repeatedly reads a Deferred from a generator and feeds it back
    the result when it becomes available.
    '''

    def __init__(self, gen):
        self.gen = gen
        self.deferred = defer.Deferred()

    def start(self):
        self.next(None)
        return self.deferred

    def next(self, result):
        while True:
            try:
                if isinstance(result, failure.Failure):
                    traceback = result.getTracebackObject() \
                        if hasattr(result, 'getTracebackObject') else None
                    d = self.gen.throw(
                        result.type, result.getErrorMessage(), traceback
                        )
                else:
                    d = self.gen.send(result)
            except StopIteration:
                self.deferred.callback(None)
                return
            except StandardError:
                self.deferred.errback(failure.Failure())
                return
            if isinstance(d, defer.Deferred):
                d.addCallback(lambda result: self.next(result))
                d.addErrback(lambda fail: self.next(fail))
                return
            else:
                # Allow non-deferred values as well: for some Twisted calls,
                # you don't know whether the result will be deferred or not.
                result = d

def sequential(f):
    if not (f.func_code.co_flags & CO_GENERATOR):
        raise TypeError('function "%s" is not a generator' % f.__name__)
    @wraps(f)
    def wrapper(*args, **kvArgs):
        return _SequentialCaller(f(*args, **kvArgs)).start()
    return wrapper
===

I'd like some feedback on this:
- would you consider this useful?
- is the interface right or can it be improved?
- is the implementation correct? (the example scenario doesn't test the error path extensively, so there might be problems there)
- is the use of Failure.getTracebackObject correct? (the version of Twisted installed on my machine does not have it yet, I only read about it in the sources on the API documentation site)
- the "compiler.consts" module is not documented in the Python Library Reference; does that mean it should not be used, or did they forget to document it?
- anything else you'd like to say about it

Is there already something like this in Twisted or one of the toolkits built on Twisted? I took a quick look at the "flow" modules, but that seems like a more generic and flexible, but also more complex, approach.

If it would be a useful addition to Twisted or a Twisted-based toolkit, I'm willing to improve the documentation and write test cases.

Bye,
Maarten

http://foss.eepatents.com/api/twisted-goodies/taskqueue.base.TaskQueue.html

On Saturday 26 May 2007, Ed Suominen wrote:
http://foss.eepatents.com/api/twisted-goodies/taskqueue.base.TaskQueue.html
That looks interesting, but I don't think it has the same purpose.

If I understand it correctly, TaskQueue dispatches synchronous calls to a pool of workers. The thing I posted runs a series of asynchronous calls in succession. In other words, TaskQueue handles callables, while "@sequential" handles Deferreds.

Also, TaskQueue seems to be designed for running a number of independent tasks, while "@sequential" is designed for situations in which call n+1 depends on the result of call n. The added value of "@sequential" is that you don't have to deal with Deferreds yourself, but can use the return value of the yield instead (or the exception thrown by it).

The original example did not demonstrate using the result of a previous call in the next call. Here is a new implementation of the "work" method which does:

===
@sequential
def work():
    value = 9
    try:
        while True:
            print value
            value = (yield getDummyData(value)) - 20
    except ValueError, e:
        print 'failed:', e
===

Bye,
Maarten

inlineCallbacks are similar to what you are doing. very cool stuff.

http://twistedmatrix.com/pipermail/twisted-python/2007-February/014869.html
http://mesozoic.geecs.org/

cheers,
gabor

On 05:20 pm, gabor.bernath@gmail.com wrote:
inlineCallbacks are similar to what you are doing. very cool stuff.
Unless I misunderstand, inlineCallbacks isn't just similar, it's exactly the same :). Is there any difference? This highlights to me that we need someone to write about more of the hidden corners of Twisted; there are a lot of features included that are not well-known, and it's not clear that anyone would look for them (or if they did, where).

On Saturday 26 May 2007, glyph@divmod.com wrote:
On 05:20 pm, gabor.bernath@gmail.com wrote:
inlineCallbacks are similar to what you are doing. very cool stuff.
Unless I misunderstand, inlineCallbacks isn't just similar, it's exactly the same :). Is there any difference?
inlineCallbacks is better, because it allows you to pass a result to the callback of the "outer" Deferred using "returnValue". My implementation just passed None. Other than that, it seems to be the same thing indeed.
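For example, a minimal sketch (getDummyData is the same kind of dummy as in my first posting):

===
from twisted.internet import defer, reactor

def getDummyData(x):
    # Dummy asynchronous call, as in the original example.
    d = defer.Deferred()
    reactor.callLater(1, d.callback, x * 3)
    return d

@defer.inlineCallbacks
def work():
    a = yield getDummyData(3)
    b = yield getDummyData(4)
    # returnValue fires the callback of the Deferred returned by work()
    # with a + b; my @sequential could only ever pass None there.
    defer.returnValue(a + b)
===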
This highlights to me that we need someone to write about more of the hidden corners of Twisted; there are a lot of features included that are not well-known, and it's not clear that anyone would look for them (or if they did, where).
In this particular case, it may be useful to add a section to the Deferred Reference. Bye, Maarten

Hi,

Reading through the implementation of inlineCallbacks, I found some more differences, but they are small.

In my implementation the decorator has a check whether the thing it is decorating is really a generator. Without this check, you will get an exception only when you start using the decorated function. I think it is usually better to detect errors at an early stage. However, the check uses compiler.consts.CO_GENERATOR, which is not documented in the Python Library Reference, so using it might be risky.

My implementation only catches StandardError being raised by the generator. That is a bad idea: it should certainly catch custom Exceptions as well. It should probably catch everything, like inlineCallbacks does, so the asynchronous equivalents of "finally" have a chance to clean things up on system exit.

inlineCallbacks has protection against a Deferred being returned which has already fired. My implementation treats that case the same as a Deferred that will be handled by the reactor and will therefore recurse.

The way the "throw" call is done is slightly different:

http://twistedmatrix.com/trac/browser/trunk/twisted/internet/defer.py#L724

    result = g.throw(result.type, result.value, result.tb)

My implementation:

    traceback = result.getTracebackObject() \
        if hasattr(result, 'getTracebackObject') else None
    d = self.gen.throw(result.type, result.getErrorMessage(), traceback)

The "value" part is different: result.getErrorMessage() returns the error message string, while result.value contains the exception object; however, "throw" is designed to act like "raise" and will accept both uses.

Also the "traceback" part is different: if the "tb" field is None, getTracebackObject will construct a traceback object if possible. I don't know if that extra feature will ever be useful in this particular use case.

To summarize: inlineCallbacks seems to be better than my implementation, except perhaps:
- it might be useful to add a check whether the function to be decorated is really a generator
- result.getTracebackObject() may or may not be better than result.tb

Bye,
Maarten

In my implementation the decorator has a check whether the thing it is decorating is really a generator. Without this check, you will get an exception only when you start using the decorated function. I think it is usually better to detect errors at an early stage. However, the check uses compiler.consts.CO_GENERATOR, which is not documented in the Python Library Reference, so using it might be risky.
If the argument to the decorator is not a generator (i.e., it's a regular function), then you should just treat it as a generator that immediately returned. For example,

@sequential
def foo():
    yield retval

should just behave the same as

@sequential
def foo():
    return retval

I can't think of any case where passing a non-generator into @sequential is intended to mean anything other than this.

--
<http://artins.org/ben>

"And when I have understanding of computers, I shall be the supreme being!" -- Evil, "Time Bandits"

On Saturday 26 May 2007, Ben Artin wrote:
In my implementation the decorator has a check whether the thing it is decorating is really a generator. Without this check, you will get an exception only when you start using the decorated function. I think it is usually better to detect errors at an early stage. However, the check uses compiler.consts.CO_GENERATOR, which is not documented in the Python Library Reference, so using it might be risky.
If the argument to the decorator is not a generator (i.e., it's a regular function), then you should just treat it as a generator that immediately returned.
Since it is used as a decorator, the @sequential line will be written by the same person who wrote the function itself. If the author of the function knows it is not a generator, why would he apply @sequential (or @inlineCallbacks) to it?
For example,
@sequential
def foo():
    yield retval
should just behave the same as
@sequential
def foo():
    return retval
That behaviour would be: return a Deferred on which:
- callback is called immediately if retval is not a Deferred
- errback is called immediately if the function raises an exception
- callback or errback is called when the retval Deferred fires

And the result passed to the callback would be None, not retval.

Applying maybeDeferred to the return value sounds more useful, since that will do the Deferred part, but will also preserve the result.

By the way, without "@sequential" the closest non-generator equivalent of

def foo():
    yield retval

would be

def foo():
    return (retval, )

In other words, it would return a sequence containing retval rather than retval itself.
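To make the maybeDeferred idea concrete, this is the kind of thing I would expect if plain functions are to be accepted at all (a sketch; tolerantSequential is just an illustrative name, and "sequential" is the decorator from my first posting):

===
from functools import wraps
from compiler.consts import CO_GENERATOR

from twisted.internet import defer

from sequential import sequential

def tolerantSequential(f):
    # Generators get the normal @sequential treatment; plain functions are
    # wrapped with maybeDeferred so their return value (or exception) still
    # ends up in a Deferred instead of being discarded.
    if f.func_code.co_flags & CO_GENERATOR:
        return sequential(f)
    @wraps(f)
    def wrapper(*args, **kwargs):
        return defer.maybeDeferred(f, *args, **kwargs)
    return wrapper
===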
I can't think of any case where passing a non-generator into @sequential is intended to mean anything other than this.
But should it be given any interpretation at all? Unless there is a real use case for it, I think it's better to consider it an error. Bye, Maarten

Since it is used as a decorator, the @sequential line will be written by the same person who wrote the function itself. If the author of the function knows it is not a generator, why would he apply @sequential (or @inlineCallbacks) to it?
One reason is that if the function returns None, then if you require it to be a generator, you have to add a gratuitous "yield None" just to shut up the piece of code that requires a generator. For example:

@sequential
def foo():
    blah = blah

would cause the error, but

@sequential
def foo():
    blah = blah
    yield None

would not. In my opinion, that "yield None" is just noise, and doesn't make the code any better, faster, safer, or easier to read, so given the option I would make @sequential cope even when yield None is not there.

The other main case where I've run into this is when you have a protocol that expects some method to behave according to @sequential, but a particular implementation of that protocol doesn't need to do more than immediately return. For example:

class TakesALongTime():
    @sequential
    def doSomething(self):
        yield doPart1()
        yield doPart2()
        yield someResult

class ReturnsImmediately():
    @sequential
    def doSomething(self):
        return someResult

Now, if you use "return" in the first class, you get a syntax error right away, because a non-empty return in a generator is a syntax error. On the other hand, if you use "return" in the second class, you don't get a syntax error, but your implementation of @sequential would produce a runtime error. I understand why it's easy to produce that runtime error, but I don't see any benefit to it -- it doesn't really save the users of @sequential any effort.
But should it be given any interpretation at all? Unless there is a real use case for it, I think it's better to consider it an error.
I would agree with you if I thought there was any possibility that interpreting a plain function as a generator with a single yield could yield unintended results, but I have yet to find such a case. Ben -- <http://artins.org/ben> "The last great thing written in C was Franz Schubert's Symphony number 9."

On Sunday 27 May 2007, Ben Artin wrote:
Since it is used as a decorator, the @sequential line will be written by the same person who wrote the function itself. If the author of the function knows it is not a generator, why would he apply @sequential (or @inlineCallbacks) to it?
One reason is that if the function returns None, then if you require it to be a generator, you have to add a gratuitous "yield None" just to shut up the piece of code that requires a generator.
Maybe it's more clear to call it @inlineCallbacks, since that is the name under which it is available; @sequential was just me re-inventing it. I agree that adding "yield None" is an ugly fix, but why not just remove the @inlineCallbacks decoration from such a function instead? I still don't understand what the point is of decorating a non-generator function with @inlineCallbacks.
The other main case where I've run into this is when you have a protocol that expects some method to behave according to @sequential, but a particular implementation of that protocol doesn't need to do more than immediately return.
The external protocol of @inlineCallbacks is just Deferred. There are a lot of ways to convert something to a Deferred already, such as maybeDeferred, execute, succeed, fail (all in the "defer" module). Using @inlineCallbacks for that purpose seems a bit overcomplicated. There is never a requirement for a function to be decorated with @inlineCallbacks, all the outside world sees is a Deferred. @inlineCallbacks is just an implementation technique to avoid having to split sequential asynchronous code over multiple functions.
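For instance, all of these produce a Deferred without any generator machinery (the values are just placeholders):

===
from twisted.internet import defer

d1 = defer.succeed(42)                # wrap an already-known result
d2 = defer.fail(ValueError('oops'))   # wrap an already-known error
d3 = defer.execute(len, 'abc')        # call now, trap exceptions into the Deferred
d4 = defer.maybeDeferred(len, 'abc')  # like execute, but passes a returned Deferred through
===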
For example:
class TakesALongTime():
    @sequential
    def doSomething(self):
        yield doPart1()
        yield doPart2()
        yield someResult
This doesn't do what you might expect...

Let's assume doPart1 and doPart2 return Deferreds, named d1 and d2. When doSomething is called, inlineCallbacks will run doPart1, register itself on d1 and return its own Deferred (dr) to the caller of doSomething. When the reactor does the callback on d1, doPart2 will be called and inlineCallbacks will register itself on d2. Finally, when the callback of d2 is called, someResult will be discarded by inlineCallbacks and the callback of dr will be called with result None.

To do what you probably intended, the code would look like this:

class TakesALongTime():
    @defer.inlineCallbacks
    def doSomething(self):
        yield doPart1()
        yield doPart2()
        defer.returnValue(someResult)

Also note that neither of these examples blocks. Instead, the decorated method will typically return a Deferred (dr) very soon. However, it could take a long time before dr's callback (or errback) is called. During that time, the caller of doSomething will probably register itself on dr and end, to pass control to the reactor.
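For completeness, the caller side would typically look something like this (a sketch; TakesALongTime is the class above, and doPart1/doPart2 are still your placeholders):

===
from twisted.python import log

# doSomething() returns dr almost immediately; the value given to
# returnValue arrives later through dr's callback chain.
d = TakesALongTime().doSomething()
d.addCallback(lambda result: log.msg('doSomething returned %r' % (result,)))
d.addErrback(log.err)
===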
Now, if you use "return" in the first class, you get a syntax error right away, because a non-empty return in a generator is a syntax error. On the other hand, if you use "return" in the second class, you don't get a syntax error, but your implementation of @sequential would produce a runtime error. I understand why it's easy to produce that runtime error, but I don't see any benefit to it -- it doesn't really save the users of @sequential any effort.
Well, if you consider using @inlineCallbacks on a non-generator an error, checking for this error as early as possible does save the user effort: you will see the error as soon as your module is imported, instead of having to trigger the function in question. If you consider using @inlineCallbacks on a non-generator a valid scenario, there should not be a runtime error issued, neither at "decoration time" nor at "invocation time". Note that the current implementation of @inlineCallbacks will raise an error at "invocation time", specifically when "g.send" or "g.throw" is called. I'm proposing to change that so an error will be raised by the decorator instead ("decoration time", typically when the module is imported).
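In code, the change I am proposing is small (a sketch; checkedInlineCallbacks is just an illustrative name, not something in Twisted):

===
from compiler.consts import CO_GENERATOR

from twisted.internet import defer

def checkedInlineCallbacks(f):
    # Refuse non-generators at decoration time (usually import time),
    # instead of failing later when g.send()/g.throw() is called.
    if not (f.func_code.co_flags & CO_GENERATOR):
        raise TypeError('function "%s" is not a generator' % f.__name__)
    return defer.inlineCallbacks(f)
===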
But should it be given any interpretation at all? Unless there is a real use case for it, I think it's better to consider it an error.
I would agree with you if I thought there was any possibility that interpreting a plain function as a generator with a single yield could yield unintended results, but I have yet to find such a case.
Should something be given an interpretation because there are no unintended results, or because there are intended results? Bye, Maarten

Maybe it's more clear to call it @inlineCallbacks, since that is the name under which it is available; @sequential was just me re-inventing it.
That's OK. I reinvented it and called it something else myself :-)
I agree that adding "yield None" is an ugly fix, but why not just remove the @inlineCallbacks decoration from such a function instead? I still don't understand what the point is of decorating a non-generator function with @inlineCallbacks.
There is no theoretical reason. Of course I can go and remove the decorator, but that is not how my brain works when I write the code, and I suspect I am not alone in this. This is how I usually write my code:

.oO(I am writing a function)
.oO(The function's interface is that it returns a deferred, so write @inlineCallbacks)

@inlineCallbacks
def functionName(args):

.oO(Now, what's the body going to be like?)

type type type type

twenty lines later:

.oO(Done!)

then I run the code, and find out that I have to either add a yield None or remove the decorator because, as it happens, there's no yield or return in those twenty lines of code.

Now, the reason that my brain works this way is that I consider the fact that the function returns a deferred a part of its interface, and if I am using Python 2.5 (which I almost exclusively am), then @inlineCallbacks is by far the cleanest way to write Deferred-using code. It turns my control flow right-side out and it allows me to stop focusing on how I pass control around using deferreds and start focusing on the problem I am trying to solve.

To that end, requiring me to either add noise to my function's implementation (in the form of "yield None") or to tweak the function's header because of a happenstance of its implementation is just distracting me away from writing my code and pulling my attention to something irrelevant.

Again, my argument is that the distinction between:

@inlineCallbacks
def functionName(args):
    ...some code...
    yield None

and

@inlineCallbacks
def functionName(args):
    ...some code...
    ... without a return statement...

is completely irrelevant and serves no purpose except to distract me from my code. If I could find *any* case in which treating the latter case exactly the same as the former case is not what was intended, then I might be willing to concede that there is value in forcing the user to clarify what they meant; however I do not see that case.
For example:
class TakesALongTime():
    @sequential
    def doSomething(self):
        yield doPart1()
        yield doPart2()
        yield someResult
This doesn't do what you might expect...
Let's assume doPart1 and doPart2 return Deferreds, named d1 and d2. When doSomething is called, inlineCallbacks will run doPart1, register itself on d1 and return its own Deferred (dr) to the caller of doSomething. When the reactor does the callback on d1, doPart2 will be called and inlineCallbacks will register itself on d2. Finally, when the callback of d2 is called, someResult will be discarded by inlineCallbacks and the callback of dr will be called with result None.
To do what you probably intended, the code would look like this:
class TakesALongTime():
    @defer.inlineCallbacks
    def doSomething(self):
        yield doPart1()
        yield doPart2()
        defer.returnValue(someResult)
Actually this in itself is IMO a bug in inlineCallbacks. The only reason why inlineCallbacks is implemented this way is that prior to Python 2.5 it was impossible to implement coroutines in terms of just the yield statement, and therefore some extra magic was needed to achieve the desired effect. If you read the coroutines PEP (<http://www.python.org/dev/peps/pep-0342/>), you'll see that the syntax I used above is the syntax suggested by the PEP (and, not accidentally, also the syntax used by my own version of @inlineCallbacks).
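To make that concrete, here is a rough sketch of the kind of trampoline I mean (not my actual code; _AsyncCaller and async are just illustrative names, and the shape is borrowed from the _SequentialCaller posted earlier in this thread):

===
from twisted.internet import defer
from twisted.python import failure

class _AsyncCaller(object):
    '''Sketch of a trampoline where the last yielded plain value becomes
    the result of the whole coroutine, PEP 342 style.
    '''
    def __init__(self, gen):
        self.gen = gen
        self.deferred = defer.Deferred()

    def start(self):
        self._step(None)
        return self.deferred

    def _step(self, result):
        try:
            if isinstance(result, failure.Failure):
                d = self.gen.throw(result.type, result.value, result.tb)
            else:
                d = self.gen.send(result)
        except StopIteration:
            # The generator finished without yielding a plain value.
            self.deferred.callback(None)
        except:
            self.deferred.errback(failure.Failure())
        else:
            if isinstance(d, defer.Deferred):
                d.addCallbacks(self._step, self._step)
            else:
                # A yielded non-Deferred is the coroutine's return value.
                self.gen.close()
                self.deferred.callback(d)

def async(f):
    def wrapper(*args, **kwargs):
        return _AsyncCaller(f(*args, **kwargs)).start()
    return wrapper
===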
Note that the current implementation of @inlineCallbacks will raise an error at "invocation time", specifically when "g.send" or "g.throw" is called. I'm proposing to change that so an error will be raised by the decorator instead ("decoration time", typically when the module is imported).
I understand. I am proposing that it's not an error, because there is no case in which it's ambiguous what the intent was. Another argument I can make in favor of this idea is that:

@inlineCallbacks
def doSomething():
    raise SomeException()

works the way you'd think it would upon reading the code, and therefore

@inlineCallbacks
def doSomething():
    return someValue

should also work the way you'd think it would upon reading the code.
Should something be given an interpretation because there are no unintended results, or because there are intended results?
I believe that anyone who writes code that returns a plain value out of a function decorated with @inlineCallbacks has a clear intent, and they merely did not fuss over the syntax of @inlineCallbacks. I further believe that it is in no way beneficial to force people to fuss over that syntax, and in no way detrimental to allow them to just return a plain value (because there is no ambiguity). Therefore, it should be done.

Ben

--
<http://artins.org/ben>

"Computers! All they ever think of is hex!"

Just a quick followup with something that I think I didn't state clearly in my previous email. My mental model is that:
- What I am writing are cooperatively scheduled coroutines using Python yield statements
- Deferreds are an implementation detail of those coroutines, and completely invisible to me

Because of this, saying "@inlineCallbacks", or "@sequential", or, as I called it, simply "@async" in front of a function is what matters when I am writing the code. I do not think about Deferreds when I am writing code.

Your argument that @inlineCallbacks should error if given a "plain" function is a valid one from a "but you could just write your code differently" perspective, but the result is that my code no longer matches my mental model, because now I suddenly have to care about an implementation detail of @inlineCallbacks (namely, that the function it wraps has to yield, even if it's merely to yield a None at the end).

Ben

--
<http://artins.org/ben>

"Love is an epic-level challenge." -- Durkon

On Saturday, May 26, 2007, glyph@divmod.com wrote:
On 05:20 pm, gabor.bernath@gmail.com wrote:
inlineCallbacks are similar to what you are doing. very cool stuff.
Unless I misunderstand, inlineCallbacks isn't just similar, it's exactly the same :). Is there any difference? This highlights to me that we need someone to write about more of the hidden corners of Twisted; there are a lot of features included that are not well-known, and it's not clear that anyone would look for them (or if they did, where).

This is a great idea. inlineCallbacks would have saved me a week of figuring out which functions in my project should start a new callback chain. I had to figure this all out after I realized that adbapi existed.

I think there is a need for documentation in the Deferred section regarding inlineCallbacks, and documentation related to protocols (more) and modules that exist for common operations (e.g. adbapi). Just my 2 cents from trying to learn Twisted in my free time over the last 8 months or so.

-Ben

On Saturday 26 May 2007, Gábor Bernáth wrote:
inlineCallbacks are similar to what you are doing. very cool stuff.
http://twistedmatrix.com/pipermail/twisted-python/2007-February/014869.html http://mesozoic.geecs.org/
So it already exists in Twisted 2.5. That's good, it means less code for me to maintain :) Thanks for the links.

By the way, I wonder why the API documentation doesn't use the decorator syntax. Since this won't work with anything older than Python 2.5 anyway, decorators are guaranteed to be available to anyone using this function.

http://twistedmatrix.com/documents/current/api/twisted.internet.defer.html#i...

Bye,
Maarten
participants (6)
- Ben Artin
- Benjamin Henry
- Ed Suominen
- glyph@divmod.com
- Gábor Bernáth
- Maarten ter Huurne