On Wed, Oct 24, 2012 at 4:03 PM, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
Hi Guido,
On 2012-10-24, at 6:43 PM, Guido van Rossum <guido@python.org> wrote:
What's the problem with just letting the cleanup take as long as it wants to and do whatever it wants? That's how try/finally works in regular Python code.
The problem appears when you add timeout support.
Let me show you an abstract example (I won't use yield_froms, but I'm sure that the problem is the same with them):
    @coroutine
    def fetch_comments(app):
        session = yield app.new_session()
        try:
            return (yield session.query(...))
        finally:
            yield session.close()
and now we execute that with:
    #: Get a list of comments; throw a TimeoutError if it
    #: takes more than 1 second
    comments = yield fetch_comments(app).with_timeout(1.0)
Now the scheduler starts 'fetch_comments', then executes 'new_session', then 'session.query', all in a round-robin fashion.
Imagine that the database query takes a bit less than a second to execute: the scheduler pushes the result into the coroutine, and then the timeout event fires. So the scheduler throws a 'TimeoutError' into the coroutine, preventing 'session.close' from ever being executed. There is no way for the scheduler to know that it should hold the exception back right now, because the coroutine is in its finally block.
And this situation is pretty common when such a timeout mechanism is in place and widely used.
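For what it's worth, the race is easy to reproduce with a bare generator standing in for the coroutine and gen.throw() standing in for the scheduler's timeout delivery (a minimal sketch, not code from this thread; TimeoutError here is the 3.3 builtin):

    def fetch():
        try:
            yield 'query'            # suspended, waiting for the DB result
        finally:
            yield 'session.close'    # the cleanup has to suspend too
            print('session closed')  # never reached if a throw() lands above

    g = fetch()
    next(g)                   # -> 'query'; the coroutine waits for the result
    g.send('rows')            # result pushed in; now parked at the yield in finally
    try:
        g.throw(TimeoutError) # the timeout lands *inside* the finally block...
    except TimeoutError:
        print('cleanup abandoned')  # ...so 'session closed' never prints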
Ok, I can understand. But still, this is a problem with timeouts in general, not just with timeouts in a yield-based environment. How does e.g. Twisted deal with this?
As a work-around, I could imagine some kind of with-statement that tells the scheduler we're already in the finally clause (it could still send you a timeout if your cleanup takes way too long):
    try:
        yield <regular code>
    finally:
        with protect_finally():
            yield <cleanup code>
Of course this could be abused, but at your own risk -- the scheduler only gives you a fixed amount of extra time and then it quits.
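One way such a protect_finally might be built is as a context manager that flags the current task so the scheduler queues a pending exception instead of throwing it. A minimal self-contained sketch; the scheduler, its 'current' task, and the 'protected'/'pending_exc' attributes are all made-up names for illustration, not an existing API:

    import contextlib

    class _Task:
        # Stand-ins for whatever the real scheduler tracks per task.
        protected = False
        pending_exc = None

    class _Scheduler:
        current = _Task()

    scheduler = _Scheduler()  # stand-in for the real scheduler object

    @contextlib.contextmanager
    def protect_finally():
        task = scheduler.current
        task.protected = True        # tell the scheduler: queue, don't throw
        try:
            yield
        finally:
            task.protected = False
            exc, task.pending_exc = task.pending_exc, None
            if exc is not None:      # deliver anything that arrived meanwhile
                raise exc

On the scheduler side, the throw path would check task.protected and set task.pending_exc instead of calling throw() on the coroutine; a grace-period timer could still clear the flag forcibly so cleanup cannot run forever.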
Could another workaround be to spawn the cleanup code without yielding -- in effect saying "go and do this, but don't come back"? Then there is nowhere for the scheduler to throw the exception. I ask because this falls out naturally with my implementation (code is coming, but work is taking priority right now): "do_cleanup()" instead of "yield do_cleanup()". I haven't tried it in this context yet, so I have no idea whether it works, but I don't see why it wouldn't. In a system without the @async decorator you'd need "scheduler.current.spawn(do_cleanup)" instead of yield [from]s, but it can still be done (a rough sketch follows below).
Cheers,
Steve
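For concreteness, Yury's example with that fire-and-forget cleanup might look like the following; 'scheduler.current.spawn' is the hypothetical API described above, and none of this has been run:

    @coroutine
    def fetch_comments(app):
        session = yield app.new_session()
        try:
            return (yield session.query(...))
        finally:
            # No yield here: the coroutine never suspends inside finally,
            # so the scheduler has no suspension point to throw a
            # TimeoutError into; close() runs as its own task, unwaited.
            scheduler.current.spawn(session.close)

The trade-off is that the caller gets its result (or timeout) back before the session is actually closed, so anything that depends on the cleanup having finished would still need some form of synchronization.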