Yielding through context managers
I'd like to propose adding the ability for context managers to catch and handle control passing into and out of them via yield and generator.send() / generator.next(). For instance:

    class cd(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

        def __yield__(self):
            self.inner_path = os.getcwd()
            os.chdir(self.outer_path)

        def __send__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)

Here __yield__() would be called when control is yielded through the with block and __send__() would be called when control is returned via .send() or .next(). To maintain compatibility, it would not be an error to leave either __yield__ or __send__ undefined.

The rationale for this is that it's sometimes useful for a context manager to set global or thread-global state, as in the example above, but when the code is used in a generator, the author of the generator needs to make assumptions about what the calling code is doing. E.g.:

    def my_generator(path):
        with cd(path):
            yield do_something()
            do_something_else()

Even if the author of this generator knows what effect do_something() and do_something_else() have on the current working directory, the author needs to assume that the caller of the generator isn't touching the working directory. For instance, if someone were to create two my_generator() generators with different paths and advance them alternately, the resulting behaviour could be most unexpected. With the proposed change, the context manager would be able to handle this so that the author of the generator doesn't need to make these assumptions.

Naturally, nested with blocks would be handled by calling __yield__ from innermost to outermost and __send__ from outermost to innermost.

I rather suspect that if this change were included, someone could come up with a variant of the contextlib.contextmanager decorator to simplify writing generators for this sort of situation.

Cheers,

J. D. Bartlett
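To make the "advanced alternately" failure concrete, here is a minimal sketch; the paths and the do_something() / do_something_else() bodies are invented for illustration:

    import os

    def do_something():
        return os.getcwd()

    def do_something_else():
        print("finishing in", os.getcwd())

    gen_a = my_generator('/tmp/a')   # assumes both directories exist
    gen_b = my_generator('/tmp/b')
    next(gen_a)        # enters cd('/tmp/a'); do_something() runs in /tmp/a
    next(gen_b)        # enters cd('/tmp/b'); the process-wide cwd is now /tmp/b
    next(gen_a, None)  # resumes gen_a: do_something_else() runs in /tmp/b, not /tmp/a!
    next(gen_b, None)  # gen_b's __exit__ then "restores" /tmp/a, not the original cwd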
On 3/29/2012 8:00 PM, Joshua Bartlett wrote:
I'd like to propose adding the ability for context managers to catch and handle control passing into and out of them via yield and generator.send() / generator.next().
For instance,
    class cd(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

        def __yield__(self):
            self.inner_path = os.getcwd()
            os.chdir(self.outer_path)

        def __send__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)
Here __yield__() would be called when control is yielded through the with block and __send__() would be called when control is returned via .send() or .next(). To maintain compatibility, it would not be an error to leave either __yield__ or __send__ undefined.
This strikes me as the wrong solution to the fragility of dubious code. The context manager protocol is simple: two special methods. Ditto for the iterator protocol. The generator protocol has been complexified; not good, but there are benefits and the extra complexity can be ignored. But I would be reluctant to complexify the cm protocol. This is aside from technical difficulties.
The rationale for this is that it's sometimes useful for a context manager to set global or thread-global state as in the example above, but when the code is used in a generator, the author of the generator needs to make assumptions about what the calling code is doing. e.g.
    def my_generator(path):
        with cd(path):
            yield do_something()
            do_something_else()
Pull the yield out of the with block.

    def my_gen(path):
        with cd(path):
            directory = <read directory>
        yield do_something(directory)
        do_else(directory)

or

    def my_gen(p):
        with cd(p):
            res = do_something()
        yield res
        with cd(p):
            do_else()

Use the same 'result' trick if do_else also yields.
Even if the author of this generator knows what effect do_something() and do_something_else() have on the current working directory, the author needs to assume that the caller of the generator isn't touching the working directory. For instance, if someone were to create two my_generator() generators with different paths and advance them alternately, the resulting behaviour could be most unexpected. With the proposed change, the context manager would be able to handle this so that the author of the generator doesn't need to make these assumptions.
Or make the with block's manipulation of global resources self-contained, as suggested above and as intended for with blocks.

-- 
Terry Jan Reedy
    class cd(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

        def __yield__(self):
            self.inner_path = os.getcwd()
            os.chdir(self.outer_path)

        def __send__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)

[snip]

    def my_generator(path):
        with cd(path):
            yield do_something()
            do_something_else()
Interesting idea, though doing this with present Python does not seem to be very painful:

    class cd(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

    def my_generator(path):
        with cd(path) as context:
            output = do_something()
            with cd(context.outer_path):
                yield output
            ...

Cheers.
*j
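For comparison with the scenario in the original post, a brief sketch of how this version behaves when two generators are advanced alternately (paths invented as before): because the inner cd(context.outer_path) restores the caller's directory before each yield, the generators no longer interfere with each other.

    g1 = my_generator('/tmp/a')   # assumes both directories exist
    g2 = my_generator('/tmp/b')
    next(g1)   # do_something() runs in /tmp/a; cwd is restored before suspending
    next(g2)   # do_something() runs in /tmp/b; g1's saved state is unaffected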
Interesting idea, though doing this with present Python does not seem to be very painful:
    class cd(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

    def my_generator(path):
        with cd(path) as context:
            output = do_something()
            with cd(context.outer_path):
                yield output
            ...
Yes, that's possible, although as the context manager gets more complicated (e.g. modifying os.environ as well as the working directory), I'd currently start using something like this:

    def my_generator(arg):
        with context_manager(arg) as context:
            output = do_something()
            with context.undo():
                yield output
            ...

But nevertheless, adding __yield__ and __send__ (or equivalent) to context managers means that the author of the context manager can make sure that it's free of unintended side effects, rather than relying on the user to be careful as in the examples above.

Cheers,

J. D. Bartlett
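A minimal sketch of what such a context_manager class and its undo() helper might look like for the directory-changing case; the implementation is invented to match the usage above, and nothing like it exists in the stdlib:

    import os
    from contextlib import contextmanager

    class context_manager(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

        @contextmanager
        def undo(self):
            # Temporarily restore the outer state around the suspension point,
            # then re-apply the inner state when the generator is resumed.
            os.chdir(self.outer_path)
            try:
                yield
            finally:
                os.chdir(self.inner_path)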
I've just read through PEP 3156 and I thought I'd resurrect this thread from March. Giving context managers the ability to react to yield and send, and especially to yield from, would allow the eventual introduction of asynchronous locks using PEP 3156 futures. This is one of the open issues listed in the PEP. Cheers, J. D. Bartlett. On 30 March 2012 10:00, Joshua Bartlett <josh@bartletts.id.au> wrote:
I'd like to propose adding the ability for context managers to catch and handle control passing into and out of them via yield and generator.send() / generator.next().
For instance,
    class cd(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

        def __yield__(self):
            self.inner_path = os.getcwd()
            os.chdir(self.outer_path)

        def __send__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)
Here __yield__() would be called when control is yielded through the with block and __send__() would be called when control is returned via .send() or .next(). To maintain compatibility, it would not be an error to leave either __yield__ or __send__ undefined.
The rationale for this is that it's sometimes useful for a context manager to set global or thread-global state as in the example above, but when the code is used in a generator, the author of the generator needs to make assumptions about what the calling code is doing. e.g.
    def my_generator(path):
        with cd(path):
            yield do_something()
            do_something_else()
Even if the author of this generator knows what effect do_something() and do_something_else() have on the current working directory, the author needs to assume that the caller of the generator isn't touching the working directory. For instance, if someone were to create two my_generator() generators with different paths and advance them alternately, the resulting behaviour could be most unexpected. With the proposed change, the context manager would be able to handle this so that the author of the generator doesn't need to make these assumptions.
Naturally, nested with blocks would be handled by calling __yield__ from innermost to outermost and __send__ from outermost to innermost.
I rather suspect that if this change were included, someone could come up with a variant of the contextlib.contextmanager decorator to simplify writing generators for this sort of situation.
Cheers,
J. D. Bartlett
Possibly (though it will have to be a separate PEP -- PEP 3156 needs to be able to run on unchanged Python 3.3). Does anyone on this thread have enough understanding of the implementation of context managers and generators to be able to figure out how this could be specified and implemented (or to explain why it is a bad idea, or impossible)? --Guido On Sat, Jan 5, 2013 at 12:52 AM, Joshua Bartlett <josh@bartletts.id.au> wrote:
I've just read through PEP 3156 and I thought I'd resurrect this thread from March. Giving context managers the ability to react to yield and send, and especially to yield from, would allow the eventual introduction of asynchronous locks using PEP 3156 futures. This is one of the open issues listed in the PEP.
Cheers,
J. D. Bartlett.
On 30 March 2012 10:00, Joshua Bartlett <josh@bartletts.id.au> wrote:
I'd like to propose adding the ability for context managers to catch and handle control passing into and out of them via yield and generator.send() / generator.next().
For instance,
    class cd(object):
        def __init__(self, path):
            self.inner_path = path

        def __enter__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)

        def __exit__(self, exc_type, exc_val, exc_tb):
            os.chdir(self.outer_path)

        def __yield__(self):
            self.inner_path = os.getcwd()
            os.chdir(self.outer_path)

        def __send__(self):
            self.outer_path = os.getcwd()
            os.chdir(self.inner_path)
Here __yield__() would be called when control is yielded through the with block and __send__() would be called when control is returned via .send() or .next(). To maintain compatibility, it would not be an error to leave either __yield__ or __send__ undefined.
The rationale for this is that it's sometimes useful for a context manager to set global or thread-global state as in the example above, but when the code is used in a generator, the author of the generator needs to make assumptions about what the calling code is doing. e.g.
    def my_generator(path):
        with cd(path):
            yield do_something()
            do_something_else()
Even if the author of this generator knows what effect do_something() and do_something_else() have on the current working directory, the author needs to assume that the caller of the generator isn't touching the working directory. For instance, if someone were to create two my_generator() generators with different paths and advance them alternately, the resulting behaviour could be most unexpected. With the proposed change, the context manager would be able to handle this so that the author of the generator doesn't need to make these assumptions.
Naturally, nested with blocks would be handled by calling __yield__ from innermost to outermost and __send__ from outermost to innermost.
I rather suspect that if this change were included, someone could come up with a variant of the contextlib.contextmanager decorator to simplify writing generators for this sort of situation.
Cheers,
J. D. Bartlett
-- --Guido van Rossum (python.org/~guido)
On Sun, Jan 6, 2013 at 5:23 AM, Guido van Rossum <guido@python.org> wrote:
Possibly (though it will have to be a separate PEP -- PEP 3156 needs to be able to run on unchanged Python 3.3). Does anyone on this thread have enough understanding of the implementation of context managers and generators to be able to figure out how this could be specified and implemented (or to explain why it is a bad idea, or impossible)?
There aren't any syntax changes needed to implement asynchronous locks, since they're unlikely to experience high latency in __exit__. For that and similar cases, it's enough to use an asynchronous operation to retrieve the CM in the first place (i.e. acquire in __iter__ rather than __enter__) or else have __enter__ produce a Future that acquires the lock in __iter__ (see http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/async_program...)

The real challenge is in handling something like an asynchronous database transaction, which will need to yield on __exit__ as it commits or rolls back the database transaction. At the moment, the only solutions for that are to switch to a synchronous-to-asynchronous adapter like gevent or else write out the try/except block and avoid using the with statement.

It's not an impossible problem, just a tricky one to solve in a readable fashion. Some possible constraints on the problem space:

- any syntactic solution should work for at least "for" statements and "with" statements
- also working for comprehensions is highly desirable
- syntactic ambiguity with currently legal constructs should be avoided. Even if the compiler can figure it out, large behavioural changes due to a subtle difference in syntax should be avoided because they're hard for *humans* to read

For example:

    # Synchronous
    for x in y:  # Invokes _iter = iter(y) and _iter.__next__()
        print(x)

    # Asynchronous:
    for x in yielding y:  # Invokes _iter = yield from iter(y) and yield from _iter.__next__()
        print(x)

    # Synchronous
    with x as y:  # Invokes _cm = x, y = _cm.__enter__() and _cm.__exit__(*args)
        print(y)

    # Asynchronous:
    with yielding x as y:  # Invokes _cm = x, y = yield from _cm.__enter__() and yield from _cm.__exit__(*args)
        print(y)

A new keyword like "yielding" would make it explicit that what is going on differs from a (yield x) or (yield from x) in the corresponding expression slot.

Approaches with function-level granularity may also be of interest - PEP 3152 is largely an exploration of that idea (but would need adjustments in light of PEP 3156).

Somewhat related, there's also a case to be made that "yield from x" should fall back to being equivalent to "x()" if x implements __call__ but not __iter__. That way, async-ready code can be written using "yield from", but passing in a pre-canned result via lambda or functools.partial would no longer require a separate operation that just adapts the asynchronous call API (i.e. __iter__) to the synchronous call one (i.e. __call__):

    def async_call(f):
        @functools.wraps(f)
        def _sync(*args, **kwds):
            return f(*args, **kwds)
            yield  # Force this to be a generator
        return _sync

The argument against, of course, is the ease with which this can lead to a "wrong answer" problem where the exception gets thrown a long way from the erroneous code which left out the parens for the function call.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
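A usage sketch for the adapter above (the function names here are invented): because a "return" value inside a generator becomes the StopIteration value that "yield from" evaluates to, the wrapped synchronous function can be driven with "yield from" unchanged:

    import functools

    def async_call(f):  # as in the message above
        @functools.wraps(f)
        def _sync(*args, **kwds):
            return f(*args, **kwds)
            yield  # Force this to be a generator
        return _sync

    @async_call
    def get_answer():
        return 42

    def coro():
        result = yield from get_answer()  # no real suspension occurs
        print(result)                     # -> 42

    list(coro())  # drive the coroutine to completion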
Hi Nick, When you say "high latency" (in __exit__), what does "high" mean? Is that order of magnitude what __exit__ usually means now, or network IO included? (Use case: distributed locking and remotely stored locks: it doesn't take a long time on network scales, but it can take a long time on CPU scales.) On Sun, Jan 6, 2013 at 10:06 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Sun, Jan 6, 2013 at 5:23 AM, Guido van Rossum <guido@python.org> wrote:
Possibly (though it will have to be a separate PEP -- PEP 3156 needs to be able to run on unchanged Python 3.3). Does anyone on this thread have enough understanding of the implementation of context managers and generators to be able to figure out how this could be specified and implemented (or to explain why it is a bad idea, or impossible)?
There aren't any syntax changes needed to implement asynchronous locks, since they're unlikely to experience high latency in __exit__. For that and similar cases, it's enough to use an asynchronous operation to retrieve the CM in the first place (i.e. acquire in __iter__ rather than __enter__) or else have __enter__ produce a Future that acquires the lock in __iter__ (see
http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/async_program... )
The real challenge is in handling something like an asynchronous database transaction, which will need to yield on __exit__ as it commits or rolls back the database transaction. At the moment, the only solutions for that are to switch to a synchronous-to-asynchronous adapter like gevent or else write out the try/except block and avoid using the with statement.
It's not an impossible problem, just a tricky one to solve in a readable fashion. Some possible constraints on the problem space:
- any syntactic solution should work for at least "for" statements and "with" statements - also working for comprehensions is highly desirable - syntactic ambiguity with currently legal constructs should be avoided. Even if the compiler can figure it out, large behavioural changes due to a subtle difference in syntax should be avoided because they're hard for *humans* to read
For example:
    # Synchronous
    for x in y:  # Invokes _iter = iter(y) and _iter.__next__()
        print(x)

    # Asynchronous:
    for x in yielding y:  # Invokes _iter = yield from iter(y) and yield from _iter.__next__()
        print(x)

    # Synchronous
    with x as y:  # Invokes _cm = x, y = _cm.__enter__() and _cm.__exit__(*args)
        print(y)

    # Asynchronous:
    with yielding x as y:  # Invokes _cm = x, y = yield from _cm.__enter__() and yield from _cm.__exit__(*args)
        print(y)
A new keyword like "yielding" would make it explicit that what is going on differs from a (yield x) or (yield from x) in the corresponding expression slot.
Approaches with function level granularity may also be of interest - PEP 3152 is largely an exploration of that idea (but would need adjustments in light of PEP 3156)
Somewhat related, there's also a case to be made that "yield from x" should fall back to being equivalent to "x()" if x implements __call__ but not __iter__. That way, async ready code can be written using "yield from", but passing in a pre-canned result via lambda or functools.partial would no longer require a separate operation that just adapts the asynchronous call API (i.e. __iter__) to the synchronous call one (i.e. __call__):
    def async_call(f):
        @functools.wraps(f)
        def _sync(*args, **kwds):
            return f(*args, **kwds)
            yield  # Force this to be a generator
        return _sync
The argument against, of course, is the ease with which this can lead to a "wrong answer" problem where the exception gets thrown a long way from the erroneous code which left out the parens for the function call.
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
-- cheers lvh
On Sun, Jan 6, 2013 at 8:20 PM, Laurens Van Houtven <_@lvh.cc> wrote:
Hi Nick,
When you say "high latency" (in __exit__), what does "high" mean? Is that order of magnitude what __exit__ usually means now, or network IO included?
(Use case: distributed locking and remotely stored locks: it doesn't take a long time on network scales, but it can take a long time on CPU scales.)
The status quo can only be made to work for in-memory locks. If the release step involves network access, then it's closer to the "database transaction" use case, because the __exit__ method may need to block. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sunday, January 6, 2013, Nick Coghlan wrote:
On Sun, Jan 6, 2013 at 8:20 PM, Laurens Van Houtven <_@lvh.cc> wrote:
Hi Nick,
When you say "high latency" (in __exit__), what does "high" mean? Is that order of magnitude what __exit__ usually means now, or network IO included?
(Use case: distributed locking and remotely stored locks: it doesn't take a long time on network scales, but it can take a long time on CPU scales.)
The status quo can only be made to work for in-memory locks. If the release step involves network access, then it's closer to the "database transaction" use case, because the __exit__ method may need to block.
But you don't need to wait for the release. You can do that asynchronously. Also, have you given the implementation of your 'yielding' proposal any thought yet?
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
-- --Guido van Rossum (python.org/~guido)
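A sketch of the fire-and-forget release Guido describes, written in present-day asyncio terms (which post-date this thread); the RemoteLock class and its _release_remote coroutine are invented names, not a real API:

    import asyncio

    class RemoteLock:
        """Sketch only: a distributed lock whose release never blocks __exit__."""

        def __init__(self, loop):
            self.loop = loop  # a PEP 3156-style event loop

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_val, exc_tb):
            # Schedule the network release as a background task and return
            # immediately, so the coroutine using the lock is never blocked.
            self.loop.create_task(self._release_remote())
            return False

        async def _release_remote(self):
            pass  # the actual network release would go here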
On Mon, Jan 7, 2013 at 2:24 AM, Guido van Rossum <guido@python.org> wrote:
On Sunday, January 6, 2013, Nick Coghlan wrote:
On Sun, Jan 6, 2013 at 8:20 PM, Laurens Van Houtven <_@lvh.cc> wrote:
Hi Nick,
When you say "high latency" (in __exit__), what does "high" mean? Is that order of magnitude what __exit__ usually means now, or network IO included?
(Use case: distributed locking and remotely stored locks: it doesn't take a long time on network scales, but it can take a long time on CPU scales.)
The status quo can only be made to work for in-memory locks. If the release step involves network access, then it's closer to the "database transaction" use case, because the __exit__ method may need to block.
But you don't need to wait for the release. You can do that asynchronously.
Ah, true, I hadn't thought of that. So yes, any case where the __exit__ method can be "fire-and-forget" is also straightforward to implement with just PEP 3156. That takes us back to things like database transactions being the only ones where new syntax would be needed.
Also, have you given the implementation of your 'yielding' proposal any thought yet?
Not in depth. Off the top of my head, I'd suggest:

- make "yielding" a new kind of node in the grammar (so you can't write "yielding expr" in arbitrary locations, but only in those that are marked as allowing it)
- flag for loops and with statements as accepting these nodes as iterables and context managers respectively
- create a new Yielding AST node (with a single Expr node as the child)
- emit different bytecode in the affected compound statements based on whether the relevant subnode is an ordinary expression (thus invoking the special methods as "obj.__method__()") or a yielding one (thus invoking the special methods as "yield from obj.__method__()")

I'm not seeing any obvious holes in that strategy, but I haven't looked closely at the compiler code in a while, so there may be limitations I haven't accounted for.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On Sun, Jan 6, 2013 at 9:47 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Mon, Jan 7, 2013 at 2:24 AM, Guido van Rossum <guido@python.org> wrote:
On Sunday, January 6, 2013, Nick Coghlan wrote:
On Sun, Jan 6, 2013 at 8:20 PM, Laurens Van Houtven <_@lvh.cc> wrote:
Hi Nick,
When you say "high latency" (in __exit__), what does "high" mean? Is that order of magnitude what __exit__ usually means now, or network IO included?
(Use case: distributed locking and remotely stored locks: it doesn't take a long time on network scales, but it can take a long time on CPU scales.)
The status quo can only be made to work for in-memory locks. If the release step involves network access, then it's closer to the "database transaction" use case, because the __exit__ method may need to block.
But you don't need to wait for the release. You can do that asynchronously.
Ah, true, I hadn't thought of that. So yes, any case where the __exit__ method can be "fire-and-forget" is also straightforward to implement with just PEP 3156. That takes us back to things like database transactions being the only ones where new syntax would be needed.
And 'yielding' wouldn't do anything about this, would it?
Also, have you given the implementation of your 'yielding' proposal any thought yet?
Not in depth. Off the top of my head, I'd suggest:

- make "yielding" a new kind of node in the grammar (so you can't write "yielding expr" in arbitrary locations, but only in those that are marked as allowing it)
- flag for loops and with statements as accepting these nodes as iterables and context managers respectively
- create a new Yielding AST node (with a single Expr node as the child)
- emit different bytecode in the affected compound statements based on whether the relevant subnode is an ordinary expression (thus invoking the special methods as "obj.__method__()") or a yielding one (thus invoking the special methods as "yield from obj.__method__()")
I'm not seeing any obvious holes in that strategy, but I haven't looked closely at the compiler code in a while, so there may be limitations I haven't accounted for.
So would 'yielding' insert the equivalent of 'yield from' or the equivalent of 'yield' in the code? -- --Guido van Rossum (python.org/~guido)
On Tue, Jan 8, 2013 at 11:06 AM, Guido van Rossum <guido@python.org> wrote:
On Sun, Jan 6, 2013 at 9:47 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Ah, true, I hadn't thought of that. So yes, any case where the __exit__ method can be "fire-and-forget" is also straightforward to implement with just PEP 3156. That takes us back to things like database transactions being the only ones where new syntax would be needed.
And 'yielding' wouldn't do anything about this, would it?
Any new syntax should properly handle the database transaction context manager problem, otherwise what's the point? The workarounds for asynchronous __next__ and __enter__ methods aren't too bad - it's allowing asynchronous __exit__ methods that can only be solved with new syntax.
I'm not seeing any obvious holes in that strategy, but I haven't looked closely at the compiler code in a while, so there may be limitations I haven't accounted for.
So would 'yielding' insert the equivalent of 'yield from' or the equivalent of 'yield' in the code?
Given PEP 3156, the most logical would be for it to use "yield from", since that is becoming the asynchronous equivalent of a normal function call. Something like:

    with yielding db.session() as conn:
        # Do stuff here

Could be made roughly equivalent to:

    _async_cm = db.session()
    conn = yield from _async_cm.__enter__()
    try:
        # Use session here
    except Exception as exc:
        # Rollback
        yield from _async_cm.__exit__(type(exc), exc, exc.__traceback__)
    else:
        # Commit
        yield from _async_cm.__exit__(None, None, None)

Creating a contextlib.contextmanager style decorator for writing such asynchronous context managers would be difficult, though, as the two different meanings of "yield" would get in each other's way - you would need something like "yield EnterResult(expr)" to indicate to __enter__ in the wrapper object when to stop. It would probably be easier to just write separate __enter__ and __exit__ methods as coroutines.

However, note that I just wanted to be clear that I consider the idea of a syntax for "asynchronous context managers" plausible, and sketched out a possible design to explain *why* I thought it should be possible. My focus will stay with PEP 432 until that's done.

Cheers,
Nick.

-- 
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
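Following the closing suggestion above, a sketch of a transaction manager whose __enter__ and __exit__ are written as coroutines and driven by hand with "yield from"; the connection's execute() coroutine is an invented stand-in for a real asynchronous database API:

    class Transaction(object):
        """Sketch: enter/exit as coroutines, driven explicitly by the caller."""

        def __init__(self, conn):
            self.conn = conn

        def __enter__(self):
            # A coroutine: the caller must drive it with "yield from".
            yield from self.conn.execute('BEGIN')
            return self.conn

        def __exit__(self, exc_type, exc_val, exc_tb):
            if exc_type is None:
                yield from self.conn.execute('COMMIT')
            else:
                yield from self.conn.execute('ROLLBACK')

    # Without new syntax, a coroutine has to spell the protocol out by hand:
    def use_db(conn):
        tx = Transaction(conn)
        session = yield from tx.__enter__()
        try:
            yield from session.execute('INSERT ...')
        except Exception as exc:
            yield from tx.__exit__(type(exc), exc, exc.__traceback__)
            raise
        else:
            yield from tx.__exit__(None, None, None)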
On Tue, Jan 8, 2013 at 2:13 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Tue, Jan 8, 2013 at 11:06 AM, Guido van Rossum <guido@python.org> wrote:
On Sun, Jan 6, 2013 at 9:47 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Ah, true, I hadn't thought of that. So yes, any case where the __exit__ method can be "fire-and-forget" is also straightforward to implement with just PEP 3156. That takes us back to things like database transactions being the only ones where new syntax would be needed.
And 'yielding' wouldn't do anything about this, would it?
Any new syntax should properly handle the database transaction context manager problem, otherwise what's the point? The workarounds for asynchronous __next__ and __enter__ methods aren't too bad - it's allowing asynchronous __exit__ methods that can only be solved with new syntax.
Is your idea that if you write "with yielding x as y: blah" this effectively replaces the calls to __enter__ and __exit__ with "yield from x.__enter__()" and "yield from x.__exit__()"? (And assigning the result of yield from x.__enter__() to y.)
I'm not seeing any obvious holes in that strategy, but I haven't looked closely at the compiler code in a while, so there may be limitations I haven't accounted for.
So would 'yielding' insert the equivalent of 'yield from' or the equivalent of 'yield' in the code?
Given PEP 3156, the most logical would be for it to use "yield from", since that is becoming the asynchronous equivalent of a normal function call.
Something like:
    with yielding db.session() as conn:
        # Do stuff here
Could be made roughly equivalent to:
    _async_cm = db.session()
    conn = yield from _async_cm.__enter__()
    try:
        # Use session here
    except Exception as exc:
        # Rollback
        yield from _async_cm.__exit__(type(exc), exc, exc.__traceback__)
    else:
        # Commit
        yield from _async_cm.__exit__(None, None, None)
Creating a contextlib.contextmanager style decorator for writing such asynchronous context managers would be difficult, though, as the two different meanings of "yield" would get in each other's way - you would need something like "yield EnterResult(expr)" to indicate to __enter__ in the wrapper object when to stop. It would probably be easier to just write separate __enter__ and __exit__ methods as coroutines.
However, note that I just wanted to be clear that I consider the idea of a syntax for "asynchronous context managers" plausible, and sketched out a possible design to explain *why* I thought it should be possible. My focus will stay with PEP 432 until that's done.
Sure, I didn't intend any time pressure. Others may take this up as well -- or if nobody cares, we can put it off until the need has been demonstrated more, possibly after Python 3.4 is released.

-- 
--Guido van Rossum (python.org/~guido)
On Wed, Jan 9, 2013 at 4:32 AM, Guido van Rossum <guido@python.org> wrote:
On Tue, Jan 8, 2013 at 2:13 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Tue, Jan 8, 2013 at 11:06 AM, Guido van Rossum <guido@python.org> wrote:
On Sun, Jan 6, 2013 at 9:47 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Ah, true, I hadn't thought of that. So yes, any case where the __exit__ method can be "fire-and-forget" is also straightforward to implement with just PEP 3156. That takes us back to things like database transactions being the only ones where new syntax would be needed.
And 'yielding' wouldn't do anything about this, would it?
Any new syntax should properly handle the database transaction context manager problem, otherwise what's the point? The workarounds for asynchronous __next__ and __enter__ methods aren't too bad - it's allowing asynchronous __exit__ methods that can only be solved with new syntax.
Is your idea that if you write "with yielding x as y: blah" this effectively replaces the calls to __enter__ and __exit__ with "yield from x.__enter__()" and "yield from x.__exit__()"? (And assigning the result of yield from x.__enter__() to y.)
Yep - that's why it would need a new keyword, as the subexpression itself would be evaluated normally, while the later special method invocations would be wrapped in yield from expressions.
However, note that I just wanted to be clear that I consider the idea of a syntax for "asynchronous context managers" plausible, and sketched out a possible design to explain *why* I thought it should be possible. My focus will stay with PEP 432 until that's done.
Sure, I didn't intend any time pressure. Others may take this up as well -- or if nobody cares, we can put it off until the need has been demonstrated more, possibly after Python 3.4 is released.
Yep - the fact you can fall back to an explicit try-finally if needed, or else use something like gevent to suspend implicitly if you want to use such idioms a lot makes it easy to justify postponing doing anything about it. I'll at least mention the idea in my python-notes essay, though. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
participants (6)

- Guido van Rossum
- Jan Kaliszewski
- Joshua Bartlett
- Laurens Van Houtven
- Nick Coghlan
- Terry Reedy