
Hi everybody,

I'd like to suggest adopting something similar to the ScopeGuardStatement from the D programming language. A description of the D version can be found here: http://d.digitalmars.com/2.0/statement.html#ScopeGuardStatement

It is also similar to Go's "defer" statement: http://golang.org/doc/go_spec.html#Defer_statements

So these are roughly equivalent:

    defer {block}        // in Go
    scope(exit) {block}  // in D

I have written a context manager that approximates the behavior of the scope statement in D: http://ideone.com/vNmq8 The use of lambdas or nested functions doesn't look very nice, however.

So, on to the proposal. I think the "defer" keyword is more appropriate than "scope", and the function-like syntax of "scope(exit)" doesn't fit with the overall Python syntax. There are three ways to define a defer block:

- "defer: BLOCK", which is the same as "defer {BLOCK}" in Go or "scope(exit) {BLOCK}" in D.
- "defer EXPR as VAR: BLOCK", which is similar to "scope(failure)". It differs in that it specifies the exception that caused the failure and is only called for matching exceptions.
- "defer EXPR: BLOCK else: BLOCK", where the else BLOCK is executed when no exception occurs. This is similar to "scope(success)" and the existing except/else construct in Python.

As "defer:" is currently invalid syntax, there shouldn't be any code breakage from adding the new keyword.

Some rules:

- Deferred blocks are executed in the reverse lexical order in which they appear.
- If a function returns before reaching a defer statement, it will not be executed.
- If a defer block raises an error, a lexically earlier defer block may catch it.
- If multiple defer blocks raise errors or return results, the raise or return of the lexically earlier defer will mask the previous result or error.

Some example code:
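For readers who don't want to follow the ideone link: a minimal sketch of such a context-manager approximation (a hypothetical `scope_exit` helper, not the linked code) could look like this, running registered callbacks LIFO on scope exit:

```python
import contextlib

@contextlib.contextmanager
def scope_exit(cleanups):
    # Approximates D's scope(exit) / Go's defer: run the
    # registered cleanups in reverse (LIFO) order on exit.
    try:
        yield cleanups
    finally:
        while cleanups:
            cleanups.pop()()

order = []
with scope_exit([]) as defer:
    defer.append(lambda: order.append(1))
    defer.append(lambda: order.append(2))

print(order)  # -> [2, 1]
```

As in D and Go, the last cleanup registered runs first.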
Handling exceptions:
Equivalent using try/except/finally:
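The code the two captions above refer to appears to have been lost; a plausible reconstruction (a hypothetical file-handle example, with the proposed syntax shown in comments, since `defer` is not valid Python):

```python
# Proposed syntax (not executable):
#
#     def read_all(path):
#         f = open(path)
#         defer: f.close()
#         return f.read()

# Equivalent using try/finally:
def read_all(path):
    f = open(path)
    try:
        return f.read()
    finally:
        f.close()
```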
The nesting advantage becomes more apparent when more are required. Here is an example from http://www.doughellmann.com/articles/how-tos/python-exception-handling/index...

    #!/usr/bin/env python

    import sys
    import traceback

    def throws():
        raise RuntimeError('error from throws')

    def cleanup():
        raise RuntimeError('error from cleanup')

    def nested():
        try:
            throws()
        except Exception as original_error:
            try:
                raise
            finally:
                try:
                    cleanup()
                except:
                    pass  # ignore errors in cleanup

    def main():
        try:
            nested()
            return 0
        except Exception as err:
            traceback.print_exc()
            return 1

    if __name__ == '__main__':
        sys.exit(main())

Here are the equivalents of the main and nested functions using defer:

    def nested():
        defer RuntimeError:
            pass  # ignore errors in cleanup
        defer:
            cleanup()
        throws()

    def main():
        defer Exception as err:
            traceback.print_exc()
            return 1
        else:
            return 0
        nested()

Notice that we don't even need "defer Exception as original_error: raise" after "defer: cleanup()" in order to preserve the stack trace. It will go up the call stack, so long as no defer handles it or masks it with another exception.

This proposal would probably have had a better chance before the introduction of the "with" statement, but I still think it may be useful in cases where you don't want to write a context manager. Context managers may also not have access to the scope they are used in, which may be inconvenient in some cases.

For code where try/except/finally would otherwise be required, I think the advantages make this proposal at least worth considering. You don't need to nest your normal code in a try block, and you can place error handling code together with relevant sections, rather than further down in an except block.

I'm sure there is much I have overlooked; possibly this is technically difficult, and of course there is the minor task of implementation. But other than that, what do you think?

Manuel

Manuel Barkhau wrote:
As "defer:" is currently invalid syntax, there shouldn't be any code breakage from adding the new keyword.
Of course there will be. Every new keyword will break code that uses that word as a regular name:

    defer = True
    instance.defer = None

Both of which will become a SyntaxError if defer becomes a keyword. It's not even like "defer" is an uncommon word unlikely to be used anywhere. (Although I can't find any examples of it in the standard library.)
Why in reverse order? This is unintuitive. If you write:

    def func():
        defer: print(1)
        defer: print(2)
        defer: print(3)
        do_stuff()
        return

the output will be

    3
    2
    1

Is this a deliberate design choice, or an accident of implementation that D and Go have followed? If it is deliberate, what is the rationale for it?

[...]
The nesting advantage becomes more apparent when more are required. Here is an example from
I disagree. Nesting is an advantage, the use of defer which eliminates that nesting is a MAJOR disadvantage of the concept. You seem to believe that nesting is a problem to be worked around. I call it a feature to be encouraged. With try...except/finally, the structure of which blocks are called, and when, is directly reflected in the nesting and indentation. With defer, that structure is gone. The reader has to try to recreate the execution order in their head. That is an enormous negative.
I don't understand the point of that example. Wouldn't it be better written as this?

    def nested():
        try:
            throws()
        finally:
            try:
                cleanup()
            except:
                pass

As far as I can tell, my version gives the same behaviour as yours:

    py> main()
    Traceback (most recent call last):
      File "<stdin>", line 3, in main
      File "<stdin>", line 3, in nested
      File "<stdin>", line 2, in throws
    RuntimeError: error from throws
    1

(Tested in Python 2.5 with the obvious syntax changes.)

[...]
How is the reader supposed to know that pass will ignore errors in cleanup, and nothing else, without the comment? Imagine that the first defer line and the second are separated by a bunch of code:

    def nested():
        defer RuntimeError: pass
        do_this()
        do_that()
        do_something_else()
        if flag: return
        if condition:
            defer: something()
        defer: cleanup()
        throws()

What is there to connect the first defer to the cleanup now? It seems to me that defer would let you write spaghetti code in a way which is really difficult (if not impossible) with try blocks. When considering a proposal, we should consider how it will be abused as well as how it will be used.

-- Steven

Steven D'Aprano schrieb am Fr, 17. Feb 2012, um 12:25:41 +1100:
Basically any cleanup mechanism I know of does the cleanups in the reverse of the order in which things were initialised, be it destructor calls in C++, defer handlers in Go or nested 'with' statements in Python. Since the later initialised objects might depend on the previously defined objects, this is also the only sane choice.
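Sven's point can be checked with nested 'with' statements, which unwind in the reverse of the order they were entered (a small self-contained sketch):

```python
from contextlib import contextmanager

events = []

@contextmanager
def tracked(name):
    # Record entry and exit so the unwind order is visible.
    events.append('enter ' + name)
    try:
        yield
    finally:
        events.append('exit ' + name)

with tracked('outer'), tracked('inner'):
    pass

print(events)  # -> ['enter outer', 'enter inner', 'exit inner', 'exit outer']
```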
I think "defer" has some definite advantages over try/except as far as readability is concerned. It places the cleanup code at the position where the necessity for the cleanup occurs, and not way down in the code. Python's "with" statement does a similar thing, but it gets difficult to handle as soon as you try to *conditionally* add a cleanup handler -- we had this discussion before, and it led to Nick's contextlib2. Cheers, Sven

On Fri, Feb 17, 2012 at 10:06 AM, Manuel Barkhau <mbarkhau@googlemail.com> wrote:
It is also similar to golangs "defer" statement: http://golang.org/doc/go_spec.html#Defer_statements
Since there have been a few proposals along these lines recently: Nothing is going to happen on the dedicated syntax front in the deferred execution space at least until I get contextlib.CallbackStack into Python 3.3 and we gather additional feedback on patterns of use (and, assuming that API addresses the relevant use cases the way I plan, these features will *never* need dedicated syntax). A preliminary version of the API is available in the contextlib2 backport as ContextStack: http://contextlib2.readthedocs.org/en/latest/index.html#contextlib2.ContextS... See the issue tracker for the changes that are planned in order to update that to the new CallbackStack API: https://bitbucket.org/ncoghlan/contextlib2/issue/8/rename-contextstack-to-ca...
With ContextStack:

    def ordering_example():
        with ContextStack() as stack:
            print(1)
            stack.register(print, 2)
            stack.register(print, 3)
            print(4)

With the planned CallbackStack API:

    def ordering_example():
        with CallbackStack() as stack:
            print(1)
            stack.push(print, 2)
            stack.push(print, 3)
            print(4)
Same output.
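For anyone reading this later: the API eventually shipped in Python 3.3 under the name contextlib.ExitStack, with callback() in the role of register/push. The ordering example translates directly (collecting output in a list rather than printing, so the order is easy to inspect):

```python
from contextlib import ExitStack

output = []

def ordering_example():
    with ExitStack() as stack:
        output.append(1)
        # Callbacks run LIFO when the with block exits.
        stack.callback(output.append, 2)
        stack.callback(output.append, 3)
        output.append(4)

ordering_example()
print(output)  # -> [1, 4, 3, 2]
```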
Huh? That's a bizarre way to write it. A more sane equivalent would be

    def nested():
        try:
            throws()
        except BaseException:
            try:
                cleanup()
            except:
                pass
            raise
However, this does raise a reasonable feature request for the planned contextlib2.CallbackStack API, so the above can be written as:

    def _ignore_exception(*args):
        return True

    def _cleanup_on_error(exc_type, exc_val, exc_tb):
        if exc_type is not None:
            cleanup()

    def nested():
        with CallbackStack(callback_error=_ignore_exception) as stack:
            stack.push_exit(_cleanup_on_error)
            throws()

In Python 3 though, your better bet is often going to be just to let the cleanup exception fly - the __context__ attribute means the original exception and the full stack trace will be preserved automatically.
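With the contextlib.ExitStack API that eventually shipped, the same pattern can be sketched using push(), whose callback receives the usual (exc_type, exc_val, exc_tb) triple; the stand-in throws()/cleanup() functions below are assumptions matching the earlier example:

```python
from contextlib import ExitStack

def throws():
    raise RuntimeError('error from throws')

def cleanup():
    raise ValueError('error from cleanup')

def _cleanup_on_error(exc_type, exc_val, exc_tb):
    if exc_type is not None:
        try:
            cleanup()
        except ValueError:
            pass  # ignore errors in cleanup
    return False  # don't suppress the original exception

def nested():
    with ExitStack() as stack:
        stack.push(_cleanup_on_error)
        throws()
```

Here the original RuntimeError from throws() propagates unchanged.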
I think contextlib2 and PEP 3144 cover the use cases you have presented more cleanly and without drastic syntax changes. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

Every new keyword will break code that uses that word as a regular name:
Ah, my bad. I had assumed that the addition of the with statement didn't break anything and thought the only case I needed to look at was "defer: ...".
It seems to me that defer would let you write spaghetti code in a way which is really difficult (if not impossible) with try blocks.
Sure, people can write spaghetti code with this; who ever said it was appropriate for everything in the world? I also wasn't aware there were people so fond of writing try blocks, because to me they look fugly. Rather than wrapping all my code in a try block, I would rather write the code that deals with peripheral cases in a block, and continue on with the main code.
You seem to believe that nesting is a problem to be worked around. I call it a feature to be encouraged.
Bingo, I don't like nesting too much.
Yes, it's because of how they chose to do it, and I kept it that way, if nothing else, for familiarity. But I'm sure there is some reasoning behind it.
Huh? That's a bizarre way to write it. A more sane equivalent would be
The example given by Doug is intended to preserve the original stack trace of the exception that is thrown by throws.
This raises the exception thrown by cleanup. If you use "raise original_exception", the stack trace isn't preserved, which is what the article is about. But now that you mention it, I'm not sure the defer example I gave actually would produce the same stack trace either. Oh well, context managers it is then I guess. Thanks for the references Nick. Manuel
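The __context__ behaviour Nick mentions is easy to verify in Python 3: an exception raised during cleanup automatically keeps a reference to the exception that was already in flight:

```python
def throws():
    raise RuntimeError('error from throws')

def cleanup():
    raise RuntimeError('error from cleanup')

def nested():
    try:
        throws()
    finally:
        cleanup()  # raises, but chains the original on __context__

try:
    nested()
except RuntimeError as err:
    caught = err

print(caught)              # error from cleanup
print(caught.__context__)  # error from throws (traceback preserved)
```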

Hi Nick,

I just wanted to chime in on this, because I understand the use cases and benefits of this, but the code is very semantically opaque and imperative. I also feel like a lot of C programming concepts and semantics have leaked into the design. Additionally, I feel that there are some benefits to taking a step back and looking at this problem as part of a bigger picture.

Fundamentally, context managers are ways to convert a block of code into an event, with __enter__ analogous to "before_block" and __exit__ analogous to "after_block". There are a couple of problems with context managers that I feel an event system handles more elegantly:

1.) Context is ambiguous. Context could be interpreted to mean a thread, a scope, a point in time, etc. Context managers only deal with the narrow problem of a block of code being run. This is succinctly described as an event.

2.) The context manager API requires you to fire events before and after the code block (yes, you can pass) and does not provide other options, such as (in an ideal world of Python with well-behaved threads) an event that is fired/in the active state concurrent to the block of code's execution. There are a few ways to hack this behavior, but they're all bad, and interoperability between libraries is unlikely.

3.) If you want to extend context management for a particular piece of code, you have to modify the code to add another context manager, or monkey patch the existing context manager. Modifying the code has some thorny issues; for instance, if you need to modify the context handling in a third-party lib, all of a sudden you have to fork the lib and manually patch every time you upgrade or redeploy. Monkey patching is easy, but from a conceptual/readability perspective it is horrible. If the lib fires events, you can just register an action on the event in your code and live happily ever after.

4.) The way context managers are defined only allows you to describe a linear chain of events, because they are associated with a block of code, and the act of association precludes other context managers from firing events for that same block of code. Because of this, you have things like register and preserve that exist to add support for (weak) non-linearity.

5.) Going back to event concurrency and touching on non-linearity again, if I have two functions that I've asked to fire when an event occurs, this provides a strong clue to the interpreter that the given functions could potentially run in parallel. Of course, there would need to be other cues, but I don't think people want to be in the business of explicitly writing parallel code forever.

6.) User interface coders going back 30 years understand events pretty well, but will probably give you a blank stare for a second or two if you mention context managers.

I feel that "with"/context managers are an elegant solution to the simple problem; however, it seems like the generalized solution based on context managers is pretty awkward. The right thing to do, in my opinion, would be to go back to the drawing board and design an event subsystem that maps to something like pi-calculus/interval temporal logic in a human/pythonic way. This will avoid the immediate issues like the necessary goofiness of contextlib2, and lay the groundwork for nice things like automatic parallelization. Of course, I'm anal about getting things 100% right, and context managers are a very nice, simple, elegant 80% solution. If 99% of people are happy with the 80% solution, it is probably the right thing to do just to force an ugly hack on the remaining 1%.

Take care,
Nathan

On Sat, Feb 18, 2012 at 8:16 AM, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
So... context managers are not a good fit for general event handling. Correct. Given that I agree with your basic point, I'm not sure what the rest of that had to do with anything, unless you heard the word "callback" and immediately assumed I was talking about general event handling rather than Go defer'ed style cleanup APIs (along with a replacement for the bug-prone, irredeemably flawed contextlib.nested API). I'm not - what I'm planning would be a terrible API for general event handling. Fortunately, it's just a replacement for contextlib.nested() as a tool for programmatic management of context managers. If you want nice clean callbacks for general event handling, Python doesn't currently provide that. (We certainly don't have anything that gets remotely close to the elegance of Ruby's blocks for that style of programming: http://www.boredomandlaziness.org/2011/10/correcting-ignorance-learning-bit-...) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Sat, Feb 18, 2012 at 10:30 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
My point was more that I feel like you're hitting a point where the context manager as a programming and semantic construct is starting to stretch pretty thin. My gut feeling is that it might be more productive to let context managers alone (I think they're in an okay place with multiple managers in a single with statement) and start to examine the larger class of problems of which the deferred cleanup is a member. Events can unify a lot of concepts in python, while providing a much more elegant handle into third party code than is currently possible. For example... Decorators, descriptors and exceptions can all be unified neatly as events, and events let you reach into 3rd party code in a robust manner. I can't tell you the number of times I have had to subclass multiple things from a third party library to fix a small, unnecessarily limiting design decision. I've even run into this with authors who make very elegant libraries like Armin; nobody can predict all the use cases for their code. The best thing we can do is make it easy to work around such problems. I like the with statement in general, but if python is ever going to embrace events, the farther you travel along this path the more painful switching over is going to be down the line.
I like Ruby's blocks a lot. I don't think they drink deeply enough of the koolaid, though. Blocks can be a gateway to powerful macros (if you have first class expressions) and a mechanism for very elegant currying and partial function evaluation. I think something that is missing for me is a clear picture of where Python is going. I imagine between you, Guido, Martin, Anton, Georg and Raymond (apologies to any of the primary group I'm forgetting) there is some degree of tacit understanding. My perspective on python was framed by Peter Norvig's description of it as aspiring to be a humane reexamination of lisp, but lately I get the feeling the target would better be described as a 21st century pascal. Nathan

Guido van Rossum wrote:
I hope not. I like Pascal. It has nice, clean syntax (if a tad verbose, with the BEGIN/END tags) and straight-forward, simple semantics. Standard Pascal is somewhat lacking (e.g. no strings) but who uses standard Pascal? Without wishing to deny the strengths of C, I think the computing world would be a lot better if C was closer to Pascal than if Pascal had been closer to C. -- Steven

19.02.12 01:52, Steven D'Aprano написав(ла):
Python is not Pascal. For me it's the BASIC of nowadays. Really basic, simple and clear (even for non-specialists) language. Not old BASIC with line numbers, GOTO, GOSUB and 1- or 2-symbol identifiers, but a modern language with modules, structured programming, powerful basic data structures, OOP, first-class functions, automatic resource management etc.

Was that meant as an insult? Because it sounds to me like one.
I'm sorry if my poor wording caused it to come across that way. Pascal was a very useful language, with a perspective that was different from its contemporaries, because it was originally intended for educational purposes, rather than as an academic language like Lisp or a hacker tool like C or Fortran. I enjoy writing python a lot, and would prefer to use it rather than ruby/lisp/java/etc in most cases. My suggestions come from frustrations that occur when using python in areas where the right answer is probably just to use a different language. If I knew that what I wanted was at odds with the vision for python, I would have less of an issue just accepting circumstances, and would just get to work rather than sidetracking discussions on this list. Thanks, and again, sorry! Nathan

On Sat, Feb 18, 2012 at 3:57 PM, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
I have no ideal how old you are, or what your background is, so I don't know if you have all that from personal experience or from hearsay. I do know that for me, when I first learned Pascal on the Control Data mainframe in 1974, it was the ultimate hacker tool. (Well, penultimate. Assembler was the ultimate. But even then it was a last resort.) Pascal was also developed by an academic. I never got much out of Lisp. So I guess it's a matter of perspective.
I strongly recommend that you stick to describing your use cases and tentatively exploring possible solutions, instead of trying to spout sweeping controversial statements. Those just get in the way of getting an exchange of ideas going. -- --Guido van Rossum (python.org/~guido)

On Sun, Feb 19, 2012 at 9:57 AM, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
The core problem comes down to the differences between Guido's original PEP 340 idea (which was much closer in power to Ruby's blocks, since it was a new looping construct that allowed 0-or-more executions of the contained block) and the more constrained with statement that is defined in PEP 343 (which will either execute the body once or throw an exception, distinguishing it clearly from both the looping constructs and if statements). The principle Guido articulated when making that decision was: "different forms of flow control should look different at the point of invocation".

So, where a language like Ruby just defines one protocol (callbacks, supplemented by anonymous blocks that run directly in the namespace of the containing function) and uses it for pretty much *all* flow control (including all their loop constructs), Python works the other way around, defining *different* protocols for different patterns of invocation. This provides a gain in readability on the Python side. When you see any of the following in Python:

    @whatever()
    def f():
        pass

    with whatever():
        # Do something!

    for x in whatever():
        # Do something!

It places a lot of constraints on the nature of the object returned by "whatever()" - even without knowing anything else about it, you know the first must return a decorator, the second a context manager, and the third an iterable. If that's all you need to know at this point in time, you don't need to worry about the details - the local syntax tells you the important things you need to know about the flow control.

In Ruby, though, all of them (assuming it isn't actually important that the function name be bound locally) could be written like this:

    whatever() do:
        # Do something!
    end

Is it a context manager? An iterable? Some other kind of callback? There's nothing in the syntax to tell you that - you're relying on naming conventions to provide that information (like the ".foreach" convention for iteration methods).
That approach can obviously work (otherwise Ruby wouldn't be as popular as it is), but it *does* make it harder to pick up a piece of code and understand the possible control flows without looking elsewhere.

However, this decision to be explicit about flow control for the benefit of the *reader* brings with it a high *cost* on the Python side for the code *author*: where Ruby works by defining a nice syntax and semantics for callback based programming and building other language constructs on top of that, Python *doesn't currently have* a particularly nice general purpose native syntax for callback based programming.

Decorators do work in many cases (especially simple callback registration), but they sometimes feel wrong because they're mainly designed to modify how a function is defined, not implement key program flow control constructs. However, their flexibility shouldn't be underestimated, and the CallbackStack API is designed to help Python developers push decorators and context managers closer to those limits *without* needing new language constructs. By decoupling the callback stack from the code layout, it gives you full *programmatic* control of the kinds of things context managers can help with when you know in advance exactly what you want to do.

*If* CallbackStack proves genuinely popular (and given the number of proposals I have seen along these lines, and the feedback I have received on ContextStack to date, I expect it will), and people start to develop interesting patterns for using it, *then* we can start looking at the possibility of dedicated syntax to streamline particular use cases (just as the with statement itself was designed to streamline various use cases of the more general try statement).

Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
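The "different protocols" point can be made concrete with a toy object (purely illustrative) that happens to satisfy all three - it is the call site, not the object, that tells the reader which protocol is in play:

```python
class Whatever:
    # Decorator protocol: called with the function being defined.
    def __call__(self, func):
        self.decorated = func.__name__
        return func

    # Context manager protocol.
    def __enter__(self):
        self.entered = True
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        return False

    # Iterable protocol.
    def __iter__(self):
        return iter([1, 2, 3])

w = Whatever()

@w
def f():
    pass

with w:
    pass

print(list(w))  # -> [1, 2, 3]
```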

On Sat, Feb 18, 2012 at 5:27 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Very lucid explanation, Nick. (I also liked your blog post that you referenced in a previous message, which touches upon the same issues.) Apparently I don't seem to like flow control constructs formed by "quoting" (in Lisp terms) a block of code and leaving its execution to some other party, with the exception of explicit function definitions. Maybe a computer-literate psychoanalyst can do something with this...

To this day I am having trouble liking event-based architectures -- I do see a need for them, but I immediately want to hide their mechanisms and offer a *different* mechanism for most use cases. See e.g. the (non-thread-based) async functionality I added to the new App Engine datastore client, NDB: https://docs.google.com/document/pub?id=1LhgEnZXAI8xiEkFA4tta08Hyn5vo4T6HSGL... . Deep down inside it has an event loop, but this is hidden by using Futures, which in turn are mostly wrapped in tasklets, i.e. yield-based coroutines. I expect that if I were to find a use for Twisted, I'd do most of my coding using its so-called inlineCallbacks mechanism (also yield-based coroutines). When I first saw Monocle, which offers a simplified coroutine-based API on top of (amongst others) Twisted, I thought it was a breath of fresh air (NDB is heavily influenced by it).

I've probably (implicitly) trained most key Python developers and users to think similarly, and Python isn't likely to morph into Ruby any time soon. It's easy enough to write an event-based architecture in Python (see Twisted and Tornado); but an event loop is never going to be the standard way to solve all your programming problems in Python.
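A minimal sketch of the yield-based coroutine style Guido describes (all names here are hypothetical; real frameworks like NDB and Twisted's inlineCallbacks are far more involved): a driver resolves each yielded request and sends the result back into the generator:

```python
def async_fetch(key):
    # Stand-in for "start an async operation and return a future".
    return ('fetch', key)

def run_tasklet(gen):
    # Trivial trampoline: resolve each yielded request synchronously.
    data = {'a': 1, 'b': 2}  # pretend datastore
    result = None
    try:
        while True:
            op, key = gen.send(result)
            result = data[key]  # the "I/O" completes immediately here
    except StopIteration as stop:
        return stop.value

def get_both():
    x = yield async_fetch('a')
    y = yield async_fetch('b')
    return x + y

print(run_tasklet(get_both()))  # -> 3
```

A real event loop would park the tasklet until the future resolves instead of answering synchronously, but the control flow seen by get_both() is the same.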
I do kind of like the 'defer' idea that started this thread (even if I had syntactic quibbles with it that already came up before the thread was derailed), but I notice that it is a far cry from an event-driven architecture -- like the referenced counterparts in Go and D, 'defer' blocks are not anonymous functions that can be passed off to arbitrary other libraries for possibly later and/or repeated execution -- they are a way to specify out-of-order execution within the current scope, which "tames" them enough to be acceptable from my perspective. Though they may also not be powerful enough to be convincing as a new feature, since you can do everything they can do by rearranging the code of your function somewhat and carefully using try/finally. -- --Guido van Rossum (python.org/~guido)

The main attraction of events for me is that they are a decent model of computational flow that makes it easy to "reach into" other people's code. I won't argue against the statement that they can be less clear or convenient to work with in some cases than other mechanisms. My personal preference would be to have the more powerful mechanism as the underlying technology, and build simpler abstractions on top of that (kind of like @property vs manually creating a descriptor).
I agree that events can make code harder to follow in some cases. I feel the same way about message passing and channels versus method invocation. In both cases I think there is an argument to be made for representing the simpler techniques as a special cases which are emphasized for general use. I also understand not wanting to be stuck dealing with someone else's event or message passing fetish when it's not necessary (and they often aren't), and that is certainly a fair counterargument. Thank you for clarifying your views somewhat, it was instructive. I enjoy writing python code in general, but I shouldn't let that lead me astray when it isn't the right tool for the job. Take care, Nathan

Out of interest, do you see an alternative to events or message passing when they _are_ required? I'm in Guido's apparently minority camp in that I can't stand events. The only decent alternative I've seen is message passing. On Feb 19, 2012 1:15 PM, "Nathan Rice" <nathan.alexander.rice@gmail.com> wrote:

I can appreciate the intention there. That particular case isn't as big a deal from my perspective; my non-local code pain points tend to be centered around boneheaded uses of inheritance and dynamic modification of classes.
More often than not, when I am reading other people's code, I am debugging it (and thus have local/global context information) or just interested in nailing down a poorly documented corner of an API. I think if I were regularly in the habit of working with lots of undocumented code I would probably appreciate this more.
I wasn't suggesting syntax needs to change necessarily; all the pieces are already there. I see it more along the lines of function.Event, class.Event, module.Event, context_manager.Event, etc. It is a moot point though. Thank you again for taking the time to clarify the rationale for me. It wasn't intuitive to me because it does not really address issues I have.

On 18 February 2012 22:33, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
You may have a point, but I find it hard to understand what you are getting at. Would you be able to propose a specific syntax/semantics to clarify what you're trying to express? (I think I get the general concept, but I can't see how you imagine it to work). Thanks, Paul

Manuel Barkhau wrote:
As "defer:" is currently invalid syntax, there shouldn't be any code breakage from adding the new keyword.
Of course there will be. Every new keyword will break code that uses that word as a regular name: defer = True instance.defer = None Both of which will become a SyntaxError if defer becomes a keyword. It's not even like "defer" is an uncommon word unlikely to be used anywhere. (Although I can't find any examples of it in the standard library.)
Why in reverse order? This is unintuitive. If you write: def func(): defer: print(1) defer: print(2) defer: print(3) do_stuff() return the output will be 3 2 1 Is this a deliberate design choice, or an accident of implementation that D and Go have followed? If it is deliberate, what is the rationale for it? [...]
The nesting advantage becomes more apparent when more are required. Here is an example from
I disagree. Nesting is an advantage, the use of defer which eliminates that nesting is a MAJOR disadvantage of the concept. You seem to believe that nesting is a problem to be worked around. I call it a feature to be encouraged. With try...except/finally, the structure of which blocks are called, and when, is directly reflected in the nesting and indentation. With defer, that structure is gone. The reader has to try to recreate the execution order in their head. That is an enormous negative.
I don't understand the point of that example. Wouldn't it be better written as this? def nested(): try: throws() finally: try: cleanup() except: pass As far as I can tell, my version gives the same behaviour as yours: py> main() Traceback (most recent call last): File "<stdin>", line 3, in main File "<stdin>", line 3, in nested File "<stdin>", line 2, in throws RuntimeError: error from throws 1 (Tested in Python 2.5 with the obvious syntax changes.) [...]
How is the reader supposed to know that pass will ignore errors in cleanup, and nothing else, without the comment? Imagine that the first defer line and the second are separated by a bunch of code: def nested(): defer RuntimeError: pass do_this() do_that() do_something_else() if flag: return if condition: defer: something() defer: cleanup() throws() What is there to connect the first defer to the cleanup now? It seems to me that defer would let you write spaghetti code in a way which is really difficult (if not impossible) with try blocks. When considering a proposal, we should consider how it will be abused as well as how it will be used. -- Steven

Steven D'Aprano schrieb am Fr, 17. Feb 2012, um 12:25:41 +1100:
Basically any cleanup mechanism I know of does the cleanups in the reverse order as the initialisations, be it destructor calls in C++, defer handlers in Go or nested 'with' statements in Python. Since the later initialised objects might depend on the previously defined objects, this is also the only sane choice.
I think "defer" has some definite advantages over try/except as far as readability is concerned. It places the cleanup code at the position the necessity for the cleanup occurs, and not way down in the code. Python's "with" statement does a similar thing, but it gets difficult to handle as soon as you try to *conditionally* add a cleanup handler -- we had this discussion before, and it lead to Nick's contextlib2. Cheers, Sven
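The conditional-cleanup case Sven mentions is what eventually landed in the standard library as contextlib.ExitStack (Python 3.3, grown out of contextlib2's ContextStack). A hedged sketch with made-up names (`process`, `audit`, and the log strings are placeholders):

```python
from contextlib import ExitStack

def process(audit=False):
    log = []
    with ExitStack() as stack:
        log.append("start")
        if audit:
            # cleanup registered only when the condition holds,
            # at the point where the need for it arises
            stack.callback(log.append, "audit flushed")
        log.append("work")
    log.append("end")
    return log

# callbacks run as the with block exits, before "end":
# process(True)  -> ["start", "work", "audit flushed", "end"]
# process(False) -> ["start", "work", "end"]
```

The cleanup step sits right next to the condition that makes it necessary, which is the readability property being claimed for "defer".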

On Fri, Feb 17, 2012 at 10:06 AM, Manuel Barkhau <mbarkhau@googlemail.com> wrote:
It is also similar to golangs "defer" statement: http://golang.org/doc/go_spec.html#Defer_statements
Since there have been a few proposals along these lines recently: Nothing is going to happen on the dedicated syntax front in the deferred execution space at least until I get contextlib.CallbackStack into Python 3.3 and we gather additional feedback on patterns of use (and, assuming that API addresses the relevant use cases the way I plan, these features will *never* need dedicated syntax). A preliminary version of the API is available in the contextlib2 backport as ContextStack: http://contextlib2.readthedocs.org/en/latest/index.html#contextlib2.ContextS... See the issue tracker for the changes that are planned in order to update that to the new CallbackStack API: https://bitbucket.org/ncoghlan/contextlib2/issue/8/rename-contextstack-to-ca...
With ContextStack:

    def ordering_example():
        with ContextStack() as stack:
            print(1)
            stack.register(print, 2)
            stack.register(print, 3)
            print(4)

With the planned CallbackStack API:

    def ordering_example():
        with CallbackStack() as stack:
            print(1)
            stack.push(print, 2)
            stack.push(print, 3)
            print(4)
Same output.
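For readers with a modern interpreter: this API eventually shipped in the standard library as contextlib.ExitStack (Python 3.3+), where registering a plain callable is spelled `callback`. The ordering example, runnable today:

```python
from contextlib import ExitStack

def ordering_example():
    with ExitStack() as stack:
        print(1)
        stack.callback(print, 2)
        stack.callback(print, 3)
        print(4)

ordering_example()
# prints 1, 4, 3, 2 -- callbacks run last-in, first-out on exit
```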
Huh? That's a bizarre way to write it. A more sane equivalent would be

    def nested():
        try:
            throws()
        except BaseException:
            try:
                cleanup()
            except:
                pass
            raise
However, this does raise a reasonable feature request for the planned contextlib2.CallbackStack API, so the above can be written as:

    def _ignore_exception(*args):
        return True

    def _cleanup_on_error(exc_type, exc_val, exc_tb):
        if exc_type is not None:
            cleanup()

    def nested():
        with CallbackStack(callback_error=_ignore_exception) as stack:
            stack.push_exit(_cleanup_on_error)
            throws()

In Python 3 though, your better bet is often going to be just to let the cleanup exception fly - the __context__ attribute means the original exception and the full stack trace will be preserved automatically.
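The stdlib contextlib.ExitStack that grew out of this work has no `callback_error` hook, so the same pattern can be sketched with the error-swallowing folded into the exit callback itself (a sketch under that assumption, not Nick's exact API):

```python
from contextlib import ExitStack

calls = []

def throws():
    raise RuntimeError("error from throws")

def cleanup():
    calls.append("cleanup")
    raise RuntimeError("error from cleanup")

def _cleanup_on_error(exc_type, exc_val, exc_tb):
    # run cleanup only when an exception is in flight, and
    # swallow any error that cleanup itself raises
    if exc_type is not None:
        try:
            cleanup()
        except Exception:
            pass
    return False  # never suppress the original exception

def nested():
    with ExitStack() as stack:
        # push() accepts any callable with the __exit__ signature
        stack.push(_cleanup_on_error)
        throws()
```

Calling `nested()` raises the original "error from throws", with cleanup having been attempted (and its own failure ignored) along the way.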
I think contextlib2 and PEP 3144 cover the use cases you have presented more cleanly and without drastic syntax changes. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

Every new keyword will break code that uses that word as a regular name:
Ah, my bad. I had assumed that the addition of the with statement didn't break anything and thought the only case I needed to look at was "defer: ...".
It seems to me that defer would let you write spaghetti code in a way which is really difficult (if not impossible) with try blocks.
Sure people can write spaghetti code with this; whoever said it was appropriate for everything in the world? I also wasn't aware there were people so fond of writing try blocks, because to me they look fugly. Rather than wrapping all my code in a try block, I would rather write the code that deals with peripheral cases in a block, and continue on with the main code.
You seem to believe that nesting is a problem to be worked around. I call it a feature to be encouraged.
Bingo, I don't like nesting too much.
Yes, it's because that's how they chose to do it, and I kept it that way, if nothing else, for familiarity. But I'm sure there is some reasoning behind it.
Huh? That's a bizarre way to write it. A more sane equivalent would be
The example given by Doug is intended to preserve the original stack trace of the exception that is thrown by throws.
This raises the exception thrown by cleanup. If you use "raise original_exception", the stack trace isn't preserved, which is what the article is about. But now that you mention it, I'm not sure the defer example I gave actually would produce the same stack trace either. Oh well, context managers it is then I guess. Thanks for the references Nick. Manuel
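As a footnote on the stack-trace question: in Python 3, the original exception survives on `__context__` even when cleanup raises while the first error is propagating, so nothing is truly lost. A small check, reusing the function names from Doug's example:

```python
def throws():
    raise RuntimeError("error from throws")

def cleanup():
    raise RuntimeError("error from cleanup")

def nested():
    try:
        throws()
    finally:
        cleanup()  # raises while the first error is propagating

try:
    nested()
except RuntimeError as err:
    # the cleanup error is the one that propagates, but the
    # original exception (with its traceback) is chained on it
    assert str(err) == "error from cleanup"
    assert str(err.__context__) == "error from throws"
```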

Hi Nick, I just wanted to chime in on this, because I understand the use cases and benefits of this but the code is very semantically opaque and imperative. I also feel like a lot of C programming concepts and semantics have leaked into the design. Additionally, I feel that there are some benefits to taking a step back and looking at this problem as part of a bigger picture.

Fundamentally, context managers are ways to convert a block of code into an event, with __enter__ analogous to "before_block" and __exit__ analogous to an "after_block". There are a couple of problems with context managers that I feel an event system handles more elegantly:

1.) Context is ambiguous. Context could be interpreted to mean a thread, a scope, a point in time, etc. Context managers only deal with the narrow problem of a block of code being run. This is succinctly described as an event.

2.) The context manager API requires you to fire events before and after the code block (yes, you can pass) and does not provide other options, such as (in an ideal world of python with well behaved threads) an event that is fired/in the active state concurrent to the block of code's execution. There are a few ways to hack this behavior but they're all bad, and interoperability between libraries is unlikely.

3.) If you want to extend context management for a particular piece of code, you have to modify the code to add another context manager, or monkey patch the existing context manager. Modifying the code has some thorny issues; for instance, if you need to modify the context handling in a third party lib, all of a sudden you have to fork the lib and manually patch every time you upgrade or redeploy. Monkey patching is easy, but from a conceptual/readability perspective it is horrible. If the lib fires events, you can just register an action on the event in your code and live happily ever after.

4.)
The way context managers are defined only allows you to describe a linear chain of events, because they are associated with a block of code, and the act of association precludes other context managers from firing events for that same block of code. Because of this, you have things like register and preserve that exist to add support for (weak) non-linearity.

5.) Going back to event concurrency and touching on non-linearity again, if I have two functions that I've asked to fire when an event occurs, this provides a strong clue to the interpreter that the given functions could potentially run in parallel. Of course, there would need to be other cues, but I don't think people want to be in the business of explicitly writing parallel code forever.

6.) User interface coders going back 30 years understand events pretty well, but will probably give you a blank stare for a second or two if you mention context managers.

I feel that "with"/context managers are an elegant solution to the simple problem, however it seems like the generalized solution based on context managers is pretty awkward. The right thing to do in my opinion would be to go back to the drawing board, design an event subsystem that maps to something like pi-calculus/interval temporal logic in a human/pythonic way. This will avoid the immediate issues like the necessary goofiness of contextlib2, and lay the groundwork for nice things like automatic parallelization. Of course, I'm anal about getting things 100% right, and context managers are a very nice, simple, elegant 80% solution. If 99% of people are happy with the 80% solution, it is probably the right thing to do just to force an ugly hack on the remaining 1%. Take care, Nathan

On Sat, Feb 18, 2012 at 8:16 AM, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
So... context managers are not a good fit for general event handling. Correct. Given that I agree with your basic point, I'm not sure what the rest of that had to do with anything, unless you heard the word "callback" and immediately assumed I was talking about general event handling rather than Go defer'ed style cleanup APIs (along with a replacement for the bug-prone, irredeemably flawed contextlib.nested API). I'm not - what I'm planning would be a terrible API for general event handling. Fortunately, it's just a replacement for contextlib.nested() as a tool for programmatic management of context managers. If you want nice clean callbacks for general event handling, Python doesn't currently provide that. (We certainly don't have anything that gets remotely close to the elegance of Ruby's blocks for that style of programming: http://www.boredomandlaziness.org/2011/10/correcting-ignorance-learning-bit-...) Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Sat, Feb 18, 2012 at 10:30 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
My point was more that I feel like you're hitting a point where the context manager as a programming and semantic construct is starting to stretch pretty thin. My gut feeling is that it might be more productive to let context managers alone (I think they're in an okay place with multiple managers in a single with statement) and start to examine the larger class of problems of which the deferred cleanup is a member. Events can unify a lot of concepts in python, while providing a much more elegant handle into third party code than is currently possible. For example... Decorators, descriptors and exceptions can all be unified neatly as events, and events let you reach into 3rd party code in a robust manner. I can't tell you the number of times I have had to subclass multiple things from a third party library to fix a small, unnecessarily limiting design decision. I've even run into this with authors who make very elegant libraries like Armin; nobody can predict all the use cases for their code. The best thing we can do is make it easy to work around such problems. I like the with statement in general, but if python is ever going to embrace events, the farther you travel along this path the more painful switching over is going to be down the line.
I like ruby's blocks a lot. I don't think they drink enough of the koolaid though. Blocks can be a gateway to powerful macros (if you have first class expressions) and a mechanism for very elegant currying and partial function evaluation. I think something that is missing for me is a clear picture of where Python is going. I imagine between you, Guido, Martin, Anton, Georg and Raymond (apologies to any of the primary group I'm forgetting) there is some degree of tacit understanding. My perspective on python was framed by Peter Norvig's description of it as aspiring to be a humane reexamination of lisp, but lately I get the feeling the target would better be described as a 21st century pascal. Nathan

Guido van Rossum wrote:
I hope not. I like Pascal. It has nice, clean syntax (if a tad verbose, with the BEGIN/END tags) and straight-forward, simple semantics. Standard Pascal is somewhat lacking (e.g. no strings) but who uses standard Pascal? Without wishing to deny the strengths of C, I think the computing world would be a lot better if C was closer to Pascal than if Pascal had been closer to C. -- Steven

19.02.12 01:52, Steven D'Aprano wrote:
Python is not Pascal. For me it's the BASIC of today. A really basic, simple and clear (even for non-specialists) language. Not the old BASIC with line numbers, GOTO, GOSUB and 1- or 2-symbol identifiers, but a modern language with modules, structured programming, powerful basic data structures, OOP, first-class functions, automatic resource management etc.

Was that meant as an insult? Because it sounds to me like one.
I'm sorry if my poor wording caused it to come across that way. Pascal was a very useful language, with a perspective different from that of its contemporaries, because it was originally intended for educational purposes rather than as an academic language like Lisp or a hacker tool like C or Fortran. I enjoy writing python a lot, and would prefer to use it rather than ruby/lisp/java/etc in most cases. My suggestions come from frustrations that occur when using python in areas where the right answer is probably just to use a different language. If I knew that what I wanted was at odds with the vision for python, I would have less of an issue just accepting circumstances, and would just get to work rather than sidetracking discussions on this list. Thanks, and again, sorry! Nathan

On Sat, Feb 18, 2012 at 3:57 PM, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
I have no ideal how old you are, or what your background is, so I don't know if you have all that from personal experience or from hearsay. I do know that for me, when I first learned Pascal on the Control Data mainframe in 1974, it was the ultimate hacker tool. (Well, penultimate. Assembler was the ultimate. But even then it was a last resort.) Pascal was also developed by an academic. I never got much out of Lisp. So I guess it's a matter of perspective.
I strongly recommend that you stick to describing your use cases and tentatively exploring possible solutions, instead of trying to spout sweeping controversial statements. Those just get in the way of getting an exchange of ideas going. -- --Guido van Rossum (python.org/~guido)

On Sun, Feb 19, 2012 at 9:57 AM, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
The core problem comes down to the differences between Guido's original PEP 340 idea (which was much closer in power to Ruby's blocks, since it was a new looping construct that allowed 0-or-more executions of the contained block) and the more constrained with statement that is defined in PEP 343 (which will either execute the body once or throw an exception, distinguishing it clearly from both the looping constructs and if statements). The principle Guido articulated when making that decision was: "different forms of flow control should look different at the point of invocation". So, where a language like Ruby just defines one protocol (callbacks, supplemented by anonymous blocks that run directly in the namespace of the containing function) and uses it for pretty much *all* flow control (including all their loop constructs), Python works the other way around, defining *different* protocols for different patterns of invocation. This provides a gain in readability on the Python side. When you see any of the following in Python:

    @whatever()
    def f():
        pass

    with whatever():
        # Do something!

    for x in whatever():
        # Do something!

It places a lot of constraints on the nature of the object returned by "whatever()" - even without knowing anything else about it, you know the first must return a decorator, the second a context manager, and the third an iterable. If that's all you need to know at this point in time, you don't need to worry about the details - the local syntax tells you the important things you need to know about the flow control. In Ruby, though, all of them (assuming it isn't actually important that the function name be bound locally) could be written like this:

    whatever() do:
        # Do something!
    end

Is it a context manager? An iterable? Some other kind of callback? There's nothing in the syntax to tell you that - you're relying on naming conventions to provide that information (like the ".foreach" convention for iteration methods).
That approach can obviously work (otherwise Ruby wouldn't be as popular as it is), but it *does* make it harder to pick up a piece of code and understand the possible control flows without looking elsewhere. However, this decision to be explicit about flow control for the benefit of the *reader* brings with it a high *cost* on the Python side for the code *author*: where Ruby works by defining a nice syntax and semantics for callback based programming and building other language constructs on top of that, Python *doesn't currently have* a particularly nice general purpose native syntax for callback based programming. Decorators do work in many cases (especially simple callback registration), but they sometimes feel wrong because they're mainly designed to modify how a function is defined, not implement key program flow control constructs. However, their flexibility shouldn't be underestimated, and the CallbackStack API is designed to help Python developers push decorators and context managers closer to those limits *without* needing new language constructs. By decoupling the callback stack from the code layout, it gives you full *programmatic* control of the kinds of things context managers can help with when you know in advance exactly what you want to do. *If* CallbackStack proves genuinely popular (and given the number of proposals I have seen along these lines, and the feedback I have received on ContextStack to date, I expect it will), and people start to develop interesting patterns for using it, *then* we can start looking at the possibility of dedicated syntax to streamline particular use cases (just as the with statement itself was designed to streamline various use cases of the more general try statement). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Sat, Feb 18, 2012 at 5:27 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
Very lucid explanation, Nick. (I also liked your blog post that you referenced in a previous message, which touches upon the same issues.) Apparently I don't seem to like flow control constructs formed by "quoting" (in Lisp terms) a block of code and leaving its execution to some other party, with the exception of explicit function definitions. Maybe a computer-literate psychoanalyst can do something with this... To this day I am having trouble liking event-based architectures -- I do see a need for them, but I immediately want to hide their mechanisms and offer a *different* mechanism for most use cases. See e.g. the (non-thread-based) async functionality I added to the new App Engine datastore client, NDB: https://docs.google.com/document/pub?id=1LhgEnZXAI8xiEkFA4tta08Hyn5vo4T6HSGL... . Deep down inside it has an event loop, but this is hidden by using Futures, which in turn are mostly wrapped in tasklets, i.e. yield-based coroutines. I expect that if I were to find a use for Twisted, I'd do most of my coding using its so-called inlineCallbacks mechanism (also yield-based coroutines). When I first saw Monocle, which offers a simplified coroutine-based API on top of (amongst others) Twisted, I thought it was a breath of fresh air (NDB is heavily influenced by it). I've probably (implicitly) trained most key Python developers and users to think similarly, and Python isn't likely to morph into Ruby any time soon. It's easy enough to write an event-based architecture in Python (see Twisted and Tornado); but an event loop is never going to be the standard way to solve all your programming problems in Python.
I do kind of like the 'defer' idea that started this thread (even if I had syntactic quibbles with it that already came up before the thread was derailed), but I notice that it is a far cry from an event-driven architecture -- like the referenced counterparts in Go and D, 'defer' blocks are not anonymous functions that can be passed off to arbitrary other libraries for possibly later and/or repeated execution -- they are a way to specify out-of-order execution within the current scope, which "tames" them enough to be acceptable from my perspective. Though they may also not be powerful enough to be convincing as a new feature, since you can do everything they can do by rearranging the code of your function somewhat and carefully using try/finally. -- --Guido van Rossum (python.org/~guido)

The main attraction of events for me is that they are a decent model of computational flow that makes it easy to "reach into" other people's code. I won't argue against the statement that they can be less clear or convenient to work with in some cases than other mechanisms. My personal preference would be to have the more powerful mechanism as the underlying technology, and build simpler abstractions on top of that (kind of like @property vs manually creating a descriptor).
I agree that events can make code harder to follow in some cases. I feel the same way about message passing and channels versus method invocation. In both cases I think there is an argument to be made for representing the simpler techniques as a special cases which are emphasized for general use. I also understand not wanting to be stuck dealing with someone else's event or message passing fetish when it's not necessary (and they often aren't), and that is certainly a fair counterargument. Thank you for clarifying your views somewhat, it was instructive. I enjoy writing python code in general, but I shouldn't let that lead me astray when it isn't the right tool for the job. Take care, Nathan

Out of interest, do you see an alternative to events or message passing when they _are_ required? I'm in Guido's apparently minority camp in that I can't stand events. The only decent alternative I've seen is message passing. On Feb 19, 2012 1:15 PM, "Nathan Rice" <nathan.alexander.rice@gmail.com> wrote:

I can appreciate the intention there. That particular case isn't as big a deal from my perspective; my non-local code pain points tend to be centered around boneheaded uses of inheritance and dynamic modification of classes.
More often than not, when I am reading other people's code, I am debugging it (and thus have local/global context information) or just interested in nailing down a poorly documented corner of an API. I think if I were regularly in the habit of working with lots of undocumented code I would probably appreciate this more.
I wasn't suggesting syntax needs to change necessarily; all the pieces are already there. I see it more along the lines of function.Event, class.Event, module.Event, context_manager.Event, etc. It is a moot point though. Thank you again for taking the time to clarify the rationale for me. It wasn't intuitive to me because it does not really address issues I have.

On 18 February 2012 22:33, Nathan Rice <nathan.alexander.rice@gmail.com> wrote:
You may have a point, but I find it hard to understand what you are getting at. Would you be able to propose a specific syntax/semantics to clarify what you're trying to express? (I think I get the general concept, but I can't see how you imagine it to work). Thanks, Paul
participants (10)

- Christopher Reay
- Guido van Rossum
- Manuel Barkhau
- Matt Joiner
- Nathan Rice
- Nick Coghlan
- Paul Moore
- Serhiy Storchaka
- Steven D'Aprano
- Sven Marnach