Re: [Python-ideas] Object grabbing

The latter; the former isn't currently valid syntax (as in, `with [a, b, c]` will throw an error). This was, however, mainly a concern if we were using a variant of the with syntax. But take the following example:

    def func(ctx):
        with ns:       # namespace
            with ctx:  # only known at runtime
                .attr

You can't have that be a compile-time error. Which actually makes the specific syntax of a bare `with` a strong -1 now, since mixing it with normal context managers is incredibly ambiguous. This would either need to be `in` or `given` as mentioned elsewhere, or something like `with Magic_Namespace_Object(ns):`.

--Josh

On Mon, May 2, 2016 at 6:43 PM, Joshua Morton <joshua.morton13@gmail.com> wrote:
Which actually makes the specific syntax of a bare `with` a strong -1 now, since mixing it with normal context managers is incredibly ambiguous.
I'm glad you eventually reached this conclusion. -- --Guido van Rossum (python.org/~guido)

Please excuse my nomenclature. I hope the community can correct the synonyms that clarify my proposal.

Problem
-------

I program defensively, and surround many of my code blocks with try blocks to catch expected and unexpected errors. Those unexpected errors seem to dominate in my code; I never really know how many ways my SQL library can fail, nor am I really sure that a particular key is in a `dict()`. Most of the time I can do nothing about those unexpected errors; I simply chain them, with some extra description about what the code block was attempting to do. I am using 2.7, so I have made my own convention for chaining exceptions. 3.x chains more elegantly:

    for t in todo:
        try:
            # do error prone stuff
        except Exception, e:
            raise ToDoError("oh dear!") from e

The "error prone stuff" can itself have more try blocks to catch known failure modes, and maybe deal with them. Add some `with` blocks and a conditional, and the nesting gets ugly:

    def process_todo(todo):
        try:
            with Timer("todo processing"):
                # pre-processing
                for t in todo:
                    try:
                        # do error prone stuff
                    except Exception, e:
                        raise TodoError("oh dear!") from e
                # post-processing
        except Exception, e:
            raise OverallTodoError("Not expected") from e

Not only is my code dominated by exception handling, the meaningful code is deeply nested.

Solution
--------

I would like Python to have a bare `except` statement, which applies from that line to the end of the enclosing block (or to the next `except` statement). Here is the same example using the new syntax:

    def process_todo(todo):
        except Exception, e:
            raise OverallTodoError("Not expected") from e
        with Timer("todo processing"):
            # pre-processing
            for t in todo:
                except Exception, e:
                    raise TodoError("oh dear!") from e
                # do error prone stuff
            # post-processing

Larger code blocks do a better job of portraying the visual impact of the reduced indentation. I admit that some readability is lost because the error handling code precedes the happy path, but I believe the eye will overlook this with a little practice. Multiple `except` statements are allowed. They apply as if they were used in a `try` statement; matched in the order declared:

    def process_todo(todo):
        pre_processing()  # has no exception handling
        except SQLException, e:  # effective until end of method
            raise Exception("Not expected") from e
        except Exception, e:
            raise OverallTodoError("Oh dear!") from e
        processing()

A code block can have more than one `except` statement:

    def process_todo(todo):
        pre_processing()  # no exception handling
        except SQLException, e:  # covers lines from here to the beginning of the next except statement
            raise Exception("Not expected") from e
        except Exception, e:  # catches other exception types
            raise Exception("Oh dear!") from e
        processing()  # Exceptions caught
        except SQLException, e:  # covers all lines to end of method
            raise Exception("Happens, sometimes") from e
        post_processing()  # SQLException caught, but not Exception

In these cases, a whole new block is effectively defined.
Here is the same in legit Python:

    def process_todo(todo):
        pre_processing()  # no exception handling
        try:
            processing()  # Exceptions caught
        except SQLException, e:  # covers all lines from here to the beginning of the next except statement
            raise Exception("Not expected") from e
        except Exception, e:  # catches other exception types
            raise Exception("Oh dear!") from e
        try:
            post_processing()  # SQLException caught, but not Exception
        except SQLException, e:  # covers all lines to end of method
            raise Exception("Happens, sometimes") from e

Other Thoughts
--------------

I only propose this for replacing `try` blocks that have no `else` or `finally` clause. I am not limiting my proposal to exception chaining; anything allowed in an `except` clause would be allowed. I could propose adding `except` clauses to each of the major statement types (def, for, if, with, etc…), which would make the first example look like:

    def process_todo(todo):
        with Timer("todo processing"):
            # pre-processing
            for t in todo:
                # do error prone stuff
            except Exception, e:
                raise TodoError("oh dear!") from e
            # post-processing
        except Exception, e:
            raise OverallTodoError("Not expected") from e

But I am suspicious this is more complicated than it looks to implement, and the `except` statement does seem visually detached from the block it applies to. Thank you for your consideration!

On 5/4/2016 4:51 PM, Random832 wrote:
    for t in todo:
        pre_process()
        except Exception, e:
            print "problem" + unicode(e)
        process()

Would be the same as

    for t in todo:
        pre_process()
        try:
            process()
        except Exception, e:
            print "problem" + unicode(e)

so, the control resumes at the end of the `for` block, but still inside the loop.

On 04May2016 15:58, Kyle Lahnakoski <klahnakoski@mozilla.com> wrote:
I also like context from close to where the exception occurred, while doing the catching at whatever the suitable outer layer may be, as normal. Let me show you what I do... I have a module "cs.logutils": https://bitbucket.org/cameron_simpson/css/src/tip/lib/python/cs/logutils.py which is also on PyPI, thus "pip install"able. It has a context manager called "Pfx", short for "prefix". I use it like this:

    from cs.logutils import Pfx
    ...
    with Pfx(filename):
        with open(filename) as fp:
            for lineno, line in enumerate(fp, 1):
                with Pfx(lineno):
                    ... do stuff here ...

If an exception occurs within a Pfx context manager, its message attributes get prepended with the strings of the active Pfx instances, joined by ": ". Then it is reraised and handled exactly as if there were no Pfxs in play. So if some ValueError occurred while processing line 3 of the file "foo.txt", the ValueError's message would start "foo.txt: 3: " and proceed with the core ValueError message. This provides me with cheap runtime context for all exceptions, with minimal boilerplate in my code. All the catch-and-reraise stuff happens in the __exit__ method of the innermost Pfx instance, if an exception occurs. When no exceptions occur, all it is doing is maintaining a thread-local stack of current message prefixes. It looks like this might address your concerns without adding things to Python itself. You could certainly make a case for annotating the exceptions with an arbitrary extra object holding whatever structured state you like (e.g. a dict); suggestions there welcome. What do you think of this approach to your concerns? Cheers, Cameron Simpson <cs@zip.com.au>
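(For illustration only: this is not the real cs.logutils.Pfx, just a minimal sketch of the mechanism described above. The Prefix class and its _prefixed marker are invented here; only the innermost manager rewrites the message, matching the behaviour described.)

    # Minimal sketch of the idea above -- NOT the real cs.logutils.Pfx.
    # A thread-local stack of prefixes; the innermost __exit__ prepends them
    # to the exception message and lets the exception propagate otherwise.
    import threading

    _state = threading.local()

    class Prefix(object):
        def __init__(self, label):
            self.label = str(label)

        def __enter__(self):
            stack = getattr(_state, "stack", None)
            if stack is None:
                stack = _state.stack = []
            stack.append(self.label)
            return self

        def __exit__(self, exc_type, exc, tb):
            stack = _state.stack
            if exc is not None and exc.args and not getattr(exc, "_prefixed", False):
                prefix = ": ".join(stack)  # includes self.label, still on the stack
                exc.args = ("%s: %s" % (prefix, exc.args[0]),) + exc.args[1:]
                exc._prefixed = True       # outer Prefix managers leave it alone
            stack.pop()
            return False                   # never swallow the exception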

On 5/4/2016 7:05 PM, cs@zip.com.au wrote:
I really like the idea of using a `with` clause to simplify exception chaining. I am concerned I would be missing some of the locals available at exception time, which the `with` clause would not have access to, but more introspection may solve that too. It does not solve the indent problem, but I could live with that if it made the code simpler in other ways. Many of my exception handlers are multi-line, and I do not think the `with` clause strategy would work there.

On 5 May 2016 9:16 am, "Kyle Lahnakoski" <klahnakoski@mozilla.com> wrote:
In combination with contextlib.contextmanager (and perhaps passing in references to relevant locals), with statements are designed to handle factoring out almost arbitrary exception handling. Chaining a consistent error, for example:

    @contextmanager
    def chain_errors(exc_to_raise):
        try:
            yield
        except Exception as e:
            raise exc_to_raise from e

This is most useful for supplying state that's useful for debugging purposes (e.g. the codec infrastructure tries to do something like that in order to report the codec name). Cheers, Nick.
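(To make the factoring concrete, a hedged usage sketch applying the helper above to the earlier todo example; the helper is restated so the snippet is self-contained, and TodoError, OverallTodoError and process() are placeholder names from that example, not real APIs.)

    from contextlib import contextmanager

    @contextmanager
    def chain_errors(exc_to_raise):
        try:
            yield
        except Exception as e:
            raise exc_to_raise from e

    class TodoError(Exception): pass
    class OverallTodoError(Exception): pass

    def process(item):
        ...  # placeholder for the "error prone stuff"

    def process_todo(todo):
        # Each handler collapses to one `with` line instead of a try/except.
        with chain_errors(OverallTodoError("Not expected")):
            for t in todo:
                with chain_errors(TodoError("oh dear!")):
                    process(t)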

On Wed, May 4, 2016, 8:42 PM Nick Coghlan <ncoghlan@gmail.com> wrote:
Unfortunately, Kyle is using Python 2.7 still, so ``raise from`` won't help him. The exception context/cause is probably my favorite Python 3 feature. Or at least in my top 5.

On 5 May 2016 at 12:17, Michael Selik <michael.selik@gmail.com> wrote:
For feature proposals on python-ideas it's the current capabilities of 3.x that matter, and there, between exception chaining, contextlib.contextmanager, and contextlib.ExitStack, there are already some enormously powerful tools for exception stack manipulation without extensive code duplication. Python 2.7 doesn't have the implicit exception chaining, but it does have the other features (including ExitStack, by way of contextlib2). While the standard traceback display functions wouldn't show it, even explicit exception chaining can be emulated on Python 2.x (since that's mainly just a matter of setting the __cause__ attribute appropriately and using traceback2 to attach and display gc-friendly __traceback__ attributes). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
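(A hedged sketch of the emulation Nick describes; raise_chained() is a made-up helper name, and on Python 2 the standard traceback display ignores the stored __cause__, so something like traceback2 is needed to actually show the chain.)

    # Sketch only: explicitly record the cause on the new exception, which is
    # roughly what `raise new_exc from cause` does for you on Python 3.
    import sys

    def raise_chained(new_exc):
        _, cause, tb = sys.exc_info()
        if cause is not None:
            cause.__traceback__ = tb   # 2.x exceptions don't carry this themselves
            new_exc.__cause__ = cause  # what `raise ... from cause` records on 3.x
        raise new_exc

    try:
        try:
            {}["missing"]
        except KeyError:
            raise_chained(RuntimeError("lookup failed"))
    except RuntimeError as e:
        print("caught: %r, caused by %r" % (e, e.__cause__))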

On Thu, May 05, 2016 at 02:17:25AM +0000, Michael Selik wrote:
Unfortunately, Kyle is using Python 2.7 still, so ``raise from`` won't help him.
If Kyle is using Python 2.7, then a new feature which is only introduced to 3.6 or 3.7 isn't going to help him either. -- Steve

On 5/5/2016 11:23 AM, Michael Selik wrote:
I am jealous that Python 3.x has `raise from`, and I cannot use it. `raise from` does solve the exception chaining problem; in 2.7 it can be worked around just as effectively [1]. Me being stuck in 2.7 will not last forever. `raise from` does not solve the excessive indentation problem: I have many `try` clauses, causing deep indentation in my code. The block-scoped exception handlers would mitigate this deep indentation, and make exception handling even easier to add.

On 5 May 2016 at 16:35, Kyle Lahnakoski <klahnakoski@mozilla.com> wrote:
contextmanager/ExitStack would likely help with the indentation problem. As would simply breaking out some of the deeply nested code into independent functions. It may be that there's a problem worth addressing here, but I suggest that you wait until you've had a chance to see if the existing features in Python 3.5 resolve your issues before taking this suggestion any further. Paul

On 5/5/2016 12:04 PM, Paul Moore wrote:
May you provide me with an example of how contextmanager would help with the indentation? From what little I can glean, Python 2.7 already has this, and I use it, but I do not see how replacing `try` blocks with `with` blocks reduces indentation. I do agree it looks cleaner than a `try/except` block, though. I do agree that breaking out deeply nested code into independent functions can help with the indentation. I find breaking out functions that only have one call site quite disappointing. My disappointment is proportional to the number of block-scoped variables. Thank you!

On 6 May 2016 at 05:17, Kyle Lahnakoski <klahnakoski@mozilla.com> wrote:
May you provide me with an example of how contextmanager would help with the indentation?
contextmanager doesn't, ExitStack does (which is in the standard library for 3.3+, and available via contextlib2 for earlier versions).
One of the cases that ExitStack handles is when you want to unwind all the contexts at the same point in the code, but enter them at different points. It does that by letting you write code like this:

    with ExitStack() as cm:
        cm.enter_context(the_first_cm)
        # Do some things
        cm.enter_context(the_second_cm)
        # Do some more things
        cm.enter_context(the_third_cm)
        # Do yet more things
    # All three context managers get unwound here

The nested with equivalent would be:

    with the_first_cm:
        # Do some things
        with the_second_cm:
            # Do some more things
            with the_third_cm:
                # Do yet more things
    # All three context managers get unwound here

As an added bonus, the ExitStack approach will also let you push arbitrary callbacks, enter contexts conditionally, and a few other things. Barry Warsaw has an excellent write-up here: http://www.wefearchange.org/2013/05/resource-management-in-python-33-or.html Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
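(A hedged sketch of those bonus features -- an arbitrary callback pushed onto the stack and a context entered only when a condition holds. The file-copy task and its names are made up for illustration; ExitStack.enter_context() and ExitStack.callback() are the real contextlib APIs being shown.)

    from contextlib import ExitStack

    def copy_file(src, dst, log_path=None):
        with ExitStack() as stack:
            fin = stack.enter_context(open(src))
            fout = stack.enter_context(open(dst, "w"))
            if log_path is not None:
                # Conditionally entered context...
                log = stack.enter_context(open(log_path, "a"))
                # ...plus an arbitrary callback, run (LIFO) before `log` closes.
                stack.callback(log.write, "copied %s -> %s\n" % (src, dst))
            fout.write(fin.read())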

On 5/5/2016 10:17 PM, Nick Coghlan wrote:
Thank you for the examples; they do reduce indentation when you have callbacks defined, or are dealing with resources. This does not seem to help in the case of exception handlers. Exception handlers have access to block-scope variables, and have code blocks that do not require a `def`. I should have added different types of exception handlers to my initial email to distract from the exception chaining. Consider the exception handler that simply stops trying to process the todo items:

    def process_todo(todo):
        pre_process()
        for t in todo:
            except Exception, e:
                break
            process(t)
        post_process()

As an alternative to:

    def process_todo(todo):
        pre_process()
        for t in todo:
            try:
                process(t)
            except Exception, e:
                break
        post_process()

Of course, a couple of nested `try` statements make the indentation worse. For example, we wish to ignore problems in dealing with todo items, like above, but the todo items are complicated; each is a dict() of details. Failure to deal with one of the details is ignored, but the rest of the details are still attempted:

    def process_todo(todo):
        pre_process()
        for t in todo:
            except Exception, e:
                break
            for u, v in t.items():
                except Exception, e:
                    continue
                process()
        post_process()

Which is better than what I do now:

    def process_todo(todo):
        pre_process()
        for t in todo:
            try:
                for u, v in t.items():
                    try:
                        process()
                    except Exception, e:
                        continue
            except Exception, e:
                break
        post_process()

I have not touched on more complicated except clauses; ones that have multiple lines, and ones that use, or update, block-scoped variables. ExitStack is good for manipulating `with` clauses as first-order objects, but they are limited to what you can do with `with` statements: limited to naive exception handling, or simple chaining. If I was to use ExitStack in the example above (using `continue` and `break`), maybe I would write something like:

    def process_todo(todo):
        with ExitStack() as stack:
            pre_process()
            for t in todo:
                stack.enter_context(BreakOnException())
                for u, v in t.items():
                    stack.enter_context(ContinueOnException())
                    process()
                    stack.pop()  # DOES THIS EXIST? IT SHOULD
            post_process()

This terrible piece of (not working) code assumes it is even possible to write a `BreakOnException` and `ContinueOnException`. It also requires a `pop` method, which is not documented, but really should exist. Without a `pop` method, I would need another ExitStack instance, and a `with` statement to hold it.

I hope I have convinced you that ExitStack, and `with` block magic, does a poor job of covering the use cases for `try` statements. Maybe I should make examples that use the loop variables in the `except` clause for more examples?

At a high level: `try/except/finally` is a powerful structure. Realizing that the `try/finally` pair is very common led to the creation of `with`. I am proposing the same logic for the `try/except` pair: they are commonly paired together and should have an optimized syntax.
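(To make the limitation concrete: a hedged sketch of what a context manager can do here. A `BreakOnException` cannot be written as imagined, because __exit__ can only swallow an exception, not make the enclosing loop break or continue; the closest equivalent records the exception and leaves the flow control in the loop body. All names below are illustrative.)

    # Hedged sketch, not a real library: __exit__ may return True to swallow
    # an exception, but it cannot break or continue the enclosing loop.
    class Suppressing(object):
        def __init__(self, *exc_types):
            self.exc_types = exc_types or (Exception,)
            self.caught = None

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None and issubclass(exc_type, self.exc_types):
                self.caught = exc
                return True   # swallow it
            return False

    # Placeholder stand-ins for the names used in the thread's examples.
    def pre_process(): pass
    def post_process(): pass
    def process(u, v):
        if v is None:
            raise ValueError("bad detail: %s" % u)

    def process_todo(todo):
        pre_process()
        for t in todo:
            with Suppressing(Exception) as outer:
                for u, v in t.items():
                    with Suppressing(Exception):
                        process(u, v)   # a failure here just skips this detail
            if outer.caught is not None:
                break                   # the break still has to be written here
        post_process()

    process_todo([{"a": 1, "b": None, "c": 3}])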

On Fri, May 6, 2016 at 10:50 AM Kyle Lahnakoski <klahnakoski@mozilla.com> wrote:
The refactoring that first comes to mind is to make the inner portion a separate function:

    def process_one(thing):
        for u, v in thing.items():
            try:
                process()
            except Exception as e:
                continue

    def process_group(group):
        pre_process()
        for thing in group:
            try:
                process_one(thing)
            except Exception as e:
                break
        post_process()

This avoids the excessive indentation and helps improve the reading of how an error in the group will break but an error in one will continue. Does that not satisfy?

On 5/6/2016 1:28 PM, Michael Selik wrote:
Yes, extracting deeply nested code into methods will reduce the indentation. It is disappointing that I must come up with a name for a method that has only one call site. More disappointment for each local variable I must pass to that method. More disappointment for each variable the code block updates in the caller's locals.

On Fri, May 6, 2016 at 1:56 PM Kyle Lahnakoski <klahnakoski@mozilla.com> wrote:
It is difficult to come up with a good name sometimes, but I feel that effort often gives me a better understanding of my code. I don't mind that there's only one call site. In fact, many of the functions I write have only one call site. I often break code out to a function so that I can give it a name and make the code easier to read. I share your distaste for passing the same set of arguments to a function, its helper, that helper's helper, and so on down the chain. When it gets frustrating, that's more incentive to refactor. Sometimes it pushes me to realize a better design. It's often a sign of too much interdependence, which is hard to reason about. Regarding your proposal, even if it were a pleasant syntax, I think the alternatives are good enough that having both would go against the Zen of one obvious way.

-1 on the whole idea. It would make code much less readable, and is way too implicit for my tastes, and, I think, for Python. My own dbf [1] module is roughly 10,000 lines, and I maintain a private copy of OpenERP which is at least 10 times that size, and nowhere does either system have so many nested try/except handlers. -- ~Ethan~ [1] https://pypi.python.org/pypi/dbf

On 5/6/2016 1:44 PM, Ethan Furman wrote:
This is a good point. There are probably domains that have clear inputs, or have a mature codebase, with tests covering all input permutations. These domains do not need `try` statements to cover the unknown. Maybe these domains dominate the universe of source code, and I am in the minority. I can easily be convinced this is the case: I have seen lots of code that really does not care if an exception gets raised. Understanding why it failed is a real pain, for there are no descriptions, no summary, original causes are not chained, or if they are, the stack trace is missing. My code has no hope of mitigating those errors: it cannot retry on HTTP errors, or provide useful feedback if the original cause is a missing file. Your strategy of simply not using `try` statements may also work, although I do not know how you trace down the cause of errors on production systems easily without them. Mature software will not have as many logic errors as my green code, so the cost of chasing down a problem is better amortized, and it is more reasonable to leave out `try` blocks. For example, from ver_33.py (in Record._retrieve_field_value):

        try:
            if null_data[byte] >> bit & 1:
                return Null
        except IndexError:
            print(null_data)
            print(index)
            print(byte, bit)
            print(len(self._data), self._data)
            print(null_def)
            print(null_data)
            raise

It is not obvious to me that IndexError is the only exception that can come from here. This code may raise file access exceptions, HTTP exceptions, I do not know. I would, at least, add an `except Exception` clause to catch those unknown situations. Furthermore, since I do not know how deep the stack will be on those exceptions, I would chain-and-raise:

        try:
            if null_data[byte] >> bit & 1:
                return Null
        except Exception, e:
            raise CantGetValueFromFieldException(
                null_data,
                index,
                byte, bit,
                (len(self._data), self._data),
                null_def,
                null_data
            ) from e

This is better than letting the SQLExceptions just propagate; I have described what I am doing (CantGetValueFromFieldException), so I can switch on it in a later exception handler, and I still have the original cause, which I can switch on also. Looking at Record.__getattr__() and Record.__getitem__(), they use the above method, and can raise any number of other exceptions. This means the code that uses this library must catch those possible exceptions. Too many for me to keep track of now. I will catch them all, make some decisions, and re-raise the ones I can not deal with:

    def do_stuff(**kwargs):
        try:
            my_data = Record(**kwargs)
            value = my_data["value"]
            send(value)
        except Exception, e:
            if in_causal_chain(SQLException, e):
                Log.warning("Database problem, not doing anymore stuff", cause=e)
            elif in_causal_chain(CantGetValueFromFieldException, e):
                raise AppearsToBeADataAccessProblem() from e
            else:
                raise AlternateProblem() from e

Thank you for the time you spend on this subject.
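(in_causal_chain() above is not a real library call; a hedged sketch of how such a helper might walk the cause/context links, which exist natively on 3.x or can be set by hand on 2.x as discussed earlier in the thread.)

    def in_causal_chain(exc_type, exc):
        # Walk __cause__/__context__ until we find a match or run out of links.
        seen = set()
        while exc is not None and id(exc) not in seen:
            if isinstance(exc, exc_type):
                return True
            seen.add(id(exc))
            exc = getattr(exc, "__cause__", None) or getattr(exc, "__context__", None)
        return False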

On Sat, May 7, 2016 at 5:15 AM, Kyle Lahnakoski <klahnakoski@mozilla.com> wrote:
It's nothing to do with the maturity of the codebase. It's more a question of effort-for-effort. You have a couple of options: 1) Put in heaps of effort up front, and during code editing; or 2) Put in a bit more effort in debugging. When you first write code, don't bother with any of the extra boiler-plate. Just let the exceptions propagate as they are, and worry about debugging when you get to it. Later on, follow the basic Rule of Three: if you've done the same thing three times, put in some effort to make it easier (because something you do three times is likely to happen a fourth). Most of your code won't need extra exception info - the traceback will serve you just fine. Once you've had three examples of some particular loop tripping you up (because your debugging work is harder due to not knowing which iteration of the loop raised the exception), you know where to put in a simple exception-chaining block:

    import random

    class InfoCarrier(Exception): pass

    for i in range(30):
        x = random.randrange(20)
        try:
            y = 1/x
        except:
            raise InfoCarrier("x = %s" % x)

You almost certainly do _not_ need this kind of construct all through your code; that's too much effort in code maintenance for not enough benefit in debugging. If you really think you need this kind of locals inspection everywhere, pick up one of the execution frameworks that lets you do this - I think ipython does? - and have none of it in your code at all. You're using Python. So stop writing so much code. :) ChrisA

Kyle Lahnakoski wrote:
Using a suitably-defined context manager, it should be possible to write that something like this:

    with Activity("Getting a bit", lambda: (null_data, index, (byte, bit),
                  (len(self._data), self._data), null_def, null_data)):
        if null_data[byte] >> bit & 1:
            return Null

-- Greg
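(Activity is not defined anywhere in the thread; a hedged sketch of one way such a context manager might work, calling the lambda only when an exception actually escapes so the happy path stays cheap.)

    class Activity(object):
        def __init__(self, label, state=lambda: ()):
            self.label = label
            self.state = state

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            if exc is not None and exc.args:
                # Only now evaluate the lambda and fold its value into the message.
                context = "%s: state=%r" % (self.label, self.state())
                exc.args = ("%s: %s" % (context, exc.args[0]),) + exc.args[1:]
            return False  # never swallow; the annotated exception propagates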

On 05/06/2016 12:15 PM, Kyle Lahnakoski wrote:
Firstly, my compliments for actually checking out the code I was referring to. I'm impressed! Secondly, that whole try/except, especially the multiple print statements, is an example of how to track down something -- but that is debugging code that I forgot to take out. In other words, I was getting an IndexError, so I stuck that code in for testing, fixed the problem... and forgot to remove the code. To be fair, I was the primary consumer of that library for a long time. So, as others have said: just write your code. When something breaks, then put in the debugging code to see what exactly is going on. If you don't already have a test suite, start one at that point: write the test that should succeed, watch it fail, fix your code, watch your test succeed, and rest assured that if you break that test in the future you'll catch it before you release your code into the wild. -- ~Ethan~

participants (12)
- Chris Angelico
- cs@zip.com.au
- Ethan Furman
- Greg Ewing
- Guido van Rossum
- Joshua Morton
- Kyle Lahnakoski
- Michael Selik
- Nick Coghlan
- Paul Moore
- Random832
- Steven D'Aprano