On Fri, Sep 25, 2020 at 1:43 PM Steven D'Aprano wrote:
On Fri, Sep 25, 2020 at 11:14:01PM +1000, Chris Angelico wrote:
On Fri, Sep 25, 2020 at 7:59 PM Sergio Fenoll wrote:
Surely there has to be a better way of programming than running stuff, watching it fail, and then keeping track of how it fails so you can later handle that failure?
Why? Do you really think you can enumerate EVERY possible way that something might fail?
Nobody is demanding that "EVERY possible way" is handled -- but if you need to, then Python lets you do so:
    # Please don't do this.
    try:
        something()
    except:
        # handle EVERY error
        pass
Of course this is nearly always the wrong choice. "What can I cope with" is easy: it's always *everything*, if you define "cope with" as just suppressing the error.
Think, instead, about all the possible problems that you can actually cope with. That way, you have a finite - and usually small - set of things to deal with, instead of an infinite field of "well this could go wrong, but we can't do anything about that".
The problem here is that you can't decide what you can deal with in isolation. I can deal with UnicodeEncodeError easily: try again with a different encoding, or with a different error handler, easy-peasy.
But if I'm doing `y = x + 1` and it somehow raised UnicodeEncodeError, what do I do? I'm stuck.
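That context dependence is exactly why a handler should name the one exception it can actually recover from, and let everything else propagate. A minimal sketch (the function name and fallback policy here are my own invention, not anything from the thread):

```python
def encode_with_fallback(text, encoding="ascii"):
    """Encode text, recovering only from the failure we understand.

    UnicodeEncodeError is the one error we can meaningfully cope with
    here -- we know why the operation failed and how to retry it.
    Anything else propagates to the caller, who may know better.
    """
    try:
        return text.encode(encoding)
    except UnicodeEncodeError:
        # Retry with a different error handler instead of suppressing
        # the error outright.
        return text.encode(encoding, errors="backslashreplace")
```

Contrast this with a bare `except:` around the same call, which would also swallow a `TypeError` from passing in bytes, a failure this function has no sensible way to cope with.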
In order to tell what you can deal with, you need to know the circumstances of the error, and why it occurred. In other words, you need to understand the operation being called, in particular, what exceptions it might raise under normal circumstances.
I'm pretty confident that most people, unless they are TDD zealots, start by using either their pre-existing knowledge of the operation, or reading the documentation, to find out what exceptions are likely under normal circumstances, and *only then* start to think about how to deal with such exceptions.
The alternative is to waste time and mental energy thinking about how to deal with exceptions that you will probably never get in real life:
"Well, if I get an import error, I can add some more directories to sys.path and try adding the values again, that might fix it..."
Who does that? Not me. And I bet you don't either.
"Defensive programming" / "Offensive programming" https://en.wikipedia.org/wiki/Defensive_programming ... Tacking additional information onto the exception and re-raising may be the helpful thing to do; though that's still not handling the situation. Recently I learned about the `raise _ from _` syntax (when writing an example implementation for "[Python-ideas] f-strings as assignment targets": """ def cast_match_groupdict(matchobj, typemap): matchdict = matchobj.groupdict() if not typemap: return matchdict for attr, castfunc in typemap.items(): try: matchdict[attr] = castfunc(matchdict[attr]) except ValueError as e: raise ValueError(("attr", attr), ("rgx", matchobj.re)) from e return matchdict """
So I think that most of us:
- start with documented or well-known exceptions;
- and only then decide whether or not we can deal with them.
Of course rare or unusual exceptions probably won't be discovered without a lot of testing, including stress testing, or not until the code goes out into production. That's okay.
While there are plenty of ways to debug in production, debugging in production is a bad idea and is not allowed (because: __, __, __): log the exception with necessary details (traceback, exception attrs, <full stack frame>) but exclude sensitive information that shouldn't be leaking into the logging system.

Catching exceptions early is easier when:
- TDD / test coverage are emphasized
- fuzzing is incorporated into the release process (fuzzing is easier with parameterized test cases)
- unit/functional/integration tests run in a copy of production (sufficient DevOps/DevSecOps)
- the coding safety guide says that all exceptions must be handled
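The "log with traceback, but redact sensitive details" point can be sketched with the stdlib: `logging.exception` records the full traceback automatically inside an `except` block. The key names and the `handle_request` function below are assumptions for illustration, not a real policy:

```python
import logging

logger = logging.getLogger(__name__)

# Assumed redaction policy -- adapt to your own security guide.
SENSITIVE_KEYS = {"password", "token", "secret"}

def redact(params):
    """Drop values that must not leak into the logging system."""
    return {k: ("<redacted>" if k.lower() in SENSITIVE_KEYS else v)
            for k, v in params.items()}

def handle_request(params):
    try:
        return int(params["count"])
    except (KeyError, ValueError):
        # logging.exception logs at ERROR level and appends the full
        # traceback; only redacted params reach the log record.
        logger.exception("request failed: params=%r", redact(params))
        raise
```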
And I think that is Sergio's point: it would be good to have a standard, consistent place for functions to document which exceptions they are likely to raise under normal circumstances, and one which is available to IDEs and runtime inspection. Annotations.
Annotation: (type)
Docstring: (type, docstr)
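One way such an annotation could look today, without new syntax, is `typing.Annotated` carrying a marker object. To be clear, the `Raises` marker below is entirely hypothetical, not an existing standard; only `Annotated` and `get_type_hints` are real stdlib pieces:

```python
from typing import Annotated, get_type_hints

class Raises:
    """Hypothetical marker recording exceptions a function may raise."""
    def __init__(self, *exceptions):
        self.exceptions = exceptions

def fetch_widget(name: str) -> Annotated[dict, Raises(KeyError, TimeoutError)]:
    ...

def declared_exceptions(func):
    """Recover declared exceptions from the return annotation, if any."""
    hint = get_type_hints(func, include_extras=True).get("return")
    for meta in getattr(hint, "__metadata__", ()):
        if isinstance(meta, Raises):
            return meta.exceptions
    return ()
```

An IDE or runtime tool could then call `declared_exceptions(fetch_widget)` and get `(KeyError, TimeoutError)` back, which is exactly the kind of machine-readable surface the thread is asking for.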
Of course we can inspect the docstring of the function, but it's hard for an automated tool to distinguish:
This will raise WidgetExplosionError if the widget explodes.
from:
This is guaranteed to never raise WidgetExplosionError even if the widget explodes.
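A structured field sidesteps that ambiguity: the Sphinx-style `:raises X:` docstring field is a positive claim by convention, so a tool can match it mechanically instead of parsing prose. A rough sketch (the regex and example function are mine, and real tools handle more field spellings than this):

```python
import re

# Matches ReST fields like ":raises WidgetExplosionError: ..."
RAISES_FIELD = re.compile(r"^\s*:raises?\s+(\w+):", re.MULTILINE)

def documented_exceptions(func):
    """Extract exception names from ``:raises X:`` docstring fields."""
    return RAISES_FIELD.findall(func.__doc__ or "")

def detonate(widget):
    """Explode the widget.

    :raises WidgetExplosionError: if the widget explodes.
    :raises ValueError: if widget is not a widget.
    """
```

Free-form prose mentioning WidgetExplosionError, positively or negatively, never matches; only the structured field does.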
There may be practical difficulties in sticking exceptions into annotations. Annotations already can be pretty long and bulky. But if you are okay with functions documenting that they might raise a certain exception, then *in principle* you should be okay with moving that into an annotation rather than the docstring.
Annotations are just a form of documentation in a consistent standard format to make it easy for IDEs to read them.
Linting and parsing docstrings that contain per-exception ReST fields would be really nice and DRY. https://sphinxcontrib-napoleon.readthedocs.io/en/latest/ says:
Python 2/3 compatible annotations aren’t currently supported by Sphinx and won’t show up in the docs.
Is that still the case?