On Fri, Sep 25, 2020 at 11:14:01PM +1000, Chris Angelico wrote:
> On Fri, Sep 25, 2020 at 7:59 PM Sergio Fenoll wrote:
> > Surely there has to be a better way of programming than running
> > stuff, watching it fail, and then keeping track of how it fails
> > so you can later handle that failure?
> Why? Do you really think you can enumerate EVERY possible way that
> something might fail?
Nobody is demanding that "EVERY possible way" be handled -- but if you need to, Python lets you do so:

    # Please don't do this.
    try:
        something()
    except:
        pass  # handle EVERY error

Of course this is nearly always the wrong choice. "What can I cope with?" is easy: it's always *everything*, if you define "cope with" as just suppressing the error.
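A minimal sketch of the opposite, "catch only what you can cope with" approach (load_config and its fall-back-to-defaults behaviour are my own invented example, not from the thread):

```python
import json

def load_config(path):
    # Catch only the failures we can actually cope with: a missing
    # file and malformed JSON both fall back to an empty default.
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}   # no config file yet: run with defaults
    except json.JSONDecodeError:
        return {}   # corrupt file: likewise
    # Everything else (PermissionError, MemoryError, ...) propagates,
    # because there is no sensible way to cope with it here.
```

The set of except clauses is exactly the finite list of "things we can deal with"; nothing else gets suppressed.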
> Think, instead, about all the possible problems that you can
> actually cope with. That way, you have a finite - and usually
> small - set of things to deal with, instead of an infinite field
> of "well this could go wrong, but we can't do anything about that".
The problem here is that you can't decide what you can deal with in isolation. I can deal with UnicodeEncodeError easily: try again with a different encoding, or with a different error handler, easy-peasy. But if I'm doing `y = x + 1` and it somehow raised UnicodeEncodeError, what would I do? I'm stuck.

In order to tell what you can deal with, you need to know the circumstances of the error and why it occurred. In other words, you need to understand the operation being called -- in particular, what exceptions it might raise under normal circumstances.

I'm pretty confident that most people, unless they are TDD zealots, start by using either their pre-existing knowledge of the operation, or its documentation, to find out which exceptions are likely under normal circumstances, and *only then* start to think about how to deal with them. The alternative is to waste time and mental energy thinking about how to deal with exceptions that you will probably never get in real life: "Well, if I get an ImportError, I can add some more directories to sys.path and try adding the values again, that might fix it..."

Who does that? Not me, and I bet you don't either. So I think that most of us:

- start with documented or well-known exceptions;

- and only then decide whether or not we can deal with them.

Of course rare or unusual exceptions probably won't be discovered without a lot of testing, including stress testing, or not until the code goes out into production. That's okay.

And I think that is Sergio's point: it would be good to have a standard, consistent place for functions to document which exceptions they are likely to raise under normal circumstances, one which is available to IDEs and to runtime inspection. Annotations.

Of course we can inspect the docstring of the function, but it's hard for an automated tool to distinguish:

    This will raise WidgetExplosionError if the widget explodes.

from:

    This is guaranteed to never raise WidgetExplosionError even if
    the widget explodes.

There may be practical difficulties in sticking exceptions into annotations: annotations can already be pretty long and bulky. But if you are okay with functions documenting that they might raise a certain exception, then *in principle* you should be okay with moving that information into an annotation rather than the docstring. Annotations are just a form of documentation in a consistent, standard format that makes it easy for IDEs to read.
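For what it's worth, here is a sketch of what that might look like today, using a decorator to stash the exception list in the function's `__annotations__`. The "raises" key is a made-up convention purely for illustration -- real annotation keys are parameter names plus "return", and no current IDE would recognise it:

```python
def raises(*exceptions):
    """Record which exceptions a function is expected to raise
    under normal circumstances, for tools to inspect."""
    def decorate(func):
        # Made-up convention: stash the tuple under a "raises" key
        # so it travels with the function like other annotations.
        func.__annotations__["raises"] = exceptions
        return func
    return decorate

class WidgetExplosionError(Exception):
    pass

@raises(WidgetExplosionError)
def prod_widget(widget):
    ...

# A tool (or a human) can now ask the function directly:
print(prod_widget.__annotations__["raises"])
```

The point is only that the information becomes machine-readable in a standard place, instead of buried in docstring prose.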
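And the UnicodeEncodeError recovery mentioned further up really is mechanical once you know the exception can occur (to_ascii_bytes is my own toy example):

```python
def to_ascii_bytes(text):
    # Try a strict ASCII encode; if the text won't fit, retry with
    # an error handler instead of giving up.
    try:
        return text.encode("ascii")
    except UnicodeEncodeError:
        return text.encode("ascii", errors="backslashreplace")

print(to_ascii_bytes("spam"))   # b'spam'
print(to_ascii_bytes("café"))   # b'caf\\xe9'
```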
> In the list of all possible failures, will you include MemoryError?
Sure, if I'm writing some sort of ultra-high-availability server application that is expected to run 24/7 for months at a time:

    while memory_in_reserve():
        try:
            something()
            break
        except MemoryError:
            release_rainy_day_fund()  # Frees some emergency memory.

Or perhaps I'll catch the exception and try to bail out safely after writing out data files etc. Or maybe:

    try:
        fast_but_fat()
    except MemoryError:
        slow_but_lean()

--
Steve