On Thu, Feb 25, 2021 at 5:59 AM Guido van Rossum <guido@python.org> wrote:

Here's a potentially alternative plan, which is also complex, but doesn't require asyncio or other use cases to define special classes. Let's define two exceptions, BaseExceptionGroup which wraps BaseException instances, and ExceptionGroup which only wraps Exception instances. (Names to be bikeshedded.) They could share a constructor (always invoked via BaseExceptionGroup) which chooses the right class depending on whether there are any non-Exception instances being wrapped -- this would do the right thing for split() and subgroup() and re-raising unhandled exceptions.

Then 'except Exception:' would catch ExceptionGroup but not BaseExceptionGroup, so if a group wraps e.g. KeyboardInterrupt it wouldn't be caught (even if there's also e.g. a ValueError among the wrapped errors).


class BaseExceptionGroup(BaseException):
    def __new__(cls, msg, errors):
        # Narrow to ExceptionGroup when every wrapped error is an Exception.
        if cls is BaseExceptionGroup and all(isinstance(e, Exception) for e in errors):
            cls = ExceptionGroup
        return BaseException.__new__(cls, msg, errors)

class ExceptionGroup(Exception, BaseExceptionGroup):
    pass
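A quick sketch of how that dispatch could behave in practice (repeating the two class definitions so the snippet runs standalone; this is just the proposal above spelled out, not anything in Python today):

```python
# Hypothetical sketch of the proposed constructor dispatch.

class BaseExceptionGroup(BaseException):
    def __new__(cls, msg, errors):
        # Narrow to ExceptionGroup when every wrapped error is an Exception.
        if cls is BaseExceptionGroup and all(isinstance(e, Exception) for e in errors):
            cls = ExceptionGroup
        return BaseException.__new__(cls, msg, errors)

class ExceptionGroup(Exception, BaseExceptionGroup):
    pass

# The shared constructor picks the right class:
eg = BaseExceptionGroup("benign", [ValueError(1), TypeError(2)])
assert type(eg) is ExceptionGroup       # all wrapped errors are Exceptions

beg = BaseExceptionGroup("mixed", [ValueError(1), KeyboardInterrupt()])
assert type(beg) is BaseExceptionGroup  # KeyboardInterrupt keeps it "base"

# 'except Exception' catches the narrowed group but not the mixed one:
caught = base_caught = False
try:
    raise eg
except Exception:
    caught = True

try:
    raise beg
except Exception:
    caught = False                      # not reached: beg is not an Exception
except BaseException:
    base_caught = True

assert caught and base_caught
```

Note that the MRO (ExceptionGroup, Exception, BaseExceptionGroup, BaseException) is what makes both properties hold at once: every ExceptionGroup is still a BaseExceptionGroup, but only the narrowed class is an Exception.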

This could be a valid compromise.

We keep the ability to wrap any exception, but we lose the "fail-fast if you forget to handle an ExceptionGroup" feature, which was intended as a kindness towards those who abuse "except Exception".

If we adopt this solution, then an ExceptionGroup escaping from code that is not supposed to raise it is no longer a fatal error; it's just an exception like any other.
So there is no longer a distinction between code that raises ExceptionGroups and code that doesn't. Any code can propagate them, just as any code can raise any other exception.
Does this mean that more code needs to be aware of the possibility of them showing up? Is that a problem? Maybe this is a simpler state of affairs overall.

What would we have done here if we were building Python from scratch?