As I wrote in an earlier email:
For a PEP to succeed it needs to show two things:

1. Exactly what problem is being solved, or need is to be fulfilled, and that it is a sufficiently large problem, or need, to merit the proposed change.
2. That the proposed change is the best known solution for the problem being addressed.
IMO, PEP 622 fails on both counts.
This email addresses point 1 (for version 2).
Why is "shape" in scare quotes? It worries me that the abstract can't explain directly what PEP 622 does.
Could you use a real example in the abstract? Using a contrived example like this seems like a straw man. It feels like it is constructed to favour the PEP, whilst being unlike any real code.
Rationale and Goals
-------------------
> Python programs frequently need to handle data which varies in type, presence of attributes/keys, or number of elements. Typical examples are operating on nodes of a mixed structure like an AST, handling UI events of different types, processing structured input (like structured files or network messages), or “parsing” arguments for a function that can accept different combinations of types and numbers of parameters.
AST and UI objects usually (pretty much always) form a class hierarchy, making it easy to add utility matching methods to the base class. Where matching *might* have some value is when unrelated types are involved, but you need to show that enhanced destructuring would be insufficient.
> In fact, the classic 'visitor' pattern is an example of this, done in an OOP style -- but matching makes it much less tedious to write.
This might be true of the "classic" visitor pattern, but that's a straw man. The Python visitor pattern is a joy to use. Each case gets its own self contained method, clearly named, which is much better than a giant `match` statement.
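For instance (a minimal sketch of the getattr-based dispatch that `ast.NodeVisitor` uses; `Num` and `Add` are invented toy node classes, purely illustrative):

```python
class Visitor:
    """Dispatch on the node's class name, in the style of ast.NodeVisitor."""

    def visit(self, node):
        method = getattr(self, "visit_" + type(node).__name__,
                         self.generic_visit)
        return method(node)

    def generic_visit(self, node):
        raise NotImplementedError(type(node).__name__)


class Num:
    def __init__(self, value):
        self.value = value


class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right


class Evaluator(Visitor):
    """Each case is a small, clearly named, self-contained method."""

    def visit_Num(self, node):
        return node.value

    def visit_Add(self, node):
        return self.visit(node.left) + self.visit(node.right)


result = Evaluator().visit(Add(Num(1), Num(2)))  # -> 3
```

Adding a new case is adding a method; no central statement needs to grow.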
> Much of the code to do so tends to consist of complex chains of nested if/elif statements, including multiple calls to len(), isinstance() and index/key/attribute access. Inside those branches users sometimes need to destructure the data further to extract the required component values, which may be nested several objects deep.
There seem to be three things you want to enhance here:

1. Unpacking, to avoid calls to `len`.
2. Type checking, to avoid calls to `isinstance`.
3. Avoiding nesting, by using complex lvalues.
You fail to justify why the first two cannot be handled separately with much simpler extensions to the language and why complex lvalues are better than nesting.
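To sketch what I mean by "handled separately": the first two can be folded into a small helper in today's Python, no new syntax required (the name `destructure` is mine, purely illustrative):

```python
def destructure(obj, cls, n):
    """Return obj's elements as a tuple if obj is an instance of cls
    with exactly n elements, else None -- folding the isinstance()
    and len() checks into a single call."""
    if isinstance(obj, cls) and len(obj) == n:
        return tuple(obj)
    return None


# Used with the walrus operator, data can be checked and unpacked
# without a match statement:
point = (3, 4)
if (xy := destructure(point, tuple, 2)) is not None:
    x, y = xy
```

A language-level version of something like this would be a far smaller change than a whole new statement.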
The examples
------------
There are only three examples, none of which are compelling. In fact, two of them serve as a warning against the additional complexity this PEP entails.
Django example:
'''''''''''''''
Saves two lines of code, but introduces two bugs! (Assuming that the original behaviour should be preserved.)
If the authors of the PEP cannot use this feature correctly in the first example they give, what chance do the rest of us have?
is_tuple example:
'''''''''''''''''
(Repeating my earlier email) Python's support for OOP provides an alternative to ADTs. For example, by adding a simple "matches" method to Node and Leaf, `is_tuple` can be rewritten as something like:
    def is_tuple(node):
        if not isinstance(node, Node):
            return False
        return node.matches("(", ")") or node.matches("(", ..., ")")
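Such a `matches` method is short to write. A minimal sketch, assuming the simplest possible `Node`/`Leaf` shapes (the real lib2to3 classes carry more state), with `...` as a wildcard matching any single child:

```python
class Leaf:
    """A terminal node; `value` stands in for the token text."""

    def __init__(self, value):
        self.value = value

    def matches(self, pattern):
        return self.value == pattern


class Node:
    """An interior node with a list of children."""

    def __init__(self, children):
        self.children = children

    def matches(self, *patterns):
        # One pattern per child; `...` is a wildcard for any child.
        if len(patterns) != len(self.children):
            return False
        return all(p is ... or c.matches(p)
                   for c, p in zip(self.children, patterns))


def is_tuple(node):
    if not isinstance(node, Node):
        return False
    return node.matches("(", ")") or node.matches("(", ..., ")")
```

With that, `is_tuple(Node([Leaf("("), Leaf("x"), Leaf(")")]))` is true, and no new syntax is needed.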
The "switch on http response codes" example.
''''''''''''''''''''''''''''''''''''''''''''
This clearly demonstrates the value of symbolic constants for readability and the serious flaw in the PEP that I cannot use them.
I really don't see how you can claim that `case 406:` is more readable than `elif response == HTTP_UPGRADE_REQUIRED:`.
Preventing the use of symbolic constants or other complex rvalues is an impairment to usability. Silently failing when symbolic constants are used is terrible.
I do see the value of a `switch` statement for repeated tests against the same value; its intent is clearer than repeated `elif`s. But there is no need for it to be so hard to use.