* Not using an arena allocator for the nodes can introduce more challenges than simplifications. The first is that deleting a deep tree is currently just a matter of freeing the arena block, whereas if the nodes were PyObjects it would involve recursive destruction. That could potentially segfault, so we would need some custom trashcan mechanism or special deleters. None of this would simplify the code (at least the parser code), and it would impact performance (although only in the parser/compiler phase).
* We would need to (potentially) reimplement the AST sequences as proper owning containers. That would involve changing a considerable amount of code and cause some slowdown due to having to use C-API calls.
There is probably another way. We already have code to convert between the C-level AST (the one that's arena-allocated) and the Python-level AST (the one that the `ast` module provides). Mark doesn't seem to mind if processing macros slows down parsing (since .pyc file caching still works). So we could convert the C-level AST to a Python-level AST, give that to the macro processor, which returns another Python-level AST, and then we convert that back to a C-level AST that we graft into the parse tree for the source being parsed.
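At the Python level, the round trip described above already works today with the `ast` module and `compile()` (which accepts AST objects): parse to a Python-level AST, hand it to the macro processor, and compile the result. A minimal sketch, where `toy_macro_processor` is a hypothetical stand-in for a real macro processor (it just doubles every integer literal):

```python
import ast

# Hypothetical macro processor: any callable mapping one ast.Module to
# another. This toy version doubles every integer constant in the tree.
def toy_macro_processor(tree: ast.Module) -> ast.Module:
    class DoubleInts(ast.NodeTransformer):
        def visit_Constant(self, node):
            if isinstance(node.value, int) and not isinstance(node.value, bool):
                return ast.copy_location(ast.Constant(node.value * 2), node)
            return node
    return ast.fix_missing_locations(DoubleInts().visit(tree))

source = "x = 20 + 1"
tree = ast.parse(source)              # source -> Python-level AST
tree = toy_macro_processor(tree)      # processor transforms the AST
code = compile(tree, "<macro>", "exec")  # AST -> code object
ns = {}
exec(code, ns)
print(ns["x"])  # 42
```

The proposal would do the analogous thing inside the compiler: convert the arena-allocated C-level AST to this Python-level form, run the processor, and convert back.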
* The proposal seems to imply that the AST will be a fully public and stable API. This has some danger, as any internal optimization of the AST could be a breaking change for macro users. It would extend the problems that linters and static analysis tools currently face to all users (of macros), making the AST even more difficult to change.
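There is precedent for exactly this kind of breakage: Python 3.8 folded `ast.Num`, `ast.Str`, and friends into `ast.Constant`, and the legacy classes were removed in 3.12. A macro processor spanning those versions would need version guards along these lines (a sketch, using real `ast` classes):

```python
import ast
import sys

def int_literal_value(node: ast.AST):
    """Return the int held by a literal node, or None.

    Demonstrates the version-guard dance AST consumers must do:
    ast.Constant is the modern node (3.8+); ast.Num is the legacy
    one, removed in 3.12, so it must only be touched on older versions.
    """
    if isinstance(node, ast.Constant) and isinstance(node.value, int):
        return node.value
    # Short-circuit keeps ast.Num from being referenced on 3.12+.
    if sys.version_info < (3, 12) and isinstance(node, ast.Num):
        return node.n
    return None

node = ast.parse("7").body[0].value
print(int_literal_value(node))  # 7
```

Every such internal change would have to be absorbed by macro processor authors rather than only by tooling authors.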
I think we can say that that's tough luck for the macro processor authors. They may have to do some work to support each new Python version.
This may actually address the worry expressed earlier that libraries will become too dependent on macros: maintaining a macro processor across many Python versions will be painful, which will serve as a natural deterrent for libraries that want stability.