On Sat, Jan 30, 2016 at 1:10 AM, Nick Coghlan wrote:
On 29 January 2016 at 13:30, Andrew Barnert via Python-ideas wrote:
So, again, PEP 511 isn't helping with the hard part. But, again, I think that may be fine. (If someone knows byteplay well enough to build a semantically neutral optimizing function decorator, I'll trust him to be able to turn that into a global optimizer with one line of code. But if he wants to hook things transparently into .pyc files, or to provide actual language extensions, or something like that, I think it's OK to make him do a bit more work before he can give it to me as production-ready code.)
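The kind of decorator Andrew describes can be sketched without byteplay at all, using only the standard library: pull the code object out of the function, run it through a (here deliberately identity) transform, and rebuild the function. The decorator name and the identity pass are purely illustrative; a real optimizer would rewrite the bytecode where the comment indicates.

```python
import functools
import types

def neutral_pass(func):
    """Hypothetical sketch of a semantically neutral bytecode-level
    decorator: disassemble the function into its code object, run it
    through a transform, and reassemble.  The transform here is the
    identity, so behaviour is guaranteed unchanged."""
    new_code = func.__code__.replace()  # a real optimizer would rewrite this
    new_func = types.FunctionType(
        new_code,
        func.__globals__,
        func.__name__,
        func.__defaults__,
        func.__closure__,
    )
    return functools.wraps(func)(new_func)

@neutral_pass
def add(a, b):
    return a + b
```

Scaling this up to "apply to every function everywhere" is exactly the one-line-of-code step Andrew mentions for someone who already has the transform; the hard part is the transform itself.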
Rather than trying to categorise things as "hard" or "easy", I find it to be more helpful to categorise them as "inherent complexity" or "incidental complexity".
With inherent complexity, you can never eliminate it, only move it around, and perhaps make it easier to hide from people who don't care about the topic (cf. the helper classes in importlib, which hide a lot of the inherent complexity of the import system). With incidental complexity though, you may be able to find ways to eliminate it entirely.
For a lot of code transformations, determining a suitable scope of application is *inherent* complexity: you need to care about where the transformation is applied, as it actually matters for that particular use case.
For semantically significant transforms, scope of application is inherent complexity, as it affects code readability, and may even be an error if applied inappropriately. This is why:

- the finer-grained control offered by decorators is often preferred to metaclasses or import hooks
- custom file extensions or in-file markers are typically used to opt in to import hook processing
In these cases, whether or not the standard library is processed doesn't matter, since it will never use the relevant decorator, file extension or in-file marker. You also don't need to worry about subtle start-up bugs, since if the decorator isn't imported, or the relevant import hook isn't installed appropriately, then the code that depends on that happening simply won't run.
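The in-file-marker pattern can be sketched with a custom source loader: only modules whose first line carries the marker are transformed, so the standard library (which never carries it) is untouched. The class name, marker string, and `transform()` helper below are all invented for illustration; to actually activate such a loader you would still need to register it through a path hook.

```python
import importlib.machinery

def transform(source):
    # stand-in for a real source transformation; purely illustrative
    return source.replace("ANSWER", "42")

class MarkerOnlyLoader(importlib.machinery.SourceFileLoader):
    """Illustrative loader: only modules that opt in with an in-file
    marker on their first line are transformed, so everything else
    (including the standard library) compiles normally."""
    MARKER = "# use: my_transform"

    def source_to_code(self, data, path, *, _optimize=-1):
        source = data.decode("utf-8")
        if source.startswith(self.MARKER):
            source = transform(source)
        return compile(source, path, "exec", optimize=_optimize)
```

Because the opt-in is explicit in the file, a module that relies on the transform simply fails to run if the hook isn't installed, rather than silently misbehaving.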
This means the only code transformation cases where determining scope of applicability turns out to be *incidental* complexity are those that are intended to be semantically neutral operations. Maybe you're collecting statistics on opcode frequency, maybe you're actually applying safe optimisations, maybe you're doing something else, but the one thing you're promising is that if the transformation breaks code that works without the transformation applied, then it's a *bug in the transformer*, not the code being transformed.
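The opcode-frequency example is easy to make concrete with the standard `dis` module; a semantically neutral "transformer" of this kind inspects the code without changing it. The helper name is my own; it recurses into nested code objects so functions and comprehensions are counted too.

```python
import collections
import dis

def opcode_frequencies(code):
    """Count opcode names in a code object, recursing into nested
    code objects (functions, comprehensions, class bodies)."""
    counts = collections.Counter(
        instr.opname for instr in dis.get_instructions(code)
    )
    for const in code.co_consts:
        if hasattr(const, "co_code"):  # a nested code object
            counts += opcode_frequencies(const)
    return counts

freqs = opcode_frequencies(compile("x = 1\ny = x + 1\n", "<demo>", "exec"))
```

For statistics like these to mean anything, you genuinely want *all* code processed, stdlib included, which is precisely the scope question at issue.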
In these cases, you *do* care about whether or not the standard library is processed, so you want an easy way to say "I want to process *all* the code, wherever it comes from". At the moment, that easy way doesn't exist, so you either give up, or you mess about with the encodings.py hack.
PEP 511 erases that piece of incidental complexity and says, "If you want to apply a genuinely global transformation, this is how you do it". The fact that we already have decorators and import hooks is why I think PEP 511 can safely ignore the use cases those handle.
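For readers who haven't followed the PEP, a transformer under the proposal is an object with a `name` attribute (used to tag the resulting .pyc files) and an `ast_transformer()` and/or `code_transformer()` method, registered globally via the proposed `sys.set_code_transformers()`. Since that registration API exists only in the proposal, this sketch invokes the transformer by hand instead:

```python
import ast

class NeutralTransformer:
    """Sketch of a transformer shaped the way PEP 511 proposes: a
    'name' attribute plus an ast_transformer() method.  Under the PEP
    it would be registered with sys.set_code_transformers([...]);
    here we apply it manually because that API is only proposed."""
    name = "neutral"

    def ast_transformer(self, tree, context):
        # a real optimizer would rewrite nodes; this one is neutral
        return tree

transformer = NeutralTransformer()
tree = ast.parse("result = 6 * 7")
tree = transformer.ast_transformer(tree, context=None)
ns = {}
exec(compile(tree, "<demo>", "exec"), ns)
```

The point of the global registration is exactly the "process all the code, wherever it comes from" guarantee that decorators and per-extension import hooks can't give you.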
Thank you for the excellent explanation. Can words to this effect be added to the PEP, please?

ChrisA