On 5/23/2014 4:25 AM, M.-A. Lemburg wrote:
> I believe that Python has always had an 'as if' rule that allows more or less 'hidden' optimizations, as long as the net effect of a statement is as defined.
> I was referring to the times before the peephole optimizer was introduced (Python 2.3 and earlier).
> What's important here is to look at the difference between the byte code the compiler generates by simply following its rule book and the byte code that results from running an optimizer on it, or even on the AST before it is transformed to byte code.
I have tried to say that the 'rule book' at a particular stage is not a fixed thing. There are several transformations from source to CPython bytecode, and their order and grouping are somewhat a matter of convenience. However, leave that aside. What Ned wants, and what Guido has supported, is that there be an option to get bytecode that is friendly to execution analysis. They can decide what constraints that places on the end product, and therefore on the multiple transformation processes.
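As a concrete illustration of the gap between the rule-book translation and the optimized result (not part of the original thread): CPython's compiler constant-folds simple expressions before emitting bytecode, which is easy to observe with the stdlib `dis` module.

```python
import dis

# The naive rule-book translation of "x = 1 + 2" would load 1, load 2,
# and emit an add instruction. CPython's optimizer folds the expression
# at compile time, so the code object carries the constant 3 instead.
code = compile("x = 1 + 2", "<example>", "exec")
print(code.co_consts)   # the folded constant 3 is already present
dis.dis(code)           # the disassembly shows a single LOAD_CONST, no add
```

This is exactly the kind of transformation a coverage or debugging tool has to be aware of, since the executed bytecode no longer mirrors the source expression step by step.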
> For me, a key argument for having a runtime mode without compiler optimizations is that the compiler gains more freedom to apply more aggressive optimizations.
> Tools will no longer have to adapt to whatever optimizations are added with each new Python release, since there will be a defined non-optimized runtime mode they can use as a basis for their work.
Stability is certainly a useful constraint.
> The net result would be faster Pythons and better-working debugging tools (well, at least that's the hope ;-).
Good point. It appears that rethinking the current -O and -OO options will help.

-- Terry Jan Reedy
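For readers outside the thread, the current behavior of those options is easy to demonstrate: under -O, `__debug__` is False and `assert` statements are compiled away entirely (and -OO additionally discards docstrings). A small sketch using a subprocess:

```python
import subprocess
import sys

# Run a child interpreter with -O: __debug__ becomes False and the
# assert statement is stripped at compile time, so it never fires.
result = subprocess.run(
    [sys.executable, "-O", "-c", "print(__debug__); assert False"],
    capture_output=True, text=True,
)
print(result.stdout.strip())   # "False" -- and the process exits cleanly
```

This all-or-nothing coupling of `assert` removal, `__debug__`, and (with -OO) docstring stripping is part of what makes the current flags awkward as a basis for a defined non-optimized mode.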