On 5/22/14 9:49 AM, Skip Montanaro wrote:
> On Thu, May 22, 2014 at 8:05 AM, Chris Angelico <rosuav@gmail.com> wrote:
>
> Correct me if I'm wrong, but as I understand it, the problem is that the peephole optimizer eliminated an entire line of code. Would it be possible to have it notice when it merges two pieces from different lines, and somehow mark that the resulting bytecode comes from both lines? That would solve the breakpoint and coverage problems simultaneously.
>
> It seems to me that Ned has revealed a bug in the peephole optimizer. It zapped an entire source line's worth of bytecode, but failed to delete the relevant entry in the line number table of the resulting code object. If I had my druthers, that would be the change I'd prefer.
I think it is the nature of optimization that it will destroy useful information. I don't think it will always be possible to retain enough back-mapping that the optimized code can be understood as if it had not been optimized. For example, the debugging issue would still be present: if you run pdb and set a breakpoint on the "continue" line, it will never be hit. Even if the optimizer cleaned up after itself perfectly (in fact, especially so), that breakpoint would still not be hit. You simply cannot reason about optimized code without mentally tracking the transformations that have been applied.

The whole point of this proposal is to recognize that there are times (debugging, coverage measurement) when optimizations are harmful, and to avoid them.
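To make the "continue" case concrete, here is a small sketch (the `sample` function is my own illustration, not code from the thread). `dis.findlinestarts()` reports which source lines actually own bytecode in the compiled function; whether the line holding `continue` shows up there depends on the CPython version's peephole optimizer, which is exactly the problem for breakpoints and coverage:

```python
import dis

# Hypothetical example: a "continue" that the peephole optimizer may
# fold into the preceding conditional jump, leaving no bytecode
# attributed to its source line.
def sample(items):
    kept = []
    for x in items:
        if x == 2:
            continue  # a pdb breakpoint here may never fire
        kept.append(x)
    return kept

# Source lines that actually have bytecode attributed to them.
# Depending on the CPython version, the "continue" line may be absent.
lines_with_code = sorted({line for _, line in dis.findlinestarts(sample.__code__)
                          if line is not None})
print(lines_with_code)
```

If the `continue` line is missing from that set, pdb has no instruction to stop on, and a coverage tool will report the line as never executed even though the loop took that path.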
That said, I think Ned's proposal is fairly simple. As for the increased testing load, I think the extra cost would be the duplication of the buildbots (or the adjustment of their setup to test with -O and -O0 flags). Is it still the case that -O effectively does nothing (maybe only eliding __debug__ checks)?
Skip

_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/