[Python-ideas] Python-ideas Digest, Vol 90, Issue 30

Ethan Furman ethan at stoneleaf.us
Thu May 22 17:43:35 CEST 2014


On 05/22/2014 08:32 AM, Ned Batchelder wrote:
> On 5/22/14 9:49 AM, Skip Montanaro wrote:
>> On Thu, May 22, 2014 at 8:05 AM, Chris Angelico wrote:
>>>
>>> Correct me if I'm wrong, but as I understand it, the problem is that
>>> the peephole optimizer eliminated an entire line of code. Would it be
>>> possible to have it notice when it merges two pieces from different
>>> lines, and somehow mark that the resulting bytecode comes from both
>>> lines? That would solve the breakpoint and coverage problems
>>> simultaneously.
>>
>> It seems to me that Ned has revealed a bug in the peephole optimizer.
>> It zapped an entire source line's worth of bytecode, but failed to
>> delete the relevant entry in the line number table of the resulting
>> code object. If I had my druthers, that would be the change I'd
>> prefer.
>
> I think it is the nature of optimization that it will destroy useful information.  I don't think it will always be
> possible to retain enough back-mapping that the optimized code can be understood as if it had not been optimized.   For
> example, the debug issue would still be present: if you run pdb and set a breakpoint on the "continue" line, it will
> never be hit.  Even if the optimizer cleaned up after itself perfectly (in fact, especially so), that breakpoint will
> still not be hit.  You simply cannot reason about optimized code without having to mentally understand the
> transformations that have been applied.
>
> The whole point of this proposal is to recognize that there are times (debugging, coverage measurement) when
> optimizations are harmful, and to avoid them.

Having read through the issue on the tracker, I find myself swayed towards Ned's point of view.  However, I do still 
agree with Raymond that a full-fledged command-line switch is overkill, especially since unoptimized runs serve very 
special cases (debugging, coverage, curiosity, learning about optimization, etc.).

If we had a sys flag that could be set before a module was loaded, then coverage, pdb, etc., could use it to recompile 
the source, skip saving a .pyc file, and move forward.  For debugging purposes, perhaps a `__no_optimize__ = True` or 
`from __future__ import no_optimize` would help in those cases where you're dropping into the debugger.
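
For example, such a tool could bypass the .pyc cache along these lines.  This is only a sketch: compile()'s existing optimize parameter controls the -O level (assert and docstring stripping) and does not, by itself, switch off the peephole pass -- the hypothetical flag would have to reach deeper than this:

```python
import types

def load_unoptimized(name, path):
    """Execute a module from source without writing a .pyc.

    Assumption: optimize=0 stands in for the proposed "no optimize"
    mode; it disables -O level optimizations only, not the peephole
    optimizer itself.
    """
    with open(path) as f:
        source = f.read()
    code = compile(source, path, "exec", optimize=0)
    module = types.ModuleType(name)
    module.__file__ = path
    exec(code, module.__dict__)
    return module
```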

The dead-code elimination still has a bug to be fixed, though: if a line has been optimized away, trying to set a 
breakpoint at it should fail.
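
Concretely, a debugger could validate a requested breakpoint against the code object's line table and refuse any line that no bytecode maps to.  A sketch (the helper name is mine, not anything pdb provides):

```python
import dis

def valid_breakpoint_lines(code):
    # Collect every line that begins bytecode in this code object or
    # any nested one (functions, comprehensions, class bodies, ...).
    lines = set()
    todo = [code]
    while todo:
        co = todo.pop()
        lines.update(ln for _, ln in dis.findlinestarts(co)
                     if ln is not None)
        todo.extend(c for c in co.co_consts if hasattr(c, "co_code"))
    return lines

src = "def f():\n    x = 1\n    return x\n"
mod_code = compile(src, "<mod>", "exec")
print(sorted(valid_breakpoint_lines(mod_code)))
```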

--
~Ethan~
