[Python-ideas] Python-ideas Digest, Vol 90, Issue 30
M.-A. Lemburg
mal at egenix.com
Fri May 23 10:25:29 CEST 2014
On 23.05.2014 04:07, Terry Reedy wrote:
> On 5/22/2014 11:40 AM, M.-A. Lemburg wrote:
>> On 22.05.2014 17:32, Ned Batchelder wrote:
>>>
>>> The whole point of this proposal is to recognize that there are times (debugging, coverage
>>> measurement) when optimizations are harmful, and to avoid them.
>>
>> +1
>>
>> It's regular practice in other languages to disable optimizations
>> when debugging code. I don't see why Python should be different in this
>> respect.
>>
>> Debuggers, testing, coverage and other such tools should be able to
>> invoke a Python runtime mode that lets the compiler work strictly
>> by the book, without applying any kind of optimization.
>>
>> This used to be the default in Python,
>
> I believe that Python has always had an 'as if' rule that allows more or less 'hidden'
> optimizations, as long as the net effect of a statement is as defined.
I was referring to the times before the peephole optimizer was
introduced (Python 2.3 and earlier).
What's important here is the difference between the byte code
the compiler generates by simply following its rule book, and the
version that results from running an optimizer on that byte code
(or on the AST, before the transform to byte code).
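To make that difference concrete, here is a small sketch using the
stdlib `dis` module: CPython's peephole optimizer folds `1 + 2` into
the single constant `3` before the byte code is emitted, so the rule-book
LOAD/LOAD/ADD sequence never appears (the exact disassembly varies
across CPython versions):

```python
import dis

# Compile a trivial statement; the peephole optimizer folds the
# constant expression 1 + 2 at compile time.
code = compile("x = 1 + 2", "<example>", "exec")

# The folded result appears in the code object's constants; the
# original operands 1 and 2 do not.
print(code.co_consts)

# The disassembly shows a single LOAD_CONST for 3 instead of
# LOAD_CONST 1 / LOAD_CONST 2 / an add instruction.
dis.dis(code)
```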
Note that I'm not talking about optimizations applied in the
VM-level implementations of the byte codes, and I don't think
Ned was either.
> 1. By the book, "a,b = b,a" means create a tuple from b,a, unpack the contents to a and b, and
> delete the reference to the tuple. An obvious optimization is to not create the tuple. As I
> remember, this was once tried out before tuple unpacking was generalized to iterable unpacking. I
> don't know if CPython was ever released with that optimization, or if other implementations have or
> do use it. By the 'as if' rule, it does not matter, even though an allocation tracer (such as the
> one added to 3.4?) might detect the non-allocation.
This is an implementation detail of the VM. The code generated
by the compiler is byte code saying rotate the top two items
on the stack (ROT_TWO).
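You can verify this by disassembling the statement: for a two-element
swap, CPython emits a plain stack rotation and never materializes the
tuple (the opcode is ROT_TWO in older versions and SWAP from 3.11 on,
so the sketch below only checks that no tuple is built):

```python
import dis

# Compile the swap statement and collect the opcode names.
code = compile("a, b = b, a", "<swap>", "exec")
ops = [ins.opname for ins in dis.get_instructions(code)]

# No BUILD_TUPLE / UNPACK_SEQUENCE pair: the compiler emits a
# stack rotation (ROT_TWO up to 3.10, SWAP from 3.11) instead.
print(ops)
```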
> 2. The manual says
> '''
> @f1(arg)
> @f2
> def func(): pass
>
> is equivalent to
>
> def func(): pass
> func = f1(arg)(f2(func))
> '''
> The equivalent is 'as if', in net effect, not in the detailed process. CPython actually executes (or
> at least did at one time)
>
> def <internal reference>(): pass
> func = f1(arg)(f2(<internal reference>))
>
> Ignore f1. The difference can be detected when f2 is called by examining the appropriate namespace
> within f2. When someone filed an issue about the 'bug' of 'func' never being bound to the unwrapped
> function object, Guido said that he wanted to change neither the doc nor the implementation. (Sorry,
> I cannot find the issue.)
I'd put that under documentation bug, if at all :-)
Note that the function func does get the name "func". It's just
not bound to that name in the intermediate step, since the function
object serves as the parameter to the function f2.
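A small sketch of this: the function object already carries its name
when it is handed to the decorator, but the name binding in the
enclosing namespace only happens after the decorator returns (the
decorator name f2 below just mirrors the thread's example):

```python
# Record what the decorator sees at decoration time.
observed = {}

def f2(fn):
    observed["name"] = fn.__name__            # object is already named "func"
    observed["bound"] = "func" in globals()   # but the name is not bound yet
    return fn

@f2
def func():
    return 42

print(observed)
```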
> 3. "a + b" is *usually* equivalent to "a.__class__.__add__(b)" or possibly
> "b.__class__.__radd__(a)". However, my understanding is that if a and b are ints, a 'fast path'
> optimization is applied that bypasses the int.__add__ slot wrapper. If so, a call tracer could notice
> the difference and, if unaware of such optimizations, falsely report a problem.
Again, this is an optimization in the implementation of the
byte code, not one applied by the compiler. There are quite
a few more such optimizations going on in the VM.
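Semantically the fast path is invisible: `a + b` for plain ints still
behaves like a call to the slot wrapper it bypasses, and subclasses are
not eligible for it. A sketch of that equivalence (of the observable
behaviour, not of the C-level dispatch; LoudInt is just an illustrative
name):

```python
a, b = 7, 5

# The fast path gives the same result as the slot wrapper it bypasses.
print(a + b, int.__add__(a, b))

# The protocol is still honoured: an unsupported right operand makes
# __add__ return NotImplemented, and + then falls back to __radd__.
print(int.__add__(a, "x"))

class LoudInt(int):
    def __add__(self, other):
        # Subclasses bypass the fast path, so this override always runs.
        return ("traced", int(self) + other)

print(LoudInt(7) + 5)
```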
> 4. Some Python implementations delay object destruction. I suspect that some (many?) do not really
> destroy objects (zero out the memory block).
I don't see what this has to do with the compiler. Isn't that
just an implementation detail of how GC works on a particular
Python platform?
>> but there's definitely a need for being able to run Python in
>> a debugger without having it skip perfectly valid code lines
>> (even if they are no-ops).
>
> This is a different issue from 'disable the peephole optimizer'.
For me, a key argument for having a runtime mode without
compiler optimizations is that the compiler then gains the
freedom to apply more aggressive optimizations.
Tools will no longer have to adapt to whatever optimizations
are added with each new Python release, since there will be
a defined non-optimized runtime mode they can use as basis for
their work.
The net result would be faster Pythons and better working debugging
tools (well, at least that's the hope ;-).
--
Marc-Andre Lemburg
eGenix.com
Professional Python Services directly from the Source (#1, May 23 2014)
>>> Python Projects, Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
________________________________________________________________________
::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::
eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/