pointless musings on performance

Terry Reedy tjreedy at udel.edu
Tue Nov 24 23:14:15 CET 2009

Chris Rebert wrote:
> On Tue, Nov 24, 2009 at 4:31 AM, Rob Williscroft <rtw at freenet.co.uk> wrote:
>> mk wrote in news:mailman.915.1259064240.2873.python-list at python.org in
>> comp.lang.python:
>>> def pythonic():
>>> def unpythonic():
>>> Decidedly counterintuitive: are there special optimizations for "if
>>> nonevar:" type of statements in cpython implementation?
>> from dis import dis
>> dis( unpythonic )
>> 18          31 LOAD_FAST                0 (nonevar)
>>             34 LOAD_CONST               0 (None)
>>             37 COMPARE_OP               9 (is not)
>>             40 JUMP_IF_FALSE            4 (to 47)
>> dis( pythonic )
>> 11          31 LOAD_FAST                0 (nonevar)
>>             34 JUMP_IF_FALSE            4 (to 41)
> In other words, CPython doesn't happen to optimize `if nonevar is not
> None` as much as it theoretically could (which would essentially
> require a JUMP_IF_NONE opcode). Since CPython isn't known for doing
> fancy optimizations, this isn't surprising.
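
The dis comparison above can be reproduced with a short script. The original function bodies were elided in the quoted thread, so these one-line versions are assumptions; exact opcode names and offsets also vary between CPython versions.

```python
import dis

def pythonic(nonevar=None):
    # Plain truth-value test: load the variable, then jump.
    if nonevar:
        pass

def unpythonic(nonevar=None):
    # Explicit comparison against None: on the CPython versions
    # discussed in the thread, this adds a LOAD_CONST and a compare.
    if nonevar is not None:
        pass

# Print the opcode sequence of each variant for comparison.
for f in (pythonic, unpythonic):
    print(f.__name__)
    for ins in dis.Bytecode(f):
        print(f"  {ins.opname:<22} {ins.argrepr}")
```

(Later CPython releases changed these opcode sequences, so the output will not match the 2009 listing exactly.)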

There is a limit of 256 bytecodes, and I believe fewer than that are 
currently in use. It would seem that adding bytecodes that combine 
current pairs should speed up the interpreter, depending on how often 
each combined pair occurs. People have indeed looked for frequent pairs 
across a corpus of code and proposed additions.
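
A pair-frequency survey of that kind can be sketched in a few lines. The tiny "corpus" here (two pure-Python stdlib functions) is an illustrative assumption, not the corpus anyone actually studied.

```python
import collections
import dis
import json
import posixpath

# Tiny stand-in corpus of pure-Python functions; a real survey
# would disassemble a large body of code.
corpus = [json.dumps, posixpath.join]

pair_counts = collections.Counter()
for func in corpus:
    opnames = [ins.opname for ins in dis.Bytecode(func)]
    # Count adjacent opcode pairs -- candidates for a combined opcode.
    pair_counts.update(zip(opnames, opnames[1:]))

# Show the most frequent pairs in this (toy) corpus.
for (first, second), count in pair_counts.most_common(5):
    print(f"{first} -> {second}: {count}")
```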

However, additional bytecodes enlarge the basic bytecode eval loop, 
possibly (and sometimes actually) leading to more processor cache 
misses and slower execution. At least some proposed non-essential 
additions have been rejected for this reason.

Also, even though CPython-specific bytecodes are outside the language 
definition, and could theoretically be revamped every minor (x.y) 
release, there is a cost to change: rewrite the ast-to-bytecode 
compiler, rewrite dis, rewrite the dis doc, and burden anyone working 
across multiple versions with remembering the differences. So in 
practice, change is minimized and unassigned bytecodes are left open 
for the future.

Optimizations that make better use of a fixed set of bytecodes are a 
different topic. Guido is conservative because, historically, compiler 
optimizations too often turn out not to be exactly equivalent to the 
naive, obviously correct code in some overlooked corner case, leading 
to subtle, occasional bugs. Of course, bytecode changes could also mess 
up current optimizations.

Terry Jan Reedy
