[Python-Dev] Program runs in 12s on Python 2.7, but 5s on Python 3.5 -- why so much difference?

Ben Hoyt benhoyt at gmail.com
Tue Jul 18 12:03:36 EDT 2017


Hi folks,

(Not entirely sure this is the right place for this question, but hopefully
it's of interest to several folks.)

A few days ago I posted a note in response to Victor Stinner's articles on
his CPython contributions, noting that I wrote a program that runs in 11.7
seconds on Python 2.7 but only 5.1 seconds on Python 3.5 (on my 2.5 GHz
macOS i7), more than twice as fast. Obviously this is a Good Thing, but I'm
curious as to why there's so much difference.

The program is a pentomino puzzle solver, and it works via code generation,
producing a ton of nested "if" statements, so I believe it exercises the
Python bytecode interpreter heavily. Clearly there have been some
significant interpreter optimizations between 2.7 and 3.5, but I'm curious
which improvements account for most of the difference.

There's a writeup about my program here, with benchmarks at the bottom:
http://benhoyt.com/writings/python-pentomino/

This is the generated Python code that's being exercised:
https://github.com/benhoyt/python-pentomino/blob/master/generated_solve.py
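
To give a flavor, the output is roughly in this style (a simplified,
made-up fragment, not the actual generated code):

    # Simplified, hypothetical fragment in the style of the generated
    # code: straight-line nested "if"s that test and fill cells inline,
    # so nearly all of the work is pure bytecode dispatch.
    def try_place(board):
        if board[0] == 0:
            if board[1] == 0:
                if board[2] == 0:
                    board[0] = board[1] = board[2] = 1
                    return True
        return False

    print(try_place([0] * 12))  # -> True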

For reference, on Python 3.6 it runs in 4.6 seconds (same on the Python 3.7
alpha). This smaller speedup from Python 3.5 to 3.6 was less surprising to
me, given the switch from bytecode to 16-bit wordcode in 3.6.
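
For anyone who wants to poke at the compiled code directly, the dis module
makes the wordcode change easy to see. A minimal sketch (probe is just a
stand-in for the generated nested-"if" style):

    import dis

    def probe(x, y):
        # Tiny stand-in for the generated nested-"if" style.
        if x == 0:
            if y == 0:
                return 1
        return 0

    # On 3.6+ every instruction is a fixed 2-byte unit; on 3.5 and
    # earlier, instructions are 1 or 3 bytes.
    dis.dis(probe)
    print(len(probe.__code__.co_code), "bytes of code")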

I tried using cProfile on both Python versions, but that didn't tell me
much, because the functions being called aren't where the majority of the
time goes. How does one benchmark at a lower level, or otherwise explain
what's going on here?
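
For concreteness, here's a minimal timeit harness for whole-run timings
(the generated_solve import and solve() entry point are assumptions about
the module layout, so adjust as needed):

    # Minimal sketch: time complete solver runs under each interpreter.
    # NOTE: the module/function names here are assumptions.
    import timeit

    from generated_solve import solve

    # Best of five full runs, one solve() call per run.
    best = min(timeit.repeat(solve, number=1, repeat=5))
    print("best of 5: %.2f s" % best)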

Thanks,
Ben