[Speed] Analysis of a Python performance issue

serge guelton serge.guelton at telecom-bretagne.eu
Mon Nov 21 14:39:08 EST 2016


On Sat, Nov 19, 2016 at 05:58:19PM -0800, Kevin Modzelewski wrote:
> I think it's safe to not reinvent the wheel here.  Some searching gives:
> http://perso.ensta-paristech.fr/~bmonsuez/Cours/B6-4/Articles/papers15.pdf
> http://www.cs.utexas.edu/users/mckinley/papers/dcm-vee-2006.pdf
> https://github.com/facebook/hhvm/tree/master/hphp/tools/hfsort

Thanks, Kevin, for the pointers! I'm new to this area of optimization...
another source of fun and weirdness :-$

> Pyston takes a different approach where we pull the list of hot functions
> from the PGO build, ie defer all the hard work to the C compiler.

You're talking about the build of Pyston itself, not the JIT-generated
code, right? In that case, how is it different from a regular

    -fprofile-generate build, followed by several training runs, then -fprofile-use?
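For reference, the workflow I have in mind is roughly the following (the
file and binary names are just placeholders):

    gcc -O2 -fprofile-generate -o app app.c   # instrumented build
    ./app workload1 && ./app workload2        # training runs write .gcda profile data
    gcc -O2 -fprofile-use -o app app.c        # rebuild using the collected profile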

PGO builds should perform better than merely marking some functions as hot,
since they also provide information for better branch prediction, right?
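For comparison, here is roughly what I understand by marking functions as hot
by hand with GCC/Clang attributes and branch hints; the function is made up
for illustration, and a PGO build would derive this kind of information (plus
real branch probabilities) on its own:

    /* Hypothetical example: manual hints that PGO would otherwise infer. */
    __attribute__((hot))            /* optimize aggressively, group into the hot text section */
    static long sum_positive(const long *values, long n)
    {
        long total = 0;
        for (long i = 0; i < n; ++i) {
            /* __builtin_expect marks this branch as likely taken */
            if (__builtin_expect(values[i] > 0, 1))
                total += values[i];
        }
        return total;
    }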


