[Python-Dev] Profile Guided Optimization active by-default

Stewart, David C david.c.stewart at intel.com
Mon Aug 24 23:48:02 CEST 2015

(Sorry about the format here - I honestly just subscribed to Python-dev so
be gentle ...)

> Date: Sat, 22 Aug 2015 11:25:59 -0600
> From: Eric Snow <ericsnowcurrently at gmail.com>

>On Aug 22, 2015 9:02 AM, "Patrascu, Alecsandru"
><alecsandru.patrascu at intel.com> wrote:
>[snip]
>> For instance, as shown from attached sample performance results from
>> the Grand Unified Python Benchmark, >20% speed up was

Eric, I'm the manager of Intel's server scripting language optimization
team, so I'll answer from that perspective.

>Are you referring to the tests in the benchmarks repo? [1] How does the
>real-world performance improvement compare with other languages you are
>targeting for optimization?

Yes, we're using [1].

We're seeing up to 10% improvement on Swift (a project in OpenStack) on
some architectures using the ssbench workload, which is as close to
real-world as we can get. Relative to other languages we target, this is
quite good, actually. For example, Java's HotSpot JIT is driven by
profiling at its core, so it's hard to distinguish the value profiling
alone brings. We have seen a nice boost on PHP running WordPress using
PGO, but not as impressive as on Python and Swift.

By the way, I think letting the compiler optimize the code is a good
strategy. Not the only strategy we want to use, but it seems like one we
could do more of.

> And thanks for working on this!  I have several more questions: What
>sorts of future changes in CPython's code might interfere with

We're also looking at other source-level optimizations, like the CGOTO
patch Vamsi submitted in June. Some of these may reduce the value of PGO,
but in general it's nice to let the compiler do some optimization for you.

> What future additions might stand to benefit?

It's a good question. Our intent is to continue evaluating and measuring
different training workloads for improvement. In other words, as with any
good open source project, this patch already improves things a lot and
should be accepted upstream, but we will keep making it better.

> What changes in existing code might improve optimization opportunities?

We intend to continue to work on source-level optimizations and measuring
them against GUPB and Swift.

> What is the added maintenance burden of the optimizations on CPython,

I think the answer is none. Our goal was to introduce performance
improvements without adding to maintenance effort.

>What is the performance impact on non-Intel architectures?  What about
>older Intel architectures?  ...and future ones?

We should modify the patch to make it Intel-only, since we're not
evaluating non-Intel architectures. Unfortunately for us, I suspect that
older Intel CPUs might benefit more than current and future ones. Future
architectures will benefit from other enabling work we're planning.

> What is Intel's commitment to supporting these (or other) optimizations
>in the future?  How is the practical EOL of the optimizations managed?

As with any corporation's budgeting process, it's hard to know exactly
what my managers will let me spend money on. :-) But we're definitely
convinced of the value of dynamic languages for servers and the need to
work on optimization. As far as I have visibility, it appears to be
holding true.

> Finally, +1 on adding an opt-in Makefile target rather than enabling
>the optimizations by default.

Frankly, since Ubuntu has been building Python this way for the past two
years, I think it's fine to make it opt-in, but eventually I hope it can
become the default once we're happy with it.

> Thanks again! -eric
