[Python-Dev] Profile Guided Optimization active by-default
Gregory P. Smith
greg at krypto.org
Mon Aug 24 21:52:37 CEST 2015
On Sat, Aug 22, 2015 at 9:27 AM Brett Cannon <brett at python.org> wrote:
> On Sat, Aug 22, 2015, 09:17 Guido van Rossum <guido at python.org> wrote:
>
>> How about we first add a new Makefile target that enables PGO, without
>> turning it on by default? Then later we can enable it by default.
>
There already is one, and has been for many years: make profile-opt.
I even set up a buildbot for it last year.
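For anyone who hasn't tried it, it's just the normal build dance with a
different target; nothing here beyond what the Makefile already provides:

  ./configure
  make profile-opt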
The problem with the existing profile-opt build in our default Makefile.in
is that it uses a horrible profiling workload (pybench, ugh), so it leaves a
lot of improvement behind.
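From memory, the stock definition is something like the following; treat
the exact path and flags as approximate and check your own tree:

  # approximate default in the Makefile: pybench as the profiling workload
  PROFILE_TASK= ./Tools/pybench/pybench.py -n 2 --with-gc --with-syscheck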
What all Linux distros (Debian/Ubuntu and Red Hat at least; nothing else
matters) do for their Python builds is use profile-opt, but they replace
the profiling workload with a stable subset of the Python unit test suite
itself. Results are much better all around: generally a 20% speedup.
Anyone deploying Python who is *not* using a profile-opt build is wasting
CPU resources.
Whether it should be *the default* or not *is a different question*. The
Makefile is optimized for CPython developers who certainly do not want to
run two separate builds and a profile-opt workload every time they type
make to test out their changes.
But all binary release builds should use it.
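(For context on why that would hurt: profile-opt is roughly a three-step
process. The target and flag names below are from memory, so treat this as
a sketch rather than a copy of the Makefile:

  # 1. build an instrumented interpreter (gcc -fprofile-generate)
  make build_all_generate_profile
  # 2. run the profiling workload so gcc writes out .gcda profile data
  ./python -m test.regrtest    # or whatever $(PROFILE_TASK) is set to
  # 3. rebuild from clean using the recorded profiles (gcc -fprofile-use)
  make build_all_use_profile

That's two full compiles plus a workload run on every invocation, which is
why nobody wants it as the default edit-compile-test cycle.)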
> I agree. Updating the Makefile so it's easier to use PGO is great, but we
> should do a release with it as opt-in and go from there.
>
>> Also, I have my doubts about regrtest. How sure are we that it represents
>> a typical Python load? Tests are often using a different mix of operations
>> than production code.
>
> That was also my question. You said that "it provides the best performance
> improvement", but compared to what? What else was tried? And what
> difference does it make to e.g. a Django app that is trained on its own
> simulated workload compared to using regrtest? IOW, is regrtest displaying
> the best across-the-board performance because it stresses the largest swath
> of Python and thus catches generic patterns in the code, while individuals
> could get better performance with a simulated workload?
>
This isn't something to argue about. Just use regrtest and compare the
before and after with the benchmark suite. It really does exercise things
well. People like to fear that it'll produce code optimized for the test
suite itself or something. No. Python as an interpreter is exercised very
realistically this way, because running the test suite simply means running
a lot of code, and a good variety of code, including the extension modules
that benefit most, such as regexes, pickle, json, xml, etc.
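If you want to check that yourself, the comparison is mechanical. Against a
checkout of the hg.python.org/benchmarks suite it's something like this
(option names from memory; check perf.py --help before trusting them):

  # compare a stock build against a PGO build of the same source
  python perf.py -r -b default /path/to/stock/python /path/to/pgo/python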
Thomas tried the test suite and a variety of other workloads when looking
at what to use at work. The test suite generally works out best; going
beyond that seems to be a wash.
What we tested and settled on for our own builds, after benchmarking at
work, was:

  make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio \
      -x test_gdb test_multiprocessing"
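(For reference: -w re-runs failing tests in verbose mode, -uall,-audio
enables all optional test resources except audio, and -x excludes the tests
named after it.)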
In general, if a test is unreliable or takes an extremely long time, exclude
it for your own sanity. (I'd also kick out test_subprocess on 2.7; we
replaced subprocess with subprocess32 in our build, so that wasn't an issue
for us.)
-gps