[pypy-dev] playing with gcc
Michael Hudson
mwh at python.net
Fri Oct 21 10:44:19 CEST 2005
I spent a while compiling some pypy-c source with different
optimization options and running pystone (on an iMac with a 1.8 GHz G5
and 1.5 GB of RAM).
Here's a table of results:
(Each entry gives the pystone speed of the column's build relative
to the row's, so bigger numbers mean the column is faster.)

      0     1     2     3     3p    s     sp    c
0   1.00  3.06  3.08  3.08  3.17  3.12  3.25  45.39
1   0.33  1.00  1.01  1.01  1.04  1.02  1.06  14.82
2   0.32  0.99  1.00  1.00  1.03  1.01  1.05  14.72
3   0.32  0.99  1.00  1.00  1.03  1.01  1.05  14.74
3p  0.32  0.97  0.97  0.97  1.00  0.99  1.02  14.31
s   0.32  0.98  0.99  0.99  1.02  1.00  1.04  14.53
sp  0.31  0.94  0.95  0.95  0.98  0.96  1.00  13.98
c   0.02  0.07  0.07  0.07  0.07  0.07  0.07   1.00
Here's a key:
0 : -O0, aka "no optimisation"
1 : -O1
2 : -O2
3 : -O3
3p: -O3 -fbranch-probabilities
    (I compiled with -fprofile-arcs and ran pystone to generate the branch info)
s : -Os, aka "optimize for size"
sp: -Os -fbranch-probabilities
c : cpython for reference
As you can see, you definitely want *some* optimization but it doesn't
make a great deal of difference what level you choose. That said, -Os
gives the best results and -fbranch-probabilities gives a ~5% gain.
I timed each compilation, but forgot to write the results down
anywhere. There weren't vast differences at any rate (they all took
~7 minutes, not too bad). Of course, using -fbranch-probabilities
requires you to compile everything twice.
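For reference, the two-pass workflow looks roughly like this (a sketch
only: the source file and workload names are illustrative, and on newer
gccs -fprofile-generate/-fprofile-use cover the same ground):

```shell
# Pass 1: build an instrumented binary that records branch counts.
gcc -O3 -fprofile-arcs -o pypy-c pypy-c.c
# Run a representative workload to write out the profile data files.
./pypy-c pystone.py
# Pass 2: rebuild, letting gcc use the recorded branch probabilities.
gcc -O3 -fbranch-probabilities -o pypy-c pypy-c.c
```

The second build only helps to the extent that the profiling run
resembles the real workload, which is why I profiled with pystone
itself.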
The machine wasn't totally quiet while I ran these tests, but fairly so.
Cheers,
mwh
--
What the semicolon's anxious supporters fret about is the tendency
of contemporary writers to use a dash instead of a semicolon and
thus precipitate the end of the world.
-- Lynne Truss, "Eats, Shoots & Leaves"