[Python-Dev] proposal+patch: sys.gettickeraccumulation()
Raymond Hettinger
python at rcn.com
Mon Dec 6 01:05:56 CET 2004
> To restate my original goal:
>
> I am looking for a simple way to answer the question: How much of a
> speedup can I expect if I reimplement a piece of Python code in C or
> C++?
. . .
> Ratios (rounded to 3 decimals):
> 16.9193/16.8831=1.002
> 5.8287/5.9553 =0.979
> 1.5944/1.5914 =1.002
> 10.7931/10.8149=0.998
> 5.2865/5.2528 =1.006
> 11.6086/11.6437=0.997
> 10.0992/11.0248=0.916
> 27.6830/27.6960=1.000
>
> Therefore I'd argue that the runtime penalty for the one additional
> long long increment in ceval.c (_Py_TickerAccumulation++) is in the
> noise.
The measurements are too imprecise to draw any worthwhile conclusions.
Try running:
python timeit.py -r9 "pass"
That ought to give more stable measurements.
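For reference, the same measurement can be driven from the timeit module
directly; a minimal sketch (the statement and the repeat/number values
here are arbitrary, chosen only to illustrate the idea):

```python
import timeit

# Repeat the timing several times; the minimum is the most stable
# figure, since every other measurement is inflated by system noise.
timings = timeit.repeat("pass", repeat=9, number=1_000_000)
print(min(timings))
```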
The proposed "analysis tool" offers no benefit to the majority of
Python users. Even a 1% hit is not worth it.
> I am only interested in counting the
> iterations of the interpreter loop. However, the _Py_Ticker decrements
> in longobject.c are not inside the interpreter loop, but in C loops!
> This means _Py_Ticker is useless for my purposes. Therefore I
> decoupled _Py_Ticker and _Py_TickerAccumulation.
Why add this to everyone's build? Just put it in when doing your own
analysis.
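For what it's worth, this kind of one-off analysis does not strictly
require a patched build: on modern CPython (3.7 and later), sys.settrace
can deliver per-opcode events via f_trace_opcodes, which is roughly what
_Py_TickerAccumulation would count. A hedged sketch (count_opcodes and
py_sum are hypothetical helpers, not part of any patch):

```python
import sys

def count_opcodes(func, *args):
    """Run func(*args) and count the bytecode instructions executed
    in Python frames -- a rough stand-in for the proposed
    _Py_TickerAccumulation counter (requires CPython >= 3.7)."""
    count = 0
    def tracer(frame, event, arg):
        nonlocal count
        frame.f_trace_opcodes = True  # request per-opcode events
        if event == 'opcode':
            count += 1
        return tracer
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, count

def py_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

result, ticks = count_opcodes(py_sum, 100)
print(result, ticks)
```

Note that, like _Py_TickerAccumulation, this counts only interpreter-loop
iterations: time spent inside C functions contributes nothing, which is
exactly the distinction the patch author wanted to measure.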
The eval loop already pays a penalty for Py2.4's extra function tracing
code. And ceval.c has been cluttered with #ifdefs for hardware
timestamps. And there have been other additions for signal handling and
whatnot. This is enough.
> A lot of fallout caused by the simple idea to add one innocent
> line to ceval.c.
I do not find it to be innocent. A great deal of work was expended over
the years just trying to eliminate a small step or two from the
eval-loop. Those efforts should not be discarded lightly.
-1 on adding it directly.
-0 on adding it as a #ifdeffed compile option (with the default being to
exclude it).
Raymond Hettinger