Profiling the interpreter from within

gb at gb at
Fri Oct 5 02:11:06 CEST 2001

While thinking about where the time goes in the interpreter, it occurred
to me that it should be possible to build a small extension that
allows profiling of the interpreter and extensions from within Python.

The extension, I hallucinate, would turn on statistical profiling
using the same interrupt mechanism gprof uses to get its times: a
timer is set to go off 100 times per second, and on each interrupt
the handler examines the interrupted return address and increments a
counter in an array, recording a "hit" on that address (actually a
range of addresses). By adding up the counts for the range of
addresses belonging to a function, you get an estimate of the time
spent in that function.
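As a rough in-Python analog of that scheme (a sketch only, not the proposed C extension): the `signal` module exposes the same `ITIMER_PROF` timer, and the handler is passed the frame that was executing when the timer fired, so one can count samples per function. All names here are illustrative.

```python
import collections
import signal

hits = collections.Counter()

def _sample(signum, frame):
    # frame is whatever Python frame was executing when the timer fired;
    # counting by code-object name is the frame-level analog of counting
    # hits on a machine address range.
    hits[frame.f_code.co_name] += 1

signal.signal(signal.SIGPROF, _sample)
# ITIMER_PROF counts CPU time, firing ~100 times per second of it
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)

def busy():
    # a deliberately CPU-bound loop so samples accumulate
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

busy()
signal.setitimer(signal.ITIMER_PROF, 0)  # stop sampling

for name, count in hits.most_common():
    print(name, count)
```

On Unix this prints a crude per-function sample count, with `busy` dominating; it only sees Python frames, whereas the imagined extension would see C addresses and so cover extensions too.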

The second part of this imaginary profiler would be some Python code
to enumerate the callables that are associated with C functions and
to collect the symbols from the executable (possibly using nm?). Given
this info (assuming it could be obtained), the profiler could print the
time spent in each function.
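The address-to-function step could look something like this: given a sorted list of (start address, name) pairs parsed from something like `nm -n` output, each sampled address is attributed to the function whose range contains it, and per-address hit counts are summed into per-function totals. The symbol table and hit counts below are made up for illustration.

```python
import bisect

# Hypothetical sorted symbol table: (start_address, function_name),
# as one might parse it from `nm -n` on the interpreter binary.
symbols = [
    (0x1000, "PyEval_EvalFrame"),
    (0x2400, "PyObject_GetAttr"),
    (0x3000, "PyDict_GetItem"),
]
starts = [addr for addr, _ in symbols]

def resolve(address):
    """Name of the function whose address range contains `address`."""
    i = bisect.bisect_right(starts, address) - 1
    return symbols[i][1] if i >= 0 else "<unknown>"

# Made-up per-address hit counts from the sampling half
address_hits = {0x1010: 5, 0x1FFC: 2, 0x2408: 7}

totals = {}
for addr, count in address_hits.items():
    name = resolve(addr)
    totals[name] = totals.get(name, 0) + count

for name, count in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, count)
```

Here the two hits at 0x1010 and 0x1FFC both fall in `PyEval_EvalFrame`'s range and get summed, which is exactly the "adding up the counts for the range of addresses included in a function" step.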

Is there some fatal problem with this scheme? I may try it when I get
time. I've always been a big fan of knowing where the time goes.


More information about the Python-list mailing list