[Numpy-discussion] Profiling line-by-line
Arnd Baecker
arnd.baecker at web.de
Thu Jul 20 03:06:35 EDT 2006
On Wed, 19 Jul 2006, David Grant wrote:
> Is there any way to do line-by-line profiling in Python? The profiling
> results can tell me how much time is spent in all functions, but within a
> given function I can't get any idea of how much time was spent on each line.
> For example, in the example below, I can see that graphWidth.py is taking
> all the time, but there are many lines of code in graphWidth.py that aren't
> function calls, and I have no way of knowing which lines are the
> bottlenecks. I'm using hotshot currently, by the way.
>
>  ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>       1    0.215    0.215    0.221    0.221 graphWidth.py:6(graphWidth)
>      27    0.001    0.000    0.003    0.000 oldnumeric.py:472(all)
>      26    0.002    0.000    0.002    0.000 oldnumeric.py:410(sum)
>      26    0.001    0.000    0.002    0.000 oldnumeric.py:163(_wrapit)
>      26    0.001    0.000    0.001    0.000 oldnumeric.py:283(argmin)
>      26    0.000    0.000    0.000    0.000 numeric.py:111(asarray)
>       0    0.000             0.000          profile:0(profiler)
>
> Thanks,
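(Editor's note: for per-line timings as asked above, one option that needs no
external tools is to build a crude line profiler on the interpreter's standard
sys.settrace hook, which fires an event for every executed source line. This is
a minimal sketch, not a production profiler; the function names are made up for
illustration, with graph_width standing in for the graphWidth in the question.)

```python
import sys
import time
from collections import defaultdict

def profile_lines(func, *args, **kwargs):
    """Crude per-line profiler: attribute wall-clock time to each source
    line of `func` using sys.settrace 'line' events."""
    code = func.__code__
    line_times = defaultdict(float)   # lineno -> accumulated seconds
    hits = defaultdict(int)           # lineno -> number of executions
    prev = [None, 0.0]                # previous line number and timestamp

    def tracer(frame, event, arg):
        if frame.f_code is not code:
            return None               # ignore frames of other functions
        now = time.perf_counter()
        if event == 'line':
            if prev[0] is not None:
                line_times[prev[0]] += now - prev[1]
            hits[frame.f_lineno] += 1
            prev[0], prev[1] = frame.f_lineno, now
        elif event == 'return':
            if prev[0] is not None:
                line_times[prev[0]] += now - prev[1]
            prev[0] = None
        return tracer                 # keep tracing inside this frame

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return result, dict(line_times), dict(hits)

def graph_width(n):
    # hypothetical stand-in for the graphWidth function in the question
    total = 0
    for i in range(n):
        total += i
    return total

result, line_times, hits = profile_lines(graph_width, 100000)
for lineno in sorted(hits):
    print("line %d: %6d hits, %.4f s" % (lineno, hits[lineno], line_times[lineno]))
```

The tracing overhead is large, so the absolute numbers are inflated, but the
relative per-line costs usually point at the bottleneck.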
You might give hotshot2kcachegrind a try.
See
http://mail.enthought.com/pipermail/enthought-dev/2006-January/001075.html
for more details.
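(Editor's note: the conversion step assumes you have first dumped a profile to
disk. hotshot is Python 2 only; a minimal sketch of the same workflow with
cProfile, its stdlib counterpart, is below. The function name is a made-up
stand-in, and pyprof2calltree is the assumed converter for cProfile dumps,
playing the role hotshot2kcachegrind plays for hotshot dumps.)

```python
import cProfile
import pstats

def graph_width_demo(n):
    # hypothetical stand-in for the graphWidth function in the question
    total = 0
    for i in range(n):
        total += i * i
    return total

prof = cProfile.Profile()
prof.runcall(graph_width_demo, 10000)
prof.dump_stats("graphwidth.prof")

# Inspect in-process; alternatively a converter (e.g. pyprof2calltree)
# can turn the dump into KCachegrind's calltree format for browsing.
stats = pstats.Stats("graphwidth.prof")
stats.sort_stats("cumulative").print_stats(3)
```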
The screenshots
http://www.physik.tu-dresden.de/~baecker/tmp/bench_traits/kcachegrind_screenshot.png
http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindShot4
might give you an idea of how things will look.
More importantly, note that profiling in combination
with ufuncs seems problematic.
See this thread (unfortunately split into several pieces;
I am not sure I got all of them):
http://thread.gmane.org/gmane.comp.python.numeric.general/5309/focus=5309
http://thread.gmane.org/gmane.comp.python.numeric.general/5311/focus=5316
http://thread.gmane.org/gmane.comp.python.numeric.general/5337/focus=5337
I always wanted to write this up for the wiki, but "real work"
is interfering too strongly at the moment ;-).
Good luck with profiling,
Arnd