Currently, the Python profiler works at the function or method level. This is good for identifying slow functions, but not so good for identifying things like slow for loops inside larger functions. You can use timers, but inserting many timers is cumbersome and makes direct comparison between potential hotspots difficult. And although we should, in practice it isn't always feasible to refactor our code so there is only one major hotspot per function or method.

A solution to this is a profiler that shows the execution time for each line (cumulative and per-call). There is already the third-party line-profiler, but at least to me this seems like the sort of common functionality that belongs in the standard library. line-profiler also requires the use of a decorator, since it has a large performance penalty, and perhaps a more deeply integrated line-by-line profiler could avoid this.
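To illustrate the manual-timer workaround mentioned above, here is a minimal sketch (the function and the block names are hypothetical) showing the per-block bookkeeping that a line-by-line profiler would make unnecessary:

```python
import time

def process(data):
    # Every candidate hotspot needs its own start/stop timestamps,
    # which quickly clutters the function body.
    t0 = time.perf_counter()
    squares = [x * x for x in data]   # hotspot candidate 1
    t1 = time.perf_counter()
    total = sum(squares)              # hotspot candidate 2
    t2 = time.perf_counter()

    # Timings must then be collected and compared by hand.
    timings = {"squares": t1 - t0, "sum": t2 - t1}
    return total, timings

total, timings = process(range(10_000))
```

With a line-level profiler, none of this instrumentation would be needed; the per-line times would come directly from the profiler's report.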