Benchmarking some modules - strange result
__peter__ at web.de
Sun Jan 25 13:24:40 CET 2015
Dan Stromberg wrote:
> I've been benchmarking some Python modules that are mostly variations
> on the same theme.
> For simplicity, let's say I've been running the suite of performance
> tests within a single interpreter - so I test one module thoroughly,
> then move on to the next without exiting the interpreter.
> I'm finding that if I prune the list of modules down to just the best
> performers, I get pretty different results - what was best no longer
> is. This strikes me as strange.
> I'm about ready to rewrite things to run each individual test in a
> fresh interpreter. But is there a better way?
You could run combinations of two modules in the same interpreter to see if
there are specific modules that slow down the following module.
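Something along these lines, again with made-up names and an assumed
benchmark() entry point:

    import itertools
    import subprocess
    import sys

    MODULES = ["module_a", "module_b", "module_c"]  # made-up names

    # Run the first module's benchmark, then time the second in the
    # same interpreter; compare the result with the second module's
    # solo time from a clean run.
    PAIR = ("import timeit; "
            "timeit.timeit('{a}.benchmark()', setup='import {a}', "
            "number=1000); "
            "print(timeit.timeit('{b}.benchmark()', setup='import {b}', "
            "number=1000))")

    for a, b in itertools.permutations(MODULES, 2):
        output = subprocess.check_output(
            [sys.executable, "-c", PAIR.format(a=a, b=b)])
        print("%s after %s: %s" % (b, a, output.decode().strip()))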
If you can identify such modules, you could then look into their code and
try to find the cause of the slowdown.
It requires some work, but the results might be interesting to the wider
public -- many real-world applications make do with a single interpreter ;)