Hi,
I released perf 0.7 (quickly followed by a 0.7.1 bugfix): http://perf.readthedocs.io/
I wrote this new version to collect more data in each process. It now reads (and stores) the CPU config, CPU temperature, CPU frequency, system load average, etc. Later we can add, for example, the process peak RSS or other useful metrics.
Oh, and the timestamp is now stored per process (run); again, it's no longer global. I noticed a temporary slowdown which might be caused by a cron task, but I'm not sure yet. At least, timestamps should help to debug such issues.
I added many CPU metrics because I wanted to analyze why *sometimes* a benchmark suddenly becomes 50% slower (up to 100% slower). It may be related to the CPU temperature or Intel Turbo Boost; I don't know exactly yet.
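For illustration only, here is a rough sketch of how this kind of system metadata can be read on Linux using the standard /proc and /sys interfaces. This is not perf's actual implementation, and the metadata names below are made up:

    import datetime
    import os

    def read_first_line(path):
        """Return the first line of a file, or None if it cannot be read."""
        try:
            with open(path) as fp:
                return fp.readline().strip()
        except OSError:
            return None

    def collect_metadata():
        # Hypothetical sketch, not perf's code; Linux-only paths.
        metadata = {
            # Timestamp of the run (perf 0.7 stores one per process).
            'date': datetime.datetime.now().isoformat(),
            # System load average over the last minute.
            'load_avg_1min': os.getloadavg()[0],
        }

        # Current frequency of CPU 0, in kHz (sysfs cpufreq interface).
        freq = read_first_line(
            '/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq')
        if freq is not None:
            metadata['cpu_freq_khz'] = int(freq)

        # Temperature of the first thermal zone, in millidegrees Celsius.
        temp = read_first_line('/sys/class/thermal/thermal_zone0/temp')
        if temp is not None:
            metadata['cpu_temp_mC'] = int(temp)

        return metadata

    if __name__ == '__main__':
        print(collect_metadata())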
The previous perf design only allowed storing information globally per benchmark, not per process.
perf 0.7 has much better support for benchmark suites (not only individual benchmarks) and now has a really working --append option. Does your benchmark file not have enough runs? Run it again with --append!
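For example, assuming a benchmark script mybench.py written with perf and an existing result file bench.json (both names hypothetical; the exact command-line form may differ slightly):

    python3 mybench.py --append bench.json

This reruns the benchmark and adds the new runs to the existing file.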
Changes:

* Metadata (CPU config, CPU temperature, CPU frequency, system load average, ...) and timestamps are now stored per process (run), not globally
* Support for benchmark suites (not only individual benchmarks)
* The --append option now really works
In the meantime, I also completed and updated my fork of the CPython benchmark suite: https://hg.python.org/sandbox/benchmarks_perf
Victor