[Python-Dev] Python Benchmarks
fredrik at pythonware.com
Fri Jun 2 11:25:37 CEST 2006
M.-A. Lemburg wrote:
> Of course, but then changes to try-except logic can interfere
> with the performance of setting up method calls. This is what
> pybench then uncovers.
I think the only thing PyBench has uncovered is that you're convinced that it's
always right, and everybody else is always wrong, including people who've
spent decades measuring performance, and the hardware in your own computer.
> See above (or the code in pybench.py). t1-t0 is usually
> around 20-50 seconds:
what machines are you using? using the default parameters, the entire run takes
about 50 seconds on the slowest machine I could find...
>> that's not a very good idea, given how get_process_time tends to be
>> implemented on current-era systems (google for "jiffies")... but it
>> definitely explains the bogus subtest results I'm seeing, and the "magic
>> hardware" behaviour you're seeing.
> That's exactly the reason why tests run for a relatively long
> time - to minimize these effects. Of course, using wall time
> makes this approach vulnerable to other effects such as current
> load of the system, other processes having a higher priority
> interfering with the timed process, etc.
since process time is *sampled*, not measured, it isn't exactly invulnerable
either. it's not hard to imagine scenarios where you end up being assigned
only a small part of the process time you're actually using, or cases where
you're assigned more time than you've had a chance to use.
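fwiw, you can see the difference in granularity for yourself. here's a rough
sketch using the time.perf_counter/time.process_time pair from later Python
versions (this thread predates both APIs); it estimates the smallest tick each
clock can actually report:

```python
import time

def clock_resolution(clock, samples=25):
    """estimate the smallest observable tick of a clock by taking
    the minimum nonzero difference between consecutive readings"""
    smallest = float("inf")
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:          # spin until the clock actually ticks
            t1 = clock()
        smallest = min(smallest, t1 - t0)
    return smallest

# wall time usually ticks at microsecond resolution or better; on
# 2006-era Linux kernels, process time was typically quantized to
# the jiffy interval (1-10 ms), though modern kernels do better
print("wall-time tick:    %g s" % clock_resolution(time.perf_counter))
print("process-time tick: %g s" % clock_resolution(time.process_time))
```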
afaik, if you want true performance counters on Linux, you need to patch the
operating system (unless something's changed in very recent versions).
I don't think that sampling errors can explain all the anomalies we've been seeing,
but I wouldn't be surprised if a high-resolution wall-time clock on a lightly loaded
multiprocess system was, in practice, *more* reliable than sampled process time
on an equally loaded system.
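that's basically the approach timeit takes, for what it's worth: time with a
high-resolution wall clock, repeat, and keep the minimum, on the theory that
interference from other processes can only make the numbers larger. a minimal
sketch (best_of is a made-up helper, not a pybench or timeit API):

```python
import time

def best_of(func, repeat=5, number=100000):
    """time `number` calls to func, repeated several times, and
    report the per-call minimum; taking the minimum discards runs
    that were slowed down by other processes"""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        for _ in range(number):
            func()
        best = min(best, time.perf_counter() - t0)
    return best / number

print("per-call: %g s" % best_of(lambda: sum(range(10))))
```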