
On Wed, Jun 22, 2011 at 3:24 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Wed, Jun 22, 2011 at 10:47 PM, anatoly techtonik <techtonik@gmail.com> wrote:
I wonder if the upcoming speed.python.org has any means to validate these claims for different Python releases? Is there any place where I can upload my two snippets to compare their performance? Are there any instructions on how to create such snippets and add to or enhance the dataset for them? Any plans or opinions on whether that would be useful or not?
The timeit module handles microbenchmarks on short snippets without any real problems. speed.python.org is about *macro* benchmarks - getting a feel for overall interpreter performance under a variety of real-world workloads.
Cheers, Nick.
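For context, a microbenchmark with timeit can be as small as the snippet below; the two statements being timed are purely illustrative, not anything from speed.python.org:

    import timeit

    # Time two equivalent ways of doubling a sequence of numbers.
    comprehension = timeit.timeit("[x * 2 for x in range(1000)]", number=10000)
    mapped = timeit.timeit("list(map(lambda x: x * 2, range(1000)))", number=10000)

    print("list comprehension: %.3fs" % comprehension)
    print("map + lambda:       %.3fs" % mapped)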
I think the question that timeit doesn't answer, and that speed potentially can (whether it should is a matter of opinion), is how those numbers differ across various interpreters, OSes and versions. That is something for which you need dedicated, offloaded server support.
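To make that concrete, here is a rough sketch of what such a cross-interpreter comparison looks like when done by hand with timeit; the interpreter names are only assumptions about what happens to be installed locally, which is exactly the coordination problem a shared server would remove:

    import subprocess

    SNIPPET = "[x * 2 for x in range(1000)]"
    # Assumed to be available on PATH on the machine doing the comparison.
    INTERPRETERS = ["python2.7", "python3.2", "pypy"]

    for exe in INTERPRETERS:
        try:
            out = subprocess.check_output([exe, "-m", "timeit", SNIPPET])
            print("%-10s %s" % (exe, out.decode().strip()))
        except (OSError, subprocess.CalledProcessError):
            print("%-10s not available" % exe)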