
I ran across a snippet in SCons.Util (don't worry, I've double-checked the To: field) that claims to be faster than os.path.splitext() while doing basically the same thing:

    def splitext(path):
        "Same as os.path.splitext() but faster."
        sep = rightmost_separator(path, os.sep)
        dot = path.rfind('.')
        # An ext is only real if it has at least one non-digit char
        if dot > sep and not containsOnly(path[dot:], "0123456789."):
            return path[:dot], path[dot:]
        else:
            return path, ""

I wonder if the upcoming speed.python.org has any means to validate such claims for different Python releases. Is there any place where I can upload my two snippets to compare their performance? Are there any instructions on how to create such snippets and add to or enhance the dataset for them? Any plans or opinions on whether that would be useful?

--
anatoly t.
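The snippet depends on two SCons.Util helpers that aren't shown above. Here is a minimal sketch with plausible stand-in implementations (the actual SCons.Util code may differ), so the snippet runs on its own:

    import os  # splitext() above uses os.sep

    def rightmost_separator(path, sep):
        # Hypothetical stand-in: index of the rightmost separator in
        # path, or -1 if there is none.  '/' is checked as well, since
        # it is a valid separator even where os.sep is '\\'.
        return max(path.rfind(sep), path.rfind('/'))

    def containsOnly(s, chars):
        # Hypothetical stand-in: True if every character of s is in chars.
        return all(c in chars for c in s)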

On 22 June 2011 13:47, anatoly techtonik <techtonik@gmail.com> wrote:
I ran across a snippet in SCons.Util (don't worry, I've double-checked the To: field) that claims to be faster than os.path.splitext() while doing basically the same thing.
Actually, it doesn't do the same thing: it doesn't handle files like .profile properly, and it treats all-numeric extensions differently. So I'm not sure what you're trying to prove with a comparison...?

Paul.
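Both disagreements are easy to demonstrate. Assuming splitext() and the stand-in helpers from the sketch above are defined in the same session:

    import os.path

    # Leading-dot files: the stdlib keeps '.profile' as the base name
    # (since Python 2.6)...
    print(os.path.splitext('.profile'))   # ('.profile', '')
    # ...while the SCons version splits it into an empty base plus extension.
    print(splitext('.profile'))           # ('', '.profile')

    # All-numeric extensions: the stdlib splits them off...
    print(os.path.splitext('foo.123'))    # ('foo', '.123')
    # ...while the SCons version deliberately keeps them in the base name.
    print(splitext('foo.123'))            # ('foo.123', '')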

On Wed, Jun 22, 2011 at 10:47 PM, anatoly techtonik <techtonik@gmail.com> wrote:
I wonder if the upcoming speed.python.org has any means to validate such claims for different Python releases. Is there any place where I can upload my two snippets to compare their performance? Are there any instructions on how to create such snippets and add to or enhance the dataset for them? Any plans or opinions on whether that would be useful?
The timeit module handles microbenchmarks on short snippets without any real problems. speed.python.org is about *macro* benchmarks - getting a feel for overall interpreter performance under a variety of real-world workflows.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
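For instance, a quick head-to-head with the timeit API might look like this (scons_splitext is a hypothetical local module holding the SCons-style splitext() from above):

    import timeit

    # Time a million calls of each implementation on the same input.
    stdlib = timeit.timeit("splitext('foo/bar/baz.ext')",
                           setup="from os.path import splitext",
                           number=1000000)
    scons = timeit.timeit("splitext('foo/bar/baz.ext')",
                          setup="from scons_splitext import splitext",
                          number=1000000)
    print("os.path.splitext: %.3fs" % stdlib)
    print("SCons splitext:   %.3fs" % scons)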

On Wed, Jun 22, 2011 at 3:24 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Wed, Jun 22, 2011 at 10:47 PM, anatoly techtonik <techtonik@gmail.com> wrote:
I wonder if the upcoming speed.python.org has any means to validate such claims for different Python releases. Is there any place where I can upload my two snippets to compare their performance? Are there any instructions on how to create such snippets and add to or enhance the dataset for them? Any plans or opinions on whether that would be useful?
The timeit module handles microbenchmarks on short snippets without any real problems. speed.python.org is about *macro* benchmarks - getting a feel for overall interpreter performance under a variety of real-world workflows.
Cheers,
Nick.
I think the question that timeit doesn't answer, and speed.python.org potentially can (I don't know if it should; that's a matter of opinion), is how those numbers differ across various interpreters, OSes, and versions. That's the kind of comparison for which you need dedicated server support.
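Short of dedicated server support, a rough local approximation, assuming the interpreters named below are installed (the names are only examples), is to drive timeit's command-line interface from each one in turn:

    import subprocess

    # Example interpreter names; adjust to whatever is installed locally.
    for py in ("python2.7", "python3.2", "pypy"):
        print("== %s ==" % py)
        subprocess.call([py, "-m", "timeit",
                         "-s", "import os.path",
                         "os.path.splitext('foo/bar/baz.ext')"])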
participants (4)

- anatoly techtonik
- Maciej Fijalkowski
- Nick Coghlan
- Paul Moore