Re: [Python-Dev] Mercurial repository for Python benchmarks

Would there be any interest in accepting IronPython's in-house benchmarks into this repository as well? Internally we run the usual suspects (PyStone, PyBench, etc.), but we also have a plethora of custom benchmarks we've written that also happen to run under CPython.

My best,

Dave

------------------------------

Message: 2
Date: Mon, 22 Feb 2010 15:17:09 -0500
From: Collin Winter <collinwinter@google.com>
To: Daniel Stutzbach <daniel@stutzbachenterprises.com>
Cc: Dirkjan Ochtman <djc.ochtman@gmail.com>, Python Dev <python-dev@python.org>
Subject: Re: [Python-Dev] Mercurial repository for Python benchmarks
Message-ID: <3c8293b61002221217v4b7f3b91y76b78763d69fb99d@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On Sun, Feb 21, 2010 at 9:43 PM, Collin Winter <collinwinter@google.com> wrote:
> Hey Daniel,
>
> On Sun, Feb 21, 2010 at 4:51 PM, Daniel Stutzbach <daniel@stutzbachenterprises.com> wrote:
>> On Sun, Feb 21, 2010 at 2:28 PM, Collin Winter <collinwinter@google.com> wrote:
>>> Would it be possible for us to get a Mercurial repository on python.org for the Unladen Swallow benchmarks? Maciej and I would like to move the benchmark suite out of Unladen Swallow and into python.org, where all implementations can share it and contribute to it. PyPy has been adding some benchmarks to their copy of the Unladen benchmarks, and we'd like to have those as well, and Mercurial seems to be an ideal solution to this.
>>
>> If and when you have a benchmark repository set up, could you announce it via a reply to this thread? I'd like to check it out.
>
> Will do.
The benchmarks repository is now available at http://hg.python.org/benchmarks/. It contains all the benchmarks from the Unladen Swallow svn repository, including the beginnings of a README.txt that describes the available benchmarks and gives a quick-start guide for running perf.py (the main interface to the benchmarks). This will eventually contain all the information from http://code.google.com/p/unladen-swallow/wiki/Benchmarks, as well as guidelines on how to write good benchmarks.

If you have svn commit access, you should be able to run `hg clone ssh://hg@hg.python.org/repos/benchmarks`. I'm not sure how to get read-only access; Dirkjan can comment on that.

Still todo:
- Replace the static snapshots of 2to3, Mercurial and other hg-based projects with clones of the respective repositories.
- Fix the 2to3 and nbody benchmarks to work with Python 2.5 for Jython and PyPy.
- Import some of the benchmarks PyPy has been using.

Any access problems with the hg repo should be directed to Dirkjan. Thanks so much for getting the repo set up so fast!

Thanks,
Collin Winter

On Wed, Feb 24, 2010 at 12:12 PM, Dave Fugate <dfugate@microsoft.com> wrote:
> Would there be any interest in accepting IronPython's in-house benchmarks into this repository as well? Internally we run the usual suspects (PyStone, PyBench, etc.), but we also have a plethora of custom benchmarks we've written that also happen to run under CPython.
>
> My best,
> Dave
From my perspective, the more we have there the better. We (as in PyPy) might not run all of them in the nightly run, for example. Are you up to adhering to perf.py's expectation scheme? (That is, making the benchmarks runnable by perf.py.)

Cheers,
fijal
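[Editor's note: for readers wondering what "runnable by perf.py" means in practice — in the Unladen Swallow suite, each benchmark is a standalone script that accepts an iteration count and reports one timing per iteration, which the driver then collects from both interpreters. Below is a sketch of a benchmark in that spirit; the workload, the option name, and the plain-stdout reporting are illustrative assumptions, and the authoritative contract is the suite's README.txt and its helper code.]

```python
import optparse
import time


def bench_dict_churn(loops):
    """Toy workload: time `loops` iterations of building and clearing a dict."""
    timings = []
    for _ in range(loops):
        start = time.time()
        d = {}
        for i in range(10000):
            d[i] = i * 2
        d.clear()
        timings.append(time.time() - start)
    return timings


if __name__ == "__main__":
    parser = optparse.OptionParser(usage="%prog [options]")
    parser.add_option("-n", "--num_runs", type="int", default=5,
                      dest="num_runs", help="number of timed iterations")
    options, _ = parser.parse_args()
    # One timing per line; the driver reads these and compares interpreters.
    for t in bench_dict_churn(options.num_runs):
        print(t)
```

(optparse rather than argparse keeps the sketch runnable on the Python 2.5 interpreters mentioned in the thread.)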

perf.py - I'll look into this. At this point we'll need to refactor them anyway, as there are a few dependencies on internal Microsoft stuff the IronPython team didn't create.

Thanks,

Dave

-----Original Message-----
From: Maciej Fijalkowski [mailto:fijall@gmail.com]
Sent: Wednesday, February 24, 2010 10:51 AM
To: Dave Fugate
Cc: python-dev@python.org
Subject: Re: [Python-Dev] Mercurial repository for Python benchmarks

On Wed, Feb 24, 2010 at 12:12 PM, Dave Fugate <dfugate@microsoft.com> wrote:
>> Would there be any interest in accepting IronPython's in-house benchmarks into this repository as well? Internally we run the usual suspects (PyStone, PyBench, etc.), but we also have a plethora of custom benchmarks we've written that also happen to run under CPython.
>>
>> My best,
>> Dave
>
> From my perspective, the more we have there the better. We (as in PyPy) might not run all of them in the nightly run, for example. Are you up to adhering to perf.py's expectation scheme? (That is, making the benchmarks runnable by perf.py.)
>
> Cheers,
> fijal
participants (2)
- Dave Fugate
- Maciej Fijalkowski