
Hi all,

I need some input on the benchmarking infrastructure. I'm nearly at the point where I need somewhere to run it before I can continue (i.e. I need to actually try to use it, not just speculate).

What I've been thinking about, and need input on, is how to get at the interpreters that will run the benchmarks. When we were talking about just benchmarks, not profiling, my thought was to use whatever python the machine has and fetch the pypy from the last buildbot run. For profiling that will not work (and anyway, running the profiling on the standard python is quite pointless), so benchmarks will obviously have to specify somehow which interpreter(s) they should be run by (a rough sketch of what that could look like is at the end of this mail).

The bigger question is how to get those interpreters. Some options:

- Running the benchmarks could also trigger building one (or more) pypy interpreters according to specs in the benchmarking framework. But then, if you only want to run one benchmark, you may have to wait for all the interpreters to build.
- Each benchmark could build its own interpreter, though this seems slow, given that most benchmarks can probably run on an identically built interpreter.
- The installed infrastructure could care only about history, and if you want to run a single benchmark, you do that on your own.

Thoughts please!

/Anders
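
P.S. To make the "how do benchmarks specify their interpreter(s)" question a bit more concrete, here is a very rough sketch of what a per-benchmark spec could look like. None of these names, fields or paths exist in the framework; they are made up purely for illustration.

    # benchmarks/richards/spec.py  (hypothetical file and layout)
    import subprocess

    # The interpreter builds this benchmark wants to be run on. The framework
    # could collect these from all benchmarks and group identical build
    # configurations, so each configuration only gets translated once.
    INTERPRETERS = [
        {"name": "pypy-c-jit",
         "translate_args": ["-Ojit"],   # hypothetical: options for translate.py
         "profiling": True},            # hypothetical: needs an instrumented build
        {"name": "host-python",
         "path": "/usr/bin/python",     # just use whatever python the machine has
         "profiling": False},
    ]

    def run(executable):
        """Called by the framework once for each interpreter listed above."""
        subprocess.check_call([executable, "richards.py"])

The idea would be that the framework reads the specs, builds (or locates) each requested interpreter once, and then calls run() for every benchmark on every interpreter it asked for. Again, just a sketch to anchor the discussion, not a proposal for the actual API.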