[pypy-dev] recent speed results

Paolo Giarrusso p.giarrusso at gmail.com
Wed May 12 12:58:19 CEST 2010


On Tue, May 11, 2010 at 23:22, Camillo Bruni <camillobruni at gmail.com> wrote:
> What about normalizing the results against one fixed benchmark?
> That way the results are no longer absolute times but relative values.
> At least that's what we did to make benchmarks a bit more portable in general.
> Plus, if you rely on time's 'user' time, the load of the machine doesn't affect the results (although I guess you are aware of that fact :P).
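Camillo's normalization idea can be sketched as follows. The benchmark names and timings below are purely illustrative (not from the thread): every result is divided by the timing of one fixed reference benchmark, so absolute times become machine-relative ratios.

```python
def normalize(times, reference):
    # Divide every timing by the reference benchmark's timing,
    # turning absolute times (seconds) into relative values.
    base = times[reference]
    return {name: t / base for name, t in times.items()}

# Hypothetical raw timings in seconds; names are illustrative only.
machine_a = {"richards": 2.0, "pystone": 4.0, "spambayes": 6.0}

print(normalize(machine_a, "richards"))
# The reference benchmark maps to 1.0; the others to their ratios.
```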

That's only theoretically true: the more context switches you have, the
more cache misses you have, and those are not accounted for in user
time. System time also needs to be counted, so one should at least use
user+sys. Incidentally, system time is probably affected negatively by
in-kernel contention on mutexes and spinlocks - again the same story.
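As a rough illustration of the user+sys measurement being discussed (a sketch, not code from the thread), Python's standard `resource` module exposes both components on Unix:

```python
import resource

def cpu_time():
    # ru_utime is user CPU time, ru_stime is system CPU time (seconds).
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_utime + usage.ru_stime

def bench(fn, *args):
    # Measure user+sys CPU time. This excludes time spent while other
    # processes run, but, as noted above, it does NOT capture the
    # indirect cost of cache misses caused by context switches.
    start = cpu_time()
    fn(*args)
    return cpu_time() - start

elapsed = bench(sum, range(10 ** 6))
```

Wall-clock time would additionally include the load of the machine, which is exactly what the normalization/user-time discussion is trying to avoid.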

Bye
> On 06.05.2010, at 23:51, Maciej Fijalkowski wrote:
>
>> I looked at svn log, and the only coincidence I can find is upgrade of ubuntu.
>>
>> On Thu, May 6, 2010 at 7:51 AM,  <exarkun at twistedmatrix.com> wrote:
>>> On 11:05 am, arigo at tunes.org wrote:
>>>>
>>>> Hi,
>>>>
>>>> On Tue, May 04, 2010 at 07:59:17PM -0600, Maciej Fijalkowski wrote:
>>>>>
>>>>> Anyone has any clue where recent noise comes from? I could not
>>>>> pinpoint any single revision.
>>>>
>>>> That really looks like some external factor, e.g. the machine is
>>>> overloaded.  Maybe we should move benchmarking to tannit.  It would mean
>>>> that the results cannot be followed across the move, but at least tannit
>>>> is a non-virtual machine on which we can ensure some consistent total
>>>> usage.
>>>>
>>>> The drawback of picking tannit is that it's unclear what will occur to
>>>> it after the end of the current funding period.  Other choices have
>>>> other drawbacks -- e.g. wyvern is old (and not representative of today's
>>>> performance) and might die unexpectedly any year now.
>>>
>>> What else is on the machine that's causing it to be overloaded now?
>>>
>>> I'd suggest temporarily disabling or moving all of that first, getting a few
>>> benchmark runs that show there hasn't been a performance regression, and
>>> then considering moving the benchmarks to a new machine.  Otherwise if there
>>> has been a performance regression, the move will just hide it.
>>>
>>> Jean-Paul
>>>
>> _______________________________________________
>> pypy-dev at codespeak.net
>> http://codespeak.net/mailman/listinfo/pypy-dev
>



-- 
Paolo Giarrusso


