[Speed] Experiences with Microbenchmarking

Paul paul at paulgraydon.co.uk
Fri Feb 12 11:00:23 EST 2016


On 12 Feb 2016 07:10, Armin Rigo <arigo at tunes.org> wrote:
>
> Hi Edd,
>
> On Fri, Feb 12, 2016 at 12:18 PM, Edd Barrett <edd at theunixzoo.co.uk> wrote:
> > JITted VMs (currently PyPy, HotSpot, Graal, LuaJIT, HHVM, JRubyTruffle
> > and V8) using microbenchmarks. For each microbenchmark/VM pairing we
> > sequentially run a number of processes (currently 10), and within each
> > process we run 2000 iterations of the microbenchmark. We then plot the
> > results and make observations.
>
> PyPy typically needs more than 2000 iterations to be warmed up.
>

The same goes for the JVM. Off the top of my head, HotSpot doesn't even start marking a method as hot until around 10,000 invocations (at which point it begins the first stage of optimisations). Below that threshold you're measuring pure interpreter performance.
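As a rough illustration (a hypothetical WarmupDemo class, not part of Edd's suite), something like the snippet below makes the threshold visible. Run it with -XX:-TieredCompilation -XX:+PrintCompilation and the work() method typically only shows up in the compilation log somewhere around its 10,000th call (the default C2 CompileThreshold), so a run capped at 2,000 iterations never leaves the interpreter:

    // Hypothetical sketch: watching when HotSpot compiles a method.
    // Run with: java -XX:-TieredCompilation -XX:+PrintCompilation WarmupDemo
    // (tiered compilation disabled so the classic ~10,000 invocation
    // threshold applies; exact numbers vary by JVM version and flags).
    public class WarmupDemo {
        static long work(long x) {
            // Tiny body, no loops: only the invocation counter drives
            // its compilation, so it is queued around call ~10,000.
            return x * 31 + 7;
        }

        public static void main(String[] args) {
            long acc = 0;
            for (int i = 0; i < 20_000; i++) {
                acc += work(i);
            }
            // Print the result so the loop isn't dead-code eliminated.
            System.out.println(acc);
        }
    }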


Paul.

