There is no problem in running tests for branches. What other branches or interpreters would you, for example, run?<br><br><br><div class="gmail_quote">2010/6/25 Maciej Fijalkowski <span dir="ltr"><<a href="mailto:fijall@gmail.com">fijall@gmail.com</a>></span><br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div><div></div><div class="h5">On Fri, Jun 25, 2010 at 5:08 AM, Miquel Torres <<a href="mailto:tobami@googlemail.com">tobami@googlemail.com</a>> wrote:<br>
> Hi all!,<br>
><br>
> I want to announce a new version of the benchmarks site <a href="http://speed.pypy.org" target="_blank">speed.pypy.org</a>.<br>
><br>
> After about 6 months, it finally shows the vision I had for such a website:<br>
> useful for pypy developers but also for the general public following pypy's<br>
> or even other python implementations' development. On to the changes.<br>
><br>
> There are now three views: "Changes", "Timeline" and "Comparison":<br>
><br>
> The Overview was renamed to Changes, and its inline plot bars got removed<br>
> because you can get the exact same plot in the Comparison view now (and then<br>
> some).<br>
><br>
> The Timeline got selectable baseline and "humanized" date labels for the x<br>
> axis.<br>
><br>
> The new Comparison view allows, well, comparing of "competing" interpreters,<br>
> which will also be of interest to the wider Python community (especially if<br>
> we can add Unladen Swallow, IronPython and Jython results).<br>
><br>
><br>
> Two examples of interesting comparisons are:<br>
><br>
> - relative bars<br>
> (<a href="http://speed.pypy.org/comparison/?bas=2%2B35&chart=relative+bars" target="_blank">http://speed.pypy.org/comparison/?bas=2%2B35&chart=relative+bars</a>): here we<br>
> see that the jit is faster than psyco in all cases except spambayes and<br>
> slowspitfire, where the jit cannot make up for pypy-c's abysmal performance.<br>
> Interestingly, in the only other case where the jit is slower than cpython,<br>
> the ai benchmark, psyco performs even worse.<br>
><br>
> - horizontal stacked bars<br>
> (<a href="http://speed.pypy.org/comparison/?hor=true&bas=2%2B35&chart=stacked+bars" target="_blank">http://speed.pypy.org/comparison/?hor=true&bas=2%2B35&chart=stacked+bars</a>):<br>
> This is not meant to "demonstrate" that overall the jit is over two times<br>
> faster than cpython. It is just another way for a developer to picture how<br>
> long a program would take to complete if it were composed of 21 such<br>
> tasks. You can see that cpython's (the normalization chosen) benchmarks all<br>
> take 1 "relative" second. pypy-c needs more or less the same time, some<br>
> "tasks" being slower and some faster. Psyco shows an interesting picture:<br>
> From meteor-contest downwards (fortuitously), all benchmarks are extremely<br>
> "compressed", which means they are sped up by psyco quite a lot. But any<br>
> further speed up wouldn't make overall time much shorter because the first<br>
> group of benchmarks now takes most of the time to complete. pypy-c-jit is a<br>
> more extreme case of this: If the jit accelerated all "fast" benchmarks to 0<br>
> seconds (infinitely fast), it would only get about twice as fast as now<br>
> because ai, slowspitfire, spambayes and twisted_tcp now need half the entire<br>
> execution time. A good demonstration of "you are only as fast as your<br>
> slowest part". Of course the aggregate of all benchmarks is not a real app,<br>
> but it is still fun.<br>
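[Editor's note: the "only as fast as your slowest part" arithmetic above can be sketched with made-up relative times. The numbers below are purely illustrative, not actual speed.pypy.org results; the 17/4 split of "fast" vs. "dominant" benchmarks is an assumption chosen so that the slow group takes half the total, matching the email's "about twice as fast" observation.]

```python
# Hypothetical relative times, normalized so each benchmark takes
# 1.0 "relative second" under CPython. Under a JIT, suppose 17 of the
# 21 benchmarks are already very fast and 4 dominant ones (e.g. ai,
# slowspitfire, spambayes, twisted_tcp) take half the total time.
fast = [0.1] * 17                  # 17 tasks sped up to 0.1 each
slow = [0.4, 0.4, 0.5, 0.4]        # 4 tasks that dominate the total

total_now = sum(fast) + sum(slow)  # ~1.7 + ~1.7 = ~3.4

# Best case: the 17 fast tasks become infinitely fast (0 seconds).
# The slow group alone then bounds the total running time.
total_best_case = sum(slow)

speedup = total_now / total_best_case
print(f"overall speedup if fast tasks took 0s: {speedup:.2f}x")
# prints: overall speedup if fast tasks took 0s: 2.00x
# Even infinite speedup on 17 of 21 tasks yields only ~2x overall,
# because the slow group already accounts for half the total time.
```

This is the same reasoning as Amdahl's law: the part of the workload you do not accelerate puts a hard ceiling on the overall speedup.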
><br>
> I hope you find the new version useful, and as always any feedback is<br>
> welcome.<br>
><br>
> Cheers!<br>
> Miquel<br>
><br>
<br>
</div></div>Wow, I really like it, great job.<br>
<br>
Can we see how we can use these features for branches?<br>
<br>
Cheers,<br>
fijal<br>
</blockquote></div><br>