[Speed] Should we change what benchmarks we have?

Maciej Fijalkowski fijall at gmail.com
Fri Feb 12 09:48:01 EST 2016


I presume you looked at the pypy benchmark suite, which contains a
large collection of library-based benchmarks. You can endlessly argue
whether it's "macro enough", but it does cover real usage of various
libraries, submitted or written with help from the library authors
(sympy, twisted, various templating engines, the sqlalchemy ORM,
etc.), as well as interesting CPU-intensive Python programs found on
the interwebs.
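
For a concrete idea of the shape such a benchmark takes (names and
workload here are illustrative only, not the suite's actual API), a
library-based benchmark mostly boils down to timing repeated calls
into the library and reporting per-iteration times:

    import time

    def bench(loops):
        # Stand-in workload; a real benchmark would call into the
        # library under test (render a template, run an ORM query,
        # dispatch a twisted event loop iteration, ...).
        times = []
        for _ in range(loops):
            t0 = time.perf_counter()
            "".join(str(i) for i in range(100000))  # hypothetical workload
            times.append(time.perf_counter() - t0)
        return times

    if __name__ == "__main__":
        print("best of 10 runs: %.4fs" % min(bench(10)))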

On Fri, Feb 12, 2016 at 1:31 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Thu, 11 Feb 2016 18:36:33 +0000
> Brett Cannon <brett at python.org> wrote:
>> Are we happy with the current benchmarks? Are there some we want to drop?
>> How about add? Do we want to have explanations as to why each benchmark is
>> included?
>
> There are no real explanations except the provenance of said benchmarks:
> - the benchmarks suite was originally developed for Unladen Swallow
> - some benchmarks were taken and adapted from the "Great Computer
>   Language Shootout" (which I think is a poor source of benchmarks)
> - some benchmarks have been added for specific concerns that may not
>   be of much general interest (for example micro-benchmarks of
>   method calls, or benchmarks of json / pickle performance)
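
To illustrate what such micro-benchmarks measure, here is a rough
sketch using the stdlib's timeit (illustrative only, not the suite's
actual code):

    import timeit

    # Method-call micro-benchmark: dominated by attribute lookup
    # and call overhead rather than any real work.
    print(min(timeit.repeat(
        "o.f()",
        setup="class C:\n    def f(self): pass\no = C()",
        repeat=5, number=1000000)))

    # pickle micro-benchmark: round-trips a small object graph.
    print(min(timeit.repeat(
        "pickle.loads(pickle.dumps(data))",
        setup="import pickle; data = {'a': list(range(100))}",
        repeat=5, number=10000)))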
>
>> A better balance of micro vs. macro benchmarks (and probably
>> matching groups)?
>
> Easier said than done :-) Macro-benchmarks are harder to write,
> especially with the constraints that 1) runtimes should be short
> enough for convenient use and 2) performance numbers should be
> stable enough across runs.
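
One way to make that second constraint concrete is to repeat the
workload and look at the spread across runs. A minimal sketch
(names are my own, not from the benchmark suite):

    import statistics
    import time

    def check_stability(workload, runs=20):
        # Time the same workload several times; if the standard
        # deviation is a large fraction of the mean, the benchmark
        # is too noisy to compare interpreters reliably.
        samples = []
        for _ in range(runs):
            t0 = time.perf_counter()
            workload()
            samples.append(time.perf_counter() - t0)
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        print("mean %.4fs, stdev %.1f%% of mean"
              % (mean, 100.0 * stdev / mean))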
>
> Regards
>
> Antoine.

