On 07.03.16 19:19, Brett Cannon wrote:
> Are you thinking about turning all of this into a benchmark for the benchmark suite?
That was my purpose. I first wrote a benchmark for the benchmark suite, and then became interested in more detailed results and in a comparison with alternative engines.
There are several open questions about adding such a benchmark to the benchmark suite.
Should we download the test data every time (maybe with caching), or add it to the repository?
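The download-with-caching option could look something like the sketch below. The URL and file name are placeholders (the actual corpus location is not decided in this thread); the point is just that the data is fetched once and reused from a local cache on later runs.

```python
import os
import urllib.request

# Hypothetical location and cache file name -- placeholders only.
DATA_URL = "http://example.com/benchmark-corpus.txt"
CACHE_FILE = "benchmark-corpus.txt"

def get_text_data():
    """Return the benchmark text, downloading it only on first use."""
    if not os.path.exists(CACHE_FILE):
        urllib.request.urlretrieve(DATA_URL, CACHE_FILE)
    with open(CACHE_FILE, encoding="utf-8") as f:
        return f.read()
```

If the cache file already exists, no network access happens at all, which keeps repeated benchmark runs fast and reproducible.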
Running the full set of tests takes a long time on my computer. Isn't this too long? In any case, I first want to optimize some bottlenecks in the re module.
Should there be one benchmark that runs all the searches, or separate microbenchmarks for every pattern?
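The per-pattern variant could be as simple as timing each compiled pattern independently with timeit. The sample text and patterns below are illustrative stand-ins, not the suite's actual corpus:

```python
import re
import timeit

# Illustrative stand-ins for the real corpus and pattern set.
TEXT = "the quick brown fox jumps over the lazy dog " * 1000
PATTERNS = [r"fox", r"\bl\w+", r"(?:quick|lazy) \w+"]

def bench_per_pattern(number=10):
    """Time each pattern as its own microbenchmark."""
    results = {}
    for pat in PATTERNS:
        rx = re.compile(pat)
        results[pat] = timeit.timeit(lambda: rx.findall(TEXT), number=number)
    return results
```

Separate microbenchmarks make regressions on one pattern class visible, at the cost of a longer total run.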
It would also be nice to benchmark alternative engines on the same regular expressions. This requires changing perf.py. Maybe we could use the same interface to compare ElementTree with lxml and json with simplejson.
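One way such a shared interface could work, sketched under the assumption that each engine exposes an re-compatible `compile()`/`findall()` API (true of the stdlib re and the third-party regex module):

```python
import importlib
import timeit

# Illustrative sample text; the real benchmark would use its corpus.
TEXT = "spam and eggs " * 1000

def bench_engine(module_name, pattern, number=10):
    """Run the same search benchmark against any engine that exposes
    an re-compatible compile()/findall() interface."""
    engine = importlib.import_module(module_name)
    rx = engine.compile(pattern)
    return timeit.timeit(lambda: rx.findall(TEXT), number=number)
```

For example, `bench_engine("re", r"eggs")` times the stdlib engine, and `bench_engine("regex", r"eggs")` would time the third-party regex module if it is installed. The same pattern generalizes to ElementTree vs. lxml or json vs. simplejson by parameterizing on the module name.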
It may be worth adding a non-ASCII pattern and non-ASCII text. But this would increase the run time.
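Non-ASCII input matters because str searching in CPython depends on the string's internal representation, so wide characters can exercise different code paths. A minimal illustration, using a made-up Cyrillic sample:

```python
import re

# Hypothetical non-ASCII sample text (Cyrillic) and a simple word pattern.
text = "Програмування на Python -- це весело. " * 3
pattern = re.compile(r"\w+")  # \w matches Unicode word characters for str in Python 3

words = pattern.findall(text)
```

Both the pattern's character classes and the scanned text being non-ASCII would broaden the coverage of the benchmark.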