On 12:48 am, peter.westlake@pobox.com wrote:
On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote:
On 06:30 pm, peter.westlake@pobox.com wrote:
A while back I promised to write some benchmarks for twisted.web.template's flattening functions. Is something like this suitable? If so, I'll add lots more test cases. The output format could be improved, too - any preferences?
The output should be something that we can load into our codespeed instance. The output of any of the existing benchmarks in lp:twisted-benchmarks should be a good example of that format (I don't even recall what it is right now - it may not even be a "format" so much as a shape of data to submit to an HTTP API).
It's pretty simple. The main difference is that all the other benchmarks only print a single result, and I was planning to do a number of tests. They can always go in separate files if it's a problem.
Codespeed cannot handle more than one result per benchmark.
The `timeit` module is probably not suitable to use to collect the data, as it makes some questionable choices with respect to measurement technique, and at the very least it's inconsistent with the rest of the benchmarks we have.
What sort of choices? As far as I can see it just gets the time before the benchmarked code and the time after and subtracts. That looks quite close to what the other benchmarks do.
It does a ton more stuff than this, so I'm not sure what you mean here. It's full of dynamic code generation and loop counting/prediction logic, gc manipulation, and other stuff. Plus, it changes from Python version to Python version.
What method would you prefer?
Something simple and accurate. :) You may need to do some investigation to determine the best approach.

Jean-Paul
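[Not part of the original thread, but to illustrate the "simple and accurate" approach Jean-Paul suggests: a plain monotonic-clock loop with none of timeit's code generation or GC manipulation, matching the style of the other benchmarks. The name `measure` and the iteration count are illustrative choices, not anything from lp:twisted-benchmarks.]

```python
import time

def measure(func, iterations=1000):
    """Time `iterations` calls of func and return seconds per call.

    Uses time.perf_counter (a high-resolution monotonic clock) and a
    plain loop: no warmup heuristics, no GC toggling, no generated
    code - just time before, time after, subtract, divide.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        func()
    elapsed = time.perf_counter() - start
    return elapsed / iterations
```

Because it does nothing clever, the result is easy to interpret and stays consistent across Python versions, which is the property being asked for here.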
Selecting data to operate on is probably an important part of this benchmark (or collection of benchmarks). It may not be possible to capture all of the interesting performance characteristics in a single dataset. However, at least something that includes HTML tags is probably desirable, since that is the primary use-case.
Yes, that's where I'm going to spend most of my effort.
There are some other Python templating systems with benchmarks. One approach that might make sense is to try to build analogous benchmarks for twisted.web.template. (Or perhaps a little thought will reveal that it's not possible to make comparisons between twisted.web.template and those systems, so there's no reason to follow their benchmarking lead.)
I'll do that if I get time, thanks.
Peter.
_______________________________________________
Twisted-web mailing list
Twisted-web@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-web