
On Sun, Jan 6, 2013, at 20:22, exarkun@twistedmatrix.com wrote:
> On 12:48 am, peter.westlake@pobox.com wrote:
> > On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote:
> > ...
> > > Codespeed cannot handle more than one result per benchmark.
> > > The `timeit` module is probably not suitable to use to collect the data
> > .....
> > What method would you prefer?
>
> Something simple and accurate. :) You may need to do some investigation
> to determine the best approach.

1. This is simple:

       def do_benchmark(content):
           t1 = time.time()
           d = flatten(request, content, lambda _: None)
           t2 = time.time()
           assert d.called
           return t2 - t1

   Do you think it's acceptably accurate? After a few million iterations,
   the relative error should be pretty small.

2. For the choice of test data, I had a quick search for benchmarks from
   other web frameworks. All I found was "hello world" benchmarks, which
   test the overhead of the framework itself by rendering an empty page.
   I'll include that, of course.

3. Regarding option parsing, is there any reason to prefer
   twisted.python.usage.Options over argparse? Or optparse, if Python 2.7
   is too new. The docs imply that Options was written long before any
   decent argument parsing was available.

Peter.
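
[A self-contained sketch of the measurement in point 1, timing a whole
batch of calls with a single pair of time.time() reads so the timer
overhead is amortised across all iterations. Python 2, as in the thread;
the "hello world" content, the iteration count, and passing None as the
request are illustrative assumptions (flatten() only consults the request
if a renderer asks for it), not details from the thread.]

    import time

    from twisted.web.template import flatten, tags

    def do_benchmark(request, content, iterations=1000000):
        # One timestamp pair around the whole loop keeps the clock's
        # overhead and resolution error small relative to the total work.
        t1 = time.time()
        for _ in xrange(iterations):
            d = flatten(request, content, lambda _: None)
            # flatten() returns a Deferred; for purely synchronous
            # content it has already fired, so no reactor is needed.
            assert d.called
        t2 = time.time()
        # Mean seconds per flatten() call.
        return (t2 - t1) / iterations

    if __name__ == '__main__':
        # A "hello world" page, as in point 2.
        content = tags.html(tags.body(tags.p("hello world")))
        print do_benchmark(None, content, iterations=100000)

[Dividing by the iteration count also yields a single number per
benchmark, which fits the Codespeed constraint mentioned earlier in the
thread.]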
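[And for point 3, a side-by-side sketch of the two option-parsing styles;
the option name and default are invented for illustration.
twisted.python.usage declares options as class attributes, while argparse
builds a parser imperatively.]

    import argparse

    from twisted.python import usage

    class BenchmarkOptions(usage.Options):
        # Each entry: [long name, short name, default, description,
        # optional coercion function].
        optParameters = [
            ["iterations", "n", 1000000,
             "Number of flatten() calls to time.", int],
        ]

    options = BenchmarkOptions()
    # parseOptions() raises usage.UsageError on bad input.
    options.parseOptions(["--iterations", "500"])

    parser = argparse.ArgumentParser(
        description="Benchmark twisted.web.template flattening.")
    parser.add_argument("-n", "--iterations", type=int, default=1000000,
                        help="Number of flatten() calls to time.")
    args = parser.parse_args(["--iterations", "500"])

    # Options is dict-like; argparse exposes attributes.
    assert options["iterations"] == args.iterations == 500

[One practical difference: twistd plugins expect a usage.Options subclass
for their command-line options, which is a common reason Twisted code
sticks with it.]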