On Feb 26, 2013, at 10:05 AM, Peter Westlake <peter.westlake@pobox.com> wrote:

On Sun, Jan 6, 2013, at 20:22, exarkun@twistedmatrix.com wrote:
On 12:48 am, peter.westlake@pobox.com wrote:
On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote:
Codespeed cannot handle more than one result per benchmark.
The `timeit` module is probably not suitable for collecting the data
What method would you prefer?

Something simple and accurate. :)  You may need to do some investigation 
to determine the best approach.

1. This is simple:

    import time
    from twisted.web.template import flatten

    def do_benchmark(request, content):
        t1 = time.time()
        d = flatten(request, content, lambda _: None)
        t2 = time.time()
        # Only meaningful if flatten() completed synchronously.
        assert d.called
        return t2 - t1
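
In use it might look like this (a hypothetical loop; 'request' and
'content' are assumed to be built beforehand):

    # Amortize timer granularity over many calls.
    N = 1000000
    total = sum(do_benchmark(request, content) for _ in range(N))
    print("mean per flatten: %g seconds" % (total / N,))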

Do you think it's acceptably accurate? After a few million iterations,
the relative error should be pretty small.

Well, it rather depends on the contents of 'content', doesn't it? :)

I think we have gotten lost in the weeds here.  We talked about using benchlib.py initially; then you noticed a bug, and it was pointed out that benchlib.py was mostly written for timing asynchronous things and didn't have good support for the simple case here: synchronous rendering of a simple document.  However, one of twisted.web.template's major features - arguably its reason for existing in a world practically overrun by HTML templating systems - is that it supports Deferreds.  So we'll want async support anyway.
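
For example, a renderer can return a Deferred, and flatten() will wait
for it before producing output - a minimal sketch (the element and its
document are invented for illustration):

    from twisted.internet.defer import succeed
    from twisted.web.template import Element, XMLString, renderer

    class AsyncElement(Element):
        loader = XMLString(
            '<div xmlns:t="http://twistedmatrix.com/ns/'
            'twisted.web.template/0.1" t:render="greeting" />')

        @renderer
        def greeting(self, request, tag):
            # A Deferred is a perfectly good thing to return here;
            # flatten() waits for it to fire.
            return succeed(tag('hello, world'))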

The right thing to do here would be to update benchlib itself with a few simple tools for timing synchronous tasks, and possibly also to fix the unbounded-recursion bug that you noticed, rather than to start building a new, parallel set of testing tools on different infrastructure.  That probably means implementing a small subset of timeit.
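
Something like this, perhaps - a sketch only, and the name
timeSynchronous is an assumption, not existing benchlib API:

    import time

    def timeSynchronous(work, iterations=10000):
        # A small subset of timeit: call 'work' repeatedly and
        # return the wall-clock duration of the whole loop.
        start = time.time()
        for _ in range(iterations):
            work()
        return time.time() - start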

2. For the choice of test data, I had a quick search for benchmarks from
other web frameworks. All I found were "hello world" benchmarks, which
test the overhead of the framework itself by rendering an empty page.
I'll include that, of course.

"hello world" benchmarks have problems because start-up overhead tends to dominate.  A realistic web page with some slots and renderers sprinkled throughout would be a lot better.  Although even better would be a couple of cases - let's say small, large-sync, and large-async - so we can see if optimizations for one case hurt another.

As Jean-Paul already mentioned in this thread, you can't have more than one result per benchmark, so you'll need to choose a fixed number of configurations and create one benchmark for each.
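
Concretely, that could be as simple as a fixed table (the names and
document builders here are invented):

    from twisted.web.template import tags

    def small():
        return tags.html(tags.body('hello'))

    def large_sync():
        return tags.html(tags.body([tags.p(str(n)) for n in range(1000)]))

    # One codespeed benchmark name per configuration, since each
    # benchmark can only record a single result.
    BENCHMARKS = [('web.template-small', small),
                  ('web.template-large-sync', large_sync)]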

3. Regarding option parsing, is there any reason to prefer twisted.python.usage.Options over [...]

The reason to prefer usage.Options is consistency.  That's what we use in Twisted, and there is no compelling reason to use anything else.  Even if there were a compelling reason, this wouldn't be the place to start; you could start a separate discussion about option parsing.  (Warning: a discussion about option parsing would inevitably be a waste of everyone's time, and you should under no circumstances start one.)
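
For reference, the shape of such a class (purely illustrative; as the
next paragraph says, benchlib already defines the real options):

    from twisted.python import usage

    class BenchmarkOptions(usage.Options):
        optParameters = [
            ['iterations', 'n', 1000,
             'Number of times to run each benchmark.', int],
        ]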

All the options that you might need to parse (well, all the options that you _can_ parse, as far as codespeed is concerned) are already implemented by benchlib.py in http://launchpad.net/twisted-benchmarks, so there's no point in writing any option-parsing code for this task anyway.  The thing to implement would be a different driver() function that makes a few simple synchronous calls without running the reactor.
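
Roughly along these lines - a sketch, where the name sync_driver and
its signature are assumptions about what would fit benchlib, not
current API:

    import time

    def sync_driver(test, iterations):
        # Like driver(), but purely synchronous: no reactor,
        # just repeated timed calls to the benchmark function.
        durations = []
        for _ in range(iterations):
            start = time.time()
            test()
            durations.append(time.time() - start)
        return durations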