Re: [Twisted-web] [Twisted-Python] Speed of rendering?

On Jun 22, 2012, at 2:52 PM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Thu, Jun 21, 2012, at 10:16, Glyph wrote:
Le Jun 21, 2012 à 6:52 AM, Peter Westlake <peter.westlake@pobox.com> a écrit :
How fast is rendering in nevow? I have a page which is mostly a large table with a couple of hundred rows, each containing a form. The generated HTML is about 500 KB. Leaving aside the question of whether this is a good design or not, how long would you expect it to take? I'm interested specifically in what happens between the end of beforeRender and the request being completed. It takes about a minute and a quarter. Is that normal, or is there a delay in my code that I haven't found yet?
Thanks,
Peter.
What does your profiler tell you?
There's a profiler? There's a profiler! There it is, right up there at the top of the man page! Thank you!
Not only is there a profiler, there's benchmarks! <http://speed.twistedmatrix.com/timeline/> Maybe you could add one for twisted.web.template rendering speed?
Peter.
P.S. sorry, this was really meant to go to the twisted-web list. I suspect a last-minute substitution by my email client.
Thanks for the thought, at least. Cross-posted. -glyph

On Fri, Jun 22, 2012, at 15:27, Glyph wrote:
On Jun 22, 2012, at 2:52 PM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Thu, Jun 21, 2012, at 10:16, Glyph wrote:
Le Jun 21, 2012 à 6:52 AM, Peter Westlake <peter.westlake@pobox.com> a écrit :
How fast is rendering in nevow? I have a page which is mostly a large table with a couple of hundred rows, each containing a form. The generated HTML is about 500 KB. Leaving aside the question of whether this is a good design or not, how long would you expect it to take? I'm interested specifically in what happens between the end of beforeRender and the request being completed. It takes about a minute and a quarter. Is that normal, or is there a delay in my code that I haven't found yet?
Thanks,
Peter.
What does your profiler tell you?
There's a profiler? There's a profiler! There it is, right up there at the top of the man page! Thank you!
Not only is there a profiler, there's benchmarks!
<http://speed.twistedmatrix.com/timeline/>
Maybe you could add one for twisted.web.template rendering speed?
I'll see what I can do. Right now the server just shuts down when given the "--profile <outputfile>" option. No errors, just a nice clean shutdown. More on that when I'm back at work tomorrow.
Peter.
P.S. sorry, this was really meant to go to the twisted-web list. I suspect a last-minute substitution by my email client.
Thanks for the thought, at least. Cross-posted.
Thanks. Peter.

On Sun, Jun 24, 2012, at 20:14, Peter Westlake wrote:
On Fri, Jun 22, 2012, at 15:27, Glyph wrote:
On Jun 22, 2012, at 2:52 PM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Thu, Jun 21, 2012, at 10:16, Glyph wrote:
Le Jun 21, 2012 à 6:52 AM, Peter Westlake <peter.westlake@pobox.com> a écrit :
How fast is rendering in nevow?
...
What does your profiler tell you?
There's a profiler? There's a profiler! There it is, right up there at the top of the man page! Thank you!
Not only is there a profiler, there's benchmarks!
<http://speed.twistedmatrix.com/timeline/>
Maybe you could add one for twisted.web.template rendering speed?
Okay, I've found out how to use the profiler (though I never did find out what I did wrong the first time) and I'm reading the docs about how to interpret the results. The benchmark code doesn't look as though it uses the profiler, just times a number of repetitions - is that right? So a benchmark for t.w.template would consist of some functions that called flatten() once each? Peter.

On Sep 3, 2012, at 7:50 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Sun, Jun 24, 2012, at 20:14, Peter Westlake wrote:
On Fri, Jun 22, 2012, at 15:27, Glyph wrote:
On Jun 22, 2012, at 2:52 PM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Thu, Jun 21, 2012, at 10:16, Glyph wrote:
Le Jun 21, 2012 à 6:52 AM, Peter Westlake <peter.westlake@pobox.com> a écrit :
How fast is rendering in nevow?
...
What does your profiler tell you?
There's a profiler? There's a profiler! There it is, right up there at the top of the man page! Thank you!
Not only is there a profiler, there's benchmarks!
<http://speed.twistedmatrix.com/timeline/>
Maybe you could add one for twisted.web.template rendering speed?
Okay, I've found out how to use the profiler (though I never did find out what I did wrong the first time) and I'm reading the docs about how to interpret the results. The benchmark code doesn't look as though it uses the profiler, just times a number of repetitions - is that right? So a benchmark for t.w.template would consist of some functions that called flatten() once each?
That's the general idea, yes. Of course, each benchmark should try to be vaguely representative of some real-world use-case so that we don't optimize one case too much in favor of another. -glyph

On Tue, Sep 4, 2012, at 19:30, Glyph wrote: ...
Not only is there a profiler, there's benchmarks!
<http://speed.twistedmatrix.com/timeline/>
Maybe you could add one for twisted.web.template rendering speed?
Okay, I've found out how to use the profiler (though I never did find out what I did wrong the first time) and I'm reading the docs about how to interpret the results. The benchmark code doesn't look as though it uses the profiler, just times a number of repetitions - is that right? So a benchmark for t.w.template would consist of some functions that called flatten() once each?
That's the general idea, yes. Of course, each benchmark should try to be vaguely representative of some real-world use-case so that we don't optimize one case too much in favor of another.
Glyph, I haven't forgotten about this. The problem I'm having is that flatten() returns immediately if given a string or anything else without an unfired Deferred, and that sends Client._continue into an unbounded recursion. Is there a general good way to handle this kind of problem? Somehow I need to return control to the reactor long enough for Client._request to return. Peter.

On Oct 23, 2012, at 8:10 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Tue, Sep 4, 2012, at 19:30, Glyph wrote:
...
Not only is there a profiler, there's benchmarks!
<http://speed.twistedmatrix.com/timeline/>
Maybe you could add one for twisted.web.template rendering speed?
Okay, I've found out how to use the profiler (though I never did find out what I did wrong the first time) and I'm reading the docs about how to interpret the results. The benchmark code doesn't look as though it uses the profiler, just times a number of repetitions - is that right? So a benchmark for t.w.template would consist of some functions that called flatten() once each?
That's the general idea, yes. Of course, each benchmark should try to be vaguely representative of some real-world use-case so that we don't optimize one case too much in favor of another.
Glyph,
I haven't forgotten about this.
Thanks for sticking with it :).
The problem I'm having is that flatten() returns immediately if given a string or anything else without an unfired Deferred, and that sends Client._continue into an unbounded recursion. Is there a general good way to handle this kind of problem? Somehow I need to return control to the reactor long enough for Client._request to return.
That sounds like a bug, although it's hard to say without seeing the exact code that you're talking about. Can you send a representative example? -g

On Wed, Oct 24, 2012, at 09:16, Glyph wrote:
On Oct 23, 2012, at 8:10 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
...
The problem I'm having is that flatten() returns immediately if given a string or anything else without an unfired Deferred, and that sends Client._continue into an unbounded recursion. Is there a general good way to handle this kind of problem? Somehow I need to return control to the reactor long enough for Client._request to return.
That sounds like a bug, although it's hard to say without seeing the exact code that you're talking about. Can you send a representative example?
Here it is:

from benchlib import driver, Client
from twisted.web.template import flatten
from twisted.web.server import Request
from twisted.web.http import HTTPChannel

class Client(Client):
    channel = HTTPChannel()
    request = Request(channel, False)

    def _request(self):
        d = flatten(self.request, 'hello', lambda _: None)
        d.addCallback(self._continue)  ### Infinite recursion happens here
        d.addErrback(self._stop)

def main(reactor, duration):
    concurrency = 1
    client = Client(reactor)
    d = client.run(concurrency, duration)
    return d

if __name__ == '__main__':
    import sys
    import flatten_string
    driver(flatten_string.main, sys.argv)

Because flatten does not have to wait for anything, it returns a Deferred that has already fired. The d.addCallback sees this and calls the callback immediately. Client._continue starts the next iteration of the test by calling self._request() again, and the stack blows up.

This is perfectly reasonable and standard behaviour for Deferreds, so I should be doing the iteration in some other way, probably not using Client at all. What I was hoping for was a pattern for how to transform the code to avoid the problem; I suspect the answer is to use iteration instead of recursion. It might even be that none of benchlib.py is usable directly. Or maybe putting the flatten() calls into a thread would work? But that runs the risk of race conditions, if it finishes before the callback is added.

Peter.

On 23 Oct, 03:10 pm, peter.westlake@pobox.com wrote:
On Tue, Sep 4, 2012, at 19:30, Glyph wrote:
...
Not only is there a profiler, there's benchmarks!
<http://speed.twistedmatrix.com/timeline/>
Maybe you could add one for twisted.web.template rendering speed?
Okay, I've found out how to use the profiler (though I never did find out what I did wrong the first time) and I'm reading the docs about how to interpret the results. The benchmark code doesn't look as though it uses the profiler, just times a number of repetitions - is that right? So a benchmark for t.w.template would consist of some functions that called flatten() once each?
That's the general idea, yes. Of course, each benchmark should try to be vaguely representative of some real-world use-case so that we don't optimize one case too much in favor of another.
Glyph,
I haven't forgotten about this. The problem I'm having is that flatten() returns immediately if given a string or anything else without an unfired Deferred, and that sends Client._continue into an unbounded recursion. Is there a general good way to handle this kind of problem? Somehow I need to return control to the reactor long enough for Client._request to return.
The benchmark tools are really intended for actually asynchronous things, like setting up a TCP connection. They can be abused into testing synchronous things, but the results are not very good. It would probably be better not to try to re-use the asynchronous testing tools for testing synchronous APIs and instead build some tools for testing synchronous APIs. These should be simpler anyway. You don't *need* a running reactor for the synchronous case of flatten(). Jean-Paul
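To make that concrete, here is a minimal sketch of the synchronous case, with no reactor involved at all (the stub Request mirrors the one used in the snippets elsewhere in this thread):

from twisted.web.http import HTTPChannel
from twisted.web.server import Request
from twisted.web.template import flatten, tags

request = Request(HTTPChannel(), False)
out = []
d = flatten(request, tags.p('hello'), out.append)  # no reactor running
assert d.called       # nothing to wait for, so the Deferred has already fired
print ''.join(out)    # prints <p>hello</p>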

On Wed, Oct 24, 2012, at 13:59, exarkun@twistedmatrix.com wrote:
The benchmark tools are really intended for actually asynchronous things, like setting up a TCP connection. They can be abused into testing synchronous things, but the results are not very good.
It would probably be better not to try to re-use the asynchronous testing tools for testing synchronous APIs and instead build some tools for testing synchronous APIs. These should be simpler anyway. You don't *need* a running reactor for the synchronous case of flatten().
This message appeared while I was typing my reply, in which I laboriously worked my way round to a similar conclusion! So I'll write some tests that test synchronous uses, with no Deferreds. Waiting for other results to appear would only skew the timings, so I won't try timing any cases with Deferreds in them. Peter.

A while back I promised to write some benchmarks for twisted.web.template's flattening functions. Is something like this suitable? If so, I'll add lots more test cases. The output format could be improved, too - any preferences?

Peter.

from twisted.web.template import flatten
from twisted.web.server import Request
import twisted.web.http

channel = twisted.web.http.HTTPChannel()
request = Request(channel, False)

def make(content):
    def f():
        d = flatten(request, content, lambda _: None)
        assert d.called
    return f

def test(content):
    return timeit.timeit(stmt=make(content), number=repeats)

repeats = 1000

deeplist = ['centre']
for n in range(100):
    deeplist = [deeplist]

tests = {
    'empty': '',
    'string': 'hello',
    'shortlist': ['hello'],
    'longlist': [str(n) for n in range(100)],
    'deeplist': deeplist,
}

if __name__ == '__main__':
    import timeit
    from sys import argv
    for name in argv[1:] or tests:
        print name, test(tests[name])

On 06:30 pm, peter.westlake@pobox.com wrote:
A while back I promised to write some benchmarks for twisted.web.template's flattening functions. Is something like this suitable? If so, I'll add lots more test cases. The output format could be improved, too - any preferences?
The output should be something that we can load into our codespeed instance. The output of any of the existing benchmarks in lp:twisted-benchmarks should be a good example of that format (I don't even recall what it is right now - it may not even be a "format" so much as a shape of data to submit to an HTTP API).

The `timeit` module is probably not suitable to use to collect the data, as it makes some questionable choices with respect to measurement technique, and at the very least it's inconsistent with the rest of the benchmarks we have.

Selecting data to operate on is probably an important part of this benchmark (or collection of benchmarks). It may not be possible to capture all of the interesting performance characteristics in a single dataset. However, at least something that includes HTML tags is probably desirable, since that is the primary use-case.

There are some other Python templating systems with benchmarks. One approach that might make sense is to try to build analogous benchmarks for twisted.web.template. (Or perhaps a little thought will reveal that it's not possible to make comparisons between twisted.web.template and those systems, so there's no reason to follow their benchmarking lead.)

Jean-Paul
Peter.
from twisted.web.template import flatten
from twisted.web.server import Request
import twisted.web.http

channel = twisted.web.http.HTTPChannel()
request = Request(channel, False)

def make(content):
    def f():
        d = flatten(request, content, lambda _: None)
        assert d.called
    return f

def test(content):
    return timeit.timeit(stmt=make(content), number=repeats)

repeats = 1000

deeplist = ['centre']
for n in range(100):
    deeplist = [deeplist]

tests = {
    'empty': '',
    'string': 'hello',
    'shortlist': ['hello'],
    'longlist': [str(n) for n in range(100)],
    'deeplist': deeplist,
}

if __name__ == '__main__':
    import timeit
    from sys import argv
    for name in argv[1:] or tests:
        print name, test(tests[name])

On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote:
On 06:30 pm, peter.westlake@pobox.com wrote:
A while back I promised to write some benchmarks for twisted.web.template's flattening functions. Is something like this suitable? If so, I'll add lots more test cases. The output format could be improved, too - any preferences?
The output should be something that we can load into our codespeed instance. The output of any of the existing benchmarks in lp:twisted-benchmarks should be a good example of that format (I don't even recall what it is right now - it may not even be a "format" so much as a shape of data to submit to an HTTP API).
It's pretty simple. The main difference is that all the other benchmarks only print a single result, and I was planning to do a number of tests. They can always go in separate files if it's a problem.
The `timeit` module is probably not suitable to use to collect the data, as it makes some questionable choices with respect to measurement technique, and at the very least it's inconsistent with the rest of the benchmarks we have.
What sort of choices? As far as I can see it just gets the time before the benchmarked code and the time after and subtracts. That looks quite close to what the other benchmarks do. What method would you prefer?
Selecting data to operate on is probably an important part of this benchmark (or collection of benchmarks). It may not be possible to capture all of the interesting performance characteristics in a single dataset. However, at least something that includes HTML tags is probably desirable, since that is the primary use-case.
Yes, that's where I'm going to spend most of my effort.
There are some other Python templating systems with benchmarks. One approach that might make sense is to try to build analogous benchmarks for twisted.web.template. (Or perhaps a little thought will reveal that it's not possible to make comparisons between twisted.web.template and those systems, so there's no reason to follow their benchmarking lead.)
I'll do that if I get time, thanks. Peter.

On 12:48 am, peter.westlake@pobox.com wrote:
On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote:
On 06:30 pm, peter.westlake@pobox.com wrote:
A while back I promised to write some benchmarks for twisted.web.template's flattening functions. Is something like this suitable? If so, I'll add lots more test cases. The output format could be improved, too - any preferences?
The output should be something that we can load into our codespeed instance. The output of any of the existing benchmarks in lp:twisted-benchmarks should be a good example of that format (I don't even recall what it is right now - it may not even be a "format" so much as a shape of data to submit to an HTTP API).
It's pretty simple. The main difference is that all the other benchmarks only print a single result, and I was planning to do a number of tests. They can always go in separate files if it's a problem.
Codespeed cannot handle more than one result per benchmark.
The `timeit` module is probably not suitable to use to collect the data, as it makes some questionable choices with respect to measurement technique, and at the very least it's inconsistent with the rest of the benchmarks we have.
What sort of choices? As far as I can see it just gets the time before the benchmarked code and the time after and subtracts. That looks quite close to what the other benchmarks do.
It does a ton more stuff than this, so I'm not sure what you mean here. It's full of dynamic code generation and loop counting/prediction logic, gc manipulation, and other stuff. Plus, it changes from Python version to Python version.
What method would you prefer?
Something simple and accurate. :) You may need to do some investigation to determine the best approach. Jean-Paul
Selecting data to operate on is probably an important part of this benchmark (or collection of benchmarks). It may not be possible to capture all of the interesting performance characteristics in a single dataset. However, at least something that includes HTML tags is probably desirable, since that is the primary use-case.
Yes, that's where I'm going to spend most of my effort.
There are some other Python templating systems with benchmarks. One approach that might make sense is to try to build analogous benchmarks for twisted.web.template. (Or perhaps a little thought will reveal that it's not possible to make comparisons between twisted.web.template and those systems, so there's no reason to follow their benchmarking lead.)
I'll do that if I get time, thanks.
Peter.

On Sun, Jan 6, 2013, at 20:22, exarkun@twistedmatrix.com wrote:
On 12:48 am, peter.westlake@pobox.com wrote:
On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote: ... Codespeed cannot handle more than one result per benchmark.
The `timeit` module is probably not suitable to use to collect the data ..... What method would you prefer?
Something simple and accurate. :) You may need to do some investigation to determine the best approach.
1. This is simple:

def do_benchmark(content):
    t1 = time.time()
    d = flatten(request, content, lambda _: None)
    t2 = time.time()
    assert d.called
    return t2 - t1

Do you think it's acceptably accurate? After a few million iterations, the relative error should be pretty small.

2. For the choice of test data, I had a quick search for benchmarks from other web frameworks. All I found was "hello world" benchmarks, that test the overhead of the framework itself by rendering an empty page. I'll include that, of course.

3. Regarding option parsing, is there any reason to prefer twisted.python.usage.Options over argparse? Or optparse, if Python 2.7 is too new. The docs imply that Options was written long before any decent argument parsing was available.

Peter.

On Feb 26, 2013, at 10:05 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Sun, Jan 6, 2013, at 20:22, exarkun@twistedmatrix.com wrote:
On 12:48 am, peter.westlake@pobox.com wrote:
On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote: ... Codespeed cannot handle more than one result per benchmark.
The `timeit` module is probably not suitable to use to collect the data ..... What method would you prefer?
Something simple and accurate. :) You may need to do some investigation to determine the best approach.
1. This is simple:
def do_benchmark(content):
    t1 = time.time()
    d = flatten(request, content, lambda _: None)
    t2 = time.time()
    assert d.called
    return t2 - t1
Do you think it's acceptably accurate? After a few million iterations, the relative error should be pretty small.
Well it rather depends on the contents of 'content', doesn't it? :) I think we have gotten lost in the weeds here. We talked about using benchlib.py initially, and then you noticed a bug, and it was mentioned that benchlib.py was mostly written for testing asynchronous things and didn't have good support for testing the simple case here, which is synchronous rendering of a simple document. However, one of twisted.web.template's major features - arguably its reason for existing in a world that is practically overrun by HTML templating systems - is that it supports Deferreds. So we'll want that anyway. The right thing to do here would be to update benchlib itself with a few simple tools for doing timing of synchronous tasks, and possibly also to just fix the unbounded-recursion bug that you noticed, not to start building a new, parallel set of testing tools which use different infrastructure. That probably means implementing a small subset of timeit.
2. For the choice of test data, I had a quick search for benchmarks from other web frameworks. All I found was "hello world" benchmarks, that test the overhead of the framework itself by rendering an empty page. I'll include that, of course.
"hello world" benchmarks have problems because start-up overhead tends to dominate. A realistic web page with some slots and renderers sprinkled throughout would be a lot better. Although even better would be a couple of cases - let's say small, large-sync, and large-async - so we can see if optimizations for one case hurt another. As Jean-Paul already mentioned in this thread, you can't have more than one result per benchmark, so you'll need to choose a fixed number of configurations and create one benchmark for each.
3. Regarding option parsing, is there any reason to prefer twisted.python.usage.Options over [...]
The reason to prefer usage.Options is consistency. That's what we use on Twisted, and there is no compelling reason to use something else. In any case, if there were a compelling reason to use something else, this wouldn't be the place to start; you could start a separate discussion about option parsing. (Warning: a discussion about option parsing would inevitably be a waste of everyone's time and you should under no circumstances do this.)

All the options that you might need to parse (well, all the options that you _can_ parse, as far as codespeed is concerned) are already implemented by benchlib.py in http://launchpad.net/twisted-benchmarks, so there's no point in writing any option-parsing code for this task anyway. The thing to implement would be a different driver() function that makes a few simple synchronous calls without running the reactor.

-glyph
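A rough sketch of the kind of driver() described here (sync_driver and its signature are invented for illustration, and benchmark_report is assumed to be benchlib's reporting helper, not a confirmed API):

import time

from benchlib import benchmark_report  # assumed reporting helper in benchlib.py

def sync_driver(func, name, iterations=10000):
    # No reactor: just time a plain loop around the synchronous call and
    # report the result in the same shape as the other benchmarks.
    start = time.time()
    for _ in range(iterations):
        func()
    benchmark_report(iterations, time.time() - start, name)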

On Wed, Feb 27, 2013, at 0:39, Glyph wrote:
On Feb 26, 2013, at 10:05 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
On Sun, Jan 6, 2013, at 20:22, exarkun@twistedmatrix.com wrote:
On 12:48 am, peter.westlake@pobox.com wrote:
On Fri, Jan 4, 2013, at 19:58, exarkun@twistedmatrix.com wrote: ... Codespeed cannot handle more than one result per benchmark.
The `timeit` module is probably not suitable to use to collect the data ..... What method would you prefer?
Something simple and accurate. :) You may need to do some investigation to determine the best approach.
1. This is simple:
def do_benchmark(content):
    t1 = time.time()
    d = flatten(request, content, lambda _: None)
    t2 = time.time()
    assert d.called
    return t2 - t1
Do you think it's acceptably accurate? After a few million iterations, the relative error should be pretty small.
Well it rather depends on the contents of 'content', doesn't it? :)
Yes, sorry, the loop is meant to be around the flatten call! Corrected version below.
I think we have gotten lost in the weeds here. We talked about using benchlib.py initially, and then you noticed a bug, and it was mentioned that benchlib.py was mostly written for testing asynchronous things and didn't have good support for testing the simple case here, which is synchronous rendering of a simple document. However, one of twisted.web.template's major features - arguably its reason for existing in a world that is practically overrun by HTML templating systems - is that it supports Deferreds. So we'll want that anyway.
That's true, and I'll include some Deferreds in the content to be flattened. But if the Deferreds actually do any lengthy processing, it makes a nonsense of the benchmark. It only makes sense to use ones that have already fired, i.e. defer.succeed(...). The other benchmarks are testing asynchronous operations, as names like "ssl_throughput" suggest. Flattening doesn't do any of that, and I'm only trying to measure the speed of flattening.
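A small sketch of content along those lines, mixing literal strings with pre-fired Deferreds (the particular tags and structure are illustrative assumptions):

from twisted.internet.defer import succeed
from twisted.web.template import tags

# Pre-fired Deferreds exercise the flattener's Deferred handling without
# ever needing to return to the reactor.
content = tags.ul([tags.li(succeed('row %d' % n)) for n in range(100)])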
The right thing to do here would be to update benchlib itself with a few simple tools for doing timing of synchronous tasks, and possibly also to just fix the unbounded-recursion bug that you noticed, not to start building a new, parallel set of testing tools which use different infrastructure. That probably means implementing a small subset of timeit.
I'm not convinced that the unbounded recursion is actually a bug. A callback on a fired Deferred will be executed immediately, and that's correct behaviour. There's no chance to return control to the reactor, and even if there was, anything that happened in that time would only skew the results. The real problem is that recursion-by-Deferred doesn't have the optimisation for tail recursion found in most functional languages, because that would be very difficult and it's not how Deferreds are usually used.
2. For the choice of test data, I had a quick search for benchmarks from other web frameworks. All I found was "hello world" benchmarks, that test the overhead of the framework itself by rendering an empty page. I'll include that, of course.
"hello world" benchmarks have problems because start-up overhead tends to dominate. A realistic web page with some slots and renderers sprinkled throughout would be a lot better. Although even better would be a couple of cases - let's say small, large-sync, and large-async - so we can see if optimizations for one case hurt another.
Yes, I'm just making my excuses for not copying benchmarks from an existing framework.
As Jean-Paul already mentioned in this thread, you can't have more than one result per benchmark, so you'll need to choose a fixed number of configurations and create one benchmark for each.
3. Regarding option parsing, is there any reason to prefer twisted.python.usage.Options over [...]
The reason to prefer usage.Options is consistency. ...
OK
The thing to implement would be a different driver() function that makes a few simple synchronous calls without running the reactor.
If you don't mind the overhead of an extra function call, that could be as simple as:

def sync_benchmark(iterations, name, func, *args):
    t1 = time.time()
    for _ in range(iterations):
        func(*args)
    t2 = time.time()
    benchlib.benchmark_report(iterations, t2 - t1, name)

I'm not sure if options['iterations'] would be the right thing to use here, because it gives the number of times to repeat the whole benchmark, not the number of times round the inner loop. The async code uses options['duration'], but there would be more overhead to run synchronous code for a given duration.

Peter.

On Feb 27, 2013, at 3:55 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
The real problem is that recursion-by-Deferred doesn't have the optimisation for tail recursion found in most functional languages, because that would be very difficult and it's not how Deferreds are usually used.
Actually it *is* supposed to have this sort of optimization. You can see the work on this on my favorite (now long-since closed) Twisted ticket: <http://twistedmatrix.com/trac/ticket/411>. So it might be interesting to investigate why that doesn't help in this case, and if it could be made to. -glyph
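One way to see why the existing optimization might not help here (a sketch, and the explanation is a guess rather than a diagnosis): the chained-Deferred unwinding covers a callback that returns a Deferred, whereas in the benchmark each callback synchronously starts a fresh, already-fired Deferred, so every iteration still adds ordinary Python stack frames.

from twisted.internet.defer import succeed

def go(n):
    # Each already-fired Deferred runs its callback immediately, so the
    # call stack grows by several frames per iteration.
    if n:
        succeed(None).addCallback(lambda _: go(n - 1))

go(10000)  # exceeds the recursion limit long before finishing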

On Feb 27, 2013, at 3:55 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
It only makes sense to use ones that have already fired, i.e. defer.succeed(...).
The problem with only testing synchronously-fired Deferreds is that there are implementation shortcuts possible there, which would allow us to cheat the benchmark. Granted, it doesn't make sense to do much more work than returning to the reactor - or perhaps not even the reactor, perhaps just a tight outer control loop that does nothing but immediately fire all the Deferreds it knows about when it's returned to - but a not-fired-at-the-time-of-rendering Deferred is a potentially important code path. -glyph
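A sketch of what that tight control loop could look like, with flatten() handed Deferreds that have not fired at rendering time (the pending list and deferred_slot helper are invented for illustration; the stub Request matches the earlier snippets):

from twisted.internet.defer import Deferred
from twisted.web.http import HTTPChannel
from twisted.web.server import Request
from twisted.web.template import flatten, tags

request = Request(HTTPChannel(), False)
pending = []

def deferred_slot():
    d = Deferred()
    pending.append(d)
    return d

content = tags.ul([tags.li(deferred_slot()) for _ in range(100)])
done = flatten(request, content, lambda _: None)

# The "tight outer control loop": fire everything the flattener is
# waiting on, in document order, without ever starting the reactor.
while pending:
    pending.pop(0).callback('item')

assert done.called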

On Wed, Feb 27, 2013, at 18:26, Glyph wrote:
On Feb 27, 2013, at 3:55 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
It only makes sense to use ones that have already fired, i.e. defer.succeed(...).
The problem with only testing synchronously-fired Deferreds is that there are implementation shortcuts possible there, which would allow us to cheat the benchmark. Granted, it doesn't make sense to do much more work than returning to the reactor - or perhaps not even the reactor, perhaps just a tight outer control loop that does nothing but immediately fire all the Deferreds it knows about when it's returned to - but a not-fired-at-the-time-of-rendering Deferred is a potentially important code path.
Fair enough - I'll see what I can do. Peter.

On Feb 27, 2013, at 3:55 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
If you don't mind the overhead of an extra function call, that could be as simple as:
Sounds good to me. Although this should now be expressed as a pull request against twisted-benchmarks on launchpad, and not a snippet in an email :). -glyph

On Wed, Feb 27, 2013, at 18:27, Glyph wrote:
On Feb 27, 2013, at 3:55 AM, Peter Westlake <peter.westlake@pobox.com> wrote:
If you don't mind the overhead of an extra function call, that could be as simple as:
Sounds good to me. Although this should now be expressed as a pull request against twisted-benchmarks on launchpad, and not a snippet in an email :).
Thanks - I'll go and read the docs. Peter.

Just in case anyone was wondering:
On Thu, Jun 21, 2012, at 10:16, Glyph wrote:
Le Jun 21, 2012 à 6:52 AM, Peter Westlake <peter.westlake@pobox.com> a écrit :
How fast is rendering in nevow? I have a page which is mostly a large table with a couple of hundred rows, each containing a form. The generated HTML is about 500 KB. Leaving aside the question of whether this is a good design or not, how long would you expect it to take? I'm interested specifically in what happens between the end of beforeRender and the request being completed. It takes about a minute and a quarter. Is that normal, or is there a delay in my code that I haven't found yet?
Thanks,
Peter.
What does your profiler tell you?
The answer is that nevow takes less than three seconds to render a 500KB page. Peter.
participants (3)
- exarkun@twistedmatrix.com
- Glyph
- Peter Westlake