
On Tue, Feb 01, 2005 at 08:02:27PM +0000, Valentino Volonghi wrote:
On Tue, 1 Feb 2005 20:09:57 +0100, Andrea Arcangeli <andrea@cpushare.com> wrote:
Anyway the patch is below:
Looks like a great start.
I'll give it a spin overnight to see what happens.
+_CACHE = {}
Shouldn't this be stored in the respective classes?
There are many ideas about where to put this _CACHE. jamwt volunteered to write a memcache-like backend (http://www.danga.com/memcached/), which will probably be one of the backends.
Is that a separate task? One of the benefits of having the cache inside the Python VM that runs the webserver is that there are no context switches and no inter-process communication. So I'm not very excited about caching outside Nevow ;)
In Python you cannot have a return statement with a value inside a generator. CachedSerializer is in fact a generator (because of the yield keyword in its body), so it can't return a value.
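As a minimal illustration (hypothetical names, not the actual Nevow CachedSerializer): the presence of `yield` anywhere in a function body makes the whole function a generator, so a cached result has to be yielded rather than returned with a value (in the Python 2 of the time, `return result` inside a generator was a SyntaxError):

```python
def cached_serializer(result):
    # Hypothetical sketch: the `yield` below makes this whole function
    # a generator function, so the result must be yielded, not returned.
    yield result

# Calling it produces a generator; iterating it yields the result.
assert list(cached_serializer("page")) == ["page"]
```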
Didn't know about that...
+ _CACHE[original.name] = (now(), result)
What is contained in original.name? How do I identify exactly which object is being cached? (Just so I understand how to use this.)
original.name is the first argument of the tag instance.
t.cached(name="foobar")
This will create an empty cached tag named foobar. You can also do:
t.cached(name=(IFoo, IBar))
as was suggested, if you need to. No check is done on the type of name, but it must be hashable.
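A minimal sketch of the name-keyed cache described above (hypothetical helper names; the actual patch stores `(now(), result)` in a module-level `_CACHE` keyed by `original.name`):

```python
import time

# name -> (timestamp, rendered result); module-level, as in the patch
_CACHE = {}

def cache_store(name, result):
    """Store a rendered result under any hashable name."""
    _CACHE[name] = (time.time(), result)

def cache_lookup(name, lifetime):
    """Return the cached result for `name`, or None if missing or expired."""
    entry = _CACHE.get(name)
    if entry is None:
        return None
    stamp, result = entry
    if time.time() - stamp > lifetime:
        del _CACHE[name]  # expired: drop it so it gets re-rendered
        return None
    return result

# Any hashable value works as a key: a plain string, or a tuple such
# as a pair of interface names.
cache_store("foobar", "<html>...</html>")
cache_store(("IFoo", "IBar"), "<fragment/>")
assert cache_lookup("foobar", lifetime=10) == "<html>...</html>"
```

An unhashable key such as a list would raise TypeError at the dict assignment, which is why the only real constraint on `name` is hashability.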
Ok, so it's up to me to avoid collisions; it's not like the lower-level cache where the URL was picked automatically.
Do I understand correctly that this more fine-grained cache doesn't obsolete the other cache?
I think it does, and it surely will once someone writes the flatsax code to use it with XHTML templates.
With stan you can do:
docFactory = loaders.stan(t.cached(name="MainPage", lifetime=10)[t.html[....]])
This will do the same thing as the first, older patch. I also get similar performance with the new patch: 26 req/sec, and it shouldn't be any slower.
All my pages use XML (except for the forms that come from formless), so I'd need the cache for XML templates too. But I really liked the URL-based caching; I don't want to have to write name="something" by hand.
The other cache is probably the fastest we can get, and it pretty much solves my problem for the high-traffic part.
I still think 250 req/sec is too much. Are you sure that isn't the redirect page in guard?
There's no guard (I know the issue with the guard redirect); I get 200 req/sec just fine (over loopback; it goes down to 180 req/sec with ethernet in the middle). I cannot easily evaluate whether your same approach for XML is going to work at the same speed as the httprender cache, but for sure I don't want to have to write name="xxx" by hand. So for now I'll stick with the cache in httprender, which guarantees me no CPU bottleneck up to 200 req/sec.
The httprender cache is so easy to use and so efficient, and automatically gets the whole HTTP site right, that it isn't worth it for me to even think about messing things up and converting to the new method, if only because I'd need to choose the hash key by hand (and that's assuming it has the same performance). So I'd still suggest applying that patch to trunk. Not everyone will want to use the more fine-grained caches; for dynamic but mostly static data, the httprender cache is just ideal.
For the SSL site, where I cannot use the httprender cache at all, I'll need the XML loader caching or the fragments, since I only used XML fragments. But I suspect this stan cache could already solve all the forms rendering if I do something like tags.cached(..)[webform.renderForms()]; I'm going to try it in a few minutes ;) If I can optimize the forms with this cache it'll be great.
I've talked to dp, and he said that compy will exist only to avoid depending on Twisted; it will definitely use zope.interface directly if present.
Ok great news! Thanks ;)