Looks like a good speedup: Old: 5.53041e+01 New: 2.57021e+01
This was on a dataset with 6 levels of AMR everywhere. I will note that, as you mentioned, the disk cache does a lot; here were the results from the first run: Old: 5.51695e+01 New: 2.23177e+02
On Fri, Aug 27, 2010 at 1:06 PM, Matthew Turk firstname.lastname@example.org wrote:
I added the quad tree projection to the tip of the hg repository.
If you could please put it through a few paces, that'd be great. You can access it as the attribute "quad_proj" on a hierarchy. I've done some speed testing, and the generality necessary to get it to work with sources and whatnot makes it a bit slower than if it were just fed the data, but it's still about twice as fast on my datasets as the old projection method.
It doesn't work in parallel yet, but give it a shot on some big datasets anyway -- I tested it on a 1024^3 L7 dataset and it worked fine in serial. I know how to parallelize it, and the parallelization process should also speed up the serial projections, since it will let us better choose the batching of IO. I think this projection method should also scale *much* better in parallel, because we shouldn't have to do domain decomposition to parallelize, which was always a killer on nested grid simulations.
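For readers unfamiliar with the approach: the core idea of a quadtree projection can be sketched roughly as below. This is not yt's actual implementation -- just a minimal, self-contained illustration under simplifying assumptions (unit cell weights, no masking of coarse cells covered by finer ones, and hypothetical function and variable names).

```python
# Illustrative sketch only, NOT yt's quad_proj implementation.
# Project 3D AMR cell data along the z-axis by accumulating column sums
# into a quadtree keyed by (level, i, j), then pushing coarse columns
# down to the finest level. No domain decomposition is required, which
# is the property Matt highlights for parallel scaling.
from collections import defaultdict

def quadtree_project(cells, max_level):
    """cells: iterable of (level, i, j, k, value) AMR cells.
    Returns {(i, j): column_sum} on the finest-level grid."""
    tree = defaultdict(float)
    for level, i, j, k, value in cells:
        # Collapse along z: k is discarded, contributions accumulate
        # per quadtree node (level, i, j).
        tree[(level, i, j)] += value
    # Refine every coarse node down to max_level: each refinement step
    # splits a column into a 2x2 block of child columns.
    result = defaultdict(float)
    for (level, i, j), total in tree.items():
        scale = 2 ** (max_level - level)
        for di in range(scale):
            for dj in range(scale):
                result[(i * scale + di, j * scale + dj)] += total
    return dict(result)

# Tiny usage example: one coarse (level-0) cell plus two level-1 cells.
proj = quadtree_project(
    [(0, 0, 0, 0, 1.0), (1, 0, 0, 0, 2.0), (1, 1, 1, 0, 3.0)],
    max_level=1,
)
```

A real projection would weight each contribution by the cell's path length along the projection axis and avoid double-counting coarse cells that are covered by finer data; those details are omitted here to keep the sketch short.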
I'd like to push forward with this, so if you get a chance to try it, report back with any issues. I'm sure there will be several...
I've placed a comparison test here:
(But note that the disk cache will favor the old projection method the first time you run it, so run at least twice. :)
-Matt

_______________________________________________
Yt-dev mailing list
Ytemail@example.com
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org