Hi all,
I need to calculate the virial properties (mass in particular) of 80,000+ haloes for perhaps up to 25 datasets. Actually, it's that many haloes at z=0, so fewer for higher z, but you get the idea. Can you guys suggest the best way to optimize this? My only good idea is to split up the entries in HopAnalysis.out into multiple files so each run of HaloProfiler has roughly the same amount of work to do. Of course, not making projections will speed things up too.
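To be concrete, the splitting I have in mind is something like the following sketch. It assumes the header lines in HopAnalysis.out start with '#' (and that the chunk-file naming is arbitrary), and deals the remaining entries out round-robin into one chunk file per HaloProfiler run:

N = 8  # number of separate HaloProfiler runs (pick to taste)

lines = open("HopAnalysis.out").readlines()

# Assume comment/header lines start with '#'; copy them into each chunk.
header = [l for l in lines if l.startswith("#")]
entries = [l for l in lines if not l.startswith("#")]

for i in range(N):
    chunk = open("HopAnalysis_%02d.out" % i, "w")
    chunk.writelines(header)
    # every N-th entry, offset by i, so the chunks are roughly equal in size
    chunk.writelines(entries[i::N])
    chunk.close()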
Thanks!
(For those of you getting this message twice, I realized yt-users is the appropriate place for this discussion. Sorry!)
_______________________________________________________
Stephen Skory
Graduate Student
sskory@physics.ucsd.edu
http://physics.ucsd.edu/~sskory/
Hi Stephen,
So there are a couple ways one could approach this. I'm going to respond, but Britton Smith wrote the HaloProfiler, so he should feel free to jump in and correct me.
I think it is probably safe to say that, for the projections of individual halos, you will not need to project in parallel. That frees us up to process each halo individually while distributing the halos-to-process across the different processors. If you look at the trunk revision of yt/extensions/HaloProfiler.py, I think this would be line 96. We'd want to implement a subset of yt/lagos/ParallelTools.py:ParallelAnalysisInterface -- specifically, the grid iteration should be converted to iterate over obj.hopHalos. Then, using mpi_catlist, join the VirialQuantities across all processors, and finish by writing them out only on the root processor.
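To make that pattern concrete, here is a rough sketch of the round-robin distribution written with mpi4py directly, rather than the ParallelAnalysisInterface machinery; compute_virial_quantities and the halo list here are just stand-ins for the real per-halo profiling inside HaloProfiler:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def compute_virial_quantities(halo):
    # Stand-in for the real per-halo profiling step.
    return {"halo": halo, "virial_mass": 0.0}

halos = range(80000)  # stand-in for the list read from HopAnalysis.out

# Each processor handles every size-th halo, so the work is spread
# roughly evenly without splitting the input file by hand.
my_results = [compute_virial_quantities(h) for h in halos[rank::size]]

# Join the per-processor lists on the root processor, which alone
# writes the combined virial quantities out.
gathered = comm.gather(my_results, root=0)
if rank == 0:
    results = [r for sub in gathered for r in sub]
    print("joined %d halo results" % len(results))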
I think this can be done, and it should be relatively straightforward. I've spoken with Britton, and it seems like it should be doable shortly; I'll send another reply when it's implemented.
-Matt