I've merged a bunch of changes from the parallel_profiles branch back
into trunk. This means that projections and ("lazy") profiles are now
transparently parallelized if you have mpi4py (mpi4py.scipy.org)
installed. Un-lazy profiles are not, however, so for now you will
need to know your min/max bounds before you execute any profiles in
parallel.
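A minimal sketch of why lazy profiles need bounds up front (the chunk
data and bin count below are made-up illustrations, not yt API): each
chunk of data is binned independently in a single pass, so the bin
edges must be fixed before any data is read.

```python
import numpy as np

# Bounds must be supplied in advance; a one-pass (lazy) reader never
# sees all the data at once, so it cannot discover global extrema first.
x_bounds = (1e-31, 1e-24)
nbins = 8
edges = np.logspace(np.log10(x_bounds[0]), np.log10(x_bounds[1]), nbins + 1)

counts = np.zeros(nbins)
# Hypothetical data chunks, binned one at a time against the fixed edges.
for chunk in (np.array([1e-30, 1e-27]), np.array([1e-25, 5e-25])):
    counts += np.histogram(chunk, bins=edges)[0]

print(int(counts.sum()))  # all four sample values fell inside the bounds
```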

To take advantage of the parallel projections and profiles you only
need to install mpi4py. If you are running off trunk, the following
script, when executed through mpirun, will be transparently parallel.
(In fact, the plots will only be saved on the head node, as well.)
from yt.mods import *
pf = EnzoStaticOutput("galaxy1200.dir/galaxy1200")
pc = PlotCollection(pf)
pc.add_profile_sphere(0.1, '1', ["Density","Temperature"],
                      lazy_reader=True, x_bounds=(1e-31,1e-24), y_bounds=(1e1,1e8))
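The transparent-parallel behavior (including the head-node-only plot
saving mentioned above) follows the usual mpi4py pattern sketched
below; this is an illustration of that general pattern, not yt's
actual internals:

```python
# Sketch: detect MPI at import time; fall back to serial if mpi4py is absent.
try:
    from mpi4py import MPI
    rank = MPI.COMM_WORLD.rank
    size = MPI.COMM_WORLD.size
except ImportError:
    rank, size = 0, 1  # serial fallback: behave as a one-process "job"

# Each rank would process its share of the grids here; only the
# head node (rank 0) writes plots to disk.
if rank == 0:
    print("head node: would save plots (world size = %d)" % size)
```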

Soon un-lazy profiles will be parallel as well, and then simpler
commands like add_profile_sphere(width, unit, fields) will be
parallelized by default.

Let me know if you run into any problems. All of the unit tests pass,
and I've tested both in parallel and in serial, and everything seems
to work, but this change touches a couple of different places in the
code, so it could be a bit problematic for some corner cases.

Another side effect of this change is that parallel data analysis now
uses a much more efficient IO subsystem for the Packed HDF5 output: a
single file open is executed per CPU file, rather than one per grid.
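The IO change amounts to opening each CPU file once and reusing the
handle across all the grids stored inside it. A hypothetical sketch of
that caching pattern (class and file names here are made up, not yt's
actual code):

```python
import os
import tempfile

class HandleCache:
    """Keep at most one open handle per file, reused across reads."""
    def __init__(self):
        self._handles = {}
        self.opens = 0
    def get(self, filename):
        if filename not in self._handles:
            self._handles[filename] = open(filename, "rb")
            self.opens += 1  # one open per file, not one per grid
        return self._handles[filename]
    def close_all(self):
        for f in self._handles.values():
            f.close()
        self._handles.clear()

# Usage: many "grids" living in the same CPU file trigger a single open.
path = os.path.join(tempfile.mkdtemp(), "cpu0000.h5")
open(path, "wb").close()
cache = HandleCache()
for _grid in range(10):
    cache.get(path)
print(cache.opens)  # prints 1
cache.close_all()
```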