Sorry for being sparse with the information; I should have been clearer. This was running in parallel on 64 cores on Nautilus. The field was Density vs. MassFraction, MassFraction being a field I wrote that in this instance simply returns CellMass. lazy_reader is True. The run is 512^3 with 4 levels of refinement by a factor of 4, with only about 100 subgrids, but it was written in 2007 (I think) without parallel root grid IO on. My suspicion (without really understanding how yt parallelism works) is that each task needed to read in the entire root grid, rather than using the domain decomposition (?)
Would turning lazy_reader off change things?
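For what it's worth, a back-of-the-envelope estimate is consistent with the every-task-reads-the-root-grid suspicion. This is just a sketch; the bytes-per-cell and the number of arrays resident at once are my assumptions, not measured values:

```python
# Rough memory estimate if every task reads the full 512^3 root grid
# instead of its own decomposed sub-volume.

root_dim = 512
cells = root_dim ** 3                        # cells in the root grid
bytes_per_cell = 8                           # assuming 64-bit floats
gib_per_field = cells * bytes_per_cell / 2**30

# Density, CellMass, plus a couple of dependency/coordinate arrays
# held at once is a guess on my part.
fields_resident = 4
per_task_gib = gib_per_field * fields_resident

tasks = 64
total_gib = per_task_gib * tasks             # if there is no decomposition

print(f"{gib_per_field:.2f} GiB per field per task")
print(f"{per_task_gib:.2f} GiB per task, {total_gib:.0f} GiB across {tasks} tasks")
```

So each full-root-grid field is about 1 GiB per task, and only a few resident arrays per task (let alone several tasks per node) would blow past 12 GB quickly; with proper decomposition each task would hold roughly 1/64 of that, which also fits the serial run being fine.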
Are you running in parallel? Which fields are you profiling? Do they have lots of dependencies, or require ghost zones? Is yt using lazy_reader? Does your data have many grids?
On Mar 14, 2012 12:37 AM, "david collins" firstname.lastname@example.org wrote:
I should add that this was done on 64 cores-- in serial it works fine, just slow.
On Tue, Mar 13, 2012 at 9:35 PM, david collins email@example.com wrote:
I have an old dataset that I'm trying to make profiles of. It's a 512^3 root grid, but it was written with ParallelRootGridIO off. I find that it's using strange amounts of memory, more than 12 GB. Is this a known problem with a straightforward work-around?
-- Sent from my computer.
_______________________________________________
yt-dev mailing list
firstname.lastname@example.org
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org