On Fri, Mar 2, 2018 at 1:10 PM, Jared Coughlin <Jared.W.Coughlin.29@nd.edu> wrote:
Hello! I have a large Gadget simulation (1024^3 particles) that I am trying to make a projection plot of. I thought it would make sense to run it in parallel, so I'm currently running it on a node with 24 processors and 256GB of RAM. I know for a fact that this is enough RAM to hold the particle data because, well, the Gadget simulation ran. However, the job keeps getting killed because it is using all of the system memory. Does yt try to give each processor a copy of the whole particle dataset when run in parallel, or does it farm the data out more efficiently? I've tried lowering n_ref, but I can try lowering it more. Thanks!

Note that *increasing* n_ref will use less RAM, since each octree leaf node can then hold more particles before it gets refined, so the tree ends up with fewer nodes overall.
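
For example, n_ref can be passed straight to yt.load when opening the snapshot. This is just a minimal sketch; the snapshot filename below is a placeholder, not Jared's actual file:

```python
import yt

# A larger n_ref lets each octree leaf hold more particles before it is
# refined, so the tree has fewer nodes and a smaller memory footprint
# (at the cost of coarser deposited fields). yt's default n_ref is 64.
ds = yt.load("snapshot_020.hdf5", n_ref=256)  # placeholder filename
```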

I don't think the particle projection operation is parallelized at all in the current version of yt. I wouldn't be surprised if each process was duplicating the particle data.
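
For reference, a typical parallel run of this kind is set up along the lines of the sketch below (the filename and field are placeholders, not necessarily what Jared is running). Under MPI, every rank executes yt.load() on the same snapshot, so if the projection itself is not parallelized, each of the 24 processes can end up building its own copy of the particle octree, which would be consistent with the memory blow-up described:

```python
import yt

yt.enable_parallelism()  # enables MPI-based parallelism if mpi4py is available

# Every MPI rank runs this load on the same snapshot (placeholder filename).
ds = yt.load("snapshot_020.hdf5")

# Project the deposited gas density along the z axis and save the image.
p = yt.ProjectionPlot(ds, "z", ("gas", "density"))
p.save()
```

Such a script would be launched with something like `mpirun -np 24 python projection.py`.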
 
-Jared 

_______________________________________________
yt-users mailing list -- yt-users@python.org
To unsubscribe send an email to yt-users-leave@python.org
