
Hi Stephen, I'm trying to do halo profiling with parallelHP on a 1024^3 dataset in a 14 Mpc/h box at z=5. I used to be able to do this on data of the same size on Triton with 256 GB of RAM, and I'm now trying it on Nautilus with the same amount of RAM. Should I use more processors for faster results, or should I cut down the number of processors, since there is memory overhead as I increase the core count? From G.S.
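A minimal sketch of the kind of script under discussion, assuming the yt 2.x HaloProfiler interface; the dataset path and the profile fields below are placeholders, not Geoffrey's actual setup:

    from yt.mods import *
    from yt.analysis_modules.halo_profiler.api import *

    # Placeholder dataset path; substitute the 1024^3 output in question.
    hp = HaloProfiler("DD0005/DD0005")

    # Illustrative radial profiles; field and weight choices are examples only.
    hp.add_profile('CellVolume', weight_field=None, accumulation=True)
    hp.add_profile('Density', weight_field='CellMassMsun', accumulation=False)
    hp.add_profile('Temperature', weight_field='CellMassMsun', accumulation=False)

    hp.make_profiles()

Run under MPI with, e.g., "mpirun -np 16 python halo_profiles.py --parallel"; each added process can speed up the profiling but carries its own memory overhead.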

Hi Geoffrey,
> I'm trying to do halo profiling with parallelHP on a 1024^3 dataset in a 14 Mpc/h box at z=5. I used to be able to do this on data of the same size on Triton with 256 GB of RAM, and I'm now trying it on Nautilus with the same amount of RAM.
> Should I use more processors for faster results, or should I cut down the number of processors, since there is memory overhead as I increase the core count?
There's no easy answer, I'm afraid. In particular, as you might have already guessed, because the dataset is so small in cosmological size, the load balancing is not as effective as it might otherwise be. My advice is to just experiment (a sketch of such a timing experiment follows below), and remember that your human time is much more valuable than computer time, so don't be afraid to up the horsepower if needed.

--
Stephen Skory
s@skory.us
http://stephenskory.com/
510.621.3687 (google voice)
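One hypothetical way to run that experiment is to time the same profiling script at several core counts and compare wall-clock time against memory behavior. Everything here is an assumption to adapt: the mpirun launcher, the script name halo_profiles.py, and the core counts tried.

    import subprocess
    import time

    # Hypothetical scaling test: launch the same parallel profiling script
    # at increasing core counts and report the wall-clock time of each run.
    for nprocs in (4, 8, 16, 32):
        start = time.time()
        subprocess.check_call(
            ["mpirun", "-np", str(nprocs),
             "python", "halo_profiles.py", "--parallel"])
        print("%2d cores: %.1f s" % (nprocs, time.time() - start))

If wall-clock time stops improving, or runs start failing for lack of memory, back off to the last core count that behaved well.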