7 Dec 2009, 7:39 p.m.
Hi,

I have about 40,000 halos (found with the HOP halo finder), and I want to run HaloProfiler on them. Since the number of halos is so large, I would like to run it in parallel with something like:

mpirun -np <cpus> halo_profiler.py --parallel

Steele at Purdue has 8 CPUs per node. The 8 CPUs on a node share memory, but memory is not shared between nodes. Given the 40,000 halos, would I be better off using just the 8 CPUs on one node, or should I use more nodes?

Regards,
Shankar
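P.S. For context, here is how I imagine the halos being divided across MPI processes (a rough mpi4py sketch, not yt's actual scheme; profile_halo is just a placeholder for the real per-halo work):

from mpi4py import MPI

def profile_halo(halo_id):
    # Placeholder for whatever HaloProfiler actually does per halo.
    pass

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_HALOS = 40000

# Round-robin split: rank r handles halos r, r + size, r + 2*size, ...
for halo_id in range(rank, N_HALOS, size):
    profile_halo(halo_id)

If the split really is this even, the work is embarrassingly parallel over halos, so I would expect more nodes to help as long as each node has enough memory for its share of the data.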