I've just pushed to devel a new load-balancing implementation for parallelHF. The old method repeatedly read particle data off disk during load-balancing; the new one reads a random subset of all the particles and does the load-balancing in memory on that subset only. This should be substantially faster, especially on slow disks. It turns out that even a very small percentage of the total number of particles can achieve decent load-balancing.
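To illustrate the idea (not the actual parallelHF code), here's a minimal sketch of load-balancing on a random subset: it draws a fraction of the particle positions and picks subdomain boundaries at equal-population quantiles of the sample. The function name and the one-axis split are my own simplifications for illustration.

```python
import numpy as np

def sampled_boundaries(positions, n_subdomains, sample=0.03, seed=0):
    """Estimate equal-population subdomain boundaries along the x-axis
    by load-balancing on a random in-memory subset of the particles.
    (Illustrative sketch only, not the parallelHF implementation.)"""
    rng = np.random.default_rng(seed)
    n = positions.shape[0]
    # Draw the random subset; only these particles are ever examined.
    idx = rng.choice(n, size=max(1, int(n * sample)), replace=False)
    subset = positions[idx]
    # Place boundaries at equal-population quantiles of the sampled
    # x coordinates, so each subdomain gets roughly equal particle counts.
    quantiles = np.linspace(0.0, 1.0, n_subdomains + 1)[1:-1]
    return np.quantile(subset[:, 0], quantiles)
```

Even with a 3% sample, the quantile estimates are close enough that the resulting subdomains hold nearly equal particle counts.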
There is a new parameter, 'sample', that controls how many particles are used for load-balancing. It gives the fraction of the total number of particles to use for balancing. The default is 0.03, meaning 3% of the particles are used. I've experimented down to 0.3% and seen reasonable results there, too.
h = parallelHF(pf, threshold=160.0, safety=1.5, dm_only=False,
    resize=True, fancy_padding=True, rearrange=True, sample=0.03)
Let me know if you run into any problems using this. I've done some testing and the results are identical to the old method, but something could be wrong, of course.
Stephen Skory
email@example.com
Graduate Student
http://physics.ucsd.edu/~sskory/