I have a question about memory usage of yt's HOP HaloFinder. I have an N=256^3 DM-only Enzo simulation that I ran with a 512^3 root grid and fairly aggressive DM refinement (MaximumRefinementLevel=7). Although the run has only 256^3 particles, the AMR has resulted in 163,951 grids and more than 1.5e9 grid cells.

Running HOP like so:

    halo_list = HaloFinder(pf)

I'm finding that yt uses around 14 GB of memory during the particle reading (before the HOP process actually starts), which is way out of proportion to the relatively small number of particles. It seems that the memory usage is driven by the huge number of grids rather than by the number of particles.

I've traced the memory increase to the calculation of the total mass, specifically to this line:

    total_mass = self.comm.mpi_allreduce(
        self._data_source["ParticleMassMsun"].sum(dtype='float64'), op='sum')

and again further down:

    sub_mass = self._data_source["ParticleMassMsun"].sum(dtype='float64')

When I specify the total mass as a keyword and comment out the sub_mass calculation (forcing sub_mass = total_mass), the memory usage remains small. So something about summing up this field is leaking memory.

Can anyone here shed any light on this puzzling memory hunger?

Mike

-- 
*********************************************************************
*                                                                   *
* Dr. Michael Kuhlen              Theoretical Astrophysics Center   *
* email: mqk@astro.berkeley.edu   UC Berkeley                       *
* cell phone: (831) 588-1468      B-116 Hearst Field Annex # 3411   *
* skype username: mikekuhlen      Berkeley, CA 94720                *
*                                                                   *
*********************************************************************
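For what it's worth, here is a minimal, self-contained sketch of the behavior I would have expected. This is plain NumPy, not yt's actual I/O path; the `grids` list is a hypothetical stand-in for per-grid reads of the "ParticleMassMsun" field. The point is that accumulating the sum grid-by-grid should keep peak memory at roughly one grid's worth of particles, whereas gathering every grid's particles into one big array first needs memory for all of them at once:

```python
import numpy as np

# Hypothetical stand-in for yt's per-grid particle reads: each array
# plays the role of one grid's "ParticleMassMsun" field. (This is an
# illustration, not yt's real machinery.)
rng = np.random.default_rng(0)
grids = [rng.random(1000) for _ in range(100)]

# Pattern 1: build the full concatenated field, then sum. Peak memory
# scales with the total particle count across all grids.
total_mass_concat = np.concatenate(grids).sum(dtype='float64')

# Pattern 2: accumulate grid-by-grid. Peak memory stays at one grid's
# worth of particles, regardless of how many grids there are.
total_mass = 0.0
for g in grids:
    total_mass += g.sum(dtype='float64')
```

Both patterns give the same total, so if the summing path in the halo finder retains per-grid arrays (or grid metadata) instead of releasing them after each partial sum, that could explain memory scaling with grid count rather than particle count.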