I've committed a change in r1298 that should substantially speed up the process of reading in particles. I'm still working on reproducing the error.
On Thu, May 7, 2009 at 10:59 AM, Stephen Skory firstname.lastname@example.org wrote:
I believe we encountered KeyboardInterrupt errors before when the script was being killed by either out-of-memory or memory-corruption issues. That would only be possible in the HOP code, which is consistent with what you see when RunHOP is commented out. Unfortunately, it's not easy for me to reproduce memory corruption here on such a large dataset. I am attempting to do so with the L7 RD0035 dataset, by running your script on four processors on one of our machines; unfortunately, all our multiprocessor machines also have lots of RAM. So I'm not sure I'll be able to get identical results, but I am trying.
Let me know if you want me to make DD0082 publicly readable on either Ranger or Kraken. It's also fairly quick to move the data to any machine with NSF GridFTP that I have an account on.
Are you running with vanilla trunk, and which revision? I'm on vanilla trunk r1297.
It's r1295 (so before all the h5py changes, which I don't think should affect any of this), with my modified HaloFinding that uses preloading:
Everything else is stock r1295.
Stephen Skory
http://physics.ucsd.edu/~sskory/
Graduate Student

_______________________________________________
Yt-dev mailing list
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org