So far so good: it's gotten through 2805 (past the 2550 mark) of the 15625 grids and is still going. So it looks like the stinky garbage wasn't being taken out!
I was thinking: when we load up a portion of an array, say a = dd["creation_time"][0:100], are we only holding the first 100 creation_time values in memory, or does it load up the whole array and then pick out the first 100? If it's the former, we can just loop over it several times (the next slice will be [100:200]) until we get to the end of the array. Super slow, but it'll fit in memory...
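In code, the idea would be something like this (just a sketch, assuming dd is the all_data container from the script below; the chunk size is arbitrary):

chunk = 100
start = 0
while True:
    # if slicing only reads this piece, memory stays bounded;
    # if it reads the whole field first, this gains nothing
    a = dd["creation_time"][start:start + chunk]
    if len(a) == 0:
        break
    # ... do something with this chunk of creation times ...
    start += chunk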
You're pretty much correct in your suspicion: slicing the array like that will load the whole field into memory and then cut it down. However, I think something else is going on as well, because what you ran should have worked. Even though you are looping over individual grids, perhaps not all the objects are being thrown away, so you're still running out of memory. Can you try the script below, where I've made a couple of changes? I've added a call to .clear_data() for each grid, and then a call to the garbage collector for good measure. Let me know how it goes!
print "starting imports" from yt.mods import from yt.analysis_modules.star_analysis.api import import gc print "loaded modules" pf = load("RD0017/RD0017") print "loaded datafile" dd = pf.h.all_data() print "loaded all data" sm =  ct =  for grid in pf.h.grids: print grid this_ct = grid['creation_time'] this_sm = grid['ParticleMassMsun'] select = (this_ct > 0) ext_ct = this_ct[na.where(select)] print na.size(ext_ct) ct.extend(ext_ct) sm.extend(this_sm[na.where(select)]) grid.clear_data() gc.collect()
sfr = StarFormationRate(pf, star_mass=na.array(sm), star_creation_time=na.array(ct), volume=dd.volume('mpc'))
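If it helps, once that finishes you can dump the rates straight to disk. If I remember the yt-2.x star_analysis API right, the StarFormationRate object has a write_out method (the filename here is just an example):

# write the binned star formation history to a text file
sfr.write_out(name="StarFormationRate.out")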
--
Stephen Skory
firstname.lastname@example.org
http://stephenskory.com/
510.621.3687 (google voice)