> Hi everyone, I recently got a hold of some 3200 cube data and was planning
> [...]
> This is data at a high redshift with only 230595 particles. But I don't
> [...]
If you're trying to pick out all the stars in a dataset with 3200^3 + 230595 particles, it's very easy to run out of memory. For example, this will probably crash, because it tries to read all eleventy billion particles (approximately) into memory at once:
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *

pf = load("data0030")
dd = pf.h.all_data()
# This pulls every particle in the whole dataset into memory at once.
sfr = StarFormationRate(pf, data_source=dd)
So you have a few options, depending on how fancy you want to get. One option is to parallelize the star formation analysis module so it can handle big datasets like this elegantly; a rough sketch of that idea follows.
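For what it's worth, here's a minimal sketch of what that could look like at the user level, assuming mpi4py (my assumption; this isn't something the star analysis module does for you internally). Each MPI rank reads its own strided subset of the grids, and the root rank gathers everything and builds the SFR. You'd launch it with mpirun.

from mpi4py import MPI
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *

comm = MPI.COMM_WORLD
pf = load("data0030")

sm = []
ct = []
# Each rank walks a strided subset of the grids.
for grid in pf.h.grids[comm.rank::comm.size]:
    this_ct = grid['creation_time']
    this_sm = grid['ParticleMassMsun']
    select = (this_ct > 0)
    ct.extend(this_ct[select])
    sm.extend(this_sm[select])

# Collect every rank's stars on the root and flatten.
all_sm = comm.gather(sm, root=0)
all_ct = comm.gather(ct, root=0)
if comm.rank == 0:
    sm = [m for piece in all_sm for m in piece]
    ct = [t for piece in all_ct for t in piece]
    dd = pf.h.all_data()
    sfr = StarFormationRate(pf, star_mass=sm, star_creation_time=ct,
                            volume=dd.volume('mpc'))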
A simpler option, quicker to get working but slower to run, is to instead build up arrays of the star particle data, one grid at a time, and then analyze those:
from yt.mods import *
from yt.analysis_modules.star_analysis.api import *

pf = load("data0030")
dd = pf.h.all_data()
sm = []
ct = []
for grid in pf.h.grids:
    this_ct = grid['creation_time']
    this_sm = grid['ParticleMassMsun']
    # Star particles have positive creation times, so this
    # picks out just the stars in each grid.
    select = (this_ct > 0)
    ct.extend(this_ct[select])
    sm.extend(this_sm[select])
sfr = StarFormationRate(pf, star_mass=sm, star_creation_time=ct,
                        volume=dd.volume('mpc'))
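Two footnotes on that, both worth checking against your yt version since I'm going from memory: old-style grid objects cache the fields you read from them, so a grid.clear_data() at the bottom of the loop should keep the memory footprint down; and once the StarFormationRate object exists, you can dump the result to a text file:

sfr.write_out(name="StarFormationRate.out")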
--
Stephen Skory
email@example.com
http://stephenskory.com/
510.621.3687 (google voice)