Stephen, thanks for the script to get me started thinking. Like you said, I had to make some small modifications to build the ct and sm arrays. Below is the script that goes through the grids one by one, in serial, and picks out the star particles:

    from yt.mods import *
    from yt.analysis_modules.star_analysis.api import *

    pf = load("DD0273/DD0273")
    dd = pf.h.all_data()

    sm = []
    ct = []
    for grid in pf.h.grids:
        print grid
        this_ct = grid['creation_time']
        this_sm = grid['ParticleMassMsun']
        select = (this_ct > 0)
        ext_ct = this_ct[na.where(select)]
        # print the number of star particles being added to ct and sm
        print size(ext_ct)
        ct.extend(ext_ct)
        sm.extend(this_sm[na.where(select)])

    sfr = StarFormationRate(pf, star_mass=na.array(sm),
                            star_creation_time=na.array(ct),
                            volume=dd.volume('mpc'))
    sfr.write_out(name="StarFormationRate.out")

-------------------------------------------------------------------------

Since each grid patch can essentially be processed independently, I was wondering whether it is possible to do the ct/sm calculation on different nodes in parallel, and then at the end sum everything up into one final ct and sm holding the star particles from all the grid patches. I tried to do this with the derived-quantity method as follows:

    def _SFR_parts(data):
        index = (data['creation_time'] > 0)
        this_ct = data['creation_time'][na.where(index)]
        this_sm = data['ParticleMassMsun'][na.where(index)]
        return this_ct, this_sm

    def _TwoSFRParts(data, this_ct, this_sm):
        # "this_ct" here should be a list of the "this_ct" arrays returned
        # by _SFR_parts() on the different grids, but I am not sure how I
        # can verify that
        tot_ct = []
        for part_ct in this_ct:
            tot_ct.extend(part_ct)
        tot_sm = []
        for part_sm in this_sm:
            tot_sm.extend(part_sm)
        return tot_ct, tot_sm

    add_quantity('ctsm', function=_SFR_parts,
                 combine_function=_TwoSFRParts, n_ret=2)

and then I tried calling:

    final_ct, final_sm = dd.quantities['ctsm']()

but I get:

    ValueError: setting an array element with a sequence.
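For what it's worth, that ValueError is the one NumPy typically raises when variable-length pieces get forced into a single rectangular array, and the per-grid arrays here will have different lengths. Here is a minimal plain-NumPy sketch of the combine step, independent of yt's derived-quantity machinery (the grid values are made up): concatenating the ragged pieces into flat 1-D arrays, which is the shape StarFormationRate wants.

```python
import numpy as np

# Stand-ins for what _SFR_parts might return on two different grids;
# the number of star particles differs from grid to grid, so the
# pieces are ragged and cannot be stacked into one rectangular array.
grids_ct = [np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0])]
grids_sm = [np.array([1e5, 2e5]), np.array([3e5, 4e5, 5e5])]

# np.concatenate flattens the ragged pieces into single 1-D arrays,
# avoiding the "setting an array element with a sequence" failure mode.
tot_ct = np.concatenate(grids_ct)
tot_sm = np.concatenate(grids_sm)
print(tot_ct)  # [1. 2. 3. 4. 5.]
```

This only sidesteps the array-building error; whether the combine function actually receives a list of per-processor arrays still depends on yt's internals.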
---------------------------------------------------------------------

Even if the above method worked, loading each grid onto one processor/node would only alleviate the memory problem so much, because potentially a LOT of the particles can sit on a single grid, which would still overload the memory sometimes. So this isn't as good as parallel HOP's KD-tree way of cutting up the particles for load balancing.

From G.S.
Err...
sfr = StarFormationRate(pf, star_mass=sm, star_creation_time=ct, volume=dd.volume('mpc'))
You might need to make sure that 'sm' and 'ct' are arrays when you pass them in. I don't remember, so I'd just do it to be careful!
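In case it helps, wrapping the final lists in an array conversion is cheap and harmless even when the input is already an array; a quick plain-NumPy illustration (the values are made up, and ct is built with list.extend() as in the script above):

```python
import numpy as np

# ct accumulated across grids with list.extend(), as in the serial script.
ct = []
ct.extend(np.array([1.0, 2.0]))
ct.extend(np.array([3.0]))

# np.asarray converts a list to an ndarray and is a no-op on an ndarray
# of matching dtype, so the wrap is safe either way before passing the
# result on to StarFormationRate.
ct_arr = np.asarray(ct)
print(ct_arr.dtype, ct_arr.shape)  # float64 (3,)
```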
--
Stephen Skory
s@skory.us
http://stephenskory.com/
510.621.3687 (google voice)
_______________________________________________
yt-users mailing list
yt-users@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org