Hi Stephen,

On Sat, Feb 25, 2012 at 6:52 PM, Stephen Skory <s@skory.us> wrote:
Hi Matt,
Could you please provide a minimal working sample script that demonstrates the problem?
Here's what I'm seeing, with a diff like this (on my branch's tip):
http://paste.yt-project.org/show/2195/
using this script:
http://paste.yt-project.org/show/2196/
on the RD0006 dataset of the Enzo_64 yt-workshop collection, I get this output using two cores:
1.01953344682e+17 9.8838266396e+16 1.03009963462e+17 9.8838266396e+16
The first value from each core differs, as it should given the slightly different number of particles in each half, but the second value is identical on both cores, which I think is wrong. What do you think?
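[For reference, here is a minimal sketch of the kind of two-core test described above. The real script is in the paste link, so the dataset path, the split along the x-axis, and the exact derived-quantity call are all assumptions, not the actual contents of that paste.]

# yt-2.x era, Python 2. Run with something like:
#   mpirun -np 2 python test_script.py --parallel
from mpi4py import MPI
import numpy as np
from yt.mods import load

rank = MPI.COMM_WORLD.rank

pf = load("RD0006/RD0006")  # dataset path is an assumption
dle, dre = pf.domain_left_edge, pf.domain_right_edge
center = (dle + dre) / 2.0

# Each core builds a region covering one half of the domain, split along x.
if rank == 0:
    reg = pf.h.region(center, dle, np.array([center[0], dre[1], dre[2]]))
else:
    reg = pf.h.region(center, np.array([center[0], dle[1], dle[2]]), dre)

# First number: a raw, purely local sum over this core's half.
raw_sum = reg["ParticleMassMsun"].sum()

# Second number: the parallel-aware derived quantity, called on the
# per-core half-domain region (it may return a one-element list in the
# yt-2.x interface).
total = reg.quantities["TotalQuantity"]("ParticleMassMsun")

print raw_sum, total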
Oh, I see what you're doing. Originally you were actually trying to compute your OWN quantity, instead of using the yt builtins, so yes, this is wrong. What you want to do is call the quantity on the base, full-domain region. Right now the quantity machinery sub-decomposes each of the self._data_source objects and then communicates between the processors to combine the results, but neither processor ever sees the full set of components for its own data source.

I'd recommend one of the following (a sketch of the first two is below):

1) Call the total ParticleMassMsun quantity on the data source that covers all of the subdomains, pre-decomposition.
2) Do the raw sum and then manually sum across processors, like you were doing before.
3) Add a new processor group so the parallelism is confined to one processor (yes, really) instead of letting it be decomposed across many.

-Matt
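[For concreteness, a minimal sketch of options 1) and 2) above. The dataset path, the region construction, and the use of mpi4py for the manual reduction are assumptions for illustration, not code from Stephen's branch.]

# yt-2.x era, Python 2; run in parallel as before.
from mpi4py import MPI
import numpy as np
from yt.mods import load

comm = MPI.COMM_WORLD
pf = load("RD0006/RD0006")  # dataset path is an assumption
dle, dre = pf.domain_left_edge, pf.domain_right_edge
center = (dle + dre) / 2.0

# Option 1: call the derived quantity on the full-domain data source,
# pre-decomposition; yt sub-decomposes it across processors itself and
# every core ends up with the same, correct total.
dd = pf.h.all_data()
total_1 = dd.quantities["TotalQuantity"]("ParticleMassMsun")

# Option 2: keep the per-core half-domain regions, sum locally, and
# combine the partial sums across processors by hand.
if comm.rank == 0:
    reg = pf.h.region(center, dle, np.array([center[0], dre[1], dre[2]]))
else:
    reg = pf.h.region(center, np.array([center[0], dle[1], dle[2]]), dre)
local_sum = reg["ParticleMassMsun"].sum()
total_2 = comm.allreduce(local_sum, op=MPI.SUM)

[Option 3, wrapping the call in a per-processor group, is not sketched here.]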
Thanks!
--
Stephen Skory
s@skory.us
http://stephenskory.com/
510.621.3687 (google voice)