It seems there was a problem with the mailing list, but I hope this email gets through.

I can confirm Stephen's simple solution of not using halo.total_mass() (a float) as the storage key, and instead using the id of the halo via the line
sto.result_id = halo.id
works. I now get all my haloes again!
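
For reference, a minimal sketch of the working loop is below, in Python 2 as yt 2.4 requires. The dataset path, halo list file name, and num_procs value are placeholders, and I'm assuming load, LoadHaloes, and parallel_objects all come in via yt.mods:

from yt.mods import *

pf = load("DD0273/DD0273")                  # placeholder dataset path
haloes = LoadHaloes(pf, "HopAnalysis.out")  # placeholder halo list name

my_storage = {}
num_procs = 4                               # placeholder job count
for sto, halo in parallel_objects(haloes, num_procs, storage = my_storage):
    # Key the result store by the integer halo id; using the float
    # halo.total_mass() as the result_id was losing haloes here.
    sto.result_id = halo.id
    sto.result = [halo.total_mass(), 0.0]

# After the loop, every task holds the combined dictionary.
for halo_id in sorted(my_storage):
    print halo_id, my_storage[halo_id]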

From
G.S.

On Fri, Mar 23, 2012 at 11:42 AM, Geoffrey So <gsiisg@gmail.com> wrote:
I just did a fresh install of dev-yt 2.4 with the latest install script and confirmed that the behavior is still the same.

changeset:   5387:7ff85b5c7dcc
branch:      yt
tag:         tip
parent:      5386:e08c15b9ef01
parent:      5385:a5af0cffb818
user:        Matthew Turk <matthewturk@gmail.com>
date:        Thu Mar 22 16:51:23 2012 -0400
summary:     Merging

From
G.S.


On Fri, Mar 23, 2012 at 11:15 AM, Geoffrey So <gsiisg@gmail.com> wrote:
I've just thought of something: I did a pull from dev-yt into my fork of yt sometime last week (to get the functionality of parallel_objects and my ellipsoid stuff), so maybe I broke something. I'll re-run the script with plain dev-yt and see if the error is still there. I recall I had to merge halo_objects.py manually, so maybe I made a mistake there. I hadn't suspected the merge to be the problem because parallel HOP ran just fine.

From
G.S.


On Fri, Mar 23, 2012 at 10:31 AM, Geoffrey So <gsiisg@gmail.com> wrote:
Hi,

Originally I was outputting something like 17 or 18 columns of attributes of the ellipsoids associated with the haloes, but I've stripped that down to the bare essentials, just outputting the mass of each halo and a "0" for the attribute, to narrow down the problem. So in this script I do nothing with the ellipsoids.


For each halo, the output should have the halo DM particle mass in the first column and a zero in the second column.

The output when using
for halo in haloes:

The output when using
for sto, halo in parallel_objects(haloes, num_procs, storage = my_storage):

The original halo list from parallelHOP:

So the number of haloes from the "for halo in haloes" method agrees with the original list, while the parallel_objects() method disagrees.
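
For concreteness, here is a sketch of the stripped-down serial loop described above; the output file name and format string are my guesses, not the actual script:

# Serial reference: one line per halo, mass then a literal 0.
outfile = open("halo_debug_serial.out", "w")
for halo in haloes:
    outfile.write("%e 0\n" % halo.total_mass())
outfile.close()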

From
G.S.


In the DD0273_z5.00_halo_list.out file I have 24 lines, the first two


On Fri, Mar 23, 2012 at 8:24 AM, Stephen Skory <s@skory.us> wrote:
Hi Geoffrey,


>> haloes = LoadHaloes(pf, HaloListname)
>>
>> for sto, halo in parallel_objects(haloes, num_procs, storage = my_storage):

Can you paste the whole script? Thanks.

> Stephen might be able to shed some light on this, but I think
> LoadHalos will pre-assign processors to the halo objects, whereas
> parallel_objects will operate independently of that, distributing
> halos first come, first served.

In fact, LoadHaloes should not work that way. Each task should have a
full copy of the halo data, but initially only the data in the
HopAnalysis.out file; the particles are loaded on demand.
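
A quick way to check that, as a sketch (run under mpirun with yt's --parallel flag; the file name is a placeholder):

# Right after LoadHaloes, every task should report the same halo count,
# since only the HopAnalysis.out metadata has been read at this point.
haloes = LoadHaloes(pf, "HopAnalysis.out")
n_haloes = sum(1 for halo in haloes)
print "this task sees", n_haloes, "haloes"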

I've been using something like what Geoffrey's trying to do for a
while with no issue. I'm hoping maybe there's something in Geoffrey's
script... but I've been wrong before.

--
Stephen Skory
s@skory.us
http://stephenskory.com/
510.621.3687 (google voice)