
Hi all, I have the following problem: when I compare the results of a merger tree created using fastBuildMerge.py for the most massive halo at z=0 with the output of running HOP manually, I get the following discrepancy:

                         #part    x        y        z
    fastBuildMerge       33809    0.492    0.495    0.485
    HOP (yt) manually    50931    0.5019   0.4981   0.4902
    old enzohop          43815    0.5019   0.4981   0.4902

Visual inspection of slices shows that the last two positions are correct, so I wonder why the merger tree gives wrong values even though it is calling the same HOP routine. I need the positions of the parent halos for profiling, and a difference of 0.01 in position corresponds to 1.28 Mpc, which is a lot for a galaxy cluster and would spoil the analysis. The difference in mass between the first two is strange as well. Did anybody encounter the same problem and/or know where it is coming from? Thanks! Jean-Claude
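(For scale, the conversion behind that 1.28 Mpc figure; the 128 Mpc box size is an assumption inferred from the "0.01 corresponds to 1.28 Mpc" statement above:)

```python
# Convert a position offset in code units (box fraction) to a physical
# length. box_size_mpc = 128.0 is an assumed value, inferred from the
# "0.01 corresponds to 1.28 Mpc" figure in the message above.
box_size_mpc = 128.0
offset_code_units = 0.01

offset_mpc = offset_code_units * box_size_mpc
print(offset_mpc)  # 1.28
```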

Jean-Claude,
The difference in mass between the first two is strange as well. Did anybody encounter the same problem and/or know where it is coming from?
I suspect I may be partly at fault. The default overdensity threshold in yt-HOP is 160.0, but it looks like I set it to run at 80.0 in the merger tree script. Can you try re-running with all the thresholds the same? There may be something else going on, but let's make sure that's not the problem first. Good luck!

_______________________________________________________
sskory@physics.ucsd.edu              o__  Stephen Skory
http://physics.ucsd.edu/~sskory/   _.>/ _Graduate Student
________________________________(_)_\(_)_______________

Hi Stephen, well, actually I already did that. For the single HOP run I used:

    sphere = pf.h.sphere([0.5, 0.5, 0.5], 1.0)
    hop_results = lagos.hop.HopList(sphere, 80.0)

which I copied from the merger code, so the thresholds were the same, unfortunately. -Jean-Claude

On Thursday, 06.08.2009 at 06:51 -0700, Stephen Skory wrote:
Jean-Claude,
The difference in mass between the first two is strange as well. Did anybody encounter the same problem and/or know where it is coming from?
I suspect I may be partly at fault. The default overdensity threshold in yt-HOP is 160.0, but it looks like I set it to run at 80.0 in the merger tree script. Can you try re-running with all the thresholds the same? There may be something else going on, but let's make sure that's not the problem first.
Good luck!
_______________________________________________
yt-users mailing list
yt-users@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org

Jean-Claude,
    sphere = pf.h.sphere([0.5, 0.5, 0.5], 1.0)
    hop_results = lagos.hop.HopList(sphere, 80.0)
which I copied from the merger code, so the thresholds were the same, unfortunately.
Darn it. Another simple question (and I'm guessing 'no' is the answer): are there stars in your simulation? If yes, did you do all the runs of HOP with them considered (or excluded)?

Are you comparing the same columns of the output of HOP? Are you sure you're not comparing the most-dense-particle position to the calculated center of mass? In the text file output of HOP, the first columns of positions are for the most-dense particle, while the merger tree should be using the center of mass (unless I've done something wrong!).

I'm thinking of what else could be going on...
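To illustrate the distinction, here is a yt-free sketch with made-up particle data (not from any real HOP output), showing how the most-dense-particle position and the mass-weighted center of mass of the same halo can legitimately differ:

```python
import numpy as np

# Hypothetical halo: four equal-mass particles, one clearly densest.
pos = np.array([[0.50, 0.50, 0.50],   # densest particle sits here
                [0.52, 0.50, 0.50],
                [0.52, 0.51, 0.50],
                [0.53, 0.50, 0.49]])
mass = np.array([1.0, 1.0, 1.0, 1.0])
density = np.array([9.0, 3.0, 2.0, 1.0])

# One column set reports the position of the densest particle...
most_dense = pos[np.argmax(density)]
# ...while the other reports the mass-weighted center of mass.
center_of_mass = (pos * mass[:, None]).sum(axis=0) / mass.sum()

print(most_dense)      # [0.5 0.5 0.5]
print(center_of_mass)  # [0.5175 0.5025 0.4975]
```

Comparing one code's most-dense-particle column against another code's center-of-mass column would produce exactly this kind of position offset.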

You guessed right, no stars involved. Indeed I used the positions for the densest particle; however, the center of mass does not differ much from it, so the positions are still inconsistent. Furthermore, there is a significant difference in the number of particles between the two codes for the same threshold.

Maybe I should mention that I run the code on enzo-1.0-like outputs, so I changed the file handling at the beginning of fastBuildMerge.py. This should not interfere with the algorithm, but I will check it.

On Thursday, 06.08.2009 at 08:06 -0700, Stephen Skory wrote:
Jean-Claude,
    sphere = pf.h.sphere([0.5, 0.5, 0.5], 1.0)
    hop_results = lagos.hop.HopList(sphere, 80.0)
which I copied from the merger code, so the thresholds were the same, unfortunately.
Darn it. Another simple question (and I'm guessing 'no' is the answer): are there stars in your simulation? If yes, did you do all the runs of HOP with them considered (or excluded)?
Are you comparing the same columns of the output of HOP? Are you sure you're not comparing the most-dense-particle position to the calculated center of mass? In the text file output of HOP, the first columns of positions are for the most-dense particle, while the merger tree should be using the center of mass (unless I've done something wrong!).
I'm thinking of what else could be going on...

Jean-Claude,
You guessed right, no stars involved. Indeed I used the positions for the densest particle; however, the center of mass does not differ much from it, so the positions are still inconsistent. Furthermore, there is a significant difference in the number of particles between the two codes for the same threshold.
Can you send me the code you used to drive yt, as in the python script you built with fastBuildMerge.py? I'll edit it and send it back to you and ask you to run it on your dataset.
Maybe I should mention that I run the code on enzo-1.0-like outputs, so I changed the file handling at the beginning of fastBuildMerge.py. This should not interfere with the algorithm, but I will check it.
This may be a problem for you, unrelated to the HOP problem. There is a bug in enzo 1.0 that affects the merger tree code: the merger tree relies on the particles having unique IDs, and enzo 1.0 generally does not keep particle IDs unique when it is run in parallel. This bug has been fixed in enzo 1.5. I hope this isn't terrible news for you!

Unfortunately, that is terrible news and explains the difference in the number of particles that are found. So we will now see what we can do about it. Thanks very much; now we know what the problem is. -Jean-Claude

On Thursday, 06.08.2009 at 08:49 -0700, Stephen Skory wrote:
Jean-Claude,
You guessed right, no stars involved. Indeed I used the positions for the densest particle; however, the center of mass does not differ much from it, so the positions are still inconsistent. Furthermore, there is a significant difference in the number of particles between the two codes for the same threshold.
Can you send me the code you used to drive yt, as in the python script you built with fastBuildMerge.py? I'll edit it and send it back to you and ask you to run it on your dataset.
Maybe I should mention that I run the code on enzo-1.0-like outputs, so I changed the file handling at the beginning of fastBuildMerge.py. This should not interfere with the algorithm, but I will check it.
This may be a problem for you, unrelated to the HOP problem. There is a bug in enzo 1.0 that affects the merger tree code: the merger tree relies on the particles having unique IDs, and enzo 1.0 generally does not keep particle IDs unique when it is run in parallel. This bug has been fixed in enzo 1.5. I hope this isn't terrible news for you!

Jean-Claude,
Unfortunately, that is terrible news and explains the difference in the number of particles that are found. So we will now see what we can do about it.
Before I ruin your day completely, there's a simple test to see if this is a problem. Try this:
    from yt.mods import *
    pf = EnzoStaticOutput('dataset')
    sp = pf.h.sphere([0.5]*3, 1.0)
    indices = sp["particle_index"]
    indices.size  # -> x
    uni_indices = na.unique(indices)
    uni_indices.size  # -> y
If x == y, things are OK, at least for this dataset. As I think back to when I discovered this bug, I can't remember if it affected both DM & stars, or just stars. I'd like you to try this so I don't give you wrong information.

If the test works out OK, can you send me the python script? Sorry for the confusion!
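The same uniqueness check can be sketched without yt; here 'indices' stands in for the sp["particle_index"] array above, with made-up values:

```python
import numpy as np

# Hypothetical particle IDs containing one duplicate, standing in for
# the sp["particle_index"] array read from the dataset.
indices = np.array([0, 1, 2, 3, 3, 4])

x = indices.size            # total number of particles
y = np.unique(indices).size # number of distinct particle IDs

print(x, y)    # 6 5
print(x == y)  # False -> duplicated IDs; the merger tree would misbehave
```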

Hi Stephen, I went through my script again and indeed found an error in the reading procedure. After correcting it, everything now seems to work fine and gives the correct center positions. So in principle one can use the merger code with enzo-1.0-type output. Thank you again for your effort; I should have checked my script more carefully... Cheers, Jean-Claude

On Thursday, 06.08.2009 at 09:18 -0700, Stephen Skory wrote:
Jean-Claude,
Unfortunately, that is terrible news and explains the difference in the number of particles that are found. So we will now see what we can do about it.
Before I ruin your day completely, there's a simple test to see if this is a problem. Try this:
    from yt.mods import *
    pf = EnzoStaticOutput('dataset')
    sp = pf.h.sphere([0.5]*3, 1.0)
    indices = sp["particle_index"]
    indices.size  # -> x
    uni_indices = na.unique(indices)
    uni_indices.size  # -> y
If x == y, things are OK, at least for this dataset.
As I think back to when I discovered this bug, I can't remember if it affected both DM & stars, or just stars. I'd like you to try this so I don't give you wrong information.
If the test works out OK, can you send me the python script?
Sorry for the confusion!
participants (2)
- Jean-Claude Waizmann
- Stephen Skory