Problem with Rockstar in yt
Dear yt-users,

I'm trying to use the Rockstar halo finder within yt, and am encountering some odd problems. I'm using the main development tree (https://bitbucket.org/yt_analysis/yt) at changeset f936432ed45d, and attempting to use Rockstar to find halos on a small server running Ubuntu 12.04. When I call this script:

---- file test_rockstar.py ----
from yt.mods import *
from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder

pf = load("DD0057/data0057")
rh = RockstarHaloFinder(pf)
----

with the command line "mpirun -np 2 python ./test_rockstar.py --parallel", I get a segfault (as can be seen at http://paste.yt-project.org/show/3745/). However, if I use a single processor (with "mpirun -np 1 python ./test_rockstar.py --parallel"), I get a very different error:

Traceback (most recent call last):
  File "./test_rockstar.py", line 6, in <module>
    rh = RockstarHaloFinder(pf)
  File "/data/bwoshea/galparttest/yt-x86_64/src/yt-hg/yt/analysis_modules/halo_finding/rockstar/rockstar.py", line 230, in __init__
    self.pool, self.workgroup = self.runner.setup_pool()
  File "/data/bwoshea/galparttest/yt-x86_64/src/yt-hg/yt/analysis_modules/halo_finding/rockstar/rockstar.py", line 112, in setup_pool
    (self.num_writers, "writers") ]
  File "/data/bwoshea/galparttest/yt-x86_64/src/yt-hg/yt/utilities/parallel_tools/parallel_analysis_interface.py", line 335, in from_sizes
    pool.add_workgroup(size, name = name)
  File "/data/bwoshea/galparttest/yt-x86_64/src/yt-hg/yt/utilities/parallel_tools/parallel_analysis_interface.py", line 303, in add_workgroup
    group = self.comm.comm.Get_group().Incl(ranks)
AttributeError: 'NoneType' object has no attribute 'Get_group'

I'm puzzled by the error, since I can run other parallel yt scripts without any problems (a 4-processor script making projections works just fine). This machine doesn't have InfiniBand (as warned about at http://yt-project.org/doc/analysis_modules/running_halofinder.html#rockstar-...), and both FOF and HOP find several hundred halos in my dataset. I'd just use another halo finder, but I'm trying to do something that requires Rockstar to be run inline with Enzo, so I'm stuck with it...

Does anybody have any idea what might be going on?

Thanks!

Brian
Hi folks,
Just a quick follow-up: I've now run this on a different machine and verified that the reason I couldn't use Rockstar in parallel was due to an MPI problem. *HOWEVER*, I still get the same Rockstar error that I was reporting before when using one core (all with the same script as in my previous email):
http://paste.yt-project.org/show/3747/
So it does seem like there is a bug, or I am simply using Rockstar incorrectly...
--Brian
Hi Brian,
If I recall correctly, you need at least 3 processors to get Rockstar running properly, and it must always be run in parallel. I'm afraid I don't know much more beyond that.
Cheers,
DK
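For reference, given the command lines used elsewhere in this thread, that would mean launching the same test script with at least three MPI tasks, e.g.:
mpirun -np 3 python ./test_rockstar.py --parallel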
Hi Daegene,
Sorry, I think my last email was imprecise: after that first email, I am now running in parallel on 4 processors, using the command line:
mpirun -np 4 python ./test_rockstar.py --parallel
Rockstar gives a similar error both running on a single core (with mpirun) and in parallel. :-(
--Brian
Hi Brian,
I think what is going on here is that the Rockstar halo finder expects a TimeSeries object. If you change
pf = load("DD0057/data0057")
rh = RockstarHaloFinder(pf)
to
ts = TimeSeriesData([pf])
rh = RockstarHaloFinder(ts)
rh.run()
I think it should work. Also, just for reference, I think Daegene is correct that Rockstar requires at least 3 processors.
Best,
Sam
Err, just to be clear, keep the pf = load(...) line in there before the ts = TimeSeriesData([pf]) line.
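Putting those two pieces together, the full test script would then look something like this (a sketch, assuming a yt install where yt.mods exports both load and TimeSeriesData, as the code in this thread suggests):

---- file test_rockstar.py ----
from yt.mods import *
from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder

# Load the single output, then wrap it in a TimeSeriesData object;
# RockstarHaloFinder expects a time series rather than a bare pf.
pf = load("DD0057/data0057")
ts = TimeSeriesData([pf])

# Run the halo finder (launched under mpirun with at least 3 tasks,
# e.g. "mpirun -np 4 python ./test_rockstar.py --parallel").
rh = RockstarHaloFinder(ts)
rh.run()
----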
Hi Sam, all,
This fixed the problem - thank you!
--Brian
participants (3)
- Brian O'Shea
- Daegene Koh
- Sam Skillman