Dear yt and rockstar users,
I recently installed the newest version of rockstar-galaxies as I had been
using a much older version. Although the output reports a particle mass of
1.31418e+05 Msun/h, I don't see any halos smaller than 1e+08 Msun/h. I was
expecting the minimum halo masses to be an order of magnitude lower.
I see the config includes MIN_HALO_OUTPUT_SIZE = 25, set as a default. I
tried changing this parameter to 10 and restarting from the beginning, but
since I run rockstar within yt with RockstarHaloFinder (
yt.extensions.astro_analysis.halo_finding.rockstar.api), my config file
was overwritten by the defaults. Is this even the right approach? If so,
how do I restart rockstar (more detailed than just rh.run(restart=True),
please) with customized parameters within yt's RockstarHaloFinder?
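For reference, this is roughly how I am invoking it (the snapshot path and
the reader/writer counts here are just placeholders, not my actual setup):

import yt
from yt.extensions.astro_analysis.halo_finding.rockstar.api import RockstarHaloFinder

yt.enable_parallelism()  # run under MPI, e.g. mpirun -np 3 python this_script.py

ds = yt.load("snapshot_200.hdf5")  # placeholder for my actual snapshot
rh = RockstarHaloFinder(ds, num_readers=1, num_writers=1, outbase="rockstar_halos")
rh.run()

# After editing rockstar.cfg by hand (MIN_HALO_OUTPUT_SIZE = 10), I tried
# rh.run(restart=True), but the config gets overwritten by the defaults.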
Thanks in advance,
--
Carla Bernhardt
PhD Student
Universität Heidelberg
ZAH Institut für Theoretische Astrophysik
Dear all,
I have read a Gadget snapshot as follows:
ds = yt.load('../GadgetDiskGalaxy/snapshot_200.hdf5')
ad = ds.all_data()
partID = ad[('PartType0', 'ParticleIDs')]
I then defined a sphere around a Type0 particle using the following lines of code:
# px, py, pz are the coordinate components of the Type0 particles
j = 9000
sp = ds.sphere([px[j], py[j], pz[j]], (20.0, 'kpc'))
When I issue the following command:
print(sp["particle_index"])
I get:
[ 841624. 890649. 841457. 890473. 865599. 865772. 6258099. 5752659.
5752660. 5777940. 6229883. 5803543.] dimensionless
This shows that there are 12 particles inside the sphere.
I checked and found that the following IDs exist in the 'partID' array:
841624. 890649. 841457. 890473. 865599. 865772.
but these IDs (see below) do NOT exist in the 'partID' array:
6258099. 5752659. 5752660. 5777940. 6229883. 5803543
So the IDs of these 6 particles inside the sphere do not exist in the 'partID' array.
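For concreteness, the check I did can be written along these lines (a plain
numpy membership test, reusing the variables from the snippets above):

import numpy as np

sphere_ids = sp["particle_index"].astype(int)          # IDs selected by the sphere
in_type0 = np.isin(sphere_ids, partID.astype(int))     # which of them appear in partID
print(sphere_ids[in_type0])    # the 6 IDs that are found in partID
print(sphere_ids[~in_type0])   # the 6 IDs that are not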
How is this possible, since 'partID' (as defined above) must contain the IDs
of all Type0 particles?
I would be grateful if someone could help me with this issue.
Best regards,
Hassan
Hi,
I am trying to read a RAMSES snapshot that is a few TB in size. Please
let me know what methods I can use to read it with yt. I have good
computational resources available, but I can see from here (
https://yt-project.org/doc/analyzing/parallel_computation.html) that yt
can't perform the reading operation in parallel. I tried using the bbox
argument to get a smaller region, but that didn't help. I would really
appreciate any help in this regard.
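For reference, this is roughly how I tried the bbox approach (the output
path and the region bounds below are placeholders):

import yt

# sub-volume bounds, given here in code units (box fractions); placeholder values
bbox = [[0.45, 0.45, 0.45], [0.55, 0.55, 0.55]]
ds = yt.load("output_00080/info_00080.txt", bbox=bbox)  # placeholder RAMSES output
ad = ds.all_data()
print(ad["gas", "density"].shape)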
Thanks,
Ankit Singh
Dear all,
I have a simulation of an isolated galaxy with dark matter, bulge stars, disk stars and gas particles.
I would like to probe how dust evolves over time by post-processing this simulation.
I am new to this subject and would be very grateful if someone could point me to some
references, post-processing libraries, or other tools that could be used to trace dust
evolution in simulations.
Many thanks in advance.
Best regards,
Hassan
Hi yt-users,
I have a question about spheres, because I am having a problem that I don't
really know how to solve. Basically, I have clusters of stars, and I want
to know how many stars are within 6 pc of each star. So, what I have done
is take a sphere at each star position with a radius of 6 pc and then
perform
sp.quantities.total_quantity(["particle_mass"])
Now I wanted to double-check this number by brute force, so I calculated
the distances between all the particles and then summed the masses of all
the particles that were less than 6 pc away.
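For concreteness, the two approaches look roughly like this (the dataset
path and the choice of star are placeholders, not my actual script):

import numpy as np
import yt

ds = yt.load("DD0046/DD0046")  # placeholder Enzo output
ad = ds.all_data()
pos = ad["all", "particle_position"].to("pc")
mass = ad["all", "particle_mass"].to("Msun")

center = pos[0]  # one star position

# Method 1: sphere + derived quantity
sp = ds.sphere(center, (6.0, "pc"))
m_sphere = sp.quantities.total_quantity(["particle_mass"])

# Method 2: brute force over the particle positions
r = np.sqrt(((pos - center) ** 2).sum(axis=1))
m_brute = mass[r < ds.quan(6.0, "pc")].sum()

print(m_sphere, m_brute)  # should agree, up to unit conversion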
I am not getting the same values; they are close, but not the same (they
mostly seem to scatter up or down by a factor of 2).
So... to the question! Because I am measuring particles in a grid code
(Enzo), is there any sort of shifting to the grid positions instead of
using the particle positions here? Because the stars are pretty tightly
packed, I could imagine this causing the differences I am seeing.
It may well be that the problem is just something silly I am doing, but I
wanted to check this possibility as well.
Thanks!
Stephanie
--
Dr. Stephanie Tonnesen
Associate Research Scientist
CCA, Flatiron Institute
New York, NY
stonnes(a)gmail.com
Hi yt-users!
It's been a while since we've had scheduled triage meetings during the
week. We use triage meetings to do development and maintenance work on the
project. If you'd like to help contribute to making yt *better* and maybe
even more useful for you, consider attending! All skill levels in
development are welcome and encouraged to attend. If you're a bit shy,
don't worry, you can also attend and just listen!
I've created two polls. The first is in the morning in the central US
timezone, to hopefully catch times when project members located in Europe
can join, and the second is in the afternoon/early evening in the central
US timezone, so that project members on the West Coast and in Asia can
hopefully join. Vote in
whatever poll applies to you (or both if you find all times work). We will
choose times to accommodate the most people, and I personally would love to
see all of you attend!!!
I've put in times for next week, but know that these polls represent your
*general availability* each week for the next few months.
Morning Poll:
http://whenisgood.net/tmdfp2i
Afternoon Poll:
http://whenisgood.net/i2c8n3b
- Madicken
Dear all,
For my master's research, I installed yt, and I want to load my research data, which I have
converted to Gadget2 format, with yt.
The function I want to use is the clump finder, and I want to create a similar figure by
performing the same processing as in the Clump Finding recipe below.
https://yt-project.org/doc/cookbook/constructing_data_objects.html#cookbook…
First of all, I tried testing with the Gadget sample data from the data index, but neither the
default snapshot format nor the HDF5 format works.
Is it possible to do Clump Finding in Gadget2 format?
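For reference, this is roughly what I am trying to do, following the cookbook recipe above
(the snapshot path is a placeholder, and whether the ('gas', 'density') contouring step is
valid for SPH particle data is exactly what I am unsure about):

import yt
from yt.data_objects.level_sets.api import Clump, find_clumps

ds = yt.load("snapshot_200.hdf5")  # placeholder Gadget2/HDF5 snapshot
ad = ds.all_data()

# contour on the smoothed gas density field, as in the cookbook recipe
master_clump = Clump(ad, ("gas", "density"))
c_min = ad["gas", "density"].min()
c_max = ad["gas", "density"].max()
find_clumps(master_clump, c_min, c_max, 2.0)

print(master_clump.leaves)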