I have been working with Jia-Hung on vorticity in his GAMER
simulations. He has supplied me with a dataset where the minimum
vorticity (by prescription) should be at the center.
After a lot of back and forth with him and with Jeff Oishi, I
determined that what appears to be happening is that unless I set the
number of ghost zones to 2 (instead of 1), there is a systematic offset
in where the vorticity minimum lands.
Can anybody think of a reason why this might be?
I'm working on making this into a unit test to check its behavior in 3.0.
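For anyone who wants to reproduce this, here is a minimal sketch of the
check, assuming yt 3.0's find_min and the built-in vorticity_magnitude
field (the dataset name is hypothetical):

    import yt

    ds = yt.load("gamer_vorticity_test")  # hypothetical dataset name
    # by prescription, the vorticity minimum should sit at the center
    val, loc = ds.find_min(("gas", "vorticity_magnitude"))
    print(val, loc, ds.domain_center)

This comparison is what I plan to wrap into the 3.0 unit test.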
I'm playing around with the Maestro/BoxLib frontend. Maestro (and Castro)
plotfiles contain a file called job_info that stores a lot of metadata,
including the values of all runtime parameters and the git hashes
corresponding to which versions of the various codes were used to build the
executable. Currently we have all of the runtime parameters stored in the
parameters dictionary. I'd like to add the version control hashes to a
dictionary as well. I could put them in the same parameters dictionary,
but I am wondering if other codes use a standard place for this.
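As a sketch of what I have in mind for pulling the hashes out, something
like the following; the "<code name> git hash: <sha>" line format is an
assumption about how job_info is written, so adjust the pattern as
needed:

    import re

    def read_job_info_hashes(path):
        # collect "<code name> git hash: <sha>" lines into a dictionary
        hashes = {}
        with open(path) as f:
            for line in f:
                m = re.match(r"(.+?)\s+git hash:\s*(\S+)", line)
                if m:
                    hashes[m.group(1).strip()] = m.group(2)
        return hashes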
Ultimately I'd like to save some of this information (especially the code
hashes) in the output files that yt generates: i.e. as png metadata, simple
postscript comments for eps files, or pdf metadata for pdf files. This
will help complete the link toward reproducibility in that it will preserve
the chain of information from code to output to plot.
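For the png case, Pillow can already embed text chunks, so a first pass
might look like this (the key name and hash value are hypothetical):

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    img = Image.open("my_plot.png")  # a plot yt has already written
    info = PngInfo()
    info.add_text("code_git_hash", "deadbeef0123")  # hypothetical value
    img.save("my_plot_tagged.png", pnginfo=info)

eps comments and pdf metadata would need analogous treatment.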
I'm trying to define a particle union for my dataset that represents stars
formed during the simulation. The most straightforward way to accomplish
this is to use a mass cut.
Does anyone have examples of how to set up a particle union? Bonus if it
uses enzo data.
In return for any helpful advice anyone can give, I will add sorely needed
documentation for particle unions to the 3.0 docs.
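Here is as far as I have gotten on my own; since the selection is a mass
cut, a particle filter may actually be a better fit than a union. The
dataset path, mass threshold, and particle type names below are all
hypothetical, and I am assuming yt 3's particle_filter machinery:

    import yt
    from yt.data_objects.particle_filters import particle_filter

    ds = yt.load("DD0046/DD0046")  # hypothetical enzo output

    @particle_filter(requires=["particle_mass"], filtered_type="all")
    def formed_stars(pfilter, data):
        # hypothetical cut: formed stars are lighter than this threshold
        return (data[(pfilter.filtered_type, "particle_mass")]
                < data.ds.quan(1.0e5, "Msun"))

    ds.add_particle_filter("formed_stars")

    # For combining whole particle types under one name, the union API is:
    #   from yt.data_objects.particle_unions import ParticleUnion
    #   ds.add_particle_union(ParticleUnion("stars_and_dm",
    #                                       ["formed_stars", "DarkMatter"]))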
The yt data hub and affiliated services are going to go down for a
little while; I hope this will be less than a few days, but we need to
migrate off of the AWS services over to Rackspace. This will include
storage of answer testing (both yt and Enzo) as well as the hub. I am
leaving the blog up for the time being, but it will need to migrate as
well.
Again, I am aiming to have a short downtime... but it might be up to a week.
We just had a bug report from Aaron Smith at UT Austin. The symptom is
that the "load" comman was taking 30 seconds to complete on his FLASH
dataset, which should never happen for FLASH.
After asking him to profile the code, he produced the following profile:
It seems that the recent changes to the Tipsy frontend, which allow it
to autodetect binary outputs, mean that in some cases non-Tipsy data is
loaded off disk.
I'm not sure about the best way to handle this, which is why I'm writing to
the list rather than issuing a PR.
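One possible direction, offered as a sketch rather than a fix: have the
auto-detection sanity-check the Tipsy header before touching any
particle data. This assumes the standard Tipsy header layout and
big-endian packing:

    import struct

    def plausible_tipsy_header(filename):
        # standard tipsy header: time (double) followed by nbodies,
        # ndim, nsph, ndark, nstar (ints); 28 bytes before any padding
        with open(filename, "rb") as f:
            raw = f.read(28)
        if len(raw) < 28:
            return False
        time, nbodies, ndim, nsph, ndark, nstar = \
            struct.unpack(">diiiii", raw)
        return (1 <= ndim <= 3 and min(nsph, ndark, nstar) >= 0
                and nbodies == nsph + ndark + nstar)

Something like this would at least keep "load" from reading bulk data
off disk for non-Tipsy files.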
Recently I've been working with the HOP halo finder in yt 3.0. In
particular I've been looking at star particles from Enzo simulations in
halos of different sizes. I've been running into strange results with
particle fields that are stored in the halo hdf5 files vs particle fields
that have to be retrieved from the original simulation data. In particular,
if I create a mask for star particles from a field saved to disk
(creation_time prior to 3.0, or ParticleMassMsun now), then that mask
gives correct values for other fields that were also saved to disk
(particle positions or velocities), but not for fields that have to be
retrieved from the simulation (such as dynamical time). Similarly, if I
identify stars by creation_time in 3.0 (when it isn't saved in the hdf5
file) then I get the correct dynamical_times, but incorrect particle masses.
I think I've identified the source of this problem. When the
"particle_index" field is read from the halo hdf5 files, it is then sorted
into ascending order. In particular, in __getitem__ in the LoadedHalo class
there is the following (at roughly line 867 of halo_objects.py in the
3.0 branch):

    field_data = self._get_particle_data(self.id, self.fnames, self.size, key)
    if field_data is not None:
        if key == 'particle_index':
            field_data = field_data[field_data.argsort()]
These sorted particle indices are then used when retrieving fields from the
simulation data, so the fields end up being sorted in a different order
than the ones that are retrieved directly from the halo hdf5 files. As a
result, masks created from one set of fields don't work properly on the
other.
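To make the failure mode concrete, here is a toy version of the
mismatch (all values hypothetical):

    import numpy as np

    indices = np.array([5, 2, 9, 1])         # particle_index from disk
    masses = np.array([1.0, 2.0, 3.0, 4.0])  # on-disk field, same order

    # current behavior: only the indices get sorted, so they fall out
    # of step with every other field read from the hdf5 file
    sorted_only_indices = indices[indices.argsort()]

    # consistent alternative: compute the permutation once and apply it
    # to every on-disk field (or skip the sort entirely)
    order = indices.argsort()
    indices, masses = indices[order], masses[order]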
I think that I can fix this, but before I do I want to make sure I'm not
going to be breaking anything else in the process. Does anyone know why the
particle_index field was being sorted? If so, do you happen to know whether
it would make more sense to sort the other particle fields from disk or
leave particle_index unsorted? Thanks in advance for any help.
There've been three big pushes on adding species fields in a generic
way to yt recently.
Britton came up with an idea that I think works quite nicely. The
field info container is responsible for enumerating a list of species
that it finds, on startup. Then, when the field plugins are added,
the field plugin looks for all of the fields that it can find that are
related to each species enumerated, and the appropriate
fraction/density/number density fields get added.
Right now the field plugin has not been written yet, but that is my
next step. What the plugin will do is iterate over all the
species names, try to find the appropriate fields in the
pf.field_aliases, and then appropriately add things. What this means
is that if the frontend, in setup_fluid_fields, ensures that the
species names are all appended to the species_list, the plugin will
then create all of the appropriate fields. So if it finds "He_p1" in
the species_list, it will look to see if "He_p1_fraction" exists, then
look for _density, and _number_density, and whichever one it finds it
will use to create derived fields for the others.
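In pure-Python terms, the lookup boils down to something like this (all
names are hypothetical; the real version would live in the field plugin
and register actual derived fields):

    def plan_species_fields(available_fields, species_list):
        # for each species, find whichever of fraction/density/
        # number_density exists and note which fields to derive from it
        suffixes = ("fraction", "density", "number_density")
        plan = {}
        for species in species_list:  # e.g. "He_p1"
            present = [s for s in suffixes
                       if "%s_%s" % (species, s) in available_fields]
            if present:
                plan[species] = {"from": present[0],
                                 "derive": [s for s in suffixes
                                            if s not in present]}
        return plan

    # with only He_p1_density on disk, the plugin would derive the
    # fraction and number density fields from it:
    print(plan_species_fields({"He_p1_density"}, ["He_p1"]))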
Where this gets a bit messy is in the actual creation of all of the
aliases, which is why I think it's appropriate for it to be a plugin.
Once we have these aliases, we can add on smoothed versions of the
fields. By basing this on a recognized list of species types, we can
also reduce the number of fields looked for by the plugin system to
just those it knows it ought to make. And, since we have "species"
objects, we can create them on demand.
Gabe did some work with this for the OWLS datasets, where he created
all of the ionization states based on the density fields. What I'm
suggesting here is that with this plugin, he could create *just* the
ionization states in density or fraction form, and then the plugin
would create the alternates *and* all the smoothed fields.
This does open up the question of what to do about *neutral* species,
as distinct from *total nuclei*, whose natural representation is
somewhat degenerate. Britton opted for a %s_nuclei_%s naming scheme,
whereas Gabe opted to specify Symbol_%s, with explicit p0 suffixes for
the neutral fractions. Originally, I wanted _p0, and I think I still
do, but I recognize not everyone agrees.
The other item Gabe worked on, which I promised I'd write up into a
YTEP, was changing the particle deposition types. This will be a
forthcoming email, out of scope for this one.
Any feedback from anyone? I intend to consolidate all of the
detection shortly and make it into a plugin, but I'll need help,
especially from Gabe on the OWLS part of this.