Hi all,
I just finished the GAMER frontend for yt, and below I summarize some
issues and things that might be confusing in the current skeleton frontend.
I would appreciate it if you could help validate these issues and/or clarify
some points in the section "Things to be clarified". Hopefully this will make
frontend implementation easier in the future. I'll collect all comments and
issue a PR for the revised skeleton. It will be complementary to the recent
PR #2130 by Jonah Miller, which provides an extremely helpful example
frontend.
*Things to be clarified*
============================
1. Difference between *FieldInfoContainer.known_other_fields* and
*GridIndex._detect_output_fields*:
=> known_other_fields contains fields that *might* be in an output, and
lists their units and aliases. _detect_output_fields finds the fields that
are actually defined in the currently loaded dataset (a minimal example of
known_other_fields is sketched below).
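For reference, here is a rough sketch of a known_other_fields declaration;
the class name, on-disk field names, units, and aliases are all made up for
illustration:
    from yt.fields.field_info_container import FieldInfoContainer

    # Hypothetical frontend; the on-disk names and units are illustrative only.
    class MyCodeFieldInfo(FieldInfoContainer):
        known_other_fields = (
            # (on-disk name, (units, [aliases], display_name))
            ("Dens", ("code_mass/code_length**3", ["density"], None)),
            ("Engy", ("code_mass/(code_length*code_time**2)",
                      ["total_energy_density"], None)),
        )
        known_particle_fields = ()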
2. *Dataset.unique_identifier*:
=> ??
3. Set *AMRGridPatch.Parent = None* for grids without parents, and
*AMRGridPatch.Children = []* for grids without children (see the sketch
below).
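One way this might look in the hierarchy class; self._parent_ids is a
hypothetical per-grid array of parent indices (-1 for root grids) that the
frontend would fill in while parsing its own index:
    from yt.geometry.grid_geometry_handler import GridIndex

    class MyCodeHierarchy(GridIndex):
        def _populate_grid_objects(self):
            # Start every grid with no parent and an empty child list.
            for grid in self.grids:
                grid.Parent = None
                grid.Children = []
            # self._parent_ids is hypothetical: one parent index per grid,
            # with -1 marking root grids.
            for gid, pid in enumerate(self._parent_ids):
                if pid >= 0:
                    self.grids[gid].Parent = self.grids[pid]
                    self.grids[pid].Children.append(self.grids[gid])
            for grid in self.grids:
                grid._prepare_grid()
                grid._setup_dx()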
4. The field array returned by *BaseIOHandler._read_fluid_selection* should
be contiguous along the z instead of the x direction. Therefore, for a C-like
array with dimensions [x][y][z] and for a Fortran-like array with dimensions
(z,y,x), a matrix transpose is required (e.g., using np_array.transpose() or
np_array.swapaxes(0,2)); see the snippet below.
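As a quick illustration with NumPy (the array shape here is arbitrary):
    import numpy as np

    # Hypothetical on-disk buffer with a C-like [x][y][z] layout.
    data = np.arange(4 * 3 * 2, dtype=np.float64).reshape(4, 3, 2)

    # Swap the x and z axes as described above; ascontiguousarray turns the
    # transposed view into a real contiguous array before handing it to yt.
    data_t = np.ascontiguousarray(data.swapaxes(0, 2))
    print(data.shape, data_t.shape)  # (4, 3, 2) (2, 3, 4)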
5. *start_index*, *stop_index*, and *ActiveDimensions* in the AMRGridPatch
objects
=> ??
(1) It looks like stop_index is not used anymore
(2) ActiveDimensions will be set by AMRGridPatch._prepare_grid, so
perhaps the frontend does not need to set it explicitly anymore
(3) It seems that start_index is also calculated automatically
6. *chunk*, *selector*, and *_read_chunk_data* in io.py.
=> ??
Also, the following comment about caching is confusing:
    def _read_chunk_data(self, chunk, fields):
        # This reads the data from a single chunk, and is only used for caching.
7. Float type:
(1) *GridIndex.float_type* is the float type for the left and right
simulation edges and currently must be float64
(2) *BaseIOHandler._read_fluid_selection* should currently always return
float64, even if the on-disk data are stored in single precision (see the
snippet below)
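For example, a single-precision block read off disk would be promoted before
being returned (the shape here is arbitrary):
    import numpy as np

    # Stand-in for a single-precision block read from disk.
    raw = np.zeros((8, 8, 8), dtype=np.float32)

    # Promote to float64 before returning it from _read_fluid_selection.
    data = raw.astype("float64")
    print(data.dtype)  # float64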
8. Difference between *add_output_field* and *add_field* used in
FieldInfoContainer.setup_fluid_fields:
=> ??
9. Dataset._parse_parameter_file: what does the following comment mean?
"Note that these are all assumed to be in code units; domain_left_edge and
domain_right_edge will be updated to be in code units at a later time"
=> Perhaps it means that domain_left_edge and domain_right_edge will be
converted to YTArrays automatically at a later time?
*Missing procedures*
============================
1. Add the new frontend in *frontends/api.py*
2. Set *GridIndex.max_level*
3. Set *Dataset.refine_by*
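Item 1 is just an entry for the new frontend in yt/frontends/api.py. Items 2
and 3 might look roughly like this (the class names are hypothetical and the
values are only illustrative):
    from yt.data_objects.static_output import Dataset
    from yt.geometry.grid_geometry_handler import GridIndex

    # Hypothetical class names; only the two attributes above are shown.
    class MyCodeHierarchy(GridIndex):
        def _parse_index(self):
            # ... fill grid_left_edge, grid_right_edge, grid_levels, etc. ...
            self.max_level = int(self.grid_levels.max())

    class MyCodeDataset(Dataset):
        def _parse_parameter_file(self):
            # ... read the simulation parameters from disk ...
            self.refine_by = 2  # refinement factor between adjacent levels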
*Bugs in the current skeleton frontend*
============================
1. data_structures.py: *self.dataset* is not yet set in GridIndex.__init__
when calling self.index_filename = self.dataset.parameter_filename (one
possible fix is sketched below)
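One possible fix is to assign self.dataset before it is used; roughly (the
names follow the existing skeleton frontend, and this is only one way to
resolve it):
    import os
    from yt.geometry.grid_geometry_handler import GridIndex

    class SkeletonHierarchy(GridIndex):
        def __init__(self, ds, dataset_type="skeleton"):
            self.dataset_type = dataset_type
            self.dataset = ds  # set this first so the next line works
            self.index_filename = self.dataset.parameter_filename
            self.directory = os.path.dirname(self.index_filename)
            GridIndex.__init__(self, ds, dataset_type)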
2. data_structures.py: replace self.Parent=[] by *self.Parent = None*
3. fields.py: the *field_list* argument is missing in
FieldInfoContainer.__init__. This issue has been fixed in Jonah's PR:
    def __init__(self, ds, field_list):
        super(SkeletonFieldInfo, self).__init__(ds, field_list)
Sincerely,
Hsi-Yu (Justin)
New issue 1212: SPH and Octree AMR Datasets Failing in volume rendering, OAProj, Interactive Data Viz
https://bitbucket.org/yt_analysis/yt/issues/1212/sph-and-octree-amr-dataset…
Cameron Hummels:
This is similar to Issues #788, #986, #1008, and #1183, but I think this description generalizes and distills all of these problems.
It appears that all particle-based (or Octree-based) datasets are segfaulting when one attempts to do anything involving the KDTree with them. The following three scripts all fail with the same result:
```
#!python
import yt
ds = yt.load("gizmo_cosmology_plus/snap_N128L16_151.hdf5")
L = ds.arr([1,0,0], 'unitary')
yt.OffAxisProjectionPlot(ds, L, 'density').save()
```
```
#!python
import yt
ds = yt.load("gizmo_cosmology_plus/snap_N128L16_151.hdf5")
im, sc = yt.volume_render(ds, 'density', fname='rendering.png')
```
```
#!python
import yt
ds = yt.load("gizmo_cosmology_plus/snap_N128L16_151.hdf5")
yt.interactive_render(ds)
```
These all fail with some variation on this output:
```
#!python
Failed to split grids.
Failed to split grids.
Segmentation fault
```
The above scripts each use the publicly available dataset `gizmo_cosmology_plus`, but one gets similar results by using the same simple commands on any of the SPH datasets at http://yt-project.org/data/, e.g. `TipsyGalaxy`, `snapshot_033`, `GadgetDiskGalaxy`.
Notably, however, when you run the same script with Octree-based AMR outputs, you get similar results. Example datasets that do this are Ramses (e.g. `output_00080`), and ART (e.g. `D9p_500`).
These scripts were run on an OS X installation with the tip of yt-dev (843a342ee510).
Hi yt-dev,
Nathan Goldbaum and I were discussing the possibility of handling
vertex-centered data natively in yt. We think that roughly this is quite
do-able and that most of the pieces are already in place. Nathan
suggested I summarize our discussion and email it to all of you to ask
for your opinions.
Internally, all that's required is using the generated ghost cells to
convert between cell- and vertex-centered data. (For non-periodic
boundaries, linear extrapolation, which would need to be implemented,
should be good enough.)
On the userspace side, we need a concept for this type of data so that
it can be extracted and manipulated. One option would be to make
centering an attribute of datasets. It could be immutable or mutable,
depending on design choices. In both cases, we would create a method to
go between centerings:
ds.to_centering(centering_type)
In the former case, this method would return a new dataset. In the
latter case, it would reset the dataset.
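As a rough illustration of the immutable option (nothing here exists in yt
yet; the dataset name and the "vertex" centering label are placeholders):
    import yt

    # Hypothetical usage of the proposed API; to_centering does not exist yet.
    ds = yt.load("my_cell_centered_dataset")
    ds_vertex = ds.to_centering("vertex")  # returns a new, vertex-centered dataset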
In principle, centering is a metadata property. The input data can be
interpolated when it's explored and not before. So for this reason, it
makes some sense to leave the centering as a dataset property rather
than as, say, a field property.
I think that's about it. Nathan can correct me if I forgot something.
Thanks!
Best,
Jonah Miller
Hi yt-dev,
I'm trying to write a frontend for a patch-based AMR code. (The file
format is more general and experimental.) I'm running into a strange
problem where slice plots come out fine but projection plots come out
looking very strange. I've attached a PDF explaining my situation and the
problem I'm encountering.
If you have any suggestions, I'd appreciate all the help you can offer.
Thanks very much!
Best,
Jonah Miller
P.S. I have an open pull request (WIP) for my frontend, which you can
find here:
https://bitbucket.org/yt_analysis/yt/pull-requests/2121/wip-simulationio-fr…
It contains information on the frontend, the file format, and of course,
my code. I have prepared some test cases here:
https://bitbucket.org/Yurlungur/simulationio-yt-tests/overview
And the frontend relies on this code:
https://github.com/eschnett/SimulationIO
Hi all,
I'd appreciate it if people could try running my updated install script on
a variety of platforms. With my changes, the install script sets up a conda
environment by default, but the option to create a bootstrapped from-source
Python environment is still available. See PR 2009 for details:
https://bitbucket.org/yt_analysis/yt/pull-requests/2009
You can either grab the raw install_script.sh file from my fork:
$ wget
https://bitbucket.org/ngoldbaum/yt/raw/install-script-updates/doc/install_s…
Or pull from my fork and run the "test_install_script.py" that I've added.
The latter will take a while, because it runs the install script six times
with a variety of option permutations.
Thanks for your help!
-Nathan