Cactus multi-block data are topologically Cartesian, but each grid point's
coordinate is arbitrary.
- How would this best be handled? Should I create an UnstructuredIndex?
Would "moab" be a good example to follow?
- Is it possible to combine this with AMR?
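To make the question concrete: "topologically Cartesian, but each grid point's coordinate is arbitrary" means each block is a regular (i, j) index space whose physical coordinates are stored (or computed) per point. A minimal pure-Python sketch of one such block; the helper name and the polar-patch example are illustrative, not Cactus or yt API:

```python
import math

def polar_block_coords(ni, nj, r0, r1, th0, th1):
    """Build a block that is logically Cartesian in (i, j) index space
    but whose physical (x, y) coordinates follow a polar patch.
    Returns coords[i][j] = (x, y) for each grid point."""
    coords = []
    for i in range(ni):
        r = r0 + (r1 - r0) * i / (ni - 1)
        row = []
        for j in range(nj):
            th = th0 + (th1 - th0) * j / (nj - 1)
            row.append((r * math.cos(th), r * math.sin(th)))
        coords.append(row)
    return coords

# A 4x4 block covering radii [1, 2] and angles [0, pi/2].
block = polar_block_coords(4, 4, 1.0, 2.0, 0.0, math.pi / 2)
print(block[0][0])  # -> (1.0, 0.0): corner (0, 0) sits at r0 along theta = 0
```

Neighbouring blocks share index-space faces, so adjacency is Cartesian even though the point coordinates are not.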
-erik
--
Erik Schnetter <schnetter(a)gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/
What do the Parent and Children lists for AMRGridPatch do in yt? I've set
up a grid structure without defining these entries, and things seem to be
working fine. Would yt automatically disable certain regions of the parents
if there are overlapping children?
In Cactus's AMR structure, a refined grid does not have a single parent
grid -- it may overlap with multiple coarser grids. How do I tell yt about
this? Do I need to split the fine grids?
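If splitting is required, one common approach (an assumption here, not necessarily what yt mandates) is to cut each fine grid at the boundaries of the coarse grids it overlaps, so that every resulting piece lies inside exactly one parent. A 1D sketch with a hypothetical helper:

```python
def split_by_parents(fine, parents):
    """Split a fine-grid extent into pieces that each lie inside exactly
    one parent. Extents are (lo, hi) tuples in 1D for simplicity; real
    grids would intersect boxes in 3D."""
    pieces = []
    for plo, phi in parents:
        lo, hi = max(fine[0], plo), min(fine[1], phi)
        if lo < hi:  # keep only non-empty intersections
            pieces.append((lo, hi))
    return pieces

# A fine grid [0.4, 1.6) overlapping two coarse grids [0, 1) and [1, 2):
print(split_by_parents((0.4, 1.6), [(0.0, 1.0), (1.0, 2.0)]))
# -> [(0.4, 1.0), (1.0, 1.6)]
```

Each piece then has a unique Parent, at the cost of more (smaller) grid objects.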
-erik
--
Erik Schnetter <schnetter(a)gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/
I'll be backporting bugfixes to stable and doing a 3.2.2 release in the
next few days. If you want to have bugfixes included in this release,
please try and have them ready in the next couple of days.
Nathan
New issue 1143: Sphere callback for OffAxisProjectionPlot does not do coordinate conversions correctly
https://bitbucket.org/yt_analysis/yt/issues/1143/sphere-callback-for-offaxi…
Nathan Goldbaum:
The following test script:
```
#!python
import yt
ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
plot = yt.OffAxisProjectionPlot(ds, [0, 0, 1], 'density')
plot.annotate_sphere([0, 0, 0], (300, 'kpc'))
```
Produces the following image:

This is wrong; the circle should be centered on the origin.
I have implemented a front-end for cell-centred Cactus AMR data. However,
many Cactus simulations use vertex-centred AMR. That is, data is stored at
the vertices of a grid (not in the cells), and coarse vertices are at the
same locations as fine vertices.
How do I present such data to yt? Do I need to set grid coordinates
differently, or is there a "vertex centred" flag?
When vertex-centred data are displayed, I expect either a "control
volume" around each vertex to take that vertex's value, or e.g. linear
interpolation across each cell from its bounding vertices.
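The second of those two presentations (turning vertex-centred data into cell-centred values via linear interpolation) can be sketched in plain Python; the helper name is hypothetical and this is 1D only, not yt's actual machinery:

```python
def vertex_to_cell_centres(v):
    """Linearly interpolate vertex-centred values to cell centres:
    in 1D, each cell's value is the average of its two bounding
    vertices, so n vertices yield n - 1 cells."""
    return [0.5 * (v[i] + v[i + 1]) for i in range(len(v) - 1)]

vertices = [0.0, 1.0, 2.0, 3.0]          # data stored at 4 grid vertices
print(vertex_to_cell_centres(vertices))  # -> [0.5, 1.5, 2.5]
```

In 3D the same idea averages the 8 corner vertices of each cell; the "control volume" alternative would instead assign each vertex's value to the volume nearest that vertex.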
-erik
--
Erik Schnetter <schnetter(a)gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/
Hey yt community,
Over the last two years, there has been an ongoing effort led by Sam
Skillman to refactor the volume rendering (VR) code for making volumetric
images using radiative transfer models on simulation outputs. Because it's
such a large functional change to the codebase, all of these commits have
been kept in the `experimental` head on the dev branch of yt up until now.
A month ago, there was a code sprint at SLAC where Matt, Nathan, Suoqing,
Andrew, Kacper, Ji-hoon, Sam, and I worked hard to get the new VR code
ready for release to the larger community. While this code still needs
work before it can go into the next stable version of yt (3.3), we've now
merged the code into the main development head, and we invite users to try
out this functionality to see how well it works for their own datasets.
For a full description of the new volume rendering interface and how to use
it, please see the narrative docs, new cookbook recipes, and annotated
example. For more background on where we'd like this functionality to go
in the future, see the YTEP.
VR Docs: http://yt-project.org/docs/dev/visualizing/volume_rendering.html
Cookbook (search for Volume Rendering):
http://yt-project.org/docs/dev/cookbook/index.html
Annotated Example of VR usage:
http://yt-project.org/docs/dev/visualizing/volume_rendering_tutorial.html
VR YTEP: http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0010.html
To get access to this new VR functionality, you have to be on the
development (i.e. "yt") branch and using the most recent changeset (i.e.
"tip"). For information on how to switch branches, see:
http://yt-project.org/docs/dev/installing.html#switching-versions-of-yt-yt-…
If you wish to continue to use the old VR interface, it will remain
available in the yt/visualization/volume_rendering/old_camera.py file.
Thanks to all of the hard work by the developers who helped get this out!
Cameron
--
Cameron Hummels
NSF Postdoctoral Fellow
Department of Astronomy
California Institute of Technology
http://chummels.org
Hi all,
I'd like to start the ball rolling on instituting a code of conduct for the
yt community. This will be added as a YTEP, linked from the "community"
portion of the webpage, and added to the yt repository.
Having a code of conduct is important because it makes the community norms we
abide by very concrete. It also affirms our commitment to embracing
diversity, openness, respect, and collegiality in our interactions. I think
officially adopting a code of conduct also lowers the barrier to entry for
new contributors who might be scared off based on preconceived notions
about open source projects (e.g. the Linux kernel).
I think a good place to start would be to use the Python Software
Foundation's code of conduct and just replace "Python" with "The yt
Project". I really like the PSF code of conduct and feel that it does a
good job of describing how we conduct business already.
I'm curious whether this is agreeable to everyone. Are there other CoC
documents that we should look at? Are there additional points we should add
to or elide from the PSF CoC to make it more fitting for yt?
Thanks for your input!
-Nathan
Hey guys,
I continue to work on the 3D thing, and I have a few questions. It would be
*super helpful* if you could answer even some of them; if you only have a
hint, I would like to hear about it too.
That's what I have now: ART example
<https://sketchfab.com/models/d267102434274bdbb5f75f5b9ef27b44>, so with
some help I might succeed.
Thanks,
Tomer
0. What is the main reason that _extract_isocontours_from_grid doesn't work
on ART? Is it just not implemented? Is there another big problem beneath it?
Is it related to the KDTree? How come ENZO works and ART doesn't?
1. What is the ghost zones idea? Does it work on ENZO? Does it work on ART
OcTree data?
2. Can ghost_zone_interpolate work on OcTree? On OctreeSubsetBlockSlice?
3. In get_vertex_centered_data, yt gets 6x6x6 data blocks from ENZO. Is it
true that one of the differences between ENZO and ART is that the adaptive
mesh of ART is divided into 2x2x2 blocks while ENZO's are 6x6x6?
The code I look at is in
"construction_data_containers.py" YTSurfaceBase.get_data(..) function
line 1031: for block, mask in self.data_source.blocks:
4. What is the main idea of the classes
AMRGridPatch(YTSelectionContainer), OctreeSubsetBlockSlice(object), and
OctreeSubset(YTSelectionContainer)? What is the difference between them?
Isn't OctreeSubsetBlockSlice supposed to inherit from OctreeSubset?
When calling self.data_source.blocks, for ENZO I get AMRGridPatch and for
ART I get OctreeSubsetBlockSlice, but AMRGridPatch inherits from
YTSelectionContainer just like OctreeSubset, unlike OctreeSubsetBlockSlice,
which inherits from object...
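Regarding question 1, the usual ghost-zone idea is that each block's data is padded with a layer of cells copied (or interpolated) from neighbouring blocks, so that stencil operations such as vertex-centred interpolation remain valid at block edges. A 1D sketch of the concept, with hypothetical names; this is not yt's actual implementation:

```python
def with_ghost_zones(block, left_neighbour, right_neighbour, n_ghost=1):
    """Pad a block's cell data with n_ghost cells copied from each
    neighbouring block, so a stencil that reaches one cell past the
    block boundary still has valid data to read."""
    return left_neighbour[-n_ghost:] + block + right_neighbour[:n_ghost]

interior = [1.0, 2.0, 3.0, 4.0]
padded = with_ghost_zones(interior, [0.0, 0.5], [5.0, 6.0])
print(padded)  # -> [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
```

This also hints at question 3: a block of 4x4x4 interior cells padded with one ghost layer on every side becomes a 6x6x6 block, which may be where the 6x6x6 blocks seen from ENZO come from (an assumption worth verifying against the ENZO frontend).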