Short Version of this email:
Since it seems like I'm in the minority in trying to flush out some of the
remaining problems with the VR infrastructure before it gets merged into
the main head of the dev branch, I will not be an impediment anymore. At
the bottom of this email I've provided a list of things I think should be
addressed (or at least discussed) before merging, but as I said, I will no
longer block the merge if people want to do it.
A few years ago, a few of us decided that the volume rendering
infrastructure was too clunky and difficult to use efficiently to make good
volume renderings, so we detailed how it should be improved in a YTEP.
Sam Skillman put in a huge amount of work (with help from Matt, me, Andrew,
Suoqing, and more) overhauling the volume rendering infrastructure to match
the description in the YTEP. The new VR interface appears to work well in
many cases, but I believe there are still a lot of remaining issues to be
addressed before it's ready for the mainstream: bugfixes, docs, recipes, etc.
About 6 months ago, a pull request was made to merge all of the new volume
rendering infrastructure into the development branch, but at that point the
PR was enormous and difficult to review. While there were still a lot of
unresolved issues associated with the PR, it was decided to merge it into
the "experimental" bookmark on the dev branch, so as not to hold up other
development dependent on that infrastructure and to make it easier for
everyone to test.
While I was reviewing this pull request, I tried my best to read through
all of the code diffs, make sure there was appropriate documentation
(docstrings, narrative, comments, etc) for such a significant change, and
actually download and run the example code to see if it did what it was
supposed to do. This was a ton of work, but I think this level of review
needs to occur with major PRs to ensure things are working before
merging and exposing problematic code to the user base. If we put code out
there and don't properly explain how to use it (or if it doesn't work),
we're ultimately going to make a lot more work for ourselves trying to
respond to people's questions on the mailing list about how to use it. And
we'll potentially sour a lot of people to the code when they try to use
something but find it doesn't do what they want it to do.
Now there is increased pressure to just merge the VR refactor into the main
dev branch so as to streamline overall development, and it seems I am the
only one who has been resistant to this. The argument for the merge has
been: "once it is in the main branch, people will sort out the bugs." Once
something is in the main branch, it may very well get more people trying to
use it; however, the level of confusion/irritation will increase if there
is insufficient documentation, and the ability to change any kind of API is
severely limited, because doing so breaks people's scripts. The time for
reviewing and fixing code is before it gets merged into the main dev
branch.
But ultimately, I do not want to be seen as the bad guy anymore on this.
If people want to merge the code, then I don't want to hold it up. I made
a list of items when I reviewed the PR a few months back, and below are the
things I think remain to be done. But what really needs to happen is for
other people to do a full code review (including testing) and find things
themselves (maybe my list, maybe other things), and then everyone works to
fix them. If people want to address my list or ignore it, that's fine.
Really, it doesn't matter to me anymore.
List of remaining things I think should be addressed before merging the
experimental head with the main head on the dev branch:
- The cookbook recipe for a simple_vr should get rid of all of the
extraneous stuff and have just the most simple means of creating a VR, just
like the simple slice and simple proj recipes do (a minimal sketch follows
this list).
- clip_ratio could be renamed to something more descriptive
(stddev_mask?), and should default to something like 4.0, since that value
is used almost everywhere to produce useful images.
- When making a VR, it defaults to using the first field in the field
list if a field is not specified, which almost never gives the desired
result. Perhaps just default to ('gas', 'density') like the PW plots do?
This default is implicitly assumed in the examples (e.g. the
camera_movement.py cookbook recipe).
- The map_to_colormap function is not in the API docs.
- map_to_colormap has a scale keyword, which could be more descriptive
or have examples. What does it do? A value of 30 gives good results, 10
doesn't, and the default of 1.0 renders a black screen. How does the user
figure out what to use, even roughly? Change the default to something
useful?
- There should be a default filename for saving images, and it should be
something descriptive, like the default filenames PlotWindow generates.
- Docstrings for all outward-facing functions and classes, including a
description, parameters, returned quantities, and an example. Right now,
many examples are so trivial as to not be helpful at all (e.g. "source =
PointsSource(particle_positions)"). Examples would be better served as a
full VR with the appropriate method/class injected, probably 3-5 lines
each; they could then be run as docstring tests down the road too.
- off_axis_projection() has a resolution arg; perhaps make it a
resolution kwarg with a default (512? 256?), just as PW and VR have.
- off_axis_projection has an "item" kwarg, which should really be
"fields" and should be handled in the same way as PW does (recognizes lists
- Many VR classes have their docstrings on the __init__ method; it would
be better to put them just after the class declaration so that the API
docs render them properly.
- Rendering an off_axis_projection requires yt.write_image() instead of
sc.render() like the VRs, even though it uses the same infrastructure.
That is inconsistent; why? sc.render() *runs*, but even with a clip_ratio
it just gives a black screen because all of the values are NaNs.
- Transfer functions operate in log space for fields that are logged.
This should be done in the linear space of the fields; the current
behavior can be very confusing.
- TransferFunctionHelper should be split out into a different page in
the narrative docs, as the Jupyter notebook in the middle of the docs
breaks the page up and makes it confusing.
- Put the VR annotations (annotate_grids()) in the narrative docs to
show how to do them.
- rotate on the camera should specify what it is rotating --
specifically, it rotates the "position" of the camera, not the direction
it points.
- switch_view should be included in narrative docs
- Add examples for all the different lenses to the cookbook or narrative
docs.
- Make one parallelization header, with subheaders in the narrative docs
- Note the differences in the ways to write images in VR: (1) you specify
an fname in the yt.volume_render() function originally, (2) you get the
resulting im object from yt.volume_render() and then write it using
im.write_png(fname), or (3) you take the resulting sc object from
yt.volume_render() and then call sc.render(fname). I guess it is good to
have these three methods for getting the images to file, but it might be
worthwhile to explicitly state that there are three methods for writing
your VR to file (see the sketch after this list), so people don't get
confused about which is the main method (as I did, and had to figure it
out by looking through different examples, source, etc.).
- The camera movement cookbook needs to be improved to make images that
make sense.
- Link to subsections of the VR docs in the relevant cookbook recipes,
not just to the VR docs as a whole.
- Add narrative docs for getting the source, camera, transfer function,
etc.
- Possibly get rid of all getters on the scene other than the one for
the source.
- get_source is incorrect, since dicts are not sorted
- Left-handed versus right-handed images: there was significant
discussion regarding this in the VR refactor PR linked above. The
conclusion that a few of us reached (Suoqing, Michael, myself) was: the
VR system measures the normal vector from the camera, but in the
PlotWindow interface there is no "camera" per se, and the normal vector
is measured from the focus. So each is right-handed from its own
perspective, but the normal vector is effectively reversed between the
two systems. The fix would be to flip the direction of north_vector in
the VR system so it's right-handed from the perspective of the focus, but
left-handed from the perspective of the camera. Then similar inputs would
give the same results from PlotWindow and VR (including
off_axis_projection()).
- Using opaque sources (e.g. points, lines, and grids) together with
render sources (e.g. gas VR) tends to result in seeing just one or the
other out of the box. It takes a lot of work to balance the opacities of
the different sources, and how to do this is not documented. I think we
should provide better default opacities for these sources so that both
points and gas are visible in the same rendering out of the box, and also
document easy ways to change the relative opacity of different sources to
highlight one or the other (a hypothetical sketch follows this list).
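To make a couple of the items above concrete: here, in one place, is what
the minimal recipe plus the three image-writing paths could look like.
This is only a sketch against the experimental interface as I understand
it (yt.volume_render(), Scene.render(), write_png(); the dataset is just
the usual IsolatedGalaxy sample), not a tested recipe:

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

    # (1) One-shot: pass fname directly to volume_render().
    im, sc = yt.volume_render(ds, field=('gas', 'density'), fname='vr1.png')

    # (2) Write the returned image object yourself.
    im.write_png('vr2.png')

    # (3) Re-render from the returned scene; clip_ratio caps values some
    # number of standard deviations above the mean so the image is not
    # washed out (4.0 is what nearly every example uses).
    sc.render('vr3.png', clip_ratio=4.0)

And for the opacity-balancing item, a hypothetical sketch of the kind of
thing that should be documented, assuming a PointSource-style opaque
source that accepts RGBA colors (the alpha column being the knob to turn):

    import numpy as np
    from yt.visualization.volume_rendering.api import PointSource

    # 100 random points, made nearly transparent so the gas rendering
    # behind them stays visible.
    positions = ds.arr(np.random.random((100, 3)), 'code_length')
    colors = np.ones((100, 4))
    colors[:, 3] = 0.05  # alpha sets the relative opacity of the points
    sc.add_source(PointSource(positions, colors=colors))
    sc.render('vr_points.png', clip_ratio=4.0)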
NSF Postdoctoral Fellow
Department of Astronomy
California Institute of Technology
It's time we had another yt team meeting to discuss the general state of
the project. As per the organizational structure YTEP, subcomponent
leaders will be in attendance, but anyone else who would like to join in is
very welcome. I am currently looking at a time window of September 16 to
25 for the meeting. If you're interested in participating, please reply or
write to me and I will include you on a forthcoming Doodle poll. Please
let me know by the weekend if you'd like to be included. Once we have a
time chosen, I will announce it, and anyone else who can make it will
also be welcome.
New issue 1084: Tipsy files with more than 64^3 particles fail to read auxiliary files correctly
The title is pretty self-descriptive. The big problem is that _read_aux_fields isn't written to handle chunks and assumes it is reading ALL the particles of a given time at once.
New issue 1081: Make it easier to access projections and slices as fixed-resolution 2d arrays
At the moment it is easy to make, for example, a ProjectionPlot, but it is not as easy to access the image array being plotted as a fixed-resolution array. Of course it is possible, but this is a feature request to make the process a lot simpler (unless I've missed something).
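For reference, the closest existing route I know of goes through a
FixedResolutionBuffer; a rough sketch of the workaround this request would
simplify (the width and resolution values are arbitrary):

    import numpy as np
    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    prj = ds.proj(('gas', 'density'), 'z')

    # Pixelize the projection onto a fixed 800x800 grid spanning the domain.
    frb = prj.to_frb(width=(1.0, 'unitary'), resolution=800)
    image = np.array(frb['gas', 'density'])  # plain 2D array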
New issue 1079: Gadget velocities of gas cells not being read correctly? (or unit problem?)
A recent update did something to drastically alter the gas cell velocities for Gadget-format files. As an example, the test script here reads in the GadgetDiskGalaxy test simulation and creates a histogram of the z-velocity of all gas within 300 comoving kpc of the center of the galaxy. The histogram shows both cells, via ['gas', 'velocity_z'], and gas particles, via the z-component of ['PartType0', 'Velocities'] (and also dark matter and star particles). The cell velocities show a much narrower distribution than any of the particle types.
In addition, the script calculates the z-component of the center-of-mass velocity, comparing cell masses and velocities against gas particle masses and velocities. The two ways of computing it disagree by a factor of ~2.
Included are the script and the resulting histogram.
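A minimal sketch of the comparison the issue describes (the snapshot path
and the center are placeholders, not the reporter's actual script; field
names follow yt's Gadget frontend):

    import yt

    ds = yt.load('GadgetDiskGalaxy/snapshot_200.hdf5')
    sp = ds.sphere(ds.domain_center, (300, 'kpccm'))  # placeholder center

    # Cell-based vs particle-based z-velocities of the gas.
    vz_cells = sp['gas', 'velocity_z']
    vz_parts = sp['PartType0', 'Velocities'][:, 2]

    # Center-of-mass z-velocity computed both ways; the issue reports
    # a ~2x mismatch between these.
    m_cells = sp['gas', 'cell_mass']
    m_parts = sp['PartType0', 'Masses']
    print((m_cells * vz_cells).sum() / m_cells.sum())
    print((m_parts * vz_parts).sum() / m_parts.sum())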
I'm trying to read Gasoline data with yt, and I need to retrieve the
oxygen and iron mass fractions from the *.OxMassFrac and *.FeMassFrac
files for all the particles.
I try to read the oxygen mass fraction with this line of code
but then I get the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "...", line 253, in __getitem__
  File "...", line 708, in get_data
    particles, self, self._current_chunk)
  File "...", line 234, in _read_particle_fields
  File "/home/mtomas/yt-x86_64/src/yt-hg/yt/utilities/io_handler.py", line 187, in _read_particle_selection
    rv[field_f][my_ind:my_ind + vals.shape,...] = vals
ValueError: could not broadcast input array from shape (8649900) into shape
Has this been seen before?
I'm using the stable version of yt.