New issue 717: tracking rays on top of cut region
I am trying to use .hierarchy.ray() to extract data along a particular
light ray, but when I check the temperature field of that ray's data, I
find that the data points it extracted are not what I want: the temperature range goes beyond my limits. Here is part of my code:
> sp = pf.h.sphere(center, radius)
> warmhot = sp.cut_region(['grid["Temperature"] >= 1e5',
> 'grid["Temperature"] < 1e6'])
> p0 = np.array([center, center-250./pf['kpc'], center])
> p1 = np.array([center, center+250./pf['kpc'], center])
> ray = warmhot.hierarchy.ray(p0, p1)
> tempt = warmhot['Temperature']
> ray_tempt = ray['Temperature']
When I examine the temperature range for warmhot, it stays within my cut limits, but the range for ray_tempt does not.
It seems that .hierarchy.ray() extracts data directly from the full pf.h dataset instead of from the cut warmhot region. Could anyone help?
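If the ray really does read from the full dataset, one possible workaround (a sketch only, not tested against this dataset; the sample values below are made up) is to reapply the cut to the ray's own data with a NumPy boolean mask:

```python
import numpy as np

# Stand-in values for ray["Temperature"]; in the real script this array
# would come from the ray object, e.g. temp = ray["Temperature"].
ray_temperature = np.array([3.2e4, 2.5e5, 8.0e5, 4.1e6, 6.7e5])

# Reapply the same bounds used in the cut_region above.
mask = (ray_temperature >= 1e5) & (ray_temperature < 1e6)
warmhot_along_ray = ray_temperature[mask]

print(warmhot_along_ray)  # only the warm-hot cells survive the mask
```

Any other field carried by the ray can be filtered with the same mask so it stays aligned with the selected temperature values.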
New issue 713: Test storage key should be cleverer
Right now tests are indexed by the simulation output file used. This made sense when we did testing on output files only, but now that we include several other kinds of answer tests (plotting, SZpack) we should move to a different naming system that makes it easier to upload new answers.
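One possible direction (purely illustrative; these names are not part of the existing yt test machinery) is to build the storage key from the test name and the answer type rather than from the output file alone, so plotting or SZpack answers get distinct, versionable names:

```python
def answer_store_key(test_name, answer_type, dataset=None, version=1):
    """Build a storage key that encodes the kind of answer test,
    so new answer types (plotting, SZpack, ...) do not collide and
    new answers can be uploaded under a bumped version suffix.

    All names here are illustrative, not the actual yt test code.
    """
    parts = [test_name, answer_type]
    if dataset is not None:
        parts.append(dataset)
    parts.append("v%d" % version)
    return "_".join(parts)

print(answer_store_key("projection", "plot", dataset="IsolatedGalaxy"))
# projection_plot_IsolatedGalaxy_v1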
New issue 712: Use notebook and notebook-cell all over the docs
We should have dynamically evaluated code snippets in more places in the docs.
You can replace whole blocks of text with notebooks, so long as there aren't any sphinx cross-references (sphinx `:ref:` directives). If you'd like to keep sphinx formatting and functionality, you can replace inline python or bash `code-block` cells with `notebook-cell` (using the `%%bash` cell magic in the latter case). This will produce an embedded one-cell mini notebook, with the code in the docs source evaluated and all output captured by the notebook at build time.
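A sketch of the replacement described above (assuming the `notebook-cell` directive takes its code in the directive body, the same way `code-block` does; the snippet itself is illustrative):

```rst
.. A static snippet:

.. code-block:: python

   import numpy as np
   print(np.arange(4))

.. becomes a one-cell mini notebook, evaluated at build time:

.. notebook-cell::

   import numpy as np
   print(np.arange(4))
```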
Having both the input and output embedded in the docs makes the code snippets more meaningful and therefore easier to understand.
Doing so also increases our test coverage, since changes in the docs build will likely be due to regressions. Having comprehensive docs therefore also helps us maintain the user-facing code, some of which may not be used very often or is not well tested.
Next Monday and Tuesday we will be having a documentation sprint to try to
beef up the yt docs (and docstrings) before the final release of the 2.x
branch when development (and documentation) will switch over to the 3.0
branch. All developers are welcome to attend/participate for as much time
as they can spare.
I know documentation isn't always glamorous, but I think this will be
pretty fun. Furthermore, I think this will be very beneficial to the
community, in that it will mean fewer frustrated users, fewer questions on
IRC and the mailing list, and a better understanding for everyone of all the
features and functionality of the codebase!
We will be meeting up as a Google+ hangout over the course of those two
days, with periodic "check-in" meeting times to make sure everyone is on
the same page (see below for schedule). Yes, I know it is early for
Pacific Coasters on a Monday @ 8am, but you can join for a few minutes from
your bed with your camera turned off if need be. We all know that
people have meetings and need to eat and such, so you are not required to
be in the G+ hangout constantly. That said, people are encouraged to
remain in the G+ hangout throughout both days for fast turnover time if
they have questions/discussion or want to work on something new.
Documentation tasks will be identified so that attendees can volunteer for
them at the various check-in meetings. This way, individuals can work on
these tasks semi-autonomously. The various things I think we should aim to do are:
--Add documentation for features that are currently undocumented in the yt codebase.
--Add documentation for new features from yt 2.5 to present (in docstrings,
cookbooks, and narrative docs as applicable).
--Fulfill the tasks identified in the BitBucket Documentation Issues List.
--Remove outdated information from the docs.
--Consolidate places where the same task is explained in multiple locations, so as to make the docs less redundant.
--Fix any typos or mistakes in the content.
MONDAY
--8am PST/11am EST: Initial meetup in G+ to discuss the goals of the sprint
and to lay out the specific individual tasks that need to be accomplished.
I'll include a template for a docstrings example, a template for a
cookbook example, and an example of a good narrative docs section.
Developers will choose what task they want to work on. Meeting should
take 30-60 minutes.
--11am PST/2pm EST: Status check, and reshuffling of projects as needed.
--2pm PST/5pm EST: Status check and conclusion for the day. 15 minutes.
TUESDAY
Same schedule as Monday.
WHAT I NEED FROM YOU
Please reply to this email saying whether or not you're going to be
attending any or all of the sprint. That way, I can invite you personally to
the Google+ hangout. I may try to make this hangout
"on-air" so that it is recorded, and so if we go above the 10-person limit,
those unable to fit in the hangout can watch it streaming live.
Before Monday, please think for a few minutes about things that you may
have noticed in the past that were lacking from the documentation, but at
the time you found them you didn't have time to fix. If you identify
anything, please mark it down as a documentation task in the bitbucket
issue tracker so that we know to work on it next week.
We will be using this as our main means of tracking what gets accomplished
during the sprint. I encourage you to look at the docs as well to look for
ways they can be improved.
Thanks everyone, and I look forward to meeting with you next week!
University of Arizona