Here are the things we talked about at the hangout, and where we left them:
* There are two big things codewise for 3.0 -- the rebranding and
the unit refactor. We decided it would make the most sense to land
the unit refactor first, wait a couple of weeks, then land the
rebranding. Neither is done yet, so this is still in the future.
* Documentation is widely viewed as the biggest impediment to 3.0 adoption.
* Nathan is going to be sending around some info about a manuscript.
* We will aim for a yt 3.0 paper sometime this calendar year, but
there is support for individuals to write their own papers on their
own work that do not need to have lengthy author lists -- i.e., a
reference paper versus a development paper.
* Near-term for me: SPH, documentation, unit-refactoring.
We should probably do more of these hangouts. Thanks to everybody who came!
New issue 763: Covering grid does not get filled out for a tightly nested AMR dataset
While I was helping Alex Bogert out with a visualization of my galaxy dataset, we noticed some odd behavior in the covering grid. This is exposed in the HiResIsolatedGalaxy dataset on yt-project.org/data using the following script:
from yt.mods import *
pf = load('HiResIsolatedGalaxy/DD0044/DD0044')
cgrid = pf.h.covering_grid(5, np.array([0.53, 0.53, 0.53]), np.array([64] * 3))  # dims argument was garbled in the original; [64]*3 is a guess
print(cgrid['Density'])
On the current 3.0 tip, this prints:
[[[ 4.14182811e-25 1.75059046e-28 1.29468963e-28 ..., 1.29245039e-30
[ 1.13112951e-24 1.50869295e-28 1.00778753e-28 ..., 1.29245039e-30
[ 5.55295171e-25 1.26780190e-28 7.13453664e-29 ..., 1.29245039e-30
(cut off for clarity)
The zero values should not be there since this is an AMR dataset with valid data over the region covered by the covering grid.
One possibly weird property of this dataset is that it's quite tightly nested, particularly near the maximum refinement level, as you can clearly see in this image: http://i.imgur.com/7kSwXJk.png
No idea if the nesting is important - would love some input from someone who is more familiar with the covering grid code.
FWIW, this issue only appears to be reproducible for this one dataset; I see no similar issue for `IsolatedGalaxy`, another nested AMR sim.
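As a quick check on this (a sketch, assuming the script above has been run and that 'Density' is the field in question), counting the empty cells makes the problem concrete; the count should be zero for a region fully covered by AMR data:

import numpy as np
n_zero = (cgrid['Density'] == 0).sum()
print("%d of %d covering grid cells are zero" % (n_zero, cgrid['Density'].size))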
New issue 762: Clump finder particles issue
Caroline Van Borm:
When I try to run the clump finder on my Enzo dataset (using a script that mostly follows the find_clumps.py from the yt docs, but using a function to select gravitationally bound clumps), I get an error (see http://pastebin.com/bhmRs2R4). It seems that the clump finder is assuming there are particles in the simulation, even though there shouldn't be any.
Any help on this is much appreciated!
(Also, my yt version is b118390aa42c.)
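Two sketches that might help narrow this down (pf here stands for Caroline's loaded Enzo dataset, and the include_particles keyword is from memory, so treat both as assumptions rather than a confirmed fix): first, confirm that the hierarchy really reports zero particles; second, try telling the boundedness check to ignore the particle contribution.

import numpy as np
# Total particle count summed over all grids (yt 2.x-era API assumed).
print("total particles: %d" % np.sum(pf.h.grid_particle_count))

# Hypothetical variant of the clump validity function that skips particles
# when computing gravitational boundedness (keyword name assumed).
def _bound_no_particles(clump):
    return clump.quantities["IsBound"](truncate=True,
                                       include_thermal_energy=True,
                                       include_particles=False) > 1.0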
I'm going to set up a (very low-key) hangout on Monday at 2PM EST just
to have a brief catch-up on these topics:
* State of documentation
* My personal roadmap for next few weeks/months
* Pending YTEPs and PRs
* Other business
For the most part, I plan to use this time to update others on my own
progress on various things we've talked about here and elsewhere; I
know that some other projects are blocked on my own progress, and I
want to update people on that. If you don't think you can make it,
don't move heaven and earth to do so! But if you can, it'd be good to
see you there.
New issue 761: SlicePlot ".save()" stores image in plotfile directory, not current directory
When creating an image via "SlicePlot" of BoxLib data, the resulting image is stored in the plotfile directory, not the current directory. E.g.:
pf = load("plt23437")
SlicePlot(pf, 'x', "Density").save()
creates the image "plt23437/plt23437_Slice_x_Density.png", when it should output simply to the present directory (creating the image "plt23437_Slice_x_Density.png"). This bug is likely because this part of yt doesn't recognize that BoxLib plotfiles are directories.
New issue 758: VRs are transposed relative to projections
It appears that at some point the volume rendering interface underwent a transpose relative to the PlotWindow interface. When I make a simple VR with normal vector (0,0,1), that is, along the z vector, and compare it against a projection along the z vector, I get out two transposed images.
I know I have used these two different pieces of machinery together in the distant past without problems, so I guess we just got our wires crossed somewhere.
I'm happy to change it in the VR interface, but I'm afraid that the reason it was swapped was for some other piece of interface to be accurate, and so this may break that...
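For anyone who needs the two to line up in the meantime, a user-side sketch (the snapshot/write_bitmap usage is from memory and may differ between yt versions; cam stands for an existing camera with normal vector (0, 0, 1)) is to transpose the rendered buffer before writing it out:

import numpy as np
from yt.visualization.image_writer import write_bitmap

im = cam.snapshot()  # (Nx, Ny, 4) image buffer from the volume rendering
write_bitmap(np.transpose(im, (1, 0, 2)), "vr_matches_projection.png")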
Good news on the Windows/yt front. I'm happy to announce that as of
right now only one unit test is failing. The failure is subtle, but I
thought I would run it by you all to see what we should do about it.
The failure in question is a comparison test in
yt/utilities/tests/test_units.py, in "test_power":
assert u2.cgs_value == (pc_cgs**2 * mK_cgs**4)**2
Basically, the unit's cgs_value is exactly the combination of these
factors, so even though they are floats, one would expect the equality
to hold, and on our other platforms it does. However, on
Windows, the comparison is failing on the last digit, with a relative
difference of ~4e-16 between the two values. So, while I'm not
inclined to be very concerned about it, I'm curious to see what you
guys think about how it should be handled.
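One way this could be handled (a sketch, if we are comfortable with a tight relative tolerance rather than bitwise equality) is to switch the comparison to numpy's approximate assertion:

from numpy.testing import assert_allclose
# Tolerates the ~4e-16 relative difference seen on Windows while still
# catching real unit-conversion errors.
assert_allclose(u2.cgs_value, (pc_cgs**2 * mK_cgs**4)**2, rtol=1e-14)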
Happy New Year!
NASA/Goddard Space Flight Center