Hi folks, I am passing on the following message about an upcoming
workshop. This workshop will feature a track about yt, including
tutorials both basic and advanced, and I'm pretty excited about the
breadth of topics that will be covered.
We are pleased to announce that our LSSTC workshop, "Data Visualization
and Exploration in the LSST Era," will be held at the National Center
for Supercomputing Applications (NCSA) at the University of Illinois
at Urbana-Champaign from June 19 to June 21, 2018. The 3-day workshop
is open to all interested attendees; however, because of the nature of
the workshop format, preference will be given to early-career scientists.
Thanks to support from the LSST Enabling Science Group and NCSA, we are
glad to waive the registration fee and to provide some travel support.
We welcome proposed contributions for short talks and one-hour
hands-on tutorials on the topics listed on the website.
Registration is now open, and the deadline is June 1st, 2018.
For more information please visit the workshop website at
We hope to see you there!
On behalf of the SOC:
Matias Carrasco Kind, University of Illinois at Urbana-Champaign
Donna Cox, University of Illinois at Urbana-Champaign
Leanne Guy, LSST
Gilbert Holder, University of Illinois at Urbana-Champaign
Xin Liu, University of Illinois at Urbana-Champaign
Felipe Menanteau, University of Illinois at Urbana-Champaign
William O'Mullane, LSST
Laura Trouille, Adler Planetarium
Matt Turk, University of Illinois at Urbana-Champaign
Currently the tests are failing on master because I merged two pull
requests yesterday that turned out to be mutually incompatible. My proposed
fix is in PR 1741:
Unfortunately the fix isn't trivial (unless I also back out PR 1693, which
I'd prefer not to), so I'd really appreciate at least one or two other sets
of eyes on the PR. I'd especially like to hear from people who have poked
at the internals of yt's field and field detection system.
I had a request today to make yt.units its own package. Right now if
someone wants to use yt.units in their own package, they need to depend on
*all* of yt. That's quite a lot of code to depend on if all you want is an
ndarray subclass that knows about units.
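To give a flavor of what "an ndarray subclass that knows about units" means, here is a minimal sketch. Everything in it (the class name, the bare unit string) is made up for illustration; yt's YTArray and unyt's unyt_array do far more (unit algebra, conversions, ufunc handling, and so on).

```python
import numpy as np

class UnitArray(np.ndarray):
    """Toy ndarray subclass that carries a unit string.

    Illustration only: real unit arrays validate and convert units;
    this one just tags the data with a string.
    """

    def __new__(cls, input_array, units=""):
        # View-cast the input onto our subclass and attach the unit.
        obj = np.asarray(input_array).view(cls)
        obj.units = units
        return obj

    def __array_finalize__(self, obj):
        # Called whenever views and slices are created;
        # propagate the unit attribute to the new object.
        self.units = getattr(obj, "units", "")

a = UnitArray([1.0, 2.0, 3.0], units="g/cm**3")
b = a[:2]  # slicing preserves the unit attribute
```

The point of the sketch is that the entire mechanism is plain NumPy subclassing, which is why a units package does not need to carry the rest of yt along with it.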
From the beginning, I've tried to make yt.units depend on the rest of
yt as little as possible. For this reason, extracting yt.units into its
own package is relatively simple as far as code modifications go.
This afternoon I've done most of that work and have created a new package
called "unyt" (pronounced the same as "unit"). For now the code lives in a
repo owned by my GitHub account:
I'd like to create a new repository in the yt-project GitHub organization
named unyt as a place for this code to live:
For now I'm *not* working towards making yt depend on unyt - effectively
I've forked yt.units into a new package. I've attempted to keep the
existing git history for the new package though, so you may find that you
have commits in this new package that originated in yt.
This *does* effectively double the maintenance burden for yt.units, since
now I need to make sure that yt.units and unyt don't diverge too much.
However, the pace of changes to yt.units is pretty slow these days so I'm
not overly worried about creating a bunch of work for myself.
Eventually we could consider making yt depend on unyt and making yt.units a
compatibility shim; however, I don't think we want to take that step yet.
So far I've only made changes necessary to get unyt isolated from the rest
of yt. I've also renamed a few things. For example, YTArray is now
unyt_array and YTQuantity is now unyt_quantity.
I'm curious whether anyone has a problem with creating a repo on the
yt-project org for unyt to live in. If so, please let me know so we can
hash it out.
Please also let me know if you have any other questions or concerns about
the course of action outlined above.
I'm trying to close out this bug:
After spending a fair amount of time on this, all I've been able to
determine is that the values of the server address and port seem to go
missing for the server. These two values are passed
from yt/analysis_modules/halo_finding/rockstar/rockstar.py to
rockstar_interface.pyx in the same directory. They are then assigned to
two global variables internal to Rockstar. Rockstar instances are either
readers, writers, or servers. Both the readers and writers manage to hold
on to the correct values for the server address and port, but when I check
these values for the server itself, they are just empty strings.
Somehow, this all works fine in Python 2, i.e., all Rockstar instances
maintain the correct values. I am totally out of ideas for how to fix
this. Does anyone have any thoughts?
[cc'ing Nathan directly in case the mail gets stuck in Mailman]
I started working on the proposal for the project idea *Interpolating
particle data onto grids*. After familiarizing myself with the background
information you pointed me to, I have a few questions:
1. YTEP-32 and the paper by Dan Price mention a "scatter" approach for
interpolation, but I could not find any material on the "gather" approach.
Do you have anything handy in this regard? Or should I take the scatter
approach and proceed from there?
2. The project idea lists one of the deliverables as the ability to
interpolate data onto a uniform-resolution mesh. In YTEP-32 you have
plotted slice and projection plots using an octree and directly using
particle data. Is my understanding correct? If we are already able to plot
the SPH data with this particle-centric approach, then what is expected to
be delivered for this project idea?
3. The optional deliverables list the ability to interpolate particle data
onto an octree mesh. Is this not the current approach (yt version 3.*)?
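For what it's worth, the scatter/gather distinction can be sketched in a few lines of NumPy. Everything below (the triangular kernel, the particle data, the grid) is made up for illustration and is not yt's implementation:

```python
import numpy as np

def kernel(r, h):
    # Toy triangular smoothing kernel with compact support of radius h,
    # normalized to integrate to 1 in 1D. (Real SPH codes use e.g. a
    # cubic spline; this stands in for any symmetric kernel.)
    w = np.maximum(0.0, 1.0 - np.abs(r) / h)
    return w / h

particles = np.array([0.2, 0.5, 0.8])  # particle positions
masses = np.array([1.0, 2.0, 1.0])     # particle masses
h = 0.3                                # smoothing length
grid = np.linspace(0.0, 1.0, 11)       # uniform mesh points

# Scatter: loop over particles, depositing each particle onto all
# grid points that fall inside *its* smoothing kernel.
scatter = np.zeros_like(grid)
for x, m in zip(particles, masses):
    scatter += m * kernel(grid - x, h)

# Gather: loop over grid points, summing contributions from every
# particle whose kernel overlaps that point.
gather = np.array([np.sum(masses * kernel(x - particles, h))
                   for x in grid])
```

With a single global smoothing length, as here, the two loops give identical results; they differ once each particle has its own h, because scatter evaluates the kernel with the particle's smoothing length while gather uses one associated with the evaluation point.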
If any of my understanding is incorrect, I would love to clarify these
points as soon as possible so that I can work on the proposal. Thank you
for your time!
Kacper is currently fighting some fires with the Nebula cluster at NCSA.
Until he's done, Jenkins will be down. Please do not merge any PRs that
have not passed the tests on Jenkins until it comes back up.
Thank you for your patience,
Yi-Hao and I have been arguing in a pull request since this afternoon. I
think we're having a tough time coming to an agreement about how to move
forward, so I thought I'd bring the discussion to the mailing list. For
reference, this came up in PR 1710:
I'm going to try to summarize the issue, my opinion, and Yi-Hao's opinion.
Yi-Hao, please let me know if you feel like I'm mischaracterizing your
position.
Our disagreement boils down to this test script:
import yt
import numpy as np

arr = np.arange(8).reshape(4, 2, 1)
data = dict(density=arr)
ds = yt.load_uniform_grid(data, arr.shape)
slc = ds.slice('z', 0.5)
slc_frb = slc.to_frb(1, (4, 2))
dens_image = slc_frb['density']
print(dens_image)
This script currently prints:
[[ 0. 2. 4. 6.]
[ 1. 3. 5. 7.]] g/cm**3
I think the fact that the resolution argument of the to_frb call was
(4, 2) means I should get an image with shape (4, 2). But right now yt
gives me an image with shape (2, 4). My pull request makes it so you get
an image back with shape (4, 2).
Yi-Hao correctly points out that the current behavior of yt gives a
pixelization that happens to exactly match the discretization of the data
loaded into the yt dataset, and he wants to keep that property.
Unfortunately, with my pull request the same script would print:
[[ 1. 5.]
[ 1. 5.]
[ 2. 6.]
[ 2. 6.]] g/cm**3
So now the image's shape is correct, but the pixelization is no longer
"natural," because this corresponds to 2 pixels along the x direction and
4 along the y direction.
I *can* get a "natural" pixelization if I tell yt to flip what it calls the
"x" and "y" axes:
ds.coordinates.x_axis = 1
ds.coordinates.y_axis = 0
If I add the above two lines to the script before calling to_frb, I get
the following output:
[[ 0. 1.]
[ 2. 3.]
[ 4. 5.]
[ 6. 7.]] g/cm**3
This is again a "natural" pixelization because we have 4 pixels along y
and 2 along x. However, I don't think that's particularly useful, since
most people will want to make z-projections and slices with x plotted
horizontally and y vertically.
Unfortunately there's just a basic issue here with how to interpret the
shape of an image geometrically on a plot.
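To make the ambiguity concrete with plain NumPy (no yt involved), consider how an array's shape maps onto a plot:

```python
import numpy as np

# A NumPy array of shape (4, 2) has 4 rows and 2 columns. When such an
# array is displayed as an image, rows run vertically, so axis 0 of the
# array maps to the *vertical* direction of the plot.
image = np.arange(8).reshape(4, 2)

# Reading a resolution argument (4, 2) as (nx, ny), i.e. 4 pixels
# horizontally, therefore requires a transpose before display:
image_xy = image.T  # shape (2, 4): 2 rows (y) by 4 columns (x)
```

The same buffer is "correct" under either convention; the disagreement is only about whether the first element of the resolution tuple counts array rows or horizontal pixels.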
I have a feeling like Yi-Hao and I are a bit too close to this to resolve
the issue either way. I'm hoping at least one person can weigh in with an
opinion so we can find a way forward here.