There are two major changes coming soon for yt-3.0 as we march our way to
an official release. These are the unitrefactor and the rebranding. The
unitrefactor adds symbolically expressed, convertible units to all fields
and scalars in yt. The rebranding is a rethinking of some of yt's
conceptual entities (such as thinking of a "dataset" instead of a
"parameter file", an "indexer" instead of a "hierarchy", etc.) and attempt
to de-astro the infrastructure as we start to think about working with
other sciences. The unitrefactor also contains some rebranding efforts in
the form of field renaming (e.g., "Density" becoming "density"), so these
changes are somewhat linked.
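As a rough illustration of what "symbolically expressed, convertible units" buys us, here is a toy sketch in plain Python. This is emphatically not yt's implementation (the class name and conversion table here are hypothetical); it just shows the kind of interface the unitrefactor aims at.

```python
# Toy sketch of a unit-carrying scalar; NOT yt's actual implementation.
# The class name and conversion table are purely illustrative.
CM_PER = {"cm": 1.0, "m": 100.0, "km": 1.0e5}  # length units -> cm

class UnitfulScalar:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def in_units(self, new_unit):
        # Convert through a common base unit (cm here).
        factor = CM_PER[self.unit] / CM_PER[new_unit]
        return UnitfulScalar(self.value * factor, new_unit)

    def __repr__(self):
        return "%g %s" % (self.value, self.unit)

length = UnitfulScalar(2.5, "km")
print(length.in_units("m"))  # 2500 m
```

The point is that every field and scalar carries its unit with it, so conversions become explicit method calls rather than hand-applied factors.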
What we need to figure out is the process by which these changes are merged
into the yt-3.0 branch of the main repo (yt_analysis). In my opinion, the
primary issues are the following:
1. Development is cumbersome because it is taking place within Matt's fork,
meaning that all contributors have to fork his fork and issue PRs to that.
This is annoying because one has to maintain two forks and because most
people aren't getting notified of PRs issued to Matt's fork.
2. Experience has shown that the only way to identify all the bugs is by
actually attempting to use the code to do Real Stuff. What this means is
we need all the frontends represented and people putting the various
functionality and analysis modules to use. I think for most people, having
to pull changes in from an external repo and perform various mercurial
magic just to test changes is a bridge too far. We need to lower the
barrier to entry.
3. There is still a good amount of documentation, testing, and polishing to do
before this can be called stable. Even though yt-3.0 is still officially
Under Development, a number of people are using it to do actual things and
so it is unreasonable to just land this on them without full documentation
and with such a high likelihood that it will break things.
*I propose that the unitrefactor and rebranding work be pulled into the
main repository in an "experimental" bookmark.* I think this will a)
streamline development and make it more visible to everyone, b) lower the
barrier to trying it out for people so we can actually get everything
tested and working, and c) not disrupt the workflow of the current users of
yt-3.0. I also think this is the quickest way of satisfying everyone in
terms of getting all of the necessary documentation written as it makes the
development significantly more open and accessible.
For more info on what needs to be done on both of these fronts and for
yt-3.0 in general, see the trello boards: https://trello.com/yt_analysis
Can we get a +/-1 on this?
New issue 802: save_object bug in the Clump object
Juan Camilo Ibañez Mejia:
After a clump object is created with the functions:
> master_clump = Clump(sphere, None, field, function=function)
> find_clumps(master_clump, c_min, c_max, step)
It is not possible to save the data object with the
> pf.h.save_object(master_clump, 'My_clumps')
call, nor with the
> master_clump.save_object('%s_My_clumps' % pf, '%s_clumps_file.cpkl' % pf)
because the clump object has no save_object attribute.
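As a possible stopgap until save_object works for clumps, the standard pickle module can serialize simple containers directly. This is an untested sketch, not a confirmed fix: whether it works for a full Clump depends on what the clump holds (references to open dataset handles will not pickle).

```python
import pickle

# Hypothetical workaround sketch: pickle the object directly instead of
# going through pf.h.save_object. May fail if the clump references
# unpicklable state (e.g., open HDF5 file handles).
def save_clump(clump, filename):
    with open(filename, "wb") as f:
        pickle.dump(clump, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_clump(filename):
    with open(filename, "rb") as f:
        return pickle.load(f)
```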
New issue 801: covering_grid fails if dims is a list
The following snippet is taken directly from the [documentation](http://yt-project.org/docs/dev/examining/low_level_inspectio…):
from yt.mods import *
pf = load('Enzo_64/DD0043/data0043')
all_data_level_0 = pf.h.covering_grid(level=0, left_edge=[0,0.0,0.0],
dims=[64, 64, 64])
Traceback (most recent call last):
File "issue_801.py", line 4, in <module>
dims=[64, 64, 64])
File "yt/data_objects/construction_data_containers.py", line 411, in __init__
rdx[np.where(dims - 2 * num_ghost_zones <= 1)] = 1 # issue 602
TypeError: unsupported operand type(s) for -: 'list' and 'int'
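The failure is plain list-vs-array arithmetic: `dims - 2 * num_ghost_zones` is only defined when `dims` is a NumPy array. Passing an array (or casting with `np.asarray` inside `__init__`) sidesteps it; a minimal demonstration:

```python
import numpy as np

dims = [64, 64, 64]
num_ghost_zones = 0

# A Python list does not support elementwise subtraction:
try:
    dims - 2 * num_ghost_zones
except TypeError as e:
    print("list fails:", e)

# Casting to an array makes the expression in
# construction_data_containers.py well-defined:
dims = np.asarray(dims)
print(dims - 2 * num_ghost_zones)  # [64 64 64]
```

The one-line fix in yt would presumably be an `np.asarray(dims)` call before the subtraction, so lists from the docs keep working.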
Moving our discussion about AMR hardware volume rendering to yt-dev:
The variables I need are the corners of the bounding box (which I call mi,
ma, basically left and right edges), the number of cells in a grid, the
grid data itself, the children of a grid, and the parent of a grid.
Accessing this data works.
I notice children aren't referenced. Is that because everything is defined
just in terms of who the parent is, and where the tile exists within the
parent? (Which if that's the case, since a top-down traversal is necessary
in CUDA, the structure would have to be reversed).
Another thing, I notice that 'dims' and g.ActiveDimensions do not match up
a majority of the time. If they don't match up, how do you take a point in
space and determine what grid element it maps to? (Trivial, but if these
two don't match, I'm not sure who I should trust when doing the
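On the dims vs. ActiveDimensions question: for the uniform bricks that come out of the tiler, the point-to-cell mapping should use the brick's own left edge, extent, and dims rather than the parent grid's ActiveDimensions, which is why the two need not agree. A generic sketch of that arithmetic (the function and variable names are hypothetical, not yt internals):

```python
import numpy as np

def point_to_cell(point, left_edge, right_edge, dims):
    """Map a point in space to the integer cell index of a uniform brick.

    Uses the brick's own extent and dims -- not the parent grid's
    ActiveDimensions -- so indexing is self-consistent per brick.
    """
    point = np.asarray(point, dtype=float)
    left_edge = np.asarray(left_edge, dtype=float)
    right_edge = np.asarray(right_edge, dtype=float)
    dims = np.asarray(dims)
    dx = (right_edge - left_edge) / dims
    idx = np.floor((point - left_edge) / dx).astype(int)
    # Clamp points sitting exactly on the right edge into the last cell.
    return np.minimum(idx, dims - 1)

print(point_to_cell([0.55, 0.5, 0.5], [0, 0, 0], [1, 1, 1], [8, 8, 8]))
```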
On 3/10/14, 10:59 AM, Matthew Turk wrote:
> Hi Alex,
> We can have these discussions on yt-dev, too:
> Other people have developed this stuff. You're right, the patches are
> (by design) *not* kD-trees. To get kD-trees, you have to use the
> .tiles interface, like so:
> pf = load(...)
> data_source = pf.h.some_object_type(...)
> for (g, node, (sl, dims, gi)) in data_source.tiles.slice_traverse():
> here g is the grid object, node is the node of the kD-tree (for things
> like neighbor searching and the like), sl is the slice into the
> grid object that defines the node of the kD-tree, dims is the
> dimensions of the kD-tree brick, and gi is the left-edge in integers
> of the grid object. You can also do traverse, which just iterates and
> returns the partitioned grid data and which takes a viewpoint.
> On Mon, Mar 10, 2014 at 1:45 PM, Alex Bogert <bogart.alex(a)gmail.com>
>> Hi Matt,
>> I remember you saying yt can give us AMR in a kd-tree. Was this something
>> like .get_grids or .get_tiles ?
>> On Mar 10, 2014 10:21 AM, "John Holdener" <jrholden(a)ucsc.edu> wrote:
>>> I'll be in lab shortly near 11. There is a serious complication that I
>>> need to talk to Matt about on the phone (when he gets time). I've been
>>> playing with patches, which are not kd-trees (there are many elements
>>> with no parent or children). This format would take forever, searching
>>> every element for samples.
>>> Once yt gives me a kd-tree, namely whatever class gives it to me, it
>>> should become just CUDA coding.
>>> Everything will need to be global memory, due to texture count
>>> anyways. I suspect the first images will look awful (until some
>>> interpolation happens). Until we get trees sorted out, AMR is at a
>>> Sent from my iPhone
>>> On Mar 10, 2014, at 10:04 AM, Alex Bogert <bogart.alex(a)gmail.com> wrote:
>>> Any update on AMR?
>>> On Mar 6, 2014 10:18 PM, "John Holdener" <jrholden(a)ucsc.edu> wrote:
>>>> I could buy that. However the number of typos typing Theia is going to
>>>> drive me nuts!
>>>> On 3/6/14, 3:03 PM, Alex Bogert wrote:
>>>>> Both Joel and I agree we need a good name for the software. I found
>>>>> "Theia" goddess of tracing and the origin of all light. The greeks
>>>>> she casted light from their eyes to see the world. Super literal
>>>>> encapsulation. Im open for suggestions! No more acronyms or anything
>>>>> to do
>>>>> with what languages we use. I think only programmers appreciate these
>>>>> of names :)
>>>> John R. Holdener
>>>> Research Assistant
>>>> UCSC HiPACC
New issue 800: expected 'int64_t' but got 'int'
Trying to execute the (pretty basic) script
from yt.mods import *
pf = load("DD0020/sb_L2x2_0020")
my_sphere=pf.h.sphere([0.53,0.53,0.53], (1, "pc"))
plot=ProfilePlot(my_sphere, "Density", ["CellMassMsun"], weight_field=None)
File "slice.py", line 7, in <module>
plot=ProfilePlot(my_sphere, "Density", ["CellMassMsun"], weight_field=None)
File "/home/daniel/Desktop/Uni/Enzo/yt-i686/src/yt-hg/yt/visualization/profile_plotter.py", line 210, in __init__
File "/home/daniel/Desktop/Uni/Enzo/yt-i686/src/yt-hg/yt/data_objects/profiles.py", line 999, in create_profile
File "/home/daniel/Desktop/Uni/Enzo/yt-i686/src/yt-hg/yt/data_objects/profiles.py", line 763, in add_fields
self._bin_chunk(chunk, fields, temp_storage)
File "/home/daniel/Desktop/Uni/Enzo/yt-i686/src/yt-hg/yt/data_objects/profiles.py", line 841, in _bin_chunk
File "misc_utilities.pyx", line 28, in yt.utilities.lib.misc_utilities.new_bin_profile1d (yt/utilities/lib/misc_utilities.c:1950)
ValueError: Buffer dtype mismatch, expected 'int64_t' but got 'int'
I've used install_script.sh for version 3.0 (earlier attempts with 2.7 produced the same errors), which is why changing variable types in misc_utilities.pyx doesn't help.
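For what it's worth, the paths suggest a 32-bit platform (yt-i686), where NumPy's default integer is int32 while the Cython routine declares int64_t buffers; that would explain the dtype mismatch regardless of edits to misc_utilities.pyx. A small demonstration of the mismatch pattern and the explicit cast that avoids it (whether the profile code should do this cast internally is a separate question):

```python
import numpy as np

# On 32-bit Python, integer arrays often default to int32:
bins = np.array([0, 1, 2, 1], dtype=np.int32)

# A compiled routine expecting an 'int64_t' buffer will reject that
# array; an explicit cast on the caller side satisfies it:
bins64 = bins.astype(np.int64)
print(bins64.dtype)  # int64
```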
After our successful sprint today on rebranding, we've implemented
most or all of the changes in YTEP-0017, which primarily amounts to
moving fields and field detection up to static outputs, along with all
the data objects and changing names from .h to .index and whatnot. We
have not yet implemented YTEP-0019, which is making "import yt" do
what we use "yt.mods" for now, but it's on its way.
The remaining items for rebranding are listed in this card:
None are blockers for acceptance, I think.
The remaining blockers for merging unitrefactor into mainline yt-3.0
are the red ones on these boards:
In essence, it's the remaining frontends and the SPH smoothing docs.
Orange cards are blockers to a 3.0 release.
The pull request for rebranding is here:
This may cause problems, although we have attempted to retain
backwards compatibility. The test suite passes except for the three
frontends that have not been converted to unitrefactor. To test, run
hg pull -r rebranding http://bitbucket.org/MatthewTurk/yt-units
hg up rebranding
You may need to force a rebuild. Please report problems on the PR; if
you find something that needs doing, add a card on Trello.
The plan is to finish up the unit refactor documentation (SPH
smoothing is the only remaining blocker), the rebranding testing, and
then merge en masse into the main yt-3.0 branch in yt_analysis/yt.
After talking to Britton, Nathan, and Anthony about this, I think I
need to pull a mulligan on the current rebranding effort and start
afresh in unit refactor.
To that end, I'm going to have a sprint a week from Friday, from
11AM-1PM EST or so, to work on this. I'll set up a google hangout,
and if you want to come by, please do. It's going to be mostly some
big, easily done renames, and then fixing unit tests.
Here's the event page; sorry about the weird theme, I don't understand
why the default for a hangout is wine glasses.
Between now and then, I am traveling until Saturday, then next week
will be devoting myself to fixing the remaining unit-refactor issues
as enumerated on the Trello boards for unit refactor and
documentation. Once rebranding is done, the docs will have to be
lightly updated, but not too badly.
This thread on astropy-dev might be of interest to people here,
particularly those of you working on simulated observations.
---------- Forwarded message ----------
From: Perry Greenfield <stsci.perry(a)gmail.com>
Date: Fri, Mar 7, 2014 at 11:31 AM
Subject: [astropy-dev] Adding optional information to models
We've been looking at how to merge the capabilities of models with the
spectral objects we've been using for pysynphot (a topic that will be
raised in a separate thread) and one of the issues that has surfaced
has been how we handle the "waveset" info we have for some of our
current analytic models in pysynphot.
To explain a bit, the analytic models have a method that returns a
set of input values that is judged to sample the feature reasonably
well. The purpose of this is that when one is combining such models
and then binning, integrating, or plotting them later, it is possible
to lose fine structure in any random sampling scheme. A classic
example is a Gaussian: if it has a small sigma and is evaluated on a
regular grid or some other source of sample points, it may not appear
in an integration or plot of the result because its main structure
falls in between those samples.
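A quick numerical illustration of the undersampling problem: a very narrow Gaussian integrated on a coarse regular grid can vanish entirely, while a grid augmented with a waveset that samples the peak recovers the correct area. (The sigma, center, and grid sizes below are arbitrary choices for the demonstration.)

```python
import numpy as np

sigma, mu = 1e-3, 0.37
gauss = lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2)
true_area = sigma * np.sqrt(2.0 * np.pi)  # analytic integral

def trapezoid(y, x):
    # Simple trapezoidal rule (avoids NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Coarse regular grid: no sample lands inside the peak, so it vanishes.
coarse = np.linspace(0.0, 1.0, 11)
coarse_area = trapezoid(gauss(coarse), coarse)

# Add a "waveset" that samples the peak well, then take the union,
# as pysynphot does with its wavelength samples.
waveset = mu + sigma * np.linspace(-6.0, 6.0, 121)
fine = np.union1d(coarse, waveset)
fine_area = trapezoid(gauss(fine), fine)

print(coarse_area)  # ~0: the feature is lost entirely
print(fine_area)    # close to the analytic area, ~2.5e-3
```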
Pysynphot gets around this by making a union of all wavelength samples
present in all tabular data. But analytic components don't have a
wavetable by default since they can be evaluated at any wavelength.
Still, it is useful for them to have some equivalent to that so that a
feature like a Gaussian isn't missed in the other existing wavelength
tables. So we added an optional waveset to analytic models that had
features that might be missed. For a Gaussian, that means giving a set
of points that sample the peak reasonably well (what is "reasonable"
is of course a somewhat murky concept). Perhaps it could be called the
In our reworking of pysynphot, the design under consideration would
need this info in the model itself. So we want to consider an
attribute for models that provides a set of samples to cover such a
need. If the analytic model doesn't need it (e.g., a power law), it
can default to be an empty array. When models are combined (e.g.,
adding two different Gaussians), the resulting sample set would be the
union of the two input models' wavesets (this is done recursively). For
pysynphot, this union was computed recursively on the fly when
requested, though a more static approach could be used as well, or
some caching scheme.
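A sketch of how the recursive union could look, in plain Python with hypothetical class names (this does not match astropy.modeling's actual API):

```python
import numpy as np

# Hypothetical sketch of the recursive waveset union; class names are
# illustrative only and do NOT match astropy.modeling.
class Model:
    waveset = np.array([])  # default: no preferred samples (e.g., power law)

class Gaussian(Model):
    def __init__(self, mu, sigma):
        # Sample the peak "reasonably well": here, +/- 5 sigma.
        self.waveset = mu + sigma * np.linspace(-5.0, 5.0, 51)

class SumModel(Model):
    def __init__(self, left, right):
        # Combined models take the union of their inputs' wavesets;
        # nesting SumModels applies this recursively through the tree.
        self.waveset = np.union1d(left.waveset, right.waveset)

combined = SumModel(Gaussian(500.0, 1.0), Gaussian(900.0, 2.0))
print(combined.waveset.size)  # samples covering both peaks
```

An empty default waveset means models without fine features contribute nothing to the union, which matches the "default to an empty array" behavior described above.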
People that wanted to add a new model wouldn't have to handle this if
it wasn't needed for their use. But for some kinds of models, it can
be quite useful for the same purposes (binning, integration, or
For 1-d models, it is a very straightforward thing to implement. For
2-d and higher dimensions, there are a few alternatives for saving the
info (gridded vs ungridded?).
Any reason we shouldn't add this capability to models?
I am sorry that somehow my mailbox could not receive the mail successfully, so I am copying and pasting the previous discussion at the bottom of this email. I think my data is cell-centered, and I give yt my velocity field under the names ["velx", "vely", "velz"]. Is there anything wrong?
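One way to sanity-check the cell-centered assumption independently of yt: on a cell-centered grid, centered differences (as in np.gradient) yield a divergence with no spatial shift, whereas treating face-centered data as cell-centered (or using one-sided differences) could shift features by half a cell toward one corner, which is roughly what the offset in the plots looks like. A small check with a linear field whose divergence is known exactly (the "velx"/"vely" names follow the data above; everything else is illustrative):

```python
import numpy as np

# Build a cell-centered uniform grid and a linear velocity field
# v = (x, y); its analytic divergence is exactly 2 everywhere.
n = 32
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx          # cell-center coordinates
X, Y = np.meshgrid(x, x, indexing="ij")
velx, vely = X, Y                      # field names as in the user's data

# Centered differences keep the result co-located with the cells:
div = np.gradient(velx, dx, axis=0) + np.gradient(vely, dx, axis=1)
print(np.allclose(div, 2.0))  # True: no offset for cell-centered data
```

If the same recipe applied to the real data still shows an offset, the values are likely face-centered (or the coordinate convention differs), not a bug in the derivative itself.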
Are values stored as face-centered or cell-centered in your output?
On Wed, Mar 5, 2014 at 5:21 AM, 吳佳鴻 <r00222055(a)ntu.edu.tw> wrote:
> Dear all,
> I’m new to YT. I encountered a problem when trying to plot the
> divergence and vorticity of a velocity field with an octree data
> structure similar to FLASH.
> I put a cylindrically symmetric velocity field into YT. So far I could
> get Velocity Magnitude Plot correctly and the Velocity vector looks
> also fine (see the attachment). However, the VorticityMagnitude and
> DivV plots have an offset toward the top right corner. I was wondering where I went wrong.
> Any suggestion will be welcome, and thanks for the help in advance.
> yt-dev mailing list