New issue 1287: New camera position cannot be aligned with north_vector that was used in previous invocation of set_position
Load any data set. The following script will produce an error:
sc = yt.create_scene(ds)
cam = sc.camera
cam.focus = [0, 0, 0]
cam_pos = [1, 0, 0]
north_vector = [0, 1, 0]
cam.set_position(cam_pos, north_vector)
cam_pos = [0, 1, 0]
north_vector = [0, 0, 1]
cam.set_position(cam_pos, north_vector)  # errors here
Clearly, the user is specifying a cam_pos and north_vector that are **not** aligned. However, in the current code setup, the position property's fset() passes north_vector=None to `switch_orientation`. Because `north_vector` is None in `switch_orientation`, the old value of north_vector is passed to `_setup_normalized_vectors`, which then errors out because the new camera position-focus vector is aligned with the old north_vector.
This issue will only arise in rare cases; however, it took me a very perplexed hour (and learning about python properties :-)) to figure out what was going on.
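The mechanics are easy to reproduce outside yt. Here is a minimal, self-contained sketch (a toy class, not yt's actual Camera) of how a property setter that forwards north_vector=None resurrects the stale value:

```python
def _cross(a, b):
    """3-vector cross product."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

class Camera:
    """Toy model of the property wiring described above: assigning to
    .position goes through the property's setter, which calls
    switch_orientation with north_vector=None, so the previously stored
    north_vector gets reused."""

    def __init__(self, position, focus, north_vector):
        self._position = list(position)
        self.focus = list(focus)
        self.north_vector = list(north_vector)

    def _setup_normalized_vectors(self, normal_vector, north_vector):
        # Errors out when the view normal is parallel to north_vector.
        if all(abs(c) < 1e-12 for c in _cross(normal_vector, north_vector)):
            raise ValueError("north_vector is aligned with the view normal")
        self.north_vector = north_vector

    def switch_orientation(self, normal_vector, north_vector=None):
        if north_vector is None:
            # Fall back to the *old* north_vector -- the root of the bug.
            north_vector = self.north_vector
        self._setup_normalized_vectors(normal_vector, north_vector)

    @property
    def position(self):
        return self._position

    @position.setter
    def position(self, value):
        self._position = list(value)
        normal = [p - f for p, f in zip(self._position, self.focus)]
        # The setter has no way to receive a new north_vector, so it
        # passes None and the stale value wins.
        self.switch_orientation(normal, north_vector=None)

cam = Camera(position=[1, 0, 0], focus=[0, 0, 0], north_vector=[0, 1, 0])
try:
    cam.position = [0, 1, 0]  # new normal is parallel to the old north_vector
except ValueError as err:
    print("reproduced:", err)
```

Any fix needs to thread the caller-supplied north_vector all the way through this chain rather than dropping it at the property boundary.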
Matt has had two of his pull requests in the queue for a while. I think
both are very worthwhile changes that will unblock other work, so I'd like
to request some more pairs of eyes on them:
The first implements boolean objects, an oft-requested feature from yt 2.x.
The second makes it so our pixelization routines produce images via
compositing onto an existing image rather than returning an image which
does not exist yet. It also fixes some long-standing issues with the
orientation of images returned by the pixelizer.
The latter in particular will unblock work on supporting datasets that
contain multiple disjoint but continuous meshes - something that's common
in numerical relativity data.
New issue 1285: Yt loglevel: Many methods, why only one working?
The yt documentation describes several methods for changing the log level. The duplicated documentation is a problem for another day (one I may even attempt to address), but it is not the subject of this bug report. Here is the relevant documentation:
And here are the methods:
Method 1. From text editor: edit $HOME/.yt file to read:
loglevel = 50
Method 2. From command line, call your script "my_script.py" that includes yt analysis:
$ python2.7 my_script.py --config loglevel=50
Method 3. Within python:
Method 4. Within python:
>> yt.config.ytcfg["yt", "loglevel"] = "50"
Method 5. Within python:
>> yt.config.ytcfg.set("yt", "loglevel", "50")
Method 6. Within python:
>> from yt.config import ytcfg
>> ytcfg[ …. # (variant on methods 4 and 5)
While trying to change the log level, I learned about and tried them all, but only method 4 worked. The others leave the "[INFO]" messages on.
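For what it's worth, yt's messages go through the standard library logging module, so raising the level on the logger object itself also silences them, regardless of when the config machinery is consulted. This sketch assumes yt registers its logger under the name "yt":

```python
import logging

# Assumption: yt's logger is registered under the name "yt".
# Raising its level silences [INFO] chatter no matter how or when the
# config file / ytcfg was read.
logging.getLogger("yt").setLevel(50)  # 50 == logging.CRITICAL
```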
I just opened a pull request that makes a minor change to how you run the
yt unit tests in some cases:
See the pull request description for more details.
I'm raising this on the mailing list because it will impact how you run the
unit tests if you specify a path to a test file. I would very much
appreciate hearing from anyone that has concerns about this. I'm just raising
it here out of an abundance of caution; I expect everyone to be pretty
happy with this change, as it addresses a common source of user error when running the tests.
tl;dr: We are working on improving support and performance for particle
data in yt. We want feedback on our ideas and ask for people uninvolved in
this effort to do code reviews and participate in design and workflow
discussions going forward.
Last week we had a sprint here at NCSA around the "demeshening" idea a few
of us have been throwing around. The basic idea here is to incorporate the
code Meagan Lang has been working on over the past year into yt, allowing
improved performance for particle codes. One consequence of this work is
that there will no longer be a space-filling octree mesh for SPH data;
instead, the data will be indexed by local mesh patches built on compressed
EWAH bitmap indices.
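To give a flavor of what a compressed bitmap index buys (a toy run-length scheme below, not Meagan's word-aligned EWAH implementation): membership queries across files reduce to cheap bitwise operations on compressed bitmaps.

```python
def rle_encode(bits):
    """Run-length encode a bitmap as [value, count] pairs -- a toy
    stand-in for the word-aligned runs EWAH uses."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs):
    return [b for b, n in runs for _ in range(n)]

# "Which coarse index cells contain particles from file A AND file B?"
# becomes a bitwise AND over the (compressed) per-file bitmaps.
file_a = rle_encode([1, 1, 1, 0, 0, 0, 1, 0])
file_b = rle_encode([1, 0, 1, 1, 0, 0, 1, 1])
overlap = [x & y for x, y in zip(rle_decode(file_a), rle_decode(file_b))]
print(overlap)  # -> [1, 0, 1, 0, 0, 0, 1, 0]
```

A real implementation operates directly on the compressed runs without decoding, which is where the performance win comes from.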
As part of this sprint, we had lots of discussions about how this work
might proceed, and people also did some work on moving several aspects of
this effort forward:
I started work on an SPH pixelizer we can use for SlicePlot and
ProjectionPlot.
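In spirit, a "scatter"-style SPH pixelizer loops over particles rather than pixels, depositing each particle's value onto every pixel inside its smoothing length. A toy version over the unit square (illustrative only; the kernel and function name are made up, not the code in my branch):

```python
def sph_scatter(particles, nx, ny):
    """Deposit (x, y, h, value) particles onto an nx-by-ny image covering
    [0, 1] x [0, 1], weighting by a simple (1 - q)^2 falloff in place of
    a real SPH smoothing kernel."""
    dx, dy = 1.0 / nx, 1.0 / ny
    image = [[0.0] * ny for _ in range(nx)]
    for x, y, h, value in particles:
        for i in range(nx):
            for j in range(ny):
                cx, cy = (i + 0.5) * dx, (j + 0.5) * dy  # pixel center
                q = ((cx - x) ** 2 + (cy - y) ** 2) ** 0.5 / h
                if q < 1.0:
                    image[i][j] += value * (1.0 - q) ** 2
    return image

# One particle at the center of an 8x8 image:
img = sph_scatter([(0.5, 0.5, 0.3, 1.0)], 8, 8)
```

A real implementation would restrict the inner loops to the pixels each particle's smoothing circle actually overlaps, so the cost scales with the particle count rather than the full image size.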
Bili Dong started working on a ray data object that can be used to generate
absorption spectra from SPH data without depositing data onto an octree
And Matt Turk and Alex Lindsay worked on improvements for the field system:
Alex's pull request also received some love today and I'd like to request
review on it, as it will unblock further work on the pixelizers. In
particular, I'd like to build on top of Alex's pull request to add a new
kind of sampling_type, "local", which corresponds to fully local fields. This
generalizes the current default situation (i.e. the field does not have a
ValidateSpatial validator). My ultimate goal here is to make it easier to
support SPH particle fields. For fields that are fully local, we don't need
to create different implementations for the 'cell' and 'particle'
sampling_type; instead, we can just use the same implementation for all fields.
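The appeal of a fully local field is that its definition only touches arrays already present in whatever chunk it is evaluated on, so one definition can serve both cell and particle data. A schematic (hypothetical field function and dict-based chunks, not yt's field registry):

```python
def specific_kinetic_energy(data):
    """A fully local field: depends only on arrays in the chunk itself,
    never on spatial neighbors, so no ValidateSpatial-style machinery
    (ghost zones, smoothing) is required."""
    return [0.5 * v * v for v in data["velocity_magnitude"]]

# The same definition works regardless of what kind of chunk it sees:
cell_chunk = {"velocity_magnitude": [1.0, 2.0, 3.0]}  # grid cells
particle_chunk = {"velocity_magnitude": [4.0]}        # SPH particles
```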
Once that is in, I think I'll be able to make it so projections and slices
work with the majority of yt derived fields using the new SPH pixelizers
I've added in PR 2382:
Unfortunately PR 2382 won't be mergeable at that point because we also need
to come up with a solution for fields that require finite differences (this
is discussed a bit in one of Daniel Price's papers, starting around section
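For context, the difference-form SPH derivative estimate (the kind of machinery those fields would need) looks roughly like dA/dx|_i ≈ Σ_j (m_j/ρ_j)(A_j − A_i) ∂W/∂x(x_i − x_j, h). A 1D sketch with a Gaussian kernel (illustrative only; neither yt code nor Price's exact formulation):

```python
import math

def sph_gradient_1d(x, A, m, rho, h):
    """Difference-form SPH estimate of dA/dx at every particle, using a
    1D Gaussian kernel W(r, h) = exp(-(r/h)^2) / (h * sqrt(pi))."""
    def dW_dx(r):
        W = math.exp(-(r / h) ** 2) / (h * math.sqrt(math.pi))
        return -2.0 * r / h ** 2 * W
    grads = []
    for i in range(len(x)):
        g = 0.0
        for j in range(len(x)):
            g += (m[j] / rho[j]) * (A[j] - A[i]) * dW_dx(x[i] - x[j])
        grads.append(g)
    return grads

# Uniform particles carrying A = x: the interior estimate recovers
# dA/dx = 1 to good accuracy.
xs = [0.1 * i for i in range(41)]
g = sph_gradient_1d(xs, xs, [0.1] * 41, [1.0] * 41, 0.2)
```

The point for the field system is that evaluating such a field at particle i requires its neighbors, so it cannot be expressed as a fully local field.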
I think once we've agreed on the way SPH particle fields should work going
forward (I'm currently drafting a YTEP for this) then the demeshening work
can build on top of that in a piecemeal fashion.
If we can, I'd like to upstream changes early and often to make
this work easier to review and to improve testing by users and other
developers. I don't want to move development into another fork of yt or a
new named branch, since we had issues with that in the past. This approach
might add some work we wouldn't have otherwise done (i.e. we might need to
add compatibility layers that will eventually be torn down), but it will
help keep us honest and ensure we're getting reviews from people uninvolved
in this effort.
That said, if you'd like to try out the new code or even chip in, I'm happy
to pull changes from people's forks or to review pull requests against my
fork of yt.
I think that's it, sorry for the novel! I'd very much appreciate comments
or questions about this.
New issue 1281: YTQuadTreeProj contains many unusable methods
Unless I am misunderstanding something, there are several non-functioning methods listed for YTQuadTreeProj.
The [auto-generated documentation](yt-project.org/doc/reference/api/generated/yt.data_objects...) lists methods like "min()", "max()", "mean()", "integrate()", etc. These methods are also available from the python interface, but almost all of them just throw errors saying "Data selector... not implemented".
This is some combination of:
1. A bug
2. An oversight in documentation
3. An issue with method inheritance for YTQuadTreeProj
OR I just messed something up.
import yt

exampledir = "/home/sfeister/test/yttest/ytExampleData/"
ds = yt.load(exampledir + "Enzo_64/DD0043/data0043")
reg = ds.all_data()
print(reg.min('Density'))  # Works
proj = reg.integrate('Density', 2)  # Do a projection of the region
print(proj.__doc__)  # Docstring: methods of YTQuadTreeProj include "integrate", "min", etc.
print(proj.min('Density'))  # Fails, with the error below
print(proj.integrate('Density', 1))  # Fails with a similar error
Error is as follows:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/data_objects/data_containers.py", line 778, in min
rv += (self._compute_extrema(f),)
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/data_objects/data_containers.py", line 706, in _compute_extrema
mi, ma = self.quantities.extrema(field)
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/data_objects/derived_quantities.py", line 510, in __call__
rv = super(Extrema, self).__call__(fields, non_zero)
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/data_objects/derived_quantities.py", line 66, in __call__
for sto, ds in parallel_objects(chunks, -1, storage = storage):
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/utilities/parallel_tools/parallel_analysis_interface.py", line 508, in parallel_objects
for result_id, obj in oiter:
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/data_objects/data_containers.py", line 1087, in chunks
for chunk in self.index._chunk(self, chunking_style, **kwargs):
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/geometry/geometry_handler.py", line 251, in _chunk
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/geometry/grid_geometry_handler.py", line 293, in _identify_base_chunk
gi = dobj.selector.select_grids(self.grid_left_edge,
File "/home/sfeister/anaconda2/lib/python2.7/site-packages/yt/data_objects/data_containers.py", line 1075, in selector
yt.utilities.exceptions.YTDataSelectorNotImplemented: Data selector 'proj' not implemented.