Hi all,
Cameron Hummels and I are both in Seattle right now at the Python in
Astronomy conference. There's a lot of time at this conference for
sprinting, and I thought we might spend some time working on yt issues.
In particular, I'd like to work on tasks we need to complete for yt 3.3.
Right now there are 170 open issues in the bug tracker:
https://bitbucket.org/yt_analysis/yt/issues?status=new&status=open
I know that a number of these issues are fixable with just a few hours of
work, at most. I'd like to spend sprint time triaging issues and
identifying a set of issues that are blockers for the yt 3.3 release.
Mostly, this will be issues that Cameron and I think are fixable with at
most a few hours of work. The rest of the issues will be bulk-reassigned to
yt 3.4, or closed if they are already fixed or lack sufficient
information to reproduce them.
In the end we will have a list of issues that should block the 3.3 release,
and then hopefully we will be able to work on fixing them in the next few
weeks and release 3.3.
Does anyone have any objections to Cameron and me doing this? Many apologies
for the large amount of e-mail spam this will generate. We will try to
finish in one day to minimize the inbox damage on people who are subscribed
to issues.
-Nathan
Hi folks,
Over the last little while, Kacper, Andrew and I have been picking up
on some work started by Chuck Rozhon to implement OpenGL-based volume
rendering of AMR data. Kacper recorded a demo about a week ago,
although it has improved considerably even since then:
https://www.youtube.com/watch?v=yiiBDK1OJDo
As of right now, it can do these things:
* Load up a "data source" (which can be all_data, but doesn't need to be)
* Utilize orthographic and perspective cameras
* Maximum intensity projection
* Integrated projection
* Apply colormaps to these two things, using two-pass rendering
* Trackball camera with keyboard shortcuts for moving around the domain
* Output camera information that is compatible with the software
renderer (i.e., it can be used to get a sequence of camera positions)
* Save images out
* Preliminary support for color transfer function-based VR. At
present this only supports isosurfaces that are manually encoded. It
will soon take 1D textures from the CTF object.
The system has been designed to be very modular, with extensible
keyboard and mouse shortcuts. Kacper has even been able to build a
very lightweight Qt-based GUI around it (on BB as xarthisius/reason)
without changing much (if any) of the internal yt code. Also, it works
reasonably well even on fairly old graphics cards for reasonably sized
data. (And since it'll accept data objects that are cutouts, this
means you could pull a sphere or block out of a gigantic dataset and
use that.)
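To give a flavor of the intended workflow, here's a hypothetical sketch; `interactive_render` is a placeholder name for whatever entry point the PR finally exposes, and the sample dataset is one of the standard yt test datasets:
```
import yt

# Hypothetical sketch: the OpenGL entry point is still being settled in
# the PR, so the final call is shown as a commented placeholder.
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

# Any data object can serve as the "data source" -- here, a sphere cutout
# rather than the full domain.
sphere = ds.sphere("c", (250.0, "kpc"))

# interactive_render(sphere)  # placeholder: launch the interactive OpenGL view
```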
Anyway, the reason I'm writing is that I'd like to bring it to
people's attention sooner rather than later. It'll need some testing,
and we're working to get it into a readily usable state.
Before the WIP label is removed from the pull request, we're going to
add documentation (with notes that the API is likely unstable) and
hopefully a short screencast. But before then, I would like
to invite folks to either review the PR or to test it out.
https://bitbucket.org/yt_analysis/yt/pull-requests/1598
Note that this requires cyglfw3, which is accessible via pip.
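For reference, that's just:
```
pip install cyglfw3
```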
I'm pretty excited about this, and the design we have been aiming for
with the way it accepts objects and shaders should enable a lot of
cool things to be done -- especially with respect to selecting data,
presenting it, and so on.
I'd really like to see this be a part of 3.3.
-Matt
New issue 1189: error in volume rendering if north_vector is aligned with normal_vector
https://bitbucket.org/yt_analysis/yt/issues/1189/error-in-volume-rendering-…
Michael Zingale:
If the normal_vector is aligned with the north_vector, the volume rendering fails:
```
File "/home/zingale/development/yt-mz/yt/visualization/volume_rendering/camera.py", line 320, in switch_orientation
self._setup_normalized_vectors(normal_vector, north_vector)
File "/home/zingale/development/yt-mz/yt/utilities/orientation.py", line 92, in _setup_normalized_vectors
self.inv_mat = np.linalg.pinv(self.unit_vectors)
File "/usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py", line 1585, in pinv
u, s, vt = svd(a, 0)
File "/usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py", line 1327, in svd
u, s, vt = gufunc(a, signature=signature, extobj=extobj)
File "/usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py", line 99, in _raise_linalgerror_svd_nonconvergence
raise LinAlgError("SVD did not converge")
numpy.linalg.linalg.LinAlgError: SVD did not converge
```
If I offset it even just a little bit, it works. But if I want to do a rendering looking straight down along "north", I get this message. If this is a limitation of the VR, we should check for this configuration and issue a warning.
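A minimal sketch of the kind of guard that could be added (the function name is hypothetical; the real orientation setup lives in `yt/utilities/orientation.py`):
```
import numpy as np

def check_north_vector(normal_vector, north_vector, atol=1e-10):
    # Hypothetical guard: when north_vector is (anti-)parallel to
    # normal_vector, their cross product vanishes, the orientation basis
    # degenerates, and the SVD inside pinv fails to converge.
    n = np.asarray(normal_vector, dtype="float64")
    up = np.asarray(north_vector, dtype="float64")
    cross = np.cross(n / np.linalg.norm(n), up / np.linalg.norm(up))
    if np.linalg.norm(cross) < atol:
        raise ValueError(
            "north_vector is aligned with normal_vector; choose a "
            "north_vector that is not parallel to the view direction.")
```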
New issue 1188: Trouble Getting Started with yt
https://bitbucket.org/yt_analysis/yt/issues/1188/trouble-getting-started-wi…
Anonymous:
I'm new to yt and I've been having a really frustrating time getting started. I work with large simulations and have been using VisIt for 2D data. I want to switch to yt, but the frustration of just trying to get it to run almost made me quit and go back to VisIt.
My first problem was figuring out how to start yt at all. I went to the quickstart guide, but all it told me to do was run a bunch of ipython notebooks. Nowhere did it mention that yt can just be imported into python like any other module and run from there, which would have saved me a lot of time and trouble.

I don't know how to run an ipython notebook; I've never used them before. I eventually figured out how to create one, but then I had to access it remotely. All my data resides on the filesystem of the supercomputer where it was calculated. It's not feasible to fetch it to my local machine, especially not when I'm generating new data much faster than I could transfer it, so analysis needs to run on the viz node. Eventually I got a notebook to connect (I needed to set up an ssh tunnel, apparently), but it was really annoying to work through. I may be a programmer by profession, but I don't often deal with web stuff, and all of this setup is badly explained, when it's explained at all, on the relevant websites. Like I said, this could have been avoided if the quickstart guide had ever mentioned that I could just import yt and work with it like any other python module instead of diving straight into notebooks.
I also got lost for a while because there seem to be two sets of documentation for ipython notebooks - ipython, but then jupyter? All the recent documentation on notebooks is written for jupyter, but that's not installed on my viz node and yt's docs don't say anything about it. So that also confused the heck out of me for a little while.
The cookbook recipes are also really confusing if you're working with 2D data. Nearly everything is written assuming you have a 3D dataset. I had to mess around a bit before I figured out that you can do a SlicePlot with a 2D dataset as long as you specify a slice axis that does not, technically, exist. ProjectionPlot is really what should be used for 2D data, but the name makes it non-obvious that that's what you want.
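For reference, a minimal sketch of the workaround described above (the filename is a placeholder for any 2D dataset):
```
import yt

# Placeholder filename for a 2D dataset.
ds = yt.load("my_2d_dataset")

# For 2D data, slice along the degenerate third axis (here 'z'); the
# result is effectively a plot of the whole 2D domain.
slc = yt.SlicePlot(ds, "z", "density")
slc.save()

# ProjectionPlot works the same way for 2D data.
proj = yt.ProjectionPlot(ds, "z", "density")
proj.save()
```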
yt itself is great and I'd really like to use it. I understand that a lot of the devs and members of the community work with things like bitbucket and ipython notebooks on a daily basis, but I spend most of my time tackling 30-year-old Fortran and I'm certainly not the only one. I have enough trouble just getting these simulations to run in the first place; there's no room in my workflow for me to devote large chunks of time to unraveling the visualization software. Programs like VisIt and EnSuite might be balky and obscenely expensive, but in general you can turn them on, click a button, open your data, and get a simple picture in a few minutes. Please overhaul the "quickstart" guides so that it's easier for someone like me who has python and matplotlib experience but no notebook experience to just turn the dang thing on and make a picture with a minimum of fuss.
This may be of interest to some people here.
---------- Forwarded message ----------
From: Thomas Robitaille <thomas.p.robitaille(a)gmail.com>
Date: Thu, Mar 17, 2016 at 10:52 AM
Subject: [glue-viz] Glue v0.7.0 released!
To: "glue-viz(a)googlegroups.com" <glue-viz(a)googlegroups.com>,
glue-viz-dev(a)googlegroups.com
Hi everyone,
We are happy to announce the release of *Glue v0.7.0*!
*Changes in this release*
A large fraction of the code has been re-organized in order to make the
code base more approachable for new contributors. The documentation now
includes sections explaining various aspects of the Glue Architecture
<http://www.glueviz.org/en/stable/#the-glue-architecture> as well as how to
get started with Contributing to Glue
<http://www.glueviz.org/en/stable/#developing-glue> (which includes an
overview of the new layout of the code). If you are interested in
contributing to the Glue code, please get in touch, and I'll help you get
started!
As a result of the reorganization, it is possible that some imports in your
Glue scripts will need to be updated. You can find more information about
which imports have changed on this page
<http://www.glueviz.org/en/stable/whatsnew/0.7_code_reorganization.html>.
We have also tried to make sure that old session files will still load
correctly. However, if you run into issues with old session files or
scripts, please let us know and we will fix the issues as quickly as
possible.
This release also includes a number of improvements and bug fixes, as well
as some internal changes needed for upcoming 3D functionality, which we
will announce very soon!
More information about this release can be found at:
http://www.glueviz.org/en/stable/whatsnew/0.7.html#whatsnew-07
*Installing/updating Glue*
We recommend installing Glue using the Anaconda Python Distribution
<http://continuum.io/downloads>. Once you have Anaconda installed (or
Miniconda, a minimal version of Anaconda), you can install or update Glue
by simply typing:
conda install glueviz
in a terminal. This will install Glue as well as all of the required
dependencies. You can install any additional optional dependencies using pip
or the glue-deps command if needed.
You can also install or update Glue by clicking on *Install* in the
Anaconda Launcher.
If you are using a Mac and prefer to use the standalone Mac app (which
comes with its own built-in version of Python), you can still find the
latest stable version here:
http://mac.glueviz.org
but note that it can be harder to customize the Mac app, and it will not be
possible for you to install third-party plugins for now. Therefore, for
more flexibility in future, we recommend using the conda package.
Please let us know if you run into any issues, and thanks to everyone who
contributed to this release!
Cheers,
Tom
New issue 1187: ARTIO data cannot create Ray data objects
https://bitbucket.org/yt_analysis/yt/issues/1187/artio-data-cannot-create-r…
Cameron Hummels:
There appears to be something wrong with generating a `ray` object in ARTIO datasets. One can view its fields (e.g. 'density') with no problem, but viewing the ordered list of values along the ray (using the parametric variable `t`) fails. See [here](http://yt-project.org/docs/dev/faq/index.html#why-are-the-values-in-m…) for a full description of `t`'s use. This occurs in the two ARTIO datasets I've tested, and I have no problems accessing the contents of the `t` field in rays from other frontends.
I've generated a simple script (using the [publicly available artio dataset](http://yt-project.org/data/)) to demonstrate this:
```
#!python
import yt
ds = yt.load('sizmbhloz-clref04SNth-rs9_a0.9011/sizmbhloz-clref04SNth-rs9_a0.9011.art')
ray = ds.ray(ds.domain_left_edge, ds.domain_right_edge)
print "first density value in ray: %f" % ray['density'][0]
print "first t value in ray: %f" % ray['t'][0]
```
with output/traceback here:
http://paste.yt-project.org/show/6319/
Hi!
I am Kumar Ayush, an Engineering Physics sophomore who also loves coding. I
am the convener of the Web n Coding Club in my institute.
I have been working with astronomy for 5 years now, having represented my
country twice at the International Astronomy Olympiads. I really liked the
project idea "Domain Contexts and Domain-specific fields".
I am not familiar with yt and am really clueless about how to proceed.
It would be great if the mentors for this project can help me out.
Regards
Kumar Ayush
cheekujodhpur.github.io
New issue 1186: Include particles in the neighboring blocks for nearest neighbor particle deposition?
https://bitbucket.org/yt_analysis/yt/issues/1186/include-particles-in-the-n…
Yi-Hao Chen:
As I understand it, the current nearest neighbor particle deposition function takes only the particles within a block and assigns each cell's value from the nearest such particle. It does not take the particles in adjacent blocks into account, which results in sharp boundaries between blocks in the deposited field. It also leaves blocks empty if they contain no particles, even when there are particles just outside the block boundary.
I am interested in implementing a more flexible nearest neighbor deposition that takes at least the particles in the surrounding blocks into consideration. I have thought about some possible methods, but due to my unfamiliarity with the data structures, I am not sure whether they are really feasible. It would be great to have your suggestions. Thanks!
1. Pass more particles into the deposition function in `Dataset.add_deposited_particle_field`. Instead of using `pos = data[ptype, "particle_position"]`, I could include all nearby particles with something like `pos = data.ds.all_data()[ptype, "particle_position"][mask]`, where `mask` selects the needed particles by the block boundaries (see the sketch after this list).
However, since `data` is not necessarily a grid, this might cause problems.
2. Use a validator to force `data` to be a grid.
Then I could include particles in the surrounding blocks. I am not sure whether yt loads the indices of the surrounding blocks, but the information seems to be readily available in my FLASH plot files.
3. Force `data` to include guard cells.
I don't really know how guard cells work in yt, but in principle I could store the deposited field and distance information in the guard cells and compare them with the grid data to decide which value to keep. It might be too complicated, though...
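For concreteness, here is a minimal sketch of option 1, with the caveats above: it assumes `data` is a grid patch exposing `LeftEdge`/`RightEdge` (which, as noted, is not guaranteed for a generic data object), and the helper name is hypothetical.
```
#!python
import numpy as np

def nearby_particle_positions(data, ptype="all", pad=None):
    # Hypothetical helper: gather particle positions for the whole dataset,
    # then keep only those within `pad` of this block's edges.
    ad = data.ds.all_data()
    pos = ad[ptype, "particle_position"]
    if pad is None:
        # Default: pad by one block width in each dimension.
        pad = data.RightEdge - data.LeftEdge
    mask = np.all((pos >= data.LeftEdge - pad) &
                  (pos <= data.RightEdge + pad), axis=1)
    return pos[mask]
```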