Right now it's somewhat different from the standard VR interface.  There's a cookbook recipe in the pull request you can look at to get an idea of how to set it up.

In principle the details could be wrapped up to make the user-facing API more similar.
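
To give a flavor, here is roughly what the recipe looks like.  The
module paths and names below are taken from the current WIP branch,
so treat this as a sketch of an API that may still change:

    import yt
    from yt.visualization.volume_rendering.interactive_vr import \
        SceneGraph, BlockCollection, TrackballCamera
    from yt.visualization.volume_rendering.interactive_loop import \
        RenderingContext

    # Create an OpenGL window and context to render into.
    rc = RenderingContext(1280, 960)

    # A scene holds block collections, uploaded to the GPU as textures.
    scene = SceneGraph()
    collection = BlockCollection()

    # Any data object will do here, not just all_data().
    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    dd = ds.all_data()
    collection.add_data(dd, "density")
    scene.add_collection(collection)

    # A trackball camera with keyboard and mouse controls.
    c = TrackballCamera(position=(1.0, 1.0, 1.0),
                        focus=ds.domain_center, near_plane=0.1)

    # Enter the interactive rendering loop.
    rc.start_loop(scene, c)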

On Tuesday, March 1, 2016, Cameron Hummels <chummels@gmail.com> wrote:
Does this have a similar API / interface to the standard VR?  

On Thu, Feb 25, 2016 at 9:45 AM, Nathan Goldbaum <nathan12343@gmail.com> wrote:


On Thu, Feb 25, 2016 at 11:40 AM, Matthew Turk <matthewturk@gmail.com> wrote:
Hi folks,

Over the last little while, Kacper, Andrew and I have been picking up
on some work started by Chuck Rozhon to implement OpenGL-based volume
rendering of AMR data.  Kacper recorded a demo about a week ago,
although it has improved considerably even since then:

https://www.youtube.com/watch?v=yiiBDK1OJDo

As of right now, it can do these things:

 * Load up a "data source" (which can be all_data, but doesn't need to be)
 * Utilize orthographic and perspective cameras
 * Maximum intensity projection
 * Integrated projection
 * Apply colormaps to both of these projection modes, using two-pass rendering
 * Trackball camera with keyboard shortcuts for moving around the domain
 * Output camera information that is compatible with the software
renderer (i.e., it can be used to get a sequence of camera positions;
see the sketch just after this list)
 * Save images out
 * Preliminary support for color transfer function-based VR.  At
present this only supports isosurfaces that are manually encoded.  It
will soon take 1D textures from the CTF object.
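
To make that camera hand-off concrete, here is a hypothetical sketch.
The Scene/create_scene API is the new software renderer in 3.3;
"interactive_camera" and its attribute names are just shorthand for
whatever the trackball camera ends up exposing, so don't hold the PR
to them:

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds, "density")

    # Copy a position/focus pair recorded from the interactive camera
    # onto the software renderer's camera, then render a frame offline.
    sc.camera.set_position(interactive_camera.position)
    sc.camera.focus = interactive_camera.focus
    sc.save("frame_0000.png")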

The system has been designed to be very modular, with extensible
keyboard and mouse shortcuts.  Kacper has even been able to build a
very lightweight Qt-based GUI around it (on BB as xarthisius/reason)
with little or no change to the code internal to yt.  It also works
reasonably well even on fairly old graphics cards with moderately
sized data.  (And since it accepts data objects that are cutouts, you
could pull a sphere or block out of a gigantic dataset and use just
that.)
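
For instance, a sketch of that, reusing the BlockCollection from the
recipe in the PR (the dataset path here is just a placeholder):

    # Cut a small sphere out of a much larger dataset and hand only
    # that cutout to the interactive renderer.
    ds = yt.load("path/to/some/gigantic_dataset")
    sp = ds.sphere("max", (100.0, "kpc"))
    collection.add_data(sp, "density")
    scene.add_collection(collection)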

Anyway, the reason I'm writing is that I'd like to bring it to
people's attention sooner rather than later.  It'll need some testing,
and we're also working to get it into a readily usable state.
Before the WIP label is removed from the pull request, we're going to
add documentation (with notes that the API is likely to be unstable)
and hopefully a short screencast.  But before then, I would like to
invite folks to either review the PR or test it out.

https://bitbucket.org/yt_analysis/yt/pull-requests/1598

Note that this requires cyglfw3, which is installable via pip.

It also needs the glfw3 library itself, which I installed via homebrew.
 

I'm pretty excited about this, and the design we have been aiming for
with the way it accepts objects and shaders should enable a lot of
cool things -- especially around selecting data, presenting it, and
so on.

I'd really like to see this be a part of 3.3.

-Matt
_______________________________________________
yt-dev mailing list
yt-dev@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org






--
Cameron Hummels
NSF Postdoctoral Fellow
Department of Astronomy
California Institute of Technology