Hi all,

It came up in the PR discussion today that we never settled in this thread on whether the new OpenGL VR should be included in the yt 3.3 release. I'd like to raise that now.

First, I want to say that I think the PR is really awesome and adds a great new feature to yt - one that would be very nice to include in the 3.3 release.

That said, I'm nervous about including it as it currently stands due to the relative lack of documentation. Right now the documentation consists of cookbook recipes that aren't linked anywhere in the docs build, which means users won't see them unless they clone the repository.

I understand the desire to hold off on documentation until the API is less experimental, but I'm wary of merging a big chunk of new code with so little documentation.

In addition, if we *do* want to advertise this functionality to people, then I think the path of highest empathy is to provide them with some documentation. We can easily warn them in the docs that the API is unstable, but at least they'll have a place to look. It will also be less work later to update existing docs if the API does change than to write docs from scratch.

I think just a bit more documentation, making sure it's visible in the docs build, and adding warnings about the experimental API would be enough for me to withdraw my reservations.

-Nathan

On Tue, Mar 1, 2016 at 9:00 AM, Kacper Kowalik <xarthisius.kk@gmail.com> wrote:
Hi Cameron,

On 03/01/2016 08:40 AM, Cameron Hummels wrote:
Does this have a similar API / interface to the standard VR?

You can recreate the image that you're seeing in the software renderer. There's a helper for that (well, a stub for a helper...). It could therefore be used as a tool to quickly identify key frames and pass them to the software VR to make a full-HD movie. We are also working on native support for TransferFunction, but that requires more work. If you'd like to help with that, that'd be awesome.
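
To be concrete, the idea is roughly the sketch below, using something
like the new Scene API on the software side.  The key frames here are
just a placeholder list of (position, focus) pairs -- the actual
export format from the GPU renderer isn't settled yet:

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds, field=("gas", "density"))
    cam = sc.camera

    # Placeholder key frames; in practice these would be the camera
    # positions/foci exported from the GPU renderer.
    key_frames = [
        ([0.5, 0.5, 1.5], [0.5, 0.5, 0.5]),
        ([1.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
    ]

    for i, (pos, focus) in enumerate(key_frames):
        cam.set_position(pos)
        cam.focus = focus
        sc.save("frame_%04d.png" % i, sigma_clip=4.0)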

However, the similarities end there. GPU VR needs to care about things like mouse movement, key callbacks, the OpenGL context, and OpenGL shaders. By definition it will have a different API / interface.

Cheers,
Kacper


On Thu, Feb 25, 2016 at 9:45 AM, Nathan Goldbaum <nathan12343@gmail.com>
wrote:



On Thu, Feb 25, 2016 at 11:40 AM, Matthew Turk <matthewturk@gmail.com>
wrote:

Hi folks,

Over the last little while, Kacper, Andrew and I have been picking up
on some work started by Chuck Rozhon to implement OpenGL-based volume
rendering of AMR data.  Kacper recorded a demo about a week ago,
although it has improved considerably even since then:

https://www.youtube.com/watch?v=yiiBDK1OJDo

As of right now, it can do these things:

  * Load up a "data source" (which can be all_data, but doesn't need to be)
  * Utilize orthographic and perspective cameras
  * Maximum intensity projection
  * Integrated projection
  * Apply colormaps to these two things, using two-pass rendering
  * Trackball camera with keyboard shortcuts for moving around the domain
  * Output camera information that is compatible with the software
renderer (i.e., it can be used to get a sequence of camera positions)
  * Save images out
  * Preliminary support for color transfer function-based VR.  At
present this only supports isosurfaces that are manually encoded.  It
will soon take 1D textures from the CTF object.
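
To give a flavor of that last item, the plan is to sample the CTF's
channels onto a 1D texture that the shader can look up.  Roughly
something like this (the bounds and layer choices are purely
illustrative, not the final API):

    import numpy as np
    from yt.visualization.volume_rendering.api import ColorTransferFunction

    # A CTF over example log10(density) bounds -- purely illustrative.
    tf = ColorTransferFunction((-28.0, -24.0))
    tf.add_layers(4, colormap="viridis", w=0.01)

    # Stack the sampled r, g, b, a channels into an (N, 4) array; this
    # is the sort of 1D texture the GPU renderer would upload.
    texture = np.stack([f.y for f in tf.funcs], axis=-1)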

The system has been designed to be very modular, with extensible
keyboard and mouse shortcuts.  Kacper has even been able to build a
very lightweight Qt-based GUI around it (on BB as xarthisius/reason)
without changing much, if any, of the internal-to-yt code.  Also, it
works reasonably well even on fairly old graphics cards for
reasonably sized data.  (And since it accepts data objects that are
cutouts, you could pull a sphere or block out of a gigantic dataset
and use just that.)
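
For example, something along these lines should be possible once the
entry point settles down.  (interactive_render below is just a
stand-in name for whatever the PR ends up exposing, not a promise of
the final API.)

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

    # Pull a small sphere out of the full dataset; only this cutout's
    # blocks would need to be uploaded to the GPU.
    sp = ds.sphere("max", (50.0, "kpc"))

    # Stand-in entry point for the OpenGL renderer from the PR; the
    # real name and signature are still subject to change.
    yt.interactive_render(sp, field=("gas", "density"))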

Anyway, the reason I'm writing is that I'd like to bring it to
people's attention sooner rather than later.  It'll need some
testing, and we're also working to get it into a readily usable
state.  Before the WIP label comes off the pull request, we're going
to add documentation (with notes that the API is likely unstable) and
hopefully a short screencast.  But before then, I would like to
invite folks to either review the PR or test it out.

https://bitbucket.org/yt_analysis/yt/pull-requests/1598

Note that this requires cyglfw3, which is accessible via pip.


Also needs glfw3, which I installed via homebrew.



I'm pretty excited about this, and the design we have been aiming
for -- the way it accepts data objects and shaders -- should enable a
lot of cool things, especially with respect to selecting data,
presenting it, and so on.

I'd really like to see this be a part of 3.3.

-Matt
_______________________________________________
yt-dev mailing list
yt-dev@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org