Making volume rendering interactive

Hi everyone,
I'm a new user to yt (which looks amazing, by the way), and have a question about volume rendering...
What I'd like to do is have interactive volume rendering, like what is in VTK. In VTK, you can use the mouse to pan, zoom, and dolly. From what I can see, this isn't directly possible with yt currently. Is that correct?
Assuming that is correct, I'd be happy to write my own interactive navigator using matplotlib. However, I'd need to be able to map an (i, j) pixel from the 2D image output of camera.snapshot to an (x, y, z) position in the volume. Is there a way to do this without too much effort? I see there is a method called 'project_to_plane' in the camera, which seems like it might do the inverse of that...
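To make the mapping I have in mind concrete, here is a rough sketch of what I think the pixel-to-world transform looks like for an orthographic camera. The function name and the assumption that the camera exposes its center, normal vector, north vector, width, and resolution are mine, not yt's API; the handedness of the in-plane basis may need flipping to match what camera.snapshot actually does, and depth along the view direction is of course ambiguous, so this only gives the point on the image plane through the camera center:

    import numpy as np

    def pixel_to_world(i, j, center, normal_vector, north_vector, width, resolution):
        """Map an (i, j) pixel in a snapshot to the (x, y, z) point where its
        ray crosses the image plane through the camera center.

        Assumes an orthographic camera whose image plane is centered on
        `center`, spans `width` = (wx, wy) in code units, and is sampled on a
        `resolution` = (nx, ny) grid.
        """
        normal = np.asarray(normal_vector, dtype="float64")
        normal = normal / np.linalg.norm(normal)
        north = np.asarray(north_vector, dtype="float64")
        # Build an orthonormal in-plane basis (east, north) perpendicular to the view.
        east = np.cross(north, normal)
        east = east / np.linalg.norm(east)
        north = np.cross(normal, east)

        nx, ny = resolution
        wx, wy = width
        # Fractional offsets of the pixel center from the image center, in [-0.5, 0.5).
        u = (i + 0.5) / nx - 0.5
        v = (j + 0.5) / ny - 0.5
        return np.asarray(center, dtype="float64") + u * wx * east + v * wy * north

If something like this is already hiding in the camera machinery (or project_to_plane really is its inverse), I'd love to know.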
Relatedly, I was told on IRC that there is a new camera interface in the works (http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0010.html). Is there a branch for this somewhere? Does it need contributors? Is it going to be interactive in the above sense?
Thanks for your help!
Irwin

Hello Irwin!
My associates and I at UCSC are currently working on a piece of software that does just this (interactive remote volume rendering using GPUs). It works right now, but it is still in the alpha/beta stage. However, it is designed to interoperate nicely with yt. If you'd like to try it out, email me off-list and I can help you out.
John Holdener
Research Assistant, UC HiPACC
(408) 828-8699