Hi all,

I've been trying to fix a bug in the opaque rendering. It comes down to which way the rays get integrated (back-to-front vs. front-to-back) with respect to the camera position. Right now I think the behavior is such that the bricks are handed to the sampler back to front, but the bricks themselves are rendered front to back.

Anyways, the bug shows up better in parallel, and I've been using

http://paste.yt-project.org/show/2372/

with

https://bitbucket.org/samskillman/yt-refactor/changeset/b4db9a5b5704

The behavior ends up being something like this (this particular image might have the opposite L from the script): http://imgur.com/Od3np.png

If you get something that looks okay, rotate it to a few different positions and see if it still works.

If anyone wants to dig in, I would appreciate the help.

Cheers,
Sam
Hi Sam,

(For everyone else, this does not cause issues with the main branch, where opacity is largely ignored.)

On Fri, May 11, 2012 at 5:12 PM, Sam Skillman <samskillman@gmail.com> wrote:
> Hi all,
>
> I've been trying to fix a bug in the opaque rendering. It comes down to which way the rays get integrated (back-to-front vs. front-to-back) with respect to the camera position. Right now I think the behavior is such that the bricks are handed to the sampler back to front, but the bricks themselves are rendered front to back. Anyways, the bug shows up better in parallel, and I've been using
>
> http://paste.yt-project.org/show/2372/
>
> with
>
> https://bitbucket.org/samskillman/yt-refactor/changeset/b4db9a5b5704
>
> The behavior ends up being something like this (this particular image might have the opposite L from the script): http://imgur.com/Od3np.png
>
> If you get something that looks okay, rotate it to a few different positions and see if it still works.
In camera.py, the code that I think is related to this is the creation of the third element of box_vectors. This is created by multiplying unit_vector[2] (the normalized "normal" vector) by the third element of width. It then gets passed in to the sampler, where it ultimately gets fed into the walk_volume function. The origin is defined as the center minus half of the width times each of the unit vectors.

walk_volume then takes the origin, increments along each of the first two unit vectors (to dynamically generate positions), and then *along* vp_dir (unit_vector[2]) it calculates the point of first intersection. That is, the position of each *emitted ray* is dynamically generated, and then along vp_dir (unit_vector[2]) the first intersection with a brick is calculated.

The code for generating the positions is in __call__ in grid_traversal.pyx, around line 274. What it's doing there is taking a pixel value, transforming it with the inverse of the unit-vector matrix, and offsetting it by back_center. Note that it's okay to offset by back_center instead of a back corner because we move from -width/2.0 to width/2.0. So I think the positions are actually being calculated okay, unless you can identify an issue there. (They should all be identical to eye_position + unit_vector[2] * width[2].)

The next thing to look at is walk_volume. walk_volume calculates the intersection time t of a ray, defined by a position (calculated above) and a vector (provided), with the first face it encounters. It does this by looking at all six faces and calculating the intersection time as (face_position - initial_position) / vector_component. This is then set as the initial position of the ray, and it walks across the grid.

The variable step[i] defines the increment along directional component i: it determines whether grid cell indices along that direction should be incremented or decremented as a ray passes over the brick. It is set to the sign of v_dir, which means that a ray moving from a high positional value to a low positional value will decrement cell indices as it walks over the grid.

So unless there's something wrong with how any of these items are done (and I have not identified a problem), I think it's okay. I'd suggest you take a look at exactly how the intersection is calculated and how step is calculated -- once those two things are done, the vector direction doesn't actually enter into the computation. As in, once you've entered a brick, and the code thinks it knows *where* you've entered it, it doesn't look at how you're moving anymore other than through step and tmax.

This code has been changed during the broader volume rendering refactor. My recollection is that your color-mapping code touches the code in fewer places and is much more self-contained; I would suggest you try it on the current development tip to see whether the problem exists there as well.

-Matt
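P.S. To make the two pieces above concrete, here are a couple of rough Python sketches. They reflect my reading of the logic rather than the actual code, and the helper names and signatures (generate_ray_origins, walk_brick, nx, ny, sample, and so on) are made up for illustration.

First, the position generation: each image-plane coordinate, running from -width/2.0 to width/2.0, gets transformed with the inverse of the unit-vector matrix and offset by back_center:

    import numpy as np

    def generate_ray_origins(back_center, unit_vectors, width, nx, ny):
        # Hypothetical helper sketching the position generation done in
        # __call__ in grid_traversal.pyx; not the real implementation.
        inv_mat = np.linalg.inv(unit_vectors)
        origins = np.empty((nx, ny, 3))
        for i in range(nx):
            for j in range(ny):
                # Image-plane coordinates run from -width/2.0 to
                # width/2.0, which is why offsetting by back_center
                # (rather than a back corner) works.
                px = (i + 0.5) / nx * width[0] - width[0] / 2.0
                py = (j + 0.5) / ny * width[1] - width[1] / 2.0
                origins[i, j] = np.dot(inv_mat, [px, py, 0.0]) + back_center
        return origins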
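And the traversal side: compute the time of first intersection from the face positions, then walk the brick using only step, tmax, and tdelta (an Amanatides & Woo style grid walk). This sketch assumes every component of v_dir is nonzero and that sample stands in for the real sampling callback:

    import numpy as np

    def walk_brick(pos, v_dir, left_edge, right_edge, dds, dims, sample):
        # Hypothetical sketch of walk_volume; not the real Cython code.
        # Intersection time with each pair of faces:
        #   t = (face_position - initial_position) / vector_component
        t0 = (left_edge - pos) / v_dir
        t1 = (right_edge - pos) / v_dir
        t_enter = max(np.minimum(t0, t1).max(), 0.0)
        t_exit = np.maximum(t0, t1).min()
        if t_enter >= t_exit:
            return                          # ray never enters this brick
        pos = pos + t_enter * v_dir         # advance to first intersection

        n = np.clip(((pos - left_edge) / dds).astype(np.int64), 0, dims - 1)
        step = np.where(v_dir > 0, 1, -1)   # sign of v_dir: whether indices
                                            # increment or decrement per axis
        tdelta = np.abs(dds / v_dir)        # time to cross one cell per axis
        # Time at which the ray crosses the next cell face on each axis:
        tmax = (left_edge + (n + (step > 0)) * dds - pos) / v_dir

        # Past this point v_dir is never consulted again -- only step and
        # tmax decide how the walk proceeds, which is why I'd audit
        # exactly those two quantities.
        while np.all((n >= 0) & (n < dims)):
            axis = int(np.argmin(tmax))
            sample(tuple(n))                # visit/integrate current cell
            n[axis] += step[axis]
            tmax[axis] += tdelta[axis]

A quick sanity check: pass a sample that appends the cell index to a list, then trace the same ray with v_dir and with -v_dir (starting from the opposite face) and verify that the two visit orders are exact reverses of each other.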
> If anyone wants to dig in, I would appreciate the help.
>
> Cheers, Sam