I've been playing around with the orientation of the VR images and have a PR that introduces a fix. Since thinking about 3-d, and understanding it when looking at 2-d projections, is at times confusing, I put together some examples and would like comments.
The test dataset is a simple one. The domain is [-1,1]^3, and there is a single cube on the +x axis, two cubes on +y, and three on +z. There is a sphere at the center, and the planes corresponding to x = xmin, y = ymin, and z = zmin are highlighted as well.
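For concreteness, here is a sketch of how such a test dataset could be built with plain numpy (the exact cube positions, sizes, and field values are my assumptions, not necessarily the ones in the actual dataset); the resulting array could then be handed to yt for rendering:

```python
import numpy as np

# 64^3 uniform grid on the domain [-1, 1]^3
N = 64
edges = np.linspace(-1.0, 1.0, N + 1)
cc = 0.5 * (edges[:-1] + edges[1:])          # cell-center coordinates
x, y, z = np.meshgrid(cc, cc, cc, indexing="ij")

field = np.zeros((N, N, N))

def add_cube(center, half_width=0.1, value=1.0):
    """Mark a small axis-aligned cube of cells around `center`."""
    mask = ((np.abs(x - center[0]) < half_width) &
            (np.abs(y - center[1]) < half_width) &
            (np.abs(z - center[2]) < half_width))
    field[mask] = value

# one cube on +x, two on +y, three on +z (positions are arbitrary choices)
add_cube((0.7, 0.0, 0.0))
for yc in (0.45, 0.75):
    add_cube((0.0, yc, 0.0))
for zc in (0.3, 0.55, 0.8):
    add_cube((0.0, 0.0, zc))

# sphere at the center
field[x**2 + y**2 + z**2 < 0.2**2] = 2.0
```

Because the counts differ along each axis, any flip or transpose in the rendered image is immediately visible.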
The first point of confusion is the direction of the normal vector we use in the camera setup. The docstring says "The vector between the camera position and the center". For all of the renderings, I put the center of the camera at (0,0,0), so the docs suggest that the vector should point from where I am standing toward the origin. Therefore, if I want my view to hover in the +x, +y, +z octant, I would set it to (-1, -1, -1). But I seem to find that the normal vector points outward from the center, and (1,1,1) gives me what I want.
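To spell out the two conventions, here is a minimal pure-numpy sketch (no yt calls, and the variable names are mine): with the camera at position P looking at focus C, the docstring's reading of "the vector between the camera position and the center" would be C - P, while the behavior I seem to observe corresponds to P - C, pointing outward from the center:

```python
import numpy as np

center = np.array([0.0, 0.0, 0.0])      # the point the camera looks at
position = np.array([1.0, 1.0, 1.0])    # hovering in the +x, +y, +z octant

def unit(v):
    return v / np.linalg.norm(v)

# docstring reading: points from where I stand toward the center
normal_docstring = unit(center - position)   # (-1, -1, -1) / sqrt(3)

# observed behavior: points outward from the center toward the camera
normal_observed = unit(position - center)    # (1, 1, 1) / sqrt(3)
```

The two are exact negatives of one another, which is why the confusion manifests as a view from the opposite octant rather than an obviously broken image.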
To make sense of all of this, here are three different views, each with the old orientation (current in yt) and the one in my PR 1117. For each I also show an animation where I rotate in yaw (I intended a full 360 degrees, but it falls slightly short of a complete circle; I'm not worrying about that now).
You may need to stare at these a bit to get a sense of the rotation, but it should be right-handed, which should help break any degeneracies.
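For reference, by "right-handed" I mean a rotation that takes +x into +y when rotating about +z. A small sketch of such a yaw rotation, using Rodrigues' formula (this is just to pin down the convention, not yt's actual rotation code):

```python
import numpy as np

def yaw(vec, theta, up=np.array([0.0, 0.0, 1.0])):
    """Rotate vec by theta about the `up` axis (right-handed),
    via Rodrigues' rotation formula."""
    k = up / np.linalg.norm(up)
    return (vec * np.cos(theta)
            + np.cross(k, vec) * np.sin(theta)
            + k * np.dot(k, vec) * (1.0 - np.cos(theta)))

view = np.array([1.0, 0.0, 0.0])
# a quarter turn about +z takes the +x view direction to +y
quarter = yaw(view, np.pi / 2)
```

So in the animations, looking down the -z axis, the camera should appear to sweep counterclockwise around the scene.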
Note that I don't use draw_domain() because it does not (yet) work with the fix in the PR (the domain annotation is drawn upside down relative to the corrected image).
If you know the VR well, please comment on whether I am just still confused, or whether the PR gets things right. It matches my expectations, but I may be making some wrong assumptions about what we want the VR to do.