I think a lot of these could potentially be fixed/finished in a big sprint
(documentation stuff in particular). Perhaps we could split this into a
list of difficult vs. semi-trivial, and at least kill off the easier ones
before we do a big merge. I agree pretty much completely with your points,
but VR is not a feature I rely on, so I don't really lose anything if this
feature takes longer to get merged in.
On Wed, Sep 2, 2015 at 5:40 PM, Cameron Hummels wrote:
Hey everyone,
Short Version of this email:
Since it seems like I'm in the minority in trying to flush out some of the remaining problems with the VR infrastructure before it gets merged into the main head of the dev branch, I will not be an impediment anymore. I've provided a list of things I think should be addressed (or at least discussed) at the bottom of this email before merging, but as I said, I will no longer block the merge if people want to do it.
Long Version:
A few years ago a few of us decided that the volume rendering infrastructure was too clunky and difficult to use efficiently to make good volume renders, so we detailed how it should be improved in a YTEP ( http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0010.html). Subsequently, Sam Skillman put in a huge amount of work (with help from Matt, me, Andrew, Suoqing, and more) overhauling the volume rendering infrastructure to match the description of the YTEP. The new VR interface appears to work well in many cases, but I believe there are still a lot of remaining issues to be addressed before it's ready for mainstream: bugfixes, docs, recipes, etc.
About 6 months ago, a pull request ( https://bitbucket.org/yt_analysis/yt/pull-requests/1256/volume-rendering-ref...) was made to merge all of the new volume rendering infrastructure into the development branch, but at that point the PR was enormous and difficult to review. While there were still a lot of unresolved issues associated with the PR, it was decided to merge it into the "experimental" bookmark on the dev branch, so as not to hold up other development that depends on that infrastructure, and to make it easier for everyone to test.
While I was reviewing this pull request, I tried my best to read through all of the code diffs, make sure there was appropriate documentation (docstrings, narrative docs, comments, etc.) for such a significant change, and actually download and test-run the example code to see if it did what it was supposed to do. This was a ton of work, but I think this level of review needs to happen with major PRs, to ensure things are working before we merge and expose problematic code to the user base. If we put code out there and don't properly explain how to use it (or if it doesn't work), we're ultimately going to make a lot more work for ourselves responding to people's questions on the mailing list about how to use it. And we'll potentially sour a lot of people on the code when they try to use something and find it doesn't do what they want it to do.
Now there is increased pressure to just merge the VR refactor into the main dev branch so as to streamline overall development, and it seems I am the only one who has been resistant to this. The argument for the merge has been: "once it is in the main branch, people will sort out the bugs." Once something is in the main branch, it may well get more people trying to use it; however, the level of confusion and irritation will increase if there is insufficient documentation, and the ability to change any kind of API is severely limited, because doing so breaks people's scripts. The time for reviewing and fixing code comes before it gets merged into the main dev branch.
But ultimately, I do not want to be seen as the bad guy on this anymore. If people want to merge the code, then I don't want to hold it up. I made a list of items when I reviewed the PR a few months back, and below are the things I think remain to be done. But what really needs to happen is for other people to do a full code review (including testing) and find things themselves (maybe my list, maybe other things), and then for everyone to work on fixing them. If people want to address my list, or ignore it, that's fine. Really, it doesn't matter to me anymore.
Cameron
List of remaining things I think should be addressed before merging experimental head with main head on dev branch:
- The cookbook recipe for a simple_vr should get rid of all of the extraneous stuff and just show the simplest means of creating a VR, just like the simple slice and simple proj recipes do.
- clip_ratio could be changed to something more descriptive (stddev_mask?), and should default to something like 4.0, since it is used almost everywhere to produce useful images.
- When making a VR, it defaults to using the first field in the field_name list if a field is not specified, which almost never gives the desired result. Perhaps just default to ('gas', 'density') like the PW plots do? This default is implicitly assumed in examples (e.g., the camera_movement.py cookbook recipe).
- The map_to_colormap function is not in the API docs.
- map_to_colormap has a scale keyword, which could be more descriptive or have examples. What does it do? 30 gives good results, 10 doesn't, and the default of 1.0 renders a black screen. How does the user figure out what to use, even roughly? Change the default to something useful?
- There should be a default filename for saving images, and it should be something descriptive, like the defaults for PlotWindow filenames.
- Docstrings for all outward-facing functions and classes should include a description, parameters, returned quantities, and an example. Right now, many examples are so trivial as to not be helpful at all: "source = PointsSource(particle_positions)". Examples would be better served by a full VR with the appropriate method/class injected, probably 3-5 lines each; these could then be exercised by docstring tests down the road.
- off_axis_projection() has a resolution arg; perhaps make it a kwarg with a default of 512 or 256, just as PW and VR have?
- off_axis_projection has an "item" kwarg, which should really be "fields" and should be handled the same way PW does (recognizing lists of fields).
- Many VR classes have their docstrings on the __init__ method; it would be better to put them just after the class declaration so that the API docs render them properly.
- Rendering an off_axis_projection requires yt.write_image() instead of sc.render() like the VRs, even though it uses the same infrastructure. This is inconsistent; why? sc.render() *runs*, but even with a clip_ratio it just gives a black screen because all of the values are NaNs.
- Transfer functions operate in log space for fields that are logged. This should be done in the linear space of the fields; the current behavior can be very confusing.
- TransferFunctionHelper should be split out onto a separate page in the narrative docs; the Jupyter notebook in the middle of the docs breaks them up and makes them confusing.
- Put the VR annotations (annotate_grids()) in the narrative docs to show how to use them.
- rotate on the camera should specify what it is rotating; specifically, it rotates the "position" of the camera, not the direction of the camera.
- switch_view should be included in the narrative docs.
- Add examples for all the different lenses to the cookbook or narrative docs.
- Make one parallelization header, with subheaders, in the narrative docs.
- Note the different ways to write images in VR: (1) you specify an fname in the original yt.volume_render() call; (2) you take the resulting im object from yt.volume_render() and write it with im.write_png(fname); (3) you take the resulting sc object from yt.volume_render() and call sc.render(fname). I guess it is good to have these three methods for getting images to file, but it might be worthwhile to state explicitly that there are three methods, so people don't get confused about which is the main one (as I did, and had to figure it out by looking through different examples, source, etc.).
- The camera movement cookbook needs to be improved to make images that make sense.
- Link to the relevant subsections of the VR docs in the cookbook recipes, not just to VR as a whole.
- Add narrative docs on getting the source, the camera, the transfer function, etc.
- Possibly get rid of all getters on the scene object other than the one for sources.
- get_source is incorrect, since dicts are not sorted.
- Left-handed versus right-handed images: there was significant discussion of this in the VR refactor PR linked above. The conclusion a few of us (Suoqing, Michael, myself) reached was: the VR system measures the normal vector from the camera, but the PlotWindow interface has no "camera" per se and measures the normal vector from the focus. So each is right-handed from its own perspective; the normal vector is effectively reversed between the two systems. The fix would be to flip the direction of north_vector in the VR system so that it is right-handed from the perspective of the focus (left-handed from the perspective of the camera). Then similar inputs would give the same results from PlotWindow and VR (including off_axis_projection()).
- Using opaque sources (e.g., points, lines, grids) with render sources (e.g., gas VR) tends to show just one or the other out of the box. It takes a lot of work to balance the opacities of the different sources, and how to do this is not documented. We should provide better default opacities so that both points and gas are visible in the same rendering out of the box, and we should document easy ways to change the relative opacity of different sources to highlight one or the other in the rendering.
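To make the clip_ratio point above concrete: as I understand it, clip_ratio N clips the image at roughly the mean plus N standard deviations before the image is written out. A standalone sketch of that idea (illustrative only; the names and details here are mine, not yt's actual implementation):

```python
import statistics

def sigma_clip(pixels, clip_ratio=4.0):
    """Clip pixel values at mean + clip_ratio * stddev.

    Illustrative sketch only; yt's actual handling of clip_ratio
    lives in its image-writing code and may differ in detail.
    """
    mean = statistics.mean(pixels)
    std = statistics.pstdev(pixels)
    ceiling = mean + clip_ratio * std
    return [min(p, ceiling) for p in pixels]

# One very bright pixel would otherwise dominate the dynamic range;
# clipping it lets the faint structure survive the rescaling.
print(sigma_clip([1.0, 2.0, 1.5, 100.0], clip_ratio=1.0))
```

This is also why a default around 4.0 tends to work in practice: it keeps nearly all of the dynamic range while discarding the few hottest pixels.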
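On the handedness item above, a toy illustration (plain Python, not yt code; the vectors here are made up) of how the derived "east" axis, and hence the image handedness, flips when the same normal vector is measured from the opposite end:

```python
def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

north = (0.0, 1.0, 0.0)
normal = (0.0, 0.0, 1.0)      # measured from the camera toward the focus
flipped = (0.0, 0.0, -1.0)    # the same axis, measured from the focus

east_camera = cross(north, normal)   # x-axis: (1, 0, 0)
east_focus = cross(north, flipped)   # flips to (-1, 0, 0)

print(east_camera, east_focus)
```

Same north_vector, same geometry; only the side the normal is measured from changes, and the rendered image mirrors.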
--
Cameron Hummels
NSF Postdoctoral Fellow
Department of Astronomy
California Institute of Technology
http://chummels.org
_______________________________________________ yt-dev mailing list yt-dev@lists.spacepope.org http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
Hi Cameron,
Thanks for the summary -- I think we do need to address most of your concerns before this gets fully released into the wild, as we agreed a couple of months ago. In particular, all examples should make decent-looking images. I'm 50/50 on some of the API suggestions you have there, but see my next thought below.
Given that I'm the source of a lot of this pain, what would folks think about turning Cameron's list into Bitbucket issues marked as blockers for 3.3 and assigned to me? Then, if folks have time, they can take them off my plate. I can't commit to a ton of time, but it'd be a nice vacation once in a while to work on yt some more. That way each one could be discussed a bit (especially some of the API suggestions) before anyone spends time coding things that don't do the "right" thing going forward.
Cheers from the ether,
Sam
On Wed, Sep 2, 2015 at 5:47 PM B.W. Keller
I think a lot of these could potentially be fixed/finished in a big sprint (documentation stuff in particular). Perhaps we could split this into a list of difficult vs. semi-trivial, and at least kill off the easier ones before we do a big merge. I agree pretty much completely with your points, but VR is not a feature I rely on, so I don't really lose anything if this feature takes longer to get merged in.
Hi Cameron,

Thanks for writing this; it's carefully written and lays out the issues well. I just wanted to comment on this thought:
The argument for the merge has been: "once it is in the main branch, people will sort out the bugs."
I don't think this description really captures the spirit of the argument. As it is, it makes it sound like we are shoving the responsibility for finding and fixing bugs onto users. For me, the "dev" branch is just that: development. The very fact we call it that carries with it a "caveat emptor." This is not to say that we just pull things into it willy-nilly: you have correctly emphasized that we should not accept PRs without appropriate review, sufficient documentation, and making sure the test suite passes. Ideally, this also includes adding new tests for the new functionality. I have definitely learned better coding habits from your efforts in this regard.

But the dev branch is by definition going to be unstable (which is why the alternative branch is named "stable"), even after all of these considerations. By getting code into the dev branch, we get a chance for users to take a whack at it in ways that most of us won't, which will by its very nature do a much better job at catching bugs in corner cases than programmed tests, which is the ultimate "test" for determining how "stable" this code is and whether it merits that label.

This points to the fact that we really do need to change things so that the stable branch is updated much more often than it has been, so that we don't have to keep shuttling people onto dev whenever they need a recent bugfix, which is frankly risky at the very least. That way we can consistently say "feel free to use the dev branch, but we have a separate branch called 'stable' for a reason."

That said, I think this particular PR definitely falls into a grey area for me as to whether it is ready for merging into dev or needs more review. Good arguments can be (and have been) made for both points of view. It sounds like Matt wants to do a final push with Sam in the Bay Area soon (or so I thought I saw on Slack), so maybe we should wait for that and then do the merge.

Best,
John
On Wed, Sep 2, 2015 at 8:07 PM, John ZuHone
> But the dev branch is by definition going to be unstable (which is why the alternative branch is named "stable"), even after all of these considerations. By getting code into the dev branch, we get a chance for users to take a whack at it in ways that most of us won't, which will by its very nature do a much better job at catching bugs in corner cases than programmed tests, which is the ultimate "test" for determining how "stable" this code is and whether it merits that label.
>
> This points to the fact that we really do need to change things so that the stable branch is updated much more often than it has been, so that we don't have to keep shuttling people onto dev whenever they need a recent bugfix, which is frankly risky at the very least. That way we can consistently say "feel free to use the dev branch, but we have a separate branch called 'stable' for a reason."
And I'm making it a personal mission to make sure the pr_backport.py script is being used every week after the PR hangout to ensure that bugfixes make their way to stable. See:

https://bitbucket.org/yt_analysis/yt/pull-requests/1716/backporting-bugfixes...
https://bitbucket.org/yt_analysis/yt/pull-requests/1730/backporting-bugfixes...

My hope is to have one of those pull requests once a week, allowing us to issue stable releases once a month or so.

If we do merge this while it isn't 100% finished, users and developers who normally work off the dev branch will still be able to go back to the stable branch to do their volume renderings. That is definitely a bit annoying and not ideal, but this way the VR refactor gets a lot more hands-on testing, bugfixes, and feedback than if it lived on in the experimental bookmark until just before the 3.3 release. It also prevents a possible schism in the community for users of the ExodusII frontend and other unstructured mesh codes, since the version of yt they need is currently based on the VR refactor work.

-Nathan
Hi Cameron,

Thanks for doing this; you are not the "bad guy." I really appreciate that you took the time out to itemize these things. Each of these is important and necessary to do, and I like having them all be bitbucket issues.

From what I can tell, *nearly* all of these are *not* API breaking, is that right? If so, can we decide on what are blockers to the merging of the vr refactor into the yt head, and what are blockers on release? If it's "everything" as blocker to merging into the yt head, then OK, but I think there may be some leeway. The main reason I am suggesting we do this is because *with* the backport script now running regularly, we are in a very, very good position to move everyone running yt for production onto "stable" and then encourage active, possibly-flaky, development on the "yt" branch.

What do you think?

-Matt
On Wed, Sep 2, 2015 at 4:40 PM, Cameron Hummels
Hey everyone,
Short Version of this email:
Since it seems like I'm in the minority in trying to flush out some of the remaining problems with the VR infrastructure before it gets merged into the main head of the dev branch, I will not be an impediment anymore. I've provided a list of things I think should be addressed (or at least discussed) at the bottom of this email before merging, but as I said, I will no longer block the merge if people want to do it.
Long Version:
A few years ago a few of us decided that the volume rendering infrastructure was too clunky and difficult to use efficiently to make good volume renders, so we detailed how it should be improved in a YTEP ( http://ytep.readthedocs.org/en/latest/YTEPs/YTEP-0010.html). Subsequently, Sam Skillman put in a huge amount of work (with help from Matt, me, Andrew, Suoqing, and more) overhauling the volume rendering infrastructure to match the description of the YTEP. The new VR interface appears to work well in many cases, but I believe there are still a lot of remaining issues to be addressed before it's ready for mainstream: bugfixes, docs, recipes, etc.
About 6 months ago, there was a pull request ( https://bitbucket.org/yt_analysis/yt/pull-requests/1256/volume-rendering-ref...) made to merge all of the new volume rendering infrastructure into the development branch, but at that point, the PR was enormous and difficult to review. While there were still a lot of unresolved issues associated with the PR, it was decided to merge it into the "experimental" bookmark on the dev branch, so as to not hold up other development dependent on that infrastructure, and to make it easier to test it out for everyone.
While I was reviewing this pull request, I tried my best to read through all of the code diffs, make sure there was appropriate documentation (docstrings, narrative, comments, etc) for such a significant change, and actually download and test run the example code to see if it did what it was supposed to do. This was a ton of work, but I think that this level of review needs to occur with major PRs to assure things are working before merging and exposing problematic code to the user base. If we put code out there and don't properly explain how to use it (or if it doesn't work), we're ultimately going to make a lot more work for ourselves trying to respond to people's questions on the mailing list about how to use it. And we'll potentially sour a lot of people to the code when they try to use something but find it doesn't do what they want it to do.
Now there is increased pressure to just merge the VR refactor into the main dev branch so as to streamline overall development, and it seems I am the only one who has been resistant to this. The argument for the merge has been: "once it is in the main branch, people will sort out the bugs." Once something is in the main branch, it may very well get more people trying to use it; however, the level of confusion/irritation will increase if there is insufficient documentation, and the ability to change any kind of API is severely limited because doing so breaks people's scripts. The time for reviewing and fixing code comes before it gets merged into the main dev branch.
But ultimately, I do not want to be seen as the bad guy anymore on this. If people want to merge the code, then I don't want to hold it up. I made a list of items when I reviewed the PR a few months back, and below are the things I think are remaining to be done. But what I think really needs to be done is to have other people do a full code review (including testing) and find things themselves (maybe my list, maybe other things). And then everyone works to fix them. If people want to address my list, or they want to ignore them, that's fine. Really, it doesn't matter to me anymore.
Cameron
List of remaining things I think should be addressed before merging experimental head with main head on dev branch:
- The cookbook recipe for a simple_vr should get rid of all of the extraneous stuff and just have the most simple means of creating a VR, just like the simple slice and simple proj recipes do.
- clip_ratio could be changed to something more descriptive (stddev_mask?), and should default to something like 4.0, since it is used almost everywhere to produce useful images.
- When making a VR, it defaults to using the first field in the field_name list if a field is not specified, which almost never gives the desired result. Perhaps just default to ('gas', 'density') like the PW plots do? This default is implicitly assumed in examples (e.g. the camera_movement.py cookbook recipe).
- The map_to_colormap function is not in the API docs.
- map_to_colormap has a scale keyword, which could be more descriptive or have examples. What does it do? 30 gives good results, but 10 doesn't, and the default of 1.0 renders a black screen. How does the user figure out what to use, even a ballpark value? Should the defaults be changed to something useful?
- There should be a default filename for saving images, something descriptive like the defaults PlotWindow filenames use.
- Docstrings for all outward-facing functions and classes, including a description, parameters, returned quantities, and an example. Right now, many examples are so trivial as to not be helpful at all: "source = PointsSource(particle_positions)". Examples would be better served as a full VR with the appropriate method/class injected, probably 3-5 lines each; these could then be covered by docstring tests down the road too.
- off_axis_projection() has a resolution arg; perhaps make it a resolution kwarg with a default of 512? 256? Just as PW and VR have.
- off_axis_projection has an "item" kwarg, which should really be "fields" and should be handled the same way PW does (recognizing lists of fields).
- The __init__ method on many VR classes carries the docstrings, but it would be better to place them just after the class declaration so that the API docs render them properly.
- Rendering an off_axis_projection requires yt.write_image() instead of sc.render() like the VRs, even though it uses the same infrastructure. Inconsistent; why? sc.render() *runs*, but even with a clip_ratio it just gives a black screen because all of the values are NaNs.
- Transfer functions operate in log space for fields that are logged. This should be done in the linear space of the fields; it can be very confusing.
- TransferFunctionHelper should be split out into a different page in the narrative docs, as the jupyter notebook in the middle of the docs breaks them up and makes them confusing.
- Put the VR annotations (annotate_grids()) in the narrative docs to show how to do them.
- rotate on the camera should specify what it is rotating -- specifically, it rotates the "position" of the camera, not the direction of the camera.
- switch_view should be included in the narrative docs.
- Add examples for all the different lenses to the cookbook or narrative docs.
- Make one parallelization header, with subheaders, in the narrative docs.
- Note the differences in the ways to write images in VR: (1) you specify an fname in the yt.volume_render() function originally; (2) you get the resulting im object from yt.volume_render() and then write it using im.write_png(fname); (3) you take the resulting sc object from yt.volume_render() and then call sc.render(fname). I guess it is good to have these three methods for getting the images to file, but it might be worthwhile to explicitly state that there are three, so people don't get confused about which is the main method (as I did, and just had to figure it out by looking through different examples, source, etc.).
- The camera movement cookbook needs to be improved to make images that make sense.
- Link to the relevant subsections of the VR docs in the cookbook recipes, not just to VR as a whole.
- Add narrative docs on how to get the source, camera, transfer function, etc.
- Possibly get rid of all getters other than for source on the scene object.
- get_source is incorrect, since dicts are not sorted.
- Left-handed versus right-handed images: there was significant discussion regarding this in the VR refactor PR linked above. The conclusion that a few of us reached (Suoqing, Michael, myself) was: the VR system measures the normal vector from the camera, but in the PlotWindow interface there is no "camera" per se, and the normal vector is measured from the focus. So they're both in a RHS from their own perspective, but the normal vector is getting reversed between the two systems. The fix would be to flip the direction of north_vector in the VR system so it's right-handed from the perspective of the focus, but left-handed from the perspective of the camera. Then similar inputs give the same results from PlotWindow and VR (including off_axis_projection()).
- Using opaque sources (e.g. points, lines, and grids) with render sources (e.g. gas VR) tends to result in seeing just one or the other out of the box. It takes a lot of work to balance the opacity of the different sources, and it is not documented how to do this. We should potentially provide better default opacities so that both points and gas are visible in the same rendering out of the box, and we should document easy ways to change the relative opacity of different sources to highlight one or the other.
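As context for the clip_ratio item above: my understanding (an assumption on my part, not a statement about yt's actual implementation) is that clip_ratio caps image brightness at roughly mean + clip_ratio * stddev before normalization, which is why values around 4.0 tend to produce useful images while the absence of clipping lets one bright pixel wash everything else out. A rough plain-Python sketch of that idea:

```python
import statistics

def sigma_clip(values, clip_ratio=4.0):
    """Cap values at mean + clip_ratio * stddev.

    Illustration only -- NOT yt's actual implementation; it just shows
    why a stddev-based ceiling keeps a few bright outliers from
    dominating the brightness scale of the final image.
    """
    mean = statistics.mean(values)
    std = statistics.pstdev(values)
    ceiling = mean + clip_ratio * std
    return [min(v, ceiling) for v in values]

# One very bright pixel among many dim ones:
pixels = [1.0] * 99 + [1000.0]
clipped = sigma_clip(pixels, clip_ratio=4.0)

# After clipping, the outlier no longer sets the brightness scale.
print(max(pixels), max(clipped))
```

This also makes the "stddev_mask" naming suggestion above more concrete: the parameter is a multiple of the standard deviation, not a ratio of anything obvious.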
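On the left-handed versus right-handed item: the heart of the issue can be seen with a cross product. Flipping the sign of the north vector reverses the implied normal, which flips the handedness of the (east, north, normal) triad; this is the kind of flip the proposed north_vector fix describes. A small sketch (pure illustration, not yt code):

```python
def cross(a, b):
    """Cross product of two 3-vectors, given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

east = (1.0, 0.0, 0.0)
north = (0.0, 1.0, 0.0)

# Right-handed: east x north points along +z (out of the screen,
# i.e. from the focus toward the camera).
print(cross(east, north))

# Flipping north reverses the normal, so the same triad is now
# left-handed from the original perspective.
flipped_north = tuple(-c for c in north)
print(cross(east, flipped_north))
```

This is why the two systems can each look right-handed "from their own perspective" while disagreeing: measuring the normal from the camera versus from the focus amounts to exactly this sign flip.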
-- Cameron Hummels NSF Postdoctoral Fellow Department of Astronomy California Institute of Technology http://chummels.org
_______________________________________________ yt-dev mailing list yt-dev@lists.spacepope.org http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
participants (6)
- B.W. Keller
- Cameron Hummels
- John ZuHone
- Matthew Turk
- Nathan Goldbaum
- Sam Skillman