
Hi all (especially Sam),

What's the current status of inline Enzo volume rendering? Sam, you had mentioned to me that with the new kD-tree decomposition this should be feasible. If we opt not to move data around, which is my preference, is it still possible to partition every grid that belongs to a processor and then do the appropriate number of intermediate composition steps for the image? I recall Sam saying this may require log_2(Nproc) composition steps, which may in fact be acceptable.

Thanks,
Matt

PS Stephen, Britton, and I have been chatting off-list about inline HOP, but once we come to a consensus we'll float it back onto the list.
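A minimal sketch of the compositing scheme in question, assuming mpi4py and one premultiplied-alpha RGBA image per rank; the function names here are illustrative, not yt API. Pairs of ranks exchange and blend partial images in a binary tree, which is where the log_2(Nproc) rounds come from:

import numpy as np
from mpi4py import MPI

def over_blend(front, back):
    # "Over" operator for premultiplied-alpha RGBA images.
    alpha = front[..., 3:4]
    return front + (1.0 - alpha) * back

def tree_composite(comm, local_image):
    # Binary-tree reduction: roughly log2(size) exchange rounds.
    rank, size = comm.Get_rank(), comm.Get_size()
    image = np.ascontiguousarray(local_image)
    step = 1
    while step < size:
        if rank % (2 * step) == 0:
            partner = rank + step
            if partner < size:
                other = np.empty_like(image)
                comm.Recv(other, source=partner)
                # Assumes ranks are ordered front-to-back along the view
                # axis, so the partner's image sits behind this rank's.
                image = over_blend(image, other)
        else:
            comm.Send(image, dest=rank - step)
            return None  # this rank's image has been handed off
        step *= 2
    return image  # only rank 0 finishes with the full composite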

Hi Matt,

First things first: I've never even tried to do a volume rendering inline. If we don't want to move data around, it should be straightforward for simulations without load balancing turned on, because the Enzo domain decomposition mimics the kd-tree breadth-first decomposition (with a few adjustments). If load balancing is turned on, I really have no clue how one would do this without some major additions.

If we are okay with moving data around, then there are more options; we would just have to put an initial data-distribution function before the rendering begins. We could even add some better memory management so that chunks of data are sent as needed instead of having to load everything into memory at one time.

Alternatively, if we don't care about back-to-front ray casting (in some cases you can't tell much of a difference), then the problem gets very simple. We may want to try this out on some post-processing renders and get a feel for how much it matters.

Anyway, the current status is: if we want it to work for all cases, it's going to take quite a bit more work; if we want it to work in some of the cases, it shouldn't be too much more work.

Sam
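For illustration, a sketch of the breadth-first kd-tree split Sam refers to, under the assumption that the domain is halved along its longest axis until there is one leaf subvolume per processor; the names are hypothetical, not yt's kd-tree code:

def kd_decompose(left_edge, right_edge, nprocs):
    """Return nprocs (left_edge, right_edge) subvolumes, breadth-first."""
    leaves = [(list(left_edge), list(right_edge))]
    while len(leaves) < nprocs:
        le, re = leaves.pop(0)  # breadth-first: split the oldest leaf
        axis = max(range(3), key=lambda i: re[i] - le[i])
        mid = 0.5 * (le[axis] + re[axis])
        le_r = list(le); le_r[axis] = mid  # right child starts at midpoint
        re_l = list(re); re_l[axis] = mid  # left child ends at midpoint
        leaves.append((le, re_l))
        leaves.append((le_r, re))
    return leaves

# For eight processors on the unit cube this yields 2 x 2 x 2 octants,
# matching Enzo's default unigrid domain decomposition.
subvolumes = kd_decompose([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 8)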

Hi Sam,

On Wed, Dec 8, 2010 at 3:19 PM, Sam Skillman <samskillman@gmail.com> wrote:
> First things first: I've never even tried to do a volume rendering inline. If we don't want to move data around, it should be straightforward for simulations without load balancing turned on, because the Enzo domain decomposition mimics the kd-tree breadth-first decomposition (with a few adjustments). If load balancing is turned on, I really have no clue how one would do this without some major additions.
Ah, I see the issue here. I don't think we can (or should) assume load balancing is off. The particular use case I had in mind initially was unigrid, but the more interesting case is AMR. Ideally this would work for both RefineRegion runs and AMR-everywhere, but it seems that in general it cannot.
> If we are okay with moving data around, then there are more options; we would just have to put an initial data-distribution function before the rendering begins. We could even add some better memory management so that chunks of data are sent as needed instead of having to load everything into memory at one time.
Well, let's back up for a moment. The initial implementation of inline analysis as I wrote it is what I have referred to as "in situ" analysis: the simulation grinds to a halt while analysis is conducted. For this reason, you can see why I'm a bit hesitant to do any load balancing of data. The alternative is something we could call "co-visualization," where the data is handed off and the simulation then continues; this is attractive for a number of reasons. It is not yet implemented in yt, but it is the next phase. I've created a very simple initial implementation of it that works with 1:1 processors, but it also does no load balancing.

The recent focus on inline analysis has two motivations. The first is that we are currently benchmarking and identifying hot spots in the *existing* inline analysis. But we also need to think ahead to the next two iterations: the next will add coviz capabilities, and the one after that will be a hybrid of the two, in which in situ visualization becomes a byproduct of rethinking the simulation mainloop.

So for the current generation, I don't think we can assume it's okay to move data around. Eventually it will be. This might just mean we can't use the fanciest volume rendering in situ and need to move to coviz for that.
> Alternatively, if we don't care about back-to-front ray casting (in some cases you can't tell much of a difference), then the problem gets very simple. We may want to try this out on some post-processing renders and get a feel for how much it matters.
For the ProjectionTransferFunction this manifestly is not an issue -- but that, of course, is not the fanciest of the renderings. It may be interesting to have it as a switch, say "unordered = True" on the Camera, that lets the grids come in any order. What do you think? For the Gaussian-style TFs we may get similar or identical results, but for the Planck TF it would probably be gross and wrong.
> Anyway, the current status is: if we want it to work for all cases, it's going to take quite a bit more work; if we want it to work in some of the cases, it shouldn't be too much more work.
I think "some of the cases" is perfectly fine. This also speaks to the idea that we should construct a more general load balancing framework for spatially-oriented data in yt, but that's definitely not going to be a near-term goal. Thanks for your thoughts, Sam. I think the summary is: * With a small bit of work, it will work for non-EnzoLoadBalanced simulations * With unordered ray casting, it should work roughly as is with some minor additions * Anything else will require coviz capabilities Does that sound fair? -Matt

Hi Matt,

I think you hit the nail right on the head! I was thinking along the same lines as the coviz you mentioned. I agree that it will be a way, if not the only way, to make this a general solution that works in all cases.

I think unordered=True is a great idea for the Camera. We could even have it do the best it can by ordering all of the data each processor owns (in an inline load-balanced simulation), then ordering the composition based on the root-grid locations that each processor owns. Then the only grids rendered out of order are the load-balanced ones. There's a lot of room in this option.

Anyway, your summary is spot on.

Sam

Hi Sam,

On Wed, Dec 8, 2010 at 4:00 PM, Sam Skillman <samskillman@gmail.com> wrote:
> I think you hit the nail right on the head! I was thinking along the same lines as the coviz you mentioned. I agree that it will be a way, if not the only way, to make this a general solution that works in all cases.
Excellent. I agree, this is something to keep in mind. The first order of business is dealing with the inline viz we have now, and then moving on to this. If we keep this simmering, it's something we'll be able to return to once we have a good lay of the land. (Perhaps this can be a topic of discussion at the next Enzo workshop -- although I would prefer we keep the API general enough to allow coviz with arbitrary sources of data.)
> I think unordered=True is a great idea for the Camera. We could even have it do the best it can by ordering all of the data each processor owns (in an inline load-balanced simulation), then ordering the composition based on the root-grid locations that each processor owns. Then the only grids rendered out of order are the load-balanced ones. There's a lot of room in this option.
Ah, awesome, I'm totally in agreement. I'm not sure I'd be able to handle making this modification myself, because it seems it may touch the kD-tree in a lot of different spots. Do you think you might have time to try your hand at it sometime in the next couple of weeks or months?

-Matt

Hi Matt,
> Ah, awesome, I'm totally in agreement. I'm not sure I'd be able to handle making this modification myself, because it seems it may touch the kD-tree in a lot of different spots. Do you think you might have time to try your hand at it sometime in the next couple of weeks or months?
Uh, not in the next couple of weeks. After the new year I should have a bit more time, and depending on how straightforward this ends up being, I'd say the couple-of-months timeline is about right.

I guess with this setup I'd want to restrict myself to each processor "owning" the grids associated with a single .cpu#### file (in Enzo-speak). This should mimic the grid ownership of the simulation while it is running. Then I should be able to just build a compositing tree and a local tree, where the compositing tree knows about the root-grid layouts and the local tree is built from only the grids in the .cpu file.

Sorry if I'm just thinking out loud here, but I'd better write this down now so that I know what I was thinking when I come back to it. In any case, I think I see the way forward.

Sam
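A sketch of the cross-processor ordering Sam outlines, assuming each rank's subvolume is summarized by the center of the root grids it owns; composite_order is a hypothetical helper, not part of yt's kd-tree module:

import numpy as np

def composite_order(centers, view_dir):
    """Back-to-front rank ordering from root-grid subvolume centers.

    centers  : (Nproc, 3) array, center of each processor's subvolume
    view_dir : unit vector pointing from the camera into the scene
    """
    depth = np.dot(centers, view_dir)  # distance along the view axis
    return np.argsort(depth)[::-1]     # farthest first (back-to-front)

# Two processors split along x, camera looking down +x: composite
# rank 1 (far) before rank 0 (near).
centers = np.array([[0.25, 0.5, 0.5],
                    [0.75, 0.5, 0.5]])
print(composite_order(centers, np.array([1.0, 0.0, 0.0])))  # [1 0]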