"faking" away variable-by-level refinement
Hi all,

I've been out of the yt loop for a while, but am now getting back into the game. In particular, I work with the BoxLib-based codes, Maestro and Castro. I'm glad Matt has taken the time to consolidate the BoxLib frontends - this is something I've wanted to do for some time!

One issue that has come up is that (at least) Castro supports jumps in refinement that can vary between levels. In other words, Level 1 might be twice as refined as Level 0, but Level 2 might be four times as refined as Level 1.

This causes problems in yt, as there is only a single global refine_by parameter, which assumes all levels are refined by the same ratio. This probably isn't a major concern, as I think I'm the only one so far who has used this variable-by-level refinement with Castro, and it may not be something useful in the future.

So, for the time being, I want to move forward with testing out the consolidated BoxLib frontend on a real dataset. I could, of course, write a script offline that would take my old datafile, create intermediate-level grids, and populate the data by averaging or interpolating the real data so that all levels have the same refinement ratio, and just "fake" the data for the levels that I have added.

Another option would be to do this within yt when it loads a plotfile, if it notices the refinement ratio isn't the same at all levels. I'm not sure exactly if/how this could/should be done. One thing that pops to mind is to use a child_mask that is set to all zeros.

For instance, if the data file contains

Level 0
Level 1 (refined by 2 w.r.t. Level 0)
Level 2 (refined by 4 w.r.t. Level 1)

then when the data is actually needed to make a plot or slice or what have you, yt could do something like this:

Level 0
Level 1 (refined by 2 w.r.t. Level 0)
Level 2 (refined by 2 w.r.t. Level 1; same bounding grid as Level 2 from the data file; child_mask = 0)
Level 3 (refined by 2 w.r.t. Level 2; same data/grid as Level 2 from the data file)

I'm scouring the code for "child_mask" to get a feel for when it is actually used in generating plots/data containers, but I thought I'd ask some basic questions first: When the mask is set, do all operations ignore coarse data? Would having an entire level that is masked be problematic?

Chris
Hi Chris.
I have also used the by-level refinement ratio option in Castro and Nyx,
but I doubt we will need it in the future. I think if we ever use a
refinement factor of 4, all levels will be that way (and likely just one
level). Your idea of interpolating the in between level sounds reasonable
and easiest.
_______________________________________________ yt-dev mailing list yt-dev@lists.spacepope.org http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
Hi Chris,
Thanks for writing!
On Thu, May 23, 2013 at 1:37 PM, Chris Malone
So, for the time being, I want to move forward with testing out the consolidated BoxLib frontend on a real dataset. I could, of course, write a script offline that would take my old datafile, create intermediate-level grids and populate the data by averaging or interpolating the real data so that all levels have the same refinement ratio, but I've just "faked" the data for the levels that I have added.
Ah, wow! And have the tests worked so far?
Another option would be to do this within yt when it loads a plotfile, if it notices the refinement ratio isn't the same at all levels. I'm not sure exactly if/how this could/should be done. One thing that pops to mind is to use a child_mask that is set to all zeros.
For instance, if the data file contains

Level 0
Level 1 (refined by 2 w.r.t. Level 0)
Level 2 (refined by 4 w.r.t. Level 1)

then when the data is actually needed to make a plot or slice or what have you, yt could do something like this:

Level 0
Level 1 (refined by 2 w.r.t. Level 0)
Level 2 (refined by 2 w.r.t. Level 1; same bounding grid as Level 2 from the data file; child_mask = 0)
Level 3 (refined by 2 w.r.t. Level 2; same data/grid as Level 2 from the data file)
I'm scouring the code for "child_mask" to get a feel for when it is actually used in generating plots/data containers, but I thought I'd ask some basic questions first:
I have thought about this, and I believe it's quite reasonable. For what it's worth, we do something slightly similar with the RAMSES frontend, in that we have a min_level that we assume it refines to. (Not completely the same thing, but similar idea.) There is a mechanism for creating a fake grid, which is to set up a covering grid, but that assumes the existence of a hierarchy already. But it might also be possible to fake the grid itself during the creation of the hierarchy (setting a flag on it or something) and doubling up the data.
When the mask is set, do all operations ignore coarse data?
Almost entirely, yes.
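Roughly, you can picture the mask like this (a toy illustration of the assumed semantics, not actual yt internals):

```python
import numpy as np

# A coarse grid's field values and its child_mask: cells where the
# mask is False are covered by finer data and get skipped.
data = np.arange(16.0).reshape(4, 4)
child_mask = np.ones((4, 4), dtype=bool)
child_mask[1:3, 1:3] = False       # these cells have finer children

visible = data[child_mask]         # what operations on this grid see
print(visible.size)                # 12 of 16 cells survive
```

An entirely masked grid would contribute zero cells by the same logic.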
Would having an entire level that is masked be problematic?
I don't think it would be, no. This is a really cool idea! I could probably help out a bit if you get stuck, but this sounds very promising. It would be absolutely amazing to get a unified BoxLib frontend, too. And if this doesn't pan out, it may be possible that we can just change how refine_by works.
Casey -

Ok good, I didn't know there was anyone else who had used that feature. One issue with a script to alter the dataset by adding levels is that I think BoxLib likes --- but perhaps doesn't _enforce_ --- there to be some padding between levels, so that a coarse level isn't entirely covered by fine data. Again, this could be remedied by interpolation, but I don't really care about the data on this intermediate level. The approach within yt would be nice because it could simply ignore the data, possibly.

Matt -
So, for the time being, I want to move forward with testing out the consolidated BoxLib frontend on a real dataset. I could, of course, write a script offline that would take my old datafile, create intermediate-level grids and populate the data by averaging or interpolating the real data so that all levels have the same refinement ratio, but I've just "faked" the data for the levels that I have added.
Ah, wow! And have the tests worked so far?
This was something that I could do, but have not yet. I apologize if that was misleading!
I have thought about this, and I believe it's quite reasonable. For what it's worth, we do something slightly similar with the RAMSES frontend, in that we have a min_level that we assume it refines to. (Not completely the same thing, but similar idea.) There is a mechanism for creating a fake grid, which is to set up a covering grid, but that assumes the existence of a hierarchy already. But it might also be possible to fake the grid itself during the creation of the hierarchy (setting a flag on it or something) and doubling up the data.
Ok, so there is some fake grid precedent. I'll look at the RAMSES code to get a feel for what is happening there. If we don't actually care about the data on the fake grid, does it even need to be doubled? It surely doesn't need to be a deep copy, but does data actually need to exist within the hierarchy at a specific level, or would that cause problems?

Chris
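In NumPy terms, what I have in mind for the fake level is a shared reference rather than a copy - something like this (illustrative only):

```python
import numpy as np

real_level_data = np.zeros((8, 8))
fake_level_data = real_level_data   # shared reference: no extra memory
deep_copy = real_level_data.copy()  # actual duplication

real_level_data[0, 0] = 1.0
print(fake_level_data[0, 0])        # 1.0 -- the fake level sees the change
print(deep_copy[0, 0])              # 0.0 -- independent storage
```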
On Thu, May 23, 2013 at 3:10 PM, Chris Malone
I have thought about this, and I believe it's quite reasonable. For what it's worth, we do something slightly similar with the RAMSES frontend, in that we have a min_level that we assume it refines to. (Not completely the same thing, but similar idea.) There is a mechanism for creating a fake grid, which is to set up a covering grid, but that assumes the existence of a hierarchy already. But it might also be possible to fake the grid itself during the creation of the hierarchy (setting a flag on it or something) and doubling up the data.
Ok, so there is some fake grid precedent. I'll look at the RAMSES code to get a feel for what is happening there. If we don't actually care about the data on the fake grid, does it even need to be doubled? It surely doesn't need to be a deep copy, but does data actually need to exist within the hierarchy at a specific level, or would that cause problems?
The RAMSES code in question unfortunately may not be completely relevant, since it also assumes we have a base covering of the entire domain at some min_level. If we don't care about the data, I don't think it will matter -- but if it does, we can address that, and it should be an obvious failure.
participants (3)

- Casey W. Stark
- Chris Malone
- Matthew Turk