Hi everyone,
I would like to better understand how volume rendering works. It is explained here
http://yt.spacepope.org/doc/visualizing/volume_rendering.html
that the user defines transfer functions in terms of RGB values.
From the description of the add_gaussian function, I understand that these RGB values describe the color value in the interval [0,1]. Now, in the radiative transfer equation on the above website, the emissivity gets multiplied by the path length delta s. I am wondering how this works: depending on how big the step size is, one could get extremely large or extremely small intensities that are essentially unrelated to the RGB values that were previously specified. How is it possible that, for example, the color of a density isosurface depends only on the density and not on the cell size? I guess I am missing something.
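In numbers (a toy sketch of my worry, not actual yt code): the contribution of a single sample is j * delta_s, so two cells with the same field value but sizes differing by 10 refinement levels would seem to contribute very different amounts.

```python
# Toy numbers, not yt code: per-sample contribution j * delta_s for two
# cells with identical field values but sizes differing by 10 levels.

j = 0.5                      # emissivity from the transfer function, in [0, 1]
ds_coarse = 1.0              # path length through a level-0 cell
ds_fine = ds_coarse / 2**10  # path length through a level-10 cell

contrib_coarse = j * ds_coarse
contrib_fine = j * ds_fine

print(contrib_coarse / contrib_fine)  # 1024.0 -- three orders of magnitude
```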
Cheers, Maike
Hi Maike,
(I think this is your first post -- welcome to yt-users!)
On Tue, Apr 5, 2011 at 11:34 AM, Maike Schmidt maikeschmidt2@gmx.de wrote:
The short answer, I think, is that you are right, you can get very large intensities in large cells that are unrelated to what came before. But, for the most part, this is not an issue because of how the weighting and color assignments are typically done.
[begin long, rambly answer]
There are a couple things in what you ask -- the first is that there are two primary methods for volume rendering. The first is to tell a story using somewhat abstract visuals; this is typically what people think of when they think of volume rendering, and it is supported by (and possibly the primary application of) yt. This would be what's used when the ColorTransferFunction is used. The other is designed to perform a meaningful line integral through the calculation; this is what's done with the ProjectionTransferFunction and the PlanckTransferFunction. In all cases, the RT equation *is* integrated, but what varies between the two mechanisms is where the emission and absorption values come from.
In all cases, while the code may call things RGB, they need not be RGB explicitly. In fact, for the ProjectionTransferFunction, they are not. For the ProjectionTransferFunction, the code simply integrates with the emission value being set to the fluid value in a cell and the absorption value set to exactly zero. This results in the integration:
dI/ds = v_local
Where v_local is the (interpolated) fluid value at every point the integration samples, which defaults to 5 subsamples within a given cell. So the final intensity is equal to the sum of all (interpolated) values along a ray times the (local-to-a-cell) path length between samples. For the PlanckTransferFunction, the local emission is set to some approximation of a black-body, weighted with Density for emission, and the absorption is set to an approximation of scattering, which we then assign to RGB. The PTF also utilizes a 'weighting' field inside the volume renderer, which I discuss briefly below, to allow it to utilize multiple variables at a given location to calculate the emission and absorption. (i.e., Temperature governs the color, density governs the strength of the emission -- sliding along x and scaling along y, in the plot of the transfer function.)
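As a rough sketch of that ProjectionTransferFunction-style integration (this is not yt's actual implementation, and it uses a constant value per cell rather than true interpolation):

```python
# Toy sketch of the integration described above: emission equals the local
# field value, absorption is zero, and each cell is subsampled n_sub times.

def project_ray(cell_values, cell_widths, n_sub=5):
    """Accumulate dI/ds = v_local along one ray through a row of cells."""
    I = 0.0
    for v, dx in zip(cell_values, cell_widths):
        ds = dx / n_sub          # local-to-the-cell path length per sample
        for _ in range(n_sub):   # constant value per cell, for simplicity
            I += v * ds
    return I

# A uniform field of value 2.0 along a path of total length 3.0:
print(project_ray([2.0, 2.0, 2.0], [1.0, 1.0, 1.0]))  # ~6.0 = value * length
```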
When integrating a ColorTransferFunction, the situation is somewhat different. I've spent a bit of time reviewing the code, and I think I can provide a definite answer to your question. For reference, the code that this calls upon is defined in two source files:
yt/visualization/volume_rendering/transfer_functions.py
yt/utilities/_amr_utils/VolumeIntegrator.pyx
Specifically, in the class ColorTransferFunction and in the FIT_get_value and TransferFunctionProxy.eval_transfer functions.
The ColorTransferFunction, which is designed for visualizing abstract isocontours, rather than computing an actual line integral that is then examined or modified, sets a weighting *table*. (For the PlanckTransferFunction, a weight_field_id is set; this means to multiply the value from a table against a value obtained from another field. This is how density-weighted emission is done.) The weight_table_id for CTF is set to the table for the alpha emission. Functionally, this means that we accentuate the peaks and spikes in the color transfer function, because alpha is typically set quite high at the gaussians included.
So in essence, with a color transfer function we accentuate only the regions where we have isocontours. I think it's easiest to speak about this in terms of a visualization of Density isocontours. If you place contours in the outer regions and your emission value is too high, they will indeed completely obscure the inner regions. I have experimented with this and have found that it is extremely easy to create a completely visually confusing image that contains only the outer contours and wispy hints of the inner contours.
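A toy sketch of the weighting idea (not yt's actual tables): multiplying the color channels by the alpha table suppresses off-peak emission quadratically rather than linearly, which is what accentuates the peaks.

```python
import numpy as np

# Toy sketch: a color channel and an alpha channel, each a gaussian in the
# field value, and the effective emission as their product.

def gaussian(x, center, width, height):
    return height * np.exp(-((x - center) / width) ** 2)

x = np.linspace(0.0, 1.0, 101)
red = gaussian(x, center=0.5, width=0.05, height=0.8)    # color table
alpha = gaussian(x, center=0.5, width=0.05, height=1.0)  # alpha table

weighted = red * alpha  # effectively what enters the integration

# One gaussian-width off the peak, the raw color has fallen to exp(-1) of
# its maximum, but the alpha-weighted emission has fallen to exp(-2):
print(red[55] / red[50], weighted[55] / weighted[50])
```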
However, even if you do have outer isocontours, if you set the emission and color values lower, you can indeed provide glimpses into the inner regions. The inner regions are likely generating *higher* emission values (this is certainly how it is done in yt, with the add_layers method on ColorTransferFunction.)
Anyway, I hope that helps clear things up a little bit -- but please feel free to write back with any further questions about this or anything else.
Best,
Matt
_______________________________________________
yt-users mailing list
yt-users@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
Hi Matt,
many thanks for this really nice explanation! But I still don't understand how the color transfer function works. I thought that, if you add a Gaussian with peak (r,g,b), then the intensity arriving at the camera from cells that contain the center of the Gaussian is exactly (r,g,b), if I neglect absorption. Otherwise I don't understand how one can relate the color of the Gaussian to the color at the camera.
Now, say I want to visualize an isosurface of density on an AMR grid with 10 refinement levels, then the actual intensity contribution j * delta s will differ by 3 orders of magnitude for cells with the same density but different cell size. How do you make all these cells emit the same color? This is what I don't understand.
I guess it boils down to the question of how exactly you calculate your j in the radiative transfer equation when you have a color transfer function, say a single Gaussian with peak (r,g,b).
Many thanks, Maike
Hi Maike,
I think there is some confusion here about how our transfer function works. For the ColorTransferFunction, we are really specifying r,g,b emissivities. For example, let's say that we have chosen to specify that a density value of 5 corresponds to rgb = [0.5,0.5,0.5], whatever that color may be. Now, that does not mean that when the rays pass through an area with a density of 5 that the rgb images will be equal to 0.5,0.5,0.5. Rather, we will add j*ds to the image plane. If the entire domain had a density of 5, then the resulting image would be (some factor)*[0.5,0.5,0.5] where the factor has to do with the integration length. If instead the density=5 region was just in part of the volume, then it would add up the contribution to the final image, and it would be blended along the line of sight with any other emission from the rest of the volume.
If you take your example of an AMR sim with 10 levels, then with the implementation we have written, while the higher levels have much smaller ds's, there are many more of them, which yields roughly the same integration as if it had been done using only the lower resolution data. It is a lot like an adaptive integration scheme in that sense.
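A toy sketch of that point (not yt code): the same physical path with the same emissivity, integrated with coarse steps or with steps refined by up to 10 levels, gives the same total.

```python
# Same path, same emissivity j, sampled at several refinement levels:
# each fine cell contributes less, but there are proportionally more of
# them, so the accumulated j * ds is unchanged.

j = 0.5
path_length = 1.0

totals = []
for level in (0, 5, 10):
    n = 2**level              # cells crossed along the line of sight
    ds = path_length / n      # per-cell path length
    totals.append(sum(j * ds for _ in range(n)))

print(totals)  # [0.5, 0.5, 0.5] -- identical at every level
```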
Does this help? Please let us know.
Best, Sam
On Wed, Apr 6, 2011 at 4:27 AM, Maike Schmidt maikeschmidt2@gmx.de wrote:
Hi Sam,
I still don't understand this. The small ds's will only yield the same integration as the large ds's if the physical width of the isosurface is the same on all refinement levels. If the width is, say, 10 cells on each refinement level, then the smallest ds's will only emit 1/1000 of the contribution of the largest ds's. In fact, if the emission is directly proportional to the RGB value, then even a factor of 100 will make the color so dark that the contribution to the final image is invisible. In this case you would see the isosurface only on the largest scales.
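In numbers (a toy sketch, not yt code): a shell that is always the same number of cells wide, regardless of refinement level, contributes an integrated j * ds that shrinks with the cell size.

```python
# Toy numbers for the scenario above: an isosurface shell resolved by
# 10 cells at every level contributes less at finer levels, because the
# physical path through it shrinks with the cell size.

j = 0.5
cells_across = 10

coarse = cells_across * j * 1.0          # 10 level-0 cells along the ray
fine = cells_across * j * (1.0 / 2**10)  # 10 level-10 cells along the ray

print(coarse / fine)  # 1024.0 -- three orders of magnitude
```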
Is this what would happen?
Cheers, Maike
Hi Maike,
The area of the grid cell in the image plane does not really matter; only the width of the grid cell along the line of sight factors in. Therefore, if you have 10 cells replacing a single cell along the line of sight, each of them will contribute 1/10 of the emission. The contribution of each cell goes as dx, not dx^3.
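A one-line toy check of that scaling (not yt code):

```python
# The contribution of a cell to a ray scales with the line-of-sight path
# length dx through it, not with the cell volume dx**3.

j = 0.5

def contribution(dx):
    return j * dx  # emission integrated across the cell along the ray

# Refining by one level halves the per-cell contribution (a factor of 2);
# a dx**3 scaling would instead reduce it by a factor of 8:
print(contribution(1.0) / contribution(0.5))  # 2.0
```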
Britton
On Tue, Apr 12, 2011 at 11:34 AM, Maike Schmidt maikeschmidt2@gmx.de wrote:
Hi Sam,
I still don't understand this. The small ds's will only yield the same integration as the large ds's if the physical width of the isosurface is the same on all refinement levels. If the width is, say, 10 cells on each refinement level, then the smallest ds's will only emit 1/1000 of the contribution of the largest ds's. In fact, if the emission is directly proportional to the RGB value, then even a factor of 100 will make the color so dark that the contribution to the final image is invisible. In this case you would see the isosurface only on the largest scales.
Is this what would happen?
Cheers, Maike
Hi Maike,
I think there is some confusion here about how our transfer function works. For the ColorTransferFunction, we are really specifying r,g,b emissivities. For example, let's say that we have chosen to specify that a density value of 5 corresponds to rgb = [0.5,0.5,0.5], whatever that color may be. Now, that does not mean that when the rays pass through an area with a density of 5 that the rgb images will be equal to 0.5,0.5,0.5. Rather, we will add j*ds to the image plane. If the entire domain had a density of 5, then the resulting image would be (some factor)*[0.5,0.5,0.5], where the factor has to do with the integration length. If instead the density=5 region was just in part of the volume, then it would add up the contribution to the final image, and it would be blended along the line of sight with any other emission from the rest of the volume.
If you take your example of an AMR sim with 10 levels, then using the implementation we have written, while the high levels have much smaller ds's, there are many more of them, which will yield roughly the same integration as had it been done using only the lower-resolution data. It is a lot like an adaptive integration scheme in that sense.
Does this help? Please let us know.
Best, Sam
On Wed, Apr 6, 2011 at 4:27 AM, Maike Schmidt maikeschmidt2@gmx.de wrote:
Hi Matt,
many thanks for this really nice explanation! But I still don't understand how the color transfer function works. I thought that, if you add a Gaussian with peak (r,g,b), then the intensity that arrives at the camera from cells that contain the center of the Gaussian has exactly intensity (r,g,b), when I am neglecting absorption. Otherwise I don't understand how one can relate the color of the Gaussian to the color at the camera.
Now, say I want to visualize a density isosurface on an AMR grid with 10 refinement levels; then the actual intensity contribution j * delta s will differ by 3 orders of magnitude for cells with the same density but different cell sizes. How do you make all these cells emit the same color? This is what I don't understand.
I guess it boils down to the question of how exactly you calculate your j in the radiative transfer equation when you have a color transfer function, say a single Gaussian with peak (r,g,b).
Many thanks, Maike
-------- Original Message --------
Date: Tue, 5 Apr 2011 16:29:02 -0400 From: Matthew Turk matthewturk@gmail.com To: Discussion of the yt analysis package
Subject: Re: [yt-users] volume rendering
[begin long, rambly answer]
There are a couple of things in what you ask -- the first is that there are two primary methods for volume rendering. One is to tell a story using somewhat abstract visuals; this is typically what people think of when they think of volume rendering, and it is supported by (and possibly the primary application of) yt. This is what's used with the ColorTransferFunction. The other is designed to perform a meaningful line integral through the calculation; this is what's done with the ProjectionTransferFunction and the PlanckTransferFunction. In all cases, the RT equation *is* integrated, but what varies between the two mechanisms is where the emission and absorption values come from.
In all cases, while the code may call things RGB, they need not be RGB explicitly. In fact, for the ProjectionTransferFunction, they are not. For the ProjectionTransferFunction, the code simply integrates with the emission value being set to the fluid value in a cell and the absorption value set to exactly zero. This results in the integration:
dI/ds = v_local
Where v_local is the (interpolated) fluid value at every point the integration samples, which defaults to 5 subsamples within a given cell. So the final intensity is equal to the sum of all (interpolated) values along a ray times the (local-to-a-cell) path length between samples. For the PlanckTransferFunction, the local emission is set to some approximation of a black-body, weighted with Density for emission, and the absorption is set to an approximation of scattering, which we then assign to RGB. The PTF also utilizes a 'weighting' field inside the volume renderer, which I discuss briefly below, to allow it to utilize multiple variables at a given location to calculate the emission and absorption. (i.e., Temperature governs the color, density governs the strength of the emission -- sliding along x and scaling along y, in the plot of the transfer function.)
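To make the ProjectionTransferFunction case concrete, here is a toy version of that integration (an illustrative sketch, not yt's actual integrator -- `integrate_ray` and its signature are made up): emission is the local fluid value, absorption is zero, and each cell is subsampled five times, so the accumulated intensity approximates the line integral of the field regardless of how the path is cut into cells.

```python
def integrate_ray(cell_values, cell_widths, n_sub=5):
    """Accumulate dI/ds = v_local along a ray: I = sum of v * ds,
    with n_sub subsamples per cell (piecewise-constant values here)."""
    intensity = 0.0
    for v, dx in zip(cell_values, cell_widths):
        ds = dx / n_sub  # local-to-a-cell path length between samples
        for _ in range(n_sub):
            intensity += v * ds
    return intensity

# One coarse cell versus the same region split into ten finer cells:
coarse = integrate_ray([2.0], [1.0])
fine = integrate_ray([2.0] * 10, [0.1] * 10)
# Both come out to 2.0 (up to floating point) -- the integral is set
# by the physical path length, not by how many cells it is divided into.
```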
When integrating a ColorTransferFunction, the situation is somewhat different. I've spent a bit of time reviewing the code, and I think I can provide a definite answer to your question. For reference, the code that this calls upon is defined in two source files:
yt/visualization/volume_rendering/transfer_functions.py
yt/utilities/_amr_utils/VolumeIntegrator.pyx
Specifically, in the class ColorTransferFunction and in the FIT_get_value and TransferFunctionProxy.eval_transfer functions.
The ColorTransferFunction, which is designed for visualizing abstract isocontours, rather than computing an actual line integral that is then examined or modified, sets a weighting *table*. (For the PlanckTransferFunction, a weight_field_id is set; this means to multiply the value from a table against a value obtained from another field. This is how density-weighted emission is done.) The weight_table_id for the CTF is set to the table for the alpha emission. Functionally, this means that we accentuate the peaks and spikes in the color transfer function, because alpha is typically set quite high at the included Gaussians.
So in essence, with a color transfer function we accentuate only the regions where we have isocontours. I think it's easiest to speak about this in terms of a visualization of Density isocontours. If you place contours in the outer regions and your emission value is too high, it will indeed completely obscure the inner regions. I have experimented with this and have found that it is extremely easy to create a completely visually confusing image that contains only the outer contours and wispy hints of the inner contours.
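As a rough illustration of that alpha weighting (hypothetical helper names and values -- yt's real transfer functions are discretized lookup tables, so this is only the idea): the RGB emissivities are multiplied by the alpha channel, so emission is strongly peaked at the Gaussian and negligible elsewhere.

```python
import math

def gauss(x, mu, sigma, height):
    # One Gaussian bump, in the spirit of add_gaussian.
    return height * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def ctf_emission(v, mu=5.0, sigma=0.2):
    # Hypothetical CTF with a single Gaussian: RGB peak (0.5, 0.5, 0.5)
    # at density 5, plus a strong alpha peak at the same location.  The
    # channel emissivities are weighted by the alpha table.
    rgb = [gauss(v, mu, sigma, 0.5) for _ in range(3)]
    alpha = gauss(v, mu, sigma, 10.0)
    return [c * alpha for c in rgb]

on_contour = ctf_emission(5.0)   # [5.0, 5.0, 5.0]: strongly accentuated
off_contour = ctf_emission(4.0)  # effectively zero in every channel
```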
Hi Britton,
sure, but with 10 refinement levels, the smallest cells are a factor of 10^3 smaller than the largest ones. So 10 times a factor of 10^3 smaller compared to 10 times 1 is still a factor of 10^3 smaller...
Maike
Hi Maike,
Those cells are a factor of 1000 smaller in volume, but only a factor of 10 smaller in length. You need to think of this in terms of a single ray passing through a set of grid cells. What matters here is the length, not the volume. The contribution from each cell is proportional to its length along the line of sight. Additionally, if a single grid cell were replaced by cells that were 1/10 the length of the original, there would still be 1000 of them in total.
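Numerically, the point looks like this (a sketch assuming constant emissivity along the ray; the function name is made up and this is not yt code): refining by a factor of 10 produces 1000 subcells by volume, but a single ray only crosses 10 of them, and each contributes in proportion to its length.

```python
def ray_contribution(emissivity, dx, n_cells_along_ray):
    # Each cell a ray crosses contributes emissivity * dx -- its path
    # length, not its volume, sets the contribution.
    return sum(emissivity * dx for _ in range(n_cells_along_ray))

original = ray_contribution(3.0, 1.0, 1)    # one cell of length 1
refined = ray_contribution(3.0, 0.1, 10)    # ten cells of length 1/10
# Both are 3.0 (up to floating point), even though the refined region
# contains 1000 cells in total.
```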
At this point, I think we are talking past each other. I suggest that you try making a volume-rendered image with some AMR data. If the volume renderer is not giving enough brightness to the smaller cells, it should be very obvious.
Britton
Hi,
Okay, so think about a single ray.
Now assume your simulation is just the root grid, with a density of 1 everywhere, 10 cells, and a box length of 1. Now let your transfer function be such that the rgb emission at a density of 1 is (0.1, 0.2, 0.3).
Your ds = 0.1
Result = 10 (total steps to traverse the volume) * (0.1, 0.2, 0.3) * 0.1 (your ds) = (0.1, 0.2, 0.3)
Now say your volume is refined by 10 levels (factor of 2 each), so you have a total of 10 * 2^10 = 10240 cells along the ray. Your ds is now 1/10240.
Result = 10240 (total steps to traverse the volume) * (0.1, 0.2, 0.3) * 1/10240 (your ds) = (0.1, 0.2, 0.3)
Now just imagine you do the same process for every pixel, since every pixel is a ray.
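Sam's arithmetic, written as a small function (a sketch assuming a uniform box and constant rgb emissivity; `render_uniform_ray` is a made-up name, not a yt routine): the step count and the step size cancel exactly, so the integrated color is independent of resolution.

```python
def render_uniform_ray(rgb, n_steps, box_length=1.0):
    # Accumulate j * ds over n_steps equal steps through the box.
    ds = box_length / n_steps
    image = [0.0, 0.0, 0.0]
    for _ in range(n_steps):
        for i in range(3):
            image[i] += rgb[i] * ds
    return image

root = render_uniform_ray([0.1, 0.2, 0.3], n_steps=10)
deep = render_uniform_ray([0.1, 0.2, 0.3], n_steps=10 * 2 ** 10)
# Both are (0.1, 0.2, 0.3) up to floating point.
```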
I see Britton just responded with similar reasoning. Hope this helps.
Sam
On Tue, Apr 12, 2011 at 11:51 AM, Maike Schmidt maikeschmidt2@gmx.dewrote:
Hi Britton,
sure, but with 10 refinement levels, the smallest cells are a factor of 10^3 smaller than the largest ones. So 10 times a factor of 10^3 smaller compared to 10 times 1 is still a factor of 10^3 smaller...
Maike
-------- Original-Nachricht --------
Datum: Tue, 12 Apr 2011 11:43:45 -0400 Von: Britton Smith brittonsmith@gmail.com An: Discussion of the yt analysis package yt-users@lists.spacepope.org Betreff: Re: [yt-users] volume rendering
Hi Maike,
The area of the grid cell in the image plane does not really matter. It is only the width of the grid cell in the line of sight that factors in. Therefore, if you have 10 cells replacing a single cell in the line of sight, each of them will contribute 1/10 of the emission. The contribution of each cell does not go as dx^3, just dx.
Britton
On Tue, Apr 12, 2011 at 11:34 AM, Maike Schmidt maikeschmidt2@gmx.de wrote:
Hi Sam,
I still don't understand this. The small ds's will only yield the same integration as the large ds's if the physical width of the isosurface is the same on all refinement levels. If the width is, say, 10 cells on each refinement level, then the smallest ds's will only emit 1/1000 of the contribution of the largest ds's. In fact, if the emission is directly proportional to the RGB value, then even a factor of 100 will make the color so dark that the contribution to the final image is invisible. In this case you would see the isosurface only on the largest scales.
Is this what would happen?
Cheers, Maike
Hi Maike,
I think there is some confusion here about how our transfer function works. For the ColorTransferFunction, we are really specifying r,g,b emissivities. For example, let's say that we have chosen to specify that a density value of 5 corresponds to rgb = [0.5,0.5,0.5], whatever that color may be. Now, that does not mean that when the rays pass through an area with a density of 5, the rgb images will be equal to 0.5,0.5,0.5. Rather, we will add j*ds to the image plane. If the entire domain had a density of 5, then the resulting image would be (some factor)*[0.5,0.5,0.5], where the factor has to do with the integration length. If instead the density=5 region occupied just part of the volume, then its contribution to the final image would be added up and blended along the line of sight with any other emission from the rest of the volume.
If you take your example of an AMR sim with 10 levels, then using the implementation we have written, while the high levels have much smaller ds's, there are many more of them, which will yield roughly the same integration as had it been done using only the lower resolution data. It is a lot like an adaptive integration scheme in that sense.
Does this help? Please let us know.
Best, Sam
On Wed, Apr 6, 2011 at 4:27 AM, Maike Schmidt maikeschmidt2@gmx.de wrote:
Hi Matt,
many thanks for this really nice explanation! But I still don't understand how the color transfer function works. I thought that, if you add a Gaussian with peak (r,g,b), then the intensity that arrives at the camera from cells that contain the center of the Gaussian has exactly intensity (r,g,b), when I am neglecting absorption. Otherwise I don't understand how one can relate the color of the Gaussian to the color at the camera.
Now, say I want to visualize an isosurface of density on an AMR grid with 10 refinement levels, then the actual intensity contribution j * delta s will differ by 3 orders of magnitude for cells with the same density but different cell size. How do you make all these cells emit the same color? This is what I don't understand.
I guess it boils down to the question of how exactly you calculate your j in the radiative transfer equation when you have a color transfer function, say a single Gaussian with peak (r,g,b).
Many thanks, Maike
-------- Original Message --------
Date: Tue, 5 Apr 2011 16:29:02 -0400
From: Matthew Turk matthewturk@gmail.com
To: Discussion of the yt analysis package
Subject: Re: [yt-users] volume rendering
Hi Maike,
(I think this is your first post -- welcome to yt-users!)
On Tue, Apr 5, 2011 at 11:34 AM, Maike Schmidt <maikeschmidt2@gmx.de> wrote:
> Hi together,
>
> I would like to better understand how the volume rendering works.
> It is explained here
>
> http://yt.spacepope.org/doc/visualizing/volume_rendering.html
>
> that the user defines transfer functions in terms of RGB values.
> From the description of the add_gaussian function, I understand
> that these RGB values describe the color value in the interval
> [0,1]. Now, in the radiative transfer equation on the above
> website, the emissivity gets multiplied by the path length
> delta s. I am now wondering how this works: Depending on how
> big the step size is, one could get extremely large or extremely
> small intensities that are essentially unrelated to the RGB
> values that were previously specified. How is it possible that,
> for example, the color of a density isosurface depends on the
> density only and not on the cell size? I guess I am missing
> something.
>
> Cheers,
> Maike
The short answer, I think, is that you are right, you can get very large intensities in large cells that are unrelated to what came before. But, for the most part, this is not an issue because of how the weighting and color assignments are typically done.
[begin long, rambly answer]
There are a couple things in what you ask -- the first is that there are two primary methods for volume rendering. The first is to tell a story using somewhat abstract visuals; this is typically what people think of when they think of volume rendering, and it is supported by (and possibly the primary application of) yt. This would be what's used when the ColorTransferFunction is used. The other is designed to perform a meaningful line integral through the calculation; this is what's done with the ProjectionTransferFunction and the PlanckTransferFunction. In all cases, the RT equation *is* integrated, but what varies between the two mechanisms is where the emission and absorption values come from.
In all cases, while the code may call things RGB, they need not be RGB explicitly. In fact, for the ProjectionTransferFunction, they are not. For the ProjectionTransferFunction, the code simply integrates with the emission value set to the fluid value in a cell and the absorption value set to exactly zero. This results in the integration:
dI/ds = v_local
Where v_local is the (interpolated) fluid value at every point the integration samples, which defaults to 5 subsamples within a given cell. So the final intensity is equal to the sum of all (interpolated) values along a ray times the (local-to-a-cell) path length between samples. For the PlanckTransferFunction, the local emission is set to some approximation of a black-body, weighted with Density for emission, and the absorption is set to an approximation of scattering, which we then assign to RGB. The PTF also utilizes a 'weighting' field inside the volume renderer, which I discuss briefly below, to allow it to utilize multiple variables at a given location to calculate the emission and absorption. (i.e., Temperature governs the color, density governs the strength of the emission -- sliding along x and scaling along y, in the plot of the transfer function.)
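A minimal sketch of the projection-style integration described above, using a piecewise-constant 1D field. The field values and cell widths here are illustrative, and this is a toy model, not yt's actual implementation:

```python
# Projection-style integration: emission equals the local fluid value,
# absorption is zero, so dI/ds = v_local. Each cell is subsampled
# (yt defaults to 5 subsamples per cell).

def project_ray(cell_values, cell_width, n_sub=5):
    ds = cell_width / n_sub  # path length between samples
    intensity = 0.0
    for v in cell_values:        # piecewise-constant "interpolation"
        for _ in range(n_sub):
            intensity += v * ds  # dI = v_local * ds
    return intensity

# A ray through 4 cells of width 0.25 with these fluid values:
values = [1.0, 5.0, 5.0, 2.0]
print(project_ray(values, 0.25))  # ~ sum(v * dx) = 3.25
```

Note that the result reduces to the familiar line integral sum(v * dx), independent of how finely each cell is subsampled.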
When integrating a ColorTransferFunction, the situation is somewhat different. I've spent a bit of time reviewing the code, and I think I can provide a definite answer to your question. For reference, the code that this calls upon is defined in two source files:
yt/visualization/volume_rendering/transfer_functions.py
yt/utilities/_amr_utils/VolumeIntegrator.pyx
Specifically, in the class ColorTransferFunction and in the FIT_get_value and TransferFunctionProxy.eval_transfer functions.
The ColorTransferFunction, which is designed for visualizing abstract isocontours, rather than computing an actual line integral that is then examined or modified, sets a weighting *table*. (For the PlanckTransferFunction, a weight_field_id is set; this means to multiply the value from a table against a value obtained from another field. This is how density-weighted emission is done.) The weight_table_id for the CTF is set to the table for the alpha emission. Functionally, this means that we accentuate the peaks and spikes in the color transfer function, because alpha is typically set quite high at the gaussians included.
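As a hypothetical illustration (not yt's actual table machinery -- the gaussian width, height, and rgb values below are invented), weighting the rgb emission by an alpha value that peaks at the gaussians reproduces this accentuation effect:

```python
# Toy model of alpha-weighted emission: cells near the gaussian center
# emit strongly, cells away from it contribute almost nothing, which is
# what makes the isocontour stand out.
import math

def gaussian(x, mu, sigma, height):
    return height * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def weighted_emission(field_value):
    # alpha is set high only at the gaussian (hypothetical parameters)
    alpha = gaussian(field_value, mu=5.0, sigma=0.2, height=10.0)
    rgb = (0.5, 0.5, 0.5)  # color assigned near density = 5
    return tuple(c * alpha for c in rgb)

on_peak = weighted_emission(5.0)   # strong emission at the contour
off_peak = weighted_emission(4.0)  # negligible away from it
```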
So in essence, with a color transfer function we accentuate only the regions where we have isocontours. I think it's easiest to speak about this in terms of a visualization of Density isocontours. If you place contours in the outer regions and your emission value is too high, it will indeed completely obscure the inner regions. I have experimented with this and have found that it is extremely easy to create a completely visually confusing image that contains only the outer contours and wispy hints of the inner contours.
However, even if you do have outer isocontours, if you set the emission and color values lower, you can indeed provide glimpses into the inner regions. The inner regions are likely generating *higher* emission values (this is certainly how it is done in yt, with the add_layers method on ColorTransferFunction).
Anyway, I hope that helps clear things up a little bit -- but please feel free to write back with any further questions about this or anything else.
Best,
Matt