Re: [Yt-dev] [yt-users] Stars on volume rendering
Hi all,
I've moved the discussion over to yt-dev, because I ended up writing
up a very short set of changesets that test it out. This is the
simplest implementation; it uses a kD-tree I grabbed off googlecode,
which we'll ultimately want to replace with something faster, and for
each cell it calculates all the stars to consider. At each sampling
point it sums the contribution from all the stars (for now, they all
produce pure white emission, distributed as a Gaussian) and then adds
that to the incoming intensity for the purposes of computing the
transfer function for the baryons. Note that currently it samples the
Gaussian and does rectangular integration. (See below for more on
this.) The effective radius is set at the 1% level, but as Greg
mentioned on yt-users, that can probably be relaxed.
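For reference, a 1% cutoff pins down the effective radius in units of sigma. A quick sketch (plain Python; the helper name is mine, not anything in the bundle):

```python
import math

def effective_radius(sigma, cutoff=0.01):
    # Radius at which exp(-r^2 / (2 sigma^2)) falls to `cutoff` of its peak.
    # Solving exp(-r^2 / (2 sigma^2)) = cutoff gives r = sigma * sqrt(-2 ln cutoff).
    return sigma * math.sqrt(-2.0 * math.log(cutoff))

# At the 1% level this is about 3.03 sigma, so "making it more loose"
# means shrinking the cutoff radius below ~3 sigma.
print(effective_radius(1.0))
```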
There are several problems, which I was hoping to work out before I
sent this, but in the interest of getting it out there for someone
else to possibly fix up I thought I'd share what I had first.
1) There are grid artifacts. I tried tracking these down but was
unable to. I think it may be related to cell-centered vs.
vertex-centered data.
2) It's pretty slow, but my profiling shows the time going to data
retrieval, not to searching the kD-tree. (This may be a mistake in my
reading of the profiles.) It's slowest for images where a single star
contributes to many pixels, because of the way the loops are nested.
3) It assumes a uniform sigma and a uniform emission coefficient
(equal in all three channels, i.e., white light).
4) It only works with the homogenized volume; I don't understand how
PartitionedGrid objects are assigned to the kD-tree, so I couldn't use
that.
One definite improvement that will need to happen is to remove the
direct calculation and sampling of the Gaussian inside the function
add_stars in yt/utilities/_amr_utils/VolumeIntegrator.pyx. The
sampling of the Gaussian as a mechanism for integration (whether
rectangular or trapezoidal) will miss flux whenever the step size
internal to a cell can step over the centroid of the Gaussian. I believe the
right way to solve this is to cease calculating direct radius from the
sampling point to the Gaussian, and instead decompose the Gaussian
into the cylindrical radius (impact parameter) component and the
component along the ray. You then would use a tabulated erf(t)
function to get the total contribution to the intensity over t' to t'
+ dt. This should be better quality, although it may end up being a
wash for speed. This would also help to ensure that the peak of any
given Gaussian doesn't get skipped during the integration.
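The decomposition above can be written out directly: the distance from a sample point to the star splits as r^2 = b^2 + (t - t0)^2, where b is the impact parameter and t0 the ray parameter at closest approach, so the along-ray integral over [t, t + dt] has a closed form in erf. A sketch (plain Python's math.erf in place of a tabulated erf; the function name is mine):

```python
import math

def segment_emission(b, t0, t, dt, sigma, amplitude=1.0):
    """Exact integral of a 3D Gaussian along a ray segment [t, t + dt].

    b     : impact parameter (closest distance of the ray to the star)
    t0    : ray parameter at the point of closest approach
    sigma : Gaussian width; amplitude is the peak emission coefficient
    """
    s = sigma * math.sqrt(2.0)
    # The transverse part factors out; the along-ray part integrates to erf.
    prefactor = amplitude * math.exp(-b * b / (2.0 * sigma * sigma))
    return prefactor * sigma * math.sqrt(math.pi / 2.0) * (
        math.erf((t + dt - t0) / s) - math.erf((t - t0) / s))
```

Because adjacent segments telescope (the erf terms at shared endpoints cancel), the sum over all steps through a cell recovers the full line integral even when a step lands on either side of the centroid -- the peak can't be skipped.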
I've stuck the bundle up here:
http://yt.enzotools.org/files/star_rendering.hg
You'll have to re-cythonize yt/utilities/amr_utils.pyx; all the
changes have been made in
yt/utilities/_amr_utils/VolumeIntegrator.pyx. The function add_stars
is the main one.
A sample script is here:
http://paste.enzotools.org/show/1481
and I've attached two images of the Enzo Workshop sample dataset
JHK-DD0030, provided by Ji-hoon Kim. The first (raw_stars.png) is
what the image looks like straight out of the code. In the second
I've compressed the color scale, so you can see where all the star
particles are. You can definitely see the grid artifacts in both!
Anyway, if you play with it, let me know what you think. Especially
if you can figure out the grid boundary thing! :)
-Matt
On Fri, Jan 7, 2011 at 3:13 PM, j s oishi wrote:
As a note, that's what Orion does for all its particles, and it works just fine.
On Fri, Jan 7, 2011 at 12:01 PM, John Wise wrote:
Hi Matt,
This all sounds great. I like the idea of associating stars with bricks to simplify the search.
I think it's the easiest and best approach now (maybe not at petascale) to have all star particles duplicated on each processor. I can't think of any simulation with more than a few million star particles, and that easily fits into memory. This is the same approach I've taken with the new star particles in Enzo. I thought it would be best to exploit the fact that the problem wasn't memory limited.
My 2c. John
On 6 Jan 2011, at 18:30, Matthew Turk wrote:
Hi John,
(As a quick comment, one can export to Sunrise from yt, so that could also serve as a mechanism for rendering star particles.)
I have been thinking about this a lot lately, and I think you're right: we need a proper mechanism for compositing star particles on the fly during the traversal of rays across a homogenized volume. I had planned on this being one of my first yt projects this year. The current process of volume rendering (for more detail see the method paper) is basically:
1) Homogenize volume, splitting fields up into uniquely-tiling bricks
2) Sort bricks
3) For every brick, for every ray:
   a) Calculate intersection
   b) Update all channels (typically 3) based on *local* emission and absorption
   c) Update ray position
4) Return image plane
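The per-ray update in steps 3b/3c amounts to front-to-back compositing. A minimal single-channel sketch of that update rule (illustrative only, not yt's implementation):

```python
import math

def composite(segments):
    """Front-to-back compositing along one ray, one channel.

    Each segment carries a local emission coefficient and an optical
    depth increment, ordered front to back along the ray.
    """
    intensity, tau = 0.0, 0.0
    for emission, dtau in segments:
        # Emission from this segment is attenuated by everything in front.
        intensity += emission * math.exp(-tau)
        # Accumulate this segment's absorption for everything behind it.
        tau += dtau
    return intensity

# An effectively opaque first segment hides everything behind it:
print(composite([(1.0, 50.0), (5.0, 0.0)]))
```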
This is true for both the old, homogenized volume rendering technique and the new kD-tree technique. To fit star particles into this, we would regard them as exclusively sources of emission, with no impact on the local absorption. Nominally, this should be easy to do: for every cell, simply deposit the emission from a star. As you noted in your email, this produces very, very ugly results -- I tested it last summer with the hopes of coming up with something cool, but was unable to. Testing it today on an airplane showed it had bitrot a bit, so I haven't attached it. :)
I think we would need to move from cell-based emission (where the smallest emission point from a star is a single cell) to a method where emission from star particles is calculated per ray (i.e., per pixel). This would require an additional step:
1) Homogenize volume, splitting fields up into uniquely-tiling bricks
2) Sort bricks
3) For every brick, for every ray:
   a) Calculate intersection
   b) Calculate emission from stars local to the ray
   c) Update all channels (typically 3) based on *local* emission and absorption
   d) Update ray position
4) Return image plane
This would enable both density-based emission *and* star emission: density isocontours and individual stars in the same image, for instance. The visual language there would probably be very confusing, but it would be possible, particularly for pure annotations.
The issue here is that step 3b is probably very, very slow -- even using a (point-based) kD-tree it would likely add substantial run time, because there's no early-termination mechanism. What I think we could do, however, is execute a pre-deposition phase. For the purposes of rendering, we can describe a star particle by only a few characteristics:
Emission(Red, Green, Blue)
Gaussian(Radius)
Position(x,y,z)
We should instead define an effective radius (ER), say at the 1% level, beyond which we ignore a star's contribution. We could then deposit delta functions of size ER for every star particle. This would give a cue to the ray caster, and we could modify the loop:
1) Homogenize volume, splitting fields up into uniquely-tiling bricks
2) Sort bricks
3) For every brick, for every ray:
   a) Calculate intersection
   b) If local delta_field == True, execute a ball query and calculate emission from stars local to the ray
   c) Update all channels (typically 3) based on *local* emission and absorption
   d) Update ray position
4) Return image plane
For the first pass, we would probably want all our stars to have the same ER, which would then be the radius of our ball-query. For parallel rendering, we would still have to have all of the star particles loaded on every processor; I don't think this is a problem, since in the limit where the star particles are memory-limiting, you would likely not suffer from pre-deposition. This also solves the grid-boundary issues, as each processor would deposit all stars during its initial homogenization.
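The pre-deposition pass can be sketched as a simple flagging step. Here a brute-force distance test stands in for the kD-tree ball query, and the function name is hypothetical, not yt's API:

```python
import numpy as np

def flag_cells_near_stars(cell_centers, star_positions, effective_radius):
    """Mark every cell whose center lies within ER of at least one star.

    Brute-force stand-in for the ball query; a real implementation
    would use a kD-tree, but the flags produced are the same.
    """
    cells = np.asarray(cell_centers)[:, None, :]    # (Ncell, 1, 3)
    stars = np.asarray(star_positions)[None, :, :]  # (1, Nstar, 3)
    d2 = ((cells - stars) ** 2).sum(axis=-1)        # pairwise squared distances
    # The ray caster then only runs star queries where this is True.
    return (d2 <= effective_radius ** 2).any(axis=1)
```

Since every processor holds all the star particles, each can run this pass independently over its own bricks during homogenization, which is what makes the grid-boundary issue go away.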
What do you think? I think that the components external to the ray tracer could be assembled relatively easily, and then the ray tracer might take a bit of work. As a post-processing step we could even add lens flare, for that extra Star Trek look.
-Matt
On Thu, Jan 6, 2011 at 8:45 AM, John Wise wrote:
I forgot to mention that another way to do this is making a derived field that adds the stellar density to the gas density. However, this doesn't look good when particles are in coarse grids, when they should be point sources in the image.
def _RelativeDensityStars(field, data):
    return (data["Density"] + data["star_density"])/dma

add_field("RelativeDensityStars", function=_RelativeDensityStars, take_log=False)
where dma is a scaling variable.
I'm uploading my stand-alone script if you want to try to decipher it, although I tried to comment it some.
http://paste.enzotools.org/show/1475/
Also I uploaded the colormap based on B-V colors that I ripped from partiview to
http://www.astro.princeton.edu/~jwise/temp/BminusV.h5
John
On 01/06/2011 11:14 AM, John Wise wrote:
Hi Libby,
I'm afraid that there isn't a good solution for rendering stars, at least to my knowledge!
You can add them as pixels after you've determined the pixel numbers (as in Andrew's email) of the particles with the splat_points() routine in the image_writer module.
I wrote my own stand-alone splatter to put Gaussian splats for particles, but I never incorporated it into yt. I meant to a few months back when I wrote it, but never did! It will produce these types of splats:
http://www.astro.princeton.edu/~jwise/research/GalaxyBirth_files/combine.png
I had to manually blend the gas volume rendering and star splats afterwards to produce that image.
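The core of a splatter like the one described above is small: deposit a 2D Gaussian at each particle's pixel position on an image buffer. A minimal illustrative sketch (not John's script, and not yt's splat_points):

```python
import numpy as np

def splat_gaussians(shape, positions, sigma, amplitude=1.0):
    """Deposit a 2D Gaussian splat at each (x, y) pixel position."""
    img = np.zeros(shape)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    for px, py in positions:
        # Squared pixel distance from this particle's position.
        r2 = (xs - px) ** 2 + (ys - py) ** 2
        img += amplitude * np.exp(-r2 / (2.0 * sigma ** 2))
    return img

# One star splatted at the center of a 64x64 buffer:
img = splat_gaussians((64, 64), [(32.0, 32.0)], sigma=2.0)
```

The resulting buffer can then be alpha-blended over a volume rendering, which is the manual compositing step mentioned above.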
I hope I can make something that looks as good as partiview soon. This is the same dataset but with partiview.
http://www.astro.princeton.edu/~jwise/research/GalaxyBirth_files/stars_only....
I'll see if I can make time (first I have to find the code!) to incorporate my splatter into yt.
John
On 01/06/2011 09:15 AM, Elizabeth Harper-Clark wrote:
Hi all,
Thanks for all your help over the last couple of days. One more question: can I plot particles on a volume-rendered image? I have stars and I want to show where they are!
Thanks,
Libby
-- Elizabeth Harper-Clark MA MSci PhD Candidate, Canadian Institute for Theoretical Astrophysics, UofT Sciences and Engineering Coordinator, Teaching Assistants' Training Program, UofT
www.astro.utoronto.ca/~h-clark
h-clark@cita.utoronto.ca
Astronomy office phone: +1-416-978-5759
_______________________________________________ yt-users mailing list yt-users@lists.spacepope.org http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
Matt,
This looks great so far. I'll take a look at how to get it going on the
amr-kdtree homogenization and either send you a bundle or just wait till
some of this gets pushed.
Sam
_______________________________________________ Yt-dev mailing list Yt-dev@lists.spacepope.org http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
participants (2): Matthew Turk, Sam Skillman