Matt,
> We have several ways of representing hierarchical data in yt, most of
> which follow this basic concept:
>
> Object [with some supplemental data]
> Children [list of Objects]
I was not aware of this. Interesting.
> I don't think we want to standardize on our data objects being the
> data objects from pydot -- I don't think we want "Clump" for
> instance
> or "Halo" to inherit from pydot.
I was certainly not suggesting that. What I was suggesting is that once the clump finder or merger tree has established a relationship, it can add that relationship to the hierarchical data object, whatever that may be.
> I just thought we could figure out a
> way to write out dotfiles without using a big package... My original
> idea was to create a function something like:
>
> def write_hierarchical_dotfile(base_nodes, child_node_name,
>         supplemental_data_function_name, parent=None, f=""):
>     if isinstance(f, types.StringTypes): f = open(f, "w")
>     for n in base_nodes:
>         sdata = getattr(n, supplemental_data_function_name)()
>         new_node = write_node(n, sdata)
>         if parent is not None: connect_node(parent, new_node)
>         write_hierarchical_dotfile(getattr(n, child_node_name),
>             child_node_name, supplemental_data_function_name,
>             new_node, f)
>
> So we would still be writing dotfiles, but we could keep the
> nomenclature children, etc etc. Obviously this particular function
> would not work for multiple parents, but as an example it's closer to
> what I was thinking. My recollection is that the dot file format is
> very straightforward and simple, and we could use something like this.
> Does this make sense? Am I missing a big benefit that pydot
> provides?
I guess my main point is that if we roll our own solution, as above, we're just going to reinvent the wheel again. I don't see how this is any easier than using a small package like pydot to handle the logic of outputting to graphviz format. (Again, I'm not married to pydot; it's just an example I've been playing with. If you don't care about loading dot files off disk into the internal format, it's only one 2500-line file.) Additionally, with graphviz installed in the $PATH, pydot can output directly to an image format by calling graphviz itself.
Another point: what is going to go into write_node()? Will it have the node shape and color hard-coded, or will those be stored in sdata? Likewise for connect_node(). If things will be hard-coded, then writing a dot output routine for each tool (merger trees, clump finder) is the way to go (and that has mostly been done already). However, if write_node() allows sdata to carry node attributes, then we're just rewriting what pydot already does.
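For a concrete point of comparison between the two options being discussed: a hand-rolled dot emitter covering nodes-with-attributes plus edges really is only a few lines. This is an editor's sketch, not existing yt code, and write_dot/format_attrs are hypothetical names:

```python
# Minimal hand-rolled Graphviz (dot) emitter -- a sketch for comparison
# with pydot.  All names here are hypothetical, not existing yt API.

def format_attrs(attrs):
    """Render a dict as a dot attribute list, e.g. [color="red", label="x"]."""
    if not attrs:
        return ""
    inner = ", ".join('%s="%s"' % (k, v) for k, v in sorted(attrs.items()))
    return " [%s]" % inner

def write_dot(nodes, edges, name="hierarchy"):
    """nodes: {node_id: attr_dict}; edges: [(parent_id, child_id), ...].
    Returns the dot source as a string."""
    lines = ["digraph %s {" % name]
    for node_id, attrs in sorted(nodes.items()):
        lines.append('    "%s"%s;' % (node_id, format_attrs(attrs)))
    for parent, child in edges:
        lines.append('    "%s" -> "%s";' % (parent, child))
    lines.append("}")
    return "\n".join(lines)
```

Whether this counts as reinventing the wheel or as avoiding a dependency is, of course, exactly the question on the table.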
Stephen Skory
stephenskory(a)yahoo.com
http://stephenskory.com/
510.621.3687 (google voice)
Hi everyone,
I've uploaded a preliminary build of the new documentation for yt-2.0 here:
http://yt.enzotools.org/doc/
(downloadable: http://yt.enzotools.org/doc/download.zip )
yt 2.0 will be released Monday.
Please let me know any comments or suggestions you might have about
the documentation -- if you see any problems, any areas you think
could use more information or more emphasis, or if you'd like to help,
please feel encouraged to drop me a line. In particular, the
orientation section is brand new, and the documentation as a whole
has been reorganized.
Best wishes,
Matt
All,
> Sorry, I didn't mean to output pngs directly -- I just meant, a single
> place we could write hierarchical datasets to dot format. There's a
> lot of research, time, development and wheels in graphviz that we
> don't need to reinvent.
>
> As for a new package, I wonder if maybe that decision would benefit
> from a cost/benefit analysis: how much of any given graphviz wrapper
> would we use?
I think that if we have a unified way to write hierarchical data to dot files, this will require a unified way of representing hierarchical data. Something like pydot provides exactly that (not that there couldn't be something better out there). So, in this regard, a python graphviz abstraction is useful. Otherwise, we're writing our own. Thoughts?
Stephen Skory
stephenskory(a)yahoo.com
http://stephenskory.com/
510.621.3687 (google voice)
All,
> Having a built-in graph generator would be wonderful and save us from running
> dot every time we generate a graph file.
I'm not sure exactly what you mean, John. Do you mean you want the ability to write out pngs of graphs directly, avoiding graphviz? I'm not clear on why you would want to do this. Is graphviz not capable of doing something we need? I can see that skipping an intermediate step would be slightly more convenient, but I think it would be a lot of work on our part, when people have already put at least a decade into graphviz. Perhaps I misunderstand you?
> > What I think would very cool is if we could apply this particular type
> > of graphviz output (and I know Stephen has also done a lot with
> > graphviz, so maybe he has something to suggest) to the creation of
> > level set (i.e., clump) diagrams.
If we wish to expand our usage of graphviz, I feel we should spend some time deciding whether one of the python graphviz abstractions (like pydot: http://code.google.com/p/pydot/) is useful. When it was only merger trees, a new package was overkill. Additionally, I never meant the graphviz merger tree output to be of publishable quality. If that's something we want for these new directions (merger trees, clumps, level sets), we'll need to expand our capabilities.
Graphviz is apparently capable of making some very beautiful graphs with some post-processing, see here:
http://www2.research.att.com/~yifanhu/GALLERY/GRAPHS/index.html
Stephen Skory
stephenskory(a)yahoo.com
http://stephenskory.com/
510.621.3687 (google voice)
Hi all,
I just extended Matt's merger tree, which works on enzo's inline FOF halo finder, to work on multiple outputs and build a complete merger tree. It produces DOT (Graphviz) files. I tried to make it simple to use and included the necessary docstrings. Here's an example that makes a merger tree from z = 15 -> 7, only including halos with >500 particles.
"""
from yt.mods import *
import yt.analysis_modules.halo_merger_tree.api as hm
tree = hm.EnzoFOFMergerTree((7.0,15.0), load_saved=True)
#tree.build_tree(0, max_children=4)
tree.build_tree(0, min_particles=500)
tree.write_dot()
"""
The load_saved argument will load the pickled results from previous calculations; when it's False, the tree is recomputed, and the results are saved by default.
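For anyone wiring up a similar load_saved switch elsewhere, the underlying pattern is just pickle-based caching. This is a generic editor's sketch of that pattern, not the actual EnzoFOFMergerTree internals, and cached_compute is a hypothetical name:

```python
# Generic pickle-based caching, the pattern behind a load_saved flag.
# A sketch; not the actual EnzoFOFMergerTree implementation.
import os
import pickle

def cached_compute(cache_file, compute, load_saved=True, save=True):
    """Return pickled results from cache_file when allowed, else recompute.

    compute: a zero-argument callable that does the expensive work.
    """
    if load_saved and os.path.exists(cache_file):
        with open(cache_file, "rb") as f:
            return pickle.load(f)
    result = compute()
    if save:
        # Save by default so the next run can skip the computation.
        with open(cache_file, "wb") as f:
            pickle.dump(result, f)
    return result
```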
Here's the result for the most massive halo in a 256^3 sim. Careful, it's large! 1.9 MB, 4325x7461 pixels.
http://www.astro.princeton.edu/~jwise/pics/merger-tree.png
If you try it out, please let me know of any problems!
Cheers,
John
Hi all,
As previously mentioned, I'm working in my spare time to add support for the
Maestro code to yt. I am able to read in the data, make slices of various
variables, and save the resulting plots. I tried, however, to follow the
cookbook recipe for volume rendering and I keep getting an error while
attempting to ray_cast.
Here is the script I use: http://paste.enzotools.org/show/1470/
and here is the traceback:
http://paste.enzotools.org/show/Jn7X2pBPkUD0yhhLwrVv/
I'm not sure if this is an issue with the way I have implemented Maestro
support or if there is a bug in the visualization. Could someone please try
my script with their own parameter file to at least verify there is no bug
in ray_cast?
Thanks,
Chris
> Does anyone have any feelings on making Cython mandatory for
> installation?
I think it's fine. +1.
Stephen Skory
stephenskory(a)yahoo.com
http://stephenskory.com/
510.621.3687 (google voice)
Matt,
> Unfortunately, we will need to call the kD-tree many, many times in
> succession from within a Cython routine, as well as maintain multiple
> extant kD-tree objects simultaneously. My recollection is that you
> use Forthon because the first is not possible, and I also seem to
> recall the latter is somewhat difficult. Additionally, the usage of
> the kD-tree inside compiled C code would make it a compile-time
> dependency... What do you think -- does this agree with your
> understanding?
You're pretty much correct. Again, it would be nice if we could find a fast, python-friendly C kD-tree. I've never been happy about the Fortran tree, but it is the fastest I've come across. We should all stay on the lookout for one!
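For what it's worth, the interface we actually need from any candidate tree is small: a fixed-radius ball query over 3-D points. Here's a brute-force reference implementation (an editor's sketch, not yt code) that any library we adopt should agree with, which is handy for correctness testing:

```python
# Brute-force reference for the kD-tree ball query we need: return the
# indices of all points within `radius` of `center`.  A correctness
# yardstick for candidate tree libraries, not a performance solution.

def ball_query(points, center, radius):
    """points: sequence of (x, y, z) tuples; returns matching indices."""
    r2 = radius * radius
    hits = []
    for i, p in enumerate(points):
        # Compare squared distances to avoid a sqrt per point.
        d2 = sum((pc - cc) ** 2 for pc, cc in zip(p, center))
        if d2 <= r2:
            hits.append(i)
    return hits
```

As one candidate to benchmark against the Fortran tree: scipy.spatial.cKDTree provides this same operation as query_ball_point, though whether it is callable efficiently from Cython in tight loops would need checking.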
Stephen Skory
stephenskory(a)yahoo.com
http://stephenskory.com/
510.621.3687 (google voice)
Hi all,
I've started dealing with a lot of time-consuming derived fields and am
wondering if there is already support for writing derived fields back to the
raw data files (e.g., .cpu files). If not, any thoughts on adding it?
Sam
Hi all,
I've moved the discussion over to yt-dev, because I ended up writing
up a very short set of changesets that test it out. This is the
simplest implementation; it uses a kD-tree I grabbed off googlecode,
which we'll ultimately want to replace with something faster, and for
each cell it calculates all the stars to consider. At each sampling
point it sums the contribution from all the stars (for now, they all
produce pure white emission, distributed as a Gaussian) and then adds
that to the incoming intensity for the purposes of computing the
transfer function for the baryons. Note that currently it samples the
Gaussian and does rectangular integration. (See below for more on
this.) The effective radius is set at the 1% level, but as Greg
mentioned on yt-users that can probably be loosened.
There are several problems, which I was hoping to work out before I
sent this, but in the interest of getting it out there for someone
else to possibly fix up I thought I'd share what I had first.
1) There are grid artifacts. I tried tracking these down but was
unable to. I think it may be related to cell-centered versus
vertex-centered data.
2) It's pretty slow, but my profiling shows the time going into
retrieving the data, not into searching the kD-tree (though this may
be a mistake in my reading of the profiles). It's slowest for images
where a single star contributes to many pixels, because of the way the
loops are nested.
3) It assumes a uniform sigma and a uniform emission coefficient
(equal in all three channels, i.e. white light).
4) It only works with the homogenized volume; I don't understand how
PartitionedGrid objects are assigned to the kD-tree, so I couldn't use
that.
One definite improvement that will need to happen is to remove the
direct calculation and sampling of the Gaussian inside the function
add_stars in yt/utilities/_amr_utils/VolumeIntegrator.pyx. The
sampling as a mechanism for integration (either rectangular or
trapezoidal) is going to miss points if the step size internal to a
cell can step over the centroid of the Gaussian. I believe the right
way to solve this is to stop calculating the direct radius from the
sampling point to the Gaussian centroid, and instead decompose the
Gaussian into a cylindrical-radius (impact parameter) component and a
component along the ray. You would then use a tabulated erf(t)
function to get the total contribution to the intensity over t' to t'
+ dt. This should be better quality, although it may end up being a
wash for speed. This would also help to ensure that the peak of any
given Gaussian doesn't get skipped during the integration.
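For reference, that decomposition has a closed form. With impact parameter b and along-ray coordinate t measured from the point of closest approach, r^2 = b^2 + t^2, so the Gaussian factors into a constant exp(-b^2 / 2 sigma^2) times a 1-D integral that erf evaluates exactly (here via math.erf rather than a lookup table). An editor's sketch of the idea, not the code in the bundle:

```python
# Exact line integral of a 3-D Gaussian along a ray segment, using the
# impact-parameter decomposition described above.  A sketch of the idea,
# not the code in the bundle.
import math

def gaussian_segment_integral(b, t0, t1, sigma):
    """Integral of exp(-r^2 / (2 sigma^2)) over the ray segment [t0, t1],
    where r^2 = b^2 + t^2 and b is the impact parameter from the ray to
    the Gaussian centroid.  Exact, so the peak can never be stepped over."""
    s = sigma * math.sqrt(2.0)
    # Constant radial factor from the impact parameter.
    radial = math.exp(-b * b / (2.0 * sigma * sigma))
    # Integral of exp(-t^2 / (2 sigma^2)) dt
    #   = sigma * sqrt(pi/2) * [erf(t1/s) - erf(t0/s)]
    axial = sigma * math.sqrt(math.pi / 2.0) * (math.erf(t1 / s) - math.erf(t0 / s))
    return radial * axial
```

Because adjacent segments sum exactly to the integral over their union, accumulating per-cell contributions along a ray cannot miss a centroid, regardless of step size.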
I've stuck the bundle up here:
http://yt.enzotools.org/files/star_rendering.hg
You'll have to re-cythonize yt/utilities/amr_utils.pyx and all the
changes have been made to
yt/utilities/_amr_utils/VolumeIntegrator.pyx. The function add_stars
is the main one.
A sample script is here:
http://paste.enzotools.org/show/1481
and I've attached two images of the Enzo Workshop sample dataset
JHK-DD0030, provided by Ji-hoon Kim. The first (raw_stars.png) is
what the image looks like straight out of the code. The second one
I've compressed the color scale, so you can see where the star
particles all are. You can definitely see the grid artifacts in both!
Anyway, if you play with it, let me know what you think. Especially
if you can figure out the grid boundary thing! :)
-Matt
On Fri, Jan 7, 2011 at 3:13 PM, j s oishi <jsoishi(a)gmail.com> wrote:
> As a note, that's what Orion does for all its particles, and it works just fine.
>
> On Fri, Jan 7, 2011 at 12:01 PM, John Wise <jwise(a)astro.princeton.edu> wrote:
>> Hi Matt,
>>
>> This all sounds great. I like the idea of associating stars with bricks to simplify the search.
>>
>> I think it's the easiest and best approach now (maybe not at petascale) to have all star particles duplicated on each processor. I can't think of any simulation with more than a few million star particles, and that easily fits into memory. This is the same approach I've taken with the new star particles in Enzo. I thought it would be best to exploit the fact that the problem wasn't memory limited.
>>
>> My 2c.
>> John
>>
>> On 6 Jan 2011, at 18:30, Matthew Turk wrote:
>>
>>> Hi John,
>>>
>>> (As a quick comment, one can export to Sunrise from yt, so that could
>>> also serve as a mechanism for rendering star particles.)
>>>
>>> I have been thinking about this a lot lately, and I think you're
>>> right: we need a proper mechanism for compositing star particles on
>>> the fly during the traversal of rays across a homogenized volume. I
>>> had planned on this being one of my first yt projects this year. The
>>> current process of volume rendering (for more detail see the method
>>> paper) is basically:
>>>
>>> 1) Homogenize volume, splitting fields up into uniquely-tiling bricks
>>> 2) Sort bricks
>>> 3) For every brick, for every ray:
>>> a) Calculate intersection
>>> b) Update all channels (typically 3) based on *local* emission and absorption
>>> c) Update ray position
>>> 4) Return image plane
>>>
>>> This is true for both the old, homogenized volume rendering technique
>>> and the new kD-tree technique. To fit star particles into this, we
>>> would regard them as exclusively sources of emission, with no impact
>>> on the local absorption. Nominally, this should be easy to do: for
>>> every cell, simply deposit the emission from a star. As you noted in
>>> your email, this results in very, very ugly results -- I tested it
>>> last summer with the hopes of coming up with something cool, but was
>>> unable to. Testing it today on an airplane showed it had bitrot a
>>> bit, so I haven't attached it. :)
>>>
>>> I think we would need to move to, rather than cell-based emission (so
>>> that the smallest emission point from a star is a single cell) to a
>>> method where emission from star particles is calculated per ray (i.e.,
>>> pixel). This would require an additional step:
>>>
>>> 1) Homogenize volume, splitting fields up into uniquely-tiling bricks
>>> 2) Sort bricks
>>> 3) For every brick, for every ray:
>>> a) Calculate intersection
>>> b) Calculate emission from stars local to the ray
>>> c) Update all channels (typically 3) based on *local* emission and absorption
>>> d) Update ray position
>>> 4) Return image plane
>>>
>>> This would enable both density-based emission *and* star emission.
>>> This could be both density isocontours, for instance, and individual
>>> stars. The visual language in that would probably be very confusing,
>>> but it would be possible, particularly for pure annotations.
>>>
>>> The issue here is that step 2b is probably very, very slow -- even
>>> using a (point-based) kD-tree it would likely add substantial run
>>> time, because there's no early termination mechanism. What I think we
>>> could do, however, is execute a pre-deposition phase. For the
>>> purposes of rendering, we can describe a star particle by only a few
>>> characteristics:
>>>
>>> Emission(Red, Green, Blue)
>>> Gaussian(Radius)
>>> Position(x,y,z)
>>>
>>> We should instead define an effective radius (ER), say at the 1%
>>> level, at which we won't worry anymore. We could then deposit delta
>>> functions of size ER for every star particle. This would give a cue
>>> to the ray caster, and we could modify:
>>>
>>> 1) Homogenize volume, splitting fields up into uniquely-tiling bricks
>>> 2) Sort bricks
>>> 3) For every brick, for every ray:
>>> a) Calculate intersection
>>> b) If local delta_field == True, execute ball query and calculate
>>> emission from stars local to the ray
>>> c) Update all channels (typically 3) based on *local* emission and absorption
>>> d) Update ray position
>>> 4) Return image plane
>>>
>>> For the first pass, we would probably want all our stars to have the
>>> same ER, which would then be the radius of our ball-query. For
>>> parallel rendering, we would still have to have all of the star
>>> particles loaded on every processor; I don't think this is a problem,
>>> since in the limit where the star particles are memory-limiting, you
>>> would likely not suffer from pre-deposition. This also solves the
>>> grid-boundary issues, as each processor would deposit all stars during
>>> its initial homogenization.
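[An editor's note on the pre-deposition step Matt describes above: flagging cells within the effective radius of any star is straightforward to sketch. This is a pure-python illustration with hypothetical names, not existing yt API:]

```python
# Sketch of the pre-deposition phase: flag every cell whose center lies
# within the effective radius (ER) of any star, so the ray caster only
# runs ball queries where the flag is set.  Illustration only.

def deposit_delta_field(stars, n, width, effective_radius):
    """stars: list of (x, y, z) positions in a [0, width)^3 domain.
    Returns an n x n x n nested-list boolean flag field."""
    dx = width / n
    r2 = effective_radius ** 2
    flags = [[[False] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                cx, cy, cz = (i + 0.5) * dx, (j + 0.5) * dx, (k + 0.5) * dx
                for sx, sy, sz in stars:
                    if (cx - sx) ** 2 + (cy - sy) ** 2 + (cz - sz) ** 2 <= r2:
                        flags[i][j][k] = True
                        break  # one star is enough to flag the cell
    return flags
```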
>>>
>>> What do you think? I think that the components external to the ray
>>> tracer could be assembled relatively easily, and then the ray tracer
>>> might take a bit of work. As a post-processing step we could even add
>>> lens flare, for that extra Star Trek look.
>>>
>>> -Matt
>>>
>>>
>>> On Thu, Jan 6, 2011 at 8:45 AM, John Wise <jwise(a)astro.princeton.edu> wrote:
>>>> I forgot to mention that another way to do this is making a derived field
>>>> that adds the stellar density to the gas density. However this doesn't look
>>>> good when particles are in coarse grids, when they should be point sources
>>>> in the image.
>>>>
>>>> def _RelativeDensityStars(field, data):
>>>>     return (data["Density"] + data["star_density"])/dma
>>>> add_field("RelativeDensityStars", function=_RelativeDensityStars,
>>>>           take_log=False)
>>>>
>>>> where dma is a scaling variable.
>>>>
>>>> I'm uploading my stand-alone script if you want to try to decipher it,
>>>> although I tried to comment it some.
>>>>
>>>> http://paste.enzotools.org/show/1475/
>>>>
>>>> Also I uploaded the colormap based on B-V colors that I ripped from
>>>> partiview to
>>>>
>>>> http://www.astro.princeton.edu/~jwise/temp/BminusV.h5
>>>>
>>>> John
>>>>
>>>> On 01/06/2011 11:14 AM, John Wise wrote:
>>>>>
>>>>> Hi Libby,
>>>>>
>>>>> I'm afraid that there isn't a good solution for rendering stars, at
>>>>> least to my knowledge!
>>>>>
>>>>> You can add them as pixels after you've determined the pixel numbers (as
>>>>> in Andrew's email) of the particles with the splat_points() routine in
>>>>> the image_writer module.
>>>>>
>>>>> I wrote my own stand-alone splatter to put Gaussian splats for
>>>>> particles, but I never incorporated it into yt. I meant to a few months
>>>>> back when I wrote it but never did! It will produce these types of splats
>>>>>
>>>>>
>>>>> http://www.astro.princeton.edu/~jwise/research/GalaxyBirth_files/combine.png
>>>>>
>>>>>
>>>>> I had to manually blend the gas volume rendering and star splats
>>>>> afterwards to produce that image.
>>>>>
>>>>> I hope I can make something that looks as good as partiview soon. This
>>>>> is the same dataset but with partiview.
>>>>>
>>>>>
>>>>> http://www.astro.princeton.edu/~jwise/research/GalaxyBirth_files/stars_only…
>>>>>
>>>>>
>>>>> I'll see if I can make time (first I have to find the code!) to
>>>>> incorporate my splatter into yt.
>>>>>
>>>>> John
>>>>>
>>>>> On 01/06/2011 09:15 AM, Elizabeth Harper-Clark wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Thanks for all your help over the last couple of days. One more question:
>>>>>> - Can I plot particles on a volume rendered image?
>>>>>> I have stars and I want to show where they are!
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Libby
>>>>>>
>>>>>> --
>>>>>> Elizabeth Harper-Clark MA MSci
>>>>>> PhD Candidate, Canadian Institute for Theoretical Astrophysics, UofT
>>>>>> Sciences and Engineering Coordinator, Teaching Assistants' Training
>>>>>> Program, UofT
>>>>>>
>>>>>> www.astro.utoronto.ca/~h-clark
>>>>>> h-clark(a)cita.utoronto.ca
>>>>>> Astronomy office phone: +1-416-978-5759
>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> yt-users mailing list
>>>>>> yt-users(a)lists.spacepope.org
>>>>>> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org