Howdy y'all,
I'm wondering if there is a system for deciding when something belongs in yt.lagos (like the HaloFinders) or in yt.extensions (like the HaloProfiler)? For example, I'm thinking of adding a simple bit of code that will calculate the star formation history (Msol/year, for example) for a given set of stars. Would that go in extensions or in lagos? As best I can tell, extensions are more secondary, in that they post-process already-refined data, while lagos handles the raw data and refines it down.
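To be concrete, the kind of calculation I have in mind is nothing fancy, just a sketch along these lines (it assumes the star particles' creation times in years and masses in Msun are already in hand as NumPy arrays; the function and argument names are placeholders):
---
import numpy as np

def star_formation_history(creation_time, star_mass, n_bins=50):
    # Bin star particles by creation time and return the star
    # formation rate (Msun/yr) in each time bin.
    edges = np.linspace(creation_time.min(), creation_time.max(), n_bins + 1)
    # Total stellar mass formed in each bin.
    mass_per_bin, _ = np.histogram(creation_time, bins=edges, weights=star_mass)
    sfr = mass_per_bin / np.diff(edges)          # Msun / yr
    bin_centers = 0.5 * (edges[:-1] + edges[1:])
    return bin_centers, sfr
---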
Thanks!
_______________________________________________________
sskory(a)physics.ucsd.edu o__ Stephen Skory
http://physics.ucsd.edu/~sskory/ _.>/ _Graduate Student
________________________________(_)_\(_)_______________
Eric,
> 1. Have you run this on the light cone simulation yet?
No. If that is to happen, I'd like this tool to be shaken down a bit more before the resources are applied to that. But that's a good idea!
> 2. I don't know how to query an SQLite database, can you help me out with that?
Sure. The primary reason I'm using a SQL database is that all the big catalogs of observational (SDSS) and simulated (Millennium) data make their data publicly accessible via a SQL interface. What I've written isn't quite up to the scope of those databases, but as it improves it can get there (and beyond!). There are lots of examples of how to write SQL queries out there, but I'll soon write up some examples specific to this database for you.
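In the meantime, here's a rough sketch of what a query looks like from Python with the standard sqlite3 module. The table and column names below are placeholders to show the shape of a query rather than the actual schema; you can list the real tables with "SELECT name FROM sqlite_master WHERE type='table';".
---
import sqlite3

# Open the merger tree database written by MergerTree().
conn = sqlite3.connect('/tmp/427.db')
cursor = conn.cursor()
# Hypothetical table and column names, for illustration only.
cursor.execute("""
    SELECT GlobalHaloID, HaloMass
    FROM Halos
    WHERE HaloMass > 1e12
    ORDER BY HaloMass DESC
""")
for row in cursor.fetchall():
    print(row)
conn.close()
---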
> 3. It looks like you are appending filenames to your list in the example below,
> have you thought about using the EnzoSimulation class to call it and run it?
I haven't used that class before so I didn't even think to use it. I'll look into that, thanks for reminding me!
> 4. I also don't know about GraphViz, maybe you could help me out with that as
> well.
Graphviz is an open-source tool for building visualizations of relational data. It's not as pretty as what you could do by hand with a professional graphics tool, but its markup language is flexible and easy to generate, which is why I use it. There are Graphviz-Python packages, but the ones I can find haven't been updated in at least two years and would require users to install them into their own Python, which many people have a hard time doing. So instead I hard-coded the Graphviz output directly into the source. If you want to change the output style and content, you can use what I wrote as a starting point.
http://graphviz.org/
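For reference, the DOT markup Graphviz reads is very simple. The toy snippet below (the halo names and edge labels are made up) shows the general kind of thing the script emits, and rendering it is a single command:
---
# Emit a toy Graphviz file from Python; node names and labels are invented.
dot_lines = [
    'digraph MergerTree {',
    '  "halo 5, z=1.0" -> "halo 0, z=0.5" [label="0.85"];',
    '  "halo 9, z=1.0" -> "halo 0, z=0.5" [label="0.15"];',
    '}',
]
open('tree.gv', 'w').write('\n'.join(dot_lines))
# Then render it on the command line with:  dot -Tpng tree.gv -o tree.png
---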
_______________________________________________________
sskory(a)physics.ucsd.edu o__ Stephen Skory
http://physics.ucsd.edu/~sskory/ _.>/ _Graduate Student
________________________________(_)_\(_)_______________
Hi guys,
Cython now supports -- or will, with 0.13, coming soon -- wrapping C++
classes without any mucking about with a lot of fake wrapping and
whatnot.
This could be a BIG step forward for a LOT of projects. Things like
direct wrapping of Enzo data structures -- particularly for inline
analysis! -- as well as interfacing with external C++ libraries for
graphics support. Once upon a time I spent some effort to wrap Enzo
with Cython, and the C/C++ divide just ended up causing headaches, but
this should reduce or eliminate that quite a bit.
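For the curious, the new syntax looks roughly like this (a sketch in the style of the Cython C++ documentation; the Rectangle class and its header are made-up stand-ins for whatever C++ you actually want to wrap):
---
# A .pyx file, compiled in C++ mode (e.g. language="c++" in setup.py).
cdef extern from "Rectangle.h" namespace "shapes":
    cdef cppclass Rectangle:
        Rectangle(int, int, int, int)
        int getArea()

cdef class PyRectangle:
    cdef Rectangle *thisptr          # pointer to the wrapped C++ instance
    def __cinit__(self, int x0, int y0, int x1, int y1):
        self.thisptr = new Rectangle(x0, y0, x1, y1)
    def __dealloc__(self):
        del self.thisptr
    def get_area(self):
        return self.thisptr.getArea()
---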
Just something to keep in mind.
-Matt
---------- Forwarded message ----------
From: Danilo Freitas <dsurviver(a)gmail.com>
Date: Thu, Jan 28, 2010 at 7:34 PM
Subject: [Cython] C++ support - A simple tutorial
To: cython-dev(a)codespeak.net
As you know, Robert is planning to include the C++ support work in the
next release. So we have some new stuff, and people need to learn it.
I wrote a very simple tutorial on the wiki [0], with some examples. I
think it's easy to learn (and use).
So, if you have any questions about it, just ask here.
[0] http://wiki.cython.org/gsoc09/daniloaf/progress
--
- Danilo Freitas
_______________________________________________
Cython-dev mailing list
Cython-dev(a)codespeak.net
http://codespeak.net/mailman/listinfo/cython-dev
Hi guys,
I changed the way the Pastebin was launched from "FastCGI" to
"Passenger." If you notice any bugs or glitches, let me know. I
think it's been quite a bit more responsive since I made the change;
initial launching is faster (but not instantaneous) and subsequent
requests are super fast.
-Matt
Hi devs,
I've got a much-improved halo merger tree class working. It works in parallel for the important stuff: the halo finding of course, as well as the halo membership comparison. The halo data is kept in a SQLite database which I think is the Right Way To Go for this kind of stuff, and I've carefully made sure the database is written to by only one thread at a time.
If this is something you are interested in, or have some requests of me to make sure it has some capability, let me know.
This is how you run on a set of outputs:
---
from yt.mods import *
from yt.extensions.MergerTree import *
files = []
first = 100
for i in range(first, 117):
    files.append('/mirage/sskory/reddead-427/DD%04d/data%04d' % (i, i))
MergerTree(restart_files=files, database='/tmp/427.db')
MergerTreeDotOutput(halos=[0], database='/tmp/427.db', current_time=1264547197)
---
That's it. The output is in Graphviz format, which has binary distributions for every OS that matters.
I'll eventually write some docs for this. I have it in the mercurial repo right now, and eventually it will be promoted to trunk, perhaps for the next point release.
Enjoy!
_______________________________________________________
sskory(a)physics.ucsd.edu o__ Stephen Skory
http://physics.ucsd.edu/~sskory/ _.>/ _Graduate Student
________________________________(_)_\(_)_______________
We're proud to announce the release of yt version 1.6, a point release
of the analysis and visualization toolkit for Adaptive Mesh Refinement
data. This release contains many small improvements to the codebase,
as well as several large improvements. Most prominently, it features a
completely redesigned, massively parallel implementation of the halo
finding algorithm HOP, as well as attendant improvements to particle
data IO and parallel communication. Additionally, the underlying data
structures for hierarchical datasets have been rewritten for speed and
clarity, making it easier to add new multi-resolution data formats in
the future.
yt features native support for Enzo
(http://lca.ucsd.edu/projects/enzo) data, providing a natural and
intuitive way to address physical regions in space as well as
processed data.
Some of the changes since yt-1.5 (released on November 3, 2009) include:
* (New) Parallel HOP ( http://arxiv.org/abs/1001.3411 )
* (Beta!) Software ray casting and volume rendering (Sam Skillman has
a gallery here:
http://casa.colorado.edu/~skillman/simulation_gallery/simulation_gallery.ht…
)
* Rewritten, faster and better contouring engine for clump identification
* Spectral Energy Distribution calculation for stellar populations
* Optimized data structures such as the hierarchy
* Star particle analysis routines
* Halo mass function routines
* Completely rewritten, massively faster and more memory efficient Particle IO
* Fixes for plots, including normalized phase plots
* Better collective communication in parallel routines
* Consolidation of optimized C routines into amr_utils
* Many bug fixes and minor optimizations
Installation instructions, documentation, recipes, mailing list info,
and assorted other items can be found at the website,
http://yt.enzotools.org/ , along with an annotated changelog at
http://yt.enzotools.org/doc/changelog.html .
yt is a Free and Open Source project, and we invite you to get
involved. For more information, join the yt-dev mailing list, or see
the hacking guidelines on the Wiki:
http://yt.enzotools.org/wiki/HackingGuidelines .
Sincerely,
The yt development team:
Matthew Turk
Stephen Skory
Britton Smith
Jeff Oishi
Sam Skillman
Devin Silvia
John Wise
David Collins
Hi everyone,
Stephen's gotten a lot of great stuff stuck into the SVN trunk, and
I'm trying to make sure that the particle support is there for him.
The new ParticleIO is getting close to being "ready for primetime",
but I'm going to need some help testing it. I've uploaded a patch:
http://paste.enzotools.org/show/288/
What this will do is co-opt the particle IO from a standard dict-like
access of an AMR3DData instance and, if it's a particle type, pass it
through the ParticleIO(.py) object. Right now this will only affect
Regions -- everything else will simply operate as normal. (i.e.,
spheres will just get passed on through.) As a note, the new
ParticleIO is set up to do two passes through a region, first counting
the particles that will end up in the object and then reading them.
This speeds up IO considerably, as it reduces the number of array
creation operations to a minimum.
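To make the two-pass idea concrete, here's a generic sketch (not the actual ParticleIO code; the grid objects and the region_filter callable are stand-ins for the real machinery): count the surviving particles first so the output array can be allocated exactly once, then fill it on the second pass.
---
import numpy as np

def read_particles_two_pass(grids, region_filter, field):
    # Pass 1: count the particles that fall inside the region.
    count = 0
    for g in grids:
        pos = g.read_particle_field("position")   # stand-in reader method
        count += region_filter(pos).sum()
    # Allocate the output array exactly once.
    result = np.empty(count, dtype="float64")
    # Pass 2: read again and fill the preallocated array.
    offset = 0
    for g in grids:
        mask = region_filter(g.read_particle_field("position"))
        vals = g.read_particle_field(field)[mask]
        result[offset:offset + vals.size] = vals
        offset += vals.size
    return result
---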
(My sample script for testing is here, but I'd prefer if you could
test in a real-world application...
http://paste.enzotools.org/show/289/ )
If possible, could a few of you test this out, and let me know if this
patch interferes with any normal behavior? I'm running my own tests
here and it's looking *okay*, but it should get tested in the field a
bit before being committed. After that, I'll add the necessary
methods for spheres, and then the ParticleIO should be mostly handled
by the new mechanism.
Thanks for any ideas and testing!
-Matt