Sam Leitner and I have just started writing a new yt frontend for the distributed version of
ART (a completely different file format, but very similar units and structure to the current
ART frontend). artio is the interface library we use to read and write the new file format,
similar in principle to RamsesReader.
We're targeting yt-3.0 in hopes of using the new Oct tree support written for RAMSES, and
we hope to help develop and generalize that part of yt. We'll focus on getting
very basic AMR support implemented and leave particle support for a later phase.
The online documentation on frontends is out of date and lacking in some areas, so we'll
probably be flooding the list with questions over the next few weeks. I am not a current
user of yt, so I'm also trying to catch up on general terminology and may ask some basic
or ill-posed questions. Thanks for your patience.
We are currently able to read parameters from artio and set a few fields and units. This
development has produced the following questions:
- Is there a list of the properties of StaticOutput that are required? Each frontend has a slightly
different list, and it's not clear which ones are in current use or required (except when the code
throws an exception on load). For now we're setting the following:
and in the case of cosmological simulations:
What about units and time_units?
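For concreteness, here is a sketch of the sort of attributes we mean. The names are guessed from what other frontends appear to set; treat this as an assumption, not a definitive list:

```python
# Hedged sketch of the attributes set in _parse_parameter_file.
# Attribute names mirror what other frontends seem to use and may be
# incomplete or wrong -- this is exactly what we are asking about.
class ARTIOStaticOutputSketch:
    def _parse_parameter_file(self):
        self.current_time = 0.0
        self.dimensionality = 3
        self.domain_left_edge = [0.0, 0.0, 0.0]
        self.domain_right_edge = [1.0, 1.0, 1.0]
        self.unique_identifier = "artio_example"
        # and, in the case of cosmological simulations:
        self.cosmological_simulation = 1
        self.current_redshift = 0.0
        self.omega_matter = 0.3
        self.omega_lambda = 0.7
        self.hubble_constant = 0.7
```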
- It looks like GeometryHandler has replaced AMRHierarchy as the preferred frontend interface? Chombo
now uses GridGeometryHandler rather than AMRHierarchy, for example. How does that affect io.py? The
online documentation (http://yt-project.org/doc/advanced/creating_frontend.html) describes Grid instances
being passed to methods in io.py. What if we're using an OctreeGeometryHandler rather than a
GridGeometryHandler?
- What would be the best way to start developing a customized geometry handler? Where are the major entry
points, and what functions are required vs optional? Is it possible to begin by writing something coarse that doesn't
implement any performance features like chunking or parallelism?
- We'd like to use the RAMSES and ART frontends as examples, since their data structures are very similar to our
own. How current are those frontends in yt-3.0? Are there any major pieces which are scheduled for deprecation
or refactoring that we should be aware of? In RAMSES, for example, some field names are hard-coded in
RAMSESGeometryHandler._detect_fields. Shouldn't this information be pulled from the fields interface?
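To illustrate what we mean by pulling field names from a registry rather than hard-coding them in _detect_fields, here is a generic sketch; the class and method names are hypothetical, not yt's actual fields interface:

```python
# Generic sketch of a per-frontend field registry, so a geometry handler
# could consult declared fields instead of a hard-coded list.
# All names here are hypothetical, not yt's real API.
class FieldRegistry:
    def __init__(self):
        self._fields = {}

    def add_field(self, name, units=None, convert=None):
        # Record a field and its metadata for later lookup.
        self._fields[name] = {"units": units, "convert": convert}

    def known_fields(self):
        return list(self._fields)

# A frontend would declare its on-disk fields once, up front:
ramses_fields = FieldRegistry()
for name in ("Density", "x-velocity", "y-velocity", "z-velocity", "Pressure"):
    ramses_fields.add_field(name)
```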
Thanks for the help!
Scientific Computing Consultant
Research Computing Center, KICP
University of Chicago
The talk of the evolving 3.0 docs makes me wonder what the best place is
to keep evolving developer documentation between releases. I see
four options, which have differing levels of editability.
* Sphinx docs (pain to deploy, requires repo write permission)
* Static websites (requires deeper than repo write permission)
* Wiki on BB (requires repo permission; can be edited offline)
* Confluence ( www.atlassian.com/software/confluence ) (independent
permissions, can't be edited offline)
Confluence is the slickest, and the Wiki in the past has gone out of
date relatively quickly. Any strong feelings on which I should start
using as a dumping ground for information about 3.0 vs 2.x?
Matt, Chris Moody, and I have started a bit of a conversation
off-list about overhauling the halo finding machinery that we would
like to bring here. This has been sparked by some recent work on the
Rockstar integration into yt. Matt will surely be able to summarize
these issues better, but here's how I understand things:
- Since Matt wrote the halo finding machinery years ago, it isn't very
scalable to high levels of parallelism. In particular, halo object
information isn't shared across parallel processes as fully as it should
be. For some of the halo finders, data is localized to the processor
that "owns" the halo. When running in parallel, the local values need
to be broadcast to all the other processors when accessed. This is why
you get the halo mass with a function "halo.total_mass()" instead of a
simple class value "halo.total_mass". The "total_mass()" function (and
other halo object functions) is wrapped with a decorator that
broadcasts the value transparently.
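A minimal sketch of that decorator pattern, with a fake single-task communicator standing in for MPI; the names are illustrative, not yt's actual code:

```python
# Sketch of the broadcast pattern described above: the task that "owns"
# the halo computes the value locally, then broadcasts it so every task
# gets the same answer from halo.total_mass().
class FakeComm:
    # Single-task stand-in for an MPI communicator.
    rank = 0

    def bcast(self, value, root=0):
        return value  # real MPI_Bcast would ship this to all ranks

def broadcast_from_owner(func):
    def wrapper(self, *args, **kwargs):
        if self.comm.rank == self.owner:
            value = func(self, *args, **kwargs)
        else:
            value = None  # non-owners receive the value in bcast
        return self.comm.bcast(value, root=self.owner)
    return wrapper

class Halo:
    def __init__(self, comm, owner, masses):
        self.comm, self.owner, self.masses = comm, owner, masses

    @broadcast_from_owner
    def total_mass(self):
        # Only the owning task actually has the particle masses.
        return sum(self.masses)
```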
I think it should be fairly straightforward to ensure that all halo
objects on all tasks know all the key bits of info, but it's not clear
to me how particle access would work.
- Matt has also raised the idea of creating "callbacks" for halo
finders. I don't understand this quite as well, but I think it means
that it would be possible to write functions that would operate on the
particles as the halo finder runs (or after it has finished finding halos,
but before it returns) to ensure good parallelism and speed. He has also
mentioned that it would be possible for these callbacks to pull grid
information from Python if that was needed for the analysis.
What I don't quite understand is where these callbacks would
actually operate: in C for HOP, in Cython for Rockstar, and in Python for
parallel HOP? Or would they happen between what the halo finder
returns and the building of the halo lists?
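Here is how I picture the callback idea, as a hedged sketch; all the names are hypothetical and the grouping step is a trivial stand-in for a real finder:

```python
# Sketch of the "callback" idea as I understand it: functions the halo
# finder invokes on each halo's particles before returning, while the
# particle data is still local to the owning task.
def group_particles(particles):
    # Trivial stand-in for the real finder: one halo containing everything.
    return [{"particles": list(particles)}]

def find_halos(particles, callbacks=()):
    halos = group_particles(particles)
    for halo in halos:
        for cb in callbacks:
            cb(halo)  # e.g. compute quantities in place, per halo
    return halos

def total_mass_callback(halo):
    # Example callback: tag each halo with its total particle mass.
    halo["total_mass"] = sum(p["mass"] for p in halo["particles"])
```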
I probably have more things to say and ask, but I think I'll let Matt come in.
510.621.3687 (google voice)
Hi all (mainly Matt),
I know we have several threads open on Rockstar and halo finding
already, but I'm wondering how Rockstar is going to work inline with
Enzo? It seems to me that every MPI task will need to be a "reader,"
but that doesn't leave room for the "writers" or the "server". Matt,
you mentioned to me that you had thought about this already. I'm
wondering if you could share your ideas? I'd love to help make it happen.
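For context, here is my rough understanding of Rockstar's process topology, sketched in Python; the layout (rank 0 as the server, the next ranks as readers, the rest as writers) is my guess, and exactly why inline Enzo is awkward:

```python
# Hedged sketch of how I understand the rank layout: one server, then
# num_readers reader tasks, with the remaining tasks acting as writers.
# Inline Enzo would want every rank to be a reader, which leaves no
# ranks for the other two roles -- the problem being asked about.
def assign_role(rank, num_readers, num_writers):
    if rank == 0:
        return "server"
    if rank <= num_readers:
        return "reader"
    return "writer"
```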
We need tests for plot windows and other visualizations.
What would people think of actually creating a bunch of plot windows,
saving them to disk, and then deleting the resultant files? Is that
okay to do? Nathan?
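Something like the following pattern is what I have in mind, with save_plot() standing in for an actual plot-window save call (a sketch, not the real API):

```python
import os
import tempfile

# Sketch of the proposed test pattern: render to a temporary file,
# check it exists and is non-empty, then clean up after ourselves.
def save_plot(path):
    # Stand-in for e.g. a plot window's save(); just writes some bytes.
    with open(path, "wb") as f:
        f.write(b"png-bytes-would-go-here")

def check_plot_roundtrip():
    fd, path = tempfile.mkstemp(suffix=".png")
    os.close(fd)
    try:
        save_plot(path)
        assert os.path.exists(path) and os.path.getsize(path) > 0
    finally:
        os.remove(path)  # delete the resultant file
    return not os.path.exists(path)
```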
I'm just about to update my PR for rockstar, but as I will say there,
it's not quite ready to pull in. There's a strange thing that needs
fixing first. The strange behavior seems to have to do with the hierarchy.
When the hierarchy isn't instantiated first inside Python,
it appears that I get hangs in rockstar_interface.pyx in
rh_read_particles() where it needs to access the grid list through the
hierarchy. It looks like some of the time, one or more of the readers
will function correctly, but most of the time at least one reader gets
stuck and sits doing nothing other than making my computer's fans spin.
If I run rockstar this way, everything goes fine:
from yt.mods import *
from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder
import glob

pm = 9.00026e+10
files = glob.glob("DD*/DD????")
ts = TimeSeriesData.from_filenames(files[-1], parallel=False)
for t in ts:
    t.h  # touch the hierarchy before setting up Rockstar
    rh = RockstarHaloFinder(ts, num_readers=3, particle_mass=pm)
    rh.run()
Notice that above I'm touching the hierarchy before setting up
Rockstar in the for loop. However, if I change "files[-1]" to
something that has more than one entry, the problem happens with the
very first dataset in the time series. Or, if I comment out the for
loop above, even with "files[-1]", things do not work.
If I comment out the for loop above, but in TimeSeriesData() I require
that pf.h get touched before __iter__ returns the snapshot reference,
things work fine in Rockstar for "files[-1]" and "files[:]".
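A sketch of that workaround, with a fake output object standing in for a real parameter file:

```python
# Sketch of the proposed fix: __iter__ forces the hierarchy to
# instantiate before each snapshot is handed out. FakeOutput stands in
# for a real pf; its .h property just records that it was touched.
class FakeOutput:
    def __init__(self):
        self.h_touched = False

    @property
    def h(self):
        self.h_touched = True
        return self

class TimeSeriesSketch:
    def __init__(self, outputs):
        self.outputs = outputs

    def __iter__(self):
        for pf in self.outputs:
            pf.h  # instantiate the hierarchy up front
            yield pf
```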
I've seen the same behavior on my Mac and on Linux. If any one has any
ideas on the best way to handle this, let me know. Thanks!
This is pretty cool. Really, really cool.
I think that the Reason widgets could be implemented once the
webapp refactoring they're in the midst of doing is done -- we still
need to supply a bit of chrome around the CSS/JS, which we can't do right
now in the way I'd like to. I think our widget registry system and
callback scheme will fit this exactly, as well. So as
soon as it's viable, I want to migrate Reason to the IPython notebook.
The way they're doing it here, I believe, implements a display
protocol which doesn't have callbacks; in principle our existing WebGL
widgets (which use XTK) could be implemented this way, as we don't
need callbacks for them. Anybody want to give it a shot? :)
---------- Forwarded message ----------
From: Fernando Perez <fperez.net(a)gmail.com>
Date: Tue, Nov 13, 2012 at 5:22 PM
Subject: [IPython-dev] Nice WebGL integration in the notebook at the
IPython sprints at PyCon Canada (tried JSMol first)
To: IPython Development list <ipython-dev(a)scipy.org>
I know a number of people have asked about 3d in the notebook. We
had a team try to integrate JSMol (the JS version of the venerable
JMol) but they found too many bugs for that to be a viable path, and
instead focused on doing it directly with pure WebGL and the new JSON
Here's a quick demo:
They had to run and I don't have the GitHub repo URL right now; I'll
get it soon and post it. But it looked very nice!
I'll try to make a little blog post later on with a summary of what we
got done, there's been a ton of good work.
- Has anyone run Rockstar with num_readers > 1? I just did a run with
4 readers (on 12 cores total) and I got repeated (but very slightly different)
halos in the output.
- When you run Rockstar, does it always die with a segfault? On both
Mac and Linux, after everything is done the run dies with a segfault
that I can't make heads or tails of right now. It doesn't seem to
affect the content of the output, but it's obviously wrong.