Hi all (specifically Jeff!),
I'm trying to use Reason to visualize some of my data, and I'm
running into issues with centering in the plot window.
Specifically, I'm trying to visualize a collapsing object, which is
offset from the center by some small value. I can sort of mock this
up by zooming, re-centering on that object, then zooming, then
re-centering, and so on and so forth.
It would be very convenient to be able to specify a center for the
PlotWindow when it's created. I've added items to both the slice and
the projection to optionally "Center on max", but it's not clear to me
what the best way is to communicate that information to the PlotWindow object.
What's the best way to specify a center about which the panning and
zooming will occur? And, since this will likely result in edges that
are no longer within the domain, is it alright if I turn on
periodicity by default in the Plot Window?
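For concreteness, here is a plain-Python sketch (not the yt API) of the bounds arithmetic I have in mind, assuming a unit [0, 1] domain; with periodicity on, a window centered near an edge wraps instead of clipping:

```python
def window_bounds(center, width, periodic=True):
    """Return (left, right) edges of a 1D plot window in a unit domain."""
    lo, hi = center - width / 2.0, center + width / 2.0
    if periodic:
        # wrap out-of-domain edges back into [0, 1) instead of clipping
        lo, hi = lo % 1.0, hi % 1.0
    return lo, hi

# a window centered near the domain edge wraps around rather than clipping
print(window_bounds(0.05, 0.2))  # roughly (0.95, 0.15)
```

The same wrapping would apply independently per axis; panning would just shift `center` and zooming would shrink `width` about it.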
IPython is going to be releasing another in their 0.10.x series, which
is what we've been running on for a while. They have recently
overhauled all the methods of parallelism, as well as their kernel and
engine, and I'm really not clear on how this changes the overall
architecture of the system. 0MQ is the underlying machinery they now
use, and I've played with it and it's definitely as awesome as they
say. Unfortunately it's a bit tricky to get installed.
Anyway, the long and the short of it is that IPython is going to be
releasing some backwards-incompatible changes. The 0.11 series even
breaks the "insert_ipython()" function in yt/funcs.py, as well as iyt,
by moving names around and shuffling things about. I don't think I can
keep up with the changes myself.
Can someone volunteer to step up and see if they can make the
various IPython-dependent yt components work with both the current
(i.e., 'stable') and future (i.e., 'completely re-designed for
ground-up parallelism on clusters of a nature most of us don't use')
releases?
---------- Forwarded message ----------
From: Fernando Perez <fperez.net(a)gmail.com>
Date: Fri, Apr 8, 2011 at 3:55 AM
Subject: [IPython-dev] Release plans, yet again. And a road to 1.0,
believe it or not.
To: IPython Development list <ipython-dev(a)scipy.org>
I know, everybody laughs when I talk about releases... But for the two
of you still listening:
- 0.10.2 will be out asap, literally as soon as I find a block of a
few hours free. I thought I'd have the rc released last week but some
family health problems have made the last week very unpleasant for me,
and I'm only now getting back on track. I should be able to cut the
0.10.2 final on Saturday at the latest.
- 0.11: we now have no pending pull requests and just a few critical
bugs. We do need to give the massive merges of the past few days some
time to shake out in user testing; today I discussed with Thomas some
important things that need to be done to the sqlite history code, and
I have a few local things as well. But since all the big stuff is
done, we should finally be looking at pushing 0.11 out the door in
just a few weeks. If you have anything on your local trees that you
think is in good shape for 0.11, try to make a pull request before too
long (though we'll announce the release freeze in advance; we're not
quite there yet).
So now is the time to really start playing with master. Install
zeromq/pyzmq 2.1.4 and take it for a spin. Anything that breaks, let
us know by filing a bug report. If a bug is already filed but not
listed as critical and you think it should be, please let us know and
we'll look into raising its priority. We want to focus on flushing out
only the critical bugs before cutting 0.11, so that we can start a
quicker release cycle afterward.
The plan will be to try and push small releases after 0.11 to the
point where we are happy with the API, and then simply start a
stabilization series like matplotlib had with 0.99.x, leading to 1.0.
I don't want to make any promises about when 1.0 will be out, but
ideally it would be by this summer. We'll see, though; I've broken
such promises enough times that the joke isn't funny anymore.
Many thanks to everyone who has jumped in recently with so much great
work to get us to this state. I particularly want to thank Thomas,
whose massive clearing job initially really got us 'unstuck' from
behind a pile of accumulated pull requests and bugs, and who now has
moved into doing brain surgery right at the core, improving some of
our most delicate code in really nice ways (the recent AST work is one
example).
I've just encountered the same h5py bug in another place. This is with
h5py 1.2.0. The newest is 1.3.1. What do we think about upgrading?
510.621.3687 (google voice)
I just spent some time trying to diagnose a segfault when the
HaloProfiler was trying to make projections of the haloes. The problem
was in making the FixedResolutionBuffer: the image it was trying to
make was 16K by 16K, which cannot fit in any normal machine's memory.
I think this is because I am looking at a zoom-in simulation of a
smallish cosmological size, with high resolution in the region of
interest. The default projection width in the HaloProfiler is 8 Mpc,
so I had a big numerator and a small denominator.
What do we think about changing this default to some multiple of the
halo's maximum radius? I think any constant value will cause problems.
I think we've stumbled upon a h5py bug! Without getting into gritty
details, the LoadHaloes() code actually opens, closes, and then
re-opens the h5 file when data is read in. For some reason, the
re-open wasn't working: the file was still marked as closed, so trying
to read data from it failed. I've added a band-aid of sorts that seems
to fix the issue. Go ahead and update
and let me know if you continue to have problems.
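For the curious, the band-aid amounts to a "check liveness, then reopen" pattern. The sketch below uses plain Python files (which expose .closed) rather than h5py handles (which expose a similar check through f.id.valid), so the names here are illustrative:

```python
import sys

def ensure_open(handle, reopen):
    """Return `handle` if it is still usable, else a fresh one from reopen().

    Plain files expose .closed; h5py File objects would need the
    analogous check on handle.id.valid instead."""
    if getattr(handle, "closed", True):
        return reopen()
    return handle

# simulate the stale, closed handle the bug leaves behind
f = open(sys.executable, "rb")
f.close()
g = ensure_open(f, lambda: open(sys.executable, "rb"))
print(g.closed)   # False: we got a live handle back
g.close()
```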
I noticed recently that, as far as I can tell, there is nothing in the
docs about all the various options one can use with 'yt' on the
command line (e.g. 'yt plot -p data0043'). Am I wrong? If I'm right,
I'm offering to write something up, since I'd like to learn about all
the options, and that would be a good way to do it.
We are proud to announce the release of yt version 2.1. This release
includes several new features, bug fixes, and numerous improvements to the
code base and documentation. At the yt homepage, http://yt.enzotools.org/ ,
you can find an installation script, a cookbook, documentation, and a guide
to getting involved.
yt is an analysis and visualization toolkit for Adaptive Mesh Refinement
data. yt provides full support for Enzo, Orion, and FLASH codes, with
preliminary support for RAMSES, ART, Chombo, CASTRO and MAESTRO codes. It
can be used to create many common types of data products such as:
* Arbitrary Data Selection
* Cosmological Analysis
* Halo Finding
* Parallel AMR Volume Rendering
* Gravitationally Bound Objects Analysis
There are a few major additions since yt-2.0 (released January 17, 2011),
including:
* Streamlines for visualization and querying
* A treecode implementation to calculate binding energy
* Healpix / all-sky parallel volume rendering
* A development bootstrap script, for getting going with modifying and
contributing to yt
* CASTRO particles
* Time series analysis
If you can’t wait to get started, install with:
$ wget http://hg.enzotools.org/yt/raw/stable/doc/install_script.sh
$ bash install_script.sh
Development has been sponsored by NSF, DOE, and university funding. We
invite you to get involved with developing and using yt!
Please forward this announcement to interested parties.
The yt development team:
I've tried to put together a mechanism to import our bugs cleanly from
the existing Trac into BitBucket. But the API for BitBucket's bug
creation mechanism doesn't (currently) let you set the resolution of a
bug, or the milestone info.
With the new development of Reason, Cameron and I talked about how to
track future changes, and the idea of using the bug tracker came up.
I'm a little bit skittish about using the bitbucket one unless we have
a full import. But I'm willing to get over that. So far the bug
tracker has really only ever been a mechanism to keep track of things,
never for actual bug reports. (Maybe this will change?) I see three
options for moving to bitbucket:
1) Import what I can, for history's sake, and then close down Trac.
2) Wait for the API to be improved, and for now, don't use the
BitBucket tracker.
3) Give up, just start over on BitBucket, and leave Trac up but hidden.
Can I get a real quick vote, 1/2/3? I know that the bug tracker is
really never used, but I think transitioning to the BB one might
change that, so I want to make sure we do it right.
Importing the Wiki is a lot easier, because we have very little
content in the wiki that needs to be retained. And if we end up
moving the bugs out of Trac, that whole infrastructure can be retired.
This week, after Cameron announced and Jeff demo'd the new GUI for yt,
we've been thinking about some other obvious places to go with the
interface. Tonight Tom and I did some brainstorming
about what a *personal* database of simulation outputs might look
like; this isn't so much about large, comprehensive databases of
simulations, but rather about a pragmatic convenience for people who
run their data on a single file system.
As it stands, yt dumps the last 200 parameter files into a .csv file
in your ~/.yt directory. These entries include the unique id, the
parameter file's name on the file system, and some other miscellaneous
data that's not particularly important.
We were thinking it might be neat to have a really simple database
that just tracks these files in place. It could be ephemeral -- so that files that
get removed or moved around would simply be removed from the DB -- and
maybe it would contain information (if available from the simulation
code) about the previous output in the simulation. Enzo has this, and
it may be coming to other codes, in the form of UUIDs that get tracked
and carried along by the sim.
So there are a couple ways this might work. We came up with a few:
* A "locate" command that just returns all the basic parameter files
/ outputs that could be loaded into yt (or other tools), maybe based
on a quick search
* A registering of these outputs into the database at write time by
the simulation code
* An intentional import of, say, a directory -- simply type "yt
import" in the base dir and it'll find all the outputs below that.
(This used to exist in the form of "fimport", but it has bit-rotted a bit.)
* An unlimited parameter file storage, where all loaded parameter
files get added (unlike the 200 limit we have)
What do people think? With the new GUI, I think this becomes a lot
more interesting -- particularly if you could re-assemble a graph of
the outputs, and maybe even query them, if you were to store the
parameter files in full.
Anyway, just a handful of thoughts. Stephen, you have quite a bit of
experience with SQLite; do you think this could be a fun application
of it? Or would that be overkill? Maybe the whole system could be
handled with just filenames?