The following movie shows what was possible with yt-dev ~2.1: the
camera rotates about the z axis of the data while keeping a constant
azimuthal angle w.r.t. the data's z axis:
curl -O http://lca.ucsd.edu/~gso/test_movies/Density.avi
The following movies show what happens now with a rotation vector and
with no rotation vector.
Weird camera rotation:
curl -O http://lca.ucsd.edu/~gso/test_movies/with_rot_vector.mpg
Camera rotates about the "up" axis of the camera:
curl -O http://lca.ucsd.edu/~gso/test_movies/no_rot_vector.mpg
I brought this issue up a while back but got distracted before it was
resolved; over the past two days I did more digging and testing.
I can no longer rotate the camera about the z axis of the data while
keeping the azimuthal angle constant, whereas I could in, I believe,
yt-dev 2.1 or so.
What I tried/found:
I thought I mailed the dev list but couldn't dig it out of the yt-dev
archive, so maybe I only complained on IRC before. This thread is the
latter portion of that conversation, where Nathan and Sam were helping
me with suggestions on what to try.
The example script Sam provided (by not specifying the north_vector)
will rotate the camera about the camera's own z (up) axis, but not the
data's z axis. Looking at the image, what I am trying to do is have
the camera rotate around the disk about the axis normal to the disk's
plane (assume the disk lies in the x-y plane).
I tried playing with north_vector and steady_north in the camera
object, and with rot_vector in cam.rotation, and made sure the vectors
are normalized.
I found that if I specify the normal vector (usually designated "L" in
example scripts) as the x or -x axis, and the north vector as the z
axis, I can look at the data cube (in this case Enzo data) and watch
it rotate about the z axis, the behavior I expected. However, if my L
is tilted off the coordinate axes (not along x or y), then no matter
what I specify as the north vector, the rotation angle changes and the
cube wobbles or precesses.
I've modified Sam's script to image the entire box, because the effect
is harder to see when looking at a sphere.
This script produces the strange rotation; if the rot_vector is taken
out on line 17, the rotation about the camera's up axis is recovered.
But that's not what was done before and not what I wanted.
I also wrote a script that reproduces the rotation effect I wanted,
but it is a lot more work than before: I have to specify a new L and a
new up vector for the camera for every frame.
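The per-frame bookkeeping boils down to rotating the view normal about the data's z axis by hand. A minimal numpy sketch of that step (variable names are mine, not the actual script's; the camera re-pointing itself is left as a comment since it needs a loaded dataset):

```python
import numpy as np

def rotate_about_z(vec, theta):
    """Rotate a 3-vector by angle theta (radians) about the data's z axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return np.dot(R, vec)

n_frames = 60
L = np.array([0.3, 0.0, 1.0])      # tilted view normal
L /= np.linalg.norm(L)
north = np.array([0.0, 0.0, 1.0])  # keep "up" tied to the data's z axis

for i in range(n_frames):
    theta = 2.0 * np.pi * i / n_frames
    L_i = rotate_about_z(L, theta)
    # re-point the camera with (L_i, north) here and snapshot a frame
```

Because the rotation matrix only mixes the x and y components, the angle between L_i and the z axis stays constant, which is exactly the non-wobbling behavior described above.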
Hope this helps track down the problem; cam.rotation is so much
simpler to use.
---------- Forwarded message ----------
From: Andrew Collette <andrew.collette(a)gmail.com>
Date: Tue, Sep 11, 2012 at 11:14 AM
Subject: ANN: HDF5 for Python 2.1.0 BETA
Announcing HDF5 for Python (h5py) 2.1 BETA
We are proud to announce the availability of HDF5 for Python (h5py) 2.1-beta.
HDF5 for Python (h5py) is a general-purpose Python interface to the
Hierarchical Data Format library, version 5. HDF5 is a mature scientific
software library originally developed at NCSA, designed for the fast,
flexible storage of enormous amounts of data.
From a Python programmer's perspective, HDF5 provides a robust way to
store data, organized by name in a tree-like fashion. You can create
datasets (arrays on disk) hundreds of gigabytes in size, and perform
random-access I/O on desired sections. Datasets are organized in a
filesystem-like hierarchy using containers called "groups", and
accessed using the traditional POSIX /path/to/resource syntax.
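As a quick illustration of that model (a minimal sketch; the file name and group layout are arbitrary):

```python
import h5py
import numpy as np

# Create a file with a group hierarchy and a dataset inside it.
with h5py.File("example.h5", "w") as f:
    f.create_dataset("/simulations/run1/density",
                     data=np.arange(100.0).reshape(10, 10))

# Reopen it, address the dataset by its POSIX-style path, and do
# random-access I/O on just the section we want.
with h5py.File("example.h5", "r") as f:
    block = f["/simulations/run1/density"][2:4, :]
    print(block.shape)  # (2, 10)
```

Only the requested rows are read from disk, which is what makes the "arrays on disk" model practical for datasets far larger than memory.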
H5py 2.1-beta is available for Unix and Windows. The beta period will last
approximately 2 weeks. Comments and suggestions are welcome, either at the
project issue tracker or on the mailing list (h5py at Google Groups).
Downloads, FAQ and bug tracker are available at Google Code:
* Google code site: http://h5py.googlecode.com
Documentation is available at Alfven.org:
What's new in h5py 2.1
* The HDF5 Dimension Scales API is now available, along with high-level
integration with Dataset objects. Thanks to D. Dale for implementing this.
* Unicode scalar strings can now be stored in attributes.
* Dataset objects now expose a .size property giving the total number of elements.
* Many bug fixes.
On Friday, Anthony submitted a PR to the yt-3.0 branch:
This PR is pretty invasive, and done by regular expressions.
Basically, for a long time we've been sticking with a convention I
started using about ... six years ago ... when NumArray was the
dominant array language. (Or when I was still too removed from
Python's scientific community to see otherwise.) The array library
was shortened to 'na'. Almost immediately after, NumPy took over and
while we switched to NumPy we never updated the shorthand in the
Python code to 'np'. (The Cython code always uses 'np'.) Most Python
tutorials use np instead of numpy, and I'd like to encourage best
practices in yt as well as suggest we try to fit into the broader
ecosystem of packages.
Anyway, this is kind of a bandaid that needs to be ripped off at some
point, and I think it's appropriate to discuss now. I see three
options, which basically break down by level of disruption and by how
much they ease the dual lines of development we currently have.
1) Put this PR (which applies only to 3.0) on hold. This way, merging
from 2.x to 3.0 can proceed easily, and the disruption is completely
pushed off for a bit.
2) Accept the PR. This increases the burden on me for merging
considerably, and it would fall on me. Any file where both a numpy
line and another line are changed would throw a conflict that I'd have
to manually resolve. But, because it would just be in the 3.0 line,
disruption would be kept to a minimum.
3) Accept the PR *but* also mandate that we apply it to the main
repository's development branch (2.x). This would be the most
disruptive, but it would also keep merging difficulty to a minimum.
As a compatibility layer, we'll keep "na" in the yt.mods namespace,
which means my_plugins.py files would still work, as would existing
scripts.
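The compatibility layer can be a one-line re-export; a sketch of what keeping "na" around might look like (the module layout is assumed, not quoted from the PR):

```python
# Sketch of the yt.mods compatibility shim: expose numpy under both the
# new "np" name and the legacy "na" alias, so user scripts and
# my_plugins.py files written against the old convention keep working.
import numpy as np

na = np  # deprecated alias retained for backwards compatibility
```

Since `na` is just another binding to the same module object, old code pays no cost and needs no changes while new code standardizes on `np`.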
Because this could be disruptive for any major, outstanding forks, I
also think it needs to be discussed here. (I'm actually kind of -1 on
big discussions happening in pull requests.) My vote is for #3. I'd
rather get this over with, since we all know it probably ought to
happen at some point in the future.
What do people think -- of the three options, which is your favorite,
and do you have strong feelings against any one option?
I've just run into an issue with the way the plot window currently handles derived field display_names. Everything works great so long as the display name is an ASCII string (true for 99% of yt fields), but if I define a new field and want the display name to include some LaTeX macros, things currently break.
I've hacked up a solution in this changeset: https://bitbucket.org/ngoldbaum/yt-cleancopy/changeset/5087f6769726a9527b50…
This allows me to make plots like this: http://i.imgur.com/a8G9e.png Or this: http://i.imgur.com/TTgAY.png if I define the fields as in this paste: http://paste.yt-project.org/show/2683/
This wasn't a problem using PlotCollection since the colorbar label wasn't rendered in mathtext unless the display_name string was explicitly passed as mathtext. When I finished up plot window I decided to force the axis labels and colorbar labels to be mathtext so that the unit label and field name are rendered in the same font.
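One way to make that robust to user-supplied mathtext is to wrap a display name in `$...$` only when it isn't already wrapped. This is an illustrative helper sketching the idea, not the actual changeset:

```python
def ensure_mathtext(display_name):
    """Return display_name as a matplotlib mathtext string, leaving
    strings that already carry $...$ delimiters (e.g. user-defined
    LaTeX macros) untouched."""
    if display_name.startswith("$") and display_name.endswith("$"):
        return display_name
    # Escape spaces so mathtext renders them instead of collapsing them.
    return r"$\rm{%s}$" % display_name.replace(" ", r"\ ")
```

With a check like this, the unit label and field name still render in the same font for plain-ASCII names, while names already expressed as mathtext pass through unchanged.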
A simpler solution would be to go back to annotating the labels using the matplotlib fonts. I'm -1 on that since I think the plots don't look as nice.
Personally, I'm +1 on my changeset but since this is a biggish change that might interfere with user-defined fields, I wanted to open a discussion on the dev list about how to handle the issue before submitting a PR.
Hi all (especially Chris & Matt),
this morning I've spent some time trying to get rockstar to work for
me, and I've made some progress, I think, but I'm getting answers I
don't trust, and a crash, besides, so there's more to be done.
What I've done is to pull and merge Chris' current PR for rockstar
into a copy of the current tip of yt. Then upon that, I've made the
following changes (http://paste.yt-project.org/show/2671/). In
particular, here's the substantive things I've done:
- I modified the part of RockstarHaloFinder.__init__ that finds the
total number of particles to work in parallel (meaning, not all the
particles are loaded on all processors).
- I am running on only one data output in my time series. I think this
is why I've had to comment out one of the ProcessorPool() blocks, but
frankly, I don't really understand the ProcessorPool stuff. If I keep
both blocks, the second one in the sequence complains that there are
no workers available.
- My test data does not have the 'particle_type' field, which the
rockstar wrapper was relying on. I've made a few changes that
basically assume that if there is no 'particle_type' field, everything
is a DM particle (which is the case for my test data). This has
required a few changes in rockstar.py and rockstar_interface.pyx (look
for the rh.dm_type stuff).
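The fallback logic amounts to the following (a standalone sketch with a stand-in data container and made-up function name, mirroring the change rather than quoting the diff):

```python
import numpy as np

def dark_matter_selection(data, dm_type=1):
    """Select dark matter particles; if the dataset has no
    'particle_type' field, assume every particle is DM."""
    if "particle_type" in data:
        return data["particle_type"] == dm_type
    # No particle_type field: everything is a DM particle.
    n = len(data["particle_position_x"])
    return np.ones(n, dtype=bool)
```

For datasets like my test data, the second branch makes the wrapper usable without requiring the field to exist.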
With these changes, I can get some stuff to run. Here are the most
interesting lines I see:
[ 0s] Accepting connections...
[ 0s] Accepted all reader / writer connections.
[ 0s] Verified all reader / writer connections.
[ 0s] Reading 1 blocks for snapshot 0...
reading from particle filename ./inline.0: data0012
[ 21s] Transferring particles to writers...
[ 22s] Analyzing for halos / subhalos...
[ 33s] Constructing merger tree...
[ 33s] [Success] Done with snapshot 0.
Immediately after this I see some unhelpful crash error messages.
I do get a "halos_0.0.ascii" file (and some other stuff) in my
xxx_rockstar directory, but the centers of mass for the halos are very
fishy (and likely everything else). I think this might be related to
the fact that I'm getting a crash. I think that the centers are fishy
because, for example, the largest halo in the rockstar text file is
nowhere near the largest that HOP finds, although they do have a
similar number of particles.
There was a bit of momentum on rockstar a couple weeks ago, and I'm
hoping that we can try to get this thing working. I can share the
dataset I'm using for my tests, if anyone wants it (it's similar to
one of the compendium datasets, if not identical), and my script as
well.
I'm running into an issue with FLASH 2D data, namely that occasionally
a grid will end up with a z-position of 0.5 instead of
domain_left_edge+0.5. Since most of the grids are at the latter
location in z, this messes up slices.
Nathan has traced the problem to frontends/flash/data_structures.py,
in particular the for loop at line 146. It looks like I'm hitting an
edge case where na.rint rounds 0.4999999... to 0 instead of rounding
0.5 to 1.
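The edge case is easy to reproduce in isolation. A standalone illustration (not the frontend code itself) of a value that is mathematically 0.5 but sits a hair below it in floating point:

```python
import numpy as np

val = 0.7 - 0.2             # mathematically 0.5; in doubles 0.49999999999999994
print(np.rint(val))         # 0.0 -- rounds down, landing the grid in the wrong cell
print(np.rint(val + 1e-9))  # 1.0 -- nudging by a small tolerance before
                            #        rounding recovers the intended index
```

Adding a tolerance that is tiny compared to a cell width before rounding (or comparing against cell edges with np.isclose-style slack) is one way to make the index computation immune to this kind of roundoff.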
A data file which exhibits this issue is available here:
For the record I'm using revision 1a3c927ef00c of the development branch.