Mark Richardson wrote to the yt_analysis project about a problem with
clumps on FLASH datasets. Here's his email:
"Below includes my script and the output (it wouldn't let me attach
two files). I get some output in the clump file up until I'd expect to
see the Jeans Mass dumped. The output makes me think that it can't
calculate this point because Gamma is unknown, or some other essential
variable needed to get the number density. I've attempted to change
the data_structure.py frontend for Flash to alias Gamma to gamc and
game but I still get the 'cannot find Gamma' when loading my dataset."
The issue, judging from the traceback, is that JeansMassMsun requires
MeanMolecularWeight, which in turn needs NumberDensity, and that field
can't be generated for his data.
I am not an expert on FLASH, but it seems that this should be
straightforward to fix; I just don't know how. Does anybody from the
FLASH or Clump side have any ideas?
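I'm not sure what the right frontend-level fix is, but as a stopgap one could supply NumberDensity directly rather than waiting on Gamma, so that JeansMassMsun's dependency chain can resolve. A minimal sketch of the conversion, assuming a constant mean molecular weight (the value 0.6 is my assumption for a fully ionized primordial gas, not anything from Mark's data):

```python
# Hypothetical workaround (not the official fix): compute NumberDensity
# from Density yourself.  This assumes a constant mean molecular weight
# mu; 0.6 is appropriate for a fully ionized primordial gas -- adjust
# for your chemistry.

M_H = 1.6726e-24  # hydrogen mass in grams
MU = 0.6          # assumed mean molecular weight

def number_density(density_cgs, mu=MU):
    """Convert mass density (g/cm^3) to number density (cm^-3)."""
    return density_cgs / (mu * M_H)
```

In yt 2.x, a function like this could back an `add_field("NumberDensity", ...)` call so the derived-field machinery stops looking for Gamma.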
A lot of really great work has already been merged into yt, or is just
about to be, since we did our last release in August. Just a few
highlights:
* Unit tests and improved answer tests with continuous integration
* Numerous PlotWindow fixes and improvements
* Improved Athena support
* Spherical and cylindrical coordinates and vector coordinate transformations
* Particle support for FLASH data
* Limited support for generating GDF initial conditions
* Improved support for 2D FLASH data
* PlotWindow TimeSeries interface
* Numerous bug fixes
The obvious holdup to a release is the number of open issues marked for
2.5. I think the idea was to make the 2.5 release the last in the 2.X
series before moving all development over to yt 3.0. Is that still the
plan? Is it possible to get a 2.5 release out while delaying the end of
2.X development to a 2.6 release?
The reason I bring all this up is mostly for the plot window fixes -
we've improved it a lot since August and I think it's much more usable
now. Perhaps a new release and a blog post about the new plotting
interface will encourage more people to switch over.
I'm curious what everyone thinks about this. I know many of you are
busy with the enzo 2.2 release and probably don't want to make another
big push so soon.
As alluded to previously in the YTEP discussion, I've made a yt
documentation build on Read the Docs (RTD).
Any time new changes are pushed to yt-doc, this will get rebuilt. The
one downside is that because of how RTD is set up, it won't grab any
API docs. But everything else will be live! And when we release, we
can force an API doc update in RTD.
So please -- contribute to the docs! This way we can have "release"
docs for when API changes go live, and instant docs that simply don't
include API info. It also means there's one less bottleneck to worry
about, because the docs are always up to date on RTD.
This discussion is based on the work we're doing for the enzo testing
suite. As part of this, we're doing a number of 1D and 2D enzo
simulations. Unfortunately, we're getting some strange results, which
make me curious whether it actually makes any sense to project 1D and
2D AMR data in yt. This is motivated by inspection of my data:
In : pf.dimensionality
In : prj = pf.h.proj(0,'Density',weight_field='Density')
yt : [INFO ] 2012-11-29 15:17:55,860 Projection completed
In : prj['Density']
array([ 1. , 1. , 1. , 1. , 1. ,
1. , 1. , 1. , 1. , 2.24475454,
2.54903927, 2.54903927, 2.54903927])
Formally the simulation has no extent along y or z, yet the projection
I'm getting back isn't a scalar.
For reference, this is the Toro-3-ShockTubeAMR test that comes with enzo.
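For what it's worth, a weighted projection amounts to sum(weight * field) / sum(weight) along the projection axis, so a truly 1D dataset projected along its only axis should collapse to a single value. A pure-numpy sketch of that expectation (purely illustrative, not yt's actual projection code):

```python
import numpy as np

def weighted_projection(field, weight, axis=0):
    # A weighted projection collapses the chosen axis to
    # sum(weight * field) / sum(weight).
    return (weight * field).sum(axis=axis) / weight.sum(axis=axis)

# Projecting a 1D density array along its only axis yields a scalar:
density = np.array([1.0, 1.0, 2.0, 4.0])
print(weighted_projection(density, density))  # 2.75
```

Getting back a 13-element array instead suggests yt is treating the dataset as having real extent along the other axes.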
I guess the real question I have is what is the best tool inside yt to
quantitatively inspect and compare 1D and 2D datasets? Am I making a
mistake in the way the test simulation is set up, causing yt to
misinterpret it somehow?
Thanks very much for your help with this.
Hi Peter and yt developers,
(Peter - I am copying you on this message that is also going to the yt
developers email list. I believe if you reply-all the yt-dev copy of
the message will bounce back to you, or be held for moderation, if
you're not subscribed to yt-dev.)
I am having trouble running Rockstar within yt on more than one node.
For example, I am successful in running with 12 tasks total on one
node, but 6 tasks each on two nodes does not work. By "not working," I
mean that it prints a few of these messages (one per PID, though not
from all PIDs; the count appears to equal NUM_WRITERS):
"[Warning] Network IO Failure (PID NNNNN):"
with various explanations ("Connection reset by peer", "Address
already in use", "Broken pipe"). After that it prints
"[Network] Packet send retry count at: 1"
And then it hangs.
I have turned on DEBUG_RSOCKET and I can see that tasks on both nodes
are communicating with the server, and I also see "Accepted all reader
/ writer connections." and "Verified all reader / writer connections."
The process gets as far as "Analyzing for halos / subhalos..." but it
does not make it to " Constructing merger tree..." I am running on a
I have tracked down that the call of accept() in _accept_connection()
(in socket.c) is where the hang is happening. It looks like that has
been called by repair_connection() (in rsocket.c). If I'm interpreting
things correctly (please correct me if I'm not), the other half of
repair_connection() is a call to _reconnect_to_addr() made by a
different task. It looks to me like that matching call is never made
by the other task, and that's where the hang is happening.
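For anyone less familiar with the socket layer, the pairing being described can be illustrated with a tiny standalone Python sketch (nothing to do with Rockstar's actual C code): a blocking accept() only returns once a peer makes the matching connect, so a missing _reconnect_to_addr() on the other side leaves the acceptor hung forever.

```python
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # pick any free port
srv.listen(1)
port = srv.getsockname()[1]
accepted = []

def acceptor():
    # accept() blocks until a peer connects -- the analogue of
    # _accept_connection() waiting on a matching _reconnect_to_addr()
    conn, _ = srv.accept()
    accepted.append(True)
    conn.close()

t = threading.Thread(target=acceptor)
t.start()
# Without this matching connect, join() below would block forever:
socket.create_connection(("127.0.0.1", port)).close()
t.join()
srv.close()
print(accepted)  # [True]
```

If the yt/Rockstar library path somehow drops or misroutes the reconnect, you'd get exactly the one-sided hang described above.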
I have done a test with stand-alone Rockstar on the same machine, and
I am successful running it on 2 nodes. I think this means that there
is some weirdness with the communication when running Rockstar as a
library in yt/Python, and not the machine's network.
I'm wondering if anyone has been successful running Rockstar in yt on
more than one node? Also, does anyone have any intuition for what
might be going wrong here?
Hi all (specifically Matt, I suspect),
I'm running into an odd issue in yt 3.0. I'm using the following script: http://paste.yt-project.org/show/2905/. This refers to a non-axisymmetric dataset I generated with enzo: http://ucolick.org/~goldbaum/files/DD0000.tgz.
The issue is that when I run this script with yt-3.0, I only get back the 'z' projection, even in the cases where I ask it for the 'x' and 'y' projection.
Since the signature of __init__ for the projection object is slightly different in 3.0, you'll need to manually choose the line in the script that creates the projection object depending on which version of yt you're running. As a side note, is there a reason for this change in the API?
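As a quick sanity check independent of yt (purely illustrative numpy, not the script above): for a non-axisymmetric dataset, projections along different axes must differ, so getting identical results back for 'x', 'y', and 'z' points at the axis argument being ignored.

```python
import numpy as np

# A deliberately non-axisymmetric 3D block: sums along different axes
# should all differ.  If a projection interface returns the same array
# regardless of the requested axis, the axis argument is being ignored.
data = np.arange(8.0).reshape(2, 2, 2)
projections = [data.sum(axis=ax) for ax in range(3)]
print(np.array_equal(projections[0], projections[1]))  # False
print(np.array_equal(projections[0], projections[2]))  # False
```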
Astronomy & Astrophysics, UCSC
There's been some discussion of planning documents. For a while I
thought we could use JIRA for this, but I think it's time to put that
plan on hold, particularly in light of the discussion a couple weeks
ago where the strong sentiment of using Sphinx/ReST (i.e., something
we *know*) was expressed.
So I'd like to present YTEPs: yt Enhancement Proposals. I see this as
a place for evolving design documents, or for design documents that
describe how things are being implemented moving forward. This means
instead of having a mailing list post serve as the primary reference
point for something, we'll have a repository of documents that get
automatically built into documentation, which can be discussed via PR
and via mailing list and iterated on. Other projects such as NumPy,
Matplotlib, IPython, Cython and Python itself use enhancement
proposals to describe proposed major changes and to leave a record of
what has been done and how it was implemented.
I know this sounds like a lot of project planning and whatnot, but I
think this could be very useful particularly as we transition to yt
3.0, where a lot of design decisions have been made or need to be
made, and where we are hoping to build something sustainable.
Here's the repository I've created:
When a new commit is pushed, this will be automatically updated:
This also includes information about why you would write a YTEP, how
you would do so, and what to do once you have.
I have written my first YTEP based on the IO chunking mechanism that
I've alluded to here in the past. I've issued a PR, but I hope that
things like this can be the subject of discussion either on the
mailing list or in the PR to ensure that the YTEP covers everything
that it needs to. These can be updated over time, as well, after they
are initially accepted.
Once this has been accepted, the YTEP repo will be automatically updated.
I'll commit to writing YTEPs for other aspects of 3.0 as well:
* Multiple particle, multiple fluid access
* Geometry handling
* Coordinate systems
But other things that would be good subjects of the YTEP process:
* Initial conditions API
* The GDF
* Changing Halos to all be lightweight
* Switching the order of arguments of projections
* Switching Reason to use IPython
I would like to encourage people who are working on enhancements or
changes that are large and user-facing, or that would benefit from
input on design or implementation details, to submit YTEPs.
I'd also like to encourage you, if you are reviewing a pull request or
changeset that would benefit from this process, to ask the submitter
to write a YTEP. The overhead is relatively minimal, and I think at
this time we have grown as a project to the point that keeping a
record of design thoughts is a responsibility we have to our community.
Does this sound like an appropriate step? Does this implementation,
template and setup sound acceptable to everyone else?
This morning I have realized that I made a mistake. In the process of pushing a small bugfix to the Athena frontend, I forgot about my still-existing PR regarding GridTree and pushed that in as well.
I had intended to let it be vetted and accepted in the normal way. If anyone would like to comment on it and suggest changes or concerns please let me know. Again, my apologies.
Laboratory for High-Energy Astrophysics
NASA/Goddard Space Flight Center
8800 Greenbelt Rd., Code 662
Greenbelt, MD 20771