Earlier today I sent an email on yt-users list mistakenly reporting that
there was a yt users' & developers' conference. At the moment there
isn't one, but after speaking with Matt a bit, we realized that maybe
that's not a bad idea.
We came up with a few questions:
1. Virtual or real workshop?
2. What should be the goals?
3. How many people would be interested, and how many could come?
4. Funding sources?
We were tentatively considering piggy-backing on the Enzo conference,
adding *two* days *before* it.
Good morning yt developers,
I am very pleased to introduce to you all the yt hub:
This was one of the main motivations for moving to yt-project.org. Many
here might remember the enzotools barn, which was a place for people to
submit non-yt-related Enzo scripts. The barn was very limited in scope
and had somewhat poor usability, since the registration process was not
at all automated or simple. Those days are over.
With the yt hub, users can create their own accounts and make
submissions themselves. We are aiming for the yt hub to become the
place to share all things related to astrophysics: yt scripts,
simulation-code-specific scripts, relevant news and announcements,
repositories of scripts for making figures from published papers, etc.
Users also have the option of subscribing to the hub, in which case
they will receive email alerts of new submissions.
Before we announce this to the full community, it would be great to
have the hub populated with submissions. If you have anything that you
think other people would be interested in, please post it to the hub.
Currently, the recommended method is to create a repository somewhere
like bitbucket and to submit the link to that. If you'd like to host
your submissions somewhere else, that's fine, too.
All comments and feedback are welcome. Hopefully, this can be something
the whole community finds useful.
I've uploaded a build of the docs for 2.2 here:
(2.2 will go out probably on Friday morning.) There are two new sections,
and Elizabeth Tasker has also added a FAQ section which we can all
continue updating. Britton and Stephen have been studiously adding
docs as they have developed, so there are a few additional sections
under Analysis Modules that are new, as well.
I'd love to hear any comments, particularly about those two sections
but also covering everything else, if you have them. Cameron is
currently writing the "Command Line Utility" section of "Interacting
with yt" but I wanted to send this out so that we could have a bit of
time to solicit comments before anything goes live.
Comments on anything? Feel free, also, if you see something wrong, to
fix it and issue a pull request. Elizabeth has been doing this and it has
been very effective.
Thanks very much for *any* thoughts,
I have been looking at FLASH data today, which suffers from a
particular oddity that will impact users of every other code as well;
block-structured AMR, however, brings it out more often.
In the covering_grid code, we have this routine:
def _get_list_of_grids(self, buffer = 0.0):
    if self._grids is not None: return
    if na.any(self.left_edge - buffer < self.pf.domain_left_edge) or \
       na.any(self.right_edge + buffer > self.pf.domain_right_edge):
        grids, ind = self.pf.hierarchy.get_periodic_box_grids(
            self.left_edge - buffer,
            self.right_edge + buffer)
        ind = slice(None)  # <-- the periodic result is discarded here
    else:
        grids, ind = self.pf.hierarchy.get_box_grids(
            self.left_edge - buffer,
            self.right_edge + buffer)
    level_ind = (self.pf.hierarchy.grid_levels.ravel()[ind] <= self.level)
    sort_ind = na.argsort(self.pf.h.grid_levels.ravel()[ind][level_ind])
    self._grids = self.pf.hierarchy.grids[ind][level_ind][(sort_ind,)][::-1]
This is used to identify grids that overlap with a given region in
space. As you can see, if the grid abuts the domain boundary, yt will
look for grids assuming periodicity.
But then that information is basically completely discarded, and all
grids are selected. This absolutely *kills* performance. I looked
into it, and this dates back several years.
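For reference, the level selection at the end of that routine is just a
filter-and-sort over whatever `ind` holds. A toy numpy rendering (all
arrays here are made up for illustration) shows the mechanics:

```python
import numpy as np

# Hypothetical levels for six grids in a hierarchy.
grid_levels = np.array([0, 2, 1, 2, 0, 1])
# Indices a box query might return; with ind = slice(None),
# this would instead cover every grid in the hierarchy.
ind = np.array([0, 1, 3, 4])

# Keep only grids at or below the requested level (here, level 1)...
level_ind = grid_levels[ind] <= 1
# ...then sort the survivors by level and reverse the order.
sort_ind = np.argsort(grid_levels[ind][level_ind])
selected = ind[level_ind][sort_ind][::-1]
print(selected)  # [4 0]
```

With `ind = slice(None)` the filtering step has to touch every grid in
the hierarchy, which is exactly the performance problem described above.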
Can anyone -- Stephen, Britton, Sam, you have all looked at periodic
grids -- think of a reason why we should err on the side of not
following the results of get_periodic_box_grids? Or is it safe to
remove that line, and improve performance by a factor of tons?
As discussed over on yt-users, people have requested that the
parameter file reading be changed for Enzo, to scrape the entire
parameter file.
I've committed a change and pushed to *my* fork of yt on BitBucket.
The change is shorter than this email.
Please pull this revision and see if it works for your parameter
files. It works for mine. If it works globally, I will push to the
main repo. A test script is here:
http://paste.enzotools.org/show/1751/ which will print out the
conversion factors between the two. There are other items to check as
well. This is potentially a HUGE change that might have errors, so I
ask for your testing of it! If I get several successful test reports, I
will push; otherwise I will back it out.
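For anyone unfamiliar with what "scraping the entire parameter file"
would look like, here is a toy sketch (not the committed change; the
function name and the handling of comments are my assumptions):

```python
def scrape_parameters(lines):
    """Toy sketch: collect every 'Name = value' pair into a dict."""
    params = {}
    for line in lines:
        # Skip comments and lines that are not assignments.
        if line.strip().startswith("#") or "=" not in line:
            continue
        name, _, value = line.partition("=")
        params[name.strip()] = value.strip()
    return params

sample = ["TopGridRank        = 3",
          "# a comment line",
          "StopTime           = 20.0"]
print(scrape_parameters(sample))  # {'TopGridRank': '3', 'StopTime': '20.0'}
```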
There are three things I wanted to bring up:
1) Enzo 3.0 will feature a much better configuration system, making
most of this irrelevant.
2) I would like to move away from using the CurrentTimeIdentifier as a
unique ID to using MetaDataDatasetUUID as that identifier. This will
break existing pickles. I think the natural time to do this is at the
yt-3.0 release.
3) With yt-3.0, I would like to remove the dict-like access to the
parameter file that we currently have, which queries *FOUR*
dictionaries in order:
I'd like to split this into at least parameters and units. +1/-1?
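For context, the dict-like access behaves roughly like this sketch (the
four dictionary names here are my guess at what gets consulted, not a
quote from the code):

```python
class ParameterProxy:
    """Toy sketch of dict-like access that consults several dicts in order."""
    def __init__(self, units, time_units, conversion_factors, parameters):
        # The order here determines which dictionary wins on a key collision.
        self._dicts = (units, time_units, conversion_factors, parameters)

    def __getitem__(self, key):
        for d in self._dicts:
            if key in d:
                return d[key]
        raise KeyError(key)

pf = ParameterProxy({"cm": 1.0}, {"Myr": 3.15e13}, {"Density": 1.0e-27},
                    {"TopGridRank": 3})
print(pf["TopGridRank"])  # found in the last dictionary searched
```

Splitting this into separate parameters and units attributes would make
it explicit which namespace a key comes from.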
Dear yt developers,
I just wanted to let you know that I've added yt to the MacPorts
package manager (http://www.macports.org/).
I recently created a page with instructions for how to set up a fully
functioning Python distribution for Astronomy with MacPorts:
and as of today, you can type:
sudo port install py27-yt
and yt and all required dependencies will be installed. I'd be happy
to advertise this on the users mailing list if you like, or feel free
to do it if you prefer.
Note that MacPorts relies on stable releases, so I had to use the
auto-generated 2.1 tar file on bitbucket, to which MacPorts needs to
apply a one-line patch in order to get it to compile. Just letting you
know so you are aware that this installation method is dependent on
that tar file.
In addition to py27-yt, I created py25-yt and py26-yt (it's easy to
have multiple Python versions side by side in MacPorts).
Note that it's also possible to install the latest dev version of yt -
the easiest way is to install py27-yt as above to get all the
dependencies working, then to download the latest source and do
python setup.py install --user
Let me know if you have any questions about this!
As yt has become much more than analysis for Enzo, I think it has outgrown
yt.enzotools.org as its host. Without making this longer than it has to be,
can I get a +-1 from people on moving from yt.enzotools.org to
yt-project.org? We would not simply remove yt.enzotools.org from
existence, but would redirect it to the new URL so people won't get lost.
I was wondering whether there is a consensus about what the precision
of the data output by yt should be.
I came upon this question when trying to output ellipsoidal parameters
along with halo attributes. With the current settings, the halo
attributes are written with 9 decimal digits, but the ellipsoid
parameters determined from the particle positions (when the data is
64 bit) have 16.
I was thinking that it is best to keep whatever precision we have, but
Stephen brought up a good point: the halos are only vaguely defined,
so the extra digits are a waste of storage. So I just want to see if
anyone even cares, and what we want to go with. Either way, I think it
is best to be consistent across the board, writing both halo and
ellipsoid output with the same number of decimal digits.
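For concreteness, the difference comes down to how many significant
digits survive the formatting; a quick sketch (the format strings here
are illustrative, not what the halo writer actually uses):

```python
import numpy as np

x = np.float64(1.0) / 3.0
nine = "%1.9e" % x       # 9 digits after the decimal point
sixteen = "%1.16e" % x   # 16 digits: enough to round-trip a float64

# 9 decimal digits lose information; 16 recover the value exactly.
print(nine, float(nine) == x)        # False
print(sixteen, float(sixteen) == x)  # True
```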
This may not be a big issue because most people won't be using
Geoffrey's new ellipsoidal halo information. However, it may spark
discussion about this topic overall and set a useful precedent.
I have just finished vectorizing some of Geoffrey's code that is
attached to the halo finder code. In so doing, I am allowing NaNs to
come into the calculation because it keeps things simple. Happily, the
NaNs only exist where I know the answers can't be, and
numpy.nanargmin/max() happily ignores the NaNs. But when I create the
NaNs through a divide by zero (and some other operations), I get
warning messages telling me I just did what I knew was going to happen.
My question is, do we think that I should try to suppress these
warnings? They are accurate, but the math that makes them is done
intentionally, so they're not informative.
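If we do decide to silence them, numpy can suppress exactly these
warnings for just the offending block rather than globally; a minimal
sketch with made-up arrays:

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0, 4.0])
b = np.array([2.0, 0.0, 0.0, 1.0])

# Suppress only divide-by-zero and invalid-operation warnings,
# and only inside this block; behavior elsewhere is unchanged.
with np.errstate(divide="ignore", invalid="ignore"):
    ratio = a / b  # [0.5, inf, nan, 4.0], produced silently

best = np.nanargmin(ratio)  # nanargmin skips the NaN; best == 0
```

The context manager restores the previous error state on exit, so other
code still gets its warnings.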
Can I get a +1/0/-1? Thanks!
I am writing about testing yt.
Right now we have 100 tests (on the nose), and it's super easy to blow
through the entire list very rapidly. But, they only cover a tiny
fraction of the code base. (The important fraction, but a fraction
none the less.) And, they only run on a single dataset, the one that
comes with yt.
To improve this situation, we need two things.
1. Datasets with some complexity.
2. More tests.
== Datasets ==
So, if you have a dataset you think might present some challenges --
lots of fields, interesting structure, or if it's NOT ENZO -- please
consider submitting it for testing!
*We need more non-Enzo datasets to run these tests on!*
I'm happy to set up testing, I just need some interesting datasets
from the other codes -- Orion, Castro, Maestro, Nyx, FLASH in
particular. I can copy them from wherever, or download from wherever.
== Tests ==
Writing tests is really easy. And if you have written a module for
Enzo, you should *really* write a test for it. Make it simple, make
it easy, and it doesn't matter if it takes a little while.
Here's how to write a test. The best examples are in the yt source
repository, under tests/hierarchy_consistency.py and
tests/object_field_values.py . There are basically three modes of
testing in yt:
1. Testing that something is true or false. This is how some of the
hierarchy consistency checks are done.
2. Testing that something remains *nearly* the same. This allows for
some relative change between changesets.
3. Testing that something remains *exactly* the same. This allows for
no change between changesets.
All three basically look the same. To write a test, just open up a
file in tests/ and make it look a bit like the object_field_values.py
file. For instance, put this at the top:
import numpy as na
from yt.utilities.answer_testing.output_tests import \
YTStaticOutputTest, RegressionTestException, create_test
Now, make a new object, called MyModuleTest or something, and inherit
from YTStaticOutputTest. Give it a name, and implement "run" and
"compare". Inside run, set self.result. Inside compare, accept
old_result and compare to self.result. Like this:
class MyModuleTest(YTStaticOutputTest):
    name = "my_module_test"

    def run(self):
        self.result = my_module(self.pf)

    def compare(self, old_result):
        self.compare_array_delta(old_result, self.result, 0.0)
There are lots of compare functions. This example ensures exact
results for an array. You can also use hashlib.sha256 to compare
things bitwise, with no delta at all. And if you want to check
True/False, just store True or False as self.result.
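The sha256 comparison could be sketched like this (the array is just an
example):

```python
import hashlib
import numpy as np

arr = np.arange(8, dtype="float64")
digest = hashlib.sha256(arr.tobytes()).hexdigest()

# Any change to the data, however small, changes the digest.
arr2 = arr.copy()
arr2[0] += 1e-15
digest2 = hashlib.sha256(arr2.tobytes()).hexdigest()
print(digest == digest2)  # False
```

Storing the hex digest as self.result and comparing strings gives an
exact, storage-cheap regression check.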
I'm happy to help out with this, but testing all the contributed
modules from over the years is a daunting task!
Please, if you have worked on analysis modules that you would like ...
not to break ... contribute a test or two. It'll help ensure
reliability and invariance over time. And if we decide an answer
changed, and it was a good change, we can update the test results.
Feel free to write back with any questions or comments, and let me
know if you have a dataset to contribute!