Hi all,
Mark Richardson wrote to the yt_analysis project with a problem with
clumps on FLASH datasets. Here's his email:
"Below includes my script and the output (it wouldn't let me attach
two files). I get some output in the clump file up until I'd expect to
see the Jeans Mass dumped. The output makes me think that it can't
calculate this point because Gamma is unknown, or some other essential
variable needed to get the number density. I've attempted to change
the data_structure.py frontend for Flash to alias Gamma to gamc and
game but I still get the 'cannot find Gamma' when loading my dataset."
The issue, judging from the traceback, is that JeansMassMsun requires
MeanMolecularWeight, which in turn needs NumberDensity, and that field
can't be generated for his data.
I am not an expert on FLASH, but it seems like this should be
straightforward to fix; I just don't know how. Does anybody on the FLASH
or Clump side have any ideas?
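In the meantime, one possible stopgap (untested, and assuming a constant
mean molecular weight, which may well be wrong for Mark's setup) would be
to define NumberDensity by hand before running the clump finder:

# Untested sketch: supply NumberDensity directly, assuming a constant
# mean molecular weight.  The value of mu is just a placeholder.
from yt.mods import *

mu = 0.6                  # assumed mean molecular weight (placeholder)
m_H = 1.6737e-24          # mass of hydrogen in grams

def _NumberDensity(field, data):
    # n = rho / (mu * m_H)
    return data["Density"] / (mu * m_H)

add_field("NumberDensity", function=_NumberDensity, units=r"\rm{cm}^{-3}")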
-Matt
Hi all,
Yesterday I converted the blog to a new format. It now uses ReST, the same
format as Sphinx.
The blog can be a very effective means of reaching out to the community to
share new information. At a bootcamp I ran a few weeks ago, many
experienced users of yt were seeing PlotWindow for the very first time.
We're doing some really cool stuff that nobody really knows about, because
our communication channels are either transient (email), sparse (release
announcements) or overwhelming (docs). A blog is a good middle ground, but
our existing blog on Posterous was awful: no one knew how to format code,
every post looked different, we had to manually approve posts, blah blah.
So the new format uses ReST, and is super easy.
Fork here:
https://bitbucket.org/yt_analysis/blog/fork
And add a new .rst file under content/post/. You should set an "author";
you can see how this is done in the existing posts.
Then issue a pull request, and bam: as soon as that PR is accepted, the
blog gets rebuilt and posted at http://blog.yt-project.org/.
I've also written a converter using IPython's nbconvert that will convert
a notebook to the format needed for this particular blogging engine. So you
can use a notebook to show some code, intersperse text and images, and then
just run the blohg_converter.py script on it, and it will add a new post.
This still needs a bit of cleanup on my part, but it's somewhat documented
already.
Anyway, please -- if you do something cool, or write a new feature, add a
post on the blog! I looked at feedburner today and we have 21 subscribers.
I don't know how this is calculated, but yowza.
-Matt
--
Technical details about the new blog:
* blohg for maintaining it (blohg.org)
* Bitbucket for storing the repository
* ShiningPanda for building
* S3 for hosting
* IPython's nbconvert as the engine driving the conversion of .ipynb => blohg ReST
Dear all,
I'm trying to add the new fields used in GAMER by editing the file "fields.py",
but I'm confused about the following declarations.
KnownGAMERFields = FieldInfoContainer()
add_gamer_field = KnownGAMERFields.add_field
GAMERFieldInfo = FieldInfoContainer.create_with_fallback(FieldInfo)
add_field = GAMERFieldInfo.add_field
What is the difference between "KnownGAMERFields" and "GAMERFieldInfo"?
To me, it seems that they are both instantiations of the class FieldInfoContainer,
except that GAMERFieldInfo has "FieldInfo" as a fallback.
Similarly, what's the difference between "add_gamer_field" and "add_field"?
Thanks in advance for the help!!
Sincerely,
Hsi-Yu
Dear all,
I have two simple questions about adding support for GAMER in yt.
1. How does yt distinguish between data generated by different codes?
For example, if I want to do something like
pf = load( "GAMER_Data0000" )
to load the GAMER data, how do I make yt realize that it is GAMER output rather than, for example, Enzo or FLASH data?
What kind of information should I provide in the "frontends/gamer/XXX.py" scripts and in the GAMER output file "GAMER_Data0000"
in order to achieve that?
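From reading the existing frontends, my guess is that it involves something
like the sketch below, where each frontend's StaticOutput subclass implements
an _is_valid() check; the class name and the magic-string check here are just
my own placeholders, not real GAMER code. Is this the right mechanism?

# My guess, based on the existing frontends -- the magic string and the
# class itself are placeholders, not real GAMER code.
from yt.data_objects.static_output import StaticOutput

class GAMERStaticOutput(StaticOutput):

    @classmethod
    def _is_valid(cls, *args, **kwargs):
        # It looks like load() calls this for every registered frontend and
        # picks the one that returns True; here I check a made-up magic header.
        try:
            with open(args[0], "rb") as f:
                return f.read(8) == "GAMERFMT"
        except IOError:
            return False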
2. After I implement all the routines in the "frontends/gamer/XXX.py" scripts, how do I recompile yt so that I can test
whether the GAMER data can be loaded and analyzed successfully? From the information on the "How to Develop yt" website,
it seems that I should try:
cd $YT_DEST/src/yt-hg/
python setup.py develop
Is that exactly what I should do to test the new scripts?
Thanks in advance for any help : )
Sincerely,
Hsi-Yu
Hi all (especially Nathan),
Looking over the Plot Window code, I think this is okay, but I wanted
to make sure that there wasn't something I was missing. Is there any
reason I can't swap out a data_source in a plot window, invalidate the
data, and then save()? Or will there be lingering hints of the old
data source, and the old parameter file?
The context for this is that I'm adding a TimeSeriesPlotWindow, and I
want to try to retain as much as I can as I switch between datasets.
Annotations, width (in code units), limits, and so on.
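Concretely, I'm imagining something along these lines (the dataset names are
placeholders and the attribute names are from memory, so treat this as
pseudocode rather than a tested recipe):

# Rough idea -- attribute names from memory; treat as pseudocode.
from yt.mods import *

pw = SlicePlot(load("output_0001"), 0, "Density")
pw.save("frame_0001")

pf_next = load("output_0002")
pw.data_source = pf_next.h.slice(0, 0.5)   # swap in a source from the new pf
pw._data_valid = False                     # invalidate so the buffer is rebuilt
pw.save("frame_0002")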
Thanks,
Matt
Hi all,
The answer testing pull request I submitted has been accepted by
Nathan. This is our primary method of testing code frontends.
I've documented how it works and how to contribute here:
https://bitbucket.org/yt_analysis/yt/wiki/AnswerTesting
(The pull request, PR 308, has more:
https://bitbucket.org/yt_analysis/yt/pull-request/308 )
This still needs contributions in two forms:
1) Port over the old answer tests, which are basically everything
imported in yt/utilities/answer_testing/api.py
2) Add on datasets for other frontends
Both of these are described in the wiki page I linked to; adding
datasets is considerably easier than writing new tests. We need:
* FLASH
* Orion
* Nyx
at a bare minimum to be tested in this manner. Please write back to
the list if you're up for working on this! I'm updating the buildbots
now to run answer tests on DD0010. Thanks to Britton and Nathan for
all the feedback on this.
-Matt
Hi yt gang
Just had a little chat with Matt, and he suggested I bring up
ipython-physics <https://bitbucket.org/birkenfeld/ipython-physics> for
discussion. Sounds like unit handling is an issue that needs to be
addressed in yt. I've found ipython-physics to be pretty neat and quite
usable. Maybe this is something worth tying into yt? Matt also mentioned
that Casey had written a sympy-based library for handling units; how does
it compare to what ipython-physics does?
Mike
--
Dr. Michael Kuhlen
Theoretical Astrophysics Center, UC Berkeley
B-116 Hearst Field Annex # 3411, Berkeley, CA 94720
email: mqk(a)astro.berkeley.edu
cell phone: (831) 588-1468
skype username: mikekuhlen
Hi all,
I'm really excited about this, but I would like to hear feedback as
well as to solicit help with it. There are a lot of new tests we can
write, particularly for frontend bits that have tripped us up in the past.
We can also use Nose to measure performance over time, which would be
a nice way of checking for regressions or improvements.
As I note in the PR, I'd like to get a discussion going about this --
any feedback would be very, very welcome. Does this meet our needs
for answer testing? Will you be willing to write tests for a given
frontend? What else could be added or improved?
I'd also like to suggest that we have a Hangout or IRC meeting to get
some builds set up and actually try this out on a couple different
machines. My best times would be Tuesday at 4PM EST or Wednesday at
2PM EST.
-Matt
---------- Forwarded message ----------
From: Matthew Turk <pullrequests-noreply(a)bitbucket.org>
Date: Thu, Oct 18, 2012 at 10:28 PM
Subject: [yt_analysis/yt] Answer testing plugin for Nose (pull request #308)
A new pull request has been opened by Matthew Turk.
MatthewTurk/yt has changes to be pulled into yt_analysis/yt.
https://bitbucket.org/yt_analysis/yt/pull-request/308/answer-testing-plug...
Title: Answer testing plugin for Nose
This pull request includes an answer testing plugin for Nose, as well
as a mechanism by which this plugin can be used to upload new results
and compare existing results to a gold standard, stored in Amazon.
## How does Answer Testing work now?
Currently, Answer Testing in yt works by running a completely
home-grown test runner, discoverer, and storage system. This works on
a single parameter file at a time, and there is little flexibility in
how the parameter files are tested. For instance, you cannot select
fields based on the code that generated the pf. This catches many but
not all errors, and can only test Enzo and FLASH.
When a new set of "reliable" tests has been identified, it is tarred
up and uploaded. No one ever really used these tests, and they are
difficult to run unless you're on Matt's machine.
## What does this do?
There are two ways in which this can function:
* Pull down results for a given parameter file and compare the
locally-created results against them
* Run new results and upload those to S3
These are not meant to co-exist. In fact, the ideal method of
operation is that when the answer tests are changed *intentionally*,
new gold standards are generated and pushed to S3 by one of a trusted
set of users. (New users can be added, with the privs necessary to
push a new set of tests.)
This adds a new config option to `~/.yt/config` in the `[yt]` section:
`test_data_dir`, which is where parameter files (such as
"IsolatedGalaxy" and "DD0010" from yt's distribution) can be found.
When the nosetests are run, any parameter files it finds in that
directory will be used as answer testing input. The file
`yt/frontends/enzo/tests/test_outputs.py` contains the Enzo frontend tests
that rely on parameter files. Note that right now, the standard AMR
tests are quite extensive and generate a lot of data; I am still in
the process of creating new tests to replicate the old answer tests,
and also slimming them down for big datasets.
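As a flavor of what that module contains, here is a stripped-down sketch;
the helper and dataset names are from my working copy and may still change
before this is merged:

# Sketch of a frontend answer test module; helper names may still change.
from yt.testing import assert_equal
from yt.utilities.answer_testing.framework import \
    requires_pf, data_dir_load, small_patch_amr

_fields = ("Temperature", "Density", "VelocityMagnitude")

m7 = "DD0010/moving7_0010"

@requires_pf(m7)
def test_moving7():
    pf = data_dir_load(m7)
    yield assert_equal, str(pf), "moving7_0010"
    for test in small_patch_amr(m7, _fields):
        yield test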
To run a comparison, you must first run "develop" so that the new nose
plugin becomes available. Then, in the yt directory,
`nosetests --with-answer-testing frontends/enzo/ --answer-compare=gold001`
To run a set of tests and *store* them:
`nosetests --with-answer-testing frontends/enzo/ --answer-store
--answer-name=gold001`
We can now not only run answer tests, but also skip managing the
uploads ourselves: yt handles this for us, using
boto. Down the road we can swap out Amazon for any
OpenStack-compliant cloud provider, such as SDSC's cloud.
Additionally, we can now add answer testing of small data to Shining
Panda. In the future, we can add answer testing of large data with
lower frequency, as well.
## What's Next?
Because there's a lot to take in, I'd like to suggest this PR not be
accepted as-is. There are a few items that need to be done first:
* The developer community needs to be brought in on this; I would
like to suggest either a hangout or an IRC meeting to discuss how this
works. I'd also encourage others to pull this PR, run the nosetests
command that compares data, and figure out if they like how it looks.
* The old tests all need to be replicated. This means things like
projections turned into pixel buffers, field statistics (without
storing the fields), and so on.
* Tests need to be added for other frontends. I am currently working
with other frontend maintainers to get data, but once we've gotten it,
we need to add tests as is done for Enzo. This means FLASH, Nyx,
Orion, as well as any others that would like to be on the testing
suite.
I'd like to encourage specific comments on lines of code to be left
here, as well as comments on the actual structure of the code, but
I'll be forwarding this PR to yt-dev and asking for broader comments
there. I think that having a single, integrated testing system that
can test a subset of parameter files (as well as auto-discover them)
will be extremely valuable for ensuring maintainability. I'm really
excited about this.
Hi all,
We had a productive discussion about the yt workshop today. Our conclusions are summarized in this Google doc: https://docs.google.com/document/d/1-SVt6In7w5jpWCm9Co6nT07-owimrFDOvNjN4...
We've settled on March 6-8, with some possible spillover on Saturday, March 9th. We'll be focused on yt, and have decided not to include science talks in the interest of having more time to work on coding sprints.
We'll also have short ~30 minute talks on key portions of the code so that the leads for those portions can describe advanced features, point out areas for improvement, and solicit help from the other developers.
Most of the time will be spent in coding sprints, which we will track using some of the Atlassian agile programming tools we've recently been given access to. Hopefully, by the end of the workshop we'll have an automatically generated list of accomplishments that we can point to.
Chris Moody has volunteered to write up a draft of the proposal. He'll also be talking to Joel to get more details on the size of the funding request and how to focus the proposal. We also need to decide who will be the PI; it's not clear whether it's appropriate to have grad students in charge of the grant.
We'll be collaboratively finishing up the grant starting the week of November 2nd.
Please let me or Chris know if you have concerns about the format or the way we're progressing.
Happy yt-ing!
Cheers,
Nathan
Hi all,
I've gotten seven responses on the Doodle poll, and unfortunately there is no time when everyone is available. Since Matt isn't available on Friday and John is only available on Friday, I've had to choose Thursday: Matt has experience from the last yt workshop that I think will be valuable. Sorry, John! We'll try to take good notes.
I'll be setting up the Google+ hangout on Thursday. Please add me to your circles so I can invite you: https://plus.google.com/109002259194298035269/posts
-Nathan