Today we got an Atlassian on-demand instance of JIRA:
We're currently being evaluated for an Open Source license, but I
think we're a good candidate. As noted in the other thread
(and as we hashed out on IRC), JIRA is great for long-lived items, and
I think we should stick to BitBucket for minor bugs and for letting
people who are mostly outside the development process record bugs.
Although, after some chatter in IRC, it seems like we'd all be pretty
keen on moving everything over; we just don't have a mechanism for
doing that. (Particularly since you can collect JIRA issues from
Anyway. JIRA is targeted at so-called Agile development, which we
don't really do, so I've gone ahead and slimmed down the issue tracker
to reflect what we use. I've invited the people who were in IRC today,
and if you'd like to join as well (we have a 10-user limit at the
moment), just ping me.
0) Briefly browse the JIRA-101 linked on the homepage.
1) Convert some of the bigger, non-bug issues over. If you have one
you have been thinking about or working on (like Cython kD-trees, or
profile plotting, or SPH kernels, or Rockstar halo finding, movie
making, etc etc), try out converting it over.
2) Think about whether this is a workflow that can work for you/us.
The linking works anywhere in BitBucket: if you leave a comment with
YT-##, where ## is the number of a JIRA issue, it will link directly
back. The same goes for comments in issues and for commits. Do we like this?
Is it worthwhile?
This is still an evaluation period, so we should regroup in a while
and figure out if the value-add over the bitbucket issue tracker is
worth the additional system. Please also write back if you have
suggestions about changes to the system or thoughts on this.
Over the last couple days, in case you hadn't noticed, we've had a lot
of emails going out about fixing up the workflow, developing a suite
of tests, and handling tracking of changes, bugs, planning, etc etc.
I think these are coming to fruition, and I wanted to write with a
quick summary of where we're at.
1) Changes, except for obvious bug fixes, should be mediated by pull
requests. This is where we have been for a while.
2) New functionality needs to be accompanied by unit tests. Right now
we're still working on coming up with a solution for visualization
tasks, but everything else is being covered as we speak. This can be
something as *simple* as making sure that an API doesn't change, or
doesn't raise an exception, or DOES raise an exception. All of these
things are valid to do. We want not only to verify that we're doing
operations correctly, but also that APIs -- especially deep APIs that
other functionality relies on! -- don't change without warning or
reason. Cameron and I chatted about this today in the context of
movie-making functionality; part of the tests should be to make sure
that the APIs such functionality relies on don't change and keep
behaving the same way. (A sketch of what such a test can look like
follows the example below.)
3) Frontends will be tested in the near future. This will be what we
currently call "Answer Testing" but will be more curated and
structured. Once I have finished up the changes necessary for this
I'll be writing with more.
4) New functionality or big bug fixes should be mediated by the issue
tracker, even if that issue is filled out *after* the bug fix is in.
As an example:
Nathan issued a PR, then added unit tests during the PR process.
After it was accepted, it was added to the Shining Panda buildbot
(immediately!). Everything came up sunny on the buildbot side, but if
it had failed, the failure would have been reported to the yt-svn
email list.
After it was accepted, he filled out a ticket.
Now when we prepare the 2.5 release notes, we will see this in the
list of resolved issues.
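To make (2) concrete, here's a minimal sketch of what such a test file
can contain. The function names and checks are hypothetical, but the
style -- nose plus numpy.testing, which is what we're using -- is real:

import numpy as np
from numpy.testing import assert_equal
from nose.tools import assert_raises

def test_api_shape_is_stable():
    # Verify an operation gives the expected result on known input.
    arr = np.arange(8).reshape(2, 4)
    assert_equal(arr.sum(axis=0).shape, (4,))

def test_api_raises_on_bad_input():
    # Verify an API *does* raise for invalid arguments.
    assert_raises(ValueError, np.arange(4).reshape, (3, 3))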
If you're adding a new test, let's say to
yt/utilities/tests/test_some_utility.py, you can run:

nosetests -w yt utilities/tests/test_some_utility.py

(without that leading yt/ on the filename; not sure why this is!) to
run just the tests in that file. If you want to run all of them, use
either of:

nosetests -w yt -e "answer_testing"
python2.7 setup.py nosetests
Soon we'll get rid of the exclude option, when we can convert the
answer tests over.
So to sum up: add tests, do things with PRs, and fill out issues. I
think once we become more comfortable as a community writing and
relying on tests, we can start making the big changes we want to while
staying confident that nothing is silently breaking.
Thanks everybody for bearing with this as we solidify this workflow.
Any thoughts or comments on this process?
PS We should be hearing in a week or two about JIRA, but I'm really
happy about the Shining Panda stuff. Please do let me know if you'd
like your repository added. 3.0 will be going on the list shortly.
Today at UCSC, Nathan, Chris (Moody) and I sat down and went through
what we wanted to accomplish with testing. This comes back to the
age-old dichotomy between unit testing and answer testing. But what
this really comes back to, now that we've had the opportunity to think
about it, is the difference between testing components and
functionality versus testing frontends.
So the idea here is:
Unit tests => Cover individual units of the code, using either
manually inserted data values or randomly generated "parameter files".
(A sketch follows below.)
Stephen and I have written a bunch in the last couple days. We have
nearly 500, and they take < 1 minute to run.
Frontend/Answer tests => Cover a large portion of high-level
functionality that touches a lot of the code, but do so by running
things like projections, profiles, etc on actual data from actual
simulation codes, which then get compared to reference values that are
stored somewhere. Currently we have ~550 answer tests, and they run
every 30 minutes on moving7_0010 (which comes with yt) and once a day
on JHK-DD0030 (available on yt-project.org/data/ as IsolatedGalaxy).
We do not have automated FLASH testing. (A schematic sketch of the
comparison step follows below as well.)
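To make the unit-test side concrete, here's a minimal sketch that
builds one of those randomly generated parameter files in memory,
assuming the fake_random_pf helper in yt.testing (no on-disk data
needed):

from yt.testing import fake_random_pf

def test_density_field_size():
    # A 16^3 randomly filled "parameter file", generated on the fly.
    pf = fake_random_pf(16)
    dd = pf.h.all_data()
    assert dd["Density"].size == 16 ** 3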
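And the answer-test side is schematically just this (not our actual
framework, only the shape of the comparison step):

import numpy as np

def compare_to_reference(result, reference_path, rtol=1.0e-7):
    # Run something expensive (a projection, a profile) on real
    # simulation data elsewhere; here we only compare the result
    # against a stored reference, within a tolerance.
    reference = np.load(reference_path)
    np.testing.assert_allclose(result, reference, rtol=rtol)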
The next step is:
1) Getting a bunch of non-proprietary sets of data that are small
*and* medium, for each code base we want to test. This data must be
non-proprietary! For small, I would say they can be trivially small.
For medium, I'd prefer the 0.5-5 GB range for size-on-disk. I
would think that GasSloshing and WindTunnel could work for FLASH. But
we still need ART data (from Chris Moody), GDF or Piernik data (from
Kacper), Orion data (if possible), Nyx data (if possible). I will
handle adding RAMSES data in the 3.0 branch.
2) Getting a mechanism to run answer tests that isn't "Matt's
desktop." I've emailed Shining Panda about this, but if they don't
have the ability to provide us with a FLOSS license, I think we can
identify some funding to do this.
3) Have a mechanism to display and collate results. ShiningPanda
would do this if we were on their systems.
4) Make it much easier to flag individual tests as needing updates. I
think the Data Hub will be the end place for this, but this is lower
priority.
5) Migrate answer testing to use the unit testing framework, as most of
what we've done there re-implements stuff that is in the unit testing
frameworks. This will mean we can much more easily handle
test-discovery, which is a huge plus.
Ultimately, the end product of all of this is that we should
eventually have a method for running a single set of tests that do
test discovery that loads up a bunch of different data outputs, runs
answer tests on all of them, runs the unit tests, etc etc. I think it
just needs the last 25% to finish up the infrastructure.
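On the test-discovery point in (5), nose's generator tests are
probably the mechanism to lean on; here's a sketch (the dataset paths
are illustrative, and the check is deliberately trivial):

from yt.mods import load

DATASETS = ["moving7/moving7_0010",
            "IsolatedGalaxy/galaxy0030/galaxy0030"]

def check_dataset(fn):
    # The real version would run projections, profiles, etc.
    pf = load(fn)
    assert pf is not None

def test_all_datasets():
    # nose discovers this generator and yields one test per dataset.
    for fn in DATASETS:
        yield check_dataset, fn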
So: those of you out there who have access to any datasets of types
other than FLASH or Enzo, can you provide non-proprietary, medium-size
and small-size datasets? I'd like to have two for every code base, at
minimum.
So: those of you who want to help out, would you be interested in
looking at the answer_testing framework with me? I am happy to
discuss it over email or IRC to convert it to the numpy testing
format, which will be much easier to maintain in the long run and will
make it much easier to have a single testing system that works for
everything.
One of the stated goals of 2.5 is to have a pip-installable yt. I
think we need to have a couple things fixed for this to work:
1) Fix setup.py to include correct dependencies (see the sketch below)
2) Make it clearer in the error messages in yt/utilities/setup.py that
HDF5_DIR needs to be set if it can't find HDF5 (and to de-emphasize
3) Same for the PNG and FreeType packages
I think that may be everything; I checked it just now and it actually
does work if you have the dependencies and you do:
pip install hg+https://bitbucket.org/yt_analysis/yt/
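For (1), a minimal sketch of the shape of the fix, assuming we declare
dependencies via setuptools' install_requires (the package list here
is a placeholder, not a vetted set of minimums):

from setuptools import setup

setup(
    name="yt",
    # ... all other existing arguments unchanged ...
    install_requires=[
        "numpy",
        "matplotlib",
        "h5py",
        "Cython",
    ],
)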
Does anyone (Casey? Jeff? Mike?) want to take this on? I've filled
out a bug here:
We should probably try to get a 2.5 release together by the end of the
year. It would be really helpful, if you are working on something, to
fill out an issue for it and target both milestone 2.5 and version 2.5.
That way we can identify goals and push to stable. Testing should
perhaps be a huge focus of this release. But, once it's done, I think
we can try to transition to 3.0 for development.
Here's the current list, which may need a bit of curation, as some
items seem to be completed or in progress:
If you want to subdivide something, create a new milestone and target
*that*, but with *version* 2.5.
PS The new bitbucket redesign is quite nice!
Matplotlib 1.2 isn't released yet, but we had a pull request come in
that depends on it:
I am inclined to say that we should accept (once the submitter adds
docs and linewraps) but that we should throw an error if the version
is too low, and simply make streamlines inaccessible. I think the
audience is very small right now, and the window between now and when
we upgrade the dependencies to include 1.2 is short enough, that we
can go ahead and do this.
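Something like the following guard would do it; where the check lives
and the exact message are up for grabs:

from distutils.version import LooseVersion
import matplotlib

if LooseVersion(matplotlib.__version__) < LooseVersion("1.2"):
    raise ImportError(
        "Streamline plotting requires matplotlib >= 1.2, but %s is "
        "installed." % matplotlib.__version__)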
If you want to signal approval, go hit the "Approve" button. If you
want to disagree, please reply here! :)
I've noticed that at some point some of the things I used to rely on in IPython went missing from the standard install that yt does (at least for me):
1) "help blah" does not work, must now type "help(blah)"
Not really a big deal, just annoying.
2) "source blah" does not work at all
Anyone know what this is part of?
3) insert_ipython() doesn't know anything about the variables currently in scope
I can jump into a function, but it doesn't seem to know anything about the variables defined in it; nothing is defined.
Did I screw up somewhere?
I'd like to set up a google+ hangout so we can get organized to apply for a HIPACC grant to cover expenses for a yt developers workshop at UCSC.
Ji-hoon Kim and Chris Moody have kindly offered to be part of an LOC along with myself to handle budget details and local logistics, but it would be really great to hear from more of the yt developers about what you would like to get out of the workshop. I'd like to end the hangout with a concrete plan for the length, format, and number of participants, and a rough idea for a realistic funding request.
I'm available any time next week, but to give people a little time to plan their schedules, let's have the meeting on Wednesday, Thursday, or Friday. I've made a doodle poll here: http://www.doodle.com/6kvuu2fv8xigpccp (click the 'show all 15 options' link).
Please respond to the doodle poll with times you are free and I will try to figure out a time that works for everyone who is interested.
Astronomy & Astrophysics, UCSC