I would like to help with this, but it's difficult to figure out where to start.
Say I want to test projections. I make a fake 3D density field, maybe
something as simple as np.arange(4**3).reshape((4, 4, 4)). I write down the
answer to the x-projection. Now all I need to do is call
assert_allclose(yt_result, answer, rtol=1e-15), but I don't know what
pieces of low-level yt stuff to call to get yt_result.
Maybe this comes down to creating a fake frontend we can attach fields to?
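For concreteness, here's a minimal sketch of the kind of test I mean. The numpy side is real; the yt_result line is just a stand-in for whatever low-level call (or fake frontend) we decide should produce it:

import numpy as np
from numpy.testing import assert_allclose

def test_projection_of_fake_field():
    # Fake density field with a hand-checkable answer.
    fake_density = np.arange(4**3, dtype="float64").reshape((4, 4, 4))
    # Expected x-projection, assuming unit cell widths: a sum along one axis.
    answer = fake_density.sum(axis=0)
    # Placeholder: swap in the actual yt machinery once we know what to call.
    yt_result = fake_density.sum(axis=0)
    assert_allclose(yt_result, answer, rtol=1e-15)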
On Fri, Sep 21, 2012 at 2:42 PM, Matthew Turk email@example.com wrote:
As some of you have seen (at least Stephen), I filed a ticket this morning about increasing testing coverage. The other night Anthony and I met up in NYC and he had something of an "intervention" about the sufficiency of answer testing for yt; it didn't take too much work on his part to convince me that we should be testing not just against a gold standard, but also performing unit tests. In the past I had eschewed unit testing simply because the task of mocking data was quite tricky, and I felt that answer tests run on smaller bits of data could cover the unit-testable areas.
But this isn't really a good strategy. Let's move to having both. The testing infrastructure he recommends is the nearly-omnipresent nose:
The ticket to track this is here:
There are a couple sub-items here:
1) NumPy's nose test plugins provide a lot of necessary functionality that we have reimplemented in the answer testing utilities. I'd like to start using the numpy plugins, which include things like conditional test execution, array comparisons, "slow" tests, etc. (There's a sketch of these after this list.)
2) We can evaluate, using conditional test execution, moving to nose for answer testing. But that's not on the agenda now.
3) Writing tests for nose is super easy, and running them is too. Just do:
nosetests -w yt/
when in your source directory.
4) I've written a simple sample here:
5) I'll handle writing up some mock data that doesn't require shipping lots of binary files, which can then be used for checking things that absolutely require hierarchies.
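To illustrate item 1, here's a rough sketch of the numpy.testing pieces I have in mind: the array comparison helpers plus the decorators for conditional and "slow" test execution. The decorator names come from numpy.testing.dec; the specific tests are made-up examples:

import numpy as np
from numpy.testing import assert_array_equal, assert_allclose, dec

def test_array_comparison():
    a = np.arange(8).reshape((2, 4))
    assert_array_equal(a * 2, a + a)

@dec.skipif(not hasattr(np, "float128"), "float128 not available on this platform")
def test_conditional_execution():
    # Skipped entirely on platforms without the extended type.
    assert_allclose(np.float128(1.0) / 3.0, 1.0 / 3.0, rtol=1e-15)

@dec.slow
def test_expensive_thing():
    # Tagged "slow" so it only runs when slow tests are requested
    # (e.g. via nose's attribute selection).
    big = np.random.random((64, 64, 64))
    assert_allclose(big.sum(), np.add.reduce(big.ravel()), rtol=1e-12)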
The way to organize tests is easy. Inside each directory with testable items create a new directory called "tests", and in here toss some scripts. You can stick a bunch of functions in those scripts.
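For instance (the file name here is purely illustrative), something like yt/utilities/tests/test_small_things.py could hold plain functions that nose discovers on its own, plus an optional module-level setup:

import numpy as np
from numpy.testing import assert_allclose

def setup():
    # nose runs a module-level setup() before the tests in this file.
    np.random.seed(0)

def test_sum_matches_reduce():
    values = np.random.random(16)
    assert_allclose(values.sum(), np.add.reduce(values), rtol=1e-15)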
Anyway, I'm going to start writing more of these (in the main yt repo, and this change will be grafted there as well) and I'll write back once the data mocking is ready. I'd like it if we started encouraging or even mandating simple tests (and/or answer tests) for functionality that gets added, but that's a discussion that should be held separately.
The items on the ticket:
Is anyone willing to claim any additional items that they will help write unit tests for?