Hi Anthony.
I completely agree that we should target the functions that actually perform the projection rather than yt's higher-level organization. The mock frontend suggestion was just a hack to get there; I don't know if there's a way around it, though...
Here's an example of what I sorted through to get to projections:
So it looks like the functionality is wrapped up in the __project_level and _project_grid methods. I can't think of a way to test those without creating an AMRProjBase, and that requires a StaticOutput object.
So unfortunately, I think it would still come down to having a fake frontend. It's not ideal, but it seems like any more isolation would require big rewrites to yt.
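Roughly, here's the kind of stub I have in mind. Everything in it is hypothetical -- FakeStaticOutput and the attributes I set are placeholders for whatever AMRProjBase actually touches, not real yt API:

import numpy as np

class FakeStaticOutput(object):
    # Hypothetical stub standing in for a real StaticOutput; the
    # attribute names below are guesses at what a projection needs.
    def __init__(self, data):
        self.data = data  # the fake 3D field
        self.domain_dimensions = np.array(data.shape)
        self.domain_left_edge = np.zeros(3)
        self.domain_right_edge = np.ones(3)

fake_pf = FakeStaticOutput(np.arange(4**3, dtype="float64").reshape((4, 4, 4)))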
Of course, I could be missing something. Matt, can you think of a better way?
On Mon, Sep 24, 2012 at 11:02 AM, Anthony Scopatz <scopatz@gmail.com> wrote:
Hello Casey,
Sorry for taking the whole weekend to respond.
> I would like to help with this, but it's difficult to figure out where to start.
Not to worry. I think that any of the items listed at the bottom of Matt's original email would be a great place to start.
> Say I want to test projections. I make a fake 3D density field, maybe
> something as simple as np.arange(4**3).reshape((4, 4, 4)). I write down
> the answer to the x-projection. Now all I need to do is call
> assert_allclose(yt_result, answer, rtol=1e-15), but I don't know what
> pieces of low-level yt stuff to call to get to yt_result. Hopefully
> that's clear...
>
> Maybe this comes down to creating a fake frontend we can attach fields to?
Actually, I disagree with this strategy, as I told Matt when we spoke last week. What is important is that we test the science and math parts of the code before, if ever, dealing with the software architecture that surrounds them. Take your example of projections: what we need to test is the function or method that actually slogs through the projection calculation. In many cases in yt these functions are not directly attached to the frontend but live in the analysis, visualization, or utilities subpackages. Those are the packages we should worry about testing, and we can easily create routines to feed them sample data.
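Concretely, the kind of test I mean might look like this -- just a sketch, where project_along_axis is a made-up placeholder for whichever low-level yt routine actually does the work:

import numpy as np
from numpy.testing import assert_allclose

def project_along_axis(field, axis):
    # Placeholder: an unweighted projection of a uniform grid is
    # just a sum along the chosen axis.
    return field.sum(axis=axis)

def test_x_projection():
    field = np.arange(4**3, dtype="float64").reshape((4, 4, 4))
    # The written-down answer: sum the four x-slabs by hand.
    answer = field[0] + field[1] + field[2] + field[3]
    assert_allclose(project_along_axis(field, 0), answer, rtol=1e-15)

No frontend, no disk, just sample data fed straight to the routine.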
On the other hand, testing or mocking things like frontends should be a very low priority. At the end of the day, what you are testing there is pulling in data from disk or other sources. Effectively, this is just re-testing functionality present in h5py, etc. That is not really our job. Yes, in a perfect world, frontends would be tested too. But I think that the priority should be placed on things like the KDTree.
Be well,
Anthony
On Fri, Sep 21, 2012 at 2:42 PM, Matthew Turk <matthewturk@gmail.com> wrote:
Hi all,
As some of you have seen (at least Stephen), I filed a ticket this morning about increasing test coverage. The other night Anthony and I met up in NYC and he had something of an "intervention" about the sufficiency of answer testing for yt; it didn't take too much work on his part to convince me that we should be testing not just against a gold standard, but also performing unit tests. In the past I had eschewed unit testing simply because mocking data was quite tricky, and by adding tests that use smaller bits of data we could cover unit-testable areas with answer testing.
But, this isn't really a good strategy. Let's move to having both. The testing infrastructure he recommends is the nearly-omnipresent nose:
http://nose.readthedocs.org/en/latest/
The ticket to track this is here:
https://bitbucket.org/yt_analysis/yt/issue/426/increase-unit-test-coverage
There are a couple of sub-items here:

1) NumPy's nose test plugins provide a lot of necessary functionality that we have reimplemented in the answer testing utilities. I'd like to start using the numpy plugins, which include things like conditional test execution, array comparisons, "slow" tests, and so on (see the sketch after item 5 below).

2) We can evaluate, using conditional test execution, moving to nose for answer testing. But that's not on the agenda now.

3) Writing tests for nose is super easy, and running them is too. Just do:
nosetests -w yt/
when in your source directory.
4) I've written a simple sample here:
https://bitbucket.org/yt_analysis/yt-3.0/src/da10ffc17f6d/yt/utilities/tests...
5) I'll handle writing up some mock data that doesn't require shipping lots of binary files, which can then be used for checking things that absolutely require hierarchies.
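To give a flavor of item 1, numpy's testing decorators already handle slow and conditional tests for us. This is just a sketch; the test bodies are illustrative:

import numpy as np
from numpy.testing import assert_allclose, dec

@dec.slow
def test_expensive_thing():
    # Marked "slow": skipped unless slow tests are requested.
    big = np.ones((64, 64, 64))
    assert_allclose(big.sum(), 64**3)

@dec.skipif(not hasattr(np, "float128"), "no float128 on this platform")
def test_long_double():
    # Conditional execution: only runs where long doubles exist.
    assert np.float128(1.0) + 1.0 == 2.0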
--
The way to organize tests is easy: inside each directory with testable items, create a new directory called "tests" and toss some scripts in there. You can stick a bunch of functions in those scripts.
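A minimal test script along those lines (file name and contents are just illustrative):

# yt/utilities/tests/test_arrays.py
import numpy as np
from numpy.testing import assert_array_equal

def test_reshape_roundtrip():
    # nose collects any function whose name starts with "test".
    a = np.arange(12)
    assert_array_equal(a.reshape((3, 4)).ravel(), a)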
Anyway, I'm going to start writing more of these (in the main yt repo, and this change will be grafted there as well) and I'll write back once the data mocking is ready. I'd like it if we started encouraging or even mandating simple tests (and/or answer tests) for functionality that gets added, but that's a discussion that should be held separately.
The items on the ticket:
Is anyone willing to claim any additional items that they will help write unit tests for?
-Matt
yt-dev mailing list
yt-dev@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org