I am writing about testing yt.
Right now we have 100 tests (on the nose), and it's easy to blow through the entire list very rapidly. But they only cover a tiny fraction of the code base. (The important fraction, but a fraction nonetheless.) And they only run on a single dataset, the one that comes with yt.
To improve this situation, we need two things.
1. Datasets with some complexity.
2. More tests.
== Datasets ==
So, if you have a dataset you think might present some challenges -- lots of fields, interesting structure, or especially anything that's NOT Enzo -- please consider submitting it for testing!
*We need more non-Enzo datasets to run these tests on!*
I'm happy to set up the testing; I just need some interesting datasets from the other codes -- Orion, Castro, Maestro, Nyx, and FLASH in particular. I can copy them from wherever they live, or download them from wherever you post them.
== Tests ==
Writing tests is really easy. And if you have written a module for yt, you should *really* write a test for it. Make it simple, make it easy, and don't worry if it takes a little while to run.
Here's how to write a test. The best examples are in the yt source repository, under tests/hierarchy_consistency.py and tests/object_field_values.py. There are basically three modes of testing in yt:
1. Testing that something is true or false. This is how some of the hierarchy consistency checks are done.
2. Testing that something remains *nearly* the same. This allows for some relative change between changesets.
3. Testing that something remains *exactly* the same. This allows for no change between changesets.
All three basically look the same. To write a test, just open up a file in tests/ and make it look a bit like the object_field_values.py file. For instance, put this at the top:
import hashlib
import numpy as na

from yt.utilities.answer_testing.output_tests import \
    YTStaticOutputTest, RegressionTestException, create_test
Now, make a new object, called MyModuleTest or something, and inherit from YTStaticOutputTest. Give it a name, and implement "run" and "compare". Inside run, set self.result. Inside compare, accept old_result and compare to self.result. Like this:
class MyModuleTest(YTStaticOutputTest):
    name = "my_module_test"

    def run(self):
        # Store whatever value you want checked; self.pf is the dataset under test.
        self.result = my_module(self.pf)

    def compare(self, old_result):
        # A delta of 0.0 means the old and new arrays must match exactly.
        self.compare_array_delta(old_result, self.result, 0.0)
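If you want to run the same test over several parameter values, the create_test helper imported above can stamp out variants for you. Here's a rough sketch following the pattern used in object_field_values.py; the threshold parameter and the values looped over are purely hypothetical, and I'm assuming create_test takes the base class, a name suffix, and keyword attributes to set on the generated test:

class MyThresholdTest(YTStaticOutputTest):
    threshold = None  # hypothetical parameter, filled in by create_test below

    def run(self):
        # Passing a second argument to my_module is just for illustration.
        self.result = my_module(self.pf, self.threshold)

    def compare(self, old_result):
        self.compare_array_delta(old_result, self.result, 0.0)

# Register one test per parameter value.
for threshold in [0.1, 0.5, 1.0]:
    create_test(MyThresholdTest, "threshold_%s" % threshold,
                threshold=threshold)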
There are lots of compare functions. MyModuleTest above, with a delta of 0.0, enforces exact agreement for an array result. You can also use hashlib.sha256 to compare results where you want no tolerance for differences at all, and if you want to check something True/False, just store the boolean as self.result.
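To make those concrete, here is a rough sketch of the other two styles -- a sha256 hash for bit-for-bit identity, and a plain True/False check. The functions my_field_calculation and my_consistency_check are placeholders for your own code, and I'm assuming that raising RegressionTestException (imported above) is an acceptable way to signal a mismatch when you aren't using one of the compare_* helpers:

class MyHashTest(YTStaticOutputTest):
    name = "my_hash_test"

    def run(self):
        # Hash the raw bytes of the array so any change at all is caught.
        data = my_field_calculation(self.pf)  # placeholder for your own routine
        self.result = hashlib.sha256(data.tostring()).hexdigest()

    def compare(self, old_result):
        # Exact string comparison of the two hashes.
        if self.result != old_result:
            raise RegressionTestException("%s: hash changed" % self.name)

class MyBooleanTest(YTStaticOutputTest):
    name = "my_boolean_test"

    def run(self):
        # Store a plain True or False as the result.
        self.result = my_consistency_check(self.pf)  # placeholder

    def compare(self, old_result):
        if self.result != old_result:
            raise RegressionTestException("%s: True/False result changed" % self.name)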
I'm happy to help out with this, but testing all the contributed modules from over the years is a daunting task!
Please, if you have worked on analysis modules that you would like ... not to break ... contribute a test or two. It'll help ensure reliability and invariance over time. And if we decide an answer changed, and it was a good change, we can update the test results.
Feel free to write back with any questions or comments, and let me know if you have a dataset to contribute!