[AstroPy] Tests for AstroPy (was: POLL: vision for a common Astronomy package)
Victoria G. Laidler
laidler at stsci.edu
Thu Jul 7 12:44:10 EDT 2011
Erik Tollerud wrote:
> We (the Coordinating Committee) are finalizing a draft coding
> standards document that we will soon send out to the list. This does
> indeed include unit tests/regression tests, although the way we've
> phrased it is that tests are "encouraged for all public
> classes/methods/functions". The idea is that we don't want a strict
> requirement, but we (the committee) will reserve the right to not
> accept packages if they are not tested enough (and of course "enough"
> depends on how critical a given component is).
I suggest adding a specific requirement for end-to-end testing of common
use cases. Testing one method at a time doesn't guarantee that they all
play together nicely as they should.
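To make the idea concrete, here is a minimal sketch of what such an
end-to-end test could look like with the stdlib unittest module. The
pipeline functions (parse_coord, offset_dec, format_coord) are made-up
stand-ins for chained package components, not actual astropy API:

```python
import unittest

# Hypothetical pipeline stages standing in for real package components;
# the names parse_coord, offset_dec, format_coord are illustrative only.
def parse_coord(text):
    ra, dec = (float(part) for part in text.split())
    return {"ra": ra, "dec": dec}

def offset_dec(coord, delta):
    return {"ra": coord["ra"], "dec": coord["dec"] + delta}

def format_coord(coord):
    return "%.3f %.3f" % (coord["ra"], coord["dec"])

class TestEndToEnd(unittest.TestCase):
    """Run the whole chain together, not each function in isolation."""
    def test_parse_offset_format(self):
        result = format_coord(offset_dec(parse_coord("150.1 2.2"), 0.5))
        self.assertEqual(result, "150.100 2.700")
```

The point is that the single test exercises the hand-off between all
three stages, which per-method unit tests would never do.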
> As for the testing framework, we were thinking nose
> (http://somethingaboutorange.com/mrl/projects/nose/1.0.0/). Nose is
> very nice for a project like this because it's super easy to run and
> also easy to write tests with, but is also compatible with the stdlib
> unittest module.
Last I heard, nose had been more or less orphaned: Kumar doesn't have
time to continue maintenance, so he didn't want to commit to it going
forward.
There was some discussion about a TIP (Testing in Python)
community-driven successor that would be more cleanly compatible with
Python 3, and I know something started in that direction, but I don't
know the status. I can try to find out.
I've become less fond of nose the more I've used it for heavy testing,
because its support for reasonably complex tests (e.g., performing
exactly the same tests for 25 different sets of input) is not very
robust or easy to use. I know there are other frameworks out there that
claim to do better at this, but I haven't had time to investigate them.
I should be able to get to that this summer.
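For reference, the mechanism I'm talking about is nose's test-generator
idiom, where a test function yields a (check, argument) pair per input
set and nose runs each pair as a separate test. The roundtrip check and
the values below are invented purely for illustration:

```python
# nose "test generator" sketch: same check applied to many input sets.
# The check and the input values are made up for illustration.
def check_roundtrip(value):
    # repr() of a float should parse back to an equal float
    assert float(repr(value)) == value

def test_many_inputs():
    # nose collects each yielded (function, arg) pair as its own test,
    # so one failing input doesn't hide the others
    for value in (0.0, 1.5, -2.25, 3.0e10):
        yield check_roundtrip, value
```

It works, but naming, skipping, or debugging one specific parameter set
out of the 25 is where this idiom gets awkward in practice.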
I've also added a "Testing" section on the wiki so people can sign up to
contribute efforts in this area.
> On Wed, Jul 6, 2011 at 8:35 AM, Victoria G. Laidler <laidler at stsci.edu> wrote:
>> Ole Streicher wrote:
>>> BTW, are there ideas for regression tests of astropy? Especially in the
>>> case you mentioned, this would be *very* helpful to check that
>>> everything works in the target system.
>> I was thinking about this as well. I suggest that there be minimum
>> testing standards for astropy packages. Packages should come with
>> tests that:
>> - can be run using a standard syntax (what?)
>> - verify that all significant elements of the package can
>> successfully run
>> - verify that all significant elements of the package, for some
>> minimum set of common use cases, give the right answer when run
>> - come with documentation that in some way describes test coverage
>> (how much of the system the tests cover) and the source of the test
>> answers (i.e., how did you determine the "right answer" used in the
>> tests)
>> This last point is especially important if we want to get people to
>> adopt astropy libraries in place of their current favorite tool. In this
>> case, testing serves as a confidence builder: astropy package X is at
>> least as good as OtherPackage Y.
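A sketch of what a test with documented answer provenance could look
like. The reference values here are placeholders computed from the
stdlib, standing in for numbers that would really come from an
independent tool and be cited in the test itself:

```python
import math
import unittest

# Placeholder reference values: in a real test these would be taken from
# an independent, documented source (e.g. OtherPackage Y's output), and
# that source would be named here.  These are just degrees -> radians.
REFERENCE = {
    90.0: 1.5707963267948966,
    180.0: 3.141592653589793,
}

class TestAgainstReference(unittest.TestCase):
    """Each expected value's provenance is documented alongside it."""
    def test_matches_reference(self):
        for degrees, expected in REFERENCE.items():
            self.assertAlmostEqual(math.radians(degrees), expected,
                                   places=12)
```

Stating where the expected numbers came from is exactly what lets a
skeptical user decide whether the test is a meaningful confidence
builder.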
>> I'm using two phrases in the above that are meant to have a basic
>> definition but also allow some wiggle room for package developers:
>> - all significant elements of the package: the basic parts that most
>> people who use the package at all will use, similar to the dependency
>> - minimum set of common use cases: the typical use cases that most
>> people will encounter
>> This allows a package to be included in astropy without requiring the
>> developer to provide an exhaustive set of tests that verify correct
>> behavior in every possible corner of the domain space. This is
>> especially important for software whose behavior is data-dependent. Of
>> course, more tests are better, and would produce better coverage.
>> Vicki Laidler
>> AstroPy mailing list
>> AstroPy at scipy.org