[Baypiggies] How to determine what unit tests cover a code change?

Lincoln Peters anfrind at gmail.com
Thu Feb 3 01:19:16 CET 2011

On Wed, Feb 2, 2011 at 12:17 PM, Minesh B. Amin <mamin at mbasciences.com> wrote:
> Let me preface what follows by stating that, when it comes to
> "optimizing" how testing is done, any solution must produce no
> false positives, and no false negatives. Otherwise, the "optimizing"
> part would be self-defeating.

What I eventually want to do is set up a continuous integration system
so that the appropriate subset of unit tests is run immediately after
each code change, and the full suite is run on a less frequent
interval (maybe once per day).

> To gather the most accurate snapshot of the actual dependencies
> (that includes both python and non-python data/text/script files):
>   + each unittest must be run in a stand-alone session ... the
>     goal is to force each unittest to import whatever it wants;
>   + invoke each session as:
>          strace -f -e trace=open python <unittest>.py
>     and post-process the output produced
> Couple of issues to keep in mind:
>   + strace may produce relative paths which would need to be
>     resolved;
>   + in case this flow is too resource intensive (in terms of CPU time),
>     you may want to bundle the unittests according to some criteria.
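The strace recipe above produces lines such as `open("path", O_RDONLY) = 3` (prefixed with `[pid N]` when `-f` is used), so the post-processing step boils down to pulling out the successfully opened paths and resolving the relative ones. A minimal sketch of that parser — the regex and the `openat` fallback are my assumptions about strace's output format, not something from the original mail:

```python
import os
import re

# Matches lines like:   open("foo/bar.py", O_RDONLY) = 3
# and (with -f):        [pid 1234] open("baz.txt", O_RDONLY) = 4
# Newer strace emits openat(AT_FDCWD, "...") instead; handle both.
# Failed opens return -1 and are skipped.
OPEN_RE = re.compile(
    r'open(?:at)?\((?:AT_FDCWD, )?"([^"]+)"[^)]*\)\s*=\s*(-?\d+)')

def opened_files(strace_output, cwd="/"):
    """Extract successfully opened paths, resolving relative ones against cwd."""
    paths = set()
    for line in strace_output.splitlines():
        m = OPEN_RE.search(line)
        if m and int(m.group(2)) >= 0:
            # os.path.join leaves absolute paths alone, so only
            # relative paths get anchored at the test's working dir.
            paths.add(os.path.normpath(os.path.join(cwd, m.group(1))))
    return paths
```

Filtering the resulting set down to files inside the project tree is then a simple prefix check.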

So maybe what I want to do is write a script that would:

1. Build a list of the shell commands needed to run each test
stand-alone (or just load the test and immediately exit?), one at a time.
2. Run each test under strace, capturing the output.
3. Parse the output for any opened files that belong to the project, or
that are built from project files (tricky to map, but might be
especially useful if I want to extend this to compiled C and/or C++).
4. Save the mapping between tests and files in a form that I can
consult when running tests, at least until the mapping changes.

That seems like it would work.

Lincoln Peters
<anfrind at gmail.com>
