[Python-Dev] Code coverage reporting.
Titus Brown
titus at caltech.edu
Mon Jun 19 17:41:00 CEST 2006
On Sun, Jun 18, 2006 at 08:12:39PM -0700, Brett Cannon wrote:
-> On 6/15/06, Titus Brown <titus at caltech.edu> wrote:
-> >
-> >Folks,
-> >
-> >I've just run a code coverage report for the python2.4 branch:
-> >
-> > http://vallista.idyll.org/~t/temp/python2.4-svn/
-> >
-> >This report uses my figleaf code,
-> >
-> > http://darcs.idyll.org/~t/projects/figleaf-latest.tar.gz
->
->
-> Very nice, Titus!
->
-> >I'm interested in feedback on a few things --
-> >
-> >* what more would you want to see in this report?
-> >
-> >* is there anything obviously wrong about the report?
-> >
-> >In other words... comments solicited ;).
->
-> Making the comments in the code stand out less (i.e., not black) might be
-> handy since my eye still gets drawn to the comments a lot.
I think I'd have to use the tokenizer to do this, no? The comments
aren't kept in the AST, and I don't want to write a half-arsed regexp
because I'm sure I'd stumble over comments inside strings, etc. ;)
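For the record, here's roughly the tokenizer-based pass I have in mind --
just a sketch using the stdlib tokenize module (written in current Python
style rather than 2.4 idiom, and the function name is made up), so the HTML
generator could grey out comment lines:

    import tokenize

    def comment_lines(path):
        """Return the set of line numbers that contain a comment token."""
        lines = set()
        with open(path) as f:
            for tok in tokenize.generate_tokens(f.readline):
                if tok[0] == tokenize.COMMENT:
                    lines.add(tok[2][0])  # tok[2] is the (row, col) where the token starts
        return lines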
-> It would also be nice to be able to sort on different things, such as
-> filename.
Easy enough; the index just needs to be generated in multiple ways.
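Something along these lines, say -- write_index_page here is a made-up
helper standing in for whatever actually emits the HTML:

    def write_indexes(records, write_index_page):
        # records is a list of (filename, percent_covered) pairs
        orderings = {
            'index.html': lambda rec: rec[0],              # sorted by filename
            'index_by_coverage.html': lambda rec: rec[1],  # sorted by coverage
        }
        for outfile, key in orderings.items():
            write_index_page(outfile, sorted(records, key=key))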
-> But it does seem accurate; random checks of some modules that got high but
-> not perfect coverage all turned up cases where dependency injection
-> would be required to get the tests to work, since they depend on
-> platform-specific things.
Great!
-> >By the by, I'm also planning to integrate this into buildbot on some
-> >projects. I'll post the scripts when I get there, and I'd be happy
-> >to help Python itself set it up, of course.
->
->
-> I don't know if we need it hooked into the buildbots (unless it is dirt
-> cheap to generate the report). But hooking it up to the script in
-> Misc/build.sh that Neal has running to report reference leaks and
-> fundamental test failures would be wonderful.
Hmm, ok, I'll take a look.
The general cost is a ~2x slowdown when running with tracing enabled, and the
HTML generation itself takes less than five minutes (all of that in AST
parsing/traversal to figure out which lines *should* be looked at).
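To make those two costs concrete, here's a bare-bones sketch of the mechanism
(not figleaf's actual code, and it uses today's ast module rather than the
compiler package that 2.4 ships): a settrace callback fires on every executed
line, which is where the slowdown comes from, and an AST walk enumerates the
statement lines the report should expect to see.

    import ast
    import sys

    executed = {}   # filename -> set of line numbers actually run

    def _tracer(frame, event, arg):
        # Called on every 'line' event while tracing is enabled.
        if event == 'line':
            lines = executed.setdefault(frame.f_code.co_filename, set())
            lines.add(frame.f_lineno)
        return _tracer

    def collect(func, *args, **kwargs):
        """Run func with line tracing on, recording each executed line."""
        sys.settrace(_tracer)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(None)

    def candidate_lines(path):
        """Statement lines found by walking the AST -- what *should* run."""
        tree = ast.parse(open(path).read(), filename=path)
        return set(node.lineno for node in ast.walk(tree)
                   if isinstance(node, ast.stmt))

Per-file coverage is then just the executed set intersected with the
candidate set, divided by the size of the candidate set.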
cheers,
--titus