[Python-Dev] Coverity Scan

Christian Heimes christian at python.org
Fri Jul 26 00:56:30 CEST 2013


On 26.07.2013 00:32, Terry Reedy wrote:
> I found the answer here
> https://docs.google.com/file/d/0B5wQCOK_TiRiMWVqQ0xPaDEzbkU/edit
> Coverity Integrity Level 1 is 1 defect/1000 lines.
> Level 2 is .1 (we have passed that).
> Level 3 is .01 + no major defects + <20% false positives (of all
> defects?), as that is their normal rate.#
> 
> A higher false positive rate requires auditing by Coverity. They claim
> "A higher false positive rate indicates misconfiguration, usage of
> unusual idioms, or incorrect diagnosis of a large number of defects."
> They also add "or a flaw in our analysis."
> 
> # Since false positives should stay constant as true positives are
> reduced toward 0, false / all should tend toward 1 (100%) if I
> understand the ratio correctly.

About 40% of the dismissed cases are caused by a handful of issues. I
have documented these issues as "known limitations":
http://docs.python.org/devguide/coverity.html#known-limitations

For example, about 35 false positives are related to PyLong_FromLong()
and our small integer optimization. A correct modeling file would
eliminate these false positives. My attempts so far don't work as
hoped, and I don't have access to the full professional Coverity
tooling to debug them.
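
To make the issue a bit more concrete, here is a rough, simplified
sketch of the behaviour a modeling file would have to describe. The
helper functions are made up purely for illustration; the cache bounds
mirror CPython's usual defaults (-5 to 256):

    #include <Python.h>

    /* Hypothetical helpers, declared only to keep the sketch
     * self-contained; they stand in for CPython's internals. */
    static PyObject *get_cached_small_int(long value);
    static PyObject *allocate_new_long(long value);

    /* Simplified sketch, not the real implementation: for values in
     * the cached range, PyLong_FromLong() returns a new reference to
     * a shared, preallocated object instead of a fresh allocation. */
    static PyObject *
    long_from_long_sketch(long value)
    {
        if (-5 <= value && value <= 256) {
            PyObject *cached = get_cached_small_int(value);
            Py_INCREF(cached);           /* shared object, refcount > 1 */
            return cached;
        }
        return allocate_new_long(value); /* fresh object, refcount == 1 */
    }

Because the same call can hand back either a shared cached object or a
brand-new one, the analysis presumably has a hard time tracking
ownership, hence the false positives.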

Nearly 20 false positives are caused by Py_BuildValue("N"). I'm still
astonished that Coverity understands Python's reference counting most of
the time. :)
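
For readers who don't have the format codes memorized: "N" steals the
reference to the object it is given, while "O" adds a reference of its
own. A minimal, made-up example (build_pair() is not a real CPython
function):

    #include <Python.h>

    /* Minimal sketch: Py_BuildValue("N", obj) steals the reference to
     * obj, so no Py_DECREF is needed on success.  A checker that does
     * not know "N" transfers ownership can easily misjudge who owns
     * `first` after the call. */
    static PyObject *
    build_pair(long a, long b)
    {
        PyObject *first = PyLong_FromLong(a);
        if (first == NULL)
            return NULL;
        /* Ownership of `first` is handed to the new tuple here. */
        return Py_BuildValue("Nl", first, b);
    }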

Did I mention that we have almost reached Level 3? All major defects
have been dealt with (one of them only locally on the test machine
until Larry pushes his patch soonish), 4 of the 7 minor issues still
have to be closed, and our dismissed rate is just a little over 20%
(222 out of 1054, about 21%).

Christian
