[Python-ideas] Fwd: A PEP to define basical metric which allows to guarantee minimal code quality

Jason H jhihn at gmx.com
Thu Sep 21 17:02:18 EDT 2017


One of my hesitations on this topic is that it could create a false sense of security, and I mean that in both senses: the 'we're comfortable with the code base' sense, which leads to insufficient testing, and the 'we have a top-notch quality level, so there are no vulnerabilities' sense. The thing I keep coming back to is side-channel attacks. From legacy APICs on the mobo, to your DRAM leaking your crypto keys, to something arguably more code-level like the timing of responses to failed operations... I don't think we can even approximate security from quality. So given two code bases, and as many dimensions as needed to express it, how do you compare their 'quality'? Is that even fair? Can you only compare 'quality' against the previous iteration of the same source? Is it normalized for size?
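
To make the timing point concrete, here's a minimal, hypothetical sketch (the token and function names are made up, not taken from any real code base): two token checks that score identically on any line-count or complexity metric, yet one leaks information about the secret through response timing and the other doesn't.

    import hmac

    # Hypothetical secret, purely for illustration.
    EXPECTED_TOKEN = b"s3cr3t-api-token"

    def check_token_naive(supplied: bytes) -> bool:
        # '==' on bytes can return as soon as a byte mismatches, so how
        # long a failed check takes hints at how much of the token was
        # correct -- a textbook timing side channel.
        return supplied == EXPECTED_TOKEN

    def check_token_constant_time(supplied: bytes) -> bool:
        # Same result, but hmac.compare_digest compares in constant time.
        return hmac.compare_digest(supplied, EXPECTED_TOKEN)

A line-count or complexity metric won't distinguish the two.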

Even then, I'd argue that it shouldn't be anything the developer can claim themselves; or if they can, it has to be enforced by the build breaking completely when the claim doesn't hold. There has to be a tool that measures it, and it can't be gameable. We've had static analysis tools for some time. I've used them, and I don't know that they've done much good, aside from a copy-paste analyzer that helps keep things DRY.
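
To illustrate the gaming problem with a made-up example: suppose the metric is "no more than three statements per function". The second version below satisfies it and the first doesn't, yet the first is plainly easier to read.

    def parse_record(line):
        # Straightforward, but five statements: fails the hypothetical cap.
        fields = line.rstrip("\n").split(",")
        name = fields[0].strip()
        age = int(fields[1])
        email = fields[2].lower()
        return {"name": name, "age": age, "email": email}

    # "Fixed" by mechanically splitting it into one-line helpers.
    def _split(line): return line.rstrip("\n").split(",")
    def _name(f): return f[0].strip()
    def _age(f): return int(f[1])
    def _email(f): return f[2].lower()

    def parse_record_gamed(line):
        # At most two statements per function everywhere: the metric is
        # happy, the reader is not.
        f = _split(line)
        return {"name": _name(f), "age": _age(f), "email": _email(f)}

That's the sort of thing I mean: the number goes green while the code gets worse.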
