On 9/20/17, firstname.lastname@example.org email@example.com wrote:
But I think, since we need to avoid naming any tool in the PEP, we also need to avoid giving code examples. The aim of this proposal is to provide a guideline on minimal metrics for minimal quality.
Michel Foucault wrote a book (https://en.wikipedia.org/wiki/The_Birth_of_the_Clinic) whose subtitle is "An Archaeology of Medical Perception".
Imagine that code is an organism and quality is health.
Which archaeological era of medical perception is analogous to your proposal? ( https://en.wikipedia.org/wiki/File:Yupik_shaman_Nushagak.jpg or https://en.wikipedia.org/wiki/Blinded_experiment ?)
PS. A good analogy is probably heart rate. It is a very simple metric that we could use, and it seems not too problematic to propose a range in which it is healthy.
But different kinds of software are like different species. (see for example http://www.merckvetmanual.com/appendixes/reference-guides/resting-heart-rate... )
And there are development stages! Embryonic bpm is N/A (because there is no heart yet), then it is much higher than a child's, which is itself higher than an adult's.
Another simple metric could be temperature.
But an individual also has different tissues! Different parts of the body have different temperatures. (We probably could not measure the hair's temperature.)
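To make the analogy concrete, here is a toy sketch (the kinds, stages, and numbers are all invented for illustration, not part of any proposed PEP): the "healthy" range for a metric depends on the kind of project and its development stage, just as a healthy heart rate depends on species and age.

```python
# Hypothetical sketch: a metric's healthy range depends on context.
# All categories and thresholds below are invented examples.
HEALTHY_COVERAGE = {
    ("library", "mature"):    (80, 100),
    ("library", "prototype"): (0, 100),   # "embryonic": no range enforced yet
    ("script", "mature"):     (40, 100),
}

def is_healthy(kind, stage, value):
    """Check a metric value against the range for this kind/stage."""
    low, high = HEALTHY_COVERAGE[(kind, stage)]
    return low <= value <= high

print(is_healthy("library", "mature", 85))   # True
print(is_healthy("script", "mature", 85))    # True
print(is_healthy("library", "mature", 50))   # False
```

The point of the sketch is only that a single number is meaningless without the (species, stage) context attached to it.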
One of my hesitations on this topic is that it could create a false sense of security. And I mean security in both the 'comfortable with the code base' sense, leading to insufficient testing, and the 'we have a top-notch quality level, there are no vulnerabilities' sense. The one thing that I keep coming back to is all of the side-channel attacks: from legacy APICs on the mobo, to your DRAM leaking your crypto keys, to something arguably more code-level like the timing response of failed operations... I don't think we can even approximate quality to security. So given two code bases, and as many dimensions as needed to express it, how do you compare the 'quality' of the two? Is that even fair? Can you only compare 'quality' to the previous iteration of the source? Is it normalized for size?
Even then, I'd argue that it shouldn't be anything the developer can claim, or if they can, it's got to be enforced by completely breaking the build. There's got to be a tool that measures it, and it can't be gamed. We've had static analysis tools for some time. I've used them, and I don't know that they've done any good, aside from a copy-paste analyzer that helps keep things DRY.
Sorry for being late, I was on a professional trip to PyCon FR.
I see that the subject divides opinions.
Reading the responses, I have the impression that my proposal has been seen as mandatory, which of course I do not want. As previously said, I see this "PEP" as an informational PEP. So it is a guideline, not a mandate. Each developer would have the right to ignore it, just as each developer can choose to ignore PEP 8 or PEP 20.
Some people said it was too generic and not tied to Python, but I disagree on this point, because one metric that really is measured on Python code is precisely PEP 8 compliance, with tools like pep8, pylint, ... Not every metric can be attached to another PEP, I agree on that, but if at least one of them can be, then in my mind a PEP can be justified.
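As a minimal illustration of how style compliance can be measured on Python code (this is not a real linter, just one PEP 8 rule, the 79-character line limit, counted by hand):

```python
# Toy sketch: measure one PEP 8 rule, the 79-character line limit.
# Real tools (pep8/pycodestyle, pylint) check many more rules.
def long_lines(source, limit=79):
    """Return the 1-based numbers of lines longer than `limit`."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if len(line) > limit]

code = "x = 1\n" + "y = " + "'a' + " * 20 + "'b'\n"
print(long_lines(code))  # [2]
```

Even such a trivial counter yields a number whose trend over time can be tracked, which is the spirit of the proposal.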
@Jason, about the false sense of security: reading this makes me think of last week's PyCon. Someone told me that their coverage rate was falling. But in fact, it was because lines of code had been added without unit tests on them. It means that the proposed metrics are not meant to be "god" metrics that we only have to read to know precisely the "health" of our code, but metrics that show us the "health" trend of our code.
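A small sketch of the coverage anecdote (the numbers are hypothetical): the rate can fall simply because new, untested lines were added, even though no existing test was removed, which is why the trend needs interpreting rather than reading the absolute value alone.

```python
# Hypothetical numbers illustrating the falling-coverage anecdote.
def coverage(covered_lines, total_lines):
    """Coverage rate as a percentage."""
    return 100.0 * covered_lines / total_lines

before = coverage(covered_lines=800, total_lines=1000)
# 200 new lines are added, none of them exercised by unit tests:
after = coverage(covered_lines=800, total_lines=1200)

print(before)            # 80.0
print(round(after, 1))   # 66.7 -- fell without any test being deleted
```

The metric did its job: the drop signals "new code arrived without tests", not "old tests broke".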
@Chris, about "we measure only what we know": effectively, I think that is the reality. One year ago, I did not know the McCabe principle and its associated tool. But now I am using it. If a "PEP" that had existed for some time had talked about this concept, I would have read it and applied the concept.
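For readers who, like me a year ago, have not met the McCabe principle: a rough sketch of the idea (this is not the real `mccabe` tool; it just approximates cyclomatic complexity as 1 plus the number of decision points found in the AST):

```python
import ast

# Rough approximation of McCabe cyclomatic complexity: 1 + the number
# of decision points. The real tool is more careful (e.g. it counts
# each extra boolean operand); this only illustrates the principle.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def rough_complexity(source):
    """Count 1 + decision-point nodes in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

sample = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    elif score >= 50:
        return "C"
    return "D"
"""
print(rough_complexity(sample))  # 4 (the if/elif chain is three If nodes)
```

Like coverage, the number itself matters less than watching it climb on a single function over time.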
A perfect solution does not exist, I know, but I think this "PEP" could, at least partially, be a good guideline.