A PEP to define basic metrics that guarantee minimal code quality

Hi everybody, I've been a Python dev for about 10 years. I've noticed that customers are more and more curious about the development process, and refer to various sources to define what they call "good quality code". Some talk only about Pylint, others about SonarQube, ... I looked into SonarQube for Python and saw that it relies on Pylint, coverage and unittest. I've also seen customers' internal tools, each of which used some mix of an in-house tool, pylint, pylama, pep8, ... So, with the help of my customers (thanks to them ^^), I realized there is a gap in the PEPs on this point. I searched and found that no PEP defines, even minimally, which metrics and/or tools to use to evaluate the quality of Python code; there is no minimal guidance toward this goal. So my PEP idea is to try to define basic metrics that guarantee minimal code quality. I'd like to propose it as an informational PEP, so users and implementers would be free to ignore it. I don't see it as a constraining PEP, but as a reference PEP that could help, and that any developer could point to in order to say his/her code conforms to PEP xxxx. What do you think about this PEP idea?

On 15 September 2017 at 04:57, Alexandre GALODE <alexandre.galode@gmail.com> wrote:
What are you thinking about this PEP idea?
We don't really see it as the role of the language development team to formally provide direct advice on what tools people should be using to improve their development workflows (similar to the way that it isn't the C/C++ standards committees recommending tools like Coverity, or Oracle and the OpenJDK teams advising on the use of SonarQube). That said, there *are* collaborative groups working on these kinds of activities:

- for linters & style checkers, there's the "PyCQA": http://meta.pycqa.org/en/latest/introduction.html
- for testing, there's the "Python Testing Tools Taxonomy" (https://wiki.python.org/moin/PythonTestingToolsTaxonomy) and the testing-in-python mailing list (http://lists.idyll.org/listinfo/testing-in-python)
- for gradual typing, we do get involved for the language level changes, but even there, the day-to-day focus of activity is more the typing repo at https://github.com/python/typing/ rather than CPython itself

So while there's definitely scope for making these aspects of the Python ecosystem more approachable (similar to what we're aiming to do with packaging.python.org for the software distribution space), a PEP isn't the right vehicle for it. Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

Hi, Thanks for your reply, and the links. I knew PyCQA but not the others. It seems I didn't manage to express my idea clearly. I'm 100% OK with you that it is not the language's role to indicate precisely which tools to use. My idea is only to define basic metrics that are useful for evaluating code quality, tied to existing PEPs where they exist. I should stress that I'd like to propose an informational PEP only. I'm not suggesting that my idea must be mandatory, only that it serve as an indication for improving code quality. Each developer/company would be free to evaluate each metric in whatever way they want, just as for PEP 8, where each developer can use pep8, pylint, ... PEP 257, for docstrings, indicates how to structure a docstring. My PEP, in my mind, would indicate how to evaluate code quality at a basic level, i.e. only which metrics are useful for that. Only this, no tool recommendations. Customers I see regularly ask me to be PEP 8 & 257 & ... "certified". This PEP could be an opportunity to say one is "PEP xxxx certified", so customers would be sure that several metrics, based on PEPs and other parameters, were used to guarantee the quality of their code.

On 9/15/2017 1:58 AM, alexandre.galode@gmail.com wrote:
It seems to me that you are talking about stuff more appropriate to blog posts or the python wiki or even to a web site.
Automated test coverage is also a means to code quality, not quality in itself. I recently fixed a bug in 100% covered code. I discovered it by using the corresponding widgets like a user would. The automated test did not properly simulate a particular pattern of user actions.
PEP 8 started as a guide for new stdlib code, but we do not 'pep8 certify' Python patches to the stdlib. Security, stability, correctness, test coverage, and speed are all more important. It just happened that others eventually adopted PEP 8 for their own use, including in code checkers. Information about .rst only belongs in a PEP because we use it in our own docs. Information about code metrics might belong in a PEP only if we were applying them to the CPython code base. Note that our C code is checked by Coverity, but I don't believe there is a PEP about it. (No title has 'coverity'.) After pydev discussion, a core developer made the available free checks happen and then followed up on the deficiency reports. -- Terry Jan Reedy

Hi Alexandre, On Thu, Sep 14, 2017 at 10:58:43PM -0700, alexandre.galode@gmail.com wrote:
It might help if you tell us some of these proposed code quality metrics. What do you mean by "code quality", and how do you measure it? The only semi-objective measures of code quality that I am aware of are:

- number of bugs per kiloline of code;
- percentage of code covered by tests.

And possibly this one: http://www.osnews.com/story/19266/WTFs_m

Given two code bases in the same language, one is better quality than the other if the number of bugs per thousand lines is smaller, and the percentage of code coverage is higher. I am interested in, but suspicious of, measures of code complexity, coupling, code readability, and similar. What do you consider objective, reliable code quality metrics? -- Steve
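[For concreteness, a minimal sketch of how those two measures could be expressed; the numbers are invented for the example, and in practice the defect count would come from an issue tracker and the coverage figures from a test runner:

    def bugs_per_kloc(known_defects, source_lines):
        """Defects per thousand lines of code -- lower is better."""
        return known_defects / (source_lines / 1000)

    def coverage_percentage(covered_lines, executable_lines):
        """Share of executable lines exercised by the tests -- higher is better."""
        return 100.0 * covered_lines / executable_lines

    # Invented numbers: 42 known defects in a 120,000-line code base,
    # 9,500 of 11,000 executable lines hit by the test suite.
    print(bugs_per_kloc(42, 120_000))           # 0.35 defects/KLOC
    print(coverage_percentage(9_500, 11_000))   # ~86.4 %

]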

Hi, thanks for your answer, and your questions. Very nice cartoon; I printed it for my office ^^ First, I'd like to point out that, as in the title, the aim is not to guarantee quality but minimal quality. I think those are really two different things. About metrics, my ideas so far are the following:

- Basic Python rules: PEP 8 & PEP 20 compliance
- Docstrings/documentation: PEP 257 compliance
- Code readability: percentage of comment lines and percentage of blank lines
- Code maintainability / complexity: how easy it is to re-read old code, or for an external developer to understand it. As a non-exhaustive example, I use McCabe complexity in my work
- Code coverage: by unit tests
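[For illustration only, and deliberately tool-agnostic: the two readability percentages above can be computed with nothing but the standard library. The file name and function name are made up for the example; the complexity and coverage figures would need dedicated tooling, which the proposal leaves unspecified:

    def readability_figures(path):
        """Return (comment %, blank %) for a Python source file."""
        with open(path, encoding="utf-8") as source:
            lines = source.readlines()
        total = len(lines) or 1
        blank = sum(1 for line in lines if not line.strip())
        comments = sum(1 for line in lines if line.strip().startswith("#"))
        return 100.0 * comments / total, 100.0 * blank / total

    comment_pct, blank_pct = readability_figures("mymodule.py")
    print(f"comments: {comment_pct:.1f} %, blank lines: {blank_pct:.1f} %")

]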

Hi, indeed, it's one of the numerous tools which provide metrics on code. This PEP would not need to name specific tools, but it's a good example ^^ and it can suggest some complementary metrics.

Quality is something that an organisation and its people need to achieve by building appropriate processes and improvement methods into their work flow. Trying to be prescriptive will run into trouble for the wider world, I suspect. Many of the maintainability metrics may help a team. However, peer review and discussion within teams is a powerful way to achieve good code, and that is a process. I do not see quality as a quantity that can be easily measured. How can we set a minimum for the hard to measure? Barry p.s. Why does this thread have a reply address of python-ideas@googlegroups.com?

Since the proposal first came out, I've been wondering about its implications. I do think that there is some virtue to having it, but it quickly gets messy. In terms of quality as a quantity, the minimal would be to break on a linting error. JSHint has numbered issues:

    /* jshint -W034 */

Having a decorator of @pylint-034 could tell PyLint or Python proper to refuse compilation/execution if the standard is not met. So would we limit or list the various standards that apply to the code? Do we limit it on a per-function, per-file, or even per-line basis? How do we handle different organizational requirements?

    @pylint([34])
    @pep([8, 20])
    def f(a):
        return math.sqrt(a)

The other aspect that comes into code quality is unit tests. A decorator declaring which test functions need to be run on a function (and pass) would also be useful:

    def test_f_arg_negative(f):
        try:
            return f(-1) == 1
        except ValueError:  # math domain error
            return False

    @tests([test_f_arg_positive, test_f_arg_negative, test_f_arg_zero,
            test_f_arg_int, test_f_arg_float])
    def f(a):
        return math.sqrt(abs(a))

In a test-driven world, the test functions would come first, but this is rarely the case. Possible results are not just test failure, but also that a test does not yet exist. I think it would be great to know what tests the function is supposed to pass. The person coding the function has the best idea of what inputs it is expected to handle and side effects it should cause. Hunting through a test kit for the function is usually tedious. This, I think, would vastly improve it while taking steps to describe and enforce 'quality'. Just my $0.02
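[As a purely illustrative sketch of the @tests idea above (no such decorator exists anywhere; every name here is hypothetical), such a decorator could simply record the expected tests on the function and let a separate helper run them:

    import math

    def tests(test_functions):
        """Hypothetical decorator: record which test functions a callable
        is expected to pass (nothing here is an existing feature)."""
        def decorator(func):
            func.__expected_tests__ = list(test_functions)
            return func
        return decorator

    def run_expected_tests(func):
        """Run the recorded tests against *func*; return {test name: passed}."""
        return {test.__name__: bool(test(func))
                for test in getattr(func, "__expected_tests__", [])}

    def test_f_arg_negative(f):
        try:
            return f(-1) == 1
        except ValueError:          # math domain error
            return False

    @tests([test_f_arg_negative])
    def f(a):
        return math.sqrt(abs(a))

    print(run_expected_tests(f))    # {'test_f_arg_negative': True}

Recording the tests as an attribute keeps the decorator free of side effects; a build step or CI hook could then call run_expected_tests() and fail if any entry is False or missing.]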

How do we handle different organizational requirements?
By keeping linting out of the code ( and certainly out of "official" python), and in the organization's development process where it belongs.
Yeach! But that's just my opinion -- if you really like this idea, you can certainly implement it in an external package. No need for a PEP or anything of the sort.
Again, I don't think it belongs there, but I do see your point. If you like this idea--implement it, put it on PyPI, and see if anyone else likes it as well. -CHB

Not my idea, but the question was raised as to what a 'quality guarantee' or 'quality' could even mean. I was just throwing out examples for discussion. I did not intend to make you vomit. I think in an abstract sense it's a good idea, but in my own head I would expect all code to be written to the highest standard from the start. I have some nascent ideas, but they are not even worth mentioning yet, and I don't even know how they'd fit in any known language.

Hi, Thanks to everyone for your contributions to my proposal. @Barry, I agree with you that each organisation needs to integrate the "quality process" into its workflow. But before integrating it, this "quality" process needs to be defined, at least minimally, I think. I don't think either that it is easy to measure completely, but we can define minimal metrics that are easy to get from the code, such as the ones defined previously. About your question on googlegroups, I don't know if it's related, but I use Google Groups to participate in this mailing list. @Jason, thanks for your example. When I discussed this proposal with the other devs on my team, they also needed examples to get a better idea of the use. But I think, since we need to avoid naming any tool in the PEP, we also need to avoid giving a code example. The aim of this proposal is to have a guideline on minimal metrics for minimal quality. As you mentioned, I ask all the devs on my team to respect the highest possible standard. This, plus my customers' permanent requests for the highest possible development quality, is the reason behind this proposal.

The organisation's work flow *is* its quality process, for better or worse. You can define metrics. But as to what they mean? Well, that is the question. I was once told that you should measure a new metric for 2 years before attempting to use it to change your processes. Oh, and what is the most important thing for a piece of work? I'd claim it's the requirements. Get them wrong and nothing else matters. Oh, and thank you for raising a very important topic. Barry

You can define metrics. But as to what they mean? Well that is the question.
One big problem with metrics is that we tend to measure what we know how to measure -- generally really not the most useful metric... As for some kind of PEP or PEP-like document: I think we'd have to see a draft before we have any idea as to whether it's a useful document -- if some of the folks on this thread are inspired -- start writing! I, however, am skeptical because of two things:

1) Most code-quality measures, processes, etc. are not language specific -- so they don't really belong in a PEP. So why PEP 8? Because the general quality metric is: "the source code conforms to a consistent style" -- there is nothing language specific about that. But we need to actually define that style for a given project or organization -- hence PEP 8.

2) While you can probably get folks to agree on general guidelines -- tests are good! -- I think you'll get buried in bike shedding of the details. And it will likely go beyond just the color of the bike shed. Again -- what about PEP 8? Plenty of bike shedding opportunities there. But there was a need to reach consensus on SOMETHING -- we needed a style guide for the standard lib. I'm not sure that's the case with other "quality" metrics. But go ahead and prove me wrong! -CHB

On 9/20/17 8:37 PM, Chris Barker - NOAA Federal wrote:
I don't see the need for a PEP here. PEPs are written where we need the core committers to agree on something (how to change Python, how to style stdlib code, etc), or where we need multiple implementations to agree on something (what does "python" mean, how to change Python, etc). There's no need for the core committers or multiple implementations of Python to agree on quality metrics. There's no need for this to be a PEP. Write a document that proposes some quality metrics. Share it around. Get people to like it. If it becomes popular, then people will start to value it as a standard for project quality. It doesn't need to be a PEP. --Ned.

On 21 September 2017 at 10:51, Ned Batchelder <ned@nedbatchelder.com> wrote:
And explore the academic literature for research on quality measures that are actually predictors of real world benefits (e.g. readability, maintainability, correctness, affordability). There doesn't seem to be all that much research out there, although I did find https://link.springer.com/chapter/10.1007/978-3-642-12165-4_24 from several years ago, as well as http://ieeexplore.ieee.org/document/7809284/ from last year (both are examples of pay-to-play science though, so not particularly useful to open source practitioners). Regardless, Ned's point still stands: the PEP process only applies to situations where the CPython core developers (or a closely associated group like the Python Packaging Authority) are the relevant global authorities on a topic. Even PEP 7 and PEP 8 are technically only the style guides for the CPython reference implementation - folks just borrow them as the baseline style guides for their own Python projects. "Which characteristics of Python code are useful predictors of the ability to deliver software projects to specification on time and within budget?" (the most pragmatic definition of "software quality") is *not* one of those areas - for that, you'd be more looking towards groups like IEEE (Institute of Electrical & Electronics Engineers) and ACM (Association for Computing Machinery), who study that kind of thing across multiple languages and language communities, and try to put some empirical weight behind their findings, rather than relying primarily on instinct and experience (which is the way we tend to do things in the open source community, since it's more fun, and less effort). Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On 21.09.2017 03:32, Nick Coghlan wrote:
On the topic, you might want to have a look at a talk I gave at EuroPython 2016 on the valuation of a code base (in the context of the valuation of a Python company): https://downloads.egenix.com/python/EuroPython-2016-Python-Startup-Valuation... (1596930 bytes) Video: https://www.youtube.com/watch?v=nIoE3KJxK6U There are a few things you can do with metrics to figure out how good a code base is. Of course, in the end you always have to do a code review, but the metrics are a good indicator of where to start looking for possible issues. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Sep 21 2017)
::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/

participants (11)
- Alexandre GALODE
- alexandre.galode@gmail.com
- Barry Scott
- Chris Barker - NOAA Federal
- Jason H
- M.-A. Lemburg
- Ned Batchelder
- Nick Coghlan
- Steven D'Aprano
- suresh
- Terry Reedy