[Python-ideas] Structural type checking for PEP 484
jlehtosalo at gmail.com
Fri Sep 11 08:24:36 CEST 2015
On Thu, Sep 10, 2015 at 9:42 AM, Sven R. Kunze <srkunze at mail.de> wrote:
> On 10.09.2015 06:12, Jukka Lehtosalo wrote:
>> but here are some of the main benefits as I see them:
>> - Code becomes more readable. This is especially true for code that
>> doesn't have very detailed docstrings.
> If I have code without docstrings, I better write docstrings then. ;)
> I mean when I am really going to touch that file to improve documentation
> (which annotations are a piece of), I am going to add more information for
> the reader of my API and that mostly will be describing the behavior of the API.
> If my variables have crappy names, such that I need to add type hints to
> them, well then, I'd rather fix them first.
Even good variable names can leave the type ambiguous. And besides, if you
assume that all code is perfect or can be made perfect I think that you've
already lost the discussion. Reality disagrees with you. ;-)
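To make that concrete, here is a small sketch (both functions and names are invented for illustration): even with reasonable parameter names, an unannotated signature leaves the types ambiguous, and the annotated version resolves the ambiguity without renaming anything.

```python
from typing import Callable, Iterable

# "paths" and "handler" are decent names, yet the types stay ambiguous:
# is paths a list of strings, of Path objects, or a single glob pattern?
# What arguments does handler take?
def scan(paths, handler):
    for p in paths:
        handler(p)

# The PEP 484 annotated version answers those questions on the spot:
def scan_typed(paths: Iterable[str],
               handler: Callable[[str], None]) -> None:
    for p in paths:
        handler(p)
```

A caller reading only the second signature knows exactly what to pass.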
You can't just wave a magic wand to get every programmer to document
their code and write unit tests. However, we know quite well that
programmers are perfectly capable of writing type annotations, and tools
can even enforce that they are present (witness all the Java code in
existence). Tools can't verify that you have good variable names or useful
docstrings, and people are too inconsistent or lazy to be relied on.
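As one concrete example of such enforcement, mypy can be configured to reject any function definition that lacks annotations (a hypothetical minimal project config; see the mypy documentation for the exact option names):

```ini
[mypy]
# Reject any function defined without type annotations, so their
# presence can be enforced mechanically, much like Java's type system.
disallow_untyped_defs = True
```

No equivalent switch exists for "write good docstrings".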
>> You'll get the biggest benefits if you are working on a large code base
>> mostly written by other people with limited test coverage and little
>> comments or documentation.
> If I had large untested and undocumented code base (well I actually have),
> then static type checking would be ONE tool to find out issues.
Sure, it doesn't solve everything.
> Once found out, I write tests as hell. Tests, tests, tests. I would not
> add type annotations. I need tested functionality not proper typing.
Most programmers only have limited time for improving existing code. Adding
type annotations is usually easier than writing tests. In a cost/benefit
analysis it may be optimal to spend half the available time on annotating
parts of the code base to get some (necessarily limited) static
checking coverage and spend the remaining half on writing tests for
selected parts of the code base, for example. It's not all or nothing.
>> You get extra credit if your tests are slow to run and flaky,
> We are problem solvers. So, I would tell my team: "make them faster and
> more reliable".
But you'd probably also ask them to implement new features (or *your*
manager might be unhappy), and they have to find the right balance, as they
only have 40 hours a week (or maybe 80 hours if you work at an early-stage
startup :-). Having more tools gives you more options for spending your
limited time.
>> I consider that difference pretty significant. I wouldn't want to increase
>> the fraction of unchecked parts of my annotated code by a factor of 8, and
>> I want to have control over which parts can be type checked.
> Granted. But you still don't know if your code runs correctly. You are
> better off with tests. And I agree type checking is 1 test to perform (out
> of 10K).
Actually a type checker can verify multiple properties of a typical line of
code. So for 10k lines of code, complete type checking coverage would give
you the equivalent of maybe 30,000 (simple) tests. :-P
And I'm sure it would take much less time to annotate your code than to
manually write the 30,000 test cases.
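As an illustrative sketch (the function is invented for the example), consider how many distinct properties a checker verifies on a single return line, each of which would otherwise need its own test to pin down:

```python
from typing import List

def total(prices: List[float], discount: float) -> float:
    # On this one line a type checker verifies several properties at
    # once: prices is iterable, its elements support addition, sum()
    # yields a number, subtraction and multiplication get numeric
    # operands, and the result matches the declared float return type.
    return sum(prices) * (1.0 - discount)
```

Covering those same guarantees dynamically takes multiple test cases, e.g. `total([10.0, 20.0], 0.5) == 15.0` and the empty-list case `total([], 0.5) == 0.0`.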
>> I don't see the effort for adding type hints AND the effort for further
>> parsing (by human eyes) justified by partially better IDE support and 1
>> single additional test within test suites of about 10,000s of tests.
>> Especially, when considering that correct types don't prove functionality
>> in any case. But tested functionality in some way proves correct typing.
> I didn't see you respond to that. But you probably know that. :)
This is a variation of an old argument, which goes along the lines of "if
you have tests and comments (and everybody should, of course!) type
checking doesn't buy you anything". But if the premise can't be met, the
argument doesn't actually say anything about the usefulness of type
checking.
It's often not cost effective to have good test coverage (and even 100%
line coverage doesn't give you full coverage of all interactions). Testing
can't prove that your code doesn't have defects -- it just proves that for
a tiny subset of possible inputs your code works as expected. A type checker
may be able to prove that for *all* possible inputs your code doesn't do
certain bad things, but it can't prove that it does the good things.
Neither subsumes the other, and both of these approaches are useful and
complementary (but incomplete). I think that there was a good talk
basically about this at PyCon this year, by the way, but I can't remember
the name.
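A toy sketch of that complementarity (both functions are invented for illustration): the first bug is invisible to a type checker but trivial for a test; the second lives on a branch tests may never exercise, while a stricter declared return type would surface it statically.

```python
from typing import List, Optional

def mean(xs: List[float]) -> float:
    # Fully type-correct for every input, yet computes the wrong value:
    # the division by len(xs) is missing.  No type checker catches
    # this; a single test case does, since mean([1.0, 2.0, 3.0])
    # returns 6.0 instead of the expected 2.0.
    return float(sum(xs))

def label(n: int) -> Optional[str]:
    # The small-n branch silently returns None.  Tests that never
    # exercise small n miss it, whereas declaring the return type as
    # plain `str` would make a checker flag the None return.
    if n > 100:
        return "big"
    return None
```

Tests prove a handful of inputs behave well; the checker proves all inputs avoid certain misbehaviors. You want both.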