A few random remarks:
1) I personally never used it. I'd rather have some machinery that would run the checks "magically" using the type annotations, without the need for explicit decoration.
2) I fully agree that there is an important difference between using runtime checks to verify your code (for instance, as part of an automated test suite, or even when manually testing an application) and using them for data validation.
3) In practice, however, I have never disabled assertions in production, though I agree it can be problematic if developers confuse AssertionErrors with TypeErrors.
4) Ten years ago, when I was working on the EDOS project ( http://cordis.europa.eu/pub/ist/docs/directorate_d/st-ds/edos-project-story_en.pdf ), I ran a small experiment where I used, IIRC, the profile hook to intercept all function/method calls and log information about argument and return value types to a gigantic log file. The log file could then be parsed and that information used to suggest type annotations. Except that there were no type annotations in Python at the time.
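For the curious, here is a minimal sketch of what that profile-hook experiment might look like in today's Python. The log format and the type_log.txt file name are made up for illustration; a real run would want a structured format and some filtering of stdlib frames:

    import sys

    def log_types(frame, event, arg):
        # 'call' fires on function entry: record the argument types.
        if event == "call":
            code = frame.f_code
            arg_types = {name: type(frame.f_locals[name]).__name__
                         for name in code.co_varnames[:code.co_argcount]
                         if name in frame.f_locals}
            log.write(f"CALL {code.co_filename}:{code.co_name} {arg_types}\n")
        # 'return' fires on function exit: arg holds the return value.
        elif event == "return":
            log.write(f"RET  {frame.f_code.co_name} {type(arg).__name__}\n")

    log = open("type_log.txt", "w")   # hypothetical log file name
    sys.setprofile(log_types)
    # ... run the program or test suite under observation ...
    sys.setprofile(None)
    log.close()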
I know PyCharm can do a similar thing now: you run your program or your tests under the debugger, it logs runtime type information somewhere, and it can then use that information for autocompletion or perhaps to suggest type annotations.
Now I believe something could be done along these lines:
a) record runtime type information from test or regular runs
b) massage this information and use it to annotate Python code with additional type information (up to the developer to then accept or reject the proposed changes)
c) also run a test suite or an app under some magical machinery, and either raise a TypeError or log a warning when a discrepancy is detected between the type annotations and the runtime behaviour.
(c) could be done independently of (a) and (b); (a) and (b) would share similar machinery; and together, (a), (b) and (c) would probably be a useful low-risk way to introduce type annotations to an existing code base. A rough sketch of (c) follows below.
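As a minimal sketch of what (c) might look like, assuming whole-module wrapping rather than a profile hook, so no function needs explicit decoration. check_module, _checked and the strict flag are hypothetical names, only plain (non-generic) annotations are checked, and a real implementation would need something like typeguard to handle generics such as list[int]:

    import functools
    import inspect
    import logging
    import typing

    def check_module(module, strict=False):
        # Wrap every function in `module` so annotated arguments and
        # return values are checked at runtime -- no decoration needed.
        for name, obj in list(vars(module).items()):
            if inspect.isfunction(obj):
                setattr(module, name, _checked(obj, strict))

    def _checked(func, strict):
        hints = typing.get_type_hints(func)
        sig = inspect.signature(func)

        def report(what, value, expected):
            msg = (f"{func.__qualname__}: {what} = {value!r} "
                   f"does not match annotation {expected.__name__}")
            if strict:
                raise TypeError(msg)
            logging.warning(msg)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for pname, value in bound.arguments.items():
                expected = hints.get(pname)
                # Only plain classes are checked; generics are skipped.
                if isinstance(expected, type) and not isinstance(value, expected):
                    report(f"argument {pname!r}", value, expected)
            result = func(*args, **kwargs)
            expected = hints.get("return")
            if isinstance(expected, type) and not isinstance(result, expected):
                report("return value", result, expected)
            return result

        return wrapper

One would then call check_module(mymodule) once at test startup, and every discrepancy shows up as a warning (or a TypeError with strict=True) without touching the code under test.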
(a) and (b) could also provide data for an interesting SE research project.