
On Sat, Jun 24, 2017 at 10:42:19PM +0300, Koos Zevenhoven wrote: [...]
> Clearly, there needs to be some sort of distinction between runtime classes/types and static types, because static types can be more precise than Python's dynamic runtime semantics.
I think that's backwards: runtime types can be more precise than static types. Runtime types can make use of information known at compile time *and* at runtime, while static types can only make use of information known at compile time. Consider:

    List[str if today == 'Tuesday' else int]

The best that the compile-time checker can do is treat it as List[Union[str, int]], if even that, but at runtime we can tell whether or not [1, 2, 3] is legal.

But in any case, *static types* and *dynamic types* (runtime types, classes) are distinct concepts, but with significant overlap. Static types apply to *variables* (or expressions) while dynamic types apply to *values*. Values are, in general, only known at runtime.
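To make the asymmetry concrete, here is a small sketch (the `is_list_of` helper and the Tuesday condition are illustrative, not a real typing API): the runtime check can consult today's date and pick the exact element type, while a static checker has to cover both branches.

```python
import datetime

# The element type depends on runtime state, so only a runtime
# check can be precise; a static checker sees List[Union[str, int]].
elem_type = str if datetime.date.today().strftime("%A") == "Tuesday" else int

def is_list_of(values, typ):
    """Shallow runtime check: a list whose every element is a `typ`."""
    return isinstance(values, list) and all(isinstance(v, typ) for v in values)

# At runtime we know exactly which branch was taken.
print(is_list_of([1, 2, 3], elem_type))
```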
> For example, Iterable[int] is an iterable that contains integers. For a static type checker, it is clear what this means. But at runtime, it may be impossible to figure out whether an iterable is really of this type without consuming the whole iterable and checking whether each yielded element is an integer.
There's a difference between *requesting* an object's runtime type and *verifying* that it is what it says it is. Of course, if we try to verify that an iterator yields nothing but ints, we can't do so without consuming the iterator, or possibly even entering an infinite loop. But we can ask an object what type it is: it can tell us that it's an Iterable[int], and this could be an extremely fast check. Assuming you trust the object not to lie. ("Consenting adults" may apply here.)
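A minimal sketch of that "ask, don't verify" idea; the `IntStream` class and its `element_type` attribute are hypothetical conventions for illustration, not part of the `typing` module:

```python
from typing import Iterator

class IntStream:
    """The object *declares* its element type via an attribute, so asking
    is O(1). Nothing verifies the claim -- we trust it not to lie."""
    element_type = int  # self-reported, never checked

    def __iter__(self) -> Iterator[int]:
        yield from (1, 2, 3)

def claims_iterable_of(obj, typ):
    # Fast check: consult the declared element type instead of
    # consuming the iterable.
    return getattr(obj, "element_type", None) is typ

print(claims_iterable_of(IntStream(), int))  # True, without iterating
```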
> Even that is not possible if the iterable is infinite. Even Sequence[int] is problematic, because checking the types of all elements of the sequence could take a long time.
> Since things like isinstance(it, Iterable[int]) cannot guarantee a proper answer, one easily arrives at the conclusion that static types and runtime classes are just two separate things and that one cannot require that all types support something like isinstance at runtime.
That's way too strong. I agree that static types and runtime types (I don't use the term "class" because, in principle at least, this could include types not implemented as a class, e.g. a struct or record or primitive unboxed value) are distinct, but they do overlap. To describe them as "separate" implies that they are unconnected, and that one could sensibly have things which are statically typed as (let's say) Sequence[bool] but runtime typed as float.

Gradual typing is useful because the static types are at least an approximation to the runtime types. If they had no connection at all, we'd learn nothing from static type checking and there would be no reason to do it. So static types and runtime types must be at least closely related to be useful.

[...]
> These and other incompatibilities between runtime and static typing will create two (or more) different kinds of type-annotated Python: runtime-oriented Python and Python with static type checking. These may be incompatible in both directions: a static type checker may complain about code that is perfectly valid for the runtime folks, and code written for static type checking may not be able to use new Python techniques that make use of type hints at runtime.
Yes? What's your point? Consenting adults certainly applies here. There are lots of reasons why people might avoid "new Python techniques" for *anything*, not just type hints:

- they have to support older versions of Python;
- they're stuck on an older version and can't upgrade;
- they just don't like those new techniques.

Nobody forces you to run a static type-checker. If you choose to run one, and it gives the wrong answers, then you can:

- stop using it;
- use a better one that gives the right answer;
- fix the broken code that the checker says is broken (regardless of whether it is genuinely broken or not);
- add, remove or modify annotations to satisfy the checker;
- disable type-checking for that code unit (module?) alone.

But the critical thing here is that so long as Python is a dynamically typed language, you cannot eliminate runtime type checks. You can choose *not* to write them in your code, and rely on duck typing and exceptions, but the type checks are still there in the implementation. E.g. you have x + 1 in your code. Even if *you* don't guard with a type check:

    # if isinstance(x, int):
    y = x + 1

there's still a runtime check in the byte-code which prevents low-level machine code errors that could lead to a segmentation fault or worse.
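The point can be demonstrated directly; this sketch shows the interpreter's own check firing even though the code contains no explicit isinstance guard:

```python
def bump(x):
    # No explicit type check here -- but the + operation performs one
    # internally, so a type mismatch raises TypeError instead of
    # corrupting memory.
    return x + 1

print(bump(41))  # 42
try:
    bump("not a number")
except TypeError as e:
    print("caught:", e)
```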
> There may not even be a fully functional subset of the two "languages".
What do you mean by "fully functional"? Of course there will be working code that can pass both the static checks and run without error. Here's a trivial example:

    print("Hello World")

On the other hand, it's trivially true that code which works at runtime cannot *always* be statically checked:

    s = input("Type some Python code: ")
    exec(s)

The static type checker cannot possibly check code that doesn't even exist until runtime!

I don't think it is plausible to say that there is, or could be, no overlap between (a) legal Python code that runs under a type-checker, and (b) legal Python code that runs without it. That's literally impossible, since the type-checker is not part of the Python interpreter, so you can always just *not run the type-checker* to turn (a) into (b).
> Different libraries will adhere to different standards and will not be compatible with each other. The split will be much worse and more difficult to understand than Python 2 vs 3, people around the world will suffer like never before, and programming in Python will become a very complicated mess.
I think this is Chicken Little "The Sky Is Falling" FUD.
> One way of solving the problem would be that type annotations are only a static concept, like with stubs or comment-based type annotations.
I don't agree that there's a problem that needs to be solved.
> This would also be nice from a memory and performance perspective, as evaluating and storing the annotations would not occupy memory (although both issues and some more might be nicely solved by making the annotations lazily evaluated).
Sounds like premature optimization to me. How many distinct annotations do you have? How much memory do you think they will use?

If you're running 64-bit Python, each pointer to the annotation takes a full eight bytes. If we assume that every annotation is distinct, and we allow 1000 bytes for each annotation, a thousand annotations would only use about 1MB of memory. On modern machines, that's trivial. I don't think this will be a problem for the average developer. (Although people programming on embedded devices may be different.)

If we want to support that optimization, we could add an optimization flag that strips annotations at runtime, just as the -OO flag strips docstrings. That becomes a matter of *consenting adults* -- if you don't want annotations, you don't need to keep them, but it then becomes your responsibility not to try to use them. (If you do, you'll get a runtime AttributeError.)
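For the curious, a rough back-of-the-envelope measurement; the numbers are implementation-dependent (CPython) and the `greet` function is just an example:

```python
import sys

def greet(name: str, excited: bool = False) -> str:
    return name + ("!" if excited else "")

# Rough estimate of what one function's annotations cost at runtime:
# the __annotations__ dict itself plus its string keys.  The type
# objects (str, bool) are shared, so they are not counted here.
ann = greet.__annotations__
overhead = sys.getsizeof(ann) + sum(sys.getsizeof(k) for k in ann)
print(ann)
print("approx bytes:", overhead)
```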
> However, leaving out runtime effects of type annotations is not the approach taken, and runtime introspection of annotations seems to have some promising applications as well. And for many cases, the traditional Python class actually acts very nicely as both the runtime and static type.
>
> So if type annotations will be both for runtime and for static checking, how do we make everything work for both static and runtime typing?
>
> Since a writer of a library does not know what the type hints will be used for by the library users,
No, that's backwards. The library creator gets to decide what their library uses annotations for: type-hints, or something else. As the user of a library, I don't get to decide what the library does with its own annotations.
> it is very important that there is only one way of making type annotations which will work regardless of what the annotations are used for in the end. This will also make it much easier to learn Python typing.
I don't understand this.
> Regarding runtime types and isinstance, let's look at the Iterable[int] example. For this case, there are a few options:
>
> 1) Don't implement isinstance
>
> This is problematic for runtime uses of annotations.
>
> 2) isinstance([1, '2', 'three'], Iterable[int]) returns True
>
> This is in fact now the case.
That's clearly a bug. If isinstance(... Iterable[int]) is supported at all, then clearly the result should be False. [...]
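If one did want a "correct" shallow check here, a sketch might look like this; `is_iterable_of` is a hypothetical helper, only safe for finite, re-iterable containers, not a general isinstance replacement:

```python
from collections.abc import Iterable

def is_iterable_of(obj, typ):
    """Shallow runtime check: every element of a finite container is
    an instance of `typ`.  Consumes the object, so don't pass it an
    iterator you still need, or an infinite iterable."""
    if not isinstance(obj, Iterable):
        return False
    return all(isinstance(item, typ) for item in obj)

print(is_iterable_of([1, 2, 3], int))          # True
print(is_iterable_of([1, '2', 'three'], int))  # False
```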
> 3) Check as much as you can at runtime
For what purpose?
> 4) Do a deeper check than in (2) but trust the annotations
>
> For example, an instance of a class that has a method like
>
>     def __iter__(self) -> Iterator[int]: some code
>
> could be identified as Iterable[int] at runtime, even if it is not guaranteed that all elements are really integers.
I suggested something similar to this earlier in this post.
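A sketch of how option (4) might work in practice, using `typing.get_type_hints` to read the declared return type of `__iter__` instead of consuming the iterable (`IntRange` is an illustrative class):

```python
from typing import Iterator, get_type_hints

class IntRange:
    def __init__(self, n: int) -> None:
        self.n = n

    def __iter__(self) -> Iterator[int]:
        return iter(range(self.n))

# Trust the annotation rather than checking the elements: resolve the
# declared return type of __iter__ at runtime.  Nothing verifies that
# the method actually yields ints.
hints = get_type_hints(IntRange.__iter__)
print(hints["return"])  # -> typing.Iterator[int]
```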
> On the other hand, an object returned by
>
>     def get_ints() -> Iterable[int]: some code
>
> does not know its own annotations, so the check is difficult to do at runtime. And of course, there may not be annotations available.
Right -- when annotations are not available, the type checker will either infer types, if it can, or default to the Any type.

I don't really understand where you are going with this. The premise, that statically-type-checked Python is fundamentally different from Python-without-static-checks, and therefore we have to bring in a bunch of extra runtime checks to make them the same, seems wrong to me.

Perhaps I have not understood you.

-- 
Steve