
Hi. Sorry for the misunderstanding. I understand that keeping track of the value of each instance can be cumbersome (and it's not exactly in the scope of a *type* checker). Let me focus on the "type definition" of a TypedDict.

Currently with TypedDict you can define all the 'required' or, with `total=False`, all the 'possible' keys a dict can have. I can understand how difficult it is to deal with the current "non-total" TypedDict, and how the single flag leads to a "too coarse" definition, exacerbated in a "non-total" TypedDict with many keys, which can produce a lot of false positives.

Still, now we are talking about giving a more "precise" definition of which keys are required, which are possible, and which are simply not present. How are the implementors supposed to use this additional information? What I mean is: let's say that now we use the `Required` annotation:

```python
from typing import TypedDict, Required

class C(TypedDict, total=False):
    required: Required[int]
    optional: int

def h(c: C, fallback: int) -> None:
    getitem_required = c['required']
    getitem_optional = c['optional']
    getitem_nonexistent = c['nonexistent']
    get_required = c.get('required', fallback)
    get_optional = c.get('optional', fallback)
    get_nonexistent = c.get('nonexistent', fallback)
```

I think it's a "waste" not to use the additional explicit annotation and mark as errors, in addition to the `getitem_nonexistent` and `get_nonexistent` lines, also the `getitem_optional` and `get_required` lines. Both cases are code smells: the first is a hidden bug; the second is basically dead code.
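To make the two code smells concrete at runtime, here is a small sketch using plain dicts as stand-ins for instances of `C` (the dict literals and the `99` fallback are illustrative values, not part of the proposal):

```python
# A plain dict standing in for an instance of C, where 'required' is
# guaranteed present (what Required[int] promises statically) and
# 'optional' may legitimately be absent (total=False).
c = {'required': 1}  # 'optional' omitted, which C permits

# Smell 1: indexing an optional key is a hidden bug -- it raises
# KeyError whenever the key happens to be absent.
try:
    _ = c['optional']
except KeyError:
    print("KeyError: 'optional' may be missing")

# Smell 2: .get() with a fallback on a required key is dead code --
# the fallback can never be returned, since the key is always there.
assert c.get('required', 99) == 1
```

A checker that understands `Required` has enough information to flag both patterns statically, before they reach runtime.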