On 4/11/21 7:55 PM, Paul Bryan wrote:
PEP 563 also requires using ``eval()`` or ``typing.get_type_hints()`` to examine annotations. Code updated to work with PEP 563 that calls ``eval()`` directly would have to be updated simply to remove the ``eval()`` call. Code using ``typing.get_type_hints()`` would continue to work unchanged, though future use of that function would become optional in most cases.
I think it is worth noting somewhere that string annotations are still valid, and should still be evaluated if so.
That's not up to me, it's up to the static type checkers who created that idiom. But I assume they'll continue to support stringized annotations, whether manually or automatically created.
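To make the "optional in most cases" point concrete, here is a small sketch (not from the thread) showing how ``typing.get_type_hints()`` evaluates manually stringized annotations today; under PEP 649 the annotations would already be real objects, but this call would keep working unchanged:

```python
from typing import get_type_hints

# A stringized annotation, as written manually or produced by
# PEP 563's ``from __future__ import annotations``.
def greet(name: "str") -> "int":
    return len(name)

# get_type_hints() evaluates the strings back into real type objects.
hints = get_type_hints(greet)
print(hints)  # {'name': <class 'str'>, 'return': <class 'int'>}
```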
Because this PEP makes semantic changes to how annotations are evaluated, this PEP will be initially gated with a per-module ``from __future__ import co_annotations`` before it eventually becomes the default behavior.
Is it safe to assume that a module that does not import co_annotations, but imports a module that does, will exhibit PEP 649 behavior when the former accesses an annotation defined in the latter?
Yes.
* *Code that sets annotations on module or class attributes from inside any kind of flow control statement.* It's currently possible to set module and class attributes with annotations inside an ``if`` or ``try`` statement, and it works as one would expect. It's untenable to support this behavior when this PEP is active.
Is the following an example of the above?
    @dataclass
    class Foo:
        if some_condition:
            x: int
        else:
            x: float

If so, would the following still be valid?

    if some_condition:
        type_ = int
    else:
        type_ = float

    @dataclass
    class Foo:
        x: type_

Your example was valid, and I think your workaround should be fine. Do you have a use case for this, or is this question motivated purely by curiosity?

* *Code in module or class scope that references or modifies the local* ``__annotations__`` *dict directly.* Currently, when setting annotations on module or class attributes, the generated code simply creates a local ``__annotations__`` dict, then sets mappings in it as needed. It's also possible for user code to directly modify this dict, though this doesn't seem like it's an intentional feature. Although it would be possible to support this after a fashion when this PEP was active, the semantics would likely be surprising and wouldn't make anyone happy.

I recognize the point you make later about its impact on static type checkers. Setting that aside, I'm wondering about cases where annotations can be dynamically generated, such as dataclasses.make_dataclass(...). And I could see reasons for overwriting values in __annotations__, especially in the case where it may be stored as a string and one wants to later affix its evaluated value. These are considerations specific to runtime (dynamic) type checking.

It's fine to modify the __annotations__ dict after the creation of the class or module. It's code that modifies ``__annotations__`` from within the class or module that is disallowed here. Similarly for dataclasses; once it creates a class object, it can explicitly set and / or modify the annotations dict on that class.
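A minimal illustration of the distinction being drawn here (my own example, not from the PEP): touching ``__annotations__`` *inside* a class body is what the PEP disallows, while modifying it after the class object exists remains fine, e.g. to replace a string annotation with its evaluated value:

```python
from dataclasses import dataclass

@dataclass
class Foo:
    x: "int"   # annotation stored as a string

# Allowed: modifying __annotations__ *after* class creation,
# here by evaluating the stringized annotation in place.
Foo.__annotations__["x"] = eval(Foo.__annotations__["x"])
print(Foo.__annotations__)  # {'x': <class 'int'>}
```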
I wonder if it would make sense for each item in __annotations__ to be evaluated separately on first access /of each key/, rather than all __annotations__ on first access to the dict. Basically the dict would act as a LazyDict. It could also provide the benefit of lessening the expense of evaluating complex but otherwise unused annotations.
This would cause an immense proliferation of code objects (with some pre-bound to function objects). Rather than one code object per annotation dict, it would create one code object per annotation key. Also, we don't have a "lazy dict" object built in to Python, so we'd have to create one.

I don't have any problems that this would solve, so I'm not super interested in it. Personally I'd want to see a real compelling use case for this feature before I'd consider adding it to Python. Of course, I'm not on the steering committee, so my opinion is only worth so much.

/arry