[Python-Dev] PEP 563: Postponed Evaluation of Annotations

Lukasz Langa lukasz at langa.pl
Sun Nov 5 23:40:00 EST 2017


> On 4 Nov, 2017, at 6:32 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> 
> The PEP's current attitude towards this is "Yes, it will break, but
> that's OK, because it doesn't matter for the type annotation use case,
> since static analysers will still understand it". Adopting such a
> cavalier approach towards backwards compatibility with behaviour that
> has been supported since Python 3.0 *isn't OK*, since it would mean we
> were taking the step from "type annotations are the primary use case"
> to "Other use cases for function annotations are no longer supported".

Well, this is what the PEP literally says in "Deprecation policy":

> In Python 4.0 this will become the default behavior. Use of annotations incompatible with this PEP is no longer supported.

The rationale here is that type annotations as defined by PEP 484 and others are the only notable use case. Note that "type annotations" includes things like data classes, auto_attribs in attrs, the dependency injection frameworks mentioned before, etc. Those are compatible with PEP 484. So, despite the open nature of annotations since Python 3.0, no alternative use case has emerged that requires eager evaluation and access to local state. PEP 563 addresses the pragmatic issue of improving the usability of type annotations, instead of worrying about some unknown, theoretically possible use case.

While function annotations were open to arbitrary use, typing was increasingly hinted at (pun not intended) as *the* use case for them:

1. From Day 1, type checking is listed as the number one intended use case in PEP 3107 (and most others listed there are essentially type annotations by any other name).
2. PEP 484 says "We do hope that type hints will eventually become the sole use for annotations", and that "In order for maximal compatibility with offline type checking it may eventually be a good idea to change interfaces that rely on annotations to switch to a different mechanism, for example a decorator."
3. Variable annotations in PEP 526 were designed with type annotations as the sole stated purpose.

PEP 563 simply brings this multi-PEP dance to its logical conclusion, stating in "Rationale and Goals" that "uses for annotations incompatible with the aforementioned PEPs should be considered deprecated." The timeline for full deprecation is Python 4.0.
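To make the proposed semantics concrete, here is a minimal sketch using the `from __future__ import annotations` switch that PEP 563 proposes (the class and function names are illustrative): annotations are stored as strings and only evaluated when a consumer such as typing.get_type_hints() asks for them.

```python
from __future__ import annotations  # the switch PEP 563 proposes

import typing


class User:
    friend: User  # forward self-reference; would raise NameError under eager evaluation


def greet(u: User) -> str:
    return "hi"


# The raw annotation is the source text, not the evaluated object...
assert greet.__annotations__["u"] == "User"

# ...until a consumer resolves it on demand.
assert typing.get_type_hints(greet)["u"] is User
```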


> The only workaround I can see for that breakage is that instead of
> using strings, we could instead define a new "thunk" type that
> consists of two things:
> 
> 1. A code object to be run with eval()
> 2. A dictionary mapping from variable names to closure cells (or None
> for not yet resolved references to globals and builtins)

This is intriguing.

1. Would that only be used for type annotations? Any other interesting things we could do with them?
2. It feels to me like that would make annotations *heavier* at runtime instead of leaner, since now we're forcing the relevant closures to stay in memory.
3. This form of lazy evaluation seems pretty implicit from the reader's perspective. Peter Ludemann's example of a magic logging.debug() is a case in point here.
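To make point 2 concrete, here is a toy model of the thunk idea — not the proposed API — with a plain dict standing in for closure cells; note how the captured namespace keeps every referenced object alive:

```python
import typing


class Thunk:
    """Toy stand-in: a compiled expression plus its free names."""

    def __init__(self, source, namespace):
        self.code = compile(source, "<annotation>", "eval")
        # In the real proposal these would be closure cells; a plain dict
        # keeps the referenced objects alive just the same (point 2 above).
        self.namespace = namespace

    def resolve(self):
        # Evaluated only on demand, with lookups going to the captured names.
        return eval(self.code, {"__builtins__": {}}, self.namespace)


# An annotation like Optional[int], captured lazily:
t = Thunk("Optional[int]", {"Optional": typing.Optional, "int": int})
assert t.resolve() == typing.Optional[int]
```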

All in all, unless somebody else is ready to step up and write the PEP on this subject (and its implementation) right now, I think this idea will miss Python 3.7.


> Now, even without the introduction of the IndirectAttributeCell
> concept, this is amenable to a pretty simple workaround:
> 
>        A = Optional[int]
>        class C:
>            field: A = 1
>            def method(self, arg: A) -> None: ...
>        C.A = A
>        del A

This is a poor workaround, worse in fact than using a string literal as a forward reference: it is more verbose and error-prone. Decorators replaced exactly this kind of after-the-fact rebinding (method = decorator(method)), and their wild popularity suggests that this notation is inferior.
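For comparison, a string forward reference lets the alias stay at class scope without the reattach-and-del dance. A minimal sketch (spelling the reference as "C.A" is illustrative; static checkers may still prefer the alias at module level):

```python
from typing import Optional, get_type_hints


class C:
    A = Optional[int]  # the alias stays where it belongs
    field: "C.A" = 1

    def method(self, arg: "C.A") -> None: ...


# Resolved lazily, once C exists at module level:
assert get_type_hints(C)["field"] == Optional[int]
assert get_type_hints(C.method)["arg"] == Optional[int]
```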


> But I genuinely can't see how breaking annotation evaluation at class
> scope can be seen as a deal-breaker for the implicit lambda based
> approach without breaking annotation evaluation for nested functions
> also being seen as a deal-breaker for the string based approach.

The main reason to use type annotations is readability, just like decorators. While there's nothing stopping the programmer from writing:

class C:
    def method(self, arg1): ...
    method.__annotations__ = {'arg1': str, 'return': int}
C.__annotations__ = {'attribute1': ...}

...this notation doesn't fit the bill. Since nested classes and types embedded as class attributes are popular among type hinting users, supporting this case is a no-brainer. On the other hand, if you have a factory function that generates some class or function, then you either:

1. Use annotations in the generated class/function for type checking; OR
2. Add annotations in the generated class/function for them to be preserved in __annotations__ for some future runtime use.

In the former case, you are unlikely to use local state. But even if you were, that doesn't matter since the static type checker doesn't resolve your annotation at runtime.
In the latter case, you are free to assign __annotations__ directly since clearly readability of the factory code isn't your goal, but rather the functionality that runtime annotations provide.
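A sketch of the second case — make_getter and its names are hypothetical — where a factory attaches __annotations__ directly for runtime consumers, with no reliance on eager evaluation of local state:

```python
def make_getter(key, value_type):
    """Hypothetical factory: build a getter and annotate it for runtime use."""

    def getter(record):
        return record[key]

    # Attach annotations explicitly; no closure over local state is needed
    # for them to be preserved in __annotations__.
    getter.__annotations__ = {"record": dict, "return": value_type}
    return getter


get_name = make_getter("name", str)
assert get_name({"name": "Guido"}) == "Guido"
assert get_name.__annotations__["return"] is str
```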

I was pretty careful surveying existing use cases, looking for things that would be made impossible by PEP 563. The backwards incompatibility it introduces requires source changes, but I couldn't find situations where there would be an irrecoverable loss of functionality.

- Ł
