[Python-Dev] PEP 563: Postponed Evaluation of Annotations

Steven D'Aprano steve at pearwood.info
Thu Nov 2 11:45:16 EDT 2017

On Wed, Nov 01, 2017 at 03:48:00PM -0700, Lukasz Langa wrote:

> PEP: 563
> Title: Postponed Evaluation of Annotations

> This PEP proposes changing function annotations and variable annotations
> so that they are no longer evaluated at function definition time.
> Instead, they are preserved in ``__annotations__`` in string form.

This means that now *all* annotations, not just forward references, are 
no longer validated at runtime, and will allow arbitrary typos and 
errors to pass silently:

def spam(n:itn):  # now valid

Up to now, it has been only forward references that were vulnerable to 
that sort of thing. Of course running a type checker should pick those 
errors up, but the evaluation of annotations ensures that they are 
actually valid (not necessarily correct, but at least a valid name), 
even if you happen to not be running a type checker. That's useful.

Are we happy to live with that change?
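To make the concern concrete, here is a sketch of the behaviour the PEP 
proposes, using the ``from __future__ import annotations`` form that the 
PEP suggests for the transition period. The typo'd name raises nothing 
at definition time and only surfaces if something later evaluates it:

```python
# Opt in to the PEP's proposed semantics (per-module future import):
from __future__ import annotations

def spam(n: itn):  # typo for "int", but no NameError at definition time
    return n

# The annotation survives as the literal string 'itn'; the typo is only
# caught if and when the annotation is evaluated:
assert spam.__annotations__['n'] == 'itn'
try:
    eval(spam.__annotations__['n'])
except NameError:
    pass  # this is where the typo finally shows up
```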

> Rationale and Goals
> ===================
> PEP 3107 added support for arbitrary annotations on parts of a function
> definition.  Just like default values, annotations are evaluated at
> function definition time.  This creates a number of issues for the type
> hinting use case:
> * forward references: when a type hint contains names that have not been
>   defined yet, that definition needs to be expressed as a string
>   literal;

After all the discussion, I still don't see why this is an issue. 
Strings make perfectly fine forward references. What is the problem 
that needs solving? Is this about people not wanting to type the leading 
and trailing ' around forward references?

> * type hints are executed at module import time, which is not
>   computationally free.

True; but is that really a performance bottleneck? If it is, the PEP 
should say so, and state what typical performance improvement this 
change is expected to give.

After all, if we're going to break people's code in order to improve 
performance, we should at least be sure that it improves performance :-)

> Postponing the evaluation of annotations solves both problems.

Actually it doesn't. As your PEP says later:

> This PEP is meant to solve the problem of forward references in type
> annotations.  There are still cases outside of annotations where
> forward references will require usage of string literals.  Those are
> listed in a later section of this document.

So the primary problem this PEP is designed to solve, isn't actually 
solved by this PEP.

(See Guido's comments, quoted later.)

> Implementation
> ==============
> In Python 4.0, function and variable annotations will no longer be
> evaluated at definition time.  Instead, a string form will be preserved
> in the respective ``__annotations__`` dictionary.  Static type checkers
> will see no difference in behavior,

Static checkers don't see __annotations__ at all, since that's not 
available at edit/compile time. Static checkers see only the source 
code. The checker (and the human reader!) will no longer have the useful 
clue that something is a forward reference:

    # before
    class C:
        def method(self, other:'C'):
            ...

since the quotes around C will be redundant and almost certainly left 
out. And if they aren't left out, then what are we to make of the 
annotation? Will the quotes be stripped out, or left in?

In other words, will method's __annotations__ contain 'C' or "'C'"? That 
will make a difference when the type hint is eval'ed.
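For what it's worth, a quick experiment under the proposed future 
import suggests the answer: the source text is preserved verbatim, 
quotes and all, and it is ``typing.get_type_hints()`` that is expected 
to peel away the extra layer of quoting:

```python
from __future__ import annotations
import typing

class C:
    def quoted(self, other: 'C'): ...   # old-style forward reference
    def bare(self, other: C): ...       # no quotes needed any more

# The stored strings differ -- the quotes are kept verbatim:
assert C.quoted.__annotations__['other'] == "'C'"
assert C.bare.__annotations__['other'] == "C"

# get_type_hints() resolves both spellings to the class itself:
assert typing.get_type_hints(C.quoted)['other'] is C
assert typing.get_type_hints(C.bare)['other'] is C
```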

> If an annotation was already a string, this string is preserved
> verbatim.

That's ambiguous. See above.

> Annotations can only use names present in the module scope as postponed
> evaluation using local names is not reliable (with the sole exception of
> class-level names resolved by ``typing.get_type_hints()``).

Even if you call get_type_hints from inside the function defining the 
local names?

def function():
    A = something()
    def inner(x:A)->int:
        ...
    d = typing.get_type_hints(inner)
    return (d, inner)

I would expect that should work. Will it?
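Trying it out (again with the proposed future import), the answer seems 
to be: not by default, but it does work if the caller passes the 
enclosing locals in explicitly, since a local name like A below is not 
in ``inner.__globals__``:

```python
from __future__ import annotations
import typing

def function():
    A = int                      # stands in for "A = something()"
    def inner(x: A) -> int:
        return x
    try:
        typing.get_type_hints(inner)   # A is not a module-level name
        raised = False
    except NameError:
        raised = True
    # Passing the enclosing locals explicitly does resolve it:
    hints = typing.get_type_hints(inner, localns=locals())
    return raised, hints

raised, hints = function()
assert raised                    # the bare call fails with NameError
assert hints == {'x': int, 'return': int}
```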

> For code which uses annotations for other purposes, a regular
> ``eval(ann, globals, locals)`` call is enough to resolve the
> annotation.

Let's just hope nobody doing that has allowed any tainted strings to 
be stuffed into __annotations__.
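Concretely, the kind of call the PEP has in mind looks like this (a 
minimal sketch; note that ``eval()`` will happily execute whatever 
string has been placed in ``__annotations__``, which is exactly the 
tainting worry above):

```python
from __future__ import annotations

def greet(name: str) -> str:
    return 'hello ' + name

# Each annotation is now a string; resolving it is an explicit eval():
ann = greet.__annotations__['name']
assert ann == 'str'
assert eval(ann, greet.__globals__, None) is str
```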

> * modules should use their own ``__dict__``.

Which is better written as ``vars()`` with no argument, I believe. Or 
possibly ``globals()``.

> If a function generates a class or a function with annotations that
> have to use local variables, it can populate the given generated
> object's ``__annotations__`` dictionary directly, without relying on
> the compiler.

I don't understand this paragraph.

> The biggest controversy on the issue was Guido van Rossum's concern
> that untokenizing annotation expressions back to their string form has
> no precedent in the Python programming language and feels like a hacky
> workaround.  He said:
>     One thing that comes to mind is that it's a very random change to
>     the language.  It might be useful to have a more compact way to
>     indicate deferred execution of expressions (using less syntax than
>     ``lambda:``).  But why would the use case of type annotations be so
>     all-important to change the language to do it there first (rather
>     than proposing a more general solution), given that there's already
>     a solution for this particular use case that requires very minimal
>     syntax?

I agree with Guido's concern here. A more general solution would 
(hopefully!) be like a thunk, and might allow some interesting 
techniques unrelated to type checking. Just off the top of my head, 
say, late binding of default values (without needing the 
"if arg is None: arg = []" idiom).
> A few people voiced concerns that there are libraries using annotations
> for non-typing purposes.  However, none of the named libraries would be
> invalidated by this PEP.  They do require adapting to the new
> requirement to call ``eval()`` on the annotation with the correct
> ``globals`` and ``locals`` set.

Since this is likely to be a common task for such libraries, can we have 
an evaluate_annotations() function to do this, rather than have everyone 
reinvent the wheel?

def func(arg:int):
    ...

evaluate_annotations(func)
assert func.__annotations__['arg'] is int

It could be a decorator, as well as modifying __annotations__ in place.

I imagine something with a signature like this:

def evaluate_annotations(
        obj:Union[Function, Class],
        globals:Optional[dict]=None,
        locals:Optional[dict]=None,
        )->Union[Function, Class]:
    """Evaluate the __annotations__ of a function, or recursively
    a class and all its methods. Replace the __annotations__ in
    place. Returns the modified argument, making this suitable as
    a decorator.

    If globals is not given, it is taken from the function.__globals__
    or class.__module__ if available. If locals is not given, it
    defaults to the current locals.
    """
    ...
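To make the idea concrete, here is a minimal working sketch of such a 
helper, covering only the function case (the recursive class handling 
and the default-locals behaviour described above are left out; the name 
evaluate_annotations is my invention, not an existing typing function):

```python
from __future__ import annotations

def evaluate_annotations(obj, globals=None, locals=None):
    """Evaluate string annotations in place and return the modified
    object, so the helper can double as a decorator."""
    if globals is None:
        globals = getattr(obj, '__globals__', {})
    ann = obj.__annotations__
    for name, value in ann.items():
        if isinstance(value, str):
            ann[name] = eval(value, globals, locals)
    return obj

@evaluate_annotations
def func(arg: int) -> int:
    return arg

# The stringified annotations have been evaluated back to real objects:
assert func.__annotations__['arg'] is int
assert func.__annotations__['return'] is int
```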

