[Python-Dev] PEP 563: Postponed Evaluation of Annotations

Brett Cannon brett at python.org
Thu Nov 2 13:00:19 EDT 2017


On Thu, 2 Nov 2017 at 08:46 Steven D'Aprano <steve at pearwood.info> wrote:

> On Wed, Nov 01, 2017 at 03:48:00PM -0700, Lukasz Langa wrote:
>
> > PEP: 563
> > Title: Postponed Evaluation of Annotations
>
> > This PEP proposes changing function annotations and variable annotations
> > so that they are no longer evaluated at function definition time.
> > Instead, they are preserved in ``__annotations__`` in string form.
>
> This means that now *all* annotations, not just forward references, are
> no longer validated at runtime and will allow arbitrary typos and
> errors:
>
> def spam(n:itn):  # now valid
>     ...
>
> Up to now, it has been only forward references that were vulnerable to
> that sort of thing. Of course running a type checker should pick those
> errors up, but the evaluation of annotations ensures that they are
> actually valid (not necessarily correct, but at least a valid name),
> even if you happen to not be running a type checker. That's useful.
>
> Are we happy to live with that change?
>

I would say "yes" for two reasons. One, if you're bothering to provide type
hints then you should be testing those type hints. So as you pointed out,
Steve, that will be caught at that point.
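
For example (a rough sketch, assuming the PEP's deferred evaluation), a type
checker, or even a plain typing.get_type_hints() call, surfaces the typo as
soon as the stored string is evaluated:

    import typing

    def spam(n: itn):   # typo for 'int'; no longer an error at definition time
        ...

    typing.get_type_hints(spam)   # raises NameError: name 'itn' is not defined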

Two, code editors with auto-completion will help prevent this kind of typo in
the first place. Now, I would never suggest that we design Python around
expectations of what sort of tooling people have available, but in this
instance it does help.
It also feeds into a question you ask below...


>
>
> > Rationale and Goals
> > ===================
> >
> > PEP 3107 added support for arbitrary annotations on parts of a function
> > definition.  Just like default values, annotations are evaluated at
> > function definition time.  This creates a number of issues for the type
> > hinting use case:
> >
> > * forward references: when a type hint contains names that have not been
> >   defined yet, that definition needs to be expressed as a string
> >   literal;
>
> After all the discussion, I still don't see why this is an issue.
> Strings make perfectly fine forward references. What is the problem
> that needs solving? Is this about people not wanting to type the leading
> and trailing ' around forward references?
>

I think it's mainly about the next point you ask about...


>
>
> > * type hints are executed at module import time, which is not
> >   computationally free.
>
> True; but is that really a performance bottleneck? If it is, that should
> be stated in the PEP, along with the typical performance improvement
> this change is expected to give.
>
> After all, if we're going to break people's code in order to improve
> performance, we should at least be sure that it improves performance :-)
>

Constructing some of the objects used as type hints can be quite costly,
which makes importing noticeably more expensive (Lukasz has pointed this out
previously, as has Inada-san). By making Python itself not construct objects
from e.g. the 'typing' module at runtime, you don't pay a runtime penalty for
something you're almost never going to use at runtime anyway.
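
To make that concrete, here is a rough sketch of what the PEP's behaviour
means for a module-level function (the exact string form is whatever the
compiler preserves, so treat the details as illustrative):

    from typing import Dict, List, Tuple

    def load(rows: List[Dict[str, Tuple[int, ...]]]) -> None:
        ...

    # Today, the nested typing objects above are constructed at import time.
    # With postponed evaluation nothing is constructed; roughly:
    # load.__annotations__['rows'] == 'List[Dict[str, Tuple[int, ...]]]'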


>
>
> > Postponing the evaluation of annotations solves both problems.
>
> Actually it doesn't. As your PEP says later:
>
> > This PEP is meant to solve the problem of forward references in type
> > annotations.  There are still cases outside of annotations where
> > forward references will require usage of string literals.  Those are
> > listed in a later section of this document.
>
> So the primary problem this PEP is designed to solve isn't actually
> solved by this PEP.
>

I think the performance bit is really the big deal here.

And as I mentioned earlier, if you turn all of your type hints into strings
yourself, you lose auto-completion/intellisense, which is a shame.

I think there's also a benefit here in promoting the fact that type hints are
not a runtime thing but a static analysis thing. By requiring an extra step
to convert from a string to an actual object, it helps get the point across
that type hints are just bits of metadata for tooling, not something you're
expected to interact with at runtime unless you have a really good reason to.
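
In practice that extra step is usually just typing.get_type_hints(), which
evaluates the stored strings on demand. A sketch of what runtime
introspection would look like under the PEP:

    import typing

    def greet(name: str) -> str:
        return 'hello ' + name

    hints = typing.get_type_hints(greet)
    # {'name': <class 'str'>, 'return': <class 'str'>}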

So I'm +1 on the idea, but the name of the proposed __future__ import is a
bit too generic for me. I would prefer something like `from __future__
import annotation_strings` or `annotations_as_strings`.

-Brett


>
> (See Guido's comments, quoted later.)
>
>
>
> > Implementation
> > ==============
> >
> > In Python 4.0, function and variable annotations will no longer be
> > evaluated at definition time.  Instead, a string form will be preserved
> > in the respective ``__annotations__`` dictionary.  Static type checkers
> > will see no difference in behavior,
>
> Static checkers don't see __annotations__ at all, since that's not
> available at edit/compile time. Static checkers see only the source
> code. The checker (and the human reader!) will no longer have the useful
> clue that something is a forward reference:
>
>     # before
>     class C:
>         def method(self, other:'C'):
>             ...
>
> since the quotes around C will be redundant and almost certainly left
> out. And if they aren't left out, then what are we to make of the
> annotation? Will the quotes be stripped out, or left in?
>
> In other words, will method's __annotations__ contain 'C' or "'C'"? That
> will make a difference when the type hint is eval'ed.
>
>
> > If an annotation was already a string, this string is preserved
> > verbatim.
>
> That's ambiguous. See above.
>
>
> > Annotations can only use names present in the module scope as postponed
> > evaluation using local names is not reliable (with the sole exception of
> > class-level names resolved by ``typing.get_type_hints()``).
>
> Even if you call get_type_hints from inside the function defining the
> local names?
>
> def function():
>     A = something()
>     def inner(x:A)->int:
>         ...
>     d = typing.get_type_hints(inner)
>     return (d, inner)
>
> I would expect that should work. Will it?
>
>
> > For code which uses annotations for other purposes, a regular
> > ``eval(ann, globals, locals)`` call is enough to resolve the
> > annotation.
>
> Let's just hope nobody doing that has allowed any tainted strings to
> be stuffed into __annotations__.
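>
> For the record, the pattern the PEP seems to have in mind is roughly
> this (a sketch, assuming the string form described above):
>
> def func(arg: int):
>     ...
>
> hint = eval(func.__annotations__['arg'], func.__globals__)
> assert hint is int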
>
>
> > * modules should use their own ``__dict__``.
>
> Which is better written as ``vars()`` with no argument, I believe. Or
> possibly ``globals()``.
>
>
> > If a function generates a class or a function with annotations that
> > have to use local variables, it can populate the given generated
> > object's ``__annotations__`` dictionary directly, without relying on
> > the compiler.
>
> I don't understand this paragraph.
>
>
> > The biggest controversy on the issue was Guido van Rossum's concern
> > that untokenizing annotation expressions back to their string form has
> > no precedent in the Python programming language and feels like a hacky
> > workaround.  He said:
> >
> >     One thing that comes to mind is that it's a very random change to
> >     the language.  It might be useful to have a more compact way to
> >     indicate deferred execution of expressions (using less syntax than
> >     ``lambda:``).  But why would the use case of type annotations be so
> >     all-important to change the language to do it there first (rather
> >     than proposing a more general solution), given that there's already
> >     a solution for this particular use case that requires very minimal
> >     syntax?
>
> I agree with Guido's concern here. A more general solution would
> (hopefully!) be like a thunk, and might allow some interesting
> techniques unrelated to type checking. Just off the top of my head, say,
> late binding of default values (without the "if arg is None: arg = []"
> trick).
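>
> For reference, that trick is the usual workaround for mutable defaults,
> which a thunk-like deferred expression could make unnecessary:
>
> def append_to(item, target=None):
>     if target is None:
>         target = []   # late-bound: a fresh list on every call
>     target.append(item)
>     return target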
>
>
> > A few people voiced concerns that there are libraries using annotations
> > for non-typing purposes.  However, none of the named libraries would be
> > invalidated by this PEP.  They do require adapting to the new
> > requirement to call ``eval()`` on the annotation with the correct
> > ``globals`` and ``locals`` set.
>
> Since this is likely to be a common task for such libraries, can we have
> an evaluate_annotations() function to do this, rather than have everyone
> reinvent the wheel?
>
> def func(arg:int):
>     ...
>
> evaluate_annotations(func)
> assert func.__annotations__['arg'] is int
>
> It could be a decorator, as well as modifying __annotations__ in place.
>
> I imagine something with a signature like this:
>
> def evaluate_annotations(
>         obj:Union[Function, Class],
>         globals:Dict=None,
>         locals:Dict=None
>         )->Union[Function, Class]:
>     """Evaluate the __annotations__ of a function, or recursively
>     a class and all its methods. Replace the __annotations__ in
>     place. Returns the modified argument, making this suitable as
>     a decorator.
>
>     If globals is not given, it is taken from the function.__globals__
>     or class.__module__ if available. If locals is not given, it
>     defaults to the current locals.
>     """
>
>
> --
> Steve