You're right, losing visibility into
the local scope (and outer function scopes) is part of why I
suggest the behavior be compile-time selectable. The pro-PEP-563
crowd doesn't seem to care that 563 costs them visibility into
anything but global scope; accepting this loss of visibility is
part of the bargain of enabling the behavior. But people who
don't need the runtime behavior of 563 don't have to live with it.
As to only offering marginal benefit
beyond typing.get_type_hints()--I think the benefit is larger than
that. I realize now I should have gone into this topic in the
original post; sorry, I kind of rushed through that. Let me fix
that here.
One reason you might not want to use
typing.get_type_hints() is that it doesn't return annotations
generally; it specifically returns type hints. This is
more opinionated than just returning the annotations, e.g.
- None is changed to type(None).
- Values are wrapped with Optional[] sometimes.
- String annotations are wrapped with ForwardRef().
- If __no_type_check__ is set on the object, it ignores the
annotations and returns an empty dict.
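The first difference in that list is easy to demonstrate; a minimal
sketch comparing the raw annotations dict with what
typing.get_type_hints() hands back:

```python
import typing

def f() -> None: ...

# The raw annotations dict keeps None exactly as written...
assert f.__annotations__ == {"return": None}

# ...but get_type_hints() substitutes type(None) for it:
assert typing.get_type_hints(f) == {"return": type(None)}
```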
I've already proposed addressing this
for Python 3.10 by adding a new function to the standard library,
probably to be called inspect.get_annotations().
But even if you use this new function,
there's still some murky ambiguity.
Let's say you're using Python 3.9, and
you've written a library function that examines annotations.
(Again, specifically: annotations, not type hints.) And let's say
the annotations dict contains one value, and it's the string
"34". What should you do with it?
If the module that defined it imported
"from __future__ import annotations", then the actual desired
value of the annotation was the integer 34, so you should eval()
it. But if the module that defined it didn't import that
behavior, then the user probably wanted the string "34". How can
you tell what the user intended?
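The ambiguity is easy to reproduce. Here's a sketch: two modules
whose annotations dicts come out identical, even though one author
meant the integer 34 and the other meant the string "34":

```python
# Module compiled with PEP 563 stringization: the annotation is
# the expression 34, stored as its source text, the string "34".
src_future = (
    "from __future__ import annotations\n"
    "def f(x: 34): ...\n"
)
# Module with stock semantics: the author wrote a string literal.
src_stock = "def f(x: '34'): ...\n"

ns_future, ns_stock = {}, {}
exec(src_future, ns_future)
exec(src_stock, ns_stock)

# From the consumer's point of view, the two are indistinguishable:
assert ns_future["f"].__annotations__ == {"x": "34"}
assert ns_stock["f"].__annotations__ == {"x": "34"}
```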
I think the only actual way to solve it
would be to go rooting around in the module to see if you can find
the future object. It's probably called "annotations". But it is
possible to compile with that behavior without the object being
visible--it could have been renamed, or the module could have
deleted it. Though these are admittedly unlikely.
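That "rooting around" heuristic might look like this sketch
(looks_stringized is an illustrative name, not an existing API), with
exactly the fragility described: rename or delete the binding and it
gives the wrong answer:

```python
import __future__

def looks_stringized(module):
    # Fragile heuristic: a "from __future__ import annotations"
    # statement normally leaves the _Feature object bound in the
    # module namespace under the name "annotations". This check
    # fails if that binding was renamed or deleted after import.
    return getattr(module, "annotations", None) is __future__.annotations
```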
By storing stringized annotations in
"o.__str_annotations__", we remove this ambiguity. Now we know
for certain that these annotations were stringized and we should
eval() them. And if a string shows up in "o.__annotations__" we
know we should leave it alone.
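Library code could then disambiguate mechanically. A sketch, with the
caveat that "__str_annotations__" is the proposed attribute and not a
real Python feature, and resolve_annotations is a hypothetical helper:

```python
def resolve_annotations(o):
    str_ann = getattr(o, "__str_annotations__", None)
    if str_ann is not None:
        # These were stringized by the compiler, so eval() them
        # against the object's globals (per the proposal).
        g = getattr(o, "__globals__", {})
        return {name: eval(expr, dict(g)) for name, expr in str_ann.items()}
    # Otherwise any string in __annotations__ is a real string
    # annotation: leave it alone.
    return dict(getattr(o, "__annotations__", {}))
```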
Of course, by making the language do
the eval() on the strings, we abstract away the behavior
completely. Now library code doesn't need to be aware if the
module had stringized annotations, or PEP-649-style delayed
annotations, or "stock" semantics. Accessing "o.__annotations__"
always gets you the real annotation values, every time.
Cheers,
/arry
On 4/18/21 7:06 AM, Jelle Zijlstra
wrote:
The heart of the debate between PEPs 563 and 649 is the
question: what should an annotation be? Should it be a
string or a Python value? It seems people who are
pro-PEP 563 want it to be a string, and people who are
pro-PEP 649 want it to be a value.
Actually, let me amend that slightly. Most people who
are pro-PEP 563 don't actually care that annotations are
strings, per se. What they want are specific runtime
behaviors, and they get those behaviors when PEP 563
turns their annotations into strings.
I have an idea--a rough proposal--on how we can mix
together aspects of PEP 563 and PEP 649. I think it
satisfies everyone's use cases for both PEPs. The
behavior it gets us:
- annotations can be stored as strings
- annotations stored as strings can be examined as
strings
- annotations can be examined as values
The idea:
We add a new type of compile-time flag, akin to a "from
__future__" import, but not from the future. Let's not
call it "from __present__", for now how about "from
__behavior__".
In this specific case, we call it "from __behavior__
import str_annotations". It behaves much like Python
3.9 does when you say "from __future__ import
annotations", except: it stores the dictionary with
stringized values in a new member on the
function/class/module called "__str_annotations__".
If an object "o" has "__str_annotations__", set, you
can access it and see the stringized values.
If you access "o.__annotations__", and the object has
"o.__str_annotations__" set but "o.__annotations__" is
not set, it builds (and caches) a new dict by iterating
over o.__str_annotations__, calling eval() on each value
in "o.__str_annotations__". It gets the globals() dict
the same way that PEP 649 does (including, if you
compile a class with str_annotations, it sets
__globals__ on the class). It does not unset
"o.__str_annotations__" unless someone explicitly sets
"o.__annotations__". This is so you can write your code
assuming that "o.__str_annotations__" is set, and it
doesn't explode if somebody somewhere ever looks at
"o.__annotations__". (This could lead to them getting
out of sync, if someone modified "o.__annotations__".
But I suspect practicality beats purity here.)
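[Editor's sketch: the build-and-cache behavior described above can be
modeled in pure Python. Real support would live in the interpreter,
and "__str_annotations__" is the proposed attribute, not an existing
one.]

```python
class Proposed:
    """Rough model of an object with proposed lazy annotations."""

    def __init__(self, str_annotations, globals_ns):
        self.__str_annotations__ = str_annotations
        self._globals = globals_ns

    def __getattr__(self, name):
        # Called only when normal lookup fails, i.e. before
        # __annotations__ has been built and cached.
        if name == "__annotations__":
            ann = {k: eval(v, self._globals)
                   for k, v in self.__str_annotations__.items()}
            # Cache the result; __str_annotations__ stays set,
            # matching the proposal.
            self.__annotations__ = ann
            return ann
        raise AttributeError(name)
```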
How would this work with annotations that access a local
scope?
    def f():
        class X: pass
        def inner() -> X: pass
        return inner

    f().__annotations__
From your description it sounds like it would fail, just
like calling typing.get_type_hints() would fail on it today.
If so, I don't see this as much better than the current
situation: all it does is provide a builtin way of calling
get_type_hints().