On 1/18/21 2:39 PM, Guido van Rossum wrote:
> Hm. It's unfortunate that this would break code using what is *currently* the best practice.

I can't figure out how to avoid it.  The problem is that current best practice sidesteps the class and goes straight to its __dict__.  How do we intercept that access and run the code that lazily computes the annotations?
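
For anyone following along, the "best practice" in question looks roughly like this (my sketch, not code lifted from any particular project): peek in the class's own __dict__, so you don't accidentally pick up __annotations__ inherited from a base class.

    class Base:
        x: int

    class Derived(Base):
        pass

    def get_own_annotations(cls):
        # Only the annotations defined on cls itself.
        return cls.__dict__.get('__annotations__', {})

    print(get_own_annotations(Base))     # {'x': <class 'int'>}
    print(get_own_annotations(Derived))  # {} -- Derived defines none of its own
    print(Derived.__annotations__)       # {'x': <class 'int'>} -- inherited from Base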

I mean, let's consider something crazy.  What if we change cls.__dict__ from a normal dict to a special dict that handles the __co_annotations__ machinery?  That might work, except we literally allow users to supply their own cls.__dict__ via __prepare__.  So we can't rely on our special dict.
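
To make that concrete, here's a minimal (hypothetical) example of a metaclass handing the class machinery its own mapping via __prepare__; the names are made up for illustration.

    class LoggingDict(dict):
        def __setitem__(self, key, value):
            print(f"class body bound {key!r}")
            super().__setitem__(key, value)

    class Meta(type):
        @classmethod
        def __prepare__(mcls, name, bases, **kwds):
            return LoggingDict()

    class Example(metaclass=Meta):
        # Every name bound while this class body executes goes through
        # LoggingDict.__setitem__, not through a dict CPython controls.
        x: int = 0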

What if we change cls.__dict__ to a getset?  The user would still be allowed to set cls.__dict__, but when __dict__ is read, we'd wrap the actual internal dict object in a special object that intercepts accesses to __annotations__ and handles the __co_annotations__ mechanism.  That might work, but it's really crazy and unfortunate.  And it's remotely possible that a user might override __dict__ as a property in a way that breaks this mechanism too, so it's not guaranteed to always work.
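
Very roughly, a pure-Python analogue of that wrapping idea might look like the sketch below; the real thing would have to be implemented in C, and the class and parameter names here are invented.

    from collections.abc import Mapping

    class AnnotationsView(Mapping):
        # A view over the class's real dict that intercepts lookups of
        # '__annotations__' and lazily computes them on first access.

        def __init__(self, real_dict, compute_annotations):
            self._dict = real_dict
            self._compute = compute_annotations  # would call __co_annotations__

        def __getitem__(self, key):
            if key == '__annotations__' and key not in self._dict:
                self._dict[key] = self._compute()  # lazily materialize
            return self._dict[key]

        def __iter__(self):
            return iter(self._dict)

        def __len__(self):
            return len(self._dict)

    view = AnnotationsView({'x': 1}, lambda: {'x': int})
    print(view['__annotations__'])  # {'x': <class 'int'>}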

I'm not suggesting we should do these things; I'm just trying to illustrate how hard I think the problem is.  If someone has a good idea for how we can add the __co_annotations__ machinery without breaking current best practice, I'd love to hear it.


> Also, for functions and modules I would recommend `getattr(o, "__annotations__", None)` (perhaps with `or {}` added).

For functions you don't need to bother; fn.__annotations__ is guaranteed to always be set, and to be either a dict or None.  (Python will only ever set it to a dict, but the user is permitted to set it to None.)

I agree with your suggested best practice for modules as it stands today.
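
Put together, safe access today looks roughly like this (my sketch of the patterns discussed above, not an official recipe):

    import sys

    def function_annotations(fn):
        # Always present on functions; a dict unless the user explicitly
        # set it to None.
        return fn.__annotations__ or {}

    def module_annotations(mod):
        # Modules may lack __annotations__ entirely, so fall back to {}.
        return getattr(mod, '__annotations__', None) or {}

    def f(x: int) -> str: ...

    print(function_annotations(f))  # {'x': <class 'int'>, 'return': <class 'str'>}
    print(module_annotations(sys.modules[__name__]))  # this module's annotations, or {}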

And let me walk back something I've said before.  I believe I've said several times that "people treat classes and modules the same".  Actually, that's wrong.

So, for what it's worth, I literally have zero examples of people treating classes and modules the same when it comes to annotations.  Sorry for the confusion!


> I would also honestly discount what dataclasses.py and typing.py have to do. But what do 3rd party packages do when they don't want to use get_type_hints() and they want to get it right for classes? That would give an indication of how seriously we should take breaking current best practice.

I'm not sure how to figure that out.  Off the top of my head, the only current third-party packages I can think of that use annotations are mypy and attrs.  I took a quick look at mypy, but I can't figure out what it's doing.

attrs does something a little kooky.  It accesses __annotations__ using a function called _has_own_attributes(), which detects whether or not the object is inheriting an attribute.  But it doesn't peek in __dict__; instead, it walks the MRO and sees whether any of its base classes has the same (non-False) value for that attribute.

https://github.com/python-attrs/attrs/blob/a025629e36440dcc27aee0ee5b04d6523bcc9931/src/attr/_make.py#L343
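
Roughly, my paraphrase of that check (not the code at the link) is:

    def has_own_attribute(cls, attrib_name):
        # True if cls defines attrib_name itself rather than inheriting it.
        value = getattr(cls, attrib_name, None)
        if not value:
            return False
        for base in cls.__mro__[1:]:
            if getattr(base, attrib_name, None) is value:
                # The same value shows up on a base class -> inherited.
                return False
        return True

    def own_annotations(cls):
        return cls.__annotations__ if has_own_attribute(cls, '__annotations__') else {}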

Happily, that seems like it would continue to work even if PEP 649 is accepted.  That's good news!


Cheers,


/arry