Maybe we could specialize the heck out of this and not bother with a function object? In the end we want to execute the code; the function object is just a convenient way to bundle defaults, free variables (cells), and globals. But co_annotations has no arguments or defaults, and is only called once. It does need access to the globals of the definition site (the call site may be in another module), and sometimes there are cells (I'm not sure).
Yes, there are sometimes cells:

    def foo():
        my_type = int
        def bar(a: my_type):
            return a
        return bar
Both bar() and the co_annotations function for bar() are nested functions inside foo, and the latter has to refer to the cell for my_type. Also, co_annotations on a class may keep a reference to the locals dict, permitting annotations to refer to values defined in the class.
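The class case, as a minimal illustration of my own (hypothetical names):

    class Rect:
        Coord = int    # defined in the class body, not at module scope
        x: Coord       # evaluating this annotation needs the class namespace
        y: Coord

Here the co_annotations code for Rect can only resolve Coord if it can see the class's locals dict.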
I don't know if it's worth making a specialized object as you suggest. I'd forgotten I did this, but late in the development of the PEP 649 prototype, I changed it so the co_annotations function is always lazy-bound, only constructed on demand.
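A rough pure-Python rendering of that lazy binding, covering only the simplest case where a bare code object was stored (all the names here are my own invention; the prototype does this in C):

    import types

    class LazyAnnotationsDescriptor:
        # Sketch: the co_annotations function object is only constructed
        # the first time annotations are requested; it's called once and
        # the resulting dict is cached.
        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            if obj._ann_cache is None:
                fn = types.FunctionType(obj._ann_code, obj._ann_globals)
                obj._ann_cache = fn()   # call it once, keep the dict
            return obj._ann_cache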
There are three possible blobs of information the descriptor might need when binding the co_annotations function: the code object, the tuple of closure cells, and (for classes) the class namespace dict. The code object is always necessary; the other two are optional.
If we only need the code object, we store that code object inside the function object and we're done. If we need either or both of the other two blobs, we throw all the blobs we need into a tuple and store the tuple instead. When someone asks for the annotations on that object, the descriptor does PyType_ checks to determine which blobs of data it has, binds the function appropriately, calls it, and returns the result. I think this approach is reasonable, and I'm not sure what a custom callable object would get us.
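In pure Python, that dispatch might look something like this (the tuple layout and the particular type tests are my guesses at the scheme; the real version lives in C):

    import types

    def resolve_annotations(stored, globals_ns):
        # `stored` is either a bare code object, or a tuple bundling the
        # code object with the optional blobs (closure cells, class dict).
        closure = class_ns = None
        if isinstance(stored, types.CodeType):
            code = stored                     # common case: code object only
        else:
            code = stored[0]
            for blob in stored[1:]:
                if isinstance(blob, tuple):   # a tuple of cells: the closure
                    closure = blob
                elif isinstance(blob, dict):  # the class namespace
                    class_ns = blob
        fn = types.FunctionType(code, globals_ns, "__co_annotations__",
                                None, closure)
        # The class-namespace case needs plumbing not shown here: the code
        # would have to resolve names through class_ns first.
        return fn()

    # For example, with a stand-in code object:
    code = compile("{'a': int}", "<annotations>", "eval")
    print(resolve_annotations(code, {"int": int}))   # {'a': <class 'int'>}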
/arry