As it happens, I have a working prototype of lazy unmarshaling that would work well for this.

On Wed, Aug 11, 2021 at 06:07 Larry Hastings <larry@hastings.org> wrote:
On 8/11/21 5:21 AM, Inada Naoki wrote:
But memory footprint and GC time are still an issue.
Annotations under PEP 649 semantics can be much heavier than docstrings.


I'm convinced that, if we accept PEP 649 (or something like it), we can reduce its CPU and memory consumption.

Here's a slightly crazy idea I had this morning: what if we didn't unmarshal the code object for co_annotations during the initial import, but instead lazily loaded it on demand?  The annotated object would retain knowledge of which .pyc file to load, and the offset at which the co_annotations code object was stored.  (And, if the co_annotations function had to be a closure, a reference to the closure tuple.)  If the user requested __annotations__ (or __co_annotations__), the code would open the .pyc file, unmarshal the code object, bind it, etc.  Obviously this would only work for code loaded from .pyc (etc.) files.  To go even crazier, the runtime could keep an LRU cache of N (maybe == 1) open .pyc file handles as a speed optimization, perhaps closing them after some wall-clock timeout.
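
Just to make the idea concrete, here is a minimal sketch of what the lazy path might look like in pure Python.  All of the names here (LazyCoAnnotations, _open_pyc, the offset bookkeeping) are hypothetical, and a real implementation would live in C and handle .pyc header and validation details that this ignores:

    import functools
    import marshal
    import types


    @functools.lru_cache(maxsize=1)
    def _open_pyc(path):
        # Keep a tiny LRU cache of open .pyc handles, per the idea above.
        # (A real implementation would also need to close them eventually.)
        return open(path, "rb")


    class LazyCoAnnotations:
        """Defer unmarshaling a co_annotations code object until first access."""

        def __init__(self, pyc_path, offset, module_globals, closure=None):
            self._pyc_path = pyc_path      # which .pyc file to load from
            self._offset = offset          # byte offset of the marshaled code object
            self._globals = module_globals
            self._closure = closure        # closure tuple, if the function needs one

        def __call__(self):
            f = _open_pyc(self._pyc_path)
            f.seek(self._offset)
            code = marshal.load(f)         # unmarshal just this code object
            fn = types.FunctionType(code, self._globals, closure=self._closure)
            return fn()                    # returns the annotations dict

On first access to __annotations__, the descriptor could call one of these, cache the resulting dict, and drop the lazy object.
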

I doubt we'd do exactly this--it's easy to find problems with the approach.  But maybe this idea will lead to a better one?


/arry

--
--Guido (mobile)