Re: Additional LRU cache introspection facilities
I propose a method: ... returns a dictionary {arg: value} representing the cache. It wouldn't be the cache itself, just a shallow copy of the cache data.
I recommend against going down this path. It exposes (and potentially locks in) implementation details such as how we distinguish positional arguments, keyword arguments, and type information (something that has changed more than once). Also, a shallow copy still leaves plenty of room for meddling with the contents of the keys, potentially breaking the integrity of the cache.

Another concern is that we've worked hard to remove potential deadlocks from the lru_cache. Hanging on a lock while copying the whole cache complicates our efforts and risks breaking it as users exploit the new feature in unpredictable ways.

FWIW, OrderedDict provides methods that make it easy to roll your own variants of the lru_cache(). It would be better to do that than to complexify the base implementation in ways that I think we would regret.

Raymond
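For illustration, here is one way such a variant could be rolled with OrderedDict, as Raymond suggests. This is a minimal sketch, not the stdlib implementation: the decorator name `inspectable_lru_cache` and the `cache_items()` accessor are made up for this example, and it handles only hashable positional arguments.

```python
from collections import OrderedDict
from functools import wraps
from threading import Lock

def inspectable_lru_cache(maxsize=128):
    """Hypothetical LRU cache decorator built on OrderedDict.

    Unlike functools.lru_cache, it exposes a shallow copy of the
    cache contents via a cache_items() accessor.
    """
    def decorator(func):
        cache = OrderedDict()
        lock = Lock()

        @wraps(func)
        def wrapper(*args):
            with lock:
                if args in cache:
                    cache.move_to_end(args)   # mark as most recently used
                    return cache[args]
            result = func(*args)              # compute outside the lock
            with lock:
                cache[args] = result
                if len(cache) > maxsize:
                    cache.popitem(last=False)  # evict least recently used
            return result

        def cache_items():
            with lock:
                return dict(cache)  # shallow copy; keys are arg tuples

        wrapper.cache_items = cache_items
        return wrapper
    return decorator
```

A short usage sketch: decorating a function with `@inspectable_lru_cache(maxsize=2)` and calling it a few times lets `cache_items()` return `{(2,): 4, (3,): 9}`-style snapshots, with the oldest entry evicted once the size limit is exceeded. Note that even here the snapshot exposes the key format (argument tuples), which is exactly the kind of implementation detail the objection above is about.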
Okay, thanks everyone for the feedback. I accept that there are more practical difficulties than I expected, and the work-arounds I have are not too onerous. -- Steve
participants (2)
- Raymond Hettinger
- Steven D'Aprano