On Tue, 12 Jan 2021 at 17:04, Steven D'Aprano wrote:
My use case is debugging functions that use an LRU cache, specifically complex recursive functions. I have some functions where:
f(N)
ends up calling itself many times, but not in any obvious pattern:
f(N-1), f(N-2), f(N-5), f(N-7), f(N-12), f(N-15), f(N-22), ...
for example. So each call to f() can make dozens of recursive calls if N is big enough, with irregular gaps between the arguments.
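(For concreteness, a toy function with this shape might look like the following; the base case and offsets are invented purely for illustration:)

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(n):
        # invented base case and offsets, just to illustrate the shape
        if n < 23:
            return 1
        return (f(n - 1) + f(n - 2) + f(n - 5) + f(n - 7)
                + f(n - 12) + f(n - 15) + f(n - 22))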
I was having trouble with the function, and couldn't tell if the right arguments were going into the cache. What I wanted to do was peek at the cache and see which keys were ending up in the cache and compare that to what I expected.
I did end up getting the function working, but I think it would have been much easier if I could have seen what was inside the cache and how the cache was changing from one call to the next.
So this is why I don't care about performance (within reason). My use case is interactive debugging.
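(For reference, the introspection an lru_cache-wrapped function already exposes is cache_info(), which reports aggregate hit/miss counts but not which keys are cached; a toy run:)

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(n):
        return 1 if n < 3 else f(n - 1) + f(n - 2)

    f(10)
    # aggregate statistics only -- the cached keys themselves stay hidden
    print(f.cache_info())
    # CacheInfo(hits=7, misses=10, maxsize=None, currsize=10)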
For debugging, could you not temporarily write your own brain-dead simple cache?

    CACHE = {}

    def f(n):
        if n not in CACHE:
            result = compute(n)  # placeholder for the real calculation
            CACHE[n] = result
        return CACHE[n]

(make a decorator, if you feel like -- see the sketch below). You can then inspect and instrument as much as you like, and once you've got things working, replace the hand-written cache with the stdlib one.

Personally, I don't use lru_cache enough to really have an opinion on this. My first reaction was "that would be neat", but a quick look at the code (the Python version, I didn't go near the C version) and Serhiy's comments made me think that "it's neat" isn't enough justification for me, at least. In practice, I'd just do as I described above, if I needed to.

Paul
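(A minimal sketch of the decorator variant mentioned above; the name dict_cached and the exposed .cache attribute are invented for illustration:)

    from functools import wraps

    def dict_cached(func):
        # hypothetical decorator: wraps func with a plain dict cache
        cache = {}

        @wraps(func)
        def wrapper(n):
            if n not in cache:
                cache[n] = func(n)
            return cache[n]

        wrapper.cache = cache  # expose the dict so its keys can be inspected
        return wrapper

    @dict_cached
    def f(n):
        return 1 if n < 3 else f(n - 1) + f(n - 2)

    f(10)
    print(sorted(f.cache))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Because the recursive calls go through the module-level name f, they hit the wrapper too, so every argument that was ever computed shows up in f.cache.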