On 02.12.2014 13:57, Nick Coghlan wrote:
> As far as I'm aware, this is actually a deliberate design decision. There are so many degrees of freedom in designing a cache API that without constraining the usage model it's really quite difficult to come up with a flexible abstraction that's easier to use than just building your own custom caching class.
Then couldn't we just create a functools.cache decorator that takes as an argument any data structure with a dict-like interface? The data structure currently built into functools.lru_cache could then be factored out as a collections.LruCache class.
LruCache itself could be based quite trivially on collections.OrderedDict.
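To make the idea concrete, here is a minimal sketch of what such a collections.LruCache might look like (the class name and its exact interface are part of this proposal, not existing stdlib API):

```python
from collections import OrderedDict

class LruCache:
    """Sketch of the proposed collections.LruCache: a bounded mapping
    that evicts the least recently used entry once maxsize is exceeded."""

    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def __getitem__(self, key):
        value = self._data[key]
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def __setitem__(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)
```

Because it is just a mapping, any other eviction policy (LFU, TTL, unbounded) could be dropped in behind the same interface.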
> And once you expose the underlying mapping in functools.lru_cache itself, it hugely constrains the internal implementation of that cache (since you just made a whole lot of things that are currently implementation details part of the public API).
With the above approach, exposing the underlying cache structure would be deliberate, so that the user can freely choose the kind of cache to use.
This could then be made explicit by exposing the underlying cache as decorated_function.cache or something similar.
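Here is a minimal sketch of the decorator side of the proposal (again, functools does not currently provide this; the name cache and the .cache attribute are the suggested additions):

```python
from functools import wraps

def cache(cache_obj):
    """Sketch of the proposed functools.cache: memoize a function using
    any dict-like object supplied by the caller, and deliberately expose
    that mapping as wrapper.cache."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args):
            if args in cache_obj:
                return cache_obj[args]
            result = func(*args)
            cache_obj[args] = result
            return result
        wrapper.cache = cache_obj  # the mapping is public on purpose
        return wrapper
    return decorator
```

A user would then write @cache(LruCache(maxsize=128)) or @cache({}) depending on the eviction behaviour they want, and could inspect or modify decorated_function.cache directly.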
> It's OK if folks with needs that don't quite fit the standard idiom write their own custom class to handle it - that makes it possible to keep the standard tools simple to handle the standard cases, while folks with different needs can just write something specifically tailored to their situation, rather than trying to learn and configure a more generic API.
In this case I need the standard tools for the standard cases to expose more functionality, so that I can make my library interoperate with the standard library. Specifically, I need to manually add entries to the cache, because there are function calls that the cache decorator cannot be notified of directly.
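To illustrate the kind of interoperability I mean (the memoize decorator and its .cache attribute below are illustrative sketches, not existing stdlib API): when recursion is rewritten as a loop, the intermediate calls never go through the decorated wrapper, so their results can only be cached by writing into the exposed mapping by hand.

```python
from functools import wraps

def memoize(func):
    """Minimal memoizing decorator that deliberately exposes its
    mapping as wrapper.cache (unlike functools.lru_cache)."""
    table = {}
    @wraps(func)
    def wrapper(*args):
        if args not in table:
            table[args] = func(*args)
        return table[args]
    wrapper.cache = table
    return wrapper

@memoize
def factorial(n):
    # A tail-call-optimized library runs the recursion as a loop, so
    # factorial(n-1), factorial(n-2), ... never re-enter the decorated
    # wrapper and their results would normally be lost to the cache.
    acc = 1
    for i in range(2, n + 1):
        acc *= i
        factorial.cache[(i,)] = acc  # record each intermediate result manually
    return acc
```

After calling factorial(5), all the intermediate values are cached even though the wrapper only ever saw the outermost call.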
Best Regards, Constantin
My tail call optimization lib: https://titania.fs.uni-saarland.de/projects/libtco