[Python-ideas] functools.lru_cache manual cache modification

Raymond Hettinger raymond.hettinger at gmail.com
Wed Dec 3 18:11:20 CET 2014

> On Dec 2, 2014, at 4:57 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> And once you expose the underlying mapping in functools.lru_cache
> itself, it hugely constraints the internal implementation of that
> cache (since you just made a whole lot of things that are currently
> implementation details part of the public API).

Yes, that is exactly correct.  For example, there is a proposed C
implementation that has different internal details that it surely
won't want to or be able to expose.

> So collections.OrderedDict provides the raw building block needed to
> implement an LRU cache, while functools.lru_cache applies that
> building block to a particular use case.

Right.  The OrderedDict makes it really easy to roll your own variants.

And that is what the LRU cache used originally.  Later, it evolved to use
its own linked list so that it could have only one internal dictionary
instead of the two used by the OrderedDict.  This cut the space overhead
almost in half and sped up cache lookups considerably.
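For readers following along, a minimal sketch of the roll-your-own approach
mentioned above might look like this (a simplified illustration, not the
actual functools.lru_cache implementation; it ignores keyword arguments,
thread-safety, and the other refinements the real cache handles):

```python
from collections import OrderedDict
from functools import wraps

def simple_lru_cache(maxsize=128):
    """Hypothetical LRU cache decorator built on OrderedDict."""
    def decorator(func):
        cache = OrderedDict()

        @wraps(func)
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)      # mark as most recently used
                return cache[args]
            result = func(*args)
            cache[args] = result             # newest entries go to the end
            if len(cache) > maxsize:
                cache.popitem(last=False)    # evict the least recently used
            return result

        return wrapper
    return decorator
```

The ordering maintained by the OrderedDict does the heavy lifting:
move_to_end() handles the "recently used" bookkeeping and
popitem(last=False) handles eviction, which is why the original cache
could be written on top of it so easily.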

Having its own structure also lets the LRU cache neatly deal with
multi-threading and re-entrancy, so that it does not fail in odd ways
when used in complex environments.

> It's OK if folks with needs that don't quite fit the standard idiom
> write their own custom class to handle it - that makes it possible to
> keep the standard tools simple to handle the standard cases, while
> folks with different needs can just write something specifically
> tailored to their situation, rather than trying to learn and configure
> a more generic API.

I concur.
