@lru_cache on functions with no arguments

Ian Kelly ian.g.kelly at gmail.com
Thu Aug 3 11:36:00 EDT 2017


On Thu, Aug 3, 2017 at 8:35 AM, Paul Moore <p.f.moore at gmail.com> wrote:
> On Tuesday, 1 August 2017 15:54:42 UTC+1, t... at tomforb.es wrote:
>> > _sentinel = object()
>> > _val = _sentinel
>> > def val():
>> >     if _val is _sentinel:
>> >         # Calculate _val
>> >     return _val
>> >
>> > seems entirely sufficient for this case. Write a custom decorator if you use the idiom often enough to make it worth the effort.
>>
>> I included this in the timings I posted above and found it to be significantly slower than lru_cache with the C extension. I had to add `nonlocal` to get `_val` to resolve, which I think hurts performance a bit.
>>
>> I agree with the premise though, it might be worth exploring.
>
> It's worth pointing out that there's nothing *wrong* with using lru_cache with maxsize=None. You're going to find it hard to get a pure-Python equivalent that's faster (after all, even maintaining a single variable is still a dict lookup, which is all the cache does when LRU functionality is disabled).
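
(For reference, the usage Paul is describing is just lru_cache applied to a
zero-argument function; a rough sketch, with a made-up function name:)

import functools

@functools.lru_cache(maxsize=None)
def get_answer():
    print("computing...")  # runs only on the first call; later calls return the cached result
    return 42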

The single variable is only a dict lookup if it's a global. Locals and
closures are faster.
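
A quick look with the dis module makes the difference visible (a small
sketch, not from the thread itself): a global read compiles to
LOAD_GLOBAL, which goes through the module dict, while a closure read
compiles to LOAD_DEREF, which just reads a cell. The decorator below
exploits that.

import dis

_g = 1
def read_global():
    return _g       # compiles to LOAD_GLOBAL: a dict lookup

def make_closure():
    c = 1
    def read_closure():
        return c    # compiles to LOAD_DEREF: a plain cell read
    return read_closure

dis.dis(read_global)
dis.dis(make_closure())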

import functools

def simple_cache(function):
    sentinel = object()   # unique marker meaning "no cached value yet"
    cached = sentinel

    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        nonlocal cached
        if args or kwargs:
            return function(*args, **kwargs)  # No caching with args
        if cached is sentinel:
            cached = function()   # compute once, on the first no-arg call
        return cached
    return wrapper

*Zero* dict lookups at call-time. If that's not (marginally) faster
than lru_cache with maxsize=None I'll eat my socks.
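
A rough way to check, assuming the simple_cache decorator above is
defined (timings will vary by machine and interpreter, so no numbers
claimed here):

import functools
import timeit

@functools.lru_cache(maxsize=None)
def via_lru_cache():
    return 42

@simple_cache
def via_simple_cache():
    return 42

# timeit accepts a callable; each line times 1,000,000 no-arg calls
print(timeit.timeit(via_lru_cache))
print(timeit.timeit(via_simple_cache))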


