On 04.12.15 23:44, Bill Winslow wrote:

This is a question I posed to reddit, with no real resolution: https://www.reddit.com/r/learnpython/comments/3v75g4/using_functoolslru_cach...

The summary for people here is the following:

Here's a pattern I'm using for my code:

def deterministic_recursive_calculation(input, partial_state=None):
    condition = do_some_calculations(input)
    if condition:
        return deterministic_recursive_calculation(reduced_input, some_state)

Basically, when calculating the results of the subproblem, the subproblem can be solved more quickly by sharing some partial results from the superproblem. (Calling the subproblem without the partial state still gives the same result, but takes substantially longer.)

I want to memoize this function for obvious reasons, but I need lru_cache to ignore the partial_state argument, since its value affects only the computational expense, not the output.
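To illustrate the problem: lru_cache keys on every argument, so two calls that differ only in partial_state are cached separately. A minimal sketch (the function body is a hypothetical stand-in for the real calculation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, partial_state=None):
    # stand-in for the real deterministic calculation;
    # partial_state does not affect the result
    return n * 2

f(10)          # cache miss, computed
f(10, "hint")  # misses again: partial_state is part of the cache key
print(f.cache_info().misses)  # 2
```

Both calls miss the cache even though they must return the same value, which is exactly the waste the question is about.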

Is there any reasonable way to do this?

Memoize a closure.

def deterministic_calculation(input):
    some_state = ...
    @lru_cache()
    def recursive_calculation(input):
        nonlocal some_state
        ...
        return recursive_calculation(reduced_input)
    return recursive_calculation(input)
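To make the suggestion concrete, here is a runnable sketch of the closure pattern using Fibonacci, with a hypothetical `hints` dict standing in for the shared partial state (both `fibonacci` and `hints` are illustrative names, not from the original problem):

```python
from functools import lru_cache

def fibonacci(n, hints=None):
    # hints: optional partial results shared from a caller;
    # they can speed up the work but never change the answer
    hints = hints or {}

    @lru_cache(maxsize=None)
    def fib(k):
        # the cache key is only k; hints is reached through the
        # closure, so it never becomes part of the cache key
        if k in hints:
            return hints[k]
        if k < 2:
            return k
        return fib(k - 1) + fib(k - 2)

    return fib(n)

print(fibonacci(10))           # 55
print(fibonacci(10, {9: 34}))  # still 55, with fewer recursive steps
```

One tradeoff worth noting: because the cached inner function is recreated on each outer call, the cache lives only for the duration of that call. That matches the recursive pattern in the question, but if you need memoization across top-level calls, the cache would have to be hoisted out of the closure.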