Using functools.lru_cache only on some arguments of a function
This is a question I posed to reddit, with no real resolution: https://www.reddit.com/r/learnpython/comments/3v75g4/using_functoolslru_cach...

The summary for people here is the following. Here's a pattern I'm using for my code:

    def deterministic_recursive_calculation(input, partial_state=None):
        condition = do_some_calculations(input)
        if condition:
            return deterministic_recursive_calculation(reduced_input, some_state)

Basically, the subproblem can be calculated more quickly by including/sharing some partial results from the superproblem. (Calling the subproblem without the partial state still gives the same result, but takes substantially longer.)

I want to memoize this function for obvious reasons, but I need the lru_cache to ignore the partial_state argument, for its value does not affect the output, only the computational expense.

Is there any reasonable way to do this?

Things such as memoizing a wrapper don't actually solve the problem. About the only way I can think of with current technology is either a hidden singleton class which maintains state, or a hidden global variable -- which amount to the same thing: storing the partial state outside the function. Both strike me as unwieldy and unpythonic.

What I'd really like is some way to tell functools.lru_cache to ignore some arguments of the function it's memoizing for the purposes of caching. One way would be to add an "arg_filter" argument, which for this example would be used like so:

    @lru_cache(arg_filter=lambda args, kwargs: (args[:1], {}))
    def deterministic_recursive_calculation(input, partial_state=None):
        condition = do_some_calculations(input)
        if condition:
            return deterministic_recursive_calculation(reduced_input, some_state)

This effectively ignores all arguments except the first positional one for the purposes of caching. Such an option could be implemented as in the diff below (provided for discussion purposes only).

So what I want to know is:

1) Am I sane? Is there no particularly good way currently to cache functions following the given pattern?

2) Assuming the answer to 1) is "Yes I am sane, and there's no good way currently", is my proposal a reasonable one? Mostly on philosophical grounds, not necessarily the specifics of how it works.

Thank you for your time and consideration.

Bill

----------------------------------------------------------------------------------------------------------------------------------
https://hg.python.org/cpython/file/3.5/Lib/functools.py

diff functools.py functools.py.orig
363c363
< def _make_key(args, kwds, typed, arg_filter,
---
> def _make_key(args, kwds, typed,
377,378d376
<     if arg_filter is not None:
<         args, kwds = arg_filter(args, kwds)
393c391
< def lru_cache(maxsize=128, typed=False, arg_filter=None):
---
> def lru_cache(maxsize=128, typed=False):
403,406d400
<     *arg_filter* is an optional function which filters user-specified portions
<     of the arguments from the caching key (e.g. if an argument affects the
<     computation but not the final result).
<
428,430d421
<     if arg_filter is not None and not callable(arg_filter):
<         raise TypeError('Expected arg_filter to be a callable')
<
432,433c423
<     wrapper = _lru_cache_wrapper(user_function, maxsize, typed, arg_filter,
<                                  _CacheInfo)
---
>     wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
438c428
< def _lru_cache_wrapper(user_function, maxsize, typed, arg_filter, _CacheInfo):
---
> def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo):
466c456
<         key = make_key(args, kwds, typed, arg_filter)
---
>         key = make_key(args, kwds, typed)
481c471
<         key = make_key(args, kwds, typed, arg_filter)
---
>         key = make_key(args, kwds, typed)
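For concreteness, the proposed semantics can also be prototyped in user space today. The sketch below is only an illustration: the decorator name filtered_cache is invented here, and it uses an unbounded dict rather than a true LRU, but it keys the cache on exactly what arg_filter keeps.

    import functools

    def filtered_cache(arg_filter):
        # Prototype of the proposed arg_filter semantics, outside the stdlib.
        def decorator(user_function):
            cache = {}

            @functools.wraps(user_function)
            def wrapper(*args, **kwargs):
                key_args, key_kwargs = arg_filter(args, kwargs)
                key = (key_args, tuple(sorted(key_kwargs.items())))
                if key not in cache:
                    # The full argument list still reaches the function;
                    # only the cache key ignores the filtered-out parts.
                    cache[key] = user_function(*args, **kwargs)
                return cache[key]
            return wrapper
        return decorator

    @filtered_cache(lambda args, kwargs: (args[:1], {}))
    def slow_square(n, hint=None):
        return n * n                 # 'hint' could speed this up, never change it

    print(slow_square(4))            # computed; key is ((4,), ())
    print(slow_square(4, hint=16))   # cache hit; hint is ignored for keying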
http://thecodelesscode.com/topics/caching

This smells like something that could cause trouble to me... If it's in the stdlib, it's easier for someone to use it and then be confused when something doesn't work correctly (maybe someone else will make partial_state affect the result, not realizing that that argument is ignored).

Too buggy to me.

On December 4, 2015 3:44:18 PM CST, Bill Winslow <bunslow@gmail.com> wrote:
> What I'd really like to do is to have some way to tell functools.lru_cache to ignore some arguments of a function it's memoizing for the purposes of caching. [...]
-- Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity.
The source (https://hg.python.org/cpython/file/3.5/Lib/functools.py) warns on lines 411-414 that one should only use the public API, for thread-safety and forwards-compatibility with a possible C version.

Why not encapsulate the recursive function and its persistent state in a class?

    from functools import lru_cache

    class Fibonacci:
        'the classic recursion example'

        def __init__(self, shortcut=False):
            self.shortcut = shortcut

        @lru_cache()
        def nth(self, n):
            if self.shortcut:
                return -1
            if n == 0:
                return 0
            if n == 1:
                return 1
            return self.nth(n-1) + self.nth(n-2)
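A usage sketch of the class above (added for illustration): lru_cache on a method includes self in the key, so instances with different state keep separate cache entries and cannot poison one another's results.

    fib = Fibonacci()
    print(fib.nth(30))          # 832040, memoized per (self, n)

    shortcut = Fibonacci(shortcut=True)
    print(shortcut.nth(30))     # -1 -- a different self, so no stale cache hit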
On Fri, Dec 4, 2015 at 7:59 PM Ryan Gonzalez <rymg19@gmail.com> wrote:
> This smells like something that could cause trouble to me... If it's in the stdlib, it's easier for someone to use it and then be confused when something doesn't work correctly. [...]
I saw that Bill previously considered and rejected the idea of a class on the Reddit thread, calling it unpythonic and musing about changing the keying function of lru_cache.

It seems to me that while recursion is the hallmark of functional style, shared state is a major feature of object orientation. This example seems particularly appropriate for the blend of the two styles -- a recursive function housed in a class.

To reduce the chances someone creates a second instance of the class, wasting the cached results of the first instance, one could wrap an instance in a plain module-level function.

    def recurse(*args, state=None):
        recurse.instance.state = state
        return recurse.instance.recurse(*args)

    recurse.instance = MethodsHaveCaches()

Good, Bad, Ugly?

On Sat, Dec 5, 2015 at 3:15 AM Michael Selik <mike@selik.org> wrote:
> Why not encapsulate the recursive function and its persistent state in a class? [...]
IIRC, lru_cache actually stores a make_key function that it calls on the args and kw. If this were exposed as part of the interface, you could just do this:

    @lru_cache()
    def spam(a, b):
        return a

    _make_key = spam.cache.make_key
    def make_key(args, kw):
        return _make_key(args[:1], kw)
    spam.cache.make_key = make_key

Your idea of being able to pass a key-transformer or -maker into the constructor is pretty nice too, but it seems like the two should work together instead of being completely different things that end up having the same effect.

One last thing: your transformer obviously won't handle the case where the user passes the args by keyword instead of positionally (and same for mine). Maybe they'd never do that for the input argument, but for partial_state it might be more readable than passing it positionally. So maybe we want to make it easier to correctly specify the args that matter. The simplest interface would just be a list of parameter names (and lru_cache would then have to inspect the signature of its argument to figure out the positional equivalents). On the other hand, maybe the implementation complexity isn't worth the interface simplicity.

Sent from my iPhone
On Dec 4, 2015, at 13:44, Bill Winslow <bunslow@gmail.com> wrote:
> What I'd really like to do is to have some way to tell functools.lru_cache to ignore some arguments of a function it's memoizing for the purposes of caching. [...]
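For what it's worth, the "list of parameter names" interface sketched above could be prototyped roughly as follows. This is only an illustration: cached_on is an invented name, inspect.signature handles the positional/keyword normalization, and an unbounded dict stands in for a true LRU.

    import functools
    import inspect

    def cached_on(*param_names):
        """Memoize on the named parameters only, however they are passed."""
        def decorator(user_function):
            sig = inspect.signature(user_function)
            cache = {}

            @functools.wraps(user_function)
            def wrapper(*args, **kwargs):
                # Normalize positional vs. keyword arguments before keying.
                bound = sig.bind(*args, **kwargs)
                bound.apply_defaults()
                key = tuple(bound.arguments[name] for name in param_names)
                if key not in cache:
                    cache[key] = user_function(*args, **kwargs)
                return cache[key]
            return wrapper
        return decorator

    @cached_on('input')
    def calc(input, partial_state=None):
        return input * 2                     # stand-in for the real computation

    print(calc(21))                          # computed
    print(calc(21, partial_state='hint'))    # cache hit despite the extra arg
    print(calc(input=21))                    # also a hit: keyword-normalized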
On 04.12.15 23:44, Bill Winslow wrote:
> I want to memoize this function for obvious reasons, but I need the lru_cache to ignore the partial_state argument, for its value does not affect the output, only the computation expense.
>
> Is there any reasonable way to do this?
Memoize a closure.

    def deterministic_calculation(input):
        some_state = ...
        @lru_cache()
        def recursive_calculation(input):
            nonlocal some_state
            ...
            return recursive_calculation(reduced_input)
        return recursive_calculation(input)
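Concretely, that sketch might be instantiated like so (a toy example invented here, with Fibonacci standing in for the real calculation and the hint playing the role of partial_state):

    from functools import lru_cache

    def fib_with_hint(n, hint=None):
        # The state is captured by the closure; the inner cache keys on
        # the input alone.
        state = hint or {}

        @lru_cache(maxsize=None)
        def rec(k):
            if k in state:       # shortcut -- never changes the answer
                return state[k]
            if k < 2:
                return k
            return rec(k - 1) + rec(k - 2)

        return rec(n)

    print(fib_with_hint(20))             # 6765
    print(fib_with_hint(20, {10: 55}))   # same answer, via the shortcut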
On Sat, Dec 5, 2015 at 6:11 PM Serhiy Storchaka <storchaka@gmail.com> wrote:
> Memoize a closure. [...]
This would provide the dynamic programming aspect for the recursion, but not cache across successive calls to the outer function. It does look cleaner than the OO version.
On 07.12.15 02:41, Michael Selik wrote:
> This would provide the dynamic programming aspect for the recursion, but not cache across successive calls to the outer function. [...]
Decorate the outer function too.
On Mon, Dec 7, 2015 at 8:31 AM Serhiy Storchaka <storchaka@gmail.com> wrote:
> Decorate the outer function too.
Wouldn't that put us back where we started -- the cache is inappropriately keying on the state/shortcut?

    @lru_cache()
    def recursive(*args, shortcut=False):
        @lru_cache()
        def inner(*args):
            if shortcut:
                return 'took a shortcut'
            print('infinite recursion is fun!')
            return inner(*args)
        return inner(*args)

Creating a separate wrapper would do it, so that the recursive function isn't re-created each time.

    @lru_cache()
    def recursive(*args):
        if recursive.shortcut:
            return 'took a shortcut'
        print('infinite recursion is fun!')
        return recursive(*args)
    recursive.shortcut = False

    def wrapper(*args, shortcut=False):
        recursive.shortcut = shortcut
        return recursive(*args)
On Sat, Dec 5, 2015 at 8:44 AM, Bill Winslow <bunslow@gmail.com> wrote:
> Basically, in calculating the results of the subproblem, the subproblem can be calculated quicker by including/sharing some partial results from the superproblem.
>
> I want to memoize this function for obvious reasons, but I need the lru_cache to ignore the partial_state argument, for its value does not affect the output, only the computation expense. [...]
Coming right back to the beginning here... Is there a reason the some_state argument is being passed around, instead of being global? If the result of the function depends only on reduced_input and not on some_state, then wouldn't it be possible to make use of state from an entirely separate call, not just the current one? (And if you _can't_ make use of some_state from an unrelated call, then it _does_ affect the call, and it needs to be incorporated into the cache key.) ISTM this should be exactly as global as the cache itself.

ChrisA
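Schematically (a toy sketch; fib and _known are invented for illustration), that arrangement might look like this -- the shared state lives at module level, exactly as global as the cache, and the cache keys on n alone:

    from functools import lru_cache

    _known = {}   # partial results shared across all calls

    @lru_cache(maxsize=None)
    def fib(n):
        if n in _known:          # shortcut via shared state; same answer
            return _known[n]
        if n < 2:
            return n
        result = fib(n - 1) + fib(n - 2)
        _known[n] = result
        return result

    _known[40] = 102334155       # state seeded by a "superproblem"
    print(fib(41))               # 165580141, reusing the seeded state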
participants (6)
- Andrew Barnert
- Bill Winslow
- Chris Angelico
- Michael Selik
- Ryan Gonzalez
- Serhiy Storchaka