why memoizing is faster

Stefan Behnel stefan_ml at behnel.de
Sat Mar 26 00:17:11 EDT 2011


Terry Reedy, 25.03.2011 21:18:
> On 3/25/2011 5:16 AM, Stefan Behnel wrote:
>
>> Terry's version is playing with the fact that default arguments are only
>> instantiated once, i.e. (unless overridden by passing an explicit
>> argument) the "_cache" is shared over all calls to the function. This is
>> similar to what your memoization decorator does, but without the
>> additional dict. And, it also "suffers" from the same timing problem
>> that I mentioned before.
>
> It depends on what one wants to measure. If one is memoizing because one
> expects to call the function 1000s of times in some program, then the fast
> lookup time of repeated calls represents the actual post-initialization
> burden on that program. If one wants to measure the initialization time,
> then the cache can easily be restarted with each call (once one knows how):
> fib_iter(100, _cache=[0,1])
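
For reference, the pattern under discussion is presumably something along 
these lines (a sketch only; Terry's actual fib_iter may differ in detail):

    def fib_iter(n, _cache=[0, 1]):
        # The default list is created once, when the def statement runs,
        # and is shared by every call that doesn't pass _cache explicitly.
        while len(_cache) <= n:
            _cache.append(_cache[-1] + _cache[-2])
        return _cache[n]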

Not "restarted" in the sense that it gets cleaned up, though. The above 
simply passes an explicit value for it that will be used for the single 
call. Future calls won't be affected.
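
To illustrate (again assuming a fib_iter along the lines sketched above):

    fib_iter(100)                   # grows the shared default cache
    fib_iter(100, _cache=[0, 1])    # works on the fresh list, then discards it

    # The shared default is untouched by the second call; it still holds
    # everything the first call computed:
    print(len(fib_iter.__defaults__[0]))    # -> 101, not 2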

Stefan
