why memoizing is faster
Stefan Behnel
stefan_ml at behnel.de
Thu Mar 24 10:37:37 EDT 2011
Andrea Crotti, 24.03.2011 14:48:
> I was showing a nice memoize decorator to a friend using the classic
> fibonacci problem.
>
> --8<---------------cut here---------------start------------->8---
> def memoize(f, cache={}):
>     def g(*args, **kwargs):
>         # build a hashable key from the function object, *args and **kwargs
>         key = (f, tuple(args), frozenset(kwargs.items()))
>         # if the result is not cached yet, compute and store it
>         if key not in cache:
>             cache[key] = f(*args, **kwargs)
>
>         return cache[key]
>
>     return g
>
> # without the caching, even fib(100) would be unfeasible
> @memoize
> def fib(n):
>     if n <= 1:
>         return 1
>     else:
>         return fib(n-1) + fib(n-2)
>
> --8<---------------cut here---------------end--------------->8---
>
>
> It works very well, but he said that the iterative version would be
> anyway faster.
>
> But I tried this
>
> --8<---------------cut here---------------start------------->8---
> def fib_iter(n):
>     ls = {0: 1, 1: 1}
>     for i in range(2, n+1):
>         ls[i] = ls[i-1] + ls[i-2]
>
>     return ls[max(ls)]
> --8<---------------cut here---------------end--------------->8---
>
> and, surprisingly, this version is about 20 times slower than the
> recursive memoized version.

What have you used for timing? "timeit"? Note that the memoized version
stores its results globally, so even the final result is cached: the next
call simply takes it from the cache instead of computing anything. The
timeit module calls the code repeatedly, and it is the best runs that get
reported, i.e. exactly those where the result was already cached.
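
A quick sketch of the effect (assuming we expose the decorator's cache as a
`cache` attribute on the wrapper, my addition, so the timing code can clear
it between runs):

```python
import timeit

def memoize(f, cache={}):
    def g(*args, **kwargs):
        key = (f, tuple(args), frozenset(kwargs.items()))
        if key not in cache:
            cache[key] = f(*args, **kwargs)
        return cache[key]
    g.cache = cache  # exposed here (not in the original) so tests can clear it
    return g

@memoize
def fib(n):
    return 1 if n <= 1 else fib(n - 1) + fib(n - 2)

def fib_cold(n):
    fib.cache.clear()  # force the real recursive computation every time
    return fib(n)

# warm: after the first call, every fib(100) is a single dict lookup
warm = timeit.timeit(lambda: fib(100), number=1000) / 1000

# cold: each call rebuilds the whole table of ~100 entries from scratch
cold = timeit.timeit(lambda: fib_cold(100), number=100) / 100

print(warm, cold)  # cold is far larger per call, warm measures only the cache
```

So unless the cache is cleared between runs, timeit ends up benchmarking a
dictionary lookup, not the Fibonacci computation.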
Stefan