recursion vs iteration (was Re: reduce()--what is it good for?)

Bengt Richter bokr at
Mon Nov 10 07:06:30 CET 2003

On Sun, 09 Nov 2003 20:56:39 -0800, David Eppstein <eppstein at> wrote:

>In article <uq6dnflq6t88lTKiRVn-vw at>,
> "Terry Reedy" <tjreedy at> wrote:
>> "David Eppstein" <eppstein at> wrote in message
>> news:eppstein-3EA31B.11393809112003 at
>> > Recursive memoization can be better than iteration when the 
>> > recursion can avoid evaluating a large fraction of the possible 
>> > subproblems.
>> >
>> > An example would be the 0-1 knapsack code in
>> >
>> >
>> > If there are n items and size limit L, the iterative versions 
>> > (pack4 and pack5) take time O(nL), while the recursive version 
>> > (pack3) takes time O(min(2^n, nL)).  So the recursion can be better 
>> > when L is really large.
>> Are you claiming that one *cannot* write an iterative version (with
>> auxiliary stacks) of the same algorithm (which evaluates once each the
>> same restricted subset subproblems)  -- or merely that it would be
>> more difficult, and more difficult to recognize correctness (without
>> having mechanically translated the recursive version)?
>Of course you can make it iterative with auxiliary stacks.
>Any compiler for a compiled language would do that.
>I don't think of that as removing the recursion, just hiding it.
>I thought your question wasn't about semantic games, I thought it was 
>about the relation between memoization and dynamic programming.  Both 
>compute and store the same subproblem values, but in different orders; 
>usually they are the same in complexity but not always.
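[Illustrative sketch, not from the original thread: the point above, that memoization and dynamic programming compute and store the same subproblem values in different orders, can be seen with binomial coefficients. The names `binom_memo` and `binom_dp` are mine.]

```python
def binom_memo(n, k, cache=None):
    """Top-down memoization: recurse, storing each C(n, k)
    the first time it is needed (order driven by the recursion)."""
    if cache is None:
        cache = {}
    if k == 0 or k == n:
        return 1
    if (n, k) not in cache:
        cache[(n, k)] = (binom_memo(n - 1, k - 1, cache)
                         + binom_memo(n - 1, k, cache))
    return cache[(n, k)]

def binom_dp(n, k):
    """Bottom-up dynamic programming: fill Pascal's triangle
    row by row, in a fixed order, then read off the answer."""
    row = [1]
    for _ in range(n):
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row[k]
```

Both produce the same values; the difference is only which subproblems get touched, and when.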
>> Standard example: the fibonacci function has at least two
>> non-constant-time, non-memoized algorithms: one exponential (due to
>> gross redundancy) and the other linear.
>Not to mention the logarithmic (in number of arithmetic operations) 
>algorithms.
>> Too often, people present recursive exponential and iterative linear 
>> algorithms and falsely claim 'iteration is better (faster) than 
>> recursion'.
>That sounds dishonest.  But Fibonacci can be a good example of both 
>memoization and dynamic programming, and I would expect the iterative 
>linear version to be faster (by a constant factor) than the memoized 
>recursive one (also linear due to the memoization).
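[For concreteness, here are the three Fibonacci variants under discussion, sketched in today's Python; `functools.lru_cache` postdates this thread, so take it as a stand-in for a hand-rolled memo dictionary.]

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential: recomputes the same subproblems many times."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized recursion: each fib(k) is computed once, so linear."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_iter(n):
    """Iterative dynamic programming: also linear, and typically
    faster by a constant factor (no call or cache overhead)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```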
I played with Fibonacci and was quite proud of myself for having
probably rediscovered a fast recursion algorithm for it
(about this time in 1996, it seems; sheesh, time flies):

in Scheme

and later I translated it to Python

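For reference, the well-known fast-doubling identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 give a recursion with O(log n) arithmetic operations; a sketch in Python, which may or may not be the same algorithm as mine:

```python
def fib_fast(n):
    """Fast-doubling recursion: returns the pair (F(n), F(n+1))
    using O(log n) arithmetic operations.

        F(2k)   = F(k) * (2*F(k+1) - F(k))
        F(2k+1) = F(k)**2 + F(k+1)**2
    """
    if n == 0:
        return (0, 1)
    a, b = fib_fast(n // 2)   # a = F(k), b = F(k+1), k = n // 2
    c = a * (2 * b - a)       # F(2k)
    d = a * a + b * b         # F(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)
```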
Perhaps you know of a similar predecessor?

BTW, do you mean dynamic programming above as in Bellman? Recursion can be used to
implement that, but for Fibonacci how would it apply?

Bengt Richter
