Software bugs aren't inevitable
steve at REMOVETHIScyber.com.au
Wed Sep 14 20:06:04 CEST 2005
On Wed, 14 Sep 2005 11:28:02 -0400, Terry Reedy wrote:
> "Steven D'Aprano" <steve at REMOVETHIScyber.com.au> wrote in message
> news:pan.2005.09.14.12.05.00.913356 at REMOVETHIScyber.com.au...
>> Which works wonderfully as an academic exercise, but doesn't tend to work
>> so terribly well in the real world where the performance and
>> resource-requirement differences between iteration and recursion can be
> I think your comparison is incomplete.
Yes, it is incomplete.
It seems that I've given the mistaken impression that I am opposed to
recursion in principle. I am not. Perhaps I should have qualified my
remarks by stating that sometimes recursion can be easier to comprehend
than iteration, more efficient and all those other goodies that developers
like their programs to be. A good example of a task that is frequently
better solved with recursion than with iteration is walking a binary tree.
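To make that concrete, here is a minimal sketch (my own, not code from the thread) of an in-order walk of a binary tree, where the recursion mirrors the shape of the data structure directly:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def inorder(node):
    """Yield the values of the tree rooted at node, left subtree first."""
    if node is not None:
        for v in inorder(node.left):
            yield v
        yield node.value
        for v in inorder(node.right):
            yield v

# A small tree: 2 at the root, 1 and 3 as children.
tree = Node(2, Node(1), Node(3))
print(list(inorder(tree)))  # -> [1, 2, 3]
```

An iterative version is certainly possible (keep an explicit stack of nodes), but it is noticeably harder to read for no benefit when the tree is not pathologically deep.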
But in the context of my response, I was replying to a paraphrased quote
from somebody who apparently believes that recursion is *always* better
than iteration. That is clearly not the case.
It is a "mere implementation detail" that (for most computer systems, and
most programming languages) stack space is at a premium and a deeply
recursive function can run out of stack space while the heap still has
lots of free memory. But those implementation details in the real world
make the difference between an elegant solution that runs like a lame duck
and a practical solution that has nothing going for it except the fact
that it works.
(And then there are the elegant solutions that are also practical. It is a
good day when you find yourself writing one of those.)
Recursion is frequently extravagant in its use of resources: if nothing
else, it takes resources to call a function, and recursion means you call
the same function over and over again. There is a reason why functional
programming never really took off.
Extravagance is not necessarily a bad thing -- if I thought it were, I
wouldn't be using a high-level object-oriented language like Python. But
it is important to be aware of those factors.
> Recursion and iteration are two syntaxes for the same operation: repetition
> with variation.
Yes, in general anything that can be solved recursively can be solved
iteratively. Some classes of problems lend themselves naturally to one or
the other solution, but it is always possible (in principle at least) to
convert one into the other.
> Abstractly, these are two algorithms for the same function. One runs in
> exponential time because it wastefully calculates and tosses away an
> exponential number of subvalues. The other runs in linear time because it
> calculates each subvalue once. When one only wants Fib(n), and not the
> sequence leading up to it, even this is wasteful, for large enough n, since
> there is a third algorithm that calculates Fib(n) directly by a simple
> formula (something like the integer part of the golden ratio to the nth
> power).
Yes. There are lots of algorithms that could be done, and they all have
their pros and cons. Binet's formula, for example, is mathematically
correct, but for large enough n, the "mere implementation detail" that
floats have finite precision will cause that algorithm to give incorrect
answers. On my system, "large enough" means n=71.
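Here is a sketch of how to see that for yourself (function names are my own, not from the thread): compare Binet's formula computed with ordinary floats against an exact iterative computation, and find the first n where they disagree. The exact crossover point depends on your platform's floating-point behaviour.

```python
import math

def fib_iter(n):
    """Exact Fibonacci via iteration, using Python's unbounded ints."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    """Binet's formula with finite-precision floats: round(phi**n / sqrt(5))."""
    phi = (1 + math.sqrt(5)) / 2
    return int(round(phi ** n / math.sqrt(5)))

# Walk upward until float rounding error makes the formula go wrong.
n = 0
while fib_binet(n) == fib_iter(n):
    n += 1
print("Binet's formula first disagrees at n =", n)
```

The iterative version stays exact indefinitely because Python integers have arbitrary precision; the closed-form version fails as soon as the accumulated float error exceeds half a unit in the last place.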
> Now: I could (and probably someday will) write an iterative version of the
> exponential algorithm (using an explicit stack) that calculates each
> subvalue exactly as many times as the recursive version of that same
> algorithm. And I could compare it to a recursive version of the more
> efficient linear algorithm (such as posted by Jerzy Karczmarczuk). And I
> could claim that this shows how iteration can waste time compared to
> recursion.
Of course it can. But you have to really *work* at getting the iterative
version to be as wasteful as the obvious recursive version.
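For what it's worth, here is a sketch of what that deliberately wasteful iterative version would look like (my own construction, not code posted in the thread): the exponential-time recursive definition transcribed into a loop with an explicit stack of pending subproblems, so it does exactly the same redundant work.

```python
def fib_naive_recursive(n):
    """The obvious exponential-time recursive definition."""
    if n < 2:
        return n
    return fib_naive_recursive(n - 1) + fib_naive_recursive(n - 2)

def fib_naive_iterative(n):
    """The same exponential algorithm, driven by an explicit stack of
    pending subproblems instead of the call stack. Each subproblem that
    bottoms out at a base case contributes 0 or 1 to the total."""
    total = 0
    stack = [n]
    while stack:
        m = stack.pop()
        if m < 2:
            total += m
        else:
            stack.append(m - 1)
            stack.append(m - 2)
    return total

print(fib_naive_recursive(20), fib_naive_iterative(20))  # both print 6765
```

Notice that nobody would write the iterative version by accident: you have to manage the stack by hand to reproduce the waste that the recursive version gets for free.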
> But that, I admit, would be an invalid conclusion. And that, I claim,
> is also invalid when 'iteration' and 'recursion' are reversed, no matter
> how often repeated in texts and articles. The difference is between the
> algorithms, not the differing syntactic expressions thereof.
Now you go too far. You are right that a change of algorithm will often
make a much bigger difference to performance than merely swapping from one
form of repetition to another. But you ignore those "mere implementation
details" that lead to exceptions like this one:
RuntimeError: maximum recursion depth exceeded
(eg calling Jerzy Karczmarczuk's efficiently recursive function with
n=1000, while my iterative version works for values of n at least an
order of magnitude larger.)
Yes, the maximum recursion depth in Python is an artificial limit. But
that artificial limit is built into Python specifically to protect you
from running into a real recursion limit based on the hardware and
architecture of your PC, with painful consequences.
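A short demonstration of the limit in question (constructed for illustration; in recent Python versions the exception is RecursionError, a subclass of RuntimeError):

```python
import sys

def depth_recursive(n):
    """Recurse n levels deep, doing nothing else."""
    if n == 0:
        return 0
    return depth_recursive(n - 1)

def depth_iterative(n):
    """The same repetition as a loop: no stack growth at all."""
    while n > 0:
        n -= 1
    return 0

try:
    depth_recursive(100000)
except RuntimeError:  # "maximum recursion depth exceeded"
    print("recursion blew the limit of", sys.getrecursionlimit())

print("iteration:", depth_iterative(100000))  # completes without trouble
```

You can raise the limit with sys.setrecursionlimit(), but push it far enough and you trade the polite Python exception for a hard crash when the real C stack runs out.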