Teaching : Python, Scheme, Java...
Alex Martelli
aleax at aleax.it
Fri Apr 18 11:23:26 EDT 2003
On Friday 18 April 2003 04:52 pm, Bjorn Pettersen wrote:
> > From: Alex Martelli [mailto:aleax at aleax.it]
> >
> > Donnal Walter wrote:
> > > What is the difference between "recursive calls" and "tail
> > > recursion"? Sometimes I have written methods that call the
> > > same method on a parent or child object, with the depth of
> > > recursion being limited by a method of the same name NOT
> > > calling itself. In the past I have thought of this
> > > as tail recursion. Is this incorrect?
> >
> > A function-call is "tail" if it's the last thing the caller does,
> > i.e., basically, if the caller executes "return somefunction(args)"
> > or a semantically equivalent sequence.
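For instance, in Python (a minimal sketch of mine, just to make the distinction concrete):

```python
def g(x):
    return x + 1

def tail_caller(x):
    # Tail call: the call to g is the very last thing done;
    # its result is returned unchanged.
    return g(x)

def non_tail_caller(x):
    # NOT a tail call: after g returns we still add 1,
    # so the caller's frame must survive the call.
    return g(x) + 1
```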
>
> [..recursive..]
>
> > A function-call is "tail recursive" if it's tail and recursive.
> > Such calls can be optimized by avoiding the allocation of a new
> > stack frame -- the same frame object that's already on the
> > stack to handle this specific call can be re-used (if you're
> > willing to lose traceback information -- Python currently does
> > not allow that).
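A small Python sketch of the practical consequence (function name is just illustrative):

```python
def countdown(n):
    # Tail-recursive: the recursive call is the last thing done.
    # CPython does NOT reuse the frame (keeping full traceback
    # information is considered more valuable), so each call
    # still consumes a new stack frame.
    if n == 0:
        return "done"
    return countdown(n - 1)

countdown(100)   # fine
# countdown(10**6) raises RecursionError in CPython, precisely
# because the tail call is not optimized into frame reuse.
```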
>
> [To digress, elaborate, use my MS for something :-)] It's actually more
> general. Any tail call (recursive or not) can be optimized, trivially
Nor did I say otherwise, please note -- IF you can reuse the stack
frame rather than having to generate a new one, which depends,
of course, on what's going on behind the scenes (can the function
being called use the same frame that was used to call the one
that's now tail-calling it? that depends on the stack-frame format).
> (assuming the return address is stored in sp, etc.), instead of:
>
> return f()
>
> being compiled to (A) -- excuse my shorthand:
>
> push sp, loc1
> jump f
> loc1: pop sp
> reg1 = pop fn_result_stack # 1
> push reg1, fn_result_stack # 2
> jump sp
>
> (1 and 2 would disappear with a peephole optimizer, assuming there is
> one) tail call optimization creates (B):
>
> jump f
>
> with the last two lines of f looking like the last two lines of (A) the
> semantics are the same.
You're not dealing with the question of the stack frame. How does
f know it's being called without arguments? Surely it will find the
needed indication in its stack frame. Who removes the stack frame
from the stack, caller or callee? What happens if different calls need
different-sized stack frames -- can the currently-topmost stack frame
be "resized", and if so, can it still be correctly removed afterwards
(e.g. in a caller-removes-frame approach, does the caller need the
frame-size information to do the removal, and if so, can it find that
out for a frame that may have been resized?).
Calling conventions are designed to balance many concerns, and the
ability to optimize all tail calls is only one of them. The case
of a recursive tail call can sometimes be optimized (because all stack
frames received by function f are in some manner "uniform", while ones
received by other functions need not be) even under calling conventions
that do not allow the general-case optimization.
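That by-hand optimization can be shown in a few lines of Python (an illustrative sketch; the names are mine, not from the original exchange):

```python
def gcd_recursive(a, b):
    # Tail-recursive form: the recursive call is in tail position.
    if b == 0:
        return a
    return gcd_recursive(b, a % b)

def gcd_loop(a, b):
    # The same function with the tail call "optimized" by hand:
    # rebind the parameters and jump back to the top, which is
    # exactly what frame reuse would do, since every frame this
    # function receives is uniform.
    while b != 0:
        a, b = b, a % b
    return a
```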
> [[Now what if you came up with an algorithm that converted any function
> into one that makes only tail calls? Your code for return would be really
> fast,
> it would never "return" to a previous state, you wouldn't need a stack
> anymore, implementing continuations would be trivial... They did it for
Of course, such a design would ensure uniformity of stack frames, having
chosen to focus on this optimization. BTW, for those who may have
dabbled with Forth and friends in the past, this has some similarities
to "threaded interpreters" (even though there need not be an interpreter
involved, and it looks more like direct-threading than indirect) -- in Forth
of course the stack frame problem is swept away by having the programmer
handle the stack explicitly.
BTW, you may claim you don't need a stack, but that's in some sense
word-trickery;-). If a calls b calls c calls d calls e, at this point the
"nonstack" frames for 4 ongoing calls need to be "somewhere" -- this
somewhere may e.g. be a singly linked list, but that's just because such
a list is a rather good implementation of... ahem, a last-in-first-out queue,
yes, that's the ticket! [must not say the S-word...]
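A tiny Python sketch of that idea: a "trampoline" that keeps the pending tail calls as thunks rather than as stack frames (all names here are illustrative, and assume the final answer is never itself callable):

```python
def trampoline(fn, *args):
    # Drive a function written in "return a thunk instead of making
    # a tail call" style: the chain of pending calls lives in the
    # successive result values, not on the call stack, so the stack
    # depth stays constant no matter how long the chain is.
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def even(n):
    return True if n == 0 else (lambda: odd(n - 1))

def odd(n):
    return False if n == 0 else (lambda: even(n - 1))

trampoline(even, 100000)  # no RecursionError despite 100001 "calls"
```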
> SML, and the various steps, algorithms, etc. are nicely described in:
>
> Compiling with Continuations, Andrew Appel
> Cambridge University Press 1992, ISBN 0-521-41695-7
>
> which is one of the more interesting books I read in college both for
> language and compiler theory (couldn't find an online-ref).]]
Nolo contendere (I didn't study it deeply, barely browsed it, 10 years
ago or so -- my work was Fortran, C and some C++ then...:-).
Alex
More information about the Python-list mailing list