
Backtracking a bit:

[Guido]
> This is another key description of continuations (maybe not quite worth a hug :).
I suppose a kiss is out of the question, then.
> The continuation captures exactly all state that is represented by "position in the program" and no state that is represented by variables.
Right!
> But there are many hairy details. In antiquated assembly, there might not be a call stack, and a continuation could be represented by a single value: the program counter. But now we have a call stack, a value stack, a block stack (in Python), and who knows what else.
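In CPython you can even peek at part of that machinery from Python code. A quick sketch (CPython-specific -- sys._getframe() is an implementation detail, and the value stack and block stack live in the C frame struct, invisible from Python):

    import sys

    def show_frames():
        # Walk the chain of frame objects; each one records a "position
        # in the program" (f_lasti, the last bytecode offset) and links
        # to its caller via f_back.
        frame = sys._getframe()
        while frame is not None:
            print(frame.f_code.co_name, "at bytecode offset", frame.f_lasti)
            frame = frame.f_back

    def inner():
        show_frames()

    def outer():
        inner()

    outer()   # prints show_frames, inner, outer, <module> -- the frame stack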
> I'm trying to understand whether we can get away with saving just a pointer to a frame, whether we need to copy the frame, or whether we need to copy the entire frame stack.
As you convinced yourself in the following paragraphs, for first-class continuations "the entire frame stack" *may* be necessary.
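Here's a cheap way to see why a bare pointer isn't enough: resuming a frame *mutates* it, so a continuation you can invoke more than once needs its own pristine copy. A sketch using a generator as a one-shot, single-frame stand-in for a continuation (the dict is just a mutable cell I made up for the demo):

    # A generator captures one frame's "position in the program".  It's a
    # one-shot, single-frame cousin of a continuation: once the frame runs
    # past a suspension point, you can't rewind it -- there's no copy.
    cell = {"x": 1}

    def suspendable():
        yield                    # position captured here; nothing else is
        print(cell["x"])         # sees whatever cell holds at resume time

    g = suspendable()
    next(g)                      # run to the yield and suspend
    cell["x"] = 99               # mutate state after the "capture"
    next(g, None)                # prints 99: position was saved, data wasn't

To re-enter the same position a second time you'd need a fresh copy of the frame -- and if that position is buried under other calls, copies of all the frames it would return through too. Hence "the entire frame stack".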
> ... How does Scheme do this?
I looked up R. Kent Dybvig's doctoral dissertation, at ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/3imp.ps.gz. He gives detailed explanations of 3 Scheme implementations there (whence "3imp", I guess).

The first is all heap-based, and looks much like the simple Wilson implementation I summarized yesterday. Dybvig profiled it and discovered it spent half its time on function-call overhead and name resolution, combined.

So he took a different approach: Scheme is, at heart, just another lexically scoped language, like Algol or Pascal. So how about implementing it with a perfectly conventional shared, contiguous stack? Except that doesn't work: the advanced features (lexical closures with indefinite extent, and user-captured continuations) aren't stack-like. Tough: forget those at the start, and do whatever it takes later to *make* 'em work. So he did. When his stack implementation hit a user's call/cc, it made a physical copy of the entire stack. And everything ran much faster!

He points out that "real programs" come in two flavors:

1) Very few, or no, call/cc thingies. Then most calls are no worse than Algol/Pascal/C functions, and the stack implementation runs them at Algol/Pascal/C speed (if we knew of anything faster than a plain stack, the latter would use it).

2) Lots of call/cc thingies. Then "the stack" is likely to be shallow (the program is spending most of its time co-transferring, not recursing deeply), and because the stack is contiguous he can exploit the platform's fastest block-copy operation (no need to chase pointer links, etc).

So, in some respects, Dybvig's stack implementation of Scheme was more Pythonic than Python's current implementation <wink -- Dybvig effectively used a Python list at this level, while Python effectively uses a Lispish cons'ed frame list>.

His third implementation was for some propeller-head theoretical "string machine", so I won't even mention it.
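A toy sketch of the capture-by-block-copy idea -- mine, not Dybvig's, with a Python list standing in for the contiguous stack (Python has no call/cc, so this only mimics the copying strategy, not real control transfer):

    # One contiguous "stack" (a Python list): call_cc captures a
    # continuation by block-copying the whole stack, and invoking the
    # continuation block-copies it back.  All names here are made up.
    class Machine:
        def __init__(self):
            self.stack = []                # the one shared, contiguous stack

        def call_cc(self):
            snapshot = self.stack[:]       # capture: one fast block copy
            def continuation():
                self.stack = snapshot[:]   # invoke: restore it wholesale
            return continuation

    m = Machine()
    m.stack.extend(["frame-A", "frame-B"])
    k = m.call_cc()                        # snapshot the stack here
    m.stack.append("frame-C")              # normal execution pushes on
    print(m.stack)                         # ['frame-A', 'frame-B', 'frame-C']
    k()                                    # jump back via the continuation
    print(m.stack)                         # ['frame-A', 'frame-B']

Note the second copy on restore: that's what keeps the captured continuation re-invocable, which is the point of making it first-class.

worrying-about-the-worst-case-can-hurt-the-normal-cases-ly y'rs - tim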