Python, the stack, and the heap

Chris Angelico rosuav at gmail.com
Mon Dec 17 17:45:15 EST 2018


On Tue, Dec 18, 2018 at 9:12 AM eryk sun <eryksun at gmail.com> wrote:
> Objects are heap allocated and use reference counting in CPython (i.e.
> Python implemented in C). A scope reference in CPython could be a fast
> (function) local that's internal to a frame object; a cell object
> that's used to share variables in nested scopes; a locals dict item in
> an unoptimized scope (e.g. module, class, exec); or a temporary
> reference in the frame's stack while evaluating bytecode. Commonly the
> latter is from a LOAD_* opcode. For example, here's the implementation
> of LOAD_FAST:

Hang on, you're conflating two different things here. From the
perspective of a Python program, there is a *call stack* which handles
stuff like this:

def f(x): return g(x - 1)
def g(x): return x * 3
print(f(8))

There are two distinct meanings of "x" here, and they're local to their
respective functions. And if you have any form of recursion or
reentrancy, the call stack is essential to keeping the different
locals separate. This is all part of the definition of the Python
language.
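
To make that concrete, here's a minimal sketch (countdown and its
names are just my illustration, not anything from the quoted post)
of recursion working only because each call's local lives in its own
frame on the call stack:

```python
def countdown(n):
    if n == 0:
        return []
    # The inner call binds its own n; this frame's n is untouched.
    rest = countdown(n - 1)
    return [n] + rest

print(countdown(3))  # each frame's n survives the inner call: [3, 2, 1]
```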

The other stack in CPython is the *operand stack*. You can see it
fairly easily with the dis.dis() function.

>>> dis.dis(f)
  1           0 LOAD_GLOBAL              0 (g)
              2 LOAD_FAST                0 (x)
              4 LOAD_CONST               1 (1)
              6 BINARY_SUBTRACT
              8 CALL_FUNCTION            1
             10 RETURN_VALUE

The expression "x - 1" means "fetch x, fetch 1, subtract, leave the
result on the stack". That's a CPython implementation detail, though;
other implementations can do completely different things.
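
As a sketch of how deep that implementation detail goes: CPython's
compiler even precomputes, per code object, the maximum depth the
operand stack can reach, and exposes it as co_stacksize (the function
here is just an illustration):

```python
def delta(x):
    # "x - 1" needs two operand-stack slots: one for x, one for 1.
    return x - 1

# co_stacksize is the compiler's precomputed maximum operand-stack
# depth for this code object -- purely a CPython detail.
print(delta.__code__.co_stacksize)
```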

There's no connection between the two stacks. Both of them can refer
to objects, but the operand stack doesn't really come into most
people's thinking, unless they're trying to dig deep into how
refcounting works or some other internal detail of CPython. (Which is
fun - don't get me wrong - but it's not how normal Python programming
is done.)
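
If you do want to dig that deep, one place the operand stack leaks
into view is sys.getrefcount, whose result includes the temporary
reference created by passing the argument:

```python
import sys

x = object()
# The count includes x itself plus the temporary reference held
# while the call executes -- in CPython, an operand-stack reference.
print(sys.getrefcount(x))
```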

> Note that the bytecode evaluation stack is part of the heap-allocated
> frame object. It's not the native thread stack (sometimes called the C
> stack when code is written in C), but there's a rough correspondence
> between frame objects and the native stack, since the interpreter
> calls a C function to evaluate a Python frame.

I don't think this is really accurate. The call stack handles Python
functions, the operand stack handles expression evaluation. The
operand stack can have stuff left on it while another function
executes. For instance:

>>> def triangle(x): return x if x < 2 else x + triangle(x - 1)
...
>>> triangle(5)
15
>>> dis.dis(triangle)
  1           0 LOAD_FAST                0 (x)
              2 LOAD_CONST               1 (2)
              4 COMPARE_OP               0 (<)
              6 POP_JUMP_IF_FALSE       12
              8 LOAD_FAST                0 (x)
             10 RETURN_VALUE
        >>   12 LOAD_FAST                0 (x)
             14 LOAD_GLOBAL              0 (triangle)
             16 LOAD_FAST                0 (x)
             18 LOAD_CONST               2 (1)
             20 BINARY_SUBTRACT
             22 CALL_FUNCTION            1
             24 BINARY_ADD
             26 RETURN_VALUE

The recursive branch (from offset 12 onwards) fetches x, then calls
itself recursively, and then adds the return value to what was already
on the stack. The operand stack quite happily carries values from one call to
another. (CPython has some protections, I believe, to ensure that the
operand stack doesn't get destroyed by a mismatched load/store or
something, but conceptually, it's just one stack for all functions.)
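
In CPython specifically, what keeps the outer call's pending x safe
across the inner call is that each frame reserves its own operand-stack
slots (co_stacksize of them). A quick sketch with the same function:

```python
def triangle(x):
    return x if x < 2 else x + triangle(x - 1)

# Every invocation gets its own frame, so the x waiting on the outer
# frame's operand stack is untouched while the inner call runs.
print(triangle(5))                       # 5 + 4 + 3 + 2 + 1 = 15
print(triangle.__code__.co_stacksize)    # per-frame slot count
```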

ChrisA
