[Python-Dev] LOAD_NAME & classes

Tim Peters tim.one@comcast.net
Mon, 22 Apr 2002 18:48:30 -0400


[Guido]
> I seem to have trouble explaining what I meant.

I know, and I confess I'm giving you a hard time.  There's a point to that
too:  uniqueness also imposes costs on newbies and/or newcomers.  Across the
world of programming languages now, dynamic scoping and lexical scoping are
"almost entirely *it*".  For example, the Perl spelling of the running
example here does work the way you intend, but the explanation in Perl is
full-blown dynamic scoping:

sub g {
    print "$x\n";    # prints 12 -- "full-blown dynamic scoping"
}

sub f {
    print "$x\n";    # prints 10
    local($x) = 12;
    &g();
}

$x = 10;
&f();
print "$x\n";        # prints 10

Once you make f print 10, you're on that path as far as anyone coming from
any other language can tell at first glance (or even second and third).  If
you go on to make g print 10 too, it's inexplicable via reference to how any
other language works.  If there were a huge payback for "being different"
here, cool, but the only real payback I see is letting newbies avoid
learning how lexical scoping works, and only for a little while.
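For the record, the literal Python transcription of that Perl program (my reading of the running example, inferred from the Perl above) does neither today: f doesn't print 10 at all, it raises UnboundLocalError, because the assignment to x makes x local throughout f. A sketch, with print as a function so it runs on a current interpreter:

```python
x = 10

def g():
    print(x)       # would print the global x

def f():
    print(x)       # raises: the later assignment makes x local to all of f
    x = 12
    g()

try:
    f()
    outcome = "printed"
except UnboundLocalError:
    outcome = "UnboundLocalError"

print(outcome)     # UnboundLocalError
```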

> Long ago, before I introduced LOAD_FAST and friends, Python had
> something that for want of a better term I'll call "lexical scoping
> with dynamic lookup".

I'm old enough to remember this <wink>.

> It did a dynamic lookup in a (max 3 deep: local / global / builtin)
> stack of namespaces, but the set of namespaces was determined by the
> compiler.  This does not have the problems of dynamic scoping (the
> caller's stack frame can't cause trouble).  But it also doesn't have
> the problem of the current strict static scoping.

Nor its advantages, including better error detection, and ease of
transferring hard-won knowledge among other lexically scoped languages.
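That split is still visible in the bytecode: module-level (and class-body) code compiles name references to LOAD_NAME, the dynamic chained lookup, while function bodies get compiler-resolved LOAD_FAST slots. A sketch using the dis module (modern syntax, not the 2002 interpreter under discussion):

```python
import dis

# Module-level and class-body code: names are looked up at run time,
# namespace by namespace -- the compiler emits LOAD_NAME.
top_level = compile("y = x", "<module>", "exec")
top_ops = [ins.opname for ins in dis.get_instructions(top_level)]

def func():
    x = 1
    return x       # the compiler decided x is local: LOAD_FAST

func_ops = [ins.opname for ins in dis.get_instructions(func)]

print("LOAD_NAME" in top_ops)    # True: dynamic lookup survives here
print("LOAD_FAST" in func_ops)   # True: scope fixed at compile time
```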

> I like the older model better than the current model (apart from
> nested scopes) and I believe that the "only runtime" rule explains why
> the old model is more attractive: it doesn't require you to think of
> the compiler scanning all the code of your function looking for
> definitions of names.  You can think of the interpreter pretty much
> executing code as it sees it.  You have to have a model for name
> lookup that requires a chaining of namespaces based on where a
> function is defined, but that's all still purely runtime (it involves
> executing the def statement).
>
> This requires some sophistication for a newbie to understand, but it's
> been explained successfully for years, and the explanation would be
> easier without UnboundLocalError.
>
> Note that it explains your example above completely: the namespace
> where f is defined contains a definition of x when f is called, and
> thus the search stops there.

Does it scale?

x = 0

def f(i):
    if i & 4:
        x = 10
    def g(i):
        if i & 2:
            x = 20
        def h(i):
            if i & 1:
                x = 30
            print x
        h(i)
    g(i)

f(3)
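Under the rules being defended here, that prediction is checkable mechanically. A sketch with print recast as return (so the results can be collected) and print-function syntax so it runs today; the assignment `x = 30` makes x local to h on every path, whether or not it executes, so the enclosing x's never matter:

```python
x = 0

def f(i):
    if i & 4:
        x = 10
    def g(i):
        if i & 2:
            x = 20
        def h(i):
            if i & 1:
                x = 30
            # x is local to h for *every* i, because of the assignment above
            return x
        return h(i)
    return g(i)

results = []
for i in range(8):
    try:
        results.append(f(i))
    except UnboundLocalError:
        results.append("UnboundLocalError")

print(results)   # 30 for every odd i, UnboundLocalError for every even i
```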

I can look at that today and predict with confidence that h() will either
print 30 (if and only if i is odd), or raise an exception.  This is from
purely local analysis of h's body -- it doesn't matter that it's nested, and
it's irrelevant what the enclosing functions look like or do.  That's a great
aid to writing correct code.  If the value of x h sees *may* come from h, or
from g, or from f, or from the module scope instead, depending on i's
specific value at the time f is called, there's a lot more to think about.

I could keep local+global straight in pre-1.0 Python, although I never got
used to the inability to write nested functions that could refer to each
other (perhaps you've forgotten how many times you had to explain that one,
and how difficult it was to get across?).  Now that Python has full-blown
nested scopes, the namespace interactions are potentially much more
convoluted, and the "purely local analysis" shortcut made possible by
everyone else's <wink> notion of lexical scoping becomes correspondingly
more valuable.

> ...
> Um, that's not what I'd call dynamic scoping.  It's dynamic lookup.

I know -- the problem is that you're the only one in the world making this
distinction, and that makes it hard to maintain over time.  If it had some
killer advantage ... but it doesn't seem to.  When Python switched to
"strict local" names before 1.0, I don't recall anyone complaining -- if
there was a real advantage to dynamic lookup at the local scope, it appeared
to have escaped Python's users <wink>.  I'll grant that it did make exec and
"import *" more predictable in corner cases.

> It's trouble for a compiler that wants to optimize builtins, but the
> semantic model is nice and simple and easy to explain with the "only
> runtime" rule.

Dynamic scoping is also easy to explain, but it doesn't scale.  I'm afraid
dynamic lookup doesn't scale either.  You should have stuck with Python's
original two-level namespace, you know <0.9 wink>.

the-builtins-didn't-count-ly y'rs  - tim