Recursion and Variable Scope

Alex Martelli aleax at aleax.it
Fri Sep 7 04:10:10 EDT 2001


"mudpyr8" <mudpyr8 at yahoo.com> wrote in message
news:d9f7d05e.0109061911.6bdcda3f at posting.google.com...
> I'm writing a simple function. It is recursive. Here's the code:
>
> ##########################
> def generate(sides, dice, roll):
>     if dice > 0:
>         for x in range(sides):
>             roll.append(x+1)
>             generate(sides, dice - 1, roll)
>             roll.pop()
>     else:
>         print roll
> ##########################
>
> When calling it: generate(4, 2, [])
> will generate output of 16 pairs of values, 1-4 each. However, I
> cannot get that result stored in an object; I can only print it.
>
> The last line is the tricky part. Where it says   # print roll #   I
> want it to do something like:   # rolls.append(roll) #  .
> Unfortunately, creating a variable   # rolls = [] #   outside of the
> function definition, and even stating   # global rolls #   on the
> first line of the block doesn't seem to matter.

More precisely, if you say:
    rolls.append(roll)
you're adding to rolls a reference to the OBJECT to which
roll also refers, *NOT* a copy or snapshot of the way that
object is *currently* made up.  Python doesn't do copies
or snapshots behind your back: if you want a copy, you ask
for a copy.  So, to clarify, a slight mod to your code:

rolls = []

def generate(sides, dice, roll):
    if dice > 0:
        for x in range(sides):
            roll.append(x+1)
            generate(sides, dice - 1, roll)
            roll.pop()
    else:
        print roll
        rolls.append(roll)

(no "global rolls" needed: we are not BINDING rolls, so
the compiler knows it can't be local -- we just call a
method on rolls, and it's irrelevant to this purpose that
the method is a mutator rather than just an accessor:-).
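To spell that scoping rule out with a toy example of my own (not part
of the original code): 'global' only matters when a function *rebinds*
the name; just calling a mutating method on the object the name refers
to needs nothing special:

counter = []

def mutate_ok():
    counter.append(1)    # no assignment to the name, so Python finds it as a global

def rebind_needs_global():
    global counter       # without this, 'counter = []' would just bind a new local
    counter = []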

Now, running this:

D:\>python -i ro.py
Alternative ReadLine 1.4 -- Copyright 2001, Chris Gonnerman
>>> generate(4,2,[])
[1, 1]
[1, 2]
[1, 3]
[1, 4]
[2, 1]
[2, 2]
[2, 3]
[2, 4]
[3, 1]
[3, 2]
[3, 3]
[3, 4]
[4, 1]
[4, 2]
[4, 3]
[4, 4]
>>> rolls
[[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []]
>>>

and just for clarity, if we now write in this interactive session:

>>> rolls[0].append('x')
>>> rolls
[['x'], ['x'], ['x'], ['x'], ['x'], ['x'], ['x'], ['x'], ['x'], ['x'], ['x'],
 ['x'], ['x'], ['x'], ['x'], ['x']]
>>>

see?  rolls has 16 references to the SAME list object -- and, no
surprise about that, since we've always appended to rolls a
reference to the same list object each time.
A list object is mutable, so, as we mutate the list object (to
which 'roll' refers in function generate), all references see
the current (mutated) version.  Clear so far...?

So, simplest is to append a COPY of roll to rolls, e.g. changing
the very last line of my version of generate to:
        rolls.append(roll[:])
now python -i ro.py gives us the same output (same print
statements get executed) but then checking out interactively
what's left in global variable rolls we see:

>>> rolls
[[1, 1], [1, 2], [1, 3], [1, 4], [2, 1], [2, 2], [2, 3], [2, 4], [3, 1], [3, 2],
 [3, 3], [3, 4], [4, 1], [4, 2], [4, 3], [4, 4]]

which, I think, is what you desire.
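
Spelled out in full (the same generate as above, with only that last
line changed):

rolls = []

def generate(sides, dice, roll):
    if dice > 0:
        for x in range(sides):
            roll.append(x+1)
            generate(sides, dice - 1, roll)
            roll.pop()
    else:
        print roll
        rolls.append(roll[:])    # append a snapshot (copy), not the live list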


> I've tried to wrap my mind around this but am stuck. I need to perform
> further manipulation on the results, but have no object with which to
> do so. I know I could write to a file (maybe, if the file handle works
> within the recursion) and then read it but that seems like a waste of
> time.

Yes, writing to a file IS a way to ensure a copy of the object
is made at that point (note that *pickling* to a pickler object
would not ensure that: if you pickle a mutable object to the
same pickler repeatedly, it's NOT "snapshotted" each time: when
unpickling, you get back references to the value the object had
when it was FIRST pickled!), but a very round-about one indeed.
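
On that pickling point, here is a minimal sketch (a toy example of my
own, not from this thread; it uses io.BytesIO, so on the Python 2.x of
the time you would use StringIO instead):

import pickle
from io import BytesIO

buf = BytesIO()
p = pickle.Pickler(buf)
roll = [1, 1]
p.dump(roll)       # first dump: the full [1, 1] goes into the stream (and the memo)
roll[1] = 2
p.dump(roll)       # same object again: only a reference to the memo entry is written

buf.seek(0)
u = pickle.Unpickler(buf)
first = u.load()
second = u.load()
# both come back as [1, 1], the state at the FIRST dump -- and indeed
# 'first is second': two references to one unpickled object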

You want a copy, just ask for a copy -- a "full list slice"
roll[:] will do, or import copy and use copy.copy(roll) if
you need better polymorphism (which I don't think you do
here) or prefer the more explicit, readable form (but [:]
is very idiomatic to experienced Pythonistas!-).
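
Spelled out (a tiny comparison of my own, not from the original code):

import copy

roll = [3, 1]              # any list will do for illustration
snap1 = roll[:]            # full-slice copy: the idiomatic spelling for lists
snap2 = copy.copy(roll)    # shallow copy of any copyable object; more explicit
snap3 = list(roll)         # also fine when you know you want a list back

(all three are shallow copies, which is all you need here, since the
rolls hold only integers).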

I hope this proves instructive regarding the crucial "objects
and references" underpinning of Python.  Still, there are of
course other approaches to your specific task besides having
all those appends to and pops from roll, and the recursion too.  My
personal favourite is looking at the lists you're generating
as rather transparent "base-N numbers of X digits each" where
X is dice and N is sides (you have a +1 to each digit, but
that's not crucial of course:-).  So just count from 0 to
N**X-1 with digit rollover, and you don't really need the
auxiliary list roll either:

def gen(sides, dice):
    results = []
    current = dice*[1]              # the "current roll": every die starts at 1
    while 1:
        results.append(current[:])  # snapshot a copy of the current roll
        # now "count up": bump the rightmost die, carrying leftwards on overflow
        n = dice - 1
        while n>=0:
            current[n] += 1
            if current[n]<=sides: break
            current[n] = 1
            n -= 1
        else: break                 # carried past the leftmost die: all done
    return results

Now:

>>> gen(4,2)
[[1, 1], [1, 2], [1, 3], [1, 4], [2, 1], [2, 2], [2, 3], [2, 4], [3, 1], [3, 2],
 [3, 3], [3, 4], [4, 1], [4, 2], [4, 3], [4, 4]]

Personally, I find it clearest to frame such enumeration
problems as "counting in a certain base" (or sequence of
bases, if the dice have different numbers of sides), but
that's pretty much up to individual taste, of course;
there's no substantial performance difference (in a
few test runs with 6 6-sided dice, gen tends to be 20%
or 30% faster on my box, i.e. pretty much in the "don't
worry about it" range:-).  I like gen because it's more
flexible -- I can easily optimize by preallocation...:

def gen1(sides, dice):
    results = [0]*(sides**dice)
    current = dice*[1]
    i = 0
    while 1:
        results[i] = current[:]
        i += 1
        n = dice - 1
        while n>=0:
            current[n] += 1
            if current[n]<=sides: break
            current[n] = 1
            n -= 1
        else: break
    return results

for a further 30%-or-so speedup -- and now gen1 is
almost twice as fast as generate, which may
be already in the useful-speedup zone in some cases
(those in which this generation is the bottleneck
of some important operation of your program).  Or,
when I want flexibility instead, I can more easily
wrap gen up as a class with a __getitem__, so it
can be used as the sequence in a for statement (in
Python 2.2 this is not so important, thanks to
generator iterators, but sometimes it may be useful
to keep compatibility with older Python versions).
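
To sketch that wrapping idea (my own illustration, with a made-up class
name Rolls, not anything from the original post): the old sequence
protocol only needs a __getitem__ that raises IndexError past the end,
and the i-th roll is just i written in base sides, with 1 added to each
digit:

class Rolls:
    def __init__(self, sides, dice):
        self.sides = sides
        self.dice = dice
        self.total = sides ** dice
    def __len__(self):
        return self.total
    def __getitem__(self, i):
        if not 0 <= i < self.total:
            raise IndexError(i)
        # write i in base 'sides', least-significant die last, adding 1 per digit
        digits = []
        for _ in range(self.dice):
            i, d = divmod(i, self.sides)
            digits.append(d + 1)
        digits.reverse()
        return digits

so that "for roll in Rolls(4, 2): ..." visits the same 16 pairs as
gen(4, 2) without ever materializing the whole list (and, from 2.2 on,
a generator function yielding successive rolls does the same job even
more simply).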

Generalizing to differently-sided dice is easier this way, too --
imagine sides is now a sequence with one item per die,
each item giving the number of sides of that specific die;
then all the changes you need (e.g. to gen1) are:

def gen2(sides, dice):
    import operator
    assert len(sides)==dice
    results = [0]*reduce(operator.mul,sides)
    current = dice*[1]
    i = 0
    while 1:
        results[i] = current[:]
        i += 1
        n = dice - 1
        while n>=0:
            current[n] += 1
            if current[n]<=sides[n]: break
            current[n] = 1
            n -= 1
        else: break
    return results

basically just testing vs sides[n] rather
than a single number 'sides', and a tiny bit
more work at startup for pre-allocation.
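
For instance (a quick check of my own, not an output from the original
session), two dice with 2 and 3 sides:

>>> gen2((2, 3), 2)
[[1, 1], [1, 2], [1, 3], [2, 1], [2, 2], [2, 3]]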

But I'm sure the recursive approach must have
some advantages too, if it's clearer to you or
more flexible along axes that matter to you.


Alex





