[Tutor] How to solve it ...

Tim Peters tim.one@comcast.net
Mon Feb 17 22:16:02 2003


[Tim, on Gregor Lingl puzzle]
> ...
> Trying the simplest thing that could possibly work is often a good
> start: generate all possible sequences composed of {D1, D2, P1, P2}, in
> increasing order of length, and for each one see whether it solves all
> initial states.  The first one that does is "the answer".

For fun, I'll include code for that here.  Of all the refinements I
discussed later, it only incorporates the observation that D1 and P1 are
really the same operation:

"""
# Bit assigned to each cell of the 2x2 board:
#  1 2
#  4 8

# D1 and P1 have the same effect, so ignore (arbitrarily) P1.  Each entry
# lists the XOR masks an operation might apply, depending on which cell(s)
# it actually hits.
code2masks = {"D1": (1, 2, 4, 8),
              "D2": (1+8, 2+4),
              "P2": (1+2, 4+8, 1+4, 2+8)}

winners = 0, 15 # all off or all on

class Finished(Exception):
    # Raised to pop out of the recursive search as soon as a solving
    # program is found; carries that program back to the top-level loop.
    def __init__(self, prog):
        self.prog = prog

def solves_one(prog, state):
    # Return True iff prog is guaranteed to reach a winning state from
    # `state`, no matter which cells each operation actually hits.
    if state in winners:
        return True
    if not prog:
        return False
    # Peel off the next 2-character operation code; every one of its
    # possible effects must leave a state the rest of prog still solves.
    code, prog = prog[:2], prog[2:]
    for mask in code2masks[code]:
        if not solves_one(prog, state ^ mask):
            return False
    return True

states = range(1, 15) # every initial state that isn't already a winner

def solves_all(prog):
    for state in states:
        if not solves_one(prog, state):
            return False
    return True

def extend(prog, depth, limit):
    # Depth-first, generate every program of exactly `limit` operations;
    # test each completed program, and announce the first solver found.
    if depth == limit:
        if solves_all(prog):
            raise Finished(prog)
        return
    for code in "D2", "P2", "D1":
        extend(prog + code, depth + 1, limit)

# Iterative deepening:  try ever-longer program lengths until a solver
# turns up.
limit = 0
try:
    while True:
        limit += 1
        print "trying all programs of length", limit
        extend("", 0, limit)
except Finished, f:
    print "shortest program solving all initial states:", f.prog
"""

There are two techniques here especially worth noting.  First is that
"generating all" of some class of structured object is very often most
naturally coded via recursion.  However, recursion is most naturally a
depth-first process, and we're interested in finding the *shortest*
"program" (sequence of D1, D2, P2 thingies) solving all initial states, and
that's more naturally done with a breadth-first approach.  The technique
used above is a compromise called "iterative deepening":  the programs are
generated in a natural depth-first order, but the recursion isn't allowed to
go deeper than the value of the "limit" argument.  When generating all
programs of length N, we do all the work we did to generate all programs of
length N-1 all over again, so there's quite a bit of redundant work here.
However, because the amount of work required grows exponentially in N no
matter how it's programmed, this has little practical consequence.  Do a
Google search on iterative deepening to learn a lot more about search
tradeoffs; iterative deepening is a simple idea that often works great.
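
Stripped of the puzzle details, an iterative-deepening search is just a
depth-limited depth-first search run repeatedly with a growing limit.  Here's
a bare-bones sketch of that shape, where the successors and is_goal callables
are stand-ins for whatever the problem at hand supplies:

"""
def depth_limited(node, path, limit, successors, is_goal):
    # Return a goal path reachable from `node` in at most `limit` more
    # steps, or None if there isn't one within that bound.
    if is_goal(node):
        return path
    if limit == 0:
        return None
    for child in successors(node):
        found = depth_limited(child, path + [child], limit - 1,
                              successors, is_goal)
        if found is not None:
            return found
    return None

def iterative_deepening(start, successors, is_goal, max_limit=25):
    # Rerun the depth-limited search with a growing bound, so the first
    # hit is a shortest one.  Redoing the shallow levels each time is
    # cheap next to the cost of the newest, deepest level.
    for limit in range(max_limit + 1):
        found = depth_limited(start, [start], limit, successors, is_goal)
        if found is not None:
            return found
    return None
"""

For instance, iterative_deepening(2, lambda n: (2*n, 2*n+1), lambda n: n == 19)
walks the implicit "double, or double and add one" tree and returns the
shortest path [2, 4, 9, 19].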

The other technique is simpler:  raising an exception to "break out" of a
chain of recursive calls.  That's how the first (and therefore shortest)
solving program found is communicated back to the main loop.  This is a lot
easier, and harder to screw up, than having recursive functions return some
sort of "search status" result and then branching on that.

Have fun!