[Tutor] Automaton/transitional grammar query

Dave Angel davea at ieee.org
Mon Oct 12 13:02:47 CEST 2009


kevin parks wrote:
> I posted about this a couple of weeks back, but then got horribly ill 
> and dropped the ball, so I was hoping to revisit.
>
> I am not sure if this is an example of a Finite Automaton or a Finite 
> State Machine, or perhaps it is related to a transition table or Markov 
> process. I think someone here told me it is called a transitional 
> grammar? I am not sure. I am not a math person, but I would love to 
> know exactly what this is. I googled around and got lots of super 
> complicated gobbledygook, all with knotty regex stuff, but what I want 
> to do is much simpler. I am trying to use a table to define a 
> bunch of moves like so:
>
> 1 --> 2 5
> 2 --> 1 4
> 3 --> 3
> 4 --> 1
> 5 --> 4 3
>
> so that I can generate a sequence that, given an initial value, will 
> continue to grow according to these rules. Starting with 1 we then go 
> to 2 & 5, 2 leads us to 1 & 4, the 5 leads us to 4 & 3, then we 
> iterate over 1 4 4 and 3 to get 2 5 1 1 and 3.... Like so:
>
> 1
> 2 5
> 1 4 4 3
> 2 5 1 1 3
> 1 4 4 3 2 5 2 5 3
>
> ..... etc.
>
> Essentially, we iterate over the items most recently added to the list, 
> apply the table, append those new items to the list, apply the table 
> again... etc., until the sequence reaches some predetermined number of 
> iterations and quits.
>
> [ [1], [2, 5], [1, 4] , [4, 3], [2, 5], [1], [1], [3], [1, 4], [4, 3], 
> [2, 5], [2, 5], [3] ]
>
> First, as I mentioned, I would like to know what, precisely, this kind 
> of process is called so that I can look it up. Second, I would like 
> to add to what I have, which seems to work. But first here is the 
> code, where we left off below:
>
> #!/usr/bin/env python
>
> rules = {1: (2, 5), 2: (1, 4), 3: (3,), 4: (1,), 5: (4, 3)}
>
> def apply_rules(sequence):
>     for element in sequence:
>         # look it up in the global rules
>         values = rules[element]
>         # yield each of those in turn
>         for value in values:
>             yield value
>
> def flatten(l, ltypes=(list, tuple)):
>     # flatten arbitrarily nested lists/tuples into one flat sequence
>     ltype = type(l)
>     l = list(l)
>     i = 0
>     while i < len(l):
>         while isinstance(l[i], ltypes):
>             if not l[i]:
>                 l.pop(i)
>                 i -= 1
>                 break
>             else:
>                 l[i:i + 1] = l[i]
>         i += 1
>     return ltype(l)
>
> def test():
>     data = [1]
>     outlist = []
>     for i in range(10):
>         outlist.append(data)
>         gen = apply_rules(data)
>         data = list(gen)
>     outlist.append(data)  # one more to get the final result
>     print '\n\n', outlist, '\n\n'
>     flat = flatten(outlist)
>     count = 0
>     for item in flat:
>         print count, ',', item, ';'
>         count += 1
>     print '\n\n'
>
> if __name__ == "__main__":
>     test()
>
>
> This all appears to work. I am not sure if this is the best way to do 
> it, but for the size of lists I have been generating it seems zippy.
>
> So what? You are asking a question you already know the answer to? 
> Well, now I would like to have this set of rules contain some 
> probabilistic branching. Instead of having my "go to" rules or grammar 
> hard-wired, it would be good if some steps could also have weighted 
> choices, so that maybe 1 --> 2 5 70% of the time but it goes 1 
> --> 2 4 every once in a while (as in 30%). I am not sure how to do 
> that... also, it occurs to me that I could define a grammar that got 
> stuck in an infinite loop if I am not careful. I wonder if I should add 
> some mechanism to check the dictionary-defined rules before 
> execution... or if that is just too hard to do and I should just be 
> careful.
>
> Meanwhile I have my trusty old weighted random func all ready to go:
>
> import random
>
> def windex(lst):
>     '''An attempt at a weighted random.choice().
>
>     Accepts a list of (item, weight) tuples and returns one item,
>     chosen with probability proportional to its weight.'''
>
>     wtotal = sum([x[1] for x in lst])
>     n = random.uniform(0, wtotal)
>     for item, weight in lst:
>         if n < weight:
>             break
>         n = n - weight
>     return item
>
> My question is how to apply this since I am setting up my rules in a 
> dictionary, so I am confused as to how all these pieces, which work in 
> isolation, would fit together. Lastly, if I add the probabilities... 
> is this just a super roundabout way to make a quasi-Markov table? I 
> dunno. But it seems like a cool way to make patterns.
>
>
Often, when a combination of existing stdlib collection types gets too 
confusing, it's time to consider classes and objects.  Not necessarily 
to make the program "object oriented," but to make the program's data 
structure understandable.

You have a dictionary mapping from ints to tuples of ints.  You want to 
keep the same dictionary keys, but make the values more complex.  Each 
value, instead of being a tuple, needs to be a list of tuples, with a 
probability assigned to each.
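
For concreteness, the reshaped dictionary could look something like the 
following, using the 70/30 split from your example on key 1 (the exact 
weights and nesting are just one way to arrange it, not the only one):

rules = {1: [((2, 5), 0.7), ((2, 4), 0.3)],   # 1 --> 2 5 (70%) or 2 4 (30%)
         2: [((1, 4), 1.0)],
         3: [((3,),   1.0)],
         4: [((1,),   1.0)],
         5: [((4, 3), 1.0)]}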

So create a class for that "list of tuples," and modify windex to work 
on an object of such a class.  And in apply_rules, change the values= 
line to call windex() after doing the lookup.
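
A minimal sketch of that arrangement might look like the code below; the 
class name WeightedRule and its choose() method are placeholders of my 
own, not anything from your posted code:

import random

class WeightedRule(object):
    """One rule: a list of (successor_tuple, weight) pairs."""
    def __init__(self, choices):
        self.choices = list(choices)

    def choose(self):
        # same walk-and-subtract idea as windex()
        wtotal = sum(weight for _, weight in self.choices)
        n = random.uniform(0, wtotal)
        for item, weight in self.choices:
            if n < weight:
                return item
            n -= weight
        return item   # guard against floating-point round-off

rules = {1: WeightedRule([((2, 5), 0.7), ((2, 4), 0.3)]),
         2: WeightedRule([((1, 4), 1.0)]),
         3: WeightedRule([((3,), 1.0)]),
         4: WeightedRule([((1,), 1.0)]),
         5: WeightedRule([((4, 3), 1.0)])}

def apply_rules(sequence):
    for element in sequence:
        values = rules[element].choose()   # weighted pick instead of a plain lookup
        for value in values:
            yield value

The rest of the posted program (flatten(), test()) should run unchanged, 
since apply_rules() still yields plain integers.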

I don't see where an infinite loop is possible.  But it'd be useful to 
do some error checking for the class items, such as validating that the 
probabilities add up to 100% (within a tolerance factor).
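
Using the hypothetical WeightedRule class sketched above, and assuming 
the weights are written as fractions that should total 1.0 per rule, 
that check could be as simple as (check_rules and the 0.001 tolerance 
are just illustrative):

def check_rules(rules, tolerance=0.001):
    # raise if any rule's weights don't sum to (approximately) 1.0
    for key, rule in rules.items():
        total = sum(weight for _, weight in rule.choices)
        if abs(total - 1.0) > tolerance:
            raise ValueError("rule %r: weights sum to %s, not 1.0" % (key, total))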

DaveA

