emergent/swarm/evolutionary systems etc

Peter MacKenzie peter9547 at btinternet.com
Tue Mar 30 08:44:28 EST 2004


(Hmm, this might appear as a double posting, but I don't think my last one
made it through.)

Thanks, but.

("One approach to discussing and comparing AI
problem solving strategies is to categorize them using the
terms ''strong'' and ''weak'' methods. Generally, a weak
method is one which has the property of wide applicability
but, because it makes few assumptions about the problem
domain, can suffer from combinatorially explosive
solution costs when scaling up to larger problems. State
space search algorithms and random search are familiar
examples of weak methods."

Using Genetic Algorithms to Solve NP-Complete Problems
Kenneth A. De Jong
George Mason University
KDEJONG at GMUVAX2.GMU.EDU

William M. Spears
Navy Center for Applied Research in AI
SPEARS at AIC.NRL.NAVY.MIL
<http://hive.cs.uwyo.edu/~wspears/papers/icga89.pdf>)

From this, and other reading on genetic algorithms, I've found that nearly
all of the applications being developed and used are far 'stronger' than
what I have in mind. Most critically, the great preponderance of uses
involves a program that simply modulates a set of given variables in search
of an answer, rather than modifying the source code itself. Understandably,
this is because, as stated above, such 'weak' methods are of limited use for
most problems (I wouldn't want to fly in a plane that had been designed by
one).

Since, at the moment, I'm more interested in a 'pet cyber slime' than in a
tool for designing efficient circuit layouts, such 'strong' applications are
of limited use to me. Even the 'state space search' and 'random search'
examples above are considerably 'stronger' than I would want to use.
For me, the goal is not only to create a program that can find an optimal
solution to a problem, but one which can also find better ways of learning -
something disallowed by a fixed-algorithm engine.

Your pointer did lead me to consider a couple of additional factors in my
design - the incorporation of a 'breeding' mechanism to allow files in a
population to share information, and a formalised chromosome structure, to
increase the probability that mutations will produce a sensible effect, and
to facilitate breeding:

(The process of species changing over time to become better at survival
within the environment is called evolution. One advance that it produced was
a new method of reproduction. Cells began to carry some of their genes in
rings called plasmids. During meetings between individuals they could
exchange bundles, passing the environmental knowledge embedded within the
plasmid from individual to individual. Successful plasmids spread amongst
the population but unsuccessful plasmids encumbered their carrier who would
die sooner and have less chance to spread the bad genes. Greater benefits
came when certain genes came together that worked better in combination than
in isolation. The likelihood of random mutation equipping one cell with both
genes is low but with the crossover of genes between individuals these
traits can arise separately and, once they become common in the population,
join together within one individual.

Genetic Algorithms
Nature's Genetics
Matthew Caryl
<http://www.permutationcity.co.uk/projects/mutants/geneticalgorithms.html>)
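
To pin that down for myself, here's a minimal sketch of what a 'breeding'
step between two files might look like if each file carried a formalised
chromosome - represented here as nothing more than a list of code snippets -
and the engine did one-point crossover on them. The snippet strings and the
chromosome layout are pure invention for the sake of illustration:

import random

def crossover(parent_a, parent_b):
    # Pick a single cut point and swap the tails of the two chromosomes.
    point = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

# Two imaginary individuals; each 'gene' is just a line of code.
mother = ["x = 0", "x = x + 1", "print(x)", "y = x * 2"]
father = ["z = 10", "z = z - 1", "print(z)", "w = z // 2"]

child_one, child_two = crossover(mother, father)
print(child_one)
print(child_two)

The appeal is the same as with the plasmids above: a run of genes that
already works can jump from one file to another in a single step, instead
of having to arise twice by independent random mutation.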

There's a lot of other interesting stuff on Caryl's site, but it'll take a
while to examine it all, and it'll be quite a while more before I have the
time to start on Stephen Wolfram's "A New Kind of Science". One question
though - can I, indeed, create, modify, run and save files using just a
Python script?
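
Partly answering myself: I think the answer is yes, along these lines. This
is a rough guess from my reading of the tutorial's file-handling section and
the os module docs, so the details may well be off, and the file name is
just a placeholder:

import os

# Create a 'candidate' file and write some (possibly mutated) code into it.
candidate = open("candidate_001.py", "w")
candidate.write("print('hello from a generated program')\n")
candidate.close()

# Run it as a separate process; the exit status tells us whether it crashed.
status = os.system("python candidate_001.py")  # or "python3" on some systems

# Append the result to the file itself, so it persists after this script
# has finished and closed down.
log = open("candidate_001.py", "a")
log.write("# exit status of last run: %d\n" % status)
log.close()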

Peter MacKenzie <peter9547 at btinternet.com> wrote in message
news:c4aeq8$92a$1 at hercules.btinternet.com...
> Hello, I'm Peter, and I'm very much a newbie.
>
> New in the sense of being a novice at everything to do with programming,
> rather than just new to Python, so please be very explicit in your replies
> (should you deem me worthy of such attention).  I've long had a passing
> interest in programming from a macro-conceptual level, but only very
> recently have I tried to learn the details of how to implement my ideas in
> code.  Since I'm so inexperienced and learning solely from on-line
> documentation in my sparse spare time, I'm finding the learning curve
> rather steep, and would appreciate it if you didn't assume that I 'know'
> anything (especially with formatting issues).
>
> What instigated my uptake of programming was a recently sparked, and
> rapidly growing, interest in emergent processes, and particularly in
> computer applications for them.  I've not covered more than 20 hours of
> good study on the subject (accounting for liberal procrastination), but I
> like to jump in the deep end with things, and the logic of emergence
> really 'clicks' with my brain.  I have, however, covered biological
> evolution in considerably more depth, so I'm already quite familiar with
> the concepts involved.
>
> What I've been trying to do with Python is to set up something resembling
> John Von Neumann's conception of an imperfectly self-replicating program:
>
> (3.2.1 Self-Reproduction
> Nearly all of the practical implementations of artificial worlds to be
> described are related in some way to the seminal work of John von Neumann.
> In the late 1940s and early 1950s, he devoted considerable time to the
> question of how complicated machines could evolve from simple machines.
> [Footnote: Von Neumann had difficulties in defining precisely what the
> term 'complicated' meant. He said "I am not thinking about how involved
> the object is, but how involved its purposive operations are. In this
> sense, an object is of the highest degree of complexity if it can do very
> difficult and involved things." [von Neumann 66].]
> Specifically, he wished to develop a formal description of a system that
> could support self-reproducing machines which were robust in the sense
> that they could withstand some types of mutation and pass these mutations
> on to their offspring. Such machines could therefore participate in a
> process of evolution.
> Inspired by Alan Turing's earlier work on universal computing machines
> [Turing 36], von Neumann devised an architecture which could fulfil these
> requirements. The machine he envisaged was composed of three subcomponents
> [von Neumann 66]:
> 1. A general constructive machine, A, which could read a description (X)
> of another machine, X, and build a copy of X from this description:
>     A + (X)  ->  X                                               (3.1)
> (where + indicates a single machine composed of the components to the
> left and right suitably arranged, and -> indicates a process of
> construction.)
> 2. A general copying machine, B, which could copy the instruction tape:
>     B + (X)  ->  (X)                                             (3.2)
> 3. A control machine, C, which, when combined with A and B, would first
> activate B, then A, then link X to (X) and cut them loose from
> (A + B + C):
>     A + B + C + (X)  ->  X + (X)                                 (3.3)
> Now, if we choose X to be (A + B + C), then the end result is:
>     A + B + C + (A + B + C)  ->  A + B + C + (A + B + C)         (3.4)
> This complete machine plus tape, [A + B + C + (A + B + C)], is therefore
> self-reproducing. From the point of view of the evolvability of this
> architecture, the crucial feature is that we can add the description of
> an arbitrary additional automaton D to the input tape. This gives us:
>     A + B + C + (A + B + C + D)  ->
>         A + B + C + D + (A + B + C + D)                          (3.5)
> Furthermore, notice that if the input tape (A + B + C + D) is mutated in
> such a way that the description of automaton D is changed, but that of A,
> B and C are unaffected - that is, the mutated tape is (A + B + C + D') -
> then the result of the construction will be:
>     A + B + C + D + (A + B + C + D)  --mutation-->
>         A + B + C + D' + (A + B + C + D')                        (3.6)
> The reproductive capability of the architecture is therefore robust to
> some mutations (specifically, those mutations which only affect the
> description of D), so the machines are able to evolve. Von Neumann pointed
> out that it was the action of the general copying automaton, B, which gave
> his architecture the capacity for evolving machines of increased
> complexity, because B is able to copy the description of any machine, no
> matter how complicated [von Neumann 66] (p.121). This ability is clearly
> demonstrated in Equation 3.5 above.
>
> From Artificial Evolution to Artificial Life
> Timothy John Taylor
> Ph.D. thesis, University of Edinburgh, 1999)
>
> - The formatting on the equations is a bit messed up, but I hope you get
> the idea.
>
> Although the concept is quite straightforward, and I have my own ideas
> about what specific features I'd like to include, I'm unsure as to how to
> implement it at the level of coding.  Basically, I wish to create:
> 1.  An entirely human-designed evolutionary engine.
> 2.  A file that '1' will access, modify, test against a goal to retrieve
> a score, then save with the modifications intact and ready for the next
> run or loop-cycle.
> 3.  A buffer that will store a population of these files.
> 4.  A death engine that will remove low-scoring files (probably as a part
> of '1').
>
> These concepts aren't new, and are already well publicised.  Variants of
> them are used for determining coefficient values in engineering, medical,
> etc. research, but I wish to create a program that's more open-ended than
> a number-finder.  The von Neumann model is incomplete, since it not only
> fails to specify a method for mutation beyond the vague description given,
> but also fails to account for the vanishingly small chance that completely
> random mutation of a program would produce any intelligible results
> relative to the processor's capacity for run cycles (overlooked, perhaps,
> because of an over-riding focus on making the evolutionary engine robust
> to mutation).  To solve this, I propose an 'incubator' approach, where the
> engine would, as part of its evolutionary toolkit, and particularly at the
> beginning of the process before the code becomes complex enough to make
> non-fatal mutations likely, scavenge code from external sources.
>
> Regardless of any issues of copyright etc., this has the advantage of
> basing the now-less-random mutations on code that is known to be operable,
> rather than the infinite-monkeys approach of random alphanumeric
> generation.  To incorporate this and other revisions and refinements that
> I've just spotted, the program structure would look like so:
>
> An entirely human-designed evolutionary engine, which:
> 1.  Allows the user to set a goal.
> 2.  Takes random sections of code from a library.
> 3.  Places them in a text file.
> 4.  Applies a 'scatter shot' of random alphanumeric changes (with either
> a random or a user-specified % of alphanumerals changed).
> 5.  Runs the file.
> 6.  Scores the file on how many valid lines it has, whether it fulfilled
> the goal, how long it took, etc.
> 7.  Saves the score to the file (in the title?).
> 8.  Repeats for a user-specified number of times.
> 9.  Selects the best programs, deletes the rest.
> 10.  Returns to '3' and repeats until stopped.
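
Coming back to that list now: below is a very rough skeleton of how the
generate-mutate-score-cull loop (steps 2 to 10, with step 7's save-the-score
part left out) might hang together. It is a guess rather than working code -
the snippet library, the file names and the 'runs without error' scoring are
all stand-ins for the real thing:

import os
import random
import string

POPULATION_SIZE = 6
GENERATIONS = 5
MUTATION_RATE = 0.05

# A stand-in 'library' of known-good snippets to scavenge from (step 2).
LIBRARY = ["x = 1\n", "x = x + 1\n", "print(x)\n"]

def make_candidate(name):
    # Steps 2-3: paste a couple of random library snippets into a new file.
    f = open(name, "w")
    for snippet in random.sample(LIBRARY, 2):
        f.write(snippet)
    f.close()

def mutate(name):
    # Step 4: scatter-shot random character changes at a fixed rate.
    text = list(open(name).read())
    for i in range(len(text)):
        if random.random() < MUTATION_RATE:
            text[i] = random.choice(string.ascii_lowercase)
    open(name, "w").write("".join(text))

def run_and_score(name):
    # Steps 5-6: run the file; here the 'goal' is simply exiting without
    # error, which counts as a score of 1.
    status = os.system("python " + name)  # or "python3" on some systems
    return 1 if status == 0 else 0

population = []
next_id = 0
for generation in range(GENERATIONS):             # step 8: repeat
    while len(population) < POPULATION_SIZE:      # top the population back up
        name = "candidate_%d.py" % next_id
        next_id = next_id + 1
        make_candidate(name)
        population.append(name)
    for name in population:
        mutate(name)
    ranked = sorted((run_and_score(n), n) for n in population)  # steps 5-7
    ranked.reverse()                              # best scores first
    keep = POPULATION_SIZE // 2
    for (score, name) in ranked[keep:]:           # step 9: delete the rest
        os.remove(name)
    population = [name for (score, name) in ranked[:keep]]  # step 10: again

Whether any candidate would ever score above zero this way is another
matter, but it shows the create/run/delete plumbing I was worrying about.
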
>
> At the moment, I'm not even sure if Python can create, chop and change
> files, so that the effects of a program being run can persist after it's
> closed down.  The library modules are still largely incomprehensible to
> me, both in their function and their implementation.  I'm a very long way
> from understanding how to apply the randomised aspects of the file
> changes, or much else for that matter, and I can already see a bunch of
> things that would need a lot of tweaking, even if the structure is sound
> (this is the first proper program I've planned, which I'm sure means that
> the structure is almost certainly gibberish).
>
> Other things that would be handy:
>
> A mechanism to make sensible sections of code less vulnerable to mutation,
> and to concentrate changes in 'junk' areas.
>
> Something to decrease or prevent the addition of scavenged code when the
> proportion of scavenged code to random alphanumeric changes and working
> code grows too large, or vice versa (with working code forming the fulcrum
> of the dynamic).
>
> A 'skills' counter/protector that would allow files to retain redundant
> code if their goal was changed, rather than necessarily losing it to the
> pressure of clock cycles and the equivalent of genetic drift.  (For this,
> the 'Bastion' module looks like it might be useful, though I'm not
> terribly sure about how it works, or 'exactly' what it does.)
>
> A random goal changer, to provide an impetus for constant evolution
> without the need for user input.  (There's only so good you can be at any
> one thing.)
>
> An option to give the files an internal copy of the evolution engine,
> which could then be modified by itself as it alters the entire file.
> (This may prove difficult, as although it would allow the files to develop
> more efficient strategies for the process of evolution itself, it would
> make them more vulnerable to fatal mutations.  A possible solution would
> be to have a 'nuclear membrane' - a guard to prevent mutations that are
> particularly likely to be fatal, e.g. mutations that alter significant
> chunks of the engine.  It would also be possible to have multiple engines
> in each cell, which could self-assess and compare scores, replacing
> engines that underperform through damage or 'under-evolution'
> (radiation-resistant bacteria use a similar technique).)
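
On that last point, a toy illustration of what I mean by several engines in
one cell comparing scores - every name and number here is invented for the
example, and the 'fitness' is faked rather than measured from real
offspring:

import random

# Three toy 'engines'; in this illustration an engine is just a mutation
# rate.
engines = {"engine_a": 0.01, "engine_b": 0.10, "engine_c": 0.50}

def offspring_score(rate):
    # Pretend fitness: heavier mutation tends to wreck the offspring.
    return max(0.0, 1.0 - rate * random.uniform(0.5, 1.5))

# Let each engine 'produce' a few offspring and average their scores.
averages = {}
for name, rate in engines.items():
    scores = [offspring_score(rate) for _ in range(10)]
    averages[name] = sum(scores) / len(scores)

best = max(averages, key=averages.get)
worst = min(averages, key=averages.get)

# Self-assessment step: replace the underperforming engine with a copy of
# the best one (the 'multiple engines per cell' idea from above).
engines[worst] = engines[best]
print(averages)
print(engines)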
>
>
> ==================
> All of this seems like a rather tall order, especially as I'm still
> struggling with some of the things from the beginner's guide.  Although I
> should probably buy a book and work my way through lots of examples of
> increasing complexity, I like to tackle things head-on and learn as I go.
> Even just writing this message has helped me to clarify to myself what it
> is I want to do.
>
> If anybody has even bothered to read this far, I'd ask you please to give
> some thought to these ideas (of myself and others), and perhaps to give me
> some pointers in the right direction.  Thank you.
>
>




