The bombastic notion of evolving logic programs has been taken up
in earnest by John Koza, a professor of computer science at Stanford.
Koza was one of John Holland's students who brought knowledge of
Holland's genetic algorithms out of the dark ages of the '60s and '70s
into the renaissance of parallelism of the late '80s.
Rather than merely explore the space of possible equations, as Sims the
artist did, Koza wanted to breed the best equation to solve a particular
problem. One could imagine (as a somewhat silly example) that in the
space of possible pictures there might be one that would induce cows
gazing at it to produce more milk. Koza's method can evolve the
equations that would draw that particular picture. In this farfetched
idea, Koza would keep rewarding the equations which drew a picture that
even minutely increased milk production until there was no further
increase. For his actual experiments, though, Koza chose more practical
tests, such as finding an equation that could steer a moving robot.
But in a sense his searches were similar to those of Sims and the
others. He hunted in the Borgian Library of possible computer
programs -- not on an aimless mission to see what was there, but to find
the best equation for a particular practical problem. Koza wrote in
Genetic Programming, "I claim that the process of solving these problems
can be reformulated as a search for a highly fit individual computer
program in the space of possible computer programs."
For the same reason computer experts said Ray's scheme of computer
evolution couldn't work, Koza's desire to "find" equations by breeding
them bucked convention. Everyone "knew" that logic programs were brittle
and unforgiving of the slightest alteration. In computer science theory,
programs had two pure states: (1) flawlessly working; or (2) modified
and bombed. The third state -- modified at random yet working -- was not in
the cards. Slight modifications were known as bugs, and people paid a
lot of money to keep them out. If progressive modification and
improvement (evolution) of computer equations was at all possible, the
experts thought, it must be so only in a few precious areas or
specialized types of programs.
The surprise of artificial evolution has been that conventional wisdom
was so wrong. Sims, Ray, and Koza have wonderful evidence that logical
programs can evolve by progressive modifications.
Koza's method was based on the intuitive hunch that if two mathematical
equations are somewhat effective in solving a problem, then some parts
of them are valuable. And if the valuable parts from both are recombined
into a new program, the result might be more effective than either
parent. Koza randomly recombined, in thousands of combinations, parts of
two parents, banking on the probabilistic likelihood that one of those
random recombinations would include the optimal arrangement of valuable
parts to better solve the problem.
There are many similarities between Koza's method and Sims's. Koza's
soup, too, was a mixture of about a dozen arithmetical primitives, such
as add, multiply, cosine, rendered in the computer language LISP. The
units were strung together at random to form logical "trees," a
hierarchical organization somewhat like a computer flow chart. Koza's
system created 500 to 10,000 different individual logic trees as the
breeding population. The soup usually converged upon a decent offspring
in about 50 generations.
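For readers curious about the mechanics, the soup can be sketched as follows. This is a minimal illustration in Python, not Koza's code (he worked in LISP), and the particular primitives and depth limit are assumptions; each individual is an expression tree rendered as a nested list:

```python
import math
import random

# Hypothetical sketch of Koza-style expression trees built from a few
# arithmetic primitives, written LISP-style as nested lists.
FUNCTIONS = {"+": 2, "*": 2, "-": 2, "cos": 1}   # primitive -> arity
TERMINALS = ["x", "y"]

def random_tree(depth=3):
    """Grow a random expression tree; leaves are input variables."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    fn = random.choice(list(FUNCTIONS))
    return [fn] + [random_tree(depth - 1) for _ in range(FUNCTIONS[fn])]

def evaluate(tree, env):
    """Recursively evaluate a tree against variable bindings in env."""
    if isinstance(tree, str):
        return env[tree]
    op, *args = tree
    vals = [evaluate(a, env) for a in args]
    if op == "+":
        return vals[0] + vals[1]
    if op == "*":
        return vals[0] * vals[1]
    if op == "-":
        return vals[0] - vals[1]
    return math.cos(vals[0])   # op == "cos"

# Koza's breeding populations ran from 500 to 10,000 such trees.
population = [random_tree() for _ in range(500)]
```

A tree like `["+", "x", ["cos", "y"]]` is the nested-list rendering of the LISP expression `(+ X (COS Y))`.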
Variety was forced by sexually swapping branches from one tree to the
next. Sometimes a long branch was grafted, other times a mere twig or
terminal "leaf." Each branch could be thought of as an intact subroutine
of logic made of smaller branches. In this way, bits of equation (a
branch), or a little routine that worked and was valuable, had a chance
of being preserved or even passed around.
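This branch swap is what the genetic-programming literature calls subtree crossover. A minimal sketch, assuming trees are nested lists like `["+", "x", "y"]` (the function names here are invented for illustration):

```python
import copy
import random

def subtrees(tree, path=()):
    """Yield (path, subtree) for every node; a path is a tuple of child
    indices into the nested lists."""
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace_at(tree, path, new):
    """Return a copy of tree with the node at path replaced by new;
    the original parents are left untouched."""
    if not path:
        return copy.deepcopy(new)
    tree = copy.copy(tree)
    tree[path[0]] = replace_at(tree[path[0]], path[1:], new)
    return tree

def crossover(mom, dad):
    """Cut a random branch from dad -- long limb, twig, or leaf -- and
    graft it onto a random spot in mom."""
    cut, _ = random.choice(list(subtrees(mom)))
    _, graft = random.choice(list(subtrees(dad)))
    return replace_at(mom, cut, graft)
```

Because a branch is an intact subexpression, any little routine that works travels whole from parent to offspring.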
All manner of squirrely problems can be solved by evolving equations. A
typical riddle which Koza subjected to this cure was how to balance a
broom on a skateboard. The skateboard must be moved back and forth by a
motor to keep the inverted broom pivoted upright in the board's center.
The motor-control calculations are horrendous, but not very different
from the control circuits needed for maneuvering robot arms. Koza found
he could evolve a program to achieve this control.
Other problems he tested evolutionary equations against included:
strategies for navigating a maze; rules for solving quadratic equations;
methods to optimize the shortest route connecting many cities (also
known as the traveling salesman problem); strategies for winning a simple
game like tic-tac-toe. In each case, Koza's system sought a formula for
the test problem rather than a specific answer for a specific instance
of the test. The more varied instances a sound formula was tested
against, the better the formula became with each generation.
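The selection criterion behind this can be sketched as a fitness function scored over many instances at once (a hypothetical sketch, not Koza's actual scoring code):

```python
# A candidate formula is rewarded for how many varied test instances it
# handles, never for any single answer.
def fitness(program, cases):
    """`program` maps inputs to an output; `cases` is (inputs, expected) pairs."""
    return sum(1 for inputs, expected in cases if program(*inputs) == expected)

# e.g. hunting for the formula x*y, tested against 25 distinct instances:
cases = [((a, b), a * b) for a in range(5) for b in range(5)]
perfect = fitness(lambda x, y: x * y, cases)   # 25 of 25
```

A formula that merely memorized one instance scores low; only a sound general formula survives an ever wider battery of cases.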
While equation breeding yields solutions that work, they are usually the
ugliest ones you could imagine. When Koza began to inspect the insides
of his highly evolved prizes, he had the same shock that Sims and Ray
did: the solutions were a mess! Evolution went the long way around. Or
it burrowed through the problem by some circuitous loophole of logic.
Evolution was chock-full of redundancy. It was inelegant. Rather than
remove an erroneous section, evolution would just add a
countercorrecting section, or reroute the main event around the bad
sector. The final formula had the appearance of being some miraculous
Rube Goldberg collection of items that by some happy accident worked.
And that's exactly what it was, of course.
Take as an example a problem Koza once threw at his evolution machine.
It was a graph of two intertwining spirals. A rough approximation would
be the dual spirals in a pinwheel. Koza's evolutionary equation machine
had to evolve the best equation capable of determining on which of the
two intertwined spiral lines each of about 200 data points lay.
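This is the classic intertwined-spirals benchmark. A sketch of one standard parametrization (the exact constants Koza used are an assumption here): each arm is an inward-winding spiral, and the second arm is the first rotated a half turn.

```python
import math

def two_spirals(n=97):
    """Generate the intertwined-spirals benchmark: n points per arm.
    Returns (x, y, label) triples; the second arm mirrors the first."""
    points = []
    for i in range(n):
        r = 6.5 * (104 - i) / 104       # radius shrinks toward the center
        a = i * math.pi / 16            # angle advances point by point
        x, y = r * math.cos(a), r * math.sin(a)
        points.append((x, y, +1))
        points.append((-x, -y, -1))     # mirrored point on the other arm
    return points

data = two_spirals()   # 194 labeled points, roughly Koza's ~200
```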
Koza loaded his soup with 10,000 randomly generated computer formulas.
He let them breed, as his machine selected the equations that came
closest to getting the right formula. While Koza slept, the program
trees swapped branches, occasionally birthing a program that worked
better. He ran the machine while he was on vacation. When he returned,
the system had evolved an answer that perfectly categorized the twin
spirals.
This was the future of software programming! Define a problem and the
machine will find a solution while the engineers play golf. But the
solution Koza's machine found tells us a lot about the handiwork of
evolution. Here's the equation it came up with:
(SIN (IFLTE (IFLTE (+ Y Y) (+ X Y) (- X Y) (+ Y Y)) (* X X) (SIN (IFLTE
(% Y Y) (% (SIN (SIN (% Y 0.30400002))) X) (% Y 0.30400002) (IFLTE
(IFLTE (% (SIN (% (% Y (+ X Y)) 0.30400002)) (+ X Y)) (% X 0.10399997)
(- X Y) (* (+ -0.12499994 -0.15999997) (- X Y))) 0.30400002 (SIN (SIN
(IFLTE (% (SIN (% (% Y 0.30400002) 0.30400002)) (+ X Y)) (% (SIN Y) Y)
(SIN (SIN (SIN (% (SIN X) (+ -0.12499994 -0.15999997))))) (% (+ (+ X Y)
(+ Y Y)) 0.30400002)))) (+ (+ X Y) (+ Y Y))))) (SIN (IFLTE (IFLTE Y (+ X
Y) (- X Y) (+ Y Y)) (* X X) (SIN (IFLTE (% Y Y) (% (SIN (SIN (% Y
0.30400002))) X) (% Y 0.30400002) (SIN (SIN (IFLTE (IFLTE (SIN (% (SIN
X) (+ -0.12499994 -0.15999997))) (% X -0.10399997) (- X Y) (+ X Y)) (SIN
(% (SIN X) (+ -0.12499994 -0.15999997))) (SIN (SIN (% (SIN X) (+
-0.12499994 -0.15999997)))) (+ (+ X Y) (+ Y Y))))))) (% Y
Not only is it ugly, it's incomprehensible. Even for a mathematician or
computer programmer, this evolved formula is a tar baby in the briar
patch. Tom Ray says evolution writes code that only an intoxicated human
programmer would write, but it may be more accurate to say evolution
generates code that only an alien would write; it is decidedly inhuman.
Backtracking through the evolving ancestors of the equation, Koza
eventually traced the manner in which the program tackled the problem.
By sheer persistence and by hook and crook it found a laborious
roundabout way to its own answer. But it worked.
The answer evolution discovered seems strange because almost any high
school algebra student could write a very elegant equation in a single
line that described the two spirals.
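For instance, under the spiral parametrization sketched earlier (an assumption, not necessarily the form Koza posed), switching to polar coordinates collapses the whole problem into one line: points on one arm satisfy sin(pi*r + theta) = +1, points on the other give -1.

```python
import math

def classify(x, y):
    # One-line classifier: r = hypot(x, y), theta = atan2(y, x);
    # the sign of sin(pi*r + theta) separates the two arms exactly.
    return 1 if math.sin(math.pi * math.hypot(x, y) + math.atan2(y, x)) > 0 else -1

# Quick check against the assumed benchmark parametrization
# (angle a = i*pi/16, radius r = 6.5*(104 - i)/104, second arm mirrored):
for i in range(97):
    r, a = 6.5 * (104 - i) / 104, i * math.pi / 16
    x, y = r * math.cos(a), r * math.sin(a)
    assert classify(x, y) == +1 and classify(-x, -y) == -1
```

One trigonometric term, against evolution's eleven lines of nested IFLTEs.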
There was no evolutionary pressure in Koza's world toward simple
solutions. His experiment could not have found that distilled equation
because it wasn't structured to do so. Koza tried applying parsimony in
other runs but found that parsimony added to the beginning of a run
dampened the efficiency of the solutions. He'd find simple but mediocre
to poor solutions. He has some evidence that adding parsimony at the end
of the evolutionary procedure -- that is, first let the system find a solution
that kind of works and then start paring it down -- is a better way to
evolve succinct equations.
But Koza passionately believes parsimony is highly overrated. It is, he
says, a mere "human esthetic." Nature isn't particularly parsimonious.
For instance, David Stork, then a scientist at Stanford, analyzed the
neural circuits in the muscles of a crayfish tail. The network triggers
a curious backflip when the crayfish wants to escape. To humans the
circuit looks baroquely complex and could be simplified easily with the
quick removal of a couple of superfluous loops. But the mess works.
Nature does not simplify simply to be elegant.