
Out of Control
Chapter 17: AN OPEN UNIVERSE

Humans seek a simple formula such as Newton's f=ma, Koza suggests, because it reflects our innate faith that at bottom there is elegant order in the universe. More importantly, simplicity is a human convenience. The heartwarming beauty we perceive in f=ma is reinforced by the cold fact that it is a much easier formula to use than Koza's spiral monster. In the days before computers and calculators, a simple equation was more useful because it was easier to compute without errors. Complicated formulas were a grind, and treacherous. But, within a certain range, neither nature nor parallel computers are troubled by convoluted logic. The extra steps we find ugly and stupefying, they perform perfectly, in tedious exactitude.

The great irony puzzling cognitive scientists is why human consciousness is so unable to think in parallel, despite the fact that the brain runs as a parallel machine. We have an almost uncanny blind spot in our intellect. We cannot innately grasp concepts in probability, horizontal causality, and simultaneous logic. We simply don't think like that. Instead our minds retreat to the serial narrative -- the linear story. That's why the first computers were programmed in von Neumann's serial design: because that's how humans think.

And this, again, is why parallel computers must be evolved rather than designed: because we are simpletons when it comes to thinking in parallel. Computers and evolution do parallel; consciousness does serial. In a very provocative essay in the Winter 1992 Daedalus, James Bailey, director of marketing at Thinking Machines, wrote of the wonderful boomeranging influence that parallel computers have on our thinking. Entitled "First We Reshape Our Computers. Then Our Computers Reshape Us," Bailey argues that parallel computers are opening up new territories in our intellectual landscape. New styles of computer logic in turn force new questions and new perspectives from us. "Perhaps," Bailey suggests, "whole new forms of reckoning exist, forms that only make sense in parallel." Thinking like evolution may open up new doors in the universe.

John Koza sees the ability of evolution to work on both ill-defined and parallel problems as another of its inimitable advantages. The problem with teaching computers how to learn to solve problems is that so far we have wound up explicitly reprogramming them for every new problem we come across. How can computers be designed to do what needs to be done, without being told in every instance what to do and how to do it?

Evolution, says Koza, is the answer. Evolution allows a computer's software to solve a problem to which the scope, kind, or range of the answer(s) may not be evident at all, as is usually the case in the real world. Problem: A banana hangs in a tree; what is the routine to get it? Most computer learning to date cannot solve that problem unless we explicitly clue the program in to certain narrow parameters such as: how many ladders are nearby? Any long poles?

Having defined the boundaries of the answer, we are half answering the question. If we don't tell it what rocks are near, we know we won't get the answer "throw a rock at it." Whereas in evolution, we might. More probably, evolution would hand us answers we could never have expected: use stilts; learn to jump high; employ birds to help you; wait until after storms; make children and have them stand on your head. Evolution did not narrowly require that insects fly or swim, only that they somehow move quickly enough to escape predators or catch prey. The open problem of escape led to the narrow answers of water striders tiptoeing on water or grasshoppers springing in leaps.

Every worker dabbling in artificial evolution has been struck by the ease with which evolution produces the improbable. "Evolution doesn't care about what makes sense; it cares about what works," says Tom Ray.

The nature of life is to delight in all possible loopholes. It will break any rule it comes up with. Take these biological jaw-droppers: a female fish that is fertilized by her male mate who lives inside her, organisms that shrink as they grow, plants that never die. Biological life is a curiosity shop whose shelves never empty. Indeed the catalog of natural oddities is almost as long as the list of all creatures; every creature is in some way hacking a living by reinterpreting the rules.

The catalog of human inventions is far less diverse. Most machines are cut to fit a specific task. They, by our old definition, follow our rules. Yet if we imagine an ideal machine, a machine of our dreams, it would adapt, and -- better yet -- evolve.

Adaptation is the act of bending a structure to fit a new hole. Evolution, on the other hand, is a deeper change that reshapes the architecture of the structure itself -- how it can bend -- often producing new holes for others. If we predefine the organizational structure of a machine, we predefine what problems it can solve. The ideal machine is a general problem solver, one that has an open-ended list of things it can do. That means it must have an open-ended structure, too. Koza writes, "The size, shape, and structural complexity [of a solution] should be part of the answer produced by a problem solving technique -- not part of the question." Since a system's structure sets the answers the system can produce, what we ultimately want is a way to generate machines that do not possess a predefined architecture. We want a machine that is constantly remaking itself.

Those interested in kindling artificial intelligence, of course, say "amen." Being able to come up with a solution without being unduly prompted as to where the solution might lie -- lateral thinking, it's called in humans -- is almost the definition of human intelligence.

The only machine we know of that can reshape its internal connections is the living gray tissue we call the brain. The only machine that would generate its own structure that we can presently even imagine manufacturing would be a software program that could reprogram itself. The evolving equations of Sims and Koza are the first step toward a self-reprogramming machine. An equation that can breed other equations is the basic soil for this kind of life. Equations that breed other equations are an open-ended universe. Any possible equation could arise, including self-replicating equations and formulas that loop back in an Ouroboros bite to support themselves. This kind of recursive program, which reaches into itself and rewrites its own rules, unleashes the most magnificent power of all: the creation of perpetual novelty.
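The idea of equations breeding equations can be made concrete in a few lines of code. The sketch below is an illustrative toy in the spirit of Koza's genetic programming, not his actual system: equations are nested tuples, breeding swaps a random subtree of one parent for a random subtree of the other, and a breeder keeps whichever offspring best match a target function.

```python
# A toy of "equations that breed equations" -- the representation and
# operators here are illustrative assumptions, not Koza's real system.
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression tree over the variable x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Compute the equation's value at x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def random_subtree(t):
    """Walk down t a random distance and return whatever subtree is there."""
    while isinstance(t, tuple) and random.random() < 0.5:
        t = random.choice(t[1:])
    return t

def crossover(a, b):
    """Breed two equations: graft a random subtree of b into a random spot in a."""
    if isinstance(a, tuple) and random.random() < 0.7:
        op, left, right = a
        if random.random() < 0.5:
            return (op, crossover(left, b), right)
        return (op, left, crossover(right, b))
    return random_subtree(b)

def fitness(tree, target, xs):
    """Lower is better: squared error against the target function."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs)

def evolve(target, generations=30, pop_size=60):
    """The breeder's loop: rank, keep the fittest, breed replacements."""
    xs = range(-5, 6)
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, target, xs))
        best = pop[:pop_size // 5]          # breeder keeps the fittest fifth
        pop = best + [crossover(random.choice(best), random.choice(best))
                      for _ in range(pop_size - len(best))]
    pop.sort(key=lambda t: fitness(t, target, xs))
    return pop[0]

random.seed(1)
champion = evolve(lambda x: x * x + 1)
```

Nothing in `evolve` fixes the size or shape of the winning equation in advance -- exactly Koza's point that the structure of the solution should be part of the answer, not the question.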

"Perpetual novelty" is John Holland's phrase. He has been crafting means of artificial evolution for years. What he is really working on, he says, is a new mathematics of perpetual novelty. Tools to create neverending newness.

Karl Sims told me, "Evolution is a very practical tool. It's a way of exploring new things you wouldn't have thought about. It's a way of refining things. And it's a way of exploring procedures without having to understand them. If computers are fast enough they can do all these things."

Exploring beyond the reach of our own understanding and refining what we have are gifts that directed, supervised, optimizing evolution can bring us. "But evolution," says Tom Ray, "is not just about optimization. We know that evolution can go beyond optimization and create new things to optimize." When a system can create new things to optimize we have a perpetual novelty tool and open-ended evolution.

Both Sims's selection of images and Koza's selection of software via the breeding of logic are examples of what biologists call breeding or artificial selection. The criterion for "fit" -- for what is selected -- is chosen by the breeder and is thus an artifact, or artificial. To get perpetual novelty -- to find things we don't anticipate -- we must let the system itself define the criteria for what it selects. This is what Darwin meant by "natural selection." The selection criterion was set by the nature of the system; it arose naturally. Open-ended artificial evolution also requires natural selection, or if you will, artificial natural selection. The traits of selection should emerge naturally from the artificial world itself.
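The distinction can be sketched in code. Both selectors below are toy assumptions of mine, not Sims's or Ray's actual mechanics: artificial selection ranks a population by a breeder-supplied fitness function, while the "artificial natural selection" soup never scores fitness at all -- replicators that copy themselves faster simply come to dominate a world whose only rule is finite room.

```python
# Artificial selection vs. a toy "artificial natural selection" --
# a deliberately simplified stand-in, not Tierra's real mechanics.
import random

def artificial_selection(population, fitness, survivors=10):
    """Artificial selection: the breeder ranks candidates by a
    criterion chosen from outside the system."""
    return sorted(population, key=fitness, reverse=True)[:survivors]

def natural_selection(soup, steps=200, capacity=100):
    """No fitness function is ever scored. Each organism replicates
    with a probability set by its own genome; the only rule imposed
    from outside is a fixed carrying capacity."""
    for _ in range(steps):
        offspring = [dict(org) for org in soup
                     if random.random() < org['copy_rate']]
        soup.extend(offspring)
        # finite room: cull at random back down to capacity
        while len(soup) > capacity:
            soup.pop(random.randrange(len(soup)))
    return soup

random.seed(2)
soup = [{'copy_rate': random.uniform(0.01, 0.5)} for _ in range(50)]
before = sum(o['copy_rate'] for o in soup) / len(soup)
survivors = natural_selection(soup)
after = sum(o['copy_rate'] for o in survivors) / len(survivors)
```

Nothing in `natural_selection` names a goal, yet the soup's mean copy rate drifts upward on its own: the selection criterion emerged from the world itself rather than from a breeder.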

Tom Ray has installed the tool of artificial natural selection by letting his world determine its own fitness selection. Therefore his world is theoretically capable of evolving completely new things. But Ray did "cheat" a little to get going. He could not wait for his world to evolve self-replication on its own. So he introduced a self-replicating organism from the beginning, and once introduced, replication never vanished. In Ray's metaphor, he jump-started life as a single-celled organism, and then watched a "Cambrian explosion" of new organisms. But he isn't apologetic. "I'm just trying to get evolution and I don't really care how I get it. If I need to tweak my world's physics and chemistry to the point where they can support rich, open-ended evolution, I'm going to be happy. It doesn't make me feel guilty that I had to manipulate them to get it there. If I can engineer a world to the threshold of the Cambrian explosion and let it boil over the edge on its own, that will be truly impressive. The fact that I had to engineer it to get there will be trivial compared to what comes out of it."

Ray decided that getting artificial open-ended evolution up and running was enough of a challenge that he didn't need to evolve it to that stage. He would engineer his system until it could evolve on its own. As Karl Sims said, evolution is a tool. It can be combined with engineering. Ray used artificial natural selection after months of engineering. But it can go both ways. Other workers will engineer a result after months of evolution.
