
Out of Control
Chapter 22: PREDICTION MACHINERY

"Tell me about the future," I plead.

I'm sitting on a sofa in the guru's office. I've trekked to this high mountain outpost at one of the planet's power points, the national research labs at Los Alamos, New Mexico. The office of the guru is decorated in colorful posters of past hi-tech conferences that trace his almost mythical career: from a maverick physics student who formed an underground band of hippie hackers to break the bank at Las Vegas with a wearable computer, to a principal character in a renegade band of scientists who invented the accelerating science of chaos by studying a dripping faucet, to a founding father of the artificial life movement, to current head of a small lab investigating the new science of complexity in an office kitty-corner to the museum of atomic weapons at Los Alamos.

The guru, Doyne Farmer, looks like Ichabod Crane in a bolo tie. Tall, bony, looking thirty-something, Doyne (pronounced Doan) was embarking on his next remarkable adventure. He was starting a company to beat the odds on Wall Street by predicting stock prices with computer simulations.

"I've been thinking about the future, and I have one question," I begin.

"You want to know if IBM is gonna be up or down!" Farmer suggests with a wry smile.

"No. I want to know why the future is so hard to predict."

"Oh, that's simple."

I was asking about predicting because a prediction is a form of control. It is a type of control particularly suited to distributed systems. By anticipating the future, a vivisystem can shift its stance to preadapt to it, and in this way control its destiny. John Holland says, "Anticipation is what complex adaptive systems do."

Farmer has a favorite example for explaining the anatomy of a prediction. "Here, catch this!" he says, tossing you a ball. You grab it. "You know how you caught that?" he asks. "By prediction."

Farmer contends you have a model in your head of how baseballs fly. You could predict the trajectory of a high-fly using Newton's classic equation of f=ma, but your brain doesn't stock up on elementary physics equations. Rather, it builds a model directly from experiential data. A baseball player watches a thousand baseballs come off a bat, and a thousand times lifts his gloved hand, and a thousand times adjusts his guess with his mitt. Without knowing how, his brain gradually compiles a model of where the ball lands -- a model almost as good as f=ma, but not as generalized. It's based entirely on a series of hand-eye data from past catches. In the field of logic such a process is known as induction, in contradistinction to the deduction process that leads to f=ma.
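
The contrast between the two routes to a forecast can be caricatured in a few lines of code -- a minimal sketch in Python, assuming a drag-free toss and standing in for the outfielder with nothing fancier than a lookup over remembered catches (the numbers and helper names are illustrative, not anyone's actual model):

    import math
    import random

    G = 9.8  # gravity, m/s^2

    def deduced_range(speed, angle_deg):
        # Deduction: Newton's laws give the range of a drag-free toss directly.
        a = math.radians(angle_deg)
        return speed ** 2 * math.sin(2 * a) / G

    # Induction: pile up (speed, angle, observed landing) triples from past catches.
    random.seed(0)
    experience = []
    for _ in range(1000):
        speed, angle = random.uniform(10, 40), random.uniform(20, 70)
        observed = deduced_range(speed, angle) + random.gauss(0, 0.5)  # noisy eyes
        experience.append((speed, angle, observed))

    def induced_range(speed, angle):
        # Predict from the single most similar remembered catch -- no physics at all.
        nearest = min(experience,
                      key=lambda e: (e[0] - speed) ** 2 + (e[1] - angle) ** 2)
        return nearest[2]

    print(deduced_range(30, 45))  # the physicist's answer
    print(induced_range(30, 45))  # the outfielder's answer: ad hoc, but close

The induced version knows nothing about forces or masses; it only interpolates over what it has already seen, which is good enough inside the range of its experience and useless outside it.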

In the early days of astronomy before the advent of Newton's f=ma, planetary events were predicted on Ptolemy's model of nested circular orbits -- wheels within wheels. Because the central premise upon which Ptolemy's theory was founded (that all heavenly bodies orbited the Earth) was wrong, his model needed mending every time new astronomical observations delivered more exact data for a planet's motions. But wheels-within-wheels was a model amazingly robust to amendments. Each time better data arrived, another layer of wheels inside wheels inside wheels was added to adjust the model. For all its serious faults, this baroque simulation worked and "learned." Ptolemy's simple-minded scheme served well enough to regulate the calendar and make practical celestial predictions for 1400 years!
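
The wheels-within-wheels scheme can be sketched the same way -- a toy, not Ptolemy's actual parameters: the planet's apparent position is the sum of nested rotating circles, and every time better data arrives you bolt on another circle and re-tune the radii and speeds.

    import cmath

    def epicycle_position(t, wheels):
        # wheels: list of (radius, angular_speed) pairs; the position is simply
        # the sum of circles riding on circles.
        return sum(r * cmath.exp(1j * w * t) for r, w in wheels)

    two_wheels = [(10.0, 1.0), (2.0, 8.0)]        # deferent plus one epicycle
    three_wheels = two_wheels + [(0.4, 30.0)]     # a patch added for better data

    print(epicycle_position(1.0, two_wheels))
    print(epicycle_position(1.0, three_wheels))

Each new wheel is another free parameter, which is how the model could absorb fourteen centuries of corrections without ever being right about why.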

An outfielder's empirically based "theory" of missiles is reminiscent of the latter stages of Ptolemaic epicyclic models. If we parsed an outfielder's "theory" we would find it to be incoherent, ad hoc, convoluted, and approximate. But it would also be evolvable. It's a rat's nest of a theory, but it works and improves. If humans had to wait until each of our minds figured out f=ma (and half of f=ma is worse than nothing), no one would ever catch anything. Even knowing the equation now doesn't help. "You can do the flying baseball problem with f=ma, but you can't do it in the outfield in real time," says Farmer.

"Now catch this!" Farmer says as he releases an inflated balloon. It ricochets around the room in a wild, drunken zoom. No one ever catches it. It's a classic illustration of chaos -- a system with sensitive dependence on initial conditions. Imperceptible changes in the launch can amplify into enormous changes in flight direction. Although the f=ma law still holds sway over the balloon, other forces such as propulsion and airlift push and pull, generate an unpredictable trajectory. In its chaotic dance, the careening balloon mirrors the unpredictable waltz of sunspot cycles, Ice Age's temperatures, epidemics, the flow of water down a tube, and, more to the point, the flux of the stock market.

But is the balloon really unpredictable? If you tried to solve the equations for the balloon's crazy flitter, you'd find its path is nonlinear, and therefore almost unsolvable and unforeseeable. Yet a teenager reared on Nintendo could learn how to catch the balloon. Not infallibly, but better than chance. After a couple dozen tries, the teenage brain begins to mold a theory -- an intuition, an induction -- based on the data. After a thousand balloon takeoffs, his brain has modeled some aspect of the rubber's flight. It cannot predict precisely where the balloon will land, but it detects a direction the missile favors, say, to the rear of the launch, or following a certain pattern of loops. Perhaps over time the balloon-catcher hits 10 percent more often than chance would dictate. For balloon catching, what more do you need? In some games, one doesn't need much information to make a useful prediction. Whether running from lions or investing in stocks, the tiniest edge over raw luck is significant.
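
How little an edge needs to be is easy to put numbers on. A rough sketch with made-up figures: stake one percent of a bankroll on a thousand even-money bets, and compare a coin-flipper with someone who is right just 55 percent of the time.

    import random

    def final_bankroll(win_prob, bets=1000, stake_fraction=0.01, seed=1):
        random.seed(seed)
        bankroll = 1.0
        for _ in range(bets):
            stake = bankroll * stake_fraction
            bankroll += stake if random.random() < win_prob else -stake
        return bankroll

    print(round(final_bankroll(0.50), 2))  # no edge: drifts around where it started
    print(round(final_bankroll(0.55), 2))  # a modest edge compounds into a real gain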

Almost by definition, vivisystems -- lions, stock markets, evolutionary populations, intelligences -- are unpredictable. Their messy, recursive field of causality, of every part being both cause and effect, makes it difficult for any part of the system to make routine linear extrapolations into the future. But the whole system can serve as a distributed apparatus to make approximate guesses about the future.

Farmer was into extracting the dynamics of financial markets so that he could crack the stock market. "The nice thing about markets is that you don't really have to predict very much to do an awful lot," says Farmer.

Plotted on the gray end pages of a newspaper, the graphed journey of the stock market as it rises and falls has just two dimensions: time and price. For as long as there has been a stock market, investors have scrutinized that wavering two-dimensional black line in the hope of discerning some pattern that might predict its course. Even the vaguest, if reliable, hint of direction would lead to a pot of gold. Pricey financial newsletters promoting this or that method for forecasting the chart's future are a perennial fixture of the stock market world. Practitioners are known as chartists.

In the 1970s and 1980s chartists had modest success in predicting currency markets because, one theory says, the strong role of central banks and treasuries in currency markets constrained the variables so that they could be described in relatively simple linear equations. (The solution to a linear equation can be graphed as a straight line.) As more and more chartists exploited the easy linear equations and successfully spotted trends, the market became less profitable. Naturally, forecasters began to look at the wild and woolly places where only chaotic nonlinear equations ruled. In nonlinear systems, the outcome is not proportional to the input. Most complexity in the world -- including all markets -- is nonlinear.
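
The distinction is easy to state concretely -- a trivial sketch, not anything from the markets: in a linear relation the output scales in step with the input; in a nonlinear one it does not.

    def linear(x):
        return 2 * x        # double the input and the output doubles

    def nonlinear(x):
        return x * x        # double the input and the output quadruples

    print(linear(1), linear(2))        # 2 4 -- proportional
    print(nonlinear(1), nonlinear(2))  # 1 4 -- out of proportion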

With the advent of cheap, industrial-strength computers, forecasters have been able to understand certain aspects of nonlinearity. Money, big money, is made by extracting reliable patterns out of the nonlinearity behind the two-dimensional plot of financial prices. Forecasters can extrapolate the graph's future and then bet on the prediction. On Wall Street the computer nerds who decipher these and other esoteric methods are called "rocket scientists." These geeks in suits, working in the basements of trading companies, are the hackers of the '90s. Doyne Farmer, former mathematical physicist, and colleagues from his earlier mathematical adventures -- set up in a small four-room house that serves as an office in adobe-baked Santa Fe, as far from Wall Street as one can get in America -- are currently some of Wall Street's hottest rocket scientists.

In reality, the two-dimensional chart of stocks does not hinge on several factors but on thousands of them. The stock's thousands of vectors are whited-out when plotted as a line, leaving only its price visible. The same goes for charts of sunspot activity and seasonal temperature. You can plot, say, solar activity as a simple thin line over time, but the factors responsible for that level are mind-bogglingly complicated, multiple, intertwined, and recursive. Behind the facade of a two-dimensional line seethes a chaotic mixture of forces driving the line. A true graph of a stock, sunspot, or climate would include an axis for every influence, and would become an unpicturable thousand-armed monster.

Mathematicians struggle with ways to tame these monsters, which they call "high-dimensional" systems. Any living creature, complex robot, ecosystem, or autonomous world is a high-dimensional system. The Library of form is the architecture of a high-dimensional system. A mere 100 variables create a humongous swarm of possibilities. Because each variable impinges upon the 99 others, it is impossible to examine one parameter without examining the whole interacting swarm at once. Even a simple three-variable model of weather, say, touches back upon itself in strange loops, breeding chaos and making any kind of linear prediction unlikely. (The failure to predict weather led to the discovery of chaos theory in the first place.)
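
That three-variable weather model is presumably Edward Lorenz's 1963 convection system; a crude sketch of it (simple Euler steps, standard textbook constants) is enough to watch two nearly identical atmospheres part company:

    def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One crude Euler step of Lorenz's three coupled equations.
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.000001)   # a one-in-a-million nudge to a single variable
    for step in range(1, 3001):
        a = lorenz_step(*a)
        b = lorenz_step(*b)
        if step % 1000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(step, round(gap, 4))
    # The gap between the two forecasts grows exponentially until it is as large
    # as the attractor itself -- the two "weathers" have nothing to do with each other.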
