
Out of Control
Chapter 3: MACHINES WITH AN ATTITUDE

"Fast, cheap, and out of control" began appearing on buttons of engineers at conferences and eventually made it to the title of Rodney Brooks's provocative paper. The new logic offered a completely different view of machines. There is no center of control among the mobots. Their identity was spread over time and space, the way a nation is spread over history and land. Make lots of them; don't treat them so precious.

Rodney Brooks grew up in Australia, where, like a lot of boys around the world, he read science fiction books and built toy robots. He developed a Downunder perspective on things, wanting to turn views on their heads. Brooks followed up on his robot fantasies by hopscotching around the prime robot labs in the U.S., before landing a permanent job as director of mobile robots at MIT.

There, Brooks began an ambitious graduate program to build a robot that would be more insect than dinosaur. "Allen" was the first robot Brooks built. It kept its brains on a nearby desktop, because that's what all robot makers did at the time in order to have a brain worth keeping. The multiple cables leading to the brain box from Allen's video, sonar, and tactile senses were a neverending source of frustration for Brooks and crew. There was so much electronic background interference generated on the cables that Brooks burnt out a long string of undergraduate engineering students attempting to clear the problem. They tried every known communication medium as an alternative, including ham radio, police walkie-talkies, and cellular phones, but none provided a static-free connection for such diverse signals. Eventually the undergraduates and Brooks vowed that on their next project they would incorporate the brains inside a robot -- where no significant wiring would be needed -- no matter how tiny the brains might have to be.

They were thus forced to use very primitive logic steps, and very short and primitive connections in "Tom" and "Jerry," the next two robots they built. But to their amazement they found that the dumb way their onboard neural circuit was organized worked far better than a brain in getting simple things done. When Brooks reexamined the abandoned Allen in light of their modest success with dumb neurons, he recalled that "it turned out that in Allen's brain, there really was not much happening."

The success of this profitable downsizing sent Brooks on a quest to see how dumb he could make a robot and still have it do something useful. He ended up with a type of reflex-based intelligence, and robots as dumb as ants. But they were as interesting as ants, too.

Brooks's ideas gelled in a cockroachlike contraption the size of a football called "Genghis." Brooks had pushed his downsizing to an extreme. Genghis had six legs but no "brain" at all. All of its 12 motors and 21 sensors were distributed in a decomposable network without a centralized controller. Yet the interaction of these 12 muscles and 21 sensors yielded an amazingly complex and lifelike behavior.

Each of Genghis's six tiny legs worked on its own, independent of the others. Each leg had its own ganglion of neural cells -- a tiny microprocessor -- that controlled the leg's actions. Each leg thought for itself! Walking for Genghis then became a group project with at least six small minds at work. Other small semiminds within its body coordinated communication between the legs. Entomologists say this is how ants and real cockroaches cope -- they have neurons in their legs that do the leg's thinking.

In the mobot Genghis, walking emerges out of the collective behavior of the 12 motors. Two motors at each leg lift, or not, depending on what the other legs around them are doing. If they activate in the right sequence -- Okay, hup! One, three, six, two, five, four! -- walking "happens."

No one place in the contraption governs walking. Without a smart central controller, control can trickle up from the bottom. Brooks called it "bottom-up control." Bottom-up walking. Bottom-up smartness. If you snip off one leg of a cockroach, it will shift gaits with the other five without losing a stride. The shift is not learned; it is an immediate self-reorganization. If you disable one leg of Genghis, the other legs organize walking around the five that work. They find a new gait as easily as the cockroach.

In one of his papers, Rod Brooks first laid out his instructions on how to make a creature walk without knowing how:

There is no central controller which directs the body where to put each foot or how high to lift a leg should there be an obstacle ahead. Instead, each leg is granted a few simple behaviors and each independently knows what to do under various circumstances. For instance, two basic behaviors can be thought of as "If I'm a leg and I'm up, put myself down," or "If I'm a leg and I'm forward, put the other five legs back a little." These processes exist independently, run at all times, and fire whenever the sensory preconditions are true. To create walking then, there just needs to be a sequencing of lifting legs (this is the only instance where any central control is evident). As soon as a leg is raised it automatically swings itself forward, and also down. But the act of swinging forward triggers all the other legs to move back a little. Since those legs happen to be touching the ground, the body moves forward.
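To make the trick concrete, here is a rough sketch in Python of the two leg behaviors Brooks describes, with invented names and numbers -- it is not Brooks's code, which ran on tiny onboard processors. Each rule runs on its own and fires whenever its precondition is true; the only centrally sequenced act is the order in which legs are lifted.

```python
class Leg:
    def __init__(self, name):
        self.name = name
        self.up = False        # is the foot lifted?
        self.position = 0.0    # 0.0 = fully back, 1.0 = fully forward

legs = [Leg(n) for n in ["L1", "L2", "L3", "R1", "R2", "R3"]]
body_travel = 0.0              # how far the body has crept forward

def rule_put_down(leg):
    # "If I'm a leg and I'm up, put myself down."
    if leg.up:
        leg.up = False

def rule_push_others_back(leg, all_legs):
    # When a leg has just swung forward, put the other legs back a little.
    # Those legs are touching the ground, so the body slides forward.
    global body_travel
    if leg.up and leg.position >= 1.0:
        for other in all_legs:
            if other is not leg and not other.up:
                other.position = max(0.0, other.position - 0.2)
        body_travel += 0.2

def lift_and_swing(leg):
    # The only centrally sequenced act: raise a leg; it swings itself forward.
    leg.up = True
    leg.position = 1.0

# "Okay, hup! One, three, six, two, five, four!"
for index in [0, 2, 5, 1, 4, 3]:
    lift_and_swing(legs[index])
    for leg in legs:
        rule_push_others_back(leg, legs)
        rule_put_down(leg)

print(f"body moved forward by roughly {body_travel:.1f} units")
```

Because no rule names a particular leg, unplugging one leg from the list leaves the remaining rules stepping just as before -- in the sketch, as in Genghis and the cockroach, the gait reorganizes itself.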

Once the beast can walk on a flat smooth floor without tripping, other behaviors can be added to improve the walk. For Genghis to get up and over a mound of phone books on the floor, it needs a pair of sensing whiskers to send information from the floor to the first set of legs. A signal from a whisker can suppress a motor's action. The rule might be, "If you feel something, I'll stop; if you don't, I'll keep going."

While Genghis learns to climb over an obstacle, the foundational walking routine is never fiddled with. This is a universal biological principle that Brooks helped illuminate -- a law of god: When something works, don't mess with it; build on top of it. In natural systems, improvements are "pasted" over an existing debugged system. The original layer continues to operate without even being (or needing to be) aware that it has another layer above it.
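A toy sketch of that layering, with made-up names rather than Brooks's actual wiring: the walking layer below is never edited; the whisker layer simply sits on its output and overrides it when something is felt.

```python
def walking_layer():
    # The proven, untouched lower layer: keep asking the legs to step.
    return "step_forward"

def whisker_layer(whisker_touching, lower_command):
    # Higher layer: "If you feel something, I'll stop; if you don't, keep going."
    if whisker_touching:
        return "halt"          # suppress the walking layer's command
    return lower_command       # otherwise let it through unchanged

for touching in [False, False, True, False]:
    command = whisker_layer(touching, walking_layer())
    print(f"whisker touching: {touching}  ->  motor command: {command}")
```

The walking layer never learns that a whisker exists; it keeps issuing its command whether anyone listens or not.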

When friends give you directions on how to get to their house, they don't tell you to "avoid hitting other cars" even though you must absolutely follow this instruction. They don't need to communicate the goals of lower operating levels because that work is done smoothly by a well-practiced steering skill. Instead, the directions to their house all pertain to high-level activities like navigating through a town.

Animals learn (in evolutionary time) in a similar manner. As do Brooks's mobots. His machines learn to move through a complicated world by building up a hierarchy of behaviors, somewhat in this order:

Avoid contact with objects

Wander aimlessly

Explore the world

Build an internal map

Notice changes in the environment

Formulate travel plans

Anticipate and modify plans accordingly

The Wander-Aimlessly Department doesn't give a hoot about obstacles, since the Avoidance Department takes such good care of that.
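One way to picture how these departments share the motors, sketched below with invented names: every layer runs all the time, each proposes a command or stays silent, and a fixed precedence decides whose proposal reaches the wheels. Brooks's robots wire the layers together with suppression nodes rather than a single arbiter, but the observable effect is the same -- the wander layer never has to know about furniture.

```python
def avoid(sensors):
    # Lowest layer: veer away from anything too close.
    return "turn_away" if sensors["obstacle_near"] else None

def wander(sensors):
    # Wander aimlessly: always happy to drift forward.
    return "drift_forward"

def explore(sensors):
    # Head toward unexplored territory whenever a frontier is in view.
    return "head_to_frontier" if sensors["frontier_visible"] else None

# The first layer with something to say wins; avoidance gets the last word.
PRECEDENCE = [avoid, explore, wander]

def arbitrate(sensors):
    for layer in PRECEDENCE:
        command = layer(sensors)
        if command is not None:
            return layer.__name__, command
    return "idle", "stop"

print(arbitrate({"obstacle_near": True,  "frontier_visible": True}))
print(arbitrate({"obstacle_near": False, "frontier_visible": True}))
print(arbitrate({"obstacle_near": False, "frontier_visible": False}))
```

Notice that each layer also works as a stand-alone: delete explore from the list and avoid and wander carry on exactly as before.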

The grad students in Brooks's mobot lab built what they cheerfully called "The Collection Machine" -- a mobot scavenger that collected empty soda cans in their lab offices at night. The Wander-Aimlessly Department of the Collection Machine kept the mobot wandering drunkenly through all the rooms; the Avoidance Department kept it from colliding with the furniture while it wandered aimlessly.

The Collection Machine roamed all night long until its video camera spotted the shape of a soda can on a desk. This signal triggered the wheels of the mobot and propelled it to a spot right in front of the can. Rather than wait for a message from a central brain (which the mobot did not have), the arm of the robot "learned" where it was from the environment. The arm was wired so that it would "look" at its wheels. If it said, "Gee, my wheels aren't turning," then it knew, "I must be in front of a soda can." Then the arm reached out to pick up the can. If the can was heavier than an empty one, the arm left it on the desk; if it was light, the arm took it. With a can in hand the scavenger wandered aimlessly (not bumping into furniture or walls because of the Avoidance Department) until it ran across the recycle station. Then it would stop its wheels in front of the station. The dumb arm would "look" at its hand to see if it was holding a can; if it was, it would drop it. If it wasn't, the mobot would begin randomly wandering again through offices until it spotted another can.
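A hypothetical sketch of that nightly loop -- invented names and weights, not the MIT lab's code -- showing how the arm reads its situation off the world instead of off a map: stopped wheels and an empty hand mean "soda can ahead"; stopped wheels and a full hand mean "recycle bin."

```python
import random

random.seed(1)
EMPTY_CAN_WEIGHT = 20       # grams; assumed cutoff for "light enough to be empty"

wheels_turning = True
holding_can = False
cans_collected = 0

for minute in range(200):   # one night of aimless wandering
    sighting = random.choice(["nothing", "nothing", "can", "recycle_bin"])

    # Wheel layer: stop in front of a spotted can (hand empty) or the bin (hand full).
    stop_for_can = sighting == "can" and not holding_can
    stop_for_bin = sighting == "recycle_bin" and holding_can
    wheels_turning = not (stop_for_can or stop_for_bin)

    # Arm layer: no map, no plan -- it reads the situation off the world.
    if not wheels_turning:
        if not holding_can:
            # Stopped with an empty hand: "I must be in front of a soda can."
            can_weight = random.choice([15, 15, 350])   # grams; mostly empties
            if can_weight <= EMPTY_CAN_WEIGHT:
                holding_can = True          # light enough, take it
            # a heavy, unfinished can is left on the desk
        else:
            # Stopped with a can in hand: "I must be at the recycle station."
            holding_can = False
            cans_collected += 1

print(f"cans recycled by morning: {cans_collected}")
```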

That crazy hit-or-miss system based on random chance encounters was one heck of an inefficient way to run a recycling program. But night after night when little else was going on, this very stupid but very reliable system amassed a great collection of aluminum.

The lab could grow the Collection Machine into something more complex by adding new behaviors over the old ones that worked. In this way complexity can be accrued by incremental additions, rather than basic revisions. The lowest levels of activities are not messed with. Once the wander-aimlessly module was debugged and working flawlessly, it was never altered. Even if wander-aimlessly should get in the way of some new higher behavior, the proven rule was suppressed, rather than deleted. Code was never altered, just ignored. How bureaucratic! How biological!

Furthermore, all parts (departments, agencies, rules, behaviors) worked -- and worked flawlessly -- as stand-alones. Avoidance worked whether or not Reach-For-Can was on. Reach-For-Can worked whether or not Avoidance was on. The frog's legs jumped even when removed from the circuits of its head.

The distributed control layout for robots that Brooks devised came to be known as "subsumption architecture" because higher levels of behavior subsume the roles of lower levels whenever they need to take control.
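The mechanism behind the name, sketched here with illustrative signal names and timing rather than Brooks's actual circuit values, is a suppressor spliced into a lower layer's output wire: when the higher layer speaks, its signal replaces the lower one on the wire for a short while; when it falls silent, the lower layer's signal flows through again, untouched and unedited.

```python
class Suppressor:
    def __init__(self, hold_ticks):
        self.hold_ticks = hold_ticks    # how long one suppression lasts
        self.remaining = 0
        self.held_signal = None

    def output(self, lower_signal, higher_signal=None):
        if higher_signal is not None:
            self.remaining = self.hold_ticks
            self.held_signal = higher_signal
        if self.remaining > 0:
            self.remaining -= 1
            return self.held_signal     # the higher layer subsumes the wire
        return lower_signal             # otherwise the lower layer runs as always

wire = Suppressor(hold_ticks=2)
print(wire.output("wander"))                     # -> wander
print(wire.output("wander", "climb_obstacle"))   # -> climb_obstacle (suppressed)
print(wire.output("wander"))                     # -> climb_obstacle (still held)
print(wire.output("wander"))                     # -> wander again
```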

If a nation were a machine, here's how you could build it using subsumption architecture:

You start with towns. You get a town's logistics ironed out: basic stuff like streets, plumbing, lights, and law. Once you have a bunch of towns working reliably, you make a county. You keep the towns going while adding a layer of complexity that will take care of courts, jails, and schools in a whole district of towns. If the county apparatus were to disappear, the towns would still continue. Take a bunch of counties and add the layer of states. States collect taxes and subsume many of the responsibilities of governing from the county. Without states, the towns would continue, although perhaps not as effectively or as complexly. Once you have a bunch of states, you can add a federal government. The federal layer subsumes some of the activities of the states, by setting their limits, and organizing work above the state level. If the feds went away, the thousands of local towns would still continue to do their local jobs -- streets, plumbing, and lights. But the work of towns subsumed by states and finally subsumed by a nation is made more powerful. That is, towns organized by this subsumption architecture can build, educate, rule, and prosper far more than they could individually. The federal structure of the U.S. government is therefore a subsumption architecture.
