The current default algorithm for testing new technologies is the Precautionary Principle. There are several formulations of the Precautionary Principle, but all variations of this heuristic hold this in common: a technology must be shown to do no harm before it is embraced. It must be proven safe before it is disseminated. If it cannot be proven safe, it should be prohibited, curtailed, modified, junked, or ignored. In other words, the first response to a new idea should be inaction until its safety is established. When an innovation appears, we should pause. The second step is to test it offline, in a model, or in any non-critical, safe, lowest-risk manner. Only after it has been deemed okay should we try to live with it.
Unfortunately the Precautionary Principle doesn’t work as a reliable safeguard. Because of the inherent uncertainties in any model, laboratory, simulation, or test, the only reliable way to assess a new technology is to let it run in place. It has to be exercised sufficiently that it can begin to express secondary effects. When a technology is first cautiously tested soon after its birth, only its primary effects are being examined. But it is the unintended second-order effects of technologies that are usually the root of most problems. Second-order effects often require a certain density, a semi-ubiquity, to reveal themselves. The main concern with the first automobiles was for the occupants — that the gas engines didn’t blow up, or that the brakes didn’t fail. But the real threat of autos was to society en masse — the accumulated exposure to their minute pollutants and their ability to kill others at high speeds, not to mention the disruptions of suburbs and long commutes – all second-order effects.
Second-order effects – the ones that usually overtake society – are rarely captured by forecasts, lab experiments, or white papers. Science fiction guru Arthur C. Clarke observed that in the age of horses many ordinary people eagerly imagined a horseless carriage. The automobile was an obvious anticipation since it was an extension of the first-order dynamics of a carriage – a vehicle that goes forward by itself. An automobile would do everything a horse-pulled carriage did but without the horse. But Clarke went on to notice how difficult it was to imagine the second-order consequences of a horseless carriage, such as drive-in movie theaters, paralyzing traffic jams, and road rage.
A common source of unforecastable effects of technologies stems from the way they interact with other technologies. In a 2005 report (PDF) analyzing why the former US Office of Technology Assessment did not have more of an impact, the researchers concluded:
While plausible (although always uncertain) forecasts can be generated for very specific and fairly evolved technologies (e.g., the supersonic transport; a nuclear reactor; a particular pharmaceutical product), the radical transforming capacity of technology comes not from individual artifacts but from interacting suites of technologies that permeate society along many dimensions.
The absence of second-order effects in small precise experiments, and our collective impulse to adapt technology as we use it, make reliable models of advanced technological innovations impossible. An emerging technology must be tested in action, and evaluated in real time. In other words, the risks of a particular technology have to be determined by trial and error in real life. We can think of this vetting-by-action algorithm as the Proactionary Principle. Technologies are tested through action, rather than inaction. In this approach the appropriate response to a new idea is to immediately try it out.
And to keep trying it out, and testing it, as long as it exists. In fact, contrary to the Precautionary Principle, a technology can never be declared “proven safe.” It must be continuously tested with constant vigilance since it is constantly being re-engineered by users and the co-evolutionary environment it inhabits. The automobile today, embedded in its matrix of superhighways, drive-ins, seat belts, GPS, and hypermiling, is a different technology than the Model T of one hundred years ago. And most of those differences are due to secondary inventions rather than the internal combustion engine. In the same way, aspirin today, put into the context of other drugs in the body, changes in our longevity, pill-popping habits, cheapness, etc., is a different technology than either the folk medicines derived from the essence of willow bark or the first synthesized version brought out by Bayer 100 years ago, even though they are all the same chemical, acetylsalicylic acid. Technologies shift as they thrive. They are remade as they are used. They unleash second- and third-order consequences as they disseminate. And almost always, they exert completely unpredicted effects as they near ubiquity.
Therefore, technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we redirect them to new jobs when we are not happy with their outcomes.
Of course we should forecast, anticipate and minimize known problems from the start.
All technologies will generate problems. None are problem-free. All have social costs. And all technologies will cause disruptions to other technologies around them and may diminish technological benefits elsewhere. The problems of a new technology have to be weighed, balanced, and minimized, but they cannot be fully eliminated.
Furthermore, the costs of inaction (the default response called for by the Precautionary Principle) have to be weighed together with the costs of action. Inaction will also generate problems and unintended effects. In a very fast-changing environment the status quo carries hidden, substantial penalties that may only become visible over time. These costs of inaction need to be added into the equations of evaluation.
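To make that bookkeeping concrete, here is one toy way to write the comparison down in Python. Every probability, harm, and penalty rate below is an invented placeholder, not data about any real technology; the point is the shape of the ledger, not the numbers.

```python
# Toy comparison of acting vs. waiting on a new technology.
# All numbers are invented for illustration.

def expected_cost(p_harm: float, harm: float, benefit: float) -> float:
    """Expected net cost of a choice: likely harm minus likely benefit."""
    return p_harm * harm - benefit

# Deploying: some chance of real harm, but an immediate benefit.
cost_of_action = expected_cost(p_harm=0.10, harm=50.0, benefit=20.0)

# Waiting: the status quo carries its own risks, and its hidden
# penalty grows the longer a fast-changing environment is ignored.
def cost_of_inaction(p_harm: float, harm: float, years_delayed: int,
                     penalty_per_year: float) -> float:
    return p_harm * harm + years_delayed * penalty_per_year

cost_of_waiting = cost_of_inaction(p_harm=0.05, harm=50.0,
                                   years_delayed=5, penalty_per_year=4.0)

print(f"expected cost of acting:  {cost_of_action:.1f}")
print(f"expected cost of waiting: {cost_of_waiting:.1f}")
# The symmetry is the lesson: inaction gets a line in the ledger
# too, instead of defaulting to zero.
```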
The original version of the Proactionary Principle was developed by Max More, the uber-extropian. He wrote a draft of the idea in 2004, and revised it in 2005. As he originally conceived it, the principle is an orientation, almost a philosophy. In the musings below I have simplified More’s elaborate philosophy to the point where he may not recognize it. And to make it less confusing I punctuate it as the Pro-Actionary Principle. More’s second version contains a set of ten component principles; I have reduced these to five of my own.
The five Pro-Actions are:
1. Anticipation
All tools of anticipation are valid. The more techniques we use the better, because different techniques fit different technologies. Scenarios, forecasts, and outright science fiction can give partial pictures. Objective scientific measurement of models, simulations, and controlled experiments should carry greater weight, but these too are only partial. The process should try to imagine as many horrors as glories, and if possible to anticipate ubiquity: what happens if everyone has this for free? Anticipation should not be a judgment. Rather, the purpose of anticipation is to prepare a base for the next four steps. It is a way to rehearse future actions, as in the crude sketch below.
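One crude way to rehearse futures is brute simulation. The sketch below, with distributions invented purely for illustration, samples thousands of imagined outcomes and deliberately lets second-order harms switch on only near ubiquity; it is a rehearsal, not a forecast.

```python
import random

# A crude Monte Carlo "scenario generator": sample many plausible
# futures for a technology, deliberately including horrors as well
# as glories, then look at the spread rather than a single forecast.

def sample_outcome(adoption: float) -> float:
    """Net benefit of one imagined future; negative values are harms.
    Second-order effects kick in only above a density threshold."""
    first_order = random.gauss(10.0, 5.0) * adoption
    second_order = random.gauss(-8.0, 12.0) * adoption if adoption > 0.5 else 0.0
    return first_order + second_order

random.seed(42)
# "What happens if everyone has this for free?" -- push adoption to 1.0
for adoption in (0.1, 0.5, 1.0):
    outcomes = [sample_outcome(adoption) for _ in range(10_000)]
    mean = sum(outcomes) / len(outcomes)
    print(f"adoption {adoption:.0%}: mean {mean:6.1f}, "
          f"worst {min(outcomes):6.1f}, best {max(outcomes):6.1f}")
```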
2. Continuous assessment
We have increasing means to quantifiably test everything we use, all the time. By means of embedded technology we can turn daily use of technologies into large-scale experiments. No matter how much a new technology is tested at first, it should be constantly retested in real time. We also have more precise means of niche-testing, so we can focus on susceptible neighborhoods, subcultures, gene pools, use patterns, etc. Testing should also be continuous, 24/7, rather than in the traditional batch mode. Further, new technology allows citizen-driven concerns to surface into verifiable science by means of self-organized assessments. Testing is active, not passive. Constant vigilance is baked into the system.
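Here is a minimal sketch of what vigilance baked into the system might look like: a rolling window over live usage reports that triggers a retest when the incident rate drifts past the baseline agreed at launch. The class name, window size, and thresholds are all invented for illustration.

```python
from collections import deque

# Sketch of continuous, real-time assessment: watch a stream of
# usage reports and raise a flag when the recent incident rate
# drifts above what early testing led us to expect.

class ContinuousAssessor:
    def __init__(self, baseline_rate: float, window: int = 1000,
                 tolerance: float = 2.0):
        self.baseline = baseline_rate       # rate deemed acceptable at launch
        self.recent = deque(maxlen=window)  # rolling window of observations
        self.tolerance = tolerance          # how much drift we tolerate

    def observe(self, incident: bool) -> bool:
        """Record one usage report; return True if a retest is due."""
        self.recent.append(1 if incident else 0)
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return rate > self.baseline * self.tolerance

# Usage: feed it live reports, niche by niche (per neighborhood,
# subculture, use pattern), instead of one batch test at launch.
assessor = ContinuousAssessor(baseline_rate=0.01)
for report in [False] * 990 + [True] * 30:
    if assessor.observe(report):
        print("retest: incident rate has drifted above baseline")
        break
```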
3. Prioritize risks, including natural ones
Risks are real, but endless. Not all risks are equal. They must be weighted and prioritized. Known and proven threats to human and environmental health are given precedence over hypothetical risks.
Furthermore, the risks of inaction and the risks of natural systems must be treated symmetrically. In More’s words: “Treat technological risks on the same basis as natural risks; avoid underweighting natural risks and overweighting human-technological risks.” One hedged sketch of such a weighting follows.
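Here is one way such a weighting might be encoded: a risk ledger ranked by expected harm, scoring natural and technological risks on the same scale. The discount applied to hypothetical risks, and every number in the ledger, are arbitrary illustrations rather than a calibrated method.

```python
from dataclasses import dataclass

# A risk ledger that ranks by expected harm and, following More's
# symmetry rule, scores natural and technological risks alike.
# Every entry and weight here is an invented placeholder.

@dataclass
class Risk:
    name: str
    probability: float   # how likely the harm is
    severity: float      # harm to human/environmental health if it occurs
    proven: bool         # known-and-proven vs. merely hypothetical

    def priority(self) -> float:
        expected = self.probability * self.severity
        # Known, proven threats take precedence over hypothetical ones;
        # the 0.5 discount is arbitrary, for illustration only.
        return expected if self.proven else expected * 0.5

risks = [
    Risk("pathogen in water supply (natural)", 0.30, 90, proven=True),
    Risk("device battery fires (technological)", 0.05, 40, proven=True),
    Risk("speculative long-term effect", 0.20, 70, proven=False),
]

for r in sorted(risks, key=Risk.priority, reverse=True):
    print(f"{r.priority():6.1f}  {r.name}")
```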
4. Rapid restitution of harm
When things go wrong – and they always will – harm should be compensated quickly and in proportion to actual damages. Penalizing for hypothetical or even potential harm demeans justice and weakens the system, reducing honesty and penalizing those who act in good faith. Mechanisms for actively fixing the harms of current technologies indirectly aid future technologies, because they permit errors to be corrected more quickly. The expectation that any given technology will create harms of some sort (not unlike bugs) that must be remedied should be part of technology creation.
5. Redirection rather than prohibition
Prohibition does not work with technology. Absolute prohibition produces absolute outlaws. In a review of past attempts to ban technology, I discovered that most technologies could only be temporarily displaced. Either they moved to somewhere else on the planet, or they moved into a different niche. The contemporary ban on nuclear weapons has not eliminated them from the planet at all. Bans on genetically modified foods have only displaced these crops to other continents. Bans on handguns may succeed for citizens but not for soldiers or cops. From technology’s point of view, bans only change its address, not its identity. In fact, what we want to do with technologies that produce more harm than good is not to ban them but to find them new jobs. We want to move DDT from an insecticide aerial-sprayed on crops to a household malaria remedy. Society becomes a parent for our technological children, constantly hunting for the right mix of beneficial technological friends that cultivates the best side of each new invention. Oftentimes the first job we assign to a technology is not at all ideal, and it may take many tries, many jobs, before we find a great role for it.
People sometimes ask what possible role humans might play in a world of extremely smart, autonomous technology. I think the answer is that we’ll play parents: redirecting active technologies into healthy jobs, finding them good friends, and instilling positive values.
If so, we should be looking for highly evolved tools that assist our pro-actions. On our list should be better tools for anticipation, better tools for ceaseless monitoring and testing, better tools for determining and ranking risks, better tools for remediation of harm done, and better tools and techniques for redirecting technologies as they grow.