The Technium

The Pro-Actionary Principle



The current default algorithm for testing new technologies is the Precautionary Principle. There are several formulations of the Precautionary Principle, but all variations of this heuristic hold this in common: a technology must be shown to do no harm before it is embraced. It must be proven to be safe before it is disseminated. If it cannot be proven safe, it should be prohibited, curtailed, modified, junked, or ignored. In other words, the first response to a new idea should be inaction until its safety is established. When an innovation appears, we should pause. The second step is to test it offline, in a model, or in some other non-critical, safe, lowest-risk manner. Only after it has been deemed okay should we try to live with it.

Unfortunately, the Precautionary Principle doesn’t work as a reliable safeguard. Because of the inherent uncertainties in any model, laboratory, simulation, or test, the only reliable way to assess a new technology is to let it run in place. It has to be exercised sufficiently that it can begin to express secondary effects. When a technology is first cautiously tested soon after its birth, only its primary effects are being examined. But it is the unintended second-order effects of technologies that are usually the root of most problems. Second-order effects often require a certain density, a semi-ubiquity, to reveal themselves. The main concern with the first automobiles was for the occupants: that the gas engines didn’t blow up, or that the brakes didn’t fail. But the real threat of autos was to society en masse: the accumulated exposure to their minute pollutants and their ability to kill others at high speed, not to mention the disruptions of suburbs and long commutes, all second-order effects.

[Image: Motorway]

Second-order effects, the ones that usually overtake society, are rarely captured by forecasts, lab experiments, or white papers. Science fiction guru Arthur C. Clarke observed that in the age of horses many ordinary people eagerly imagined a horseless carriage. The automobile was an obvious anticipation since it was an extension of the first-order dynamics of a carriage: a vehicle that goes forward by itself. An automobile would do everything a horse-pulled carriage did but without the horse. But Clarke went on to notice how difficult it was to imagine the second-order consequences of a horseless carriage, such as drive-in movie theaters, paralyzing traffic jams, and road rage.

A common source of unforecastable effects of technologies is the way they interact with other technologies. In a 2005 report (PDF) analyzing why the former US Office of Technology Assessment did not have more of an impact, the researchers concluded:

While plausible (although always uncertain) forecasts can be generated for very specific and fairly evolved technologies (e.g., the supersonic transport; a nuclear reactor; a particular pharmaceutical product), the radical transforming capacity of technology comes not from individual artifacts but from interacting suites of technologies that permeate society along many dimensions.

The absence of second-order effects in small precise experiments, and our collective impulse to adapt a technology as we use it, make reliable models of advanced technological innovations impossible. An emerging technology must be tested in action, and evaluated in real time. In other words, the risks of a particular technology have to be determined by trial and error in real life. We can think of this vetting-by-action algorithm as the Proactionary Principle. Technologies are tested through action, rather than inaction. In this approach the appropriate response to a new idea is to immediately try it out.

And to keep trying it out, and testing it, as long as it exists. In fact, contrary to the Precautionary Principle, a technology can never be declared “proven safe.” It must be continuously tested with constant vigilance since it is constantly being re-engineered by users and the co-evolutionary environment it inhabits. The automobile today, embedded in its matrix of superhighways, drive-ins, seat belts, GPS, and hypermiling, is a different technology than the Model T of one hundred years ago. And most of those differences are due to secondary inventions rather than the internal combustion engine. In the same way, aspirin today, put into the context of other drugs in the body, changes in our longevity, pill-popping habits, cheapness, etc., is a different technology than either the folk medicines derived from the essence of willow bark, or the first synthesized version brought out by Bayer 100 years ago, even though they are all the same chemical, acetylsalicylic acid. Technologies shift as they thrive. They are remade as they are used. They unleash second- and third-order consequences as they disseminate. And almost always, they exert completely unpredicted effects as they near ubiquity.

Therefore, technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we re-direct them to new jobs when we are not happy with their outcomes.

Of course we should forecast, anticipate and minimize known problems from the start.

All technologies will generate problems. None are problem free. All have social costs. And all technologies will cause disruptions to other technologies around them and may diminish technological benefits elsewhere. The problems of a new technology have to be weighed, balanced, and minimized but they cannot be fully eliminated.

Furthermore, the costs of inaction (the default response called for by the Precautionary Principle) have to be weighed together with the costs of action. Inaction will also generate problems and unintended effects. In a very fast-changing environment the status quo carries substantial hidden penalties that may only become visible over time. These costs of inaction need to be added into the equations of evaluation.
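
As a toy illustration only (the essay itself offers no formula), a minimal sketch of that weighing might look like the following; every probability, cost figure, and name below is hypothetical, invented just to show that inaction gets its own line in the ledger:

```python
# Minimal sketch: weigh the expected costs of adopting a technology against
# the expected costs of inaction. All numbers and scenarios are hypothetical.

def expected_cost(outcomes):
    """Sum of probability-weighted costs for a list of (probability, cost) pairs."""
    return sum(p * cost for p, cost in outcomes)

# Hypothetical outcomes if we adopt the technology now and keep testing it in use.
cost_of_action = expected_cost([
    (0.10, 500),   # a second-order harm emerges and must be remediated
    (0.90, 50),    # routine monitoring and adjustment costs
])

# Hypothetical outcomes if we wait for proof of safety (the precautionary default).
cost_of_inaction = expected_cost([
    (0.60, 400),   # benefits foregone while the technology sits on the shelf
    (0.40, 800),   # the status quo's own hidden penalties surface later
])

print(f"expected cost of action: {cost_of_action:.0f}")
print(f"expected cost of inaction: {cost_of_inaction:.0f}")
```

The point of the sketch is not the numbers but the symmetry: both columns get filled in before a decision is made.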

The original version of the Proactionary Principle was first developed by Max More, the uber-extropian. He wrote a draft of the idea in 2004, and revised it in 2005. As he originally conceived it, the principle is an orientation, almost a philosophy. In the musings below I have simplified More’s elaborate philosophy to the point he may not recognize it. And to make it less confusing I punctuate it as the Pro-Actionary Principle. More’s second version contains a set of ten component principles; I have reduced these to five of my own.

The five Pro-Actions are:

1. Anticipation

All tools of anticipation are valid. The more techniques we use the better, because different techniques fit different technologies. Scenarios, forecasts, and outright science fiction can give partial pictures. Objective scientific measurement of models, simulations, and controlled experiments should carry greater weight, but these too are only partial. The process should try to imagine as many horrors as glories, and if possible to anticipate ubiquity: what happens if everyone has this for free? Anticipation should not be a judgment. Rather, the purpose of anticipation is to prepare a base for the next four steps. It is a way to rehearse future actions.

2. Continuous assessment

We have increasing means to quantifiably test everything we use, all the time. By means of embedded technology we can turn the daily use of technologies into large-scale experiments. No matter how much a new technology is tested at first, it should be constantly retested in real time. We also have more precise means of niche-testing, so we can focus on susceptible neighborhoods, subcultures, gene pools, use patterns, etc. Testing should also be continuous, 24/7, rather than in the traditional batch mode. Further, new technology allows citizen-driven concerns to surface into verifiable science by means of self-organized assessments. Testing is active, not passive. Constant vigilance is baked into the system.

3. Prioritize risks, including natural ones

Risks are real, but endless. Not all risks are equal. They must be weighted and prioritized. Known and proven threats to human and environmental health are given precedence over hypothetical risks.

Furthermore the risks of inaction and the risks of natural systems must be treated symmetrically. In More’s words: “Treat technological risks on the same basis as natural risks; avoid underweighting natural risks and overweighting human-technological risks.”
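
One way to picture that weighting, purely as an illustration and not as part of More’s formulation, is a simple ranking that scores each risk by probability and severity, discounts merely hypothetical risks relative to proven ones, and treats natural and technological risks on the same footing. The risks, scores, and discount factor below are all invented:

```python
# Illustrative sketch: rank risks by probability-weighted severity,
# discounting hypothetical risks relative to proven ones.
# All entries and numbers are hypothetical examples.

HYPOTHETICAL_DISCOUNT = 0.25  # hypothetical risks count for less than proven ones

risks = [
    # (name, probability, severity 0-10, proven?) -- natural and technological alike
    ("accumulated engine pollutants", 0.9, 7, True),
    ("naturally occurring pathogen",  0.6, 9, True),
    ("speculative doomsday scenario", 0.01, 10, False),
]

def priority(risk):
    name, probability, severity, proven = risk
    weight = 1.0 if proven else HYPOTHETICAL_DISCOUNT
    return probability * severity * weight

for name, *_ in sorted(risks, key=priority, reverse=True):
    print(name)
```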

4. Rapid restitution of harm

When things go wrong, and they always will, harm should be compensated quickly, in proportion to actual damages. Penalizing for hypothetical or even potential harm demeans justice and weakens the system, reducing honesty and penalizing those who act in good faith. Mechanisms for actively fixing the harms of current technologies indirectly aid future technologies, because they permit errors to be corrected more quickly. The expectation that any given technology will create harms of some sort (not unlike bugs) that must be remedied should be part of technology creation.

5. Redirection rather than prohibition

Prohibition does not work with technology. Absolute prohibition produces absolute outlaws. In a review of past attempts to ban technology, I discovered that most technologies could only be temporarily displaced. Either they moved to somewhere else on the planet, or they moved into a different niche. The contemporary ban on nuclear weapons has not eliminated them from the planet at all. Bans on genetically modified foods have only displaced those crops to other continents. Bans on handguns may succeed for citizens but not for soldiers or cops. From technology’s point of view, bans only change their address, not their identity. In fact, what we want to do with technologies that produce more harm than good is not to ban them but to find them new jobs. We want to move DDT from an insecticide aerial-sprayed on crops to a household malaria remedy. Society becomes a parent for our technological children, constantly hunting for the right mix of beneficial technological friends that cultivates the best side of each new invention. Oftentimes the first job we assign to a technology is not at all ideal, and we may take many tries, many jobs, before we find a great role for a given technology.

People sometimes ask what possible role humans might play in a world of extremely smart autonomous technology. I think the answer is that we’ll play parents: redirecting active technologies into healthy jobs, finding them good friends, and instilling positive values.

If so, we should be looking for highly evolved tools that assist our pro-actions. On our list should be better tools for anticipation, better tools for ceaseless monitoring and testing, better tools for determining and ranking risks, better tools for remediation of harm done, and better tools and techniques for redirecting technologies as they grow.




Comments
  • Steve Lewis

    New technologies should be accompanied by as much technical foresight as possible. All actions are accompanied by reactions. Some of these secondary reactions become unintended consequences, and can become intensified or minimized (as desired) by feedback mechanisms. Many of these unintended consequences are problems while others are opportunities, depending on the state of related technologies. Here is where sociology and philosophy enter the picture. How can we set up a technology implementation matrix that increases our chances of creating a more desirable set of future problems for society? What problems do we want to be working on in the future? A perfect society is unattainable. Can an excellent society be defined as one that has created problems for itself that are addressable through the technium? Technology foresight is key.

  • Kevin Donovan

    If you are not familiar with it, Perrow’s “Normal Accidents” fits nicely with this.

  • Bill Burris

    A major problem today seems to be that everyone is looking for static permanent solutions. This eventually leads to gigantic crashes, for example the global financial system. We need to start using science & engineering techniques, along with an evolutionary point of view, in virtually all aspects of running society.

  • gmoke

    The standard I’d propose is Zero Emissions.

    Zero emissions as in zero defects on a production line as in Six Sigma or Total Quality Management.

    Zero emissions within the context of Bill McDonough’s ecological design principles:
    waste equals food
    use only available solar income
    respect diversity
    love all the children

    John Todd’s ecosystem design rules from _A Safe, Sustainable World_:

    1. Geological and mineral diversity must be present to evolve the biological responsiveness of rich soils.
    2. Nutrient reservoirs are essential to keep such essentials as nitrogen, phosphorus, and potassium available for the plants.
    3. Steep gradients between subcomponents must be engineered into the system to enable the biological elements to evolve rapidly to assist in the breakdown of toxic materials.
    4. High rates of exchange must be created by maximizing surface areas that house the bacteria that determine the metabolism of the system and facilitate treatment.
    5. Periodic and random pulsed exchanges improve performance. Just as random perturbations foster resilience in nature, in living technologies altering water flow creates self-organization in the system.
    6. Cellular design is the structural model as it is in nature where cells are the organizing unit. Expansion of the system should also use a cellular model, as in increasing the number of tanks.
    7. A law of the minimum must be incorporated. At least three ecosystems such as a marsh, a pond, and a terrestrial area are needed to perform the assigned function and maintain overall stability.
    8. Microbial communities must be introduced periodically from the natural world to maintain diversity and facilitate evolutionary processes.
    9. Photosynthetic foundations are essential as oxygen-producing plants foster ecosystems that require less energy, aeration, and chemical management.
    10. Phylogenetic diversity must be encouraged as a range of aquatic animals from the unicellular to snails to fish are as essential to the evolution and self-maintenance of the system as the plants.
    11. Sequenced and repeated seedings are part of maintenance as a self-contained system cannot be isolated but must be interlinked through gaseous, nutrient, mineral, and biological pathways to the external environment.
    12. Ecological design should reflect the macrocosmos in the microcosmos, representing the natural world miniaturized and reflecting its proportions, as in terrestrial to oceanic and aquatic areas.

    • http://www.kk.org Kevin Kelly

      @gmoke: I don’t get it. How does zero emissions help you decide on the benefits of in vitro fertilization or stem cell cloning?

  • stephanie gerson

    I actually think that the process of a technology’s development can tell you about the second and third order consequences it will have. In order to elaborate, I’m gonna have to get academic.

    Langdon Winner identifies two ways in which artifacts can have politics. The first, involving technical arrangements and social order, concerns how the invention, design, or arrangement of artifacts or the larger system becomes a mechanism for settling the affairs of a community. This way “transcends the simple categories of ‘intended’ and ‘unintended’ altogether,” representing “instances in which the very process of technical development is so thoroughly biased in a particular direction that it regularly produces results heralded as wonderful breakthroughs by some social interests and crushing setbacks by others” (Winner, p. 25-6). This implies that the process of technological development is critical in determining the politics of an artifact; hence the importance of incorporating all stakeholders in it. If Winner’s ‘politics’ can be understood to correlate with your second and third order consequences, the application to your Pro-Actionary Principle becomes clear: perhaps it should be applied before Anticipation, during the actual process of a technology’s development.

    The second way in which artifacts can have politics refers to artifacts that correlate with particular kinds of political relationships, which Winner refers to as inherently political artifacts (Winner, p. 22). He distinguishes between two types of inherently political artifacts: those that require a particular sociological system and those that are strongly compatible with a particular sociological system (Winner, p. 29). A further distinction is made between conditions internal to the workings of a given technical system and those that are external to it (Winner, p. 33). I’ve visualized this second way in which artifacts can have politics as a 2-by-2 matrix, which consists of four ‘types’ of artifacts: those requiring a particular internal sociological system, those compatible with a particular internal sociological system, those requiring a particular external sociological system, and those compatible with a particular external sociological system:

    http://farm3.static.flickr.com/2190/2070050419_071d8859f2.jpg?v=0

    As are all typologies, this one is a simplification-by-boundary-work – in this case, the two boundaries are drawn between requiring and compatible, and between internal and external. It is this boundary-work that makes the typology useful – in our case, for conceptualizing how technologies ‘have politics.’ Situating technologies in this matrix – whether the combustion engine back then, or the automobile now – can similarly help us think about the second and third order consequences a technology does or might have. And of course, technologies can move around the matrix as they evolve.

    Much more to say here, but I doubt anyone is reading this anyway ;)

  • Max More

    Kevin: Your thoughts here are always insightful, so I’m delighted that you’ve applied some of those thoughts to the precautionary principle and the Proactionary Principle. After digesting this entry for a few days, here are my thoughts.

    I strongly agree with your focus on the need to allow technologies to be used if we are to better understand both their desired and undesired effects. To use a term from one of your recent Technium entries, many proponents of the ultra-conservative precautionary principle appear to be “thinkists.” They imagine (like the advocates of total central economic planning) that they can tell a priori the future possibilities and outcomes (and that those are always negative overall). Combined with the absolute value accorded to caution and safety, the prohibitory conclusion is preordained. Ironically, as you note, the absolutist adherence to safety turns out not to be safe at all.

    That leads to what I call (in my book chapter on the subject), the paradox of the precautionary principle: The principle endangers us by trying too hard to safeguard us. It tries “too hard” by being obsessively preoccupied with a single value—safety. By focusing us on safety to an excessive degree, the principle distracts policymakers and the public from other dangers. The more confident we are in the principle, and the more enthusiastically we apply it, the greater the hazard to our health and our standard of living. The principle ends up causing harm by diverting attention, financial resources, public health resources, time, and research effort from more urgent and weighty risks.

    Just before setting out your five “Pro-Actions” (I like that), you say that “As he originally conceived it, the principle is an orientation, almost a philosophy.” Well I am a philosopher. However, I call myself a “strategic philosopher” (since so much of the discipline is pointless) so I wouldn’t want the Proactionary Principle (“ProP” for short) to be seen ONLY as a philosophical orientation. In the longer, book treatment, I aim to show its eminent practicality. I’ll do much more of that in the Field Guide that I set aside until the main book is done.

    Even in the formulation of the ten component principles contained within “the” Proactionary Principle, I’ve tried to point toward actionable ideas. For instance, in my “Use Best Objective Methods,” I’m struck over and over by how badly top decision makers ignore (and are probably ignorant of) the evidence for and against the effectiveness of various decision tools and forecasting methods.

    So, should I make changes to my own formulation in light of your thoughts? Perhaps. My principles of “Embrace Input” and “Revisit and Reflect” certainly require me to consider doing so. Thinking about your Pro-Actions gives me the sense that I should incorporate more of what you have under “Continuous assessment.” This would seem to fit well under my “Revisit and Reflect” but could be underscored.

    More obviously, your “Rapid restitution of harm” is important and not well covered by my ten component principles (although it is implied and easily derived). Perhaps I should add this one but keep the total down to ten by merging “Be Comprehensive” and “Embrace Input.”

    • http://www.kk.org Kevin Kelly

      @Max: I am honored to have your response to my musings on your original idea, which I neglected to say I found inspirational. I’ve had the privilege to read a few chapters from your book-in-progress on this subject, so my reply here will be colored by this new material.

      The difficulty with the expanded method you have outlined (the method you call the Proactionary Principle) is that it seems to require an elite apparatus to put it into effect. By that I mean a well-informed institutional process to deliberate, adjudicate, finesse, and implement a whole bunch of decisions.

      One of the attractions of the precautionary principle is that while it operates at an institutional level as well, it also can operate at the individual and small-group level. Not only can it be set up to guide policy decisions for a nation, individuals can use it, and do use it, to guide their own decisions. A major weakness of your version I wanted to address was this apparent large-scale methodology, this extremely rational and complex process of evaluation and risk balance, etc., that would somehow influence or control grassroots innovation and entrepreneurial activity many levels below (via laws, voluntary restrictions, self-policing?). It is not clear to me who or what this governance body is.

      I was trying to envision a version of the Pro-Actionary Principle that could be transferred into the grassroots at the level of individuals. A set of heuristics they could use to guide their own low-level decisions, rather than a high-level policy process that only ethicists, lawmakers, lawyers and funders shape. Much as the precautionary principle has seeped into the culture as a default, this would also become embedded as a default.

      I am pretty certain that unless a perspective is embraced at the bottom, it is not going to prevail. So for me the question is: while I am working on my own tech startup, what questions should I be asking myself about my own technological inventions? How do I decide? The elaborate process you suggest with your 10 steps seems to me to be beyond what individuals can fulfill. I stumbled when I tried to imagine going through it for the little innovation I am working on. If I can’t apply the Principle, who could?

      I am looking for a more intimate set of heuristics. I believe if we can get those individual guidelines right, the high-level process will fall out from that without much difficulty. I want a general guideline that will work for community activists in the Amazon, for a bunch of 20-year-old nerds in Silicon Valley, and for my Amish friends in Lancaster, PA. Something they can carry in their heads — and then expand to policy levels and institutions when needed.