The Technium

Dealing With Rogue Technologies


I don’t worry about most new inventions, but there are four technologies I think are worth worrying about. These are the emerging technologies of geno-, robo-, info-, and nano-stuff. Geno includes gene warfare, gene therapies, drastically modified organisms, and genetic engineering of the human line. Robo is of course robots. Info concerns digital intrusions, artificial minds, cyberware, and virtual personas built from data accumulation. Nano means super tiny machines as small as bacteria that do all kinds of things, sort of like dry life. The initials of these four worries combine into the ironic acronym GRIN.


The common element among the techniques of GRIN – and the reason they are worrisome – is that they are all self-reproducing. If we code changes into a gene line, those changes can replicate down generations forever. And not just in family lines. Genes can easily migrate horizontally between species. So copies of new genes – bad or good — might disseminate through both time and space. As we know from the digital era, once copies are released they are hard to take back. Robots, too, are potentially self-reproducing. If we can engineer an artificial mind as smart or smarter than us, then logically it can create another generation of mind smart or smarter than itself (and us). In that case what control do we have over such creations? What if they start out wrong? Information shares this same avalanching property of replicating out of our control. As we inhabit cyberspace more and more, we leave constant digital versions of our lives which can be forwarded, analyzed, combined, sold and copied ad infinitum. These virtual personas are hard to control and impossible to retrieve. Finally, nanotechnology promises marvelous super micro thingies which are constructed with the precision of single atoms, permitting very tiny artificial organisms that are capable in some cases of reproducing. The threat of these nano-organisms breeding out of control is known as the “gray goo” scenario.

The threat of self-duplicating technology is new. We’ve long had ad hoc societal mechanisms for vetting new inventions. You see how your neighbors use the newfangled thing. Maybe you try it yourself. If you don’t like the results, you stop using it. If enough users give it up, the technology goes obsolete. But this may not work with GRIN. GRIN technologies may derail the normal try-out period in one of two ways. First, to really “try” a GRIN invention you may need to develop it to the point where it can reproduce, in order to enjoy its full benefits; if you change your mind at that point, the cascade may be too late to undo. Second, and more insidious, the negative consequences of the technology may only emerge after many generations. In fact, this is certain to happen. The delayed problems may arise because they were invisible at first, or (more likely) because they were generated later – by mutation, drift, or a co-evolving environment. In these cases no amount of inspection or appraisal at the beginning can eliminate the problem.

I’m not the only one worried about GRIN. Bill Joy, boy genius and co-creator of Java and Jini technologies, wrote an infamous Wired article that brought the dangers of GRIN technologies to public attention. Joy’s recommendation was that we learn to relinquish specific GRIN-ologies via education and global legislation. Other critics of GRIN-ologies, and particularly of genetic engineering, have voiced ways to handle these emerging technologies. One of them is Richard Hayes, a political activist at the Center for Genetics and Society, who writes:

How would we ensure that these increasingly powerful [GRIN] technologies are used for benign and beneficent purposes rather than pernicious ones?

We’d need laws and regulations, we’d need them at both national and international levels, and we’d need enforcement provisions. To ensure buy-in by mass publics we’d need democratic deliberation, involving education, open debate, congressional and international commissions, and all the rest. To allow the time necessary for all this to happen properly, we’d likely need at least moratoria on particularly risky technologies.

There is one more thing we’d need, and it is vital. This missing ingredient is also a telling omission from Hayes’s list. In order to write laws and regulations, enforce provisions, and educate the public about an emerging technology, we need to know something concrete about it. Vague generalizations can’t be legislated. But right now, for instance, robotic artificial intelligence and self-replicating nanotechnology are mostly dreams. Almost anything we say about them now is likely to be wrong. We can say definite things about some genetic innovations because they exist. But – and I think this is key – because they exist, they are hard to eliminate. Yet they can’t really be relinquished until they exist. The nuclear bomb was not regulated until it exploded.

The paradox of technological regulation is that a specific species of invention can’t be regulated until it is operational, and once operational it is hard to regulate. There are a thousand reasons why we would like to be able to abort a possible invention before birthing it, but I think we will find this impossible in most cases. The reason dwells in the nature of very complex technological systems – which the GRIN-ologies are. In fact they are the vanguard of complexity. The more complex a system is, the less its behavior can be deduced from the behavior of its parts, and the more its behavior will only emerge when the entire whole is running. Natural ecologies and organic bodies are like this. Complex software programs exhibit the same kind of irreducibility. The only way to check and verify them is to run them; mathematically, there are no shortcuts but to turn them on. Likewise, there are no shortcuts to finding out what the consequences of the GRIN-ologies will be but to construct them first.
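To make that irreducibility concrete, here is a small illustration of my own (a Python sketch, not anything drawn from Joy or Hayes). The loop below implements the famous Collatz rule, and whether it halts for every starting number remains an open mathematical question. Even for a program this tiny, no inspection of the source settles its general behavior – the only sure way to learn what it does for a given input is to run it:

    def collatz_steps(n: int) -> int:
        """Iterate the Collatz rule until n reaches 1, counting the steps.

        No one has proved this loop terminates for every positive integer,
        so no static inspection of these few lines can predict its behavior
        in general -- you have to run it and watch.
        """
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    if __name__ == "__main__":
        for start in (6, 27, 97):
            print(start, "->", collatz_steps(start), "steps")

If five lines of arithmetic can resist prediction like this, a technology as complex as a self-reproducing GRIN system will resist it far more.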

This is the opposite of a moratorium. It’s more like a tryatorium. The result would be a conversation, a deliberate engagement with the emerging technology. It is released with our arms around it. We bend it this way and that. Let’s put more money here, let’s test this faster, let’s try to find a better home for these. A better metaphor would be: the technology is trained. As in the best animal and child training, positive aspects are reinforced with resources, and negative aspects are ignored until they diminish. Troublesome and high-risk technologies should be treated like rogue states. What you don’t want to do is banish and isolate them. You want to work with the bully and the problem child. High-risk technologies need more chances for us to discover their true strengths. They need more of our investment, and more opportunities to be tried. Prohibiting them only drives them underground, where their worst traits are emphasized.


GRIN-ologies are bully, rogue technologies. They will need our utmost attention in order to train them for long-term goodness. We need to invent appropriate long-term training technologies to guide them across the generations. There are already a few experiments to embed guiding heuristics in expert systems as a means to make “moral” artificial intelligences, and other experiments to embed long-range control systems in genetic and nano-systems. We have an existence proof that such embedded principles work – in ourselves. If we can train our children – who are the ultimate power-hungry, autonomous, generational rogue beings – to be better than ourselves, then we can train our GRINs.

As with raising our children, the real question – and the disagreement – lies in what values we want to transmit over the generations. This is worth discussing, and I suspect that, as in real life, we won’t all agree on the answers.



