The Technium

Incorruptible Technologies


I met Daniel Ellsberg for the first time recently. Ellsberg was a US military analyst, turned conscientious objector, who released the top-secret Pentagon Papers in 1971 as an act of civil disobedience to try to stop the Vietnam War. He has since been heavily involved in the decades-long battle to eliminate nuclear weapons. In our short conversation he kept returning to an idea that I found alarming. When talking about what technology wanted, he said his hope for the future was technologies that were incorruptible. We had to make things, he said, that would not turn against us. Ones that could not be abused, unlike, say, nuclear energy. Or genetic engineering. I was speechless for a while. Ellsberg considers himself a realist, but this call for incorruptible technology is a utopian dream. There can be no incorruptible technology, just as there can be no incorruptible free will. Any free will capable of producing a constructive thought will — by necessity — be capable of producing a destructive thought.

I was reminded of a passage I wrote in What Technology Wants. It is part of a standard rant I’ve been giving to audiences of high-tech folks for at least 20 years:

The consequences of a technology expand with its disruptive nature. Powerful technologies will be powerful in both directions, for good and bad. There is no powerfully constructive technology that is not also powerfully destructive in another direction, just as there is no great idea that cannot be greatly perverted for great harm. After all, the most beautiful human mind is still capable of murderous ideas. Indeed, an invention or idea is not really tremendous unless it can be tremendously abused. This should be the first law of technological expectation: The greater the promise of a new technology, the greater its potential for harm as well. That’s also true for beloved new technologies such as the internet search engine, hypertext, and the web. These immensely powerful inventions have unleashed a level of creativity not seen since the Renaissance, but when (not if) they are abused, their ability to track and anticipate individual behavior will be awful.

Attempts to restrict any free-will agent in order to restrain it from abuse are fraught with the dangers of control and authoritarianism. There seem to be two ways to constrain free-will action:

1) Prevent an entity from generating a negative action. That is, try to program the entity so that it cannot produce harm. Or,

2) Guide the entity so that it wants to do good, or is rewarded for doing good, while allowing it the possibility to do harm.


We have laws that penalize those who do harm, but most observers agree that penalizing harm is not as effective as elevating good. Our laws work, to the extent that they do, because most folks will generally do good. If the attraction of doing good breaks down, then soon the constraints of law to stop harm break down. Law cannot operate long without the elevation of the good. The penalization of harm needs the general beneficent environment of good to support it. The penalty of law in the absence of civil, uplifting behavior rapidly collapses into terror and decline.

The same is true of technology. We have two courses open for managing the moral orientation of technology.

1) We can try to devise technology that is incapable of doing harm. Or,

2) We can engineer technology so that it is biased toward good, while allowing it the possibility to do harm.

I believe the former is impossible, because of the first law of technological expectation: powerful inventions can be powerfully abused. But how exactly to do the second is not clear. How do we engineer ethics into technological systems?

It is a difficult question, since we have had trouble doing that in our cultures for the past few thousand years. But I think we have an existence proof that it can be done: our children. We have been somewhat successful in transmitting ethical values to the generations beyond us. That means that we can, in theory, transmit values over generations in our technological systems. The real question is what values we want to convey. What behaviors do we want for technology?

The one thing that won’t be successful is creating technologies that are incapable of being used for harm. Anything that can be weaponized probably will be, sooner or later.

Rather, we have to work on cultivating the conviviality of technology, and on ways to train it, to instill and embed in it a bias toward life and mind.

It’s a worthy quest.



