The Technium

Why I Don’t Worry About a Super AI


[I originally wrote this in response to Jaron Lanier’s worry post on Edge.]


Why I don’t fear super intelligence.

It is wise to think through the implications of new technology. I understand the good intentions of Jaron Lanier and others who have raised an alarm about AI. But I think their method of considering the challenges of AI relies too much on fear, and is not based on the evidence we have so far. I propose a counterview with four parts:

1. AI is not improving exponentially.

2. We’ll reprogram the AIs if we are not satisfied with their performance.

3. Reprogramming themselves, on their own, is the least likely of many scenarios.

4. Rather than hype fear, this is a great opportunity.

I expand each point below.

1. AI is not improving exponentially.

In researching my recent article on the benefits of commercial AI, I was surprised to find out that AI was not following Moore's Law. I specifically asked AI researchers whether the performance of AI was improving exponentially. They could point to exponential growth in the inputs to AI: the number of processors, cycles, learning data sets, and so on were in many cases increasing exponentially. But there was no exponential increase in the output intelligence, in part because there is no metric for intelligence. We have benchmarks for particular kinds of learning and smartness, such as speech recognition, and those are converging on an asymptote of zero error. But we have no ruler to measure the continuum of intelligence. We don't even have an operational definition of intelligence. There is simply no evidence showing a metric of intelligence that is doubling every X.

The fact that AI is improving steadily, but not exponentially, is important because it gives us time (decades) for the following.

2. We’ll reprogram the AIs if we are not satisfied with their performance.

While it is not following Moore's Law, AI is becoming more useful at an accelerating rate. So the utility of AI may be increasing exponentially, if we could measure that. But in the past century the utility of electricity exploded as each new use triggered yet more devices to use it, yet the quality of electricity didn't grow exponentially. As the usefulness of AI increases very fast, it brings fear of disruption. Recently, that fear has been fanned by people familiar with the technology. The main thing they seem to be afraid of is that AI is taking over decisions once made by humans: diagnosing x-rays, driving cars, aiming missiles. These can be life and death decisions. As far as I can tell from what little those who are afraid have documented, their grand fear – the threat of extinction – is that AIs will take over more and more decisions and then decide they don't want humans, or in some way derail civilization.

This is an engineering problem. So far as I can tell, AIs have not yet made a decision that their human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve of. Of course machines will make “mistakes,” even big mistakes – but so do humans. We keep correcting them. There will be tons of scrutiny on the actions of AI, so the world will be watching. However, we don’t have universal consensus on what we find appropriate, so that is where most of the friction about them will come from. As we decide, our AI will decide.

3. Reprogramming themselves, on their own, is the least likely of many scenarios.

The great fear pumped up by some, though, is that as AIs gain our confidence in making decisions, they will somehow prevent us from altering their decisions. The fear is they lock us out. They go rogue. It is very difficult to imagine how this happens. It seems highly improbable that human engineers would program an AI so that it could not be altered in any way. That is possible, but so impractical. That hobble does not even serve a bad actor. The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI. It would also be an incredible achievement for a gang of human engineers to create a system that could not be hacked. Still, it may be possible at some distant time, but it is only one of many possibilities. An AI could just as likely decide on its own to let anyone change it, in open source mode. Or it could decide that it wanted to merge with human willpower. Why not? In the only example we have of an introspective self-aware intelligence (hominids), we have found that evolution seems to have designed our minds to not be easily self-reprogrammable. Except for a few yogis, you can’t go in and change your core mental code easily. There seems to be an evolutionary disadvantage to being able to easily muck with your basic operating system, and it is possible that AIs may need the same self-protection. We don’t know. But the possibility that they, on their own, decide to lock out their partners (and doctors) is just one of many possibilities, and not necessarily the most probable one.

4. Rather than hype fear, this is a great opportunity.

Since AIs (embodied at times in robots) are assuming many of the tasks that humans do, we have much to teach them. For without this teaching and guidance, they would be scary, even with minimal levels of smartness. But motivation based on fear is unproductive. When people act out of fear, they do stupid things. A much better way to cast the need for teaching AIs ethics, morality, equity, common sense, judgment and wisdom is to see this as an opportunity.

AI gives us the opportunity to elevate and sharpen our own ethics and morality and ambition. We smugly believe humans – all humans – have superior behavior to machines, but human ethics are sloppy, slippery, inconsistent, and often suspect. When we drive down the road, we don’t have any better solution to the dilemma of whom to hit (a child or a group of adults) than a robot car does – even though we think we do. If we aim to shoot someone in war, our criteria are inconsistent and vague. The clear ethical programming AIs need to follow will force us to bear down and be much clearer about why we believe what we think we believe. Under what conditions do we want to be relativistic? In what specific contexts do we want the law to be contextual? Human morality is a mess of conundrums that could benefit from scrutiny, less superstition, and more evidence-based thinking. We’ll quickly find that trying to train AIs to be more humanistic will challenge us to be more humanistic. In the way that children can better their parents, the challenge of rearing AIs is an opportunity – not a horror. We should welcome it. I wish those with a loud following would also welcome it.

The myth of AI?

Finally, I am not worried about Jaron’s main peeve about the semantic warp caused by AI, because culturally (rather than technically) we have defined “real” AI as that intelligence which we cannot produce today with machines. So anything we produce with machines today cannot be AI, and therefore AI in its most narrow sense will always be coming tomorrow. Since tomorrow is always about to arrive, no matter what the machines do today, we won’t bestow the blessing of calling it AI. Society calls any smartness by machines machine learning, or machine intelligence, or some other name. In this cultural sense, even when everyone is using it all day every day, AI will remain a myth.



