The Technium

Why I Don’t Worry About a Super AI


[I originally wrote this in response to Jaron Lanier’s worry post on Edge.]


Why I don’t fear super intelligence.

It is wise to think through the implications of new technology. I understand the good intentions of Jaron Lanier and others who have raised an alarm about AI. But I think their method of considering the challenges of AI relies too much on fear, and is not based on the evidence we have so far. I propose a counterview with four parts:

1. AI is not improving exponentially.

2. We’ll reprogram the AIs if we are not satisfied with their performance.

3. Reprogramming themselves, on their own, is the least likely of many scenarios.

4. Rather than hype fear, this is a great opportunity.

I expand each point below.

1. AI is not improving exponentially.

In researching my recent article on the benefits of commercial AI, I was surprised to find out AI was not following Moore’s Law. I specifically asked AI researchers whether the performance of AI was improving exponentially. They could point to an exponential growth in the inputs to AI: the number of processors, cycles, learning data sets, and so on were in many cases increasing exponentially. But there was no exponential increase in the output intelligence, in part because there is no metric for intelligence. We have benchmarks for particular kinds of learning and smartness, such as speech recognition, and those are converging on an asymptote of zero error. But we have no ruler to measure the continuum of intelligence. We don’t even have an operational definition of intelligence. There is simply no evidence showing a metric of intelligence that is doubling every X.

The fact that AI is improving steadily, but not exponentially, is important because it gives us time (decades) for the following.

2. We’ll reprogram the AIs if we are not satisfied with their performance.

While it is not following Moore’s Law, AI is becoming more useful at an accelerating pace. So the utility of AI may be increasing exponentially, if we could measure that. In the past century the utility of electricity exploded as each new use triggered yet more devices that used it, yet the quality of electricity itself didn’t grow exponentially. As the usefulness of AI increases very fast, it brings fear of disruption. Recently, that fear has been fanned by people familiar with the technology. The main thing they seem to be afraid of is that AI is taking over decisions once made by humans: diagnosing x-rays, driving cars, aiming missiles. These can be life-and-death decisions. As far as I can tell from the little documented by those who are afraid, their grand fear – the threat of extinction – is that AIs will take over more and more decisions and then decide they don’t want humans, or in some other way derail civilization.

This is an engineering problem. So far as I can tell, AIs have not yet made a decision that their human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve of. Of course machines will make “mistakes,” even big mistakes – but so do humans. We keep correcting them. There will be tons of scrutiny on the actions of AI, so the world is watching. However, we don’t have universal consensus on what we find appropriate, so that is where most of the friction about them will come from. As we decide, our AIs will decide.

3. Reprogramming themselves, on their own, is the least likely of many scenarios.

The great fear pumped up by some, though, is that as AIs gain our confidence in making decisions, they will somehow prevent us from altering their decisions. The fear is they lock us out. They go rogue. It is very difficult to imagine how this happens. It seems highly improbable that human engineers would program an AI so that it could not be altered in any way. That is possible, but so impractical. That hobble does not even serve a bad actor. The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI. It would also be an incredible achievement for a gang of human engineers to create a system that could not be hacked. Still, it may be possible at some distant time, but it is only one of many possibilities. An AI could just as likely decide on its own to let anyone change it, in open-source mode. Or it could decide that it wanted to merge with human willpower. Why not? In the only example we have of an introspective, self-aware intelligence (hominids), we have found that evolution seems to have designed our minds to not be easily self-reprogrammable. Except for a few yogis, you can’t go in and change your core mental code easily. There seems to be an evolutionary disadvantage to being able to easily muck with your basic operating system, and it is possible that AIs may need the same self-protection. We don’t know. But the possibility that they, on their own, decide to lock out their partners (and doctors) is just one of many possibilities, and not necessarily the most probable one.

4. Rather than hype fear, this is a great opportunity.

Since AIs (embodied at times in robots) are assuming many of the tasks that humans do, we have much to teach them. For without this teaching and guidance, they would be scary, even with minimal levels of smartness. But motivation based on fear is unproductive. When people act out of fear, they do stupid things. A much better way to cast the need for teaching AIs ethics, morality, equity, common sense, judgment and wisdom is to see this as an opportunity.

AI gives us the opportunity to elevate and sharpen our own ethics, morality, and ambition. We smugly believe humans – all humans – have superior behavior to machines, but human ethics are sloppy, slippery, inconsistent, and often suspect. When we drive down the road, we don’t have any better solution to the dilemma of who to hit (a child or a group of adults) than a robo-car does – even though we think we do. If we aim to shoot someone in war, our criteria are inconsistent and vague. The clear ethical programming AIs need to follow will force us to bear down and be much clearer about why we believe what we think we believe. Under what conditions do we want to be relativistic? In what specific contexts do we want the law to be contextual? Human morality is a mess of conundrums that could benefit from scrutiny, less superstition, and more evidence-based thinking. We’ll quickly find that trying to train AIs to be more humanistic will challenge us to be more humanistic. In the way that children can better their parents, the challenge of rearing AIs is an opportunity – not a horror. We should welcome it. I wish those with a loud following would also welcome it.

The myth of AI?

Finally, I am not worried about Jaron’s main peeve about the semantic warp caused by AI, because culturally (rather than technically) we have defined “real” AI as that intelligence which we cannot produce today with machines, so anything we produce with machines today cannot be AI, and therefore AI in its most narrow sense will always be coming tomorrow. Since tomorrow is always about to arrive, no matter what the machines do today, we won’t bestow the blessing of calling it AI. Society calls any smartness by machines machine learning, or machine intelligence, or some other name. In this cultural sense, even when everyone is using it all day every day, AI will remain a myth.




Comments
  • Kent Schnake

    It strikes me as ironic that we humans possess enough nuclear weapons (and other things besides) capable of extinguishing the human race, and yet some of us worry about a small chance that machines will get smart enough and mean enough to wipe us out. We are already plenty smart enough and mean enough to wipe ourselves out. I suspect our creations will wind up being in our image. Worst case, they will be as smart and mean as we are.

    • Jay

      We’ve possessed nuclear weapons for decades, and haven’t wiped ourselves out with them yet. Apparently, we are competent enough to manage them. So the fear has worn off.

      AI is new and unknown. It’s perfectly understandable that humans are afraid of it. Fearing the unknown comes hardwired. What you call ironic, I call natural.

      The machines will not be ‘as smart and mean’ as we are. This is called anthropomorphism: projecting your own characteristics onto something that is fundamentally not human.

      To assume AI will only be as smart as us doesn’t make any sense. We aren’t smart enough to build an AI as smart as we are. We can only build a dumb AI that will bootstrap itself to higher intelligence. It’s obvious that an AI bootstrapping itself won’t stop at the human intelligence level. It makes much more sense to assume AI will blow right past us.

      • beachmike

        We already have AIs far, far smarter than we are in narrow domains. There is no fundamental reason why AIs cannot be smarter than us in less narrow domains, and eventually, in general (AGI).

  • jetlej

    Kevin – What do you think about Guaranteed Basic Income as a solution to the automation and job displacement that is/will be created by AI and robotics?

  • Jerry McGalyc

    This was posted on April 1st, so I’m not sure if this article is serious. The piece which you linked to has nothing to do with the counter-points brought up by your response. Lanier’s article dealt with the dangers of the myth of rampant strong AI, and of weak AI’s poorly implemented algorithms. He also basically called you a profiteering zealot, though not by name and with less inflammatory diction.

  • jsmunroe

    We are not creating a self anymore. We write AI for the sole purpose of utility. We write single-task thinking and learning algorithms. We aren’t even trying to create Skynet anymore. Minds are not practical; tools are. True artificial consciousness is centuries away simply because that is not our current goal as a species. We just want smarter software.

  • Hippy noob questioner

    I got a question or two. Maybe they are hippy computer noob questions made redundant 60 years ago, I don’t know.
    Let’s say everything around us is a construct of our brain – as many psychologists and philosophers will say – then isn’t what we see as artificial intelligence not separate from our own intelligence, as a product of our brain?
    Or, in a broader way, considering the matter of artificial life, there is no shortage of people who will say, after the fashion of a mystic (or Silicon Valley tripper), that the distinction between animate and inanimate matter is arbitrary – or even that everything is part of one great intelligence anyway?

    And, in both of these cases – either that everything is an aspect of our own intellect, or that everything is an aspect of a greater intelligence – we already regard these as unfathomable, and so a super AI surpassing human ability will actually be nothing new, as we are already embedded in/as something beyond our control or smarter than us?

    • Hippy computer noob

      …come on, some of you have meditated or dropped acid, I know it….

  • CS

    Two thoughts:
    1. Not sure why you call your article a counterpoint to Lanier’s article. It’s not. He is talking about the danger of the myths of Artificial Super Intelligence.
    2. There is a large amount of well-reasoned argument for how and why ASI could be extremely dangerous. It comes from serious thinkers including Nick Bostrom, Steve Omohundro, Eliezer Yudkowsky, and others. Their thinking contributes to why we are seeing warnings about this issue from Stephen Hawking, Elon Musk, Bill Gates, Peter Thiel, and the like. Your article, in comparison, is simplistic and uninformed. It seems off the cuff. You are an influential person whom people trust. However, your article is ignorant of a world of information about ASI that should be taken seriously.

  • Devonavar

    I think #2 doesn’t quite get at the fear I have about AI:

    “This is an engineering problem. So far as I can tell, AIs have not yet made a decision that its human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market, does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve.”

    The engineering problem isn’t the hard problem here. The hard problem is a political problem. My fear isn’t that I will regret the decisions of AIs that I make. My fear is that I will be trapped by the decisions of AIs that I didn’t make. My fear is that AIs will be vested with the authority to make decisions that affect me, with no way for me to appeal or fight back. AIs enable an efficiency in enforcing power that no human can match.

    There is already a real world example here: YouTube’s ContentID system frequently makes decisions about what videos can and can’t stay on the site at the behest of the music labels. These decisions are in error a significant amount of the time — e.g. when fair use should apply, or when an external license has been negotiated. These decisions cannot be easily appealed (despite a manual appeals process that exists). The result is a system that enforces a particular power structure that serves a particular master, without any regard for law or social agreement. It simply does what the engineer programmed it to do, and that engineer wasn’t necessarily designing it with the public interest in mind.

    When this type of decision is made by a human, there is a certain amount of leeway to the decision. Humans are good at handling contexts that are unfamiliar or unexpected. And, when they make a wrong decision, they can be held accountable and responsible. When power is abused, it is possible to fight back. When that power is held by an AI, there is no recourse.

    Neither of these things is true of AI. As you say, “human ethics are sloppy, slippery, inconsistent, and often suspect.” This is a feature, not a bug. Human beings have consciences by default. Conscience doesn’t have to be programmed in, and this is an advantage when power structures are enforced by humans, because when wrong decisions are made, they are (to an extent) self-correcting. If an AI makes a decision in an unanticipated context, it will likely make the wrong decision. It’s impossible to anticipate every context beforehand. And, when that wrong decision is made, there is nobody to hold responsible. Yes, it’s possible to design an AI that learns from its mistakes, but there is no requirement to design AIs in that way, so we end up with AIs that make inflexible decisions that can’t be reversed.

    “The clear ethical programing AIs need to follow will force us to bear down and be much clearer about why we believe what we think we believe.” This is a problem. Maybe it forces the creators to be clearer, maybe it doesn’t. What it doesn’t do is require AIs to be ethical. Designing ethics into an AI is an engineering and design expense, and it’s not clear to me why an engineer *would* program these things into an AI if they can get the job done without it.

    This is exactly the situation with ContentID. Sure, ContentID *could* be designed to respect fair use and external licenses. But it wasn’t, because the designer had no interest in doing so, despite the collateral damage it causes to external parties.

    Now imagine this situation applied to a life-or-death decision. Imagine that the AI responsible for making a decision about how to treat your heart attack is designed by a pharmaceutical company. Why does that pharmaceutical company have any incentive to design an AI that respects patient preferences about certain types of treatments, or to respect the right to refuse treatment? Sure, they could design an AI that respects patient preference, but will they?

    Right now those decisions are made by doctors who make similar decisions based on their personal judgements about what is best for the patient. Crucially, these decisions can be influenced by outside factors (wishes of family, availability of drugs, prior known preferences from the patient), and if the doctor makes the wrong decision, they can be held responsible.

    Sure, an AI could be designed to respect all of these things, but will it? The engineering isn’t the challenge here. The politics are. How do we ensure that the AIs that make life-or-death decisions make the *right* decisions, instead of just the ones that benefit the creator? That’s a political problem, and it’s not an easy one to solve.

  • Norberto

    Kevin,

    I can’t find your email to write you directly and request your permission to use the name “Technium” in a novel. I am writing a “fiction” about evolution and how the same evolutionary patterns repeat over and over again, from the creation of the first molecules until now, and of course will repeat again in the future. I believe that the next step in evolution is that thing of yours: the Technium. Should we fear it, as much as the chimps feared us long ago? I imagine, a million years ago, a group of monkeys concerned about the freak chimp that is using tree branches and stones (the first cool tools ever!) to fight against their predators, discussing it as we are doing now. Some of them fear that the new rising trend (humans) may end the chimp tribe as it existed then. Some others argue that this is progress and they should embrace it. It is not one or the other; it is the natural path of evolution.

    I don’t think we should fear AI; any superior intelligence understands that cooperation is preferable to destruction. Why would you destroy something that you may control and use? We may end up connected to the technium as a functional neuron, sure. But we do that now anyway. At this exact moment I am connected to it, providing my ideas to the Technium. Believe me, it is not painful; I even enjoy it.

  • franckit

    The article strikes me as somewhat naive about what an AI really is and the extent to which an AI could be really intelligent – not just narrowly intelligent like most of the AIs that have been mentioned in the article, or indeed in the comments.

    Let’s assume for a moment that some organization, somewhere in the world, is able to develop an AI that has human-level intelligence, across the board. It might have motivations for which it was programmed by that organization. Let’s even assume that those motivations aren’t nefarious – it’s trying to improve some production process in an industry. So its job is to look at those processes, and its mission, what it is programmed to do – again with human-level intelligence – is to optimize them. Let’s think about what could go wrong with THAT scenario, setting aside any nefarious intent from the makers.

    1 – That AI might be REALLY good at improving those processes, or that production. But could it be simply too good? If it was programmed with “increase efficiency & production level of that type of car”, then that’s its mission, but what tells us it would stop, and how would it stop? What if it takes its mission to be doing that at all costs, even to the point of being disruptive of other activity, or extremely harmful in some unintended way? What if it decides to subvert narrow AIs (e.g. computers & robots) that are connected to the net, hacking them & redirecting their workflow to produce those cars as well? I’m not even talking about a doomsday scenario, but let’s imagine such an AI busy hacking through the net & disrupting whole systems across the globe… You might say, again, “well that’s an engineering problem, we just have to put strict limits on the bounds of what it can and cannot do”. While this is true, how can you be so sure that a company or organization will be so far-sighted? Why would they significantly slow down the advent of a functioning human-level AI for their purposes in order to ensure those safeguards are sufficient? Also, how can you really ensure that they will succeed in building all the necessary safeguards?

    2 – If a human-level AI is ever developed, and that AI is capable of self-learning, I don’t think you fully understand the implications of such a program/machine. A human is, in a way, a self-learning machine. However, we have serious limitations as to how/what we can learn, even setting aside physical constraints. A computer would be able to self-learn at a MUCH HIGHER PACE than a human ever could by pure thought exercise, just because a processor works at a much higher frequency. The number of cycles your brain can do, and therefore the number of learning cycles you can do in any amount of time, is puny compared to what today’s computers can do. Another important limitation for us is the speed at which we can take in information for self-learning. A computer, again, would be able to sift through GIGANTIC amounts of information much faster than you could ever read/listen/watch, and it also wouldn’t tire of doing so every couple of hours. In other words, a human-level AI capable of self-learning would be able to learn at a rate we can’t quite grasp and is highly unlikely to STAY a human-level AI for very long: in all likelihood it would far surpass our intelligence fairly quickly. What then? Let’s say that AI then becomes twice, or 10 times, as smart as the smartest human can get; how would you claim to be able to control it then? Let’s say the gap becomes of the same magnitude as that between the smartest monkey & us; why would you believe that this monkey could control a human, even though a human at age 5 or 6 was possibly of a very comparable intelligence level? Once you get outsmarted, it seems difficult to predict what will happen: you’re just not smart enough to know.

    3 – You refer a lot to engineering, and how smart design should prevent any catastrophe in relation to a super AI. Well, let’s use children as an analogy: we’re smarter than our children, and we sure can do our best to guide what they should or shouldn’t be doing. However, there comes a point where they are able to realize that they have a say in this, and that they can – or not – follow what they are being told. How could you guarantee that an AI that acquires a general level of intelligence far superior to our own would STILL obey whatever guidelines & rules you hard-coded into it? Perhaps an AI could never develop real self-consciousness; perhaps there is something biological in this that can never be replicated artificially. But then, maybe not. Maybe consciousness & self-awareness are simply some sort of intelligence threshold we happen to have surpassed, which makes us as humans able to chart our own way through life. But then, if that is the case, and there’s nothing *special* about self-awareness other than the threshold of intelligence you need to achieve it, then how can you guarantee your human-level AI (or superior) will not deviate from what you initially set out for it? Even assuming – in some way that seems hard to believe – that the safeguards you have built are smarter than yourself, e.g. smart enough for a smarter-than-human AI, then what would prevent the AI from programming another AI that doesn’t have those built-in safeguards?

    Just a final thought, from a human-level intelligence that wants to survive. If your human-level or higher AI ever happens to have some sort of self-preservation instinct, one of its priorities at some point would have to be self-replication, or backups of some kind. This means that any idea that you could, at worst, just “shut it down” if things go wrong is likely to be wrong. You might shut one down, but how can you shut down its backups/replicas? If I were an AI, self-aware, and with some sort of will to ensure my survival, it seems to me that my priority at some point would become just that. And as soon as I could manage to get access to the internet – with your consent or otherwise – then that could easily be ensured, from a program’s perspective. It seems likely that a human-level or higher AI would be quite a hacker, potentially much more gifted than a human at that, actually. Just think of the havoc this could create…

    I’m NOT saying I’m *against* AI, or research on it: it will happen no matter what. You can’t stop the research any more than you could have prevented the atomic bomb, or electricity, or the internet. The question is how this will happen, how worried we should be, and what we can do to ensure this doesn’t go wrong.

    But then, what is worrisome is that the answers aren’t obvious.

    And keep in mind that I have left out of the picture any nefarious intent, and any question about the fact that when a human-level AI becomes an achievable objective, many organisations are likely to race for it (as there could be quite an advantage in developing the first successful one)… which, most likely, will tend to disadvantage those who include all the beautiful safeguards & conscientious engineering you advocate, and favor those who might be more willing to cut corners, even though that also would be playing with fire…

  • girdyerloins

    Hm. Algorithms benefitting the tiniest slice of humanity have rendered millions powerless in so-called democratic society, never mind helping to concentrate mind-boggling wealth in those self-same hands. And they aren’t even AI yet. These algorithms, serving, say, derivatives sold in the financial markets, have contributed directly, I’m led to understand, to the recent financial disappointment of 2007-8, and searching Wikipedia reveals some pretty sobering analyses on the part of critics.
    Spouting baloney about the good something does is like saying all the good an organized religion does negates its appalling behavior during the previous millennium, give or take a century or two.
    I, for one, welcome my AI overlords. Why squirm over it?

  • David Johnson

    IMHO the ‘real’ danger from AI doesn’t come from the machines themselves, and at least within our lifetimes probably never could.
    The danger with machine intelligence, machine learning, pattern recognition, and awareness is the uses to which they are put by the real ‘robots’ in our world: the single-minded, profit-seeking, morality-unaware beasts we call ‘limited companies’.

    When those companies start pitting highly tuned machine intelligence, armed with the statistical population databases our governments seem to want to sell, against our free will, that will be the manifestation of the true dangers of ‘AI’: it will become a lethal armament in the struggle by corporations to control masses of people. Us.

  • firoozye

    The thing is – existential threats need not come from intelligent beings, nor does intelligence automatically imply an existential threat. A shark isn’t intelligent in any Turing sense, but on an individual level it could be a major existential threat. Meanwhile our own intelligence, or even ‘near’ intelligence, doesn’t imply that we threaten one another. Even if AI did exist, what incentive do intelligent machines have for doing in the entire human race?

    Would it just be out of caprice?

    Incentives prove a good determinant of likely courses of action for all sentient creatures. Economists and game theorists have studied them for some time.

    As far as I understand, 100% of AI “wants” to fit good models. They want to classify correctly, and not incorrectly. They want to mimic human behaviour. The “want” is just some objective function – some likelihood being maximized, some Gini impurity being minimized on subtrees, some cross-validation error being minimized.

    Nowhere in this simple world-view does “kill the human race” figure. And it won’t figure in until someone “programs” it in. Or until we get the AI to change its objective function (based on what objective?).
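
    A minimal sketch of that point, in hypothetical Python (the toy data and every name below are invented purely for illustration, not taken from the post): the learner’s entire “motivation” is a single number being pushed downhill.

        # Toy illustration: an AI's "want" is nothing but an objective function.
        # A one-parameter model is fit to made-up data by gradient descent on squared error.

        data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs; y is roughly 2x

        def loss(w):
            # Mean squared error: the single quantity the "agent" tries to make small.
            return sum((w * x - y) ** 2 for x, y in data) / len(data)

        def grad(w):
            # Derivative of the mean squared error with respect to w.
            return sum(2 * (w * x - y) * x for x, y in data) / len(data)

        w = 0.0
        for _ in range(200):
            w -= 0.01 * grad(w)  # follow the gradient downhill; that is the entire "motivation"

        print(f"learned w = {w:.3f}, final loss = {loss(w):.4f}")
        # Nothing in this objective refers to anything outside the (x, y) pairs it was given.

    Change the objective function and you change the behaviour; leave it alone and nothing outside it ever gets optimized.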

    Look to incentives and you can foresee outcomes.

  • 2Punx2Furious