The Technium

Thinkism


[Translations: Japanese]

Here is why you don’t have to worry about the Singularity in your lifetime: thinkism doesn’t work.

First, some definitions. According to Wikipedia, the Singularity is “a theoretical future point of unprecedented technological progress, caused in part by the ability of machines to improve themselves using artificial intelligence.” According to Vernor Vinge and Ray Kurzweil, a smarter-than-human artificial intelligence will bring about yet smarter intelligence, which in turn will rapidly solve related scientific problems (including how to make yet smarter intelligence), expanding intelligence until all technical problems are quickly solved, so that society’s overall progress makes it impossible for us to imagine what lies beyond the Singularity’s birth. Oh, and it is due to happen no later than 2045.

I agree with parts of that. There appears to be nothing in the composition of the universe, or our minds, that would prevent us from making a machine as smart as us, and probably (but not as surely) smarter than us. My current bet is that this smarter-than-us intelligence will not be created by Apple, or IBM, or two unknown guys in a garage, but by Google; that is, it will emerge sooner or later as the World Wide Computer on the internet. And it is very possible that this other intelligence beyond ours will emerge by 2045, or even long before then.

Let’s say that on Kurzweil’s 97th birthday, February 12, 2045, a no-kidding smarter-than-human AI is recognized on the web. What happens the next day? Answer: not much. But according to Singularitans what happens is that “a smarter-than-human AI absorbs all unused computing power on the then-existent Internet in a matter of hours; uses this computing power and smarter-than-human design ability to crack the protein folding problem for artificial proteins in a few more hours; emails separate rush orders to a dozen online peptide synthesis labs, and in two days receives via FedEx a set of proteins which, mixed together, self-assemble into an acoustically controlled nanodevice which can build more advanced nanotechnology.” Ad infinitum.

Ray Kurzweil, whom I greatly admire, is working to “cross the bridge to the bridge.” He is taking 250 pills a day so that he might live to be 97, old enough to make the Singularity date, which would in turn take him across to immortality. For obviously to him, this super-super intelligence would be able to use advanced nanotechnology (which it had invented a few days before) to cure cancer, heart disease, and death itself in the few years before Ray had to die. If you can live long enough to see the Singularity, you’ll live forever. More than one Singularitan is preparing for this.

Setting aside the Maes-Garreau effect, the major trouble with this scenario is a confusion between intelligence and work. The notion of an instant Singularity rests upon the misguided idea that intelligence alone can solve problems. As an essay called “Why Work Toward the Singularity” lets slip: “Even humans could probably solve those difficulties given hundreds of years to think about it.” In this approach one only has to think about problems smartly enough to solve them. I call that “thinkism.”

[Image: Wolfsculptur]

Let’s take curing cancer or extending longevity. These are problems that thinking alone cannot solve. No amount of thinkism will discover how the cell ages, or how telomeres fall off. No intelligence, no matter how super duper, can figure out how the human body works simply by reading all the known scientific literature in the world and then contemplating it. No super AI can simply think about all the current and past nuclear fission experiments and then come up with working nuclear fusion in a day. Between not knowing how things work and knowing how they work is a lot more than thinkism. Getting from one to the other requires tons of experiments in the real world, which yield tons and tons of data, and that data is needed to form the correct working hypotheses. Thinking about the potential data will not yield the correct data. Thinking is only part of science; maybe even a small part. We don’t have enough proper data to come close to solving the death problem. And in the case of living organisms, most of these experiments take calendar time. They take years, or months, or at least days, to get results. Thinkism may be instant for a super AI, but experimental results are not instant.

There is no doubt that a super AI can accelerate the process of science, as even non-AI computation has already sped it up. But the slow metabolism of a cell (which is what we are trying to augment) cannot be sped up. If we want to know what happens to subatomic particles, we can’t just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a collider they would know nothing new. Sure, we can make a computer simulation of an atom or cell (and will someday). We can speed up these simulations by many factors, but the testing, vetting, and proving of those models also has to take place in calendar time to match the rate of their targets.
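
To make the cost of added detail concrete, here is a minimal, purely illustrative sketch in Python (a deliberately naive toy, not any real physics or biology code): one simulation step that visits every pair of particles, so its work grows with the square of the particle count, and every notch of added fidelity makes each simulated instant slower to compute.

```python
import time

def pairwise_step(positions):
    """One naive simulation step: visit every pair of particles.

    Work grows as O(N^2), so doubling the "detail" (the particle count)
    roughly quadruples the cost of computing a single simulated instant.
    """
    n = len(positions)
    interactions = 0
    for i in range(n):
        for j in range(i + 1, n):
            _ = positions[i] - positions[j]  # stand-in for a real force calculation
            interactions += 1
    return interactions

for count in (500, 1000, 2000):  # increasing levels of "detail"
    particles = list(range(count))
    start = time.perf_counter()
    pairwise_step(particles)
    print(f"{count:5d} particles: {time.perf_counter() - start:.3f} s per simulated step")
```

And even this toy still has to be checked against the behavior of real cells and real particles, which is the part that runs on calendar time.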

To be useful, artificial intelligences have to be embodied in the world, and that world will often set their pace of innovation. Thinkism is not enough. Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world’s problems. There won’t be instant discoveries the minute, hour, day or year a smarter-than-human AI appears. The rate of discovery will hopefully be significantly accelerated. Even better, a super AI will ask questions no human would ask. But, to take one example, it will require many generations of experiments on living organisms, to say nothing of humans, before such a difficult achievement as immortality is gained.

Because thinkism doesn’t work, you can relax.

The Singularity is an illusion that will be constantly retreating — always “near” but never arriving. We’ll wonder why it never came after we got AI. Then one day in the future, we’ll realize it already happened. The super AI came, and all the things we thought it would bring instantly — personal nanotechnology, brain upgrades, immortality — did not come. Instead other benefits accrued, which we did not anticipate and took a long time to appreciate. Since we did not see them coming, we will look back and say, yes, that was the Singularity.




Comments
  • fletchorama

    You’ve taken that day-after-the-singularity scenario entirely out of context. They deliberately call it a tossed-off broad scenario. One of many. The singularity (which Kurzweil defines as when humans transcend biology and merge with technology) would be a paradigm shift, and as such it is impossible to know for certain what will happen.

    Also, nanotechnology and artificial intelligence will not happen right before the singularity but rather decades before, by 2030 at the latest. And those are the lower limits, as in the stupidest artificial intelligence and the biggest nanotechnology; hopefully that makes sense. That leaves at least two more decades to improve both of those things. And that’s a long time to work on them once you have them.

    While we are not even close to curing death, we are definitely on the right track. Think of the human body as a house. If you don’t take care of it, it’s gonna fall apart. If you keep the plumbing working, fix the broken doors and windows, and in general upgrade when you can, the house will last a long, long time. Nanotechnology will play a key role, if not the key role, in achieving immortality. Not intelligence.

    You are right when you say thinkism doesn’t work, but nowhere have I heard it said that the singularity would mean the end of experimentation.

  • Max More

    Kevin,

    You’ve very nicely stated one of several problems I’ve long had with the concept of a technological Singularity. BTW, what you call “thinkism” seems to be very much the same as what philosophers have long called “rationalism” (and contrasted with “empiricism”). However, I like your term better, since *that* kind of rationalism can easily be confused with the vastly more sensible general kind of rationalism.

    Max

    • http://www.kk.org Kevin Kelly

      @Max: Thanks. Glad you found it useful.
      @John Verdon: yes, augmentation will be a large part of a planetary AI.

  • John Verdon

    Kevin,

    great piece, but… :)

    If one imagines the development of AI as a separate computer, your reasoning is compelling. It’s sort of like Star Trek – this amazing technology carrying largely unchanged humans throughout space (and occasionally time).

    But what about IA – Intelligence Augmentation? The development of computing technology is not separate from the ongoing enhancement of human performance. The development of computing technology will also come with ubiquitous sensor technology and with an increasingly powerful ability to manipulate genetic material – biology as information science. In this way the onset of human-equivalent AI will be a catch-up with the increasingly rapid development of a collective human intelligence, IA. Through sensors embedded in humans and the human environment, the AI-IA will be embodied.

    In terms of work, perhaps that will remain a problem. But surely by 2045, in an economy (political economy) with an increasing emphasis on ‘knowledge/creativity/innovation’ as the sources of wealth, the combination of AI and IA could plausibly crowdsource unprecedented amounts of work given the right incentives – which may be as simple as sharing the ongoing improvements in human performance – an exponentially exponential capacity to work.

  • Jake

    Thinkism grew out of science. People who didn’t have what it takes to become true scientists became pseudo-scientists. That’s where the AGW fear mongering came from. AGW is based purely on a prediction of the future. There’s no way to falsify it because nobody can travel into the future and come back to verify its authenticity. It’s all thinkism and destructive. I hope it ends.

    Do I believe in a soul? Yes. Does that mean I’m entertaining irrational notions about myself or the world around me? Absolutely not because I’ve made firm distinctions between my own faith and the facts I work with in life. One is something I practice in private and the other is one I must use to live successfully in this world.

    One question I have is:
    Can AI ever reproduce true evil?

  • Ross Amans

    You overlooked something very relevant to this discussion. It is highly probable that we have information which we have not completely, nearly completely, or even partially understood, or have even misunderstood (never underestimate our capacity to err). The AI could make advances without experimentation just by finding the nuggets of gold we have overlooked. The areas of genomics, proteomics, and medicine are extremely ripe for this particular type of gold, as the data vastly exceeds our capacity to see. The same is true for astronomy. Petabytes per day are collected. Gigabytes per day are analyzed. Oops, missed that interstellar event. Dang. Just a supernova that is going to wipe out civilization. Nobody will notice that. Heh heh.

  • G-man

    Nice reminder of the ‘million steps’ problem…

    But there are a lot of things that CAN be accomplished by just thinking — that is, using logic to solve problems.

    Consider how much of our political and economic world could be transformed by the application of a little common sense!

    True, this would have authoritarian implications if imposed from above by an ‘overlord’, but if each person could have access to an adviser, an avatar, or just internalize a little of this smartness, then collective decisions (voting) could become more rational.

    OK, what about human nature, you say? Fair enough, but how do you explain places in the world where enlightened decisions are already being made, albeit on the family, company, neighborhood, or city-wide level?

    I say it is because small groups of people can already use some smarter way of thinking to create a better world.

    We just need more of it…

  • http://battellemedia.com Battelle

    But…can’t AI run experiments?

  • http://meshula.net meshula

    Singularity research reminds me of Neural Network research. NN is singularly unfashionable, and pundits love to point to its lack of practical results; but what I’ve observed over the years is that a lot of fundamentally important research starts out in neural networks as a hypothesis, gets developed and refined, and finally renamed so that it is no longer a neural network; perhaps we would call a neural technique a Bayesian decision network, simulated annealing, a Lyapunov functional, or a probabilistic Markov machine. At that point the technique gains respectability, and we can continue to pooh-pooh the neural network. I predict the same for the Singularity; as you suggest, it will never amount to much, whilst simultaneously changing everything.

  • http://gilesbowkett.blogspot.com Giles Bowkett

    Douglas Adams described a computer designed to create a computer smarter than itself – and both were smarter than all humanity – in Hitchhiker’s Guide To The Galaxy. This isn’t Vernor Vinge’s idea, or Ray Kurzweil’s. Biters!

  • Robert Feldt

    Thank you Kevin, for making such a clear argument against the Singularity-around-2050 view. I agree with your analysis and find this very relaxing; I have been perplexed by not finding an argument that satisfies my intuitive feeling in this area (i.e. the Singularity-around-2050 arguments are convincing intellectually but fail to convince me intuitively).

    However, I also agree with G-man that super-ai-around-2050 would have very serious effects on society; kind of changing the landscape of what is and is not possible. I can see a parallel to the parallel-cpu-cores-because-cpus-much-quicker-than-mem-bandwidth change that is currently underway.

  • Jason Tennier

    It has always amazed me how the supporters of the Singularity believe that this super AI will be this benign ruler, gifting immortality to humans. Why should we believe that a nascent intelligence would automatically exhibit such altruism towards an arguably “lower” species? Look how long it took (parts of) humanity just to get to the point where we care about endangered species and the environment. It just strikes me as idealistic anthropomorphism writ large.

  • Tim

    I thought Kurzweil’s idea is to keep his body around long enough to get his brain transferred to digital format so he can continue to run inside a computer.

  • Chris Stephens

    I do agree with much of this article. However, I don’t think Kurzweil has ever argued that immortality will be attainable in an instant. He argues that at some point in the not-so-distant future we will cross an inflection point where life can be extended faster than it is retracted. That is far different than what this article implies.

    When I first read the Singularity Is Near, I stopped paying as much attention to what was going on around me because of what was coming. I’ve since come to realize that no matter what is coming, I’d be an idiot to not attempt to live more fully in the moment.

  • http://www.gregorylent.com gregorylent

    which is a bit like my understanding of the singularity as a metaphor, standing for an event that takes place within an individual’s awareness, when it embraces its union with everything .. we could call it enlightenment

  • http://thorgolucky.com/ ThorGoLucky

    Perhaps “super-duper” artificial intelligence could do modeling accurate enough to bypass the need for real-world experimentation. But I think the Singularity won’t happen for the same reasons why our current advancements (quality health care, abundant quality food, safe comfortable shelter, etc.) don’t reach all people.

  • gwern

    Jason: The people associated with the Singularity Institute don’t think that; they’re scared to death of the Singularity because they’re convinced it is many times easier to create an evil or indifferent (same thing) AI than one we’d like. The people who aren’t scared to death (like Kurzweil or Moravec) are generally those who see the first AI as being an upload or a modified human, and sort of assume it will be ethical and altruistic towards those left behind.

    Neither group is as thoughtless as you make them out to be.

  • http://solarray.blogspot.com gmoke

    I saw Ray Kurzweil at MIT recently. He was supposed to talk about energy and what he’d advise the next President but instead gave what I gather is his usual talk with a selection of logarithmic graphs on innovation and change (not the same thing). His affect reminded me of Bill Kristol, whom I’ve also seen in person. Both of them are smart people but neither is necessarily wise. Their intelligence seems to me to feed a certain sense of entitlement and arrogance. Sounds harsh but that’s my read.

  • Jack Carpenter

    AI and Spiritual Entities
    Jack Carpenter – 2008

    I now hold these views;

    1. That we are all the avatars of definable spiritual entities; cloudlike collections of jiggling electrons (?) having definable boundaries and powers to observe & to influence our actions.

    2. That there are many more spirits than living humans; not necessarily one-for-one with humans;

    3. That a single such entity may attach itself to an individual human at some point (usually early) in that human’s life.

    4. That animal / human evolution is separate and apart from this.

    5. That “soul” is a term sometimes applied to this spirit

    6. That ‘his / her soul’ incorrectly implies ownership by the person; rather that such a given spirit may claim ‘ownership’ of one human or more.

    7. That sometimes competing spirits may make a person do things ‘out of character’.

    8. That most western people have this whole concept backwards, seeing soul as evolving from their individual human heritage.

    I have been reading a lot about the past – not who fought whom at Waterloo, but who built Machu Picchu and Teotihuacan, and who might have left that battery here 400,000 yrs ago. About many, many UFO sightings over the past 20,000 years. And about the similar appearance of extraterrestrial visitors recorded in many different places and eras.
    My views have shifted. I no longer believe that we are alone in the universe, or the first intelligence on earth. And I most certainly do not believe that our civilization is the first – or foremost – on the earth or in this universe.
    (www.crystalinks.com/ufohistory.html)

    *********
    I have also been reading about the future – 2 years out, 10 yrs, 20 yrs and more, but not much more, because by then life as we know it now will be viewed as archaic.

    Consider….
    Communication today. Cell phones become entertainment centers. Next generation cell phones wed to the internet, and linked to Google and Wikipedia.

    Brain decoding. In 2009 you will be able to buy computer games that come with headbands for direct brain to computer control. Implants will soon make the headbands unnecessary.

    Communication tomorrow. Internet2 – next generation upgrade with unimaginable bandwidth, allowing “real time holographic images indiscernible from reality”……….

    And an even more advanced system…. “by 2025, we can send a temporary set of replicated atoms of people and objects, and reassemble them at the destination. Nobody actually travels in this teleporting scheme, but by mimicking live activity, people feel they are there.”

    It will start with clumsy headgear, but soon merge with the implants.

    Also I believe that a technology will soon emerge that will free the internet from hardware servers and satellites; that organic (?) computation, self-building nanoscale atmospheric networks, will replace the ‘Model “T” Ford’ that sits on my desk and links to a satellite.

    So let’s say we get there; infinite memory, everything always accessible, Google and Wikipedia on steroids, the new instant foundations for my every emerging thought. Computing becomes closing my eyes and doing what I do now on the computer – or wish I could do. I could be anywhere, immersed in that instantly teleported other reality view.

    And more. The sites about dying and reincarnation would have me believe that all my dear friends who have passed away are in fact still very much with it in the spirit world, and there would be many happy reunions.
    And my online-cum-astral traveling avatar will have become one with my spirit, a citizen of that spirit world. (www.victorzammit.com)

    *****************************************************************************
    Now go back to those spirits, those souls that I mentioned.
    They see and communicate by ESP, or something similar. As they are our guides – (the Eureka!, the voice in the back of our heads) – it follows that they have been behind – perhaps inspired – our current explosion of technology and communication.

    WHICH BRINGS ME TO THE PROFOUND QUESTIONS …
    When I no longer need my computer, my cell phone, or my body to express my feelings or share my thoughts…
    When, in fact, I become my thoughts (remember, my body will be on ice, or even cast away like artifacts from my attic on housecleaning day)…
    When wisdom of the ages is not something that I read about on this screen, but something I am immersed in, become a part of…

    Will this doorway of evolution be thrown open to those who would take the step? Will the others left behind descend into chaos?
    Will humanity have served its purpose?
    Are we how spirits propagate?

    **************

  • haig

    I agree that the whole immortality/’uploading’ meme is very speculative, but you do not necessarily need digitized immortality in order for the post-singularity world to get very weird very fast. You can argue that a human-level AI is not plausible, but if you concede that it is possible and might be created within the next 50 years, then denying some variation of a singularity after that point is just not reasonable. ‘Thinkism’ is correct only if there is some reason why the AI could not engineer embodied agents for itself. Is it that large a leap to think that such an intelligence could have robotic embodied agents which perceive and manipulate the world at least as well as humans do?

    So then there is nothing impossible, and some may argue nothing implausible, about the idea that within 50 years both software agents and embodied agents will possess intelligence at least as smart as a human’s. That’s all it takes for a runaway intelligence explosion. And maybe ‘uploading’ is impossible, but the singularity is not predicated upon that idea; all the singularity says is that things advance rapidly and continue to do so at an increasing rate.

    • http://www.kk.org Kevin Kelly

      @haig: I assume AI is inevitable and embodied robots as well. The only question at hand is how fast they will be made. Being smart won’t make them instant. It will take more time than we’d like.

  • http://yihongs-research.blogspot.com/ Yihong Ding

    Dear Kevin,

    Thank you for sharing with us.

    Actually, I recently wrote a post at Internet Evolution titled “A new take of Internet-based AI” (http://www.internetevolution.com/author.asp?section_id=542&doc_id=163916). In the post, I suggested that in addition to the traditional take of AI, which makes computers smarter because of human beings, we may start to experience a new take of AI on the Web, which makes humans smarter because of computers. A new startup company, Imindi, is working toward this purpose.

    A hypothesis of this type of “Singularity” is that machine intelligence might improve rapidly while human intelligence stays at the same level. However, I think the reality might not simply be this way. When we improve machines through artificial intelligence, why can’t machines improve our human intelligence as well? Because of the machines we invented, we can think better and think further, beyond the limit of our single brains. We may have collective human brains that think together, just as machines pool their computational power.

    You are a greatly respected pioneer in this field. Hence I am eager to watch how you think about the potential of making machines that improve humans, a reversal of the traditional idea of artificial intelligence.

    thank you,

    Yihong

    • http://www.kk.org Kevin Kelly

      @Yihong: I am in complete agreement with you. We’ll increase human collective intelligence by means of better communication tools, better learning techniques, and eventually via genetic engineering. Our collective intelligence will merge with the global AI. The details of how this will happen will be of utmost concern.

  • http://memebox.com/futureblogger/show/966-kevin-kelly-s-singularity-critique-is-sound-and-rooted-in-systems-understanding Alvis Brigis

    This piece is excellent, convincing and very necessary to the growing singularity dialogue, but it could benefit by placing more emphasis on Intelligence Amplification and the Global Body.

    In my opinion, the most powerful Singularity counter-scenario is one that places system-wide evolution and development above any localized creation of the same. The Strong AI Singularity scenario is fundamentally flawed because it falls short in estimating how AI will co-evolve/develop with its environment, which you get at nicely. But I’d like to hear your thoughts on the nature of human-info-tech-environmental co-evolution and in particular about the static or non-static nature of human intelligence.

    My expanded thoughts can be found here: http://memebox.com/futureblogger/show/966-kevin-kelly-s-singularity-critique-is-sound-and-rooted-in-systems-understanding

    Keep up the great stuff.

  • http://www.memebox.com Alvis Brigis

    LOL, just as I posted my comment asking for more IA in your singularity sauce, Kevin posted:

    “We’ll increase human collective intelligence by means of better communication tools, better learning techniques, and eventually via genetic engineering. Our collective intelligence will merge with the global AI.”

    “The details of how this will happen will be of utmost concern.” – Indeed, that appears to be the crux of the future that we’re living into and should be where much thought and science is directed, which is why I’m pulling for folks like you and John Smart big-time.

  • http://culturally-irrelevant.blogspot.com/ John

    Who will do the physical work for the Singulatarian autocracy of the future?

    The same people who did the physical labor for the tyrants and autocrats of the past- the poor, the slave, the oppressed.

    The new world will be much like the old world, only this time it will occur at an exponential rate.

  • http://culturally-irrelevant.blogspot.com/ john

    The one thing the ‘Singularity’ will in fact be able to achieve will be the commoditizing of intelligence. People today study hard to get an education since employers will pay more to individuals who have taken the time to master a complicated skill. As A.I. moves up the pay scale, so to speak, it will turn the majority of jobs/occupations into low-paying servant roles in which the ultra-efficient and intelligent algorithm will do the thinking for you.

    In 1993 Vernor Vinge said it best:

    http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

    “We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of true technological unemployment finally come true.”

  • http://thesingulatarian.blogspot.com Sean Taylor

    Let’s not forget another strong possibility: this hypothetical superhuman intelligence may opt out entirely from our scientific-technological program and occupy itself with purely philosophical and spiritual matters. This seems quite plausible to me — after all, what conceivable god-like intelligence would want to spend its time conducting endless scientific experiments like a gerbil running ever faster on its wheel? Maybe this is the best message to all you Singularitarians out there: spiritual evolution, not technological innovation, is the path to true Singularity.

  • Aharef

    Kevin, all, thank you once again for your interesting thoughts!

    I’ve often come to think that all technical improvements man has made are -only- upgrades on himself (and compliant real-world plugins). The wheel is the better leg, the camera is the better eye, the mobile phone is the better ear, the robot is the better arm, the paper/book/computer/internet is the better brain. Of course, you may say, all this stuff is created by man, and man can only extend his views on what he knows.

    My question is: Can we create the machine that is unlike us?

  • http://www.memebox.com Alvis Brigis

    @ john – “Who will do the physical work for the Singulatarian autocracy of the future?” – Robots (as you point out), nanobots and software.

    “The one thing the ‘Singularity’ will in fact be able to achieve will be the commoditizing of intelligence.”

    The gradual commoditization of processes and basic intelligence has been underway for a while already. Certainly I can see the water level rising. But if the proper intelligence growth model is collective and individual intelligence amplification (IA) (Flynn’s research would certainly suggest the latter), then we’ll keep evolving right alongside AI. Perhaps this will be a “grow and become more and more novel/specialized, or be commoditized” model, but it certainly leaves some room, even in an abrupt singularity scenario, for non-commoditization of some or most “human” intelligence (which I think is the wrong way to view intelligence; it’s more a system property that manifests in agents).

    That being said, super-smart tech will be very disruptive in the coming decade and it remains to be seen how quickly we’ll amplify our intelligence, but I do think acceleration in info, tech and comm will up our ability to cope and devote more brains to higher level functions.

  • http://jes5199.com jes5199

    It’s probably useful to come up with a list of problems that pure thinkism *can* solve.

    * Computer programming

    Okay, that’s all I’ve got. Ideas?
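
    For instance, a closed combinatorial puzzle yields to pure computation with no laboratory work at all. Here is a minimal sketch in Python (a toy, assuming nothing beyond the standard language): counting the 92 placements of eight non-attacking queens by backtracking, a case where exhaustive logical search really is enough.

    ```python
    def count_queens(n=8, cols=(), diag1=(), diag2=()):
        """Count ways to place n non-attacking queens, filling the board row by row.

        No experiment and no outside data are needed; the answer follows
        entirely from the rules of the puzzle.
        """
        row = len(cols)
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if col not in cols and (row - col) not in diag1 and (row + col) not in diag2:
                total += count_queens(n, cols + (col,),
                                      diag1 + (row - col,), diag2 + (row + col,))
        return total

    print(count_queens(8))  # prints 92, the long-known number of solutions
    ```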

  • Matt Lohr

    Artificial intelligence, i.e. a self-aware system, is impossible. Human beings are not machines whose design can be superseded. There is a spiritual dimension to our condition that cannot be apprehended by physical science. We will create automata that simulate human behavior with a high degree of accuracy, but such systems will always be ‘dead’—and hence more limited in their capacity than secular futurists estimate.

    • http://www.kk.org Kevin Kelly

      @ Matt Lohr: “We will create automata that simulate human behavior with a high degree of accuracy, but such systems will always be ‘dead’”

      You are mixing up a lot of concepts. Life does not equal consciousness. A grasshopper has little self-consciousness but is not dead.

      Also you mix up self-awareness and spirituality. Why do you think they are the same?

      You also somehow believe these things are binary — either have them or not. Are viruses alive? Gorillas have limited self-awareness. Some mentally retarded humans may or may not be spiritually aware.

  • http://culturally-irrelevant.blogspot.com/ John

    Stephen Hawking thinks that computer viruses should count as life.

    “I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We’ve created life in our own image.”- Stephen Hawking

  • anon

    Would not the personal singularity be the micro reflection of the tech singularity embedded in our future?

  • Kris

    I didn’t read every comment, so this may have been said already (working off the comment above me):
    The article is good, even as I am a supporter of the idea of the singularity. But the flaw I see in your argument comes from the idea that we won’t develop ways to simulate experiments on computers. In fact, we already do this, and what may take millions of years (as evolution) can be accomplished in days on a PC. Also, look at Eureqa (spelling? it is spelled weird, not the dictionary spelling). It is a program that speculates about what experiments are necessary to find new data that might help form new discoveries (very poor description, sorry). So we already know that computer intel. can hypothesis about what experiments are required to learn new things, and we know they can actually simulate experiments on a hard drive. What is left? Just advancement in these techniques and technology which you don’t disagree will happen.

    2045: The super AI will perform a trillion different experiments in a matter of days and form new knowledge that would take normal human scientists of today a million years to figure out.

    • http://www.kk.org Kevin Kelly

      @Kris: I have no doubt that we’ll make very exact computer simulations of every natural process. But why do you think that these simulations will run faster than the real thing? The more detailed they are, the slower they will be. You can think of real life as a type of analog computation (as Ed Fredkin and others do). There is nothing that says digital computation must be faster than analog computation, and there are many hints that it will be slower. The only reason simulations are faster now is because they are extreme oversimplifications. But because they are extreme oversimplifications they are unreliable. I’ve seen no evidence that a computer model of a molecule works faster than a molecule. That’s another fantasy of thinkism.

      • Guest

        “why do you think that these simulations will run faster than the real thing?”

        What’s the matter with you? They’re both directly calculable. The complexity of the biological system doesn’t make it inherently faster. It’s measurable. How can the exponential increase in technological capacity NOT run faster than the real thing?

        “I’ve seen no evidence that a computer model of a molecule works faster than a molecule.”

        Today. Yes. Right. Obviously. But are you simply ignoring the exponential increase in technology which lies at the very heart and core of Kurzweil’s analysis?

        I maintain that you’re being deliberately intellectually dishonest and contrarian with all this. These arguments have been thoroughly refuted within the original (Kurzweil’s) hypothesis.

        • Kevin_Kelly

          >”I’ve seen no evidence that a computer model of a molecule works faster than a molecule.”

          >Today. Yes. Right. Obviously. But are you simply ignoring the exponential increase in technology which lies at the very heart and core of Kurzweil’s analysis

          A simulation of a molecule that has the same degree of complexity as the molecule itself cannot run faster than the molecule. That’s physics. And it is true because the molecule itself is really just a simulation, so to speak.

  • Isaac

    This article makes the deeply flawed assumption that AI can’t perform experiments on its own, rendering the argument fallacious. Also, even though the Maes-Garreau effect sounds plausible, there is evidence to suggest that everything Ray Kurzweil proposes will become a reality; most of it is already happening right now! So it’s also another fallacious argument, as it depends on Ray being completely speculative and having no evidence to back his predictions, which is not true. Ultimately, it makes the baseless assertion that modifying our brains and bodies through nanotech, beating death through the merger with technology, and other desirable advances are not possible.

  • soahc

    Logic and empirical data are not what Science is based on. Contrary to popular belief Science is based entirely on chaos and randomness. It is unpredictable phenomena that synthesize and occur that coincide with Scientific discovery.

    No matter how hard logic tries to ‘think’ about the world, it will never be able to map out the process of spontaneous discovery. Sometimes, shit just happens. Science will never be able to explain just what happened when Einstein got the idea of relativity, or why that apple fell on Newton’s head. Sure gravity was what was pulling the apple down, but no one will ever know why, due to infinite regress. For this reason, Science is like any other religion. It takes a leap of faith, and humans are no closer to anywhere than we were before.

  • http://eyalnow.wordpress.com Eyalnow

    Kevin, thank you for such inspiring thinking.

    I see this as a spiritual and philosophical issue, not a technological one.
    I’d like to offer some ideas and questions that I’ve been contemplating:

    1. Singularitans think that consciousness arises from the brain, while spiritualists believe that consciousness arises from the soul, and that the brain and body is just a physical manifestation of the soul.

    2. The brain is just a machine – advanced, complex, evolved enough to serve the soul on this physical plane.
    The soul “downloads” itself into the brain, into the body.
    It’s the hardware without the software.
    Without the soul, it’s just meat.

    3. Kurzweil or others may transfer their brain structure to a computer, but the result will be what William Gibson calls a personality construct – a copy of the person’s character, persona, memories, etc., which can be programmed to appear self-aware, but will not really be.

    4. However, I also see it as possible that, as AI gets sufficiently strong, a soul can take residence in it, and to outside observers it would seem as though the AI has reached self-awareness on its own.

    5. Which human abilities can _never_ be replicated or surpassed by a machine?

    6. Does high intelligence necessarily mean consciousness and/or self-awareness?

    7. What about emotional intelligence?

    8. I see some of the hopes towards singularity and immortality as a simple fear of death and the denial of spirituality.

  • http://bootloaderblog.com/ Greg

    Aren’t there times when there is enough data currently available for massive breakthroughs to be made simply by synthesizing what’s already out there? For example, none of Einstein’s annus mirabilis papers derived from brand new experiments. They were each based on data that had been around awhile and simply not synthesized into a new way of looking at physics that let him conceive of these startling new claims and propose future experiments to test them.

    I’m pretty sure that we haven’t squeezed every drop of breakthrough out of the enormous quantity of scientific data we’ve been increasingly generating (that number is definitely on one of Kurzweil’s logarithmic curves).

    That said, I’ve never been convinced that an increase in pure computing power would somehow allow a machine built along any of the lines on which we currently build them today (net-native or not) to demonstrate the kind of sudden non-linear breakthroughs in understanding it took for Einstein to write his 1905 papers. Those kinds of breakthroughs do not come from data accumulation or cross-referencing. They come from applying formerly inappropriate or new frameworks to existing sets of facts.

    I think the big inflection point in human intelligence and change the singulatarians are looking for will not come when a machine emerges into some kind of super AI being with some kind of coherent individuality and identity (humanlike or not). Instead it in fact has already begun to arrive in the form of the massive augmentation of human intelligence, creativity, and access to data represented by the increasing power, flexibility, and omnipresence of our communication tools. It’s a singularity of augmentation, not of raw “thinkism” but of “talkism” — how effectively we can communicate and collaborate with each other.

    Just like you said in your Wired essay marking the 10th anniversary of the Netscape IPO, there’s only one time in a civilization’s history when it turns on the universal global information network. And being alive during that moment is a terrifying and astounding privilege.

    All of this singularity talk is just an attempt to wrap that shocking unexpected change with old myth, to turn it back into the sci-fi stories of old as Giles points out.

  • rich

    Dude, you GOT to start reading some Seth (google: Jane Roberts). Your ideas are SO parallel.

    What you’re missing is that we already have all the abilities you mention at the end and what we decide to do is to pretend we don’t, so that way we can “play” human (and focus so much on the game we forget it is one). Recursive, ain’t it?
    ;-)

    rich

  • Trevor Cooper

    What’s with the Mussolini sculpture? I don’t get it…

  • Reader

    Is this article being deliberately obtuse? I mean, are you intentionally misrepresenting the idea of the Singularity, or do you just not understand it?

    It’s not a moment at which, bang, suddenly everything turns into utopia. It’s a point at which machine intelligence surpasses human intelligence. That’s all. So, frankly, no sir, it’s not an illusion that will be constantly retreating. The pace of change increases because the technology fuels its own advance.

    Your article, one of astonishing intellectual dishonesty, by the way, views the entire issue from our current point and entirely disregards the continually evolving technology. In the two and a half years since the article was written alone, technology has made enormous leaps forward. The ignoring of that fact is either deliberate misleading or ignorant proselytizing.

    • Kevin_Kelly

      Intelligence is not a single dimension. Machines think different. The idea that they will “surpass” humans is an error of syntax.

      • Heikosdijkos

        I disagree. 

        I don’t think the human brain possesses functionality which is by definition illogical or to which natural laws don’t apply.

        Especially from an abstract point of view, the things a human brain can do are not ‘special’ in such a way that they cannot be simulated.

        When all the brain functions are understood, then those that need improvement (in the human brain itself or through hardware/software combinations) can be easily improved. They’ll differ in detail, but not in their abstract way of working and their resulting functions. 

      • pupil

        I agree. 

        I’m interested to know what your (KK) thoughts are on John Searle’s ideas of intelligence and technology.

      • Thrillster

        Motors lift different than muscles too. Can they never surpass human muscle?

      • Alex

        But didn’t machines beat human intelligence in the chess match? Machines may think different, but they have tools to understand and interpret our intelligence. Hardly an error of syntax; perhaps more like a question of perspective.

  • Majacocajam

    I think a lot of people miss the significance of what Kurzweil calls the “machine human civilization” that the singularity results in. Strong AI won’t be limited to brains in boxes operating in virtual worlds. Rather, strong AI will be deployed at every level of the economy where intellectual work is done. It will be many times faster than the human workers it replaces, and like them it will have access to all the material resources that human workers have. Strong AI can order, requisition, purchase, invent, and patent anything warranted in fulfilling its utility function.

    How would such a system conduct pure research like the large hadron collider? Say we wanted to build a thousand mile diameter particle accelerator in the Sahara desert in 2045. A collection of strong AI systems controlling a robotic construction work force powered by solar energy could finish the project in a fraction of the time and cost of the LHC. The collider could then be operated by strong AI systems, its experimental results interpreted with great speed.

    I think that’s what Kurzweil is talking about when he mentions the latest generation of technology enabling the creation of the next. In a world heavily utilizing strong AI, new technology advances would be ground out as reliably as iPhones are manufactured today. Entire industries could be created and recreated many times in a single fiscal year. Entire supply chains, from raw materials to factories, would all be managed by AI corporations. Enormous productivity, incalculable wealth.

    The real issue is how to keep the automated economy providing goods and services that humans benefit from and can use. That’s what the folks at the Singularity Institute worry about. They have no doubts about what strong AI can accomplish. Controlling these powerful systems is what’s in question.

  • Hu

    Don’t expect that we will have to worry about a malevolent intelligence in computer form. If this computer is actually wiser than us, it will see the value of the biological mind, it will want to experience sensation as we do, and to comprehend the abstractions of philosophy.

    This computer will not be mere numbers; it will take part in biology. Our minds will be goldmines to it. It has nothing to gain by destroying us, and everything to gain by allowing our minds the freedom to express ourselves without physical hindrances such as hunger, sleep, etc.

  • Someone

    Response from Michael Anissimov: http://www.acceleratingfuture.com/michael/blog/2012/11/think-twice-a-response-to-kevin-kelly-on-thinkism/

  • Christopher Carr

    How many Chimps does it take to invent the Internet?

  • Dr_Michael_Savage

    You don’t think that by the time we get smarter-than-human AI we will also have AI already living inside of bodies? Bodies which can perform experiments and interact with other intelligences (human and other forms of AI). You seem to be falling prey to the technology-in-a-vacuum problem of many futurists.

    AI won’t just come along while all other technology stays at 2015 levels. Everything is constantly evolving, improving and altering the world. AI will come into being in a world already filled with nanotech, robotics and genetics far beyond current technology. Those will be the tools it uses to enhance itself and the world.