The Technium

Will Spiritual Robots Replace Humanity by 2100?

[Translations: Japanese]

In April 2000, Douglas Hofstadter (Gödel, Escher, Bach) organized a conference at Stanford University to discuss the question: “Will Spiritual Robots Replace Humanity by 2100?” Among the participants were Bill Joy, Ray Kurzweil, Hans Moravec, John Holland, and myself. The question was serious.


I chose to answer the question by examining each word in the question, starting at the end.


When thinking in the long term, especially about technology, I find it very helpful to think in terms of human generations. As a rough estimate I reckon 25 years per generation. Civilization began about 10,000 years ago (the oldest city, Jericho, was born in 8000 BC), which makes the civilization now present in Jericho and the rest of the world about 400 generations old. That’s 400 reproductive cycles of mother to daughter. Four hundred generations of civilized humans is not very long. We could almost memorize the names of all 400 cycles if we had nothing much else to do. After 400 generations we are different people than when we began. We had the idea of automatons and robots only maybe 8 generations ago, and made the first electronic computers 2 generations ago. The entire World Wide Web is less than 2,000 days old! The year 2100 is only four generations away, keeping the same human lifespan. If we morph into robots in 2100, civilized humans will have lasted only 400 generations. That would be the shortest lifespan of a species in the history of life.
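The generation arithmetic above can be checked in a few lines; this is a minimal sketch using the essay's own round numbers (25 years per generation, civilization starting ~10,000 years ago, the essay written around 2000):

```python
# Back-of-the-envelope generation arithmetic, assuming the essay's
# rough estimate of 25 years per human generation.
YEARS_PER_GENERATION = 25

# Civilization began ~10,000 years ago (Jericho, ~8000 BC).
generations_of_civilization = 10_000 // YEARS_PER_GENERATION
print(generations_of_civilization)  # 400 generations

# Writing from roughly the year 2000, 2100 is a century away.
generations_until_2100 = (2100 - 2000) // YEARS_PER_GENERATION
print(generations_until_2100)  # 4 generations
```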


The central question, the central issue, of this coming century is not “what is an AI?” but “what is a human?” What are humans good for? I forecast that variants of the question “What is a human?” will be a recurring headline in USA Today-like newspapers in this coming century. Movies, novels, conferences and websites will all grapple with this central question of “Who are we? What is humanity?” Fed by a prosperous long boom, where anything is possible but nothing is certain, we’ll have more questions about our identity than answers. Who are we? What does it mean to be a male, or female, or a father, an American, or a human being? The next century can be described as a massive, global-scale, 100-year identity crisis. By 2100, people will be amazed that we humans back here now thought we knew what humans were.


Replacement is a very rare position in nature. The reason we have 2 million species now is that most new species don’t replace old species; rather they interweave with the existing organisms, infill between niches, and build upon the success of other species. It is much easier to invent a new niche than it is to displace an occupied one. Most extinctions of species are not caused by usurpers, but by other factors, like climate change, comets, or self-inflicted troubles. Replacement or obsolescence of the human species seems unlikely. Given that we don’t know what humans are, our roles are likely to change; we are far more likely to redefine ourselves than to disappear.


In general, I like Hans Moravec’s formulation that these are our children. How does one raise children? You train them for the inevitable letting go. If our children never left our control, we’d not only be disappointed, but we’d be cruel. To be innovative, imaginative, creative, and free, the child needs to be beyond the control of its maker. So it will be with our mind’s children, the robots. Is there a parent with a teenager who is not concerned, who does not have a bit of worry? It took us a long time to realize that the power of a technology is proportional to its inherent out-of-controlness, its inherent ability to surprise and be generative. In fact, unless we can worry about a technology, it is not revolutionary enough. Powerful technology demands responsibility. With the generative power of robots, we need heavy-duty responsibility. We should be aiming to train our robotic children to be good citizens. That means instilling in them values so they can make responsible decisions when we let them go.


What is the most spiritual event we could imagine? A verifiable contact with an ET would rock the foundations of established religions. It would rekindle the question of God no matter what ET’s answers were. I think the movie *Contact* is the only movie where a theologian is a star. But we don’t have to wait for SETI to contact ET. We will do it by making ET; that is, by making a robot. In this way ET goes by another name: AI. People worried about AI being an artificial human are way off. AIs will be closer to artificial aliens. Your calculator is already smarter in arithmetic than any person in this room. Why aren’t we threatened by it? Because it is “other.” A different kind of intelligence. One superior to us, but one we aren’t particularly envious of. Most of the minds we make, including the smartest AI, will be “other.” Even in the possibility space of types of conscious minds, there are 2 million other possible species of intelligence than the one type we know (humans) — each one of them as unique and different as a calculator and a dolphin. There is no reason to make a clone of human intelligence because making the traditional version is so easy. Our endeavor in the coming centuries is to use all minds so far (artificial and natural) to make all possible new minds. Meeting these minds, I think, will be the most spiritual thing we can imagine right now.


I think technology has its own agenda. The question I am asking myself is: what does technology want? If technology is a child, a teenager even, it would really help to know what teenagers want, in general. What are the innate urges, the inherent biases, the internal drives of this system we call technology? Once we know what technology wants, we don’t have to surrender to all of these wants, any more than you surrender to any and all adolescent urges; but you can’t buck them all either. WILL these things technology wants happen? I believe they want to happen. What we know of technology is that it wants to get smaller (Moore’s Law), it wants to get faster (Kurzweil’s Law), and my guess is that technology wants to do whatever humans do (Kelly’s Law). We humans find tremendous value in other creatures, and increasingly in other minds. I see no reason why robots would not find humans just as valuable. Will robots be able, or even want, to do all the things that humans do? No. We’ll make them mostly to do what we don’t want to do. And what then do we humans do? For the first time robots will give us the power to say: Anything we want to do.

  • kingthamus

An interesting idea. Have you read “Technopoly,” by Neil Postman? I wonder what you think about the concept that individual technologies are embedded with certain ideas. I agree that technology “has an agenda,” so to speak. I am of the opinion, however, that we have already abdicated control, and I don’t agree that this is a positive move. I think what we are doing is trading the uncertainties of natural systems (over which we gained a certain kind of control, and to which Lao Tzu’s advice can be properly applied) for the uncertainties of artificial systems. We have developed a taste for abolishing limits, but what if the limits of natural systems turn out to be useful in their own right? For example, I don’t think I want to live in a world where people AND MACHINES can do “anything we want to do.” (And this has nothing to do with anything as condescending as “shock levels”; I’m not ignorant or unimaginative merely because I fail to be impressed with the technophilic vision of transhumanism, etc.) I also wonder what you think of the Amish, particularly the idea of making a conscious (a meaningful word here) effort to limit technology, with a view toward “appropriate technology” (what I like to think of as anthropocentric technology). With such an approach, a society may never transcend its humanity…but would that be so horrible? Again, just what is it that “technology wants,” and why are we so willing to go along? Anyway, Mr. Kelly, thanks for an interesting and important discussion; I’m eager to read this new book as well as your previous work, “Out of Control.” Keep it up!

  • Lizish

I’ll tell you Kevin Kelly, you’ve got it down. But the final question! What is the goal of AI? Where was our goal when we formed in that evolutionary womb? It was always in the stars. Wait, worse than that: it was in time. The physics of space-time and the evolutionary existence of creatures within an evolutionary environment can also be described within physics parameters.
Hmm, double helix style. I think you have anticipated the next strand of DNA. What does the body look like? I really, really don’t know…
But the shape is that of DNA. Space/time is like that of DNA.
Anyway, I’m a nobody, and a big fan of yours, and I finally have my own column. I would like to further discuss the metamorphosis of humans.
    Much Love-

  • David Cash

I believe that once we have standardized AI, or an AI brain, in the same way Microsoft standardized the OS, then, and only then, will AI be able to systematically evolve and become spiritual beings.

    • Kevin Kelly

      For sure, a standard OS for AI would accelerate the dissemination of intelligence, if not the IQ.

  • Jake Lockley

    “Maybe the concept of God is similar to the zero in mathematics. In other words, it’s a symbol that denies the absence of meaning – the meaning that is necessitated by the delineation of one system from another. In analog that’s God, in digital it’s zero.” – Ghost in the Shell, Standalone Complex

The delineation of one system from another. Self-organization. Emergence. The attenuating process of resolution.

    It’s my supposition that to understand God and the nature of man’s quest in the universe you just have to understand that process and that everything is along for the ride.

    Information became us and now we are trying to find a way to become information. It is the process, and the process is us. It’s inevitable.

It doesn’t matter if you are talking about jacking into a simulation or making a simulation real through cybernetics and shells for bodies in the real world – we all want to be happy and we are all going to die. Take either of those away and we are reduced to self-organizing information, whether it’s biological or inorganically organized – an unbroken machine.

  • Amit Dixit

People don’t feel threatened by a simple calculator because of our ability to shut it down any time we want, not because it is a different kind of intelligence.

    One question: If Humanity is Replaced, then What purpose will A.I serve??

  • thetruth

Yes, robots (AI) or computers will be able to think for themselves, but the day they can “pro-create” (e.g., make another robot because “THEY WANT TO,” or due to “them” having a “will”) very likely will be the day that God claims us back.

    However I am not even sure that a computer(robot) can be taught to have a true will or “heart.” God loved us before we could love him. Will a robot with AI ever be able to love us?

  • K.K.Padmanabhan

    The question is, ‘Who are we? Not just humanity, but the entire creation?’
    I am sure you will find the beginning of an answer at
    I am looking for a mind who can benefit by discussing with me deeply on such topics. One who can move in thinking from being exclusive to being all inclusive. One who can be both analytical and integrative at the same time.

  • andy jones

I hope AI is based on Windows ‘technology’; then it will be insanely easy to destroy it. God help us if Linux/Unix becomes self-aware.

  • Martin Ludvigsen

    I am not sure about your “what technology wants to do”. Following e.g. Bourdieu describing science as fields where there are struggles for influence, I see the science of technology development, in which I take part, continually refocusing attention to where we are going and what the important questions are to be explored.

In the ’80s a major shift occurred as we moved from a techno-centric to a context-centric view in most human-computer interaction (HCI). ‘Participatory design’, ‘ubiquitous computing’, ‘plans and situated action’ are all concepts popularized in this period and now dominating the way technology is perceived in HCI and interaction design. Take e.g. virtual reality and augmented reality. Here there was a shift from ‘let’s put everything into the box’ to ‘let’s overlay what’s in the box onto the context’. So technology does not _want_ to do anything. We want to do something with technology. The ‘We’ is the human collective, and ‘do’ is the design processes taking place in industry and research. The reason for animating technology as ‘wanting’ to do anything could be that the structure of arriving at the ‘wanting’ is too complex to understand, and makes sense as a collected intention only in retrospect. If we want to counter the recent trend toward Get Rich, Fast, forget evolution – participation in the struggle is needed from people able to see things from a slightly higher perspective. The collective will is developed by the people leaning deeply into the struggle of defining goals.

Designers and developers leaning into collective evolution will surely reshape technology’s path.

  • Soundacious

Stumbled my way here from your “Scan This Book” article in the Times. I find your meditations on the outer potentials and implications of tech to be boggling … and just the sort of thing I can’t stop reading right now. In particular, your trying to fit the spiritual into this equation/meditation is critical to the whole picture. I’m a Christian and a Liberal Arts major, well-versed in my Tower of Babel and my Frankenstein. These things weigh on my mind as I learn more about nanotech, AI and other world-shaping tools around the corner.

    I’ll be tracking down your earlier books at my local B&N, and Technium is already on my wish list.


  • Kid K

    What technology wants?
Embedded humanity is my guess. Like, instead of the tag “Intel inside”, there will be “Human inside”.

  • David Boshell

    Actually, it is bogus to call civilised humans a different species from “uncivilised” humans (aka “barbarians”). Is there any evidence of any meaningful difference in mentality at all in humans since the neolithic age? I would guess that you could take a young child from a typical neolithic tribe and bring him up in a city and be unable to distinguish him in any way from any other inhabitant. (Him includes her, here).
    Actually there is plenty of later proof of this in the careers of slaves from barbarian backgrounds in both Rome and China (and underestimating barbarians cost both empires dearly). The difference is not intrinsic but external in the technology and systems we have developed. Of course, the length of human existence is still pretty short, but there is really no need to make it shorter by false distinctions.

  • remotedevice

    An equally important question is, “What is a machine?” Barring some catastrophe, the boundaries between organism and machine, self and other, will gradually blur to the point where it will sometimes be difficult to tell the difference. Ubiquitous computing technologies — the next-next-gen of so-called Web 2.0 applications — will enable humans to colocate segments of their memories and even identities, moving beyond remote storage systems to “remote agency” systems. Where then will the boundary line be drawn? Is a software agent that intelligently acts on my behalf — based on an acquired understanding of my needs and desires — a mere robotic employee, or is it an extension of myself, a partner in the forging of my identity, a semantic feedback matrix that is uniquely my own? Like a book or other utterance, such an agent would be a partial representation of my inner being, but unlike traditional texts, it would be an active representation, capable of performing tasks or making additional utterances in a mode consonant with my projected identity. Furthermore, and most importantly, its active nature would enable a kind of collaboration between “it” and “me” in the evolution of my identity. Authors typically claim books as extensions of themselves; would the same hold true for a software or robotic agent that putatively contained and contributed to some essential aspect of selfhood?

  • Kartik Agaram

    “If we morph into robots in 2100, civilized humans will have lasted only 400 generations. That would be the shortest lifespan of a species in the history of life.”

    Calling *civilized* humans a species is a bit specious.

  • Kevin Kelly

    Yes, that is a bit of a cheat to call civilized humans a separate species, because biologically we are not since we can interbreed. But in most other respects — behaviorwise — we act like a different species.

    To remotedevice: For a whole book on the blurred boundaries between organism and machine, see my OUT OF CONTROL.

  • toaster

    uhura wanted her robot body too..
    this is old hat.

    star trek 1968

    remove the human failings, you remove the humanity.

    might as well be a toaster.
    might as well be a “self appointed” god.

    my guess is that we’d fail as both.
with so much possibility in the human condition and the history of humanity, why does technology keep you all falsely hypnotised by its allure?

    rome/ greece/china all had technology..and they all went dark age.

    nature, the natural world has provided all options over and over again. Only ones ego striving to show off as to be “unique” keeps these type of threads alive

    our machine AI…it will be just like us.
    robots— will be treated just like our children, just like us.

unless we make it otherwise.

  • andrew jones

“Your calculator is already smarter in arithmetic than any person in this room. Why aren’t we threatened by it? Because it is ‘other.’”

The original poster caught some of this, because we can shut it down, but it’s also that a calculator is a tool like a saw. I can’t chop wood with my bare hands, but technology can assist me in doing that; similarly, I can’t compute pi or a differential equation very easily. A calculator has a passive will; a saw with a mind of its own would be threatening (at first at least) because it’s possible its intelligence would malfunction or not have the empathy to understand it was hurting someone. As long as human questions and efforts are needed to make a technology work, we think of it more as an extension of ourselves. Also, the concept of “other” (if you’re using the Levinas definition) probably doesn’t apply to calculators. “For Levinas, the Other is not knowable and cannot be made into an object of the self.” A calculator is knowable and an extension of the self; on the other hand, the intelligence of a beehive or the other intelligences you laid out in Out of Control would seem to be in line with The Other.

    “No. We’ll make them mostly to do what we don’t want to do.”

This assumes, though, that humans will be better at thinking jobs (at least that seems to be the implication: we don’t want to sweep the floor, so a Roomba does it for us, etc.), but I don’t see a lot of proof of this. A well-trained neural network can day-trade better than most humans (better than 82% of humans, to be exact), hence it’s entirely possible that as A.I. evolves we’ll find robots that are better at things we want to do, say sex or thinking about the most compact and advanced way to package a microprocessor, etc. Much of this is simple Kurzweil: the singularity and embedding these types of intelligence in ourselves. Take for example the recent boom of software-based evaluation methods for music. While people will decry the “plastication” of music to an algorithm that can spot chart-toppers, the experience of being that algorithm must be incredible. We can only hear a song linearly as it pipes out of the speakers; on the other hand, the algorithm can analyze the entire song at once. If we could all listen to every moment of a song at once, we’d probably have an entirely different sense of music or self. It seems that technologies (like the androids that populate the film Blade Runner) are having experiences beyond what we have. If we could listen like software, the relation between 50 Cent’s Candy Shop and baroque concertos would be commonsensical.

  • Queue

    “One question: If Humanity is Replaced, then What purpose will A.I serve??”

    What purpose do ants and bees and grasshoppers serve? They all exist to do some sort of work. What purpose do humans serve? Humans exist to “tend the garden of Eden.” Humans exist to do work. And what is work?

    “Energy is one of the most fundamental parts of our universe. We use energy to do work. Energy lights our cities. Energy powers our vehicles, trains, planes and rockets. Energy warms our homes, cooks our food, plays our music, gives us pictures on television. Energy powers machinery in factories and tractors on a farm.

    Energy from the sun gives us light during the day. It dries our clothes when they’re hanging outside on a clothes line. It helps plants grow. Energy stored in plants is eaten by animals, giving them energy. And predator animals eat their prey, which gives the predator animal energy.

    Everything we do is connected to energy in one form or another.

    Energy is defined as: the ability to do work.”

AI would just become another function of energy. Instead of a self-sustaining human population, a new self-sustaining AI population: processing, calculating, creating, automating, doing what AI does.

Asking what its purpose would be is like trying to define what reality is. Is reality the numbers, or is reality the doing of the numbers? The data or the formulation of data? AI is serving its purpose (whatever purpose AI defines for itself) by doing whatever AI does (whatever it analyzes and processes for itself to do). Isn’t that how we humans function too?