The Technium

The Singularity Is Always Near


There’s a visceral sense that we are experiencing a singularity-like event with computers and the world wide web. But the current concept of a singularity may not be the best explanation for the transformation in progress.

The singularity is a term borrowed from physics to describe a cataclysmic threshold in a black hole. In the canonical use, as an object is pulled toward the gravitational center of a black hole, it passes a point beyond which nothing about it, including information, can escape. In other words, although an object’s entry into a black hole is steady and knowable, once it passes this discrete point nothing whatever about its future can be known. This disruption on the way to infinity is called a singular event – a singularity.

Mathematician and science fiction author Vernor Vinge applied this metaphor to the acceleration of technological change. The power of computers has been increasing at an exponential rate with no end in sight, which led Vinge to an alarming picture. In Vinge’s analysis, at some point not too far away, innovations in computer power would enable us to design computers more intelligent than we are, and these smarter computers could design computers yet smarter than themselves, and so on, the loop of computers-making-newer-computers accelerating very quickly towards unimaginable levels of intelligence. This progress in IQ and power, when graphed, generates a rising curve which appears to approach the straight up limit of infinity. In mathematical terms it resembles the singularity of a black hole, because, as Vinge announced, it will be impossible to know anything beyond this threshold. If we make an AI which in turn makes a greater AI, ad infinitum, then their future is unknowable to us, just as our lives have been unfathomable to a slug. So the singularity became a black hole, an impenetrable veil hiding our future from us.

Ray Kurzweil, a legendary inventor and computer scientist, seized on this metaphor and applied it across a broad range of technological frontiers. He demonstrated that this kind of exponential acceleration is not unique to computer chips but is happening in most categories of innovation driven by information, in fields as diverse as genomics, telecommunications, and commerce. The technium itself is accelerating in its rate of change. Kurzweil found that if you make a very crude comparison between the processing power of neurons in human brains and the processing powers of transistors in computers, you could map out the point at which computer intelligence will exceed human intelligence, and thus predict when the cross-over singularity would happen. Kurzweil calculates the singularity will happen about 2040. That seems like tomorrow, which prompted Kurzweil to announce with great trumpets that the “Singularity is near.” In the meantime everything is racing to that point – beyond which it is impossible for us to imagine what happens.

Even though we cannot know what will be on the other side of the singularity – that is, what kind of world our super-intelligent brains will provide us – Kurzweil and others believe that our human minds, at least, will become immortal, because we’ll be able to either download them, migrate them, or eternally repair them with our collective super intelligence. Our minds (that is, ourselves) will continue on with or without our upgraded bodies. The singularity, then, becomes a portal or bridge to the future. All you have to do is live long enough to make it through the singularity in 2040. If you make it till then, you’ll become immortal.

I’m not the first person to point out the many similarities between the Singularity and the Rapture. The parallels are so close that some critics call the singularity the Spike, to hint at that decisive moment of fundamentalist Christian apocalypse. At the Rapture, when Jesus returns, all believers will suddenly be lifted out of their ordinary lives and ushered directly into heavenly immortality without going through death. This singular event will produce repaired bodies and intact minds full of eternal wisdom, and is scheduled to happen “in the near future.” The hope is almost identical to the techno-Rapture of the singularity.

There are so many assumptions built into the Kurzweilian version of the singularity that it is worth trying to unravel them, because while a lot about the singularity of technology is misleading, some aspects of the notion do capture the dynamics of technological change.

First, immortality is in no way ensured by a singularity of AI. For any number of reasons our “selves” may not be very portable, or new engineered eternal bodies may not be very appealing, or super intelligence alone may not be enough to solve the problem of overcoming bodily death quickly.

Second, intelligence may or may not be infinitely expandable from our present point. Because we can imagine a manufactured intelligence greater than ours, we think that we possess enough intelligence right now to pull off this trick of bootstrapping. In order to reach a singularity of ever-increasing AI we have to be smart enough not only to create a greater intelligence, but also to make one that is able to create the next level. A chimp is hundreds of times smarter than an ant, but the greater intelligence of a chimp is not smart enough to make a mind smarter than itself. Not all intelligences are capable of bootstrapping intelligence. We might call a mind capable of imagining another type of intelligence but incapable of replicating itself a Type 1 mind. A Type 2 mind would be an intelligence capable of replicating itself (making artificial minds) but incapable of making one substantially smarter. A Type 3 mind would be capable of creating an intelligence sufficiently smart that it could make another generation even smarter. We assume our human minds are Type 3, but it remains an assumption. It is possible that we possess Type 1 minds, or that greater intelligence may have to be evolved slowly rather than bootstrapped instantly in a singularity.

Third, the notion of a mathematical singularity is illusory. Any chart of exponential growth will show why. Like many of Kurzweil’s examples, an exponential can be plotted linearly, so that the chart shows the growth taking off like a rocket. Or it can be plotted on a log-log graph, which has the exponential growth built into the graph’s axes, so the takeoff is a perfectly straight line. His website has scores of them, all showing straight-line exponential growth headed towards a singularity. But ANY log-log graph of such a function will show a singularity at Time 0, that is, now. If something is growing exponentially, the point at which it will appear to rise to infinity will always be “just about now.”

Look at this chart of the exponential rate at which major events occur in the world, called Countdown to Singularity. It displays a beautiful laser-straight rush across millions of years of history.
But if you continue the curve to now, instead of stopping 30 years ago, it shows something strange. Kevin Drum, a fan and critic of Kurzweil who writes for the Washington Monthly, extended this chart to the present by adding the pink section in the graph above.
Surprisingly, it suggests the singularity is now. Even weirder, it suggests that the view would have looked the same at almost any time along the curve. If Benjamin Franklin (an early Kurzweil type) had mapped out the same graph in 1800, his graph too would have suggested that the singularity was happening then, RIGHT NOW! The same would have held at the invention of radio, or the appearance of cities, or at any point in history, since – as the straight line indicates – the “curve,” or rate, is the same anywhere along the line.

Switching chart modes doesn’t help. If you define the singularity as the near-vertical asymptote you get when you plot an exponential progression on a linear chart, then you’ll get that infinite slope at any arbitrary end point along the exponential progression. That means that the singularity is “near” at any end point along the time line — as long as you are in exponential growth. The singularity is simply a phantom that will materialize anytime you observe exponential acceleration retrospectively. Since these charts correctly demonstrate that exponential growth extends back to the beginning of the cosmos, that means that for millions of years the singularity was just about to happen! In other words, the singularity is always near, has always been “near”, and will always be “near.”
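The arithmetic behind this phantom is easy to check. Here is a minimal sketch (the doubling time and vantage points are arbitrary illustrations, not anything from Kurzweil’s data) showing that for pure exponential growth, the share of all growth that happened “recently” is identical no matter where on the curve the observer stands:

```python
# Sketch: why an exponential always looks "about to explode" from its endpoint.
# For f(t) = 2**(t / T) with doubling time T, the share of today's total that
# accumulated during the last k doublings is 1 - 2**(-k), wherever "today" is.

def growth_share_in_last(doublings_elapsed: float, k: float) -> float:
    """Fraction of f(now) that accumulated during the final k doubling times."""
    f_now = 2.0 ** doublings_elapsed
    f_then = 2.0 ** (doublings_elapsed - k)
    return (f_now - f_then) / f_now

# Identical whether "now" is the 10th, 100th, or 1000th doubling:
for now in (10, 100, 1000):
    print(growth_share_in_last(now, 3))  # 0.875 every time
```

Wherever the observer sits on the curve, 87.5% of everything ever accumulated arrived within the last three doublings, which is exactly why the takeoff always seems to be “just about now.”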

For instance, if we broadened the definition of intelligence to include evolution (a type of learning), then we could say that intelligence has been bootstrapping itself all along, with smarter stuff making itself smarter, ad infinitum, and that there are no discontinuities or discrete points to map. Therefore, in the end, the singularity has always been near, and will always be near.

Fourth, and most important, I think the technological transitions represented by a singularity are completely imperceptible from WITHIN the transition. A phase shift from one level to the next level is only visible from the perch of the new level — after arrival there. Compared to a neuron, the mind is a singularity — it is invisible and unimaginable to the lower parts. But from the viewpoint of a neuron, the movement from a few neurons to many neurons to an alert mind will appear to be a slow, continuous, smooth journey of gathering neurons. There is no sense of disruption, of Rapture. The discontinuity can only be seen in retrospect.

Language is a singularity of sorts, as was writing. But the path to both of these was continuous and imperceptible to the acquirers. I am reminded of a great story a friend tells of some cavemen sitting around the campfire 100,000 years ago, chewing on the last bits of meat, chatting in guttural sounds. One of them says,

“Hey, you guys, we are TALKING!”
“What do you mean TALKING? Are you finished with that bone?”
“I mean we are SPEAKING to each other! Using WORDS. Don’t you get it?”
“You’ve been drinking that grape stuff again, haven’t you?”
“See, we are doing it right now!”

As the next level of organization kicks in, the current level is incapable of perceiving the new level, because that perception must take place at the new level. From within our emerging global culture, the coming phase shift to another level is real, but it will be imperceptible to us during the transition. Sure, things will speed up, but that will only hide the real change, which is a change in the rules of the game. Therefore we can expect that in the next hundred years life will appear to be ordinary and not discontinuous, certainly not cataclysmic, all the while something new gathers, until slowly we recognize that we have acquired the tools to perceive that new tools are present – and have been for a while.

When I mentioned this to Esther Dyson, she reminded me that we have an experience close to the singularity every day. “It’s called waking up. Looking backwards, you can understand what happens, but in your dreams you are unaware that you could become awake….”

A thousand years from now, all the 11-dimensional charts of that time will show that “the singularity is near.” Immortal beings and global consciousness and everything else we hope for in the future may be real and present, but still, a linear-log curve in 3006 will show that a singularity approaches. The singularity is not a discrete event. It’s a continuum woven into the very warp of extropic systems. It is a traveling mirage that moves along with us, as life and the technium accelerate their evolution.

UPDATE: Philip Winston crafted a marvelous way of visualizing the inherent phantom nature of a technological singularity. In a post he calls The Singularity is Always Steep, he maps out the problem. I’ve combined his images into one picture here. In the first square (upper left) the curve of progress shows the vertical Singularity in 30 years. But if you keep the curve going another 10 years, that earlier point, once vertical, becomes horizontal, and a new vertical point appears. Likewise you can extend the curve ahead another ten years, and then another, and all those former vertical Singularities sink into ordinariness. The only remedy is to plot the points on a log curve (lower right box), which suddenly reveals the truth: any point — in either the past, present, or future — along an exponential curve is a singularity. The Singularity is always near, always right now, and always in the past. In other words, it is meaningless.

  • Mark White

    So the singularity has already arrived, but it’s just a phase we are in on the way to some other singularity? Phase shifting does seem inherently more appealing and readily understandable as a theory for what’s happening… the “spike” seems a lot more pseudo-mystical and therefore off-putting for many of us.

  • Kevin Kelly

    As I suggest, humanity has passed through at least one, maybe two or more, phase shifts (or singularities) already, and we are in the middle of another, but it won’t appear discontinuous to us.

  • Eliezer Yudkowsky

    This misrepresents Vernor Vinge’s notion of a Singularity, which is the breakdown in our *model* of the future when the first smarter-than-human intelligence is created; because, if we knew exactly what they’d do, we’d be that smart ourselves. It is not about intelligence running off to infinity. It happens when you get the first smarter-than-human mind. The term “Singularity” is not in analogy to a mathematical singularity where a function approaches infinity as a limit, but in analogy to the breakdown that occurs in our *model* of the laws of physics when we try to figure out what happens at the center of a black hole. Vinge’s Singularity is an absolute threshold, not an effect of accelerating anything; it happens when the upper bound on intelligence that has held since the dawn of Homo sapiens in its modern form, the past fifty thousand years, ceases to hold.

    Kurzweil’s Singularity is actually diametrically opposed to Vinge’s Singularity, since Kurzweil believes that his accelerating trends in technology will continue to hold, the graph’s curve looking just the same, even after smarter-than-human intelligence is created. Kurzweil’s Singularity is a predictable, quantifiable acceleration; Vinge’s Singularity is a breakdown of the model.

    Vinge’s essential observation is that a future containing smarter-than-human minds is different *in kind* in a way that you don’t get from soaring skyscrapers, flying cars, even virtual reality. I think this is an incisive and important observation, and a proper rebuke to all futurists who mention augmented human intelligence in the same breath as nanotechnology or space colonization.

    Intelligence is not just another glamorous sparkling expensive shiny gadget like your MP3 player. Homo sapiens’ extraordinary powers of cognition are the foundation of our power, the strength that fuels all other human arts, the root from which grows all branches of the technology tree. When you make something smarter than a human, whether it’s an augmented human or an AI, you lift up the tree by its roots. You deliver a major kick to the foundations of the world. Vinge has justly pointed out that we ought to be paying attention to this.


    Eliezer Yudkowsky,

    Research Fellow, Singularity Institute for
    Artificial Intelligence.

  • Kevin Kelly


    I disagree. When I read Vernor’s original 1993 essay, it is clear to me that by Singularity he means what he says:

    “What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale…From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control.”

    He later talks about “runaway intelligence.” While he emphasizes the breaking of models and rules, Vernor clearly has in mind something very akin to Ray’s idea as the root cause. Both authors assume a bootstrapping intelligence, although both think about the consequences differently.

  • Barry

    I think there are some very good reasons why a technological singularity is unlikely that have nothing to do with charts and graphs. You don’t need to be a mathematician to see the fallacies inherent in this techno-Chimera… although, if you are one, you may find this essay insufferably tongue-in-cheek. I stand by my sweeping generalizations, though. Visit to find out what they are…

  • Jochen Topf

    I haven’t read Rudy Rucker or Vernor Vinge (yet), so I’ll only comment on the mathematical basis of the word “singularity”, because it seems there is some confusion. In mathematics “a singularity is in general a point at which a given mathematical object is not defined” (Wikipedia). The function y=1/x has a singularity at x=0, because it goes to +infinity when approaching 0 from the left, to -infinity when approaching 0 from the right, and is undefined at 0 itself. This is the kind of thing that supposedly happens in black holes.
    So when you look at the curve there is a very definite point, where “something strange happens”. All the rules go out the window, there is a discontinuity there.
    With exponential curves this is different. They don’t have this point. As Kevin says, there is no point on an exponential curve that looks any different from any other point on the curve. Exponential curves might look similar to the 1/x curve at first glance, but they are mathematically very different.
    Going back to human development, we can now argue whether we are on an exponential curve or a 1/x-type curve.
    But there is another thing: the issue of “scale”. Exponential curves are self-similar, but only if there is no scale. And we do have a scale here. If we bring in the human lifespan as an obvious scale, there *is* a huge difference between a development that takes 10,000 years and one that only takes 1 year. So while the development always seems to accelerate, it is very different whether this happens over the lifespan of a hundred generations, of a single human, or in the time it takes him to brush his teeth. So while mathematically the exponential curve looks the same at every point, to us humans it looks very different. There doesn’t have to be a “special point” anywhere on the curve, because we humans have special points. We have already moved past some of these points: only a hundred or two hundred years ago people could learn a job and expect to do the same job for the rest of their life. This is not true any more. We have passed the live-your-whole-life-the-same point. And if we think this through and believe in the ever-accelerating development, it is only a question of time until we have to change jobs daily to keep up. But we humans can’t do that, so *something* has to give, something has to happen. Maybe the acceleration will slow down, maybe superhuman machine intelligence can keep up with the development and we humans will live in a world that looks to the superhumans like the third world looks to us today.
    So while in mathematical theory the exponential curve doesn’t have a singularity, for all practical purposes it does, because humans are not scale-free.

  • Kevin Kelly


    That’s a very important point you bring up. I have to agree with you that the self-similarity of exponential curves is “broken” by the unchanging limits of human scale – in particular our lifespans and attention spans. As I think about it, your explanation may be a much better way to describe our situation than the singularity. I’d like to muse on it for a while. Thank you!
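The scale argument in the exchange above lends itself to a quick back-of-the-envelope calculation. In the toy model below, the starting interval (10,000 years) and the thresholds (a 40-year career, a single year) are hypothetical round numbers, not data from anyone’s chart:

```python
# Toy model of the "human scale" point: the exponential curve is self-similar,
# but humans are not. Assume each successive paradigm-shift interval is half
# the previous one, starting at 10,000 years, and count the halvings until the
# interval fits inside a career, then inside a single year.

def halvings_until_below(start_years: float, threshold_years: float) -> int:
    interval, n = start_years, 0
    while interval >= threshold_years:
        interval /= 2.0   # each interval is half as long as the one before
        n += 1
    return n

print(halvings_until_below(10_000, 40))  # 8 halvings: intervals fit in a career
print(halvings_until_below(10_000, 1))   # 14 halvings: intervals under a year
```

Only six halvings separate “change once per career” from “change every year.” The curve is the same everywhere, but the fixed yardstick of a human life makes those six steps feel like a discontinuity.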

  • Rik

    On the one hand you have Darwinian evolution, which is inherently biological. The idea of memes is based on genes, even though memes do not make sense within Darwinism, or neo-Darwinism for that matter.
    On the other hand you have what I’d like to call Lamarckian evolution, which is inherently artificial. If you think Lamarck is useless, then humanity’s domination of the planet within a thousand years or so apparently does not impress you. It happened, of course, because a peculiar bunch of barbarians invented science.
    The point of all this: I think the idea of the Singularity will be absorbed by the (not quite existing) field of artificial evolution.

    ps. what if there were mini-Singularities? Science as we know it works with explosions, which sounds Lamarckian to me. We might be living in such an explosion.

  • gs

    Recently, PZ Myers has revived this issue.

    Upon scrutiny, the Kurzweil/Drum plot reveals interesting properties and implications[1]:

    a. It resembles a log-log plot of a power law. Curiously, TSIN’s discussion of the figure does not draw attention to the power-law interpretation.

    b. There are constraints on the “time(s) to the next event” plotted on the y-axis: between any two events A and B, the sum of the ‘times to the next event’ equals the total time elapsed between A and B. That is obvious of course, but it has implications for the spacing of events that obey a power law or any formula for that matter. (Cf. Myers’ harsh criticism of Kurzweil’s selection of events.)

    c. The overall slope of the ‘line’ is more or less consistent with a value of -1. A(n approximately) unit slope is noteworthy (each event is half as close to the Singularity as its predecessor), but why not some other number? I won’t give equations, but basic engineering math suffices for handwaving arguments that the number of events will multiply logarithmically as the Singularity is approached. A logarithmic Singularity is not Kurzweil’s preferred scenario, but he mentions it on p. 495.

    d. Like commenter dreish at, I am unpersuaded by Kurzweil’s criticism of Drum’s extrapolation. Extending a line segment over a distance smaller than its length is plausible on its face. Kurzweil should make a better argument.

    e. Drum’s extrapolation raises a question that neither his post nor Kurzweil’s response addresses. It seems reasonable to try to use ‘countdown’ data to forecast when the Singularity will occur. If that’s not the case, why not? (Per ‘dreish’, consider the time axis, i.e. “years before today’s date as of writing”. If the ‘date of writing’ were the Singularity onset date, then events should have multiplied and clustered as the Singularity approached in real time; the x-axis of a log-time plot would have to be extended to display the real-time clustering. For a date of writing slightly before the Singularity onset date, I’d expect Kurzweil’s existing log-log plot to depart from a straight line when the time between events becomes smaller than the time to the Singularity. [For a date of writing after the Singularity, the straight line would change to a "waterfall" near the onset time.] If the data is good enough, the deviation from a straight line might yield an estimate of the time of the Singularity. Kurzweil does not discuss the issue. In fact, he describes the World Wide Web as an ‘event’ but, surprisingly, does not predict the time to the event that will follow the WWW.)

    f. Afterthought. Note that if, as a function of the time to the Singularity, the time between events scales with a power less than one, the total number of events will necessarily be finite. (For sufficiently small positive x, x to a power is less than x if the power is less than one. Thus, it would be impossible to pack an infinite number of events close to each other as the Singularity approaches.) It’s not clear how to interpret such a situation.

    There is more to the Countdown plot than meets the eye at first glance. It’s not entirely apparent whether the chart is meant to be descriptive or quantitative. If it’s quantitative, it points to more information than has been extracted to date.
    [1] NB: in TSIN, the chart that Drum noticed is followed by over 600 pages including 100 pages of annotated footnotes. IMO the Drum-Kurzweil kerfuffle is interesting precisely because new things can be said without tangling with the interlinked sprawl of the overall book.
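The unit-slope observation in point (c) above can be verified with a toy series. The event spacing below is constructed, not Kurzweil’s data: each event is placed half as far from a hypothetical singularity date as its predecessor, and the log-log slope of inter-event gap against time remaining comes out exactly 1 (the sign depends only on axis orientation):

```python
import math

# Construct events so each is half as close to a hypothetical singularity as
# its predecessor (measured in years before that date), then compute the
# log-log slope of "time to next event" vs. "time remaining" between
# consecutive events.

times_before = [10_000 / 2 ** n for n in range(10)]   # 10000, 5000, 2500, ...
gaps = [a - b for a, b in zip(times_before, times_before[1:])]

slopes = [
    (math.log(g2) - math.log(g1)) / (math.log(t2) - math.log(t1))
    for (g1, g2), (t1, t2) in zip(zip(gaps, gaps[1:]),
                                  zip(times_before, times_before[1:]))
]
print(all(abs(s - 1.0) < 1e-9 for s in slopes))  # True: unit slope throughout
```

Because both the gaps and the times remaining shrink by the same factor of two from event to event, every segment of the log-log plot has the same slope, which is why the plotted “line” looks straight.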

  • k.Ferrero

    Eliezer Yudkowsky makes a critical point here in re-emphasising from where the point of reference is originating here.
    Is there some way in which intelligence can break free and wreak havoc? Or improve matters generally?
    Or is intelligence fundamentally constrained by the user and the user’s interests?

    • Kevin Kelly


      The honest answer is: we don’t know.

  • Michael Ferguson

    A geometric progression, though easy to grasp intuitively, is the wrong analogical model. One should think in terms of a system of endogenously related time-series equations approaching a point of instability. That is going to happen. Several times. However, I have a real problem with singularitarians and transhumanists. If Murphy thought that anything that could go wrong would, these guys seem to believe that everything that can go right, will. Equally foolish. Remember Gerard K. O’Neill? Where are those early 21st-century space colonies anyway?

  • Kevin Kelly

    Ray Kurzweil, whose name I invoke and whose theories I criticize (and praise) in this essay wrote a thorough response to my comments. He asked me to post them here, which I am about to do. I thank him for taking the time to reply with grace. I will post my reply to his in the next message.

    Ray Kurzweil writes:

    Allow me to clarify the metaphor implied by the term “singularity.” The metaphor implicit in the term “singularity” as applied to future human history is not to a point of infinity, but rather to the event horizon surrounding a black hole. Densities are not infinite at the event horizon but merely large enough such that it is difficult to see past the event horizon from outside.

    I say difficult rather than impossible because the Hawking radiation emitted from the event horizon is likely to be quantum entangled with events inside the black hole, so there may be ways of retrieving the information. This was the concession made recently by Hawking. However, without getting into the details of this controversy, it is fair to say that seeing past the event horizon is difficult (impossible from a classical physics perspective) because the gravity of the black hole is strong enough to prevent classical information from inside the black hole getting out.

    We can, however, use our intelligence to infer what life is like inside the event horizon even though seeing past the event horizon is effectively blocked. Similarly, we can use our intelligence to make meaningful statements about the world after the historical singularity, but seeing past this event horizon is difficult because of the profound transformation that it represents.

    So discussions of infinity are not relevant. You are correct that exponential growth is smooth and continuous. From a mathematical perspective, an exponential looks the same everywhere and this applies to the exponential growth of the power (as expressed in price-performance, capacity, bandwidth, etc.) of information technologies. However, despite being smooth and continuous, exponential growth is nonetheless explosive once the curve reaches transformative levels. Consider the Internet. When the Arpanet went from 10,000 nodes to 20,000 in one year, and then to 40,000 and then 80,000, it was of interest only to a few thousand scientists. When ten years later it went from 10 million nodes to 20 million, and then 40 million and 80 million, the appearance of this curve looks identical (especially when viewed on a log plot), but the consequences were profoundly more transformative. There is a point in the smooth exponential growth of these different aspects of information technology when they transform the world as we know it.

    You cite the extension made by Kevin Drum of the log-log plot that I provide of key paradigm shifts in biological and technological evolution (which appears on page 17 of The Singularity Is Near). This extension is utterly invalid. You cannot extend in this way a log-log plot, for just the reasons you cite. The only straight line that is valid to extend on a log plot is one representing exponential growth when the time axis is on a linear scale and the value (such as price-performance) is on a log scale. Then you can extend the progression, but even here you have to make sure that the paradigms to support this ongoing exponential progression are available and will not saturate. That is why I discuss at length the paradigms that will support ongoing exponential growth of both hardware and software capabilities. But it is not valid to extend the straight line when the time axis is on a log scale. The only point of these graphs is that there has been acceleration in paradigm shift in biological and technological evolution.

    If you want to extend this type of progression, then you need to put time on a linear x axis and the number of years (for the paradigm shift or for adoption) as a log value on the y axis. Then it may be valid to extend the chart. I have a chart like this on page 50 of the book.

    This acceleration is a key point. These charts show that technological evolution emerges smoothly from the biological evolution that created the technology-creating species. You mention that an evolutionary process can create greater complexity – and greater intelligence – than existed prior to the process. And it is precisely that intelligence-creating process that will go into hyperdrive once we can master, understand, model, simulate, and extend the methods of human intelligence through reverse-engineering it and applying these methods to computational substrates of exponentially expanding capability.

    That chimps are just below the threshold needed to understand their own intelligence is a result of the fact that they do not have the prerequisites to create technology. There were only a few small genetic changes, comprising a few tens of thousands of bytes of information, that distinguish us from our primate ancestors: a bigger skull (allowing a larger brain), a larger cerebral cortex, and a workable opposable appendage. There were a few other changes that other primates share to some extent, such as mirror neurons and spindle cells.

    As I pointed out in my Long Now talk, a chimp’s hand looks similar but the pivot point of the thumb does not allow facile manipulation of the environment. In contrast, our human ability to look inside the human brain and to model and simulate and recreate the processes we encounter there has already been demonstrated. The scale and resolution of these simulations will continue to expand exponentially. I make the case that we will reverse-engineer the principles of operation of the several hundred information-processing regions of the human brain within about twenty years, and then apply these principles (along with the extensive tool kit we are creating through other means in the AI field) to computers that will be many times (by the 2040s, billions of times) more powerful than needed to simulate the human brain.

    You write that “Kurzweil found that if you make a very crude comparison between the processing power of neurons in human brains and the processing powers of transistors in computers, you could map out the point at which computer intelligence will exceed human intelligence.” That is an oversimplification of my analysis. I provide in the book four different approaches to estimating the amount of computation required to simulate all regions of the human brain, based on actual functional recreations of brain regions. These all come up with answers in the same range, from 10^14 to 10^16 cps for creating a functional recreation of all regions of the human brain, so I’ve used 10^16 cps as a conservative estimate.

    This refers only to the hardware requirement. As noted above, I have an extensive analysis of the software requirements. While reverse-engineering the human brain is not the only source of intelligent algorithms (and, in fact, has not been a major source at all until just recently, because we did not have scanners that could see into the human brain with sufficient resolution), my analysis of reverse-engineering the human brain is along the lines of an existence proof that we will have the software methods underlying human intelligence within a couple of decades.

    Another important point in this analysis is that the complexity of the design of the human brain is about a billion times simpler than the actual complexity we find in the brain. This is due to the brain (like all biology) being a probabilistic recursively expanded fractal. This discussion goes beyond what I can write here (although it is in the book). We can ascertain the complexity of the design of the human brain because the design is contained in the genome and I show that the genome (including non-coding regions) only has about 30 to 100 million bytes of compressed information in it due to the massive redundancies in the genome.
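    The arithmetic behind that compression estimate can be checked directly: roughly 3 billion base pairs at 2 bits each gives about 750 MB of raw sequence, which the claimed redundancy shrinks by an order of magnitude or more. A minimal sketch, using the round numbers quoted above:

```python
base_pairs = 3e9                # approximate length of the human genome
raw_bytes = base_pairs * 2 / 8  # 2 bits per base pair: ~750 MB uncompressed
compressed = 50e6               # within the 30-100 million byte estimate
print(raw_bytes / 1e6)          # → 750.0 (MB of raw sequence)
print(raw_bytes / compressed)   # → 15.0 (implied compression factor)
```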

    So in summary, I agree that the singularity is not a discrete event. A single point of infinite growth or capability is not the metaphor being applied. Yes, the exponential growth of all facets of information technology is smooth, but is nonetheless explosive and transformative.

  • D. Nightshade

    A simple discontinuity in human civilisation might be the destruction of all human life. The potential for such an event, enabled by advancing technology, is increasing exponentially. Perhaps the only thing that may check man’s “natural” tendencies is non-human supervision. It is interesting to see AI’s potential developing alongside man’s destructive capability. Which one will be first to cross the “finish” line?

  • Kevin Kelly

    I am replying primarily to Ray’s summary. Ray Kurzweil says:

    “So in summary, I agree that the singularity is not a discrete event. A single point of infinite growth or capability is not the metaphor being applied. Yes, the exponential growth of all facets of information technology is smooth, but is nonetheless explosive and transformative.”

    I agree with this statement. However I would not use the word singularity to describe it. Exponential growth (of X) is explosive and transformative. Yes, but that is not telling us much. Missing from this description are all the qualities that make Ray’s argument interesting: the exact predictions of timing, the suggestions of how it changes everything, and the hints of transformation.

    Ray says “There is a point in the smooth exponential growth of these different aspects of information technology when they transform the world as we know it.”

    This is another way of saying “more is different.” Again I would agree that more is different, but the interesting part is determining at what point more is different, and how different.

    I think Ray suggests one threshold is the complexity of the human mind. That when we reach or exceed that level of “more” everything will transform. I agree with that.

    My point, which I think Ray agrees with — but maybe not others — is that we’ll sail through this transformation without really noticing it. That it will look transformative primarily in retrospect.

  • dreish

    Your math is a little off. The graph would not be a straight line in 1800. Try it and see.

    Also, see my response to your argument here:

  • Dwight Jones

    Technically, the eye needs 30 frames/sec and no more. After that, it wants information.

    Can we say that humans now have technology enough, vis a vis our glacial pedestrian genotype/phenotypes, and that now we need our institutions to get to work for us?

    Science is one such edifice, as is the family, but humans have taken great pains to build a church edifice. If the latter is restocked with humanist texts, might that not lead to a singularity in the destiny of our species? The exchange of weapons bankruptcy for Zen wealth? Is that not totally knowable, in the short term?

    The future is not a speeding freight train that will choose a dirt road – we wear the caps and we switch the signals.

    But, as Blake said “To be an error and to be cast out is a part of God’s design.” Not all of us will hear the call “all aboard!”

  • Andres

    “When I mentioned this to Esther Dyson, she reminded me that we have an experience close to the singularity every day. ‘It’s called waking up. Looking backwards, you can understand what happens, but in your dreams you are unaware that you could become awake….’ ”

    There are those of us who, through evolution, have become as aware in our dreams as you are at this very moment, as you read this very sentence.

    I feel your comparison with the coming singularity and waking from a dream is not that far off. I disagree with you and Dyson, however, on the idea that we are unable to become aware of a singularity prior to it occurring, especially when compared to the context of dreaming.

    There are those of us who are far awake, in this cosmological dream we call life. I believe Kurzweil is such a man. And he is waking others.

    I can describe the feeling of reading Kurzweil’s evolution of the Universe in The Age of Spiritual Machines: as powerful a feeling as when I first opened my conscious eyes in a dream.

    The Universe has always been approaching the singularity, and the size of the singularity will always change exponentially. There are significant stages in the life of the singularity that are important, but I believe these stages can be predicted before they happen, can be noticed as they happen, and can also be reflected upon after they happened.

    The returns are what I’m interested in, not a single epoch.

    “As we see ourselves so we act, and as we act so we become.” (Look at yourself: what would you like to be? Wake up, the universe does have a surprise for you.)

  • keith

    I see what Kelly is saying about the singularity, that it always goes to “now”; but that observation completely misses Kurzweil’s point. Kelly focuses on the fact that the x-axis of the log-log plot regresses to now. Of course it does. What is fascinating in that theory is the reality of the y-axis. In the time of the “earliest cities,” or in “Ben Franklin’s time,” the y-axis value (the time until the next paradigm shift) was still 10,000 or 100 years, respectively. Of course, since it’s a log-log graph, the y-axis will never reach zero, MATHEMATICALLY. But it will reach zero, realistically. That is, it will become so low that we as humans won’t have enough time to digest one event before the next. To show my point, I remind you of Zeno’s paradox: if a man walks toward a wall, going half the remaining distance at a time, he can mathematically never reach the wall, as the “halfway” point only becomes infinitely smaller. Same with approaching the singularity: mathematically we can never reach the “wall,” the time when the log-log plot’s y-axis goes to zero. But realistically we’ll hit the wall when the time is so short it feels like zero.
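    keith’s Zeno analogy can be made concrete: the interval between paradigm shifts mathematically never reaches zero, but it drops below any practical threshold after finitely many halvings. A minimal sketch (the starting interval and the one-day “feels like zero” floor are arbitrary assumptions):

```python
interval_years = 10_000.0  # wait until the next paradigm shift (assumed)
floor_years = 1 / 365.0    # arbitrary "feels like zero" threshold: one day
steps = 0
while interval_years > floor_years:
    interval_years /= 2    # each shift halves the wait for the next one
    steps += 1
print(steps)  # → 22: the "wall" is hit after a finite number of steps
```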

  • Jake Cannell

    If you zoom out to a big picture view of the universe’s historical development over time (as in Carl Sagan’s cosmic calendar) there is a clear trend. The trend seems to be one of exponentially decreasing timescales between significant events. The choice of events is arbitrary, but the temporal compression is not. An exponential model is not a good fit for this data, its actually better modeled by a geometric series, which does approach infinity in finite time – so a true mathematical singularity, and probably a cosmological singularity along the lines of that which created our own universe is what the data points to. (more here)
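    The difference Jake points to can be illustrated numerically: if each inter-event gap is a fixed fraction of the previous one, the gaps form a geometric series whose total is finite, so the events pile up at a definite calendar date instead of merely accelerating forever. A sketch with made-up parameters:

```python
first_gap = 1000.0  # years between the first two events (made up)
r = 0.5             # each gap is half the previous one (made up)

# Infinitely many gaps sum to the finite total first_gap / (1 - r).
total = sum(first_gap * r ** n for n in range(200))
limit = first_gap / (1 - r)
print(round(total, 6), limit)  # → 2000.0 2000.0
```

    An exponential, by contrast, grows without bound but never reaches infinity at any finite time; only the geometric-compression pattern yields a true mathematical singularity at a finite date.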

    Barry, one of your ‘sweeping generalizations’ is correct to a degree: the exponential increase in transistor density described by Moore’s Law is accompanied by an unsustainable exponential increase in circuit complexity (and thus design team size), foundry cost, and fraction of the total GDP devoted to the semiconductor industry. If it doubles just five or six more times, we will essentially be devoting all of the earth’s resources and intellectual capital to creating faster computers. However, you forget that process advances reduce power consumption, and power efficiency will only improve going forward as parallelization increases.

    The likely future leading toward a Singularity is Moore’s Law slowing (but still growing exponentially) until around 2020 or so, the end of the road predicted by the Semiconductor Industry Roadmap. Well before that time, semiconductor transistor density will have exceeded synaptic density in the brain (this is actually just about to happen, by around 2011-2012). This means CMOS technology will be advanced enough to reproduce circuits as complex as the brain in a similar total area (the cortex is about one square meter if flattened out).

    This means that CMOS technology will soon be more advanced than the brain in the critical measure of information density (or miniaturization), and it will be possible in theory to reverse-engineer the brain and create a similar system in circuitry. That is still a huge challenge, but research in those fields is progressing rapidly and it’s just a matter of time. Kurzweil talks about all kinds of fascinating new process technologies beyond current CMOS, but this is all beside the point: CMOS will not be abandoned until it completely loses steam, and CMOS will be more than sufficient to create an artificial cortex.

    Once we have a complete blueprint for the cortex, we can simulate it: first on supercomputers, then on specialized ASICs, and finally as a direct mapping onto hybrid digital/analog circuitry that is very efficient and directly emulates the brain. Now that by itself may not seem earth-shattering, but what is earth-shattering is that the brain’s circuitry cycles at about 100 to 1,000 Hz, while CMOS circuitry can cycle in the gigahertz.

    So once we have a blueprint for an artificial cortex, we will be able to keep improving that design with specialized hardware and run it faster and faster. Running at just megahertz to keep power consumption low, a neuromorphic cortex could think one thousand times faster than a human brain. Drawing far more power and running in the gigahertz, it’s even possible in theory to run it a million times faster than a human brain, with proper cooling.

    Having posthumans that think thousands of times faster than us will not itself be the Singularity, but it will bring it about very quickly, as they create the next substrate on their own timescales, probably some form of terahertz molecular nanotech computing. At that point, if the cosmic-calendar prediction is correct, they will rapidly transition to black-hole-like entities (perhaps instantly from a biological human’s perspective, though for an upload there will be a near-infinite amount of time at any point) and create new universes. That is the Singularity.

  • Kristin K’eit

    While I am digesting summaries, interpretations and comments of Kurzweil, and recalling Einstein’s phenomenal ideas of beam-o-light travel, I might only suggest that the posting of various comments and blogs be reversed in date order. Albeit a minor detail, it helps considerably in the logic of postings, particularly for those new to the arena, in the transition from reading the initial article–”The Singularity is Always Near”–to jumping into the comments, only to find that the last is the first, and the first is the last. But then I wonder, is such a reversal possible? If not now, I’m sure it will be soon.

    Last, I would request that R Newton and DRomeo offer some suggested readings regarding “soul!”

  • Krish

    IMHO, we are splitting hairs over the word “Singularity”. The meaning of a word changes depending on how people use it. Remember the terms “Business Process Reengineering” (BPR) or “Business Intelligence” (BI) that Sears used to improve performance, only to be bought out by K-Mart? The new word now is “Innovation”. I always thought Innovation meant never been done before, like Invention… but people use it everywhere to mean ordinary ideas.

    Do chimps know we exist? They sure do. Did they consciously do something that evolved into humans? We will never know. But we know what is coming, like VCRs, or DVD players, or perpendicular recording. Are we aware of it? Sure. Is it continuous? No, because I still have the old drive below the new drive.

    Perhaps Singularity is what we experience every day, but that does not mean we cannot extrapolate new technology through the veil. Can we make computers smarter than us? Why not? That could be part of natural evolution. Who says evolution has to be biological? Stars are not biological, nor are photons.

    Perhaps technology could evolve to the point where we can produce computers that last a million years (MTBF), redundant and networked, running on solar power, that store your mind just before your death. You could live on in a virtual earth (like the Matrix?) while your grandchildren visit you through an interface… that day could come by 2040, unless we blow ourselves up.

  • Michael Gogins

    Some of those discussing a technological singularity are philosophically naive and import some presuppositional “ism”, usually nominalism, into the discussion. This “ism” then determines the interpretation of the outcome. It is a circular argument.

    There are major unknowns that must be known in order to interpret the nature of the technological future. First and most critical: is Nature essentially a Turing machine (i.e., is the extended Church-Turing thesis true)? If so, there perhaps could be a technological singularity in which living human beings become long-lived (of course not truly immortal, since even our universe must end) through uploading. Maybe. If not, then that can NEVER happen. Second, even if Nature is a Turing machine, is it possible to measure a human identity? Even if it is possible in principle, is it possible in practice?

    Frankly, I doubt the strong Church-Turing thesis, and even more, I doubt the possibility of measuring a human identity.

    I am convinced that the “singularity” represents a form of idolatry, in which the religious sensibility that has been submerged by a scientific world-view (I would say, pseudo-scientific) re-emerges in pseudo-scientific garb. I urge those following this thread to take into account the historian of religion Mircea Eliade’s important idea of the “second fall,” i.e., the cultural consequences of pervasive secularism.

    By the way, I am a strict Darwinian, and I hope, and feel confident, that humanity may succeed in substantially extending its active life span. I also think feasible, and hope to see, an expansion of the human race into outer space.

    But I do think that the eschatological shadows within this singularity discussion should be understood for what they are.

  • John Smith

    Ahem. I’m very impressed by the intelligence and level of discussion here, but as a plebian representative of the “real world”, and an amateur futurist, I think there’s a vast series of relevant issues not being talked about. ***

    For just one example, 9/11 has created a paradigm shift in American (primarily) and world (secondarily) consciousness about factors that may obviate what is being discussed above. ***

    As technology, information science, interconnected complex systems, and population grow and accelerate, so does the ability of a wider number of players to create weapons of mass destruction. And the perceived desire or need to use them. ***

    Horizontal proliferation of the means to create weapons of mass impact (primarily biological and chemical, since nuclear bomb materials like U-235 and plutonium are still, at present, more difficult to produce) therefore creates an increasing likelihood that, despite all the theorization and speculation noted above, the “singularity” of the means, materials, and motives to use WMDs will be accelerated in parallel with the possible exponential increase in the factors noted in others’ comments here. ***

    Given current and short-term foreseeable world population growth trends, combined with trends such as the geographical spread of human population and its related material needs (approaching if not already exceeding the “carrying capacity” of the planet), ecological damage, increasingly complex systems of resource allocation and management, ethnic/religious/political conflict, etcetera (the usual litany of worrisome factors), we may find that the “singularity” that suggests so much hope, and the potentially accelerated development of both human intelligence (via genetic manipulation of the DNA structures and relationships that give rise to higher levels of intelligence) and AI, turns out instead to be the horrific mass prelude to actual human extinction by our own hands, as a result of the negative factors noted here, which are also increasing and accelerating exponentially. ***

    We are in a race. Given our primate heritage, and the current and near-term prospects of increasingly conflicted world cultures, I suspect humans may be the losers in this race toward the future. Between 2030 and 2060 is a likely timeframe. I am pragmatically pessimistic, sad to say. ***

    I would appreciate greatly hearing from anyone participating in this discussion to comment about my opinions here. Am I wrong? What other factors involved am I missing? Please let me know. I’ll be back to respond. ***


  • Kevin Kelly

    In the long run, the epic of evolution dominated by Darwinian evolution (the last 4 billion years) will be seen as the Darwinian interlude, as Lamarckian evolution and horizontal gene transfer overtake biology via genetic engineering.

  • Atheist

    Has anyone here even heard the truth about exponential anything?


    Process that one for a while.
    Think deeply about that simple statement.

    Ever heard of peak oil?
    Ever heard of Global Warming?

    Go look into the mirror and repeat after me,


  • smb

    Two comments: (1) Some critics either ignore or overlook the incrementalism in Kurzweil’s outline. His projections up to now have been uncannily accurate because they are simply the logical outcomes of current technological trends. Some have implied that the Singularity is simply not possible since it is unclear how we get from Point A to Point Z. But the almost boring step by step advances that take one from Point A to Point B then Point C eventually end up at Point Z. His is an incremental approach.

    (2) Poor Ray has been called a “materialist” as if this disqualifies him from the discussion. The source of all the “human” qualities cited above – intelligence, awareness, emotions, perception, the mind itself – is the brain, a powerful machine. No mystical intervention is required for these qualities to emerge. Human-designed intelligence may require sensory experience to become human, but the bottom line is that we, like computers, ARE material creatures.

  • Vidyardhi nanduri


    defines science of philosophy, philosophy of science and cosmos yoga
    the study of the physical universe, its structure, dynamics, origin and evolution
    a metaphysical study into the origin and nature of the universe
    the borderland between natural science, natural philosophy and the Vedas
    COSMOLOGY helps the integration of the metaphysical universe into the divine cosmos and the cosmic function of the universe through
    sristi (creation), stithi (stability) and laya (merger)

  • DRomeo

    (Some of this thread reminds me of Frank Tipler’s book The Physics of Immortality.)

    I feel that the conversation here may be served by some Soul. It seems that much of it revolves around the assumption that intelligence is a material phenomenon. In reference to the point on Rapture, many spiritual teachers throughout the ages have said that the “singularity” is Here and NOW.

    The Universe is already infinitely intelligent. Perhaps as we become more AWARE we will become more conscious of this intelligence and it will be reflected back to us in our experience.

    Also, as evidenced by the analysis of various graphing methodologies, I think there may be another assumption central to the conceptualizations discussed here: intelligence as a function of time.

    What if time is a function of intelligence?

    What if as the more aware we become, the more our experience of time changes–analogous to Einstein’s thought experiment of riding on a beam of light. The past and future are revealed as ghosts of an eternal NOW.

    Do you remember when Luke is on his final bombing run to blow up the Death Star and Ben tells him to turn off his computer and “use the Force?”

    I think that is some good advice ;)

    Perhaps the comments I have shared here are more relevant to a discussion on wisdom.

    Kevin, Out of Control is really cool.


  • Randall Newton

    Kevin, I’ve followed Kurzweil’s work off and on over the years, and yours as well. The Technium concept seems to be a natural extension of your past work. I like your interpretation of the singularity as phase shifting. Kurzweil’s fascination with mapping the human mind onto an AI structure leaves me cold, because humans are not body and mind, but body and mind and spirit. Kurzweil mechanistically thinks we will only be leaving the body behind, but what he really advocates (without knowing it, of course) is leaving behind 2/3 of what makes us human. Here’s a crude metaphor. The body is the hardware, the mind the software, and the spirit the metadata about both. (I consider “soul” as mind + will + emotions.)

  • Kevin Kelly

    “What if as the more aware we become, the more our experience of time changes–analogous to Einstein’s thought experiment of riding on a beam of light. The past and future are revealed as ghosts of an eternal NOW.”

    Interesting thought. How would we test it?

  • Rob Jellinghaus

    I’m not sure why Kevin maintains that “we’ll sail through this transformation without really noticing it.” It seems to me that we are noticing it, all the time. What is the 20th century explosion in science fiction, if not a confrontation with the impending possibilities? What about globalization itself and the economic shifts it’s imposing — are those changes that we’re not noticing?

    I would consider the Unabomber an example of a total anti-Singularitarian — someone who is all too aware of the scale of the changes at hand, and who is unable to reconcile with them. Ted Kaczynski absolutely refused to sail through the changes. Over time, more will similarly start wanting to stop the world and get off.

    Since all the technological trends and changes of our world now are the trend lines that lead directly to the singularity, it’s quite reasonable to state that all the reactions to those trends and changes are just the precursors of the much larger reactions that will result from the much larger changes to come.

  • Aron

    When discussing whether the singularity contains a discontinuity, it seems that the qualitative tipping point is whether we generally reverse roles and become tools of our super-intelligent spawn. In that scenario it is quite plausible that we would have little to no understanding of the change that rapidly surrounds us.

    It seems to me that unless we become indistinguishable from the machines, this tipping point will occur.

  • Harv Griffin

    Been a fan of COOL TOOLS for years. Just discovered THE TECHNIUM. Nice counterpoint to Kurzweil’s “Spiritual Machines” & “Singularity.” I like your quote/reply: “What if as the more aware we become, the more our experience of time changes–analogous to Einstein’s thought experiment of riding on a beam of light. The past and future are revealed as ghosts of an eternal NOW.” I would have said: The Meaning is the Mirage because the Message creates its own space-time continuum.

    My take? The short answer is that I think entropy will prevent technological hell, whether of the green goo, terminator, or smart-machines-babysitting-the-silly-humans variety. Entropy will also block technological heaven, immortality, and Kurzweil’s benevolent Borg vision of 10%-human/90%-machine superintelligence. Talk to any maintenance guy in any factory: he may not know what the second law of thermodynamics is, but he’ll be quick to tell you that the more complicated the machine, the quicker it’ll break down, and the more high-powered support people will be needed to keep that machine up and running.

    Besides, just read James Gleick’s FASTER, and you’ll catch the drift that we’ve already passed through the “singularity,” into a future beyond anyone’s control: the acceleration of everything always, no brakes!

    The long answer? Why is LIFE valuable? Death! Where does AWARENESS, the precursor to INTELLIGENCE, come from? The struggle between life and death! Kurzweil’s core argument is seductive: given sufficient computer speed and complexity, the computer will become self-aware and begin designing and producing computer children in the form of faster, smarter, more intelligent baby bit-brains. While this is happening, nanotechnology will redesign the environment in conformity with all our good dreams, and our minds will be augmented by successively more invasive technological implants until superhuman is the norm and we dance off into the eternal shibumi, uploading and downloading our minds into multiple and various vessels according to whim and fashion: “My 256 Harveys can beat up your 128 Kevins!”

    Perhaps I simply lack imagination. Maybe my thoughts are stuck in old ruts, unenlightened by the right analogy. I get the Garry Kasparov analogy. I get that faster CPUs and sharper software will enable computers and their robot “fingers” to best us brute humans in any and all of our games, any field of endeavor we can rigorously define by precise rules. I get that computers will shortly pass the Turing Test: questioners will not know if they are talking to a machine or a human. But what I don’t get is the part where the IT support guy is no longer a part of the equation. No matter how smart or how large the computer, I keep seeing an army of admins and hackers and software engineers behind the scenes pulling the strings, giving the computer its marching orders.

    Help me out here. Without the issue of Life and Death, how can there be AWARENESS? I get that I can use tools to design better tools: I can use a piece of chalk to design a pencil to design a pen to design a word processor. But that’s always me and the tool, designing the next tool. What I don’t get is where an Apple suddenly designs a Cray while I go to refill my coffee, and starts using me as its tool.

  • Yashia Lemuria

    My system of calculation suggests (alas, I infer) that the period of time between August 8 and August 23 (2008) of this year deserves our attention. No capstone or spike or omega point, per se, but the nearest we have come to a “noospheric moment” of infinite potential variation. The reason for the premature (not 2040) event horizon is the influence of the noosphere’s gravity itself, which if compensated, suggests a subtraction of 32 years. I hope I am wrong.

  • drak

    What will happen? If this is the basic question of the singularity, then the speculation should be imaginative, as Arthur C. Clarke would demand :)
    Here is an imaginative answer: whatever god is, it is alone, because it is whole; the rock it can’t move is creating another god. So it sets creation into motion with an infinitely small jump, and allows creation the choice (free will?) of becoming its companion, through all sorts of things, like the probability of the universal constants, etc.
    Let’s take the singularity to its final conclusion: a universe-encompassing intelligence that beats entropy and causes cascading singularities through all of creation (the universe being a section of it). In our expansion we meet up with other entities of intelligence, merge and cooperate (“the more intelligent a thing, the more cooperative it will be,” Arthur C. Clarke) until we arrive there with god and say “hey god, I’m god. Whatcha been up to?”
    This is as good a speculation as any. Ask a 19th-century Newtonian scientist about atomic energy and he would have said, “Rubbish! That’s magic, not science. You can’t get that much energy from slamming two pieces of this uranium stuff together!” So I guess the best compliment you could give me is “That’s crap!”

  • johojo

    I think you probably mean “discrete” rather than “discreet”.

    discreet:
    1. Marked by, exercising, or showing prudence and wise self-restraint in speech and behavior; circumspect.
    2. Free from ostentation or pretension; modest.

    discrete:
    1. Constituting a separate thing. See Synonyms at distinct.
    2. Consisting of unconnected distinct parts.
    3. Mathematics. Defined for a finite or countable set of values; not continuous.

  • gs

    1. Regarding the Update to KK’s post: For an exponential process, the time between equally spaced milestones is (asymptotically) exponentially small (as a function of time).

    However, my previous comment noted that the ‘Countdown to Singularity’ log-log plot is consistent with a power law. A log-log plot of an exponential is not linear. (Cf. Jake Cannell’s comment.)

    So there’s something incongruous, or unexplained, about the Countdown plot and the exponential-growth scenario.
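    gs’s point that a log-log plot linearizes a power law but not an exponential is easy to verify numerically. A minimal sketch (the sample functions are arbitrary choices for illustration):

```python
import math

xs = [2.0 ** k for k in range(1, 7)]      # 2, 4, 8, ..., 64
power_law = [x ** -2 for x in xs]         # y = x^-2: a power law
exponential = [math.exp(-x) for x in xs]  # y = e^-x: an exponential

def loglog_slopes(ys):
    """Slopes between consecutive points in log-log coordinates."""
    pts = [(math.log(x), math.log(y)) for x, y in zip(xs, ys)]
    return [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(pts, pts[1:])]

print(loglog_slopes(power_law))    # constant slope of -2: a straight line
print(loglog_slopes(exponential))  # ever-steepening slopes: not a line
```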

    2. Unfortunately, the Afterthought in my previous comment contained a brain hiccup: “For sufficiently small positive x, x to a power is less than x if the power is less than one” should be “For sufficiently small positive x, x to a power is greater than x if the power is less than one.”

    Btw, dreish’s and my links to the Kurzweil site’s discussion section no longer work.

  • Guillermo Santamaria

    I have heard many lectures by Terence McKenna. He was convinced of an omega point, or a singularity I would suppose, arriving in 2012. I am curious as to whether you ever spoke to him about these ideas. Of course, those ideas date from 1999; now we have a better perspective on how the Internet has affected things, but do you think he might have agreed with the point of this article had he had a chance to see it?

    Also, Michio Kaku has expressed some idea of a singularity. Have you ever discussed this with him? I am curious.

    • Kevin_Kelly

      I did have several conversations with Terence McKenna, but before the idea of a Singularity was floating around, so the topic never came up between us.

      • Guillermo Santamaria

        I am curious about this because from what I can tell he was espousing this singularity since his Time Wave Theory and software came out, or soon after that. So I have two questions.
        1. What did you talk to McKenna about?
        2. What is your view of his Time Wave Theory (despite any notions of 2012 being some apocalyptic or world-changing event)?
        Thanks Kevin.

        • Kevin_Kelly

    One did not talk to Terence McKenna. One witnessed him, listened to him, audited him. He was a fount of eternal rhetoric and blarney (those were his words).

          • Guillermo Santamaria

            You are so right! I love him even though I never met him. I miss his mind now. So did you have a view on his Time Wave Theory? I assume you would not agree with it very much?

          • Kevin_Kelly

            I don’t know it well enough to have an opinion.

          • normbreyfogle

            You should read up on it. His Time Wave theory was, essentially, the Technological Singularity.

  • ksbo

    Your third point, that the mathematical singularity is an illusion, is true: we can purport ourselves to be anywhere on an exponential graph depending on the scales we choose for the x and y axes. To me, Philip Winston's graphs simply show this obvious fact, since changing the axis scales alters the positions of A and B on the graph. However, this is irrelevant to whether the consequences of the singularity Kurzweil describes will occur: that we will build strong AI from the insights of brain reverse-engineering (which is happening today, e.g. the Blue Brain Project with Henry Markram), which will then build stronger AI, leading to a breakdown in our predictions of the future. What matters is not where we sit in the shape of the graph, since any point represents a doubling from the last, but the actual values on the x axis (when the singularity will occur) and the y axis (the intelligence represented by strong AI).

    Your analysis of Kurzweil's graph, that the singularity should be now or always seems near, is invalid: if you scale evolutionary progression over the 4.6 billion years of the earth's timeline, we are in fact in a mathematical singularity, since we are evolving (altering ourselves to better suit the environment) via technology faster than we ever could biologically. You might argue that evolution essentially stopped with the arrival of Homo sapiens, but to me it has not: humans (200,000 years ago), agriculture (3,000), the industrial revolution (150), computers (50), the commercial internet (20), mobile phones (10). I argue that these technologies allow us to be more intelligent, hence to survive better, and thereby to evolve, although not genetically. And this is an assumption you have to make if we are to analyse Kurzweil's graphs the way you do in the first place.

    So whether the mathematical singularity is near or far really depends on the scale you are using, but the consequences of the singularity have little to do with our position on the curve. Case in point, the very graphs Philip Winston uses: say the y value is actually intelligence, and point A is where we build an AI a billion times more intelligent than ourselves, i.e. the singularity occurs at point A in 2040. Point A would look the way it does on the next, differently scaled graph even if it were the singularity. Therefore it is not a point's place in the shape of the graph that determines the singularity but its y value (an intelligence measure, whether from strong AI or the simple software intelligence we have now) and its x value (what year). In conclusion, the mathematical singularity is irrelevant to the singularity Kurzweil describes, since whether we are in a mathematical singularity depends on the scale used, which really is a special trait of exponential growth. So you cannot use the fact that the mathematical singularity is an illusion to argue against the technological singularity happening. You might counter, "well, why does Kurzweil always use those graphs?" To me, he is simply trying to show us the power of exponential growth.
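    The scale argument above can be made concrete with a short sketch of my own (not part of ksbo's comment): a pure exponential is self-similar, so the apparent "knee" of the curve sits wherever the axis scales put it.

```python
import math

# f(t) = 2**t doubles over every unit interval, no matter where you look,
# so no point on the curve is intrinsically "the singularity".
def f(t):
    return 2.0 ** t

assert f(1) / f(0) == f(1001) / f(1000) == 2.0

# On a log scale the curve is a straight line (log2 f(t) = t), which is why
# rescaling the axes can place the apparent "explosion" at any year you like.
for t in (0, 10, 100):
    assert math.log2(f(t)) == t
```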

    Your first argument is a matter of very pessimistic opinion: "For any number of reasons our "selves" may not be very portable, or new engineered eternal bodies may not be very appealing, or super intelligence alone may not be enough to solve the problem of overcoming bodily death quickly." So what if our new engineered bodies, which ensure immortality, are not appealing? By definition we have still ensured immortality. And the idea that a superintelligence would be unable to extend our longevity to the point where it may be considered indefinite is very pessimistic. Using our intelligence and ever-improving technologies, we have already prevented and cured many diseases that would have killed us (two thirds of children died before age five in medieval England), thereby extending our lifespans. So why would it not be possible for a superintelligence to do the same, only to a greater extent?


    In your second argument, you state that "A chimp is hundreds of times smarter than an ant, but the greater intelligence of a chimp is not smart enough to make a mind smarter than itself." Well, if the ant (us) actually made the chimp (strong AI), which is the only way the chimp you refer to will ever come into existence, then it is likely that the chimp, now the new ant, will be able to make the new chimp. Basically, you fail to add that if we can make strong AI smarter than ourselves, then in theory that strong AI will be able to make stronger AI, just as we made the AI in the first place. Also, if we were ever able to build a synthetic brain for strong AI, it would no doubt be considered far superior to us in intelligence (which is poorly defined). All it would require is a larger hippocampus, to store an unbelievable amount of information it would never forget, and it would be thinking at the speed of light while we think far more slowly, our impulses travelling at ~33 m/s, so years of thought could be completed in seconds. Would it not be able to draw conclusions from scientific data, or even set up experiments for scientific discovery, far more efficiently than humans if it had all scientific literature stored in its memory? Would it not be able to build smarter AI algorithms given its vast database and fast thought speed? My opinion is yes, and many brain builders like Hugo seem to agree.
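    The speed gap invoked above can be put in rough numbers; this is my own arithmetic, using the comment's assumed figures (~33 m/s for nerve impulses, the speed of light for electronic signals):

```python
nerve_speed = 33.0     # m/s, the comment's figure for nerve impulses
signal_speed = 3.0e8   # m/s, the speed of light, for electronic signals

# Ratio of signal speeds: roughly nine million to one.
speedup = signal_speed / nerve_speed

# At that ratio, one subjective year of human-paced thought would pass in
# a few seconds -- about 3.5 s under these assumptions.
seconds_per_year = 365 * 24 * 3600
print(speedup, seconds_per_year / speedup)
```

Whether raw signal speed translates into proportionally faster thought is, of course, a further assumption the comment makes.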


    Your fourth argument seems to indicate that the singularity is in fact coming, or has already occurred, rather than criticize it: "From within our emerging global cultural, the coming phase shift to another level is real, but it will be imperceptible to us during the transition" and "I think that technological transitions represented by the singularity are completely imperceptible from WITHIN the transition that is represented (inaccurately) by a singularity." This, to me, is the very reason the singularity has its critics: change is imperceptible to us during the transition.


    Finally, you conclude: "In a thousand years from now, all the 11-dimensional charts at that time will show that "the singularity is near." Immortal beings and global consciousness and everything else we hope for in the future may be real and present but still, a linear-log curve in 3006 will show that a singularity approaches. The singularity is not a discreet event. It's a continuum woven into the very warp of extropic systems. It is a traveling mirage that moves along with us, as life and the technium accelerate their evolution." Again, this points back to my first argument: all you are stating is that the mathematical singularity is an illusion, which it is, but that has nothing to do with the consequences of the singularity, which are what really define when the singularity will occur; by your own picture of 3006, it seems already to have.

  • ksbo

    p.s. I might seem like a supporter of Kurzweil's timeline on the singularity, but I am actually not. All I am stating is that your points of criticism are poor. I would contend that a very good criticism would be that we will not understand the brain well enough to create the algorithms for intelligence required to build a self-improving synthetic brain within Kurzweil's 2045 or 2029 timeline. This is the argument I personally believe, since I believe the Singularity will occur sometime next century.

    • Fourb24

      To understand the brain, I suggest Jeff Hawkins's book "On Intelligence": a fascinating read and, to my knowledge, the best theory of the brain and its function. His goal is to build intelligent machines. The question I ask is whether our cognitive dissonance is plastic enough to keep pace with information technology.

  • B Lewis

    The entire Singulatarian religion is based upon three unproved and unprovable articles of faith:

    1. Matter, energy, space, and time are all that exist.

    2. Consciousness is an epiphenomenon of matter.

    3. The brain is the material phenomenon (“biological computer”) from which consciousness arises.

    Bearing this in mind (so to speak), I’ll stick with the religion and articles of faith I know: Catholic Christianity. It at least offers the hope of Heaven for those who believe. An eternity of existence as disembodied software is an apt description of Hell.

    • Josh

      The same can be said of a soul without a body.

  • normbreyfogle

    Can we or can we not immanentize the eschaton? This is the question.

  • normbreyfogle

    But we’re *already* noticing it. As it gets even more intense, we’ll notice it even more.