The Technium

The Maes-Garreau Point



Forecasts of future events are heavily influenced by present circumstances. That’s why predictions are usually wrong. It’s hard to transcend current assumptions. Over time these assumptions erode, which leads to surprise. Everybody “knew” that people would not work for free, and that if they did, it would not be quality work. So the common assumption that a reliable encyclopedia could not be constructed upon volunteer labor meant that Wikipedia arrived as a total surprise.

The present-bound nature of predictions is not news. But forecasts may be more bound to the personal life of the predictor than first appears. Here is a story. Pattie Maes, a researcher at the MIT Media Lab, noticed something odd about her colleagues. A subset of them were very interested in downloading their brains into silicon machines. Should they be able to do this, they believed, they would achieve a kind of existential immortality. Presumably, once downloaded, their souls could easily be migrated from one hardware upgrade to the next. And so on, ad infinitum.

The technology to work this miracle seemed far away, but all agreed that once someone designed the first super-human artificial intelligence, this AI could be convinced to develop the technology to download a human mind immediately. This moment was christened the Singularity, because what happens afterwards seemed impossible to even imagine. But if one could make it to the Singularity – that is, if one could live until the time when a super-human mind was operational – then one could be downloaded into immortality. You were home free for eternity. The trick was to stay alive until this bridge was crossed.

All the guys who were counting on this were, well, … guys. Pattie saw this as a very male desire. She hypothesized that “women may have less of a desire to reach immortality via living in the form of silicon, because women go through pregnancy & birth and as such experience a more biological method of ‘downloading/renewal/making of a copy of oneself.’ Of course men are involved in having children, but for a woman it is a more concrete, physical experience, and as such maybe more real.”

Nonetheless, her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seemed to cluster right before the years they were expected to die. Isn’t that a coincidence?

This idea is nicely summarized by this cartoon:


In 1993 Maes gave a talk at Ars Electronica in Linz, Austria called “Why Immortality Is a Dead Idea.”  Rodney Brooks, one of her male colleagues, summarized the talk in his book Flesh and Machines:

[Maes] took as many people as she could find who had publicly predicted downloading of consciousness into silicon, and plotted the dates of their predictions, along with when they themselves would turn seventy years old. Not too surprisingly, the years matched up for each of them. Three score and ten years from their individual births, technology would be ripe for them to download their consciousnesses into a computer. Just in the nick of time! They were each, in their own minds, going to be remarkably lucky, to be in just the right place at the right time.

Maes did not write up her talk or keep the data. And in the intervening 14 years, many more guys have made public their predictions of when they think the Singularity will appear. So, with the help of a researcher, I have gathered all the predictions of the coming Singularity I can find, with the birthdates of the predictors, and charted their correspondence.

[Chart: predicted Singularity dates plotted against the predictors’ birth years]

You will not be surprised to find that in half of the cases, particularly those made within the last 50 years, the Singularity is expected to happen before the predictors die – assuming they live to be 100. Joel Garreau, a journalist who reported on the cultural and almost religious beliefs surrounding the Singularity in his book Radical Evolution, noticed the same hope that Maes did. But Garreau widened the reach of this desire to other technologies. He suggested that when people start to imagine technologies which seem plausibly achievable, they tend to place them in the near future – within reach of their own lifespans.

I think they are onto something. I have formalized their hunches into a general hypothesis, christened in their honor.

The latest possible date a prediction can come true and still remain within the lifetime of the person making it is defined as the Maes-Garreau Point. It falls at year n − 1, one year before the end of the predictor’s life expectancy.

This suggests a law:

Maes-Garreau Law: Most favorable predictions about future technology will fall within the Maes-Garreau Point.
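The definition and the law above reduce to simple arithmetic. Here is a minimal sketch in Python; the function names, the 80-year default lifespan, and the sample data are all hypothetical, chosen only to illustrate the definition:

```python
# Minimal sketch of the Maes-Garreau calculation.
# All names and sample figures here are hypothetical illustrations.

def maes_garreau_point(birth_year, life_expectancy=80):
    """The latest year a prediction can come true while its maker is
    still alive: one year before the predictor's expected death."""
    return birth_year + life_expectancy - 1

def within_point(birth_year, predicted_year, life_expectancy=80):
    """True when a predicted date falls on or before the predictor's
    Maes-Garreau Point -- the law's favorable case."""
    return predicted_year <= maes_garreau_point(birth_year, life_expectancy)

# Hypothetical predictors: (birth year, predicted Singularity date)
for born, predicted in [(1948, 2045), (1927, 2070), (1965, 2035)]:
    point = maes_garreau_point(born)
    print(f"born {born}: point is {point}, "
          f"prediction {predicted} within it: {within_point(born, predicted)}")
```

Under these assumptions, a predictor born in 1948 has a Maes-Garreau Point of 2027, so a 2045 prediction falls past it, while a predictor born in 1965 forecasting 2035 stays comfortably inside.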

I haven’t researched a lot of other predictions to confirm this general law, but its plausibility rests on one fact. Singularity or not, it has become very hard to imagine what life will be like after we are dead. The rate of change appears to accelerate, and so the next lifetime promises to be unlike our time, maybe even unimaginable. Naturally, then, when we forecast the future, we will picture something we can personally imagine, and thus tend to cast it within range of our own lives.

In other words we all carry around our own personal mini-singularity, which will happen when we die. It used to be that we could not imagine our existence after our death; now we cannot imagine the details of anyone’s existence after our death. Beyond this personal singularity, life is unknowable. We tend to place our imaginations and predictions before our own Maes-Garreau Point.

Because the official “Future” – that faraway utopia – must reside in the territory of the unimaginable, the official “future” of a society should always be at least one Maes-Garreau Point away. That means the official future should begin after the average lifespan of an individual in that society.

The Baby Boom generation (a world-wide phenomenon) has an expected life span of about 80 years. Born about 1950, most baby boomers should be dead by 2040. However, all kinds of other powerful things are expected to happen by 2040. China’s economy is due to overtake that of the US in 2040. 2040 is the average date when the Singularity is supposed to happen. 2040 is when we expect Moore’s Law to put the computational power of a human on a desktop. 2040 is also about when the population of the world is supposed to peak once and for all, and environmental pressures to decrease. This grand convergence of global-scale disruptions is scheduled to appear – no surprise – at exactly the date of this generation’s Maes-Garreau Point: 2040.

If, as many hope, our longevity increases with each year, maybe we can extend our lives way past 80. As we do, our Maes-Garreau Point slips further into the future. The hope of the gentleman in the chart – and my hope too – is that we can extend our personal mini-singularity past the grand Singularity, and live forever.




Comments
  • EJTower

    The Maes-Garreau point does appear to be a very human-like bias. A tendency to read future events in your favor seems to be a very likely blind spot in human prediction capabilities.

    Though does that elevate it to the status of a law? Is the point really a singularity beyond which we cannot conceive of anyone’s future existence, or might it be more analogous to a logical fallacy in an argument? Might it be something you can avoid through careful consideration? Just as ad hominem does not make arguments about people impossible, the Maes-Garreau point might not make predictions beyond your life impossible – if you are aware of the bias.

    I am just concerned that Maes-Garreau might become a wall to prediction when it might actually be a large rock we can walk around.

    Your further research will probably come to some answers. Just wanted to toss these questions into the mix.

  • http://www.alphabetsoup.cl/blog/ Alphabet Soup

    Very interesting concept. Reminds me of a page I saw recently about types of cognitive biases.

    http://healthbolt.net/2007/02/14/26-reasons-what-you-think-is-right-is-wrong/

    Seems you’ve named a new type of cognitive bias: Maes-Garreau bias.

    Apart from the topic of predictions of the future, Pattie Maes’ idea that there is a “profile” of the personality types that are pursuing immortality seems worthy of further investigation. Besides being predominantly male, my experience with one such researcher leads me to think that the pursuit of immortality profile also tends towards egocentric, obsessive, abstract personalities with problems making social connections.

    My prediction is that we’ll become cyborgs long before we become “immortal” via downloading our consciousness into silicon vehicles. But I sure hope that I don’t live long enough to see it play out.

  • http://www.roughtype.com Nick Carr

    I wonder if apocalyptic predictions also obey the Maes-Garreau Law.

  • Kevin Kelly

    Nick,

    Probably.

    Alphabet,

    I’ll check out the cognitive biases theory. Sounds plausible.

  • cs

    “The Singularity” as a historical trope has always seemed to me so directly connected to apocalyptic and millenarian thought that it’s hard to take seriously. I just imagine all those good Christians lying down in coffins on whatever appropriate day in 1000 AD, panting in the dark and waiting: first with anticipation, then disappointment, and finally in despair.

    How long do you think it was they waited before climbing behind the plow again?

    The fundamental conceit of apocalyptic/millenarian historical narratives is that history has an end. The Singularity narrative (in forms less naive than the simple “rapture of the nerds”) tweaks this by saying “human nature” has an end (a “phase change”) via some either disruptively fast or insidiously slow technological rupture. As to the particulars of any of these scenarios I have no opinion, other than that they generally sound really neat and “future-y” in a way I have a hard time finding outside 1960′s Popular Science issues – which is definitely a great thing.

    But when you start to examine more closely the notions of “human nature” that are supposed to be meaningfully altered here, you tend to find the conclusions far less interesting than the descriptions of the conclusions: so we become bionic meta-aware cloud-minded cyborgs… awesome! But, does this change have any practical consequences whatsoever for the real challenges of me living my life? Does it provide “meaning” or “value?” Why, exactly, should I assign value to living forever in an infinitely reconfigurable simulation of living? (Or, once we’ve broken that seal, why assign some value to “living” which changes radically depending on how long it takes to get done with? What need am I addressing with the concept of “immortality,” and isn’t there something in the currently existing world that would answer that need?)

    However, it is very kind of the most educated and rational(-istic) minds around to give me some credible and well-informed reasons to believe in immortality again. It’s kind of like they’ve re-invented death, only in a Nerf version. And facing that is certainly a whole bunch easier than looking down the barrel of the real gun. Isn’t it?

    (Sorry for the snark, by the way. I wouldn’t bother if I didn’t value the ideas…)

  • Trevor Cooper

    I subscribe to the Maes-Moore’s-Murphy’s Law: Immortality will arrive the year AFTER I die.

  • Aron

    While I will vote for the singularity occurring at n-1, I further anticipate that life extension “escape velocity” will occur about 2 years later. Or rather, infinitely too late, depending on mathematical formulation.

  • solak

    I think that Barry’s comment back on March 15 is the most insightful. When we sit here in 2008 and look ahead to 2040, we might think as if we stepped into a time machine and walked out into the mid-century sunshine to see all the changes as sudden. In fact, the world will be evolving gradually in the intervening decades, and surely various partial solutions to our problems will grow in that time.

    I’d rather have those partial solutions keep me healthy and smart for all of my 70 or 100 years than have some imperfect copy of my mind live forever in an animatronic box. There may be currently unknown benefits to society from having individuals who live much longer, but so far, having each new generation mature and take over has done more for progress than having the oldsters say “back in my day we did it this way and it was good enough for us”.

    Yes, I realize that my argument can hoist itself on its own petard, but I state it as a possibility anyway.

  • feta99

    from your sixties onward, death is always 20 years later.

  • Ken Hackney

    “When I asked for a long life, I expected most of it to include good health.” What of eternity?

  • http://www.ewh.ieee.org/soc/es/kort.html Barry Kort

    Last week, Pattie Maes introduced our newest resident at the Media Lab — Distinguished Fellow John Hockenberry — who will be working on a project called Human 2.0.

    Hockoo says he is here for an upgrade.

    Like Rodney Brooks, Hockenberry says we are merging with technology, becoming bionic hybrids, part organic tissue, part technology.

    Perhaps instead of the Singularity arriving rapturously, like the Second Coming of Save-Your-Soul amidst a fanfare of trumpets, we will just gradually morph into a Messy-Antic Era where we exchange bits of memetic DNA with our silicon prostheses.

  • Carl Shulman

    It’s an interesting methodology, but the data is just terrible quality. For every person on that list whose views I know well, the attached point estimate is misleading to grossly misleading. For instance, it gives Nick Bostrom as predicting a Singularity in 2004, when Bostrom actually gives a broad probability distribution over the 21st century, with much probability mass beyond it as well. 2004 is in no way a good representative statistic of that distribution, and someone who had read his papers on the subject or emailed him could easily find that out. The Yudkowsky number was the low end of a range (if I say that between 100 and 500 people were at an event, that’s not the same thing as an estimate of 100 people!), and subsequently disavowed in favor of a broader probability distribution regardless. Marvin Minsky is listed as predicting 2070, when he has also given an estimate of most likely “5 to 500″ years, and this treatment is inconsistent with the treatment of the previous two estimates. Robin Hanson’s name is spelled incorrectly, and the figure beside his name is grossly unrepresentative of his writing on the subject (available for free on his website for the ‘researcher’ to look at). The listing for Kurzweil gives 2045, which is when Kurzweil expects a Singularity, as he defines it (meaning just an arbitrary benchmark for total computing power), but in his books he suggests that human brain emulation and life extension technology will be available in the previous decade, which would be the “living long enough to live a lot longer” break-even point if he were right about that.

    I’m not sure about the others on that list, but given the quality of the observed data, I don’t place much faith in the dataset as a whole. It also seems strangely sparse: where is Turing, or I.J. Good? Dan Dennett, Stephen Hawking, Richard Dawkins, Doug Hofstadter, Martin Rees, and many other luminaries are on record in predicting the eventual creation of superintelligent AI with long time-scales well after their actuarially predicted deaths. I think this search failed to pick up anyone using equivalent language in place of the term ‘Singularity,’ and was skewed as a result. Also, people who think that a technological singularity or the like will probably not occur for over 100 years are less likely to think it an important issue to talk about right now, and so are less likely to appear in a group selected by looking for attention-grabbing pronouncements.

    A serious attempt at this analysis would aim at the following:

    1) Not using point estimates, which can’t do justice to a probability distribution. Give a survey that lets people assign their probability mass to different periods, or at least specifically ask for an interval, e.g. 80% confidence that an intelligence explosion will have begun/been completed after X but before Y.

    2) Emailing the survey to living people to get their actual estimates.

    3) Surveying a group identified via some other criterion (like knowledge of AI, note that participants at the AI@50 conference were electronically surveyed on timelines to human-level AI) to reduce selection effects.

    • http://www.kk.org Kevin Kelly

      @Carl Shulman: You are right. Someone should do this scientifically and properly. It would be very revealing and much more credible. My roundup is only suggestive at best. Even better would be to track their estimates as they change over the course of their own lives. Don’t forget to ask their birthdays!

  • http://geeksjournal.blogspot.com John Walter

    Very interesting concept, certainly something to consider when looking at future predictions, but based on my interpretation of Kurzweil’s The Singularity is Near, he has provided many compelling arguments that support his predicted dates, and if, in actuality, the Singularity does arrive around 2040, then it will fall within the Maes-Garreau point for all Baby-Boomers, perhaps just coincidentally.