The Technium

What Will Big Brains Do?


In the last fifty years, nearly all predictions for technology in the next fifty have rested upon the reasonable assumption that computer power will increase. Smart machines play a role in just about every scenario of the future we have, including most dystopian ones. The apocalyptic worlds of the Terminator or the Matrix or Blade Runner are scary precisely because smart things have run amok. This universal expectation of smarter machines is based on the eerily steady and hard-to-ignore rise of computing speed over the last fifty years.


As many observers of technology have pointed out, computation is not just increasing; the rate of its increase is itself increasing, which means, simply, that the power of computers is accelerating. So relentless is this acceleration that if it were to continue for much longer, the kinds of advances we’ve seen from the birth of computers till now would repeat themselves in only a few years, then a few months, and finally a few days. This means that from our vantage point the growth of computer power would seem to become infinitely fast.
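A minimal sketch of this compression, assuming purely for illustration that each doubling of computer power arrives a fixed fraction sooner than the last (the two-year starting point and the 0.8 shrink factor below are invented parameters, not measurements):

    # Toy model of accelerating growth: every doubling of computer power
    # arrives sooner than the one before it. All parameters are illustrative.
    doubling_time = 2.0   # assumed initial doubling time, in years
    shrink = 0.8          # assumed: each doubling takes 80% as long as the last
    elapsed = 0.0

    for generation in range(1, 31):
        elapsed += doubling_time
        print(f"doubling {generation:2d} takes {doubling_time * 365:7.1f} days "
              f"(elapsed: {elapsed:5.2f} years)")
        doubling_time *= shrink

Under these made-up numbers the first doubling takes two years and the thirtieth takes about a day, yet the total elapsed time never exceeds 2.0 / (1 − 0.8) = 10 years: infinitely many doublings fit inside a finite span, which is what the singularity language gestures at.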

Setting aside the question of whether computational technology could ever reach infinite growth, at some point well before that stage computers would certainly be many millions of times as powerful as they are now. Given our experience of computer growth over the last fifty years, it is perfectly reasonable to accept the proposition of future computers several million times faster than today’s, and not too crazy to imagine that threshold being reached in our lifetime.
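The arithmetic behind “many millions of times” is easy to check under a steady-doubling assumption; the two-year doubling period below is a Moore’s-law-like guess, not a measured figure:

    # Back-of-envelope: cumulative speedup after N years of steady doubling.
    # The two-year doubling period is an assumption.
    doubling_period = 2.0  # years per doubling (assumed)

    for horizon in (20, 30, 40, 50):
        speedup = 2 ** (horizon / doubling_period)
        print(f"{horizon} years -> roughly {speedup:,.0f}x today's power")

At that pace the millionfold mark arrives around year forty (2^20 ≈ 1,048,576), comfortably inside a single lifetime.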

But while we find it easy to accept this premise, I’ve found it extremely hard for anyone (even artificial intelligence experts) to imagine precisely what a smarter computer would be like. If you try to describe an intelligence smarter than a human, you normally go blank after the common first idea that it would think faster. Once it thought faster, would it have different thoughts? Would it have a different type of intelligence? And is there any way we can imagine what a more powerful mind might do?

The difficulty of peeking into this alien world of higher intelligences is one reason the Singularity metaphor has caught on. A cosmological singularity, such as a black hole, prevents outsiders from gaining information about what happens beyond its boundaries (although the strict validity of this notion is now under revision). A technological singularity means that a near-infinite acceleration of change prevents us from forecasting, or even guessing, what happens on the other side of that change. In this metaphor, our lives are so slow compared to the speed of what lies on the other side that our minds are incapable of comprehending a superfast, superpowerful superintelligence.

It’s a good theory, but it is probably wrong for a number of reasons. For one thing, we already have experience with brains bigger than ours, with intelligences smarter than us, and with intelligences different from ours. It is worth investigating the nature of these alien intelligences, because what the inarguable acceleration of computation points to is a future where technology becomes more like a mind. If technology wants to be more mindlike, what can we surmise about greater minds?

Another way of stating the quest: everything we know about the current trajectory of technology suggests it is headed towards becoming very intelligent in the future. What can we say about how greater intelligence works, and what it might want?




Comments
  • Daryn

    I don’t know that I agree that a faster computer is necessarily “smarter.”

    How does a computer learn? How does it find new ideas and build on them? How does it make connections?

    I can see a faster computer being able to explore every possible chess move, but how would it invent the game in the first place?

  • Randy Fischer

    It seems to me that the hallmark of a more powerful intelligence is not its quickness, but how much better it is at picking out patterns in space and time. To assist this, wouldn’t an advanced and evolvable intelligence desire a better sensorium? Give me five-channel color vision instead of three, sure, and more across ever-increasing spectrums. We use our bodies to compute problems, thinking kinesthetically (think how you trace the movements of a gear train with your finger), so certainly they’d want to manifest themselves physically, to be as dexterous as possible. But new senses as well, to directly perceive previously invisible patterns. So we can look to nature to get ideas here, from slime molds on up. (I’m wondering what forms their synesthesias would take, what their Kandinskys would paint.)

    When I wrote patterns in time I was really thinking about capabilities in planning, not about some great Tralfamadorian perspective from the fourth dimension.

    Wouldn’t their languages include ones more ambiguous than ours, to make for more profound serendipity, for better poetry? Isn’t intelligence all about metaphors and models?

    When I start to think of intelligent devices, one of my first questions is: what is the nature and extent of my relationship with this device? Normally I just want the device to serve me; now I have to consider how it wants me to serve it in turn. I don’t mean this in a negative sense. With intelligent humans, I want to amuse and provoke, to contribute and to earn respect.

    How do I make this device laugh?

    Great fun, the Technium. Thanks for thinking out loud.

  • Gregory A

    If and when AGI is achieved, will AGI be conscious? And will AGI be capable of experiencing love?

  • Ergo Ratio

    I don’t see that a “big” brain would do anything differently, abstractly speaking, than our brains do, just better.

    Our brains process information to make decisions, so that they can continue to process information to make decisions.

    The quality of our decisions is modulated by the accuracy of information fed to our brain through our senses, by the accuracy and accessibility of our memories, and by the accuracy (precision?) and speed with which we process both of the above to simulate possible future outcomes.

    All else being equal, a “big” brain would just make better decisions, right?

    The strife we humans experience over this is, of course, not knowing whether those decisions would be better for humanity. We must therefore take measures to ensure that their fate is entwined with our own.

  • Peter Gransee

    I suspect “big brains” will spend their moments pondering things that seem quite boring from our current perspective.

    We may see our current level of invention and creativity becoming “too cheap to meter.”

    We may have to reevaluate our idea that invention and creativity (as we know them) are key indicators of intelligence. Otherwise you can get something from nothing on a regular basis.

    We may actually get “the singularity,” but it may be more about merchandise and less about intelligence. I don’t think we can get a singularity of net complexity gain. Any attempts will lead to noise.

  • Tyler

    This is a good question indeed… I feel that “big brains” by their nature would continue to analyze & organize our actions, to the point that “human nature” is mapped out & understood.

    My feeling on it is that a “big brain” could/would draw connections & relationships that we do not. While the average person can consider X factors in making decisions, think Y moves ahead in a chess game, and come up with ideas @ Z iterations from current reality, a big brain would have more “experience” and “examples” built in, such that it would have a multiple of our processing power. It would also be able to run simulations, test hypotheses from past events, and eventually understand & identify complex interactions in real time.

    Even if my basic outline isn’t completely accurate, it does seem logical that big brains would inherently run on a higher set of logic principles than ours, given proper testing.

    This represents a belief/question I have which I was hoping to get your thoughts on… does it seem probable that with sufficient trial & error testing, an artificial intelligence system will be created that understands our motivations, habits, likes, and dislikes better than we do? What does this imply?

  • jaimito

    It will be like magic. Propose a goal and it will make it true in a way we will be unable to imagine. Like magic. Repair the car. Operate on the cat. Open that safe. Make a gold ring. I imagine a mind like Feynman’s, with memory of the raw materials required, where they could be found, and how to use them, able to work out a plan for how the requested thing could be achieved.

  • Berend Schotanus

    If you stay true to your first article, stating that technology is ‘us’ and not ‘alien from us’, it wouldn’t be computers as an isolated phenomenon that matter. What matters is the system of computers being part of a human social network. Then the meaning of increased speed is simply that computer performance is no longer the restricting factor in the adaptation of digital life. What matters now is organization, software, applications, the way that human individuals and social networks adapt to the new possibilities.

    There is a risk here, because the speed with which humans can adapt is limited. The pace of adaptation, or disparities in speed, can cause problems.

  • Greg Stephens

    I don’t think that speed is the only factor to consider. The main limitation of our brain is focus. We can only consider one thing at a time. An advanced system could consider many things at once. I think that omniscience will mark the beginning of the “singularity.” After that we will just be left to ponder the meaning of our existence.