In the last fifty years, nearly all predictions for technology in the next fifty have rested on the reasonable assumption that computer power will keep increasing. Smart machines play a role in just about every scenario of the future we have, including most dystopian ones. The apocalyptic worlds of the Terminator, the Matrix, or Blade Runner are scary precisely because smart things have run amok. This universal expectation of smarter machines is based on the eerily steady and hard-to-ignore rise of computing speed over the last fifty years.
As many observers of technology have pointed out, computation is not just increasing; the rate of its increase is itself increasing, which means the power of computers is accelerating. So relentless is this acceleration that if it were to continue much longer, the kinds of advances we've seen from the birth of computers until now would repeat themselves in only a few years, then a few months, and finally a few days. From our vantage, the growth in computer power would then appear infinitely fast.
Setting aside the question of whether computational technology could ever reach infinite growth, at some point well before that stage computers would certainly be many millions of times as powerful as they are now. Given our experience of computer growth over the last fifty years, it is perfectly reasonable to accept the proposition of future computers several million times faster than today's, and not too crazy to imagine that threshold being reached in our lifetime.
But while we find it easy to accept this premise, I've found it extremely hard for anyone (even artificial intelligence experts) to imagine precisely what a smarter computer would be like. If you try to describe an intelligence smarter than a human, you normally go blank after the common first idea that it would think faster. Once it thought faster, would it have different thoughts? Would it have a different type of intelligence? Is there any way we can imagine what a more powerful mind might do?
The difficulty of peeking into this alien world of higher intelligences is one reason the Singularity metaphor has caught on. A cosmological singularity, such as a black hole, prevents outsiders from gaining information about what happens beyond its boundary (although the strict validity of this notion is now under revision). A technological singularity means that a near-infinite acceleration of change prevents us from forecasting, or even guessing, what happens on the other side of that change. In this metaphor, our lives are so slow compared to the speed on the other side that our minds are incapable of comprehending a super-fast, super-powerful superintelligence.
It's a good theory, but it is probably wrong for a number of reasons. For one thing, we already have experience with brains bigger than ours, with intelligences smarter than us, and with intelligences different from ours. It is worth investigating the nature of these alien intelligences, because what the inarguable acceleration of computation points to is a future where technology becomes more like a mind. If technology wants to be more mindlike, what can we surmise about greater minds?
Another way of stating the quest: everything we know about the current trajectory of technology suggests it is headed toward becoming very intelligent in the future. What can we say about how greater intelligence works, and what it might want?