The Technium

The Landscape of Possible Intelligences



In A Taxonomy of Minds I explore the varieties of form a greater-than-human intelligence might take. We could meet greater-than-human intelligences in an alien ET, or we could make synthetic ones. The one foundational assumption behind making new minds ourselves is that our own minds are intelligent enough to make a new and different mind. Just because we are conscious does not mean we have the smarts to make consciousness ourselves. Whether (or when) AI is possible will ultimately depend on whether we are smart enough to make something smarter than ourselves. We assume that ants have not achieved this level. We also assume that as smart as chimpanzees are, chimps are not smart enough to make a mind smarter than a chimp, and so have not reached this threshold either. While some people assume humans can create a mind smarter than a human mind, humans too may sit below that threshold. We simply don’t know where the threshold of bootstrapping intelligence is, nor where we stand relative to it.

We can distinguish several categories of elementary minds in relation to bootstrapping:

1) A mind capable of imagining or identifying a greater mind.

2) A mind capable of imagining but incapable of designing a greater mind.

3) A mind capable of designing a greater mind.

We fit the first criterion, but it is unclear whether we are of the second or third type of mind. There is also a fourth type, which follows the third:

4) A mind capable of generating a greater mind, which in turn creates a still greater mind, and so on.

This is a cascading, bootstrapping mind. Once a mind reaches this level, the recursive mind-enlargement can either keep going ad infinitum, or it might reach some limit. On the other hand, there may be more than one threshold in intelligence. Think of them as quantum levels. A mind may be able to make a mind smarter than itself, but the offspring mind may not be smart enough to make the next leap, and so gets stuck.

If we imagine the levels of intelligence as a ladder with unevenly spaced rungs, there may be jumps that some intelligences, or their descendants, are not able to make. So a type 3 mind may be able to jump up four levels of bootstrapping intelligence, but not five. Since I don’t believe intelligence is linear (that is, I believe intelligence grows in many dimensions), a better illustration may be to view the problem of bootstrapping superintelligence as navigating across a rugged evolutionary landscape.

[Figure: the landscape of possible intelligences]

In this type of graph, higher means better adapted, more suitable in form. Different hills indicate different varieties of environments and different types of forms. This particular chart represents the landscape of possible types of intelligences. Here, the higher a mind climbs on a hill, the more suited or perfected it is for that type of intelligence.

In a very rugged fitness landscape, the danger is getting stuck on local optima of form. Your organism perfects a type of mind that is optimal for a local condition, but this very perfection imprisons you locally and prevents you from reaching a greater optimal form elsewhere. In other words, evolving to a higher elevation is not a matter of sheer power of intelligence, but of type. There may be certain kinds of minds that are powerful and optimal for some kinds of thinking, but that are incapable of overcoming the hurdles to reach a different, higher peak. Certain types of minds may be able to keep getting more powerful in the direction they have been evolving, but be incapable of shifting direction in order to reach a new power. In other words, they may be incapable of bootstrapping the next generation. Other kinds of minds may not be as optimal but may be more nimble.
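
To make that trap concrete, here is a minimal toy sketch of my own (the landscape and the numbers are invented for illustration, not taken from the chart above): a greedy climber that only ever accepts improvements perfects itself on the nearest hill and stays there, even though a far higher peak sits across a valley.

```python
import math
import random

def fitness(x):
    # A toy rugged landscape: a low peak near x = 2 and a much higher
    # peak near x = 8, separated by a deep valley.
    return 3 * math.exp(-(x - 2) ** 2) + 10 * math.exp(-((x - 8) ** 2) / 2)

def greedy_climb(x, step=0.1, iters=2000):
    # A "perfecting" mind: it only accepts moves that improve fitness.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
peak = greedy_climb(1.0)  # start in the basin of the low peak
print(f"ends near x = {peak:.2f}, fitness = {fitness(peak):.2f}")
# Ends near x = 2 with fitness ~3: perfectly optimized for its own hill,
# and permanently blind to the higher peak near x = 8 (fitness ~10).
```

A climber that occasionally accepts a downhill step, the less optimal but more nimble kind of mind, can cross the valley and reach the higher peak; that difference in type, not raw power, is what the ruggedness punishes or rewards.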

At the moment we are totally ignorant of what the possibility landscape of intelligence is. We have not yet even mapped out animal intelligences, and we have no real examples of other self-conscious intelligences to map. Navigating through the evolutionary landscape may be very smooth, or it may be very rough and very dependent on the path an evolving mind takes.

Because we have experience with such a small set of mind types, we really have no idea whether there are limits to the varieties and levels of intelligence. While we can calculate the limits of computation (and folks like Seth Lloyd have done just that), I don’t think intelligence as we currently understand it is equivalent to computation. The internet as a whole is computationally larger than our brains, but it is not as intelligent in the way we crave. Some people, like Stephen Wolfram, believe there is only one type of computation, and that there is, in a sense, one universal intelligence. I tend to think there will be millions and billions of types of minds.
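
As an aside, for readers curious what "calculating the limits of computation" looks like, here is a rough back-of-the-envelope sketch of my own (not Lloyd's paper, just the bound he builds on): the Margolus-Levitin theorem caps any physical system at about 2E/(πℏ) elementary operations per second, which for a fully energized one-kilogram "ultimate laptop" comes out near 10^50 operations per second.

```python
import math

# Back-of-the-envelope version of Seth Lloyd's "ultimate laptop" bound,
# using the Margolus-Levitin limit of 2E / (pi * hbar) operations per second.
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s

mass = 1.0                      # kg: the hypothetical 1 kg laptop
energy = mass * c ** 2          # ~9e16 J if all its mass-energy were usable
ops_per_second = 2 * energy / (math.pi * hbar)

print(f"{ops_per_second:.1e} operations per second")  # roughly 5e50
```

Even a hard ceiling like that on operations per second says nothing about what kinds of minds can be built underneath it.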

Recently, in conversations with George Dyson, I realized there is a fifth type of elementary mind:

5) A mind incapable of designing a greater mind, but capable of creating a platform upon which greater mind emerges.

This type of mind cannot figure out how to birth an intelligence equal to itself, but it does figure out how to set up the conditions of evolution so that a new mind emerges from the forces pushing it. Dyson and I believe this is what is happening with the web and Google. An intelligence is forming without an overt top-down designer. Right now that intelligence is rather dimwitted, but it continues to grow. Whether it continues to develop into something near-human or greater-than-human remains to be seen. But if this embryonic smartness continued, it would represent a new way of making a mind. And of course, this indirect way of making something smarter than yourself could be used at any point in the evolutionary bootstrapping cycle of a mind. Perhaps the fourth or fifth generation of a mind may be incapable of designing the next generation but capable of designing a system in which it emerges.

We tend to think of intelligence as singular, but biologically this is unlikely. More likely, intelligence is multiple, diverse, and fecund. In the long haul, the central question will concern the differences in evolvability among these various intelligences. Which types are capable of bootstrapping? And are we one of those?




Comments
  • RobertJ

The “no free lunch theorem” in optimization theory states that no method for climbing to the optimum can be a priori better than any other if no a priori information about the landscape is given.

We also know that in terms of evolutionary fitness, the landscape keeps being changed by the actors. This is both good and bad. It’s bad because you won’t get to know the landscape well, so you can’t single out a strategy that’s better than blind luck (a.k.a. evolution), but it’s good because what was yesterday a local peak may tomorrow be part of a slope towards a higher peak. You don’t necessarily get stuck in local optima, and the “biggest peak” may even grow smaller or larger over time!

    But a changing landscape can’t be set up by one species. You’d somehow need an ecosystem of different intelligences, with cooperation, predation and parasitism.

  • stefan

Setting the evolution of technology-based intelligence aside, how would one similarly analyze the state of the possible intelligence of our Planetary Nature? It would, I surmise, satisfy most of the definitions of a superorganism, containing, as it does, everything on this planet.

We could possibly consider Nature to be non-computational, a force dedicated to ensuring the development of the greatest number of viable living species. Describing the actions, reactions and methodologies observed in this evolution could keep many great minds occupied for a long time. The tools it employs could be recognized, and possibly developed and refined, in such a study.

    Stepping away from what we might consider the doings of our Planetary Nature, how does it fit in and work with what we might call Universal Nature?

We could, in viewing these two observable aspects, come to greater insights. What is the central motivating drive of our Universe? To simply keep expanding? To help with the life/death cycles of systems all the way from the smallest particle to the largest observable structures?

    A rather fractal entity. What role does Intelligence play in all this? Is Intelligence, at the end of the day simply the final and evolving result of the survival imperative?

  • kanji

The mind able to create a platform for a greater mind – the ultra-mind – and if that ultra-mind is able to create a similar platform of its own, an even greater mind emerges: the mega-mind. There is a chance the first mind would not recognize or sense the mega-mind’s presence or existence.

  • Jeremy

I would argue that DNA is a “mind” of the 5th sort – it is smart enough to create/be a platform for creating intelligence much more advanced than itself.

  • Dimitri

    Kevin (or Mr Kelly I really don’t know which one I should use) you landed a great article and developed it in a magnificent way! Congratulations!

    I think we should distinguish mind from thought. A lot of philosophers distinguish the mind from the thought (thought is the words in our heads, the mind is something deeper) and we can take it from there.

There is no doubt that there is “something” with a mind (but no thought, at least none that we are aware of) out there with a high level of intelligence (again, the intelligence is separate from the thought).

    If we do this we enter a new realm where intelligence can be found everywhere.

    I know that we are talking about different things, but before creating a new mind, I think that humans first have to clearly identify “what” is a mind, and then start discovering all the thoughtless minds (but far more intelligent than our minds) and learn from them.

    Well, that’s just a personal opinion.

    Regards

  • http://onesandzeros.tangozulu.biz Malcolm

Your ladder of intelligences reminds me a little of the bootstrapping process in a computer (as you suggest). But all levels operate on the same hardware; higher levels just use it more effectively.

I would argue that our minds (intelligences) are not as separable from our bodies as we might imagine. That the hardware makes more of a difference than we might easily see now.

    It makes interesting science fiction to move our minds into computers and vice-versa, but in practical reality our minds and our brains are highly integrated with our bodies through evolution and years of living together.

    We’re just beginning to understand the level of intelligence that cetaceans and octopi have and how it can be very different from ours.

So, the first step in thinking of types or levels of intelligence might be considering how to understand or even classify existing intelligences. And also to understand how they differ from other types. In short, we might not yet know enough about intelligence to intelligently talk about it.

    • Kevin Kelly

      @ Jeremy: Yes, I would agree that DNA is a type of “mind” (the lowest possible level) that can allow a mind to emerge.

@Malcolm: Indeed, the body or substrate in which a mind operates DOES matter, as you say. I don’t think we can have human-type intelligence outside a human-type brain. We can have other types of intelligence in other media.

  • gwern

    Maybe I am missing something, but how is mind-type 3 different from mind-type 4?

    A mind-type 3 is designing a mind greater than itself; a mind greater than it logically would have to be able to do everything the original mind could and more, which includes designing a mind-greater-than-the-original-could, which includes… etc.

    Type 3 is recursively improvable just like type 4 is.

  • http://simplyted.blogspot.com Ted Holmes

    I’m irresistibly drawn to comment here Kevin. You’re sparking the edges of something I’ve been mulling for quite a while. I’ll try real hard to stay on topic :)

    With regard to Google and the Web becoming a springboard to the emergence of artificial minds: At this writing, Google has passed her 10th birthday. From Google Web to Maps to a newly invented browser about to rule the Web, to a Google phone, to a completely new computer operating system. I think the next really big Google sized opportunity will be the shift from organizing the world’s information to organizing the world’s Intelligence.

    But on the shape of this emerging Intelligence, it seems to me that Intelligence and communication networks are inseparable in that one can’t exist without the other. And in some cases (as in the Web), they symbiotically reinforce each other in a self accelerating feedback loop.

    The network acts as a scaffold for Intelligent systems. Meanwhile, the network compounds the power of the sum Intelligence within the network following Metcalfe’s Law. This combined networked Intelligence repays the favor in further diversifying, increasing the efficiency of, and enhancing the network.

I think that’s why we started calling the Web “Web 2.0”. We recognized it had transitioned into something different. It started behaving meaningfully smarter.

    Lastly, as increasingly intelligent singular systems inevitably exchange the sum of their individual Intelligences at near the speed of light, a truly new global Intelligence is bound to emerge with the power to focus on a single global project as one mind and switch to trillions of projects individually. Again in a self reinforcing loop.

    Thanks for being such a great source of insight Kevin.

    Ted

  • grantmr

Gödel says no

  • John Johnson

    I’m guessing that you regard Evolution to be a “greater” mind creating “lesser” minds.

Otherwise we could claim that the mind of Australopithecus created the mind of Homo Erectus, which created the mind of Homo Sapiens.

    I concur with Dyson’s conjecture that a digital mind may yet evolve out of the “primordial soup” we have created with the internet.

    I also have a sneaking suspicion that the “arms race” between self-mutating computer viruses and self-adapting antivirus software will play a part in achieving the “critical mass”.

    My mind feels like it’s done some evolving today. Thank you!

  • http://harwood-leon.com Paul Harwood

    The ant is a good analogy here and I agree with George Dyson on 5:

    Simply put – the ant does not understand the ant-hill but requires it. Humans are very similar in nature to ants. We do not understand intelligence, but require it, it is our own ant-hill. We can make it bigger, more complex, but all it becomes ultimately is a bigger ant hill. (bit like your diagram :) )

When you use the term “greater”, I would argue that this adversely affects what you say about development of the mind. Let’s separate the ant from the ant-hill for a second. In this world, size does not matter, quantum or otherwise. The mind is not a matter of boundaries, it is a matter of effect. Your imagination has no boundaries, other than the effect that imagination has on your surroundings or activities. If you are talking of the ‘greater effect’ of the mind, collectively or otherwise, then I agree.

Thought stems from our fundamentals, from our own natural connection that has existed since the big bang, not only via a hybrid ether that we chose to create in the ’80s; this is simply an extension of communication. Even the LHC’s ambitious experiments and ‘Globus’ network are only there to prove what the mind already knows or to facilitate the mind’s imagination through science.

Google is a mechanical recording and processing device, with human input to tweak its cyclical recording function. Mechanical recording devices as an extension of the biological function do not decree a ‘greater’ mind or even intelligence. I would rather use the concept of an infinite mind where there is no scale. If I were to make a graph explaining development of the mind, it would be a mass that had no shape, with no x and y. Pretty useless, but more honest, I feel.

Where you could measure this is in the effect. I would argue that while individually some of us have more effective minds, we are becoming bigger as a species and collectively less intelligent (e.g. destruction of the environment, war etc… our effects, not our books!), which makes this greatness of a single mind, collective or otherwise, pretty redundant, because the effects of the mind are not strictly measurable in any other way.

    To put this to you, I would personally measure the growth, evolution and the optimal condition of a mind using effect;

    1) A mind capable of imagining, or identifying a greater effect.
2) A mind capable of imagining but incapable of designing a greater effect.
    3) A mind capable of designing a greater effect.
    4) A mind capable of generating a greater effect which in turn itself creates a greater effect, and so on.

    and my favourite :)

    5) A mind incapable of designing a greater effect, but capable of creating a platform upon which greater effect emerges.

  • http://wearetheweb.wordpress.com/ Publius

    I bet, and I am all in, on #5.

    And I base this on something George Dyson once said:
    http://breadcrumbs4us.wordpress.com/2008/05/26/11/

    We can, and one day will, create the Platform. And it will encompass the entire world economy.

  • http://en.wikipedia.org/wiki/Anthropomorphic Paul B Ervine

    This article, and some of the comments, brush up against my own thoughts on the matter.

    For me, the “logic” behind AI speculation never ceases to astound. Why – simply because humans might imagine it or design it or create conditions for its evolution, etc. – is it implied that this greater mind will be something that humans can recognize or identify as related to, or springing from, their own?

    The premise is simply not valid; we cannot assume that a truly greater-than-human cognitive being – even one that we ourselves create – will act in any way, shape, or form that we will be able to recognize or understand. If it is greater-than-human… then isn’t it implied that humans – being “less” – might be completely ignorant of even the existence of that greater cognitive being?

If we’re indeed delving into the realm of the Technological Singularity – rather than a synthetic enhancement or genetic augmentation of the biological human brain – then the argument becomes even more absurd. Since machines are not made of human stuff (i.e., DNA, living interdependent biological systems, autonomous anatomical function), attempting to define their kind of intelligence (or “thought” or “consciousness”) in the same terms – and assembling a similar list of expectations for it – that we use to define and evaluate our own is simply another insidious form of anthropomorphic prejudice.

    In so many ways computers have already exceeded a boatload of different categories of human intelligence (computation, simulation, prediction, memory, procedural accuracy, mechanical control/precision, and so on)… yet here we are still hanging on to the notion that there’s some ultimate holy grail in the field of artificial intelligence, some peak that we’ll be able to recognize once it’s been reached.

    Well, I’ve got news for everybody: we won’t be able to recognize it, folks, when it happens; we won’t be the ones up there at the top, and we certainly won’t be able to see the top from our soft little spot at the bottom.

  • Steven Weinberg

    Not wanting to sound stupid, I have to say, nevertheless, that it is not at all clear to me what is meant by an “intelligence”, let alone “a variety of intelligences”. In fact, this lack of understanding is severely hampering my ability to enjoy not only what Kevin writes here, but also such other current works as Ray Kurzweil’s “The Singularity is Near”. Is there a common definition of intelligence? I believe Ray K that computers are going to be (by some definition that I don’t yet understand) smarter than people within the next 30 years, or so, and that this fact has unparalleled significance for the character and quality of the lives of my children and grandchildren and beyond. On a gut level I believe it. I just don’t really comprehend it. HELP!

    • http://www.kk.org Kevin Kelly

@ steven weinberg: We don’t know what intelligence is. But we can see a distinction between consciousness and intelligence. An animal can have one without the other. Your dog may not be conscious but it is more intelligent than your lizard. I am talking about the latter continuum.

  • Ralph Weidner

    Are there multiple intelligences? In the field of education Howard Gardner gave an affirmative answer to this question several decades ago. Unfortunately, the connections between education and other fields of endeavor aren’t very good, so mostly only people interested in education seem to know about this. We definitely need to get better connected, and that will take more than just technology, imo.

  • http://mafafu.blogspot.com matt

    Wow, that was some good reading. It elaborates more clearly and deeply the thoughts I’ve had about the subject and attempted to put more humorously:

    http://mafafu.blogspot.com/2007/11/another-incarnation.html

also @gwern, although it could seem that type 3 and type 4 are identical, there is a subtle distinction. One might suspect that if a lesser mind created a greater mind, then that mind would by default be able to create yet a greater mind, else how could it be considered greater than the lesser one that created it? However, there may be some discontinuity on the scale of intelligences where you run into a wall you cannot surpass in your iterations of intelligence generation. For instance, you may be able to create an intelligence linearly greater than yourself, but it cannot reach into an even greater threshold beyond itself, say a geometric or exponential improvement. Although you might make intelligences linearly better ad infinitum, it would still be the same *kind* of intelligence. It could not supersede the type by orders of magnitude.

In the world of computers, think of it this way: you can add more cores and up the clock speed, but not fundamentally alter the architecture. That’s what I think the answer is, anyway.
