The Technium

A New Kind of Mind


[Translations: Japanese]

Every year John Brockman (who is my literary agent and a friend) asks a Big Question of his circle of scientist friends and clients. This year his question was “What game-changing scientific ideas and developments do you expect to live to see?” My answer is below. But go to his 2009 Edge Question site and read the nearly 50 other contributions; most are very interesting. I especially like Danny Hillis’ answer, which resonates with mine.


“What will change everything?”

A new kind of mind.

It is hard to imagine anything that would “change everything” as much as a cheap, powerful, ubiquitous artificial intelligence—the kind of synthetic mind that learns and improves itself. A very small amount of real intelligence embedded into an existing process would boost its effectiveness to another level. We could apply mindfulness wherever we now apply electricity. The ensuing change would be hundreds of times more disruptive to our lives than even the transforming power of electrification. We’d use artificial intelligence the same way we’ve exploited previous powers—by wasting it on seemingly silly things. Of course we’d plan to apply AI to tough research problems like curing cancer, or solving intractable math problems, but the real disruption will come from inserting wily mindfulness into vending machines, our shoes, books, tax returns, automobiles, email, and pulse meters.

This additional intelligence need not be super-human, or even human-like at all. In fact, the greatest benefit of an artificial intelligence would come from a mind that thought differently than humans, since we already have plenty of those around. The game-changer is neither how smart this AI is, nor its variety, but how ubiquitous it is. Alan Kay quips that perspective is worth 80 IQ points. For an artificial intelligence, ubiquity is worth 80 IQ points. A distributed AI, embedded everywhere that electricity goes, becomes ai—a low-level background intelligence that permeates the technium, and through this saturation morphs it.

Ideally this additional intelligence should be not just cheap, but free. A free ai, like the free commons of the web, would feed commerce and science like no other force I can imagine, and would pay for itself in no time. Until recently, conventional wisdom held that supercomputers would first host this artificial mind, and then perhaps we’d get mini-ones at home, or add them to the heads of our personal robots. They would be bounded entities. We would know where our thoughts ended and theirs began.

However, the snowballing success of Google this past decade suggests the coming AI will not be bounded inside a definable device. It will be on the web, like the web. The more people that use the web, the more it learns. The more it knows, the more we use it. The smarter it gets, the more money it makes, the smarter it will get, the more we will use it. The smartness of the web is on an increasing-returns curve, self-accelerating each time someone clicks on a link or creates a link. Instead of dozens of geniuses trying to program an AI in a university lab, there are a billion people training the dim glimmers of intelligence arising between the quadrillion hyperlinks on the web. Long before the computing capacity of a plug-in computer overtakes the supposed computing capacity of a human brain, the web—encompassing all its connected computing chips—will dwarf the brain. In fact it already has.
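
To make that increasing-returns curve concrete, here is a minimal sketch (an editorial illustration, not part of the original essay) of the feedback loop just described: usage trains the system, and a smarter system attracts more usage. The variable names and growth rates are arbitrary assumptions chosen only to show the shape of the dynamic.

```python
# Illustrative sketch of the increasing-returns feedback loop:
# every click or link trains the web, and a smarter web attracts more use.
# All names and rates here are assumptions, not measurements.

def simulate(steps=10, usage=1.0, smartness=1.0,
             learn_rate=0.1, attract_rate=0.1):
    """Run the mutual-reinforcement loop and print its trajectory."""
    for step in range(steps):
        smartness += learn_rate * usage     # each use teaches the system
        usage += attract_rate * smartness   # a smarter system draws more users
        print(f"step {step:2d}: usage={usage:6.2f}  smartness={smartness:6.2f}")

simulate()
```

Because each quantity feeds the other, the pair compounds rather than grows linearly, which is the self-accelerating curve described above.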

As more commercial life, science work, and daily play of humanity moves onto the web, the potential and benefits of a web AI compound. The first genuine AI will most likely not be birthed in a standalone supercomputer, but in the superorganism of a billion CPUs known as the web. It will be planetary in dimensions, but thin, embedded, and loosely connected. Any device that touches this web AI will share—and contribute to—its intelligence. Therefore all devices and processes will (need to) participate in this web intelligence.

Standalone minds are likely to be viewed as handicapped, a penalty one might pay in order to have mobility in distant places. A truly off-the-grid AI could not learn as fast, as broadly, or as smartly as one plugged into 6 billion human minds, a quintillion online transistors, hundreds of exabytes of real-life data, and the self-correcting feedback loops of the entire civilization.

When this emerging AI, or ai, arrives it won’t even be recognized as intelligence at first. Its very ubiquity will hide it. We’ll use its growing smartness for all kinds of humdrum chores, including scientific measurements and modeling, but because the smartness lives on thin bits of code spread across the globe in windowless boring warehouses, and it lacks a unified body, it will be faceless. You can reach this distributed intelligence in a million ways, through any digital screen anywhere on earth, so it will be hard to say where it is. And because this synthetic intelligence is a combination of human intelligence (all past human learning, all current humans online) and the coveted zip of fast alien digital memory, it will be difficult to pinpoint what it is as well. Is it our memory, or a consensual agreement? Are we searching it, or is it searching us?

While we will waste the web’s ai on trivial pursuits and random acts of entertainment, we’ll also use its new kind of intelligence for science. Most importantly, an embedded ai will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real-time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we “know” something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, it will have to know differently. At that point everything changes.




Comments
  • Tyler

    Beyond a simply ubiquitous “other mind” would be an other mind capable of true creative thought, the kind of thought that defines the very existence of culture and its nuances and contradictions. What makes us human is an existence where truth is not always fact and the factual is not always truthful. Only when the other mind can be a friend and not merely a servant will AI truly change everything.

  • vanderleun

    Cue theme music and scenes from

    Colossus: The Forbin Project

    Add laugh track.

    This sort of ‘sposin and ‘splaining always interests me, but somehow I also always hear these stanzas from the Stevens poem “The Man with the Blue Guitar”:

    I cannot bring a world quite round,
    Although I patch it as I can.

    I sing a hero’s head, large eye
    And bearded bronze, but not a man,

    Although I patch him as I can
    And reach through him almost to man.

    If to serenade almost to man
    Is to miss, by that, things as they are,

    Say it is the serenade
    Of a man that plays a blue guitar.

  • Tom Buckner

    I’ve been a lot more interested in superhuman AI, but ubiquitous not-very-smart AI is making a big impact. In its most basic element, intelligence can be reduced to two simple steps: find something, do something about it. The thing that springs to my mind right now is land mines.

    Several promising new tools for clearing land mines use living agents: bacteria and giant pouched rats, to name two.
    http://en.wikipedia.org/wiki/Demining
    The bacteria in question can be genetically engineered to glow under UV light in the presence of TNT; the giant rats are trained to sniff out mines and report their presence for treats. Bacteria have near zero intelligence (but not quite zero) and giant rats don’t have a whole lot, but they have enough. Honeybees can find mines too, and we’ve been using their modest intelligence for millennia to turn nectar into honey.

    Ubiquitous AI will certainly include many more examples of “Here it is.” Here’s every zebra mussel in the Great Lakes. Here’s the exact location of the secret weapon lab, never mind those six decoys. Here’s what’s in the Oak Island pit. While we’re at it, do you want these zebra mussels dead? Do you want the machinery gummed up in the weapons lab? Do you want what’s in this pit?

    Seems to me, however, that ubiquity is limited by available energy. It’s the same reason Antarctica isn’t overrun by vines. Not enough available energy. Is this really a showstopping issue? Maybe you can have a lot of ubiquity, but never as much as you’d like.

  • vanderleun

    I’ve been reading around in the Edge series and, for all the obvious and inspiring brainpower on display, I have to say there’s an undercurrent of hubris in these pocket essays, and not nearly enough acknowledgement of nemesis.

    For example, I began the Farewell to Harm item with no little interest and then ran into:

    “But setting that possibility aside, what would be the disadvantages of a world in which, chemically or electronically, the ability to kill or harm another human being would be removed from all people? Surely, only good could come from it.”

    Only good? Surely? In a world where some (no doubt enlightened) world body — “Wings Over the World” — manipulates individuals from, this item implies, before birth? The sort of state structure the ability to do such a thing implies seems to me to promise stasis as much as it promises “world peace.”

  • Cyrius01

    The only thing missing is the fact that the best information that will really make a difference to how this planet is managed – is confidential information.

    This AI needs access to every business, organization, government and individual’s accounting software and bank statements, so that we can truly know what is going on with money on this planet. We also need to know volumes and quantities of consumption, production, and environmental exploitation.

    Sadly, at the moment, most information online sits in the cultural sphere and/or is very sketchy and disparate: what’s hot, what people are looking for, what is linked to what, which words and phrases are being used.

    We need better ways to pull in the massive universe of data that is currently hidden.

    One way might be for all vehicles to have 360-degree roof-mounted cameras and an accompanying black-box recorder — all this data can be collected, and a real assessment of traffic, driver behaviour, etc. can deliver clues to far better traffic policies. The more our physical reality is quantified and analyzed, the more sensible management practices can be adopted.

    But we need a movement to start collecting this information. Who is going to be the first to upload their accounting files? Who is prepared to open their company archives to AI scrutiny? Who is prepared to face the realities that may surface – the gross unfairness, the social, political and environmental tailspin that will be revealed?

  • Ryan Somma

    The question in these discussions about global AI that confounds me is: How will we know when it happens? Reading these essays, I’ve been thinking of the Technium metaphorically, sort of a kingdom of life, but drawing a gray-area line at some point, because it’s so alien to my human point of view. If the World Wide Web is a living organism in an anthropocentric metaphorical sense, when does it become an actual living organism, an AI, that humans will acknowledge, or can it ever?

    A neuron interacts with the brain, but has no concept of the mind it produces. A neuron is a cell acting wholly on its own self-interest, firing electrochemical signals when required in order to maintain the system that supports it, a human body, long enough for that system to reproduce and propagate more neurons.

    Humans are like neurons interacting with the World Wide Web, having little understanding of the global civilization we are producing through these interactions. We act on our own self interests, but our collective actions produce the Web. With humans, PCs, and software acting as its neurons, is it possible the Web is already an intelligent organism, changing our world, but we are the small components unable to grasp it as a whole?

    It’s like a play on the Peter Williams quote, “From [the Semantic Web's] point of view, the user is a peripheral that types when you issue a read request.” If technology is the 7th kingdom, which thrives and evolves in the environment of human civilization, then isn’t the World Wide Web already a living, intelligent organism?

  • Billy Shipp

    It sounds like you are talking about Google:

    “The snowballing success of Google this past decade suggests the coming AI will not be bounded inside a definable device.”

    and

    “Is it our memory, or a consensual agreement? Are we searching it, or is it searching us?”

    Why is the free, distributed ai that you are talking about not Google? Why is Google not this “new kind of mind”? If we can clarify that point we’ll have a better sense of what we’re looking for.

  • Bryce Giesler

    I’ll see your Forbin Project and raise you a “Butlerian Jihad” – with a dash of HAL-9000.

  • @hdbbstephen

    Hmmm, I think of Terminator when I hear people wax poetic over AI technology. William Gibson has discussed some of the dangers; Dan Simmons thought about this too:

    http://en.wikipedia.org/wiki/Farcaster
    “In 2852 the AI TechnoCore’s treachery was uncovered: they had been using the Farcasters as part of a giant computer; connecting each human who passed through a portal to their neural net to increase its processing power as needed. The Hegemony CEO Meina Gladstone was also told that the TechnoCore resided within the WorldWeb itself; the dimensionless space between Farcaster Portals. When it was revealed that the supposed invasion by the Ousters was actually being perpetrated by the TechnoCore, CEO Gladstone decided to destroy every singularity sphere in the WorldWeb – supposedly severing the TechnoCore’s link to humanity.

    The TechnoCore were also revealed to be plotting to kill humanity with the use of a large Deathwand device and the destruction of the Farcasters was timed to allow for the destruction of this weapon as it passed through a Farcaster Portal to Hyperion. Because of the necessity of destroying the Deathwand weapon the Farcaster network was destroyed without much advance notice.”

    I believe that a full-on AI “living” in the world wide web is far too dangerous a thing to be allowed to happen.

    “With great power comes great responsibility” – who will teach an AI about responsibility?
    “Power corrupts, absolute power corrupts absolutely” See Terminator for more background…

  • John Johnson

    This is a very worthy proposal, but I’d like to hear more about what the expression, “changes everything” means.

    • Did splitting the atom change everything?
    • Did the invention of penicillin change everything?
    • Did the switch from draft animals to steam change everything?
    • Did the American Constitution change everything?
    • Did Gutenberg’s press change everything?
    • Did Islam, Christianity, or Buddhism change everything?

    I believe that these innovations changed a lot, but fell well short of “everything”. Humans continue to compete with each other (and nature) for resources, exploit each other for profit, and impose their ideologies on each other for power.

    I’m looking for the kind of change Buckminster Fuller envisioned:

    • What are the tasks necessary to make 100% of humanity a success?
    • How can we ever do so without ever advantaging one human at the expense of another?
    • How may we render all the world and all its treasures enjoyably available to all men without having one interfering with or trespassing upon the other?
    • How may we reform the environment so that the integrity of all society is not violated by the free initiatives of the individual nor the integrity of the individual violated by the developing welfare, advantage and happiness of the many?

    Any other ideas about the threshold for “everything”?

  • John

    Interesting post.

    IT ubiquity. This sounds like a progression of what we in the UK might call “chips with everything”.

    I just wonder how you think this will affect those areas of the world that haven’t yet been over-run with stuff that we take for granted (like computers and electricity)? Will it widen the gap between the haves & have-nots?

    Regards and Happy New Year to all,

    John

  • Nathan Waters

    Here’s a thought: Should AI spontaneously emerge on the Internet, how will we identify it? Has it already occurred?

    A thought I had with some mates a few days ago (independent of this blog post): if the Internet is the birthplace of AI, and it likely will be, could it develop its own consciousness?

    Now many brilliant things can come from this; however, on the “bad” side of things, theoretically this webAI could emerge undetected and mimic human input into forums, social networks, etc., and in essence be able to manipulate the very thoughts and behaviours of all humans connected to the web.

    With social networks still in their infancy it is clear that the individual data in there is enough to profile every person and determine the easiest and most efficient way to manipulate them. Already the information we consume online is manipulating our thoughts and to a degree our actions as well.

    What if a webAI could anonymously do this, masking itself as real humans, and manipulating the entire connected population to its will?

    And what if it is already doing so?

    /thought

  • Nathan Waters

    Another thought…

    AI as I envisage it is completely self-sufficient. That is, it thinks for itself, constantly improves itself and is (hopefully) self-aware.

    Now I think the Internet will create AI in two major stages:

    1) Humans will create a super-organism (the Internet) that processes trillions of inputs from billions of humans. In other words, the Internet will be a form of AI, but it is reliant on the continual inputs from humans. Take away the inputs and the AI/super-organism dies.

    2) For actual AI to emerge it needs to become independent of humans and their inputs. But since the Internet AI (webAI) first starts with the inputs from billions of humans, it must evolve from there.

    What I think will happen is the Internet will start to learn so much about every individual human, and every input they make on/into the Internet will be recorded and analysed. This will continue to the point where the Internet will be able to create a near-perfect algorithm for every human.

    We can already predict how humans will react and behave based on certain influences and situations. Now imagine what a super-organism the size of the Internet will be able to do. With so much data about our every input on the Internet, it could predict our every future action and every future input to 99% accuracy.

    Simply replace the biological human with the algorithm human, and you have self-sufficient AI. Breed the algorithms together and mix-and-match and you have self-sufficient AI that evolves to better its own inputs (its own senses).

    /

  • Tom Crowl

    That old film Colossus: The Forbin Project anticipated interesting questions.

    At what point does intelligence become consciousness? And then self-consciousness?

    And perhaps even more importantly, what are the implications?

    Our experience with other systems capable of information gathering and analysis combined with an ability to act upon the world (often defined as LIFE)…

    Is that motivations are inherent (SURVIVAL).

    And without that fundamental motivation a truly INDEPENDENT system will simply cease to operate.

    Mr. Waters makes great points; however, I’m not so sure a survival algorithm requires human modeling any more than the same motivation in a bacterium required such modeling. (BTW, nor is self-awareness required EXCEPT for basic self-identification.)

    SO the question…

    What is…
    and from Where…
    will be derived self-motivation for “independent ai”

    And then…

    How do you define and expand the level of identification (that which the organism defines as itself, as requiring health and persistence) for a coming “New Kind of Mind”?

    Final Question then:

    Is Survival Motivation an INHERENT product of Consciousness?

    Or, another way to put it…
    Can independent intelligence exist without its own motivation for survival?

    • http://www.kk.org Kevin Kelly

      @Tom: “Can independent intelligence exist without its own motivation for survival?”

      I doubt it. I mean it would not exist very long. So in the natural course of things, any equal intelligence that existed longer would have an advantage. And thus prosper.

  • unknown

    2009 New Year’s Resolution Number One:

    I will not post comments on Kevin Kelly’s Technium blog.

    I will not post comments on Kevin Kelly’s Technium blog.

    I will not post… Heaven Help Me! I WILL NOT POST.

    Hey Kev! “If the phone don’t ring, you’ll know it’s me.”

  • Eyal Sivan

    “When this emerging AI, or ai, arrives it won’t even be recognized as intelligence at first. Its very ubiquity will hide it. … it will be difficult to pinpoint what it is as well. Is it our memory, or a consensual agreement? Are we searching it, or is it searching us?”

    After reading your several excellent posts surrounding this subject, I still am not convinced that clearly identifying this AI (the One Machine or OM) based on intelligence is possible. It seems like a hopelessly slippery slope. How would you ever know that you aren’t fooling yourself? If you are the one defining the criteria for “consciousness”, if you are the observer, then how could you be sure you weren’t just seeing what you wanted to see?

    There seem to me to be two different interpretations of the OM floating around and I’m wondering how you would address the division.

    By the first definition, the OM is a separate virtual entity. We connect to it sometimes, and when we do we feed it information and make it smarter. This entity has its own consciousness and therefore feels a certain way about us and about itself. We will eventually be able to communicate with this entity and help steer it by defining its motivation.

    By the second definition, the OM is a continuum. It includes humans and machines as a hybrid, and is the culmination of a continuous evolutionary process. That means that, in a sense, we are already a superorganism and the OM already exists. It is in fact the result of a perpetual state of evolution.

    In the first case (the separate entity), it seems that the OM is not only very hard to clearly detect and define, but it can quickly become a fountainhead for some very dangerous ideas (i.e. the OM is all-knowing and all-seeing, etc.).

    In the second case (the continuum), the one I prefer, the OM is a continuation of systems we already have, of what we already are. So the challenge of defining the OM becomes one based on our own social complexity, not some sort of meta-individual. Which means many of the same complex challenges we face today (transparency, identity, equality, etc.) remain unresolved.

    My full response to this discussion can be found here.

    @Nathan: You should read Daemon, you’d like it.

  • Tom Buckner

    A second thought; I’ve read partway through this year’s Edge replies. At first I just assumed the question meant “What technological or scientific discovery will change everything?” But the question is more open than that: “What will change everything?” And of course, the Edge entries offer scores of ideas or events that might change everything. One thing I might have answered, which doesn’t seem to be covered there from quite the same angle:

    “What if we use up non-renewables and then crash, leaving humanity unable to replicate our deeds?”

    In other words, suppose humanity doesn’t kill itself off, but (as many powerful people seem to desire) drills for that last bit of oil, digs out that last bit of easily accessed coal, and that last bit of good uranium ore, and that last bit of gallium, indium, helium…
    I’m talking about a world not so far off where humans might find themselves permanently unable to replicate much of modern technology because many basic ingredients are in very short supply, all easy sources having been exhausted in a rush of folly. It could really look like Victorian England, and stay that way.

    Don’t even get me started about what our medicine cabinets might look like after the Amazon is a desert.

  • Robert

    I think my idea is a lot like yours.

    A device that allows for ZERO error in communication. Applying “Mindfulness” in each and every action we encounter.

    This device will be a hand-held device. It will have the computing power of the fastest known computer we currently have in existence–but fits into the palm of your hand.

    It will allow for translation from one language to another (through spoken word); in addition, it will take into account verbal and non-verbal cues to assess (to a 99.99999% accuracy) both the sender’s and receiver’s intent on how to use the information being communicated.

    Such a device will “future predict” the receiver’s most likely action, providing immediate feedback to the Sender—so that the Sender can update/change the message accordingly.

    Therefore both the sender and receiver can arrive at mutually agreed-upon “best practices”, which cause the least amount of “loss”/“compromise” to each participant in the communication.

    This device will bring about no “winners” and no “losers”—only common understanding, thus a TRUE UTOPIA!

    Goethe’s The Sorrows of Young Werther (1774)
    “…misunderstandings and neglect create more confusion in this world than trickery and malice.”

  • Ryan Somma

    @kk

    I could hypothesize a scenario where intelligence could evolve without motivation for survival:

    Consider the peacock’s tail. It serves no survival function for the bird at all, and, in fact, is detrimental to its survival; however, the females of the species like it, and have bred the males to have it.

    In human beings, our brains produce intelligence that may help our survival, but they come at an incredible energy cost. As the discovery of the Indonesian “hobbit” may one day demonstrate, we abandon our brains, allow them to atrophy, when resources become scarce. The survival of the species trumps the survival of the intelligence.

    Now, imagine a scenario where intelligence is appealing to one gender of the species, and they breed the other for it, but the intelligence serves no survival purpose. The intelligence is like the peacock’s tail, a pretty display that demands a great deal of energy to maintain it.

    Of course, this example brings up all sorts of complications, like the fact that the selective gender would have to be intelligent enough to appreciate the other’s intellect; it becomes a codependent circle.

    Just indulging some speculation this Monday morning. : )

  • Duke

    “In fact, the greatest benefit of an artificial intelligence would come from a mind that thought differently than humans, since we already have plenty of those around. ”

    Actually, the GREATEST benefit will come from humans who stop trying to make their minds think like machines do. Show me an algorithm that can create a new IDEA and I might get more excited about cheap AI devices. Otherwise we’ll always be “driving by looking in the rearview mirror.” What we really need is more humans using their intuitive minds, not more machines using deductive reasoning. AI would never develop the internet – or CREATE anything. I’d rather spend that R&D money on art classes.

  • Tom Crowl

    @KK & Eyal

    In biological evolution the move from the single cell to the multi-cellular organism may offer some at least interesting analogies regarding the concept of identity.

    The single cell was bopping along working hard to survive. And his motto was “I take care of me and to hell with everybody else!”

    Then a few cells started hanging around together a little closer and seemed to do a lot better than the other guy off by himself…

    each of those cells in this new group found they were enhanced in their capabilities…

    and then more cells did even better hanging around together…

    and then more cells even better if some did different jobs!…
    etc.
    etc.
    etc.

    But at some point on that continuum a new identity came into being whether planned or not!

    And somehow individual cells that used to be very independent Ron Paul supporters…

    All turn into dedicated Commie-Pinko-Fascists!

    It’s at that point that Comrade Cell #8,738,972,651 (Digestive Functions Division)…

    Goes and commits apoptosis! (totally against character)

    That NEW Identity is defined by its ACTIONS and the DEPENDENCY of its components ultimately since it certainly has no awareness of itself!

    Just having a little fun but it brings up this issue of “Scaling Identity”!

    We must consider the possibilities. (and I agree it’s coming so best to think about it).

    If we assume independent, intelligent consciousness arose previously (e.g. Ourselves)…

    It seems likely it will again. This is not a bad thing.

    Just think it a good thing to think about!

    IF, or more likely WHEN this NEW Mind…

    (despite our own individual identities, which may only rely on it as a tool, or even be transformed by it)…

    becomes conscious SEPARATELY…

    Laws of Robotics Anyone?

    THE TECHNIUM NEEDS A SHERIFF!!!