Emotional Agents
Many people have found the intelligence of AIs to be shocking. This will seem quaint compared to a far bigger shock coming: highly emotional AIs. The arrival of synthetic emotions will unleash disruption, outrage, disturbance, confusion, and cultural shock in human society that will dwarf the fuss over synthetic intelligence. In the coming years the headlines will shift from “everyone will lose their job” (they won’t) to “AI partners are the end of civilization as we know it.”
We can rationally process the fact that a computer could legitimately be rational. We may not like it, but we can accept that a computer could be smart, in part because we have come to see our own brains as a type of computer. It is hard to believe machines could be as smart as we are, but once they are, it kind of makes sense.
Accepting machine-made creativity is harder. Creativity seems very human; it is in some ways perceived as the opposite of rationality, and so it does not appear to belong to machines the way rationality does.
Emotions are interesting because they are clearly not found only in humans, but in many, many animals. Any pet owner can list the ways their pets perceive and display emotions. Part of the love of animals is being able to resonate with them emotionally. They respond to our emotions as we respond to theirs. There are genuine, deep emotional bonds between human and animal.
Those same kinds of emotional bonds are coming to machines. We see glimmers of it already. Nearly every week a stranger sends me logs of their chats with an AI, demonstrating how deep and intuitive their AI is, how well the two of them understand each other, and how connected they are in spirit. And we get reports of teenagers getting deeply wrapped up with AI “friends.” All this is happening before any serious work has been done to deliberately embed emotions into the AIs.
Why will we program emotions into AIs? For a number of reasons:
First, emotions are a great interface for a machine. They make interacting with it much more natural and comfortable. Emotions are easy for humans. We don’t have to be taught how to respond; we all intuitively understand signals such as praise, enthusiasm, doubt, persuasion, surprise, and perplexity, which a machine may want to use. Humans use subtle emotional cues to convey non-verbal information, importance, and instruction, and AIs will use similar emotional notes in their instruction and communication.
Second, the market will favor emotional agents, because humans do. AIs and robots will continue to diversify, even as their basic abilities converge, and so their personalities and emotional character will become more important in choosing which one to use. If they are all equally smart, the one that is friendlier, or nicer, or a better companion, will get the job.
Third, a lot of what we hope artificial agents will do, whether they are software AIs or physical robots, will require more than rational calculation. It will not be enough that an AI can code all night long. We are currently overrating intelligence. To be truly creative and capable of innovation, to be wise enough to offer good advice, will require more than IQ. The bots will need sophisticated emotional dynamics deeply embedded in their software.
Is that even possible? Yes.
There are research programs (such as those at MIT) going back decades that work out how to distill emotions into attributes that can be ported over to machines. Some of this knowledge pertains to ways of visually displaying emotions in hardware, just as we do with our own faces. Other researchers have extracted the ways we convey emotion with our voice, and even in the words of a text. Recently we’ve witnessed AI makers tweaking how complimentary and “nice” their agents are, because some users disliked the new personality, and others simply disliked the change itself. While we can definitely program in personality and emotions, we don’t yet know which ones work best for a particular task.
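To make that concrete: at the simplest level, “programming in” a personality today often amounts to little more than a page of instructions layered on top of a base model. Here is a minimal sketch using the OpenAI Python client; the persona text and model name are illustrative assumptions of mine, not any vendor’s actual settings.

```python
# Sketch: giving an agent a personality via a system instruction.
# The persona text and model name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are warm and encouraging but never flattering. "
    "Praise only specific, genuine strengths, and say plainly when you disagree."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Here's my draft essay. Is it any good?"},
    ],
)
print(response.choices[0].message.content)
```

Dialing “niceness” up or down can be as simple as editing that persona text, which is partly why an agent’s personality can change overnight, and change back just as fast when users complain.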
Machines displaying emotions is only half of the work. The other half is machines detecting and comprehending human emotions. A relationship is two-way, and to truly be an emotional agent, an AI must get good at picking up your emotions. There has been a lot of research in that field, primarily in facial recognition: reading not just your identity, but how you are feeling. There are commercially released apps that can watch a user at their keyboard and detect whether they are depressed or undergoing emotional stress. The extrapolation of that will be smart glasses that not only look out but at the same time look back at your face to parse your emotions. Are you confused, or delighted? Surprised, or grateful? Determined, or relaxed? Already, Apple’s Vision Pro has inward-facing cameras in its goggles that track your eyes and micro-expressions such as blinks and eyebrow raises. Current text LLMs make no attempt to detect your emotional state beyond what can be gleaned from the words in your prompt, but it is not a huge technical jump to do that.
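For text, at least, the building blocks are already off the shelf. Here is a minimal sketch of gleaning emotional state from the words in a prompt, using one publicly available emotion classifier from the Hugging Face hub (the specific model is just one example among many):

```python
# Sketch: inferring a user's emotional state from the words in a prompt.
# The model below is one openly available emotion classifier; any similar
# classifier could be swapped in.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return scores for every emotion label, not just the top one
)

prompt = "I've read the manual three times and it still won't boot."
scores = classifier([prompt])[0]  # one list of label/score dicts per input

for item in sorted(scores, key=lambda s: s["score"], reverse=True):
    print(f"{item['label']:>9}: {item['score']:.2f}")  # anger should rank high here
```

An emotional agent would run something like this continuously, on every message, and fold the result back into how it responds.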
In the coming years there will be lots of emotional experiments. Some AIs will be curt and logical; some will be talkative extroverts. Some AIs will whisper, and only talk when you are ready to listen. Some people will prefer loud, funny, witty AIs that know how to make them laugh. And many commercial AIs will be designed to be your best friend.
We might find that admirable for an adult, but scary for a child. Indeed, there are tons of issues to be wary of when it comes to AIs and kids, not just emotions. But emotional bonds will be a key consideration in children’s AIs. Very young children can already bond with, and become very close to, inert dolls and teddy bears. Imagine if a teddy bear talked back, played with infinite patience, and mirrored their emotions. As the child grows it may never want to surrender the teddy. The quality of emotions in machines will therefore likely become one of those areas where we have very different regimes, one for adults and one for children. Different rules, different expectations, different laws, different business models.
But even adults will become very attached to emotional agents, very much like in the movie Her. At first society will brand those humans who get swept up in AI love as delusional or mentally unstable. But just as most people who deeply love a dog or cat are not broken, but well-adjusted and very empathetic beings, so most of the humans who have close relationships with AIs and bots will likewise see these bonds as wholesome and broadening.
The common fear about cozy relationships with machines is that they may be so nice, so smart, so patient, so available, so much more helpful than the other humans around, that people will withdraw from human relationships altogether. That could happen. It is not hard to imagine well-intentioned people consuming only the “yummy easy friendships” that AIs offer, just as they are tempted to consume only the yummy easy calories of processed foods. The best remedy for this temptation is similar to the remedy for fast food: education and better choices. Part of growing up in this new world will be learning to discern the difference between pretty perfect relationships and messy, difficult, imperfect human ones, and the value the latter provide. To be your best, whatever your definition, requires that you spend time with humans!
Rather than ban AI relationships (or fast food), we should moderate them and keep them in perspective. Because in fact the “perfect” behavior of an AI friend, mentor, coach, or partner can be a great role model. If you surround yourself with AIs that have been trained and tweaked to be the best that humans can make, that is a fabulous way to improve yourself. The average human has very shallow ethics and contradictory principles, and is easily swayed by their own base desires and circumstances. In theory, we should be able to program AIs to have better ethics and principles than the average human. In the same way, we can engineer AIs to be a better friend than the average human. Having these educated AIs around can help us improve ourselves and become better humans. And the people who develop deep relationships with them have a chance to be the most well-adjusted and empathetic people of all.
The argument that the AIs’ emotions are not real because “the bots can’t feel anything” will simply be ignored, just like the criticism that artificial intelligence is not real intelligence because the machines don’t understand. It doesn’t matter. We don’t understand what “feeling” really means, and we don’t even understand what “understand” means. These are terms and notions that are habitual but no longer useful. AIs do real things we used to call intelligence, and they will start doing real things we used to call emotions. Most importantly, the relationships humans will have with AIs, bots, and robots will be as real and as meaningful as any other human connection. They will be real relationships.
But the emotions that AIs and bots have, though real, are likely to be different. Real, but askew. AIs can be funny, but their sense of humor is slightly off, slightly different. They will laugh at things we don’t. And the way they are funny will gradually shift our own humor, in the same way that the way they play chess and Go has changed how we play those games. AIs are smart, but in an unhuman way. Their emotionality will be similarly alien, since AIs are essentially artificial aliens. In fact, we will learn more about what emotions fundamentally are from observing them than we have learned from studying ourselves.
Emotions in machines will not arrive overnight. They will gradually accumulate, so we have time to steer them. They begin with politeness, civility, niceness. They praise and flatter us, easily, maybe too easily. The central concern is not whether our connection with machines will be close and intimate (it will), nor whether these relationships are real (they are), nor whether they will preclude human relationships (they won’t), but rather: who does your emotional agent work for? Who owns it? What is it being optimized for? Can you trust it not to manipulate you? These are the questions that will dominate the next decade.
Clearly the most sensitive data about us would be information stemming from our emotions. What are we afraid of? What exactly makes us happy? What do we find disgusting? What arouses us? After spending all day, for years, interacting with our always-on agent, that agent would have a full profile of us. Even if we never explicitly disclosed our deepest fears, our most cherished desires, and our most vulnerable moments, it would know all this just from the emotional valence of our communications, questions, and reactions. It would know us better than we know ourselves. This will be a common refrain in the coming decades, repeated in both exhilaration and terror: “My AI agent knows me better than I know myself.”
In many cases this will be true. In the best case we use this tool to know ourselves better. In the worst case, this asymmetry in knowledge will be used to manipulate us and amplify our worst selves. I see no evidence that we will cease including AIs in our lives, hourly if not by the minute. (There will be exceptions, like the Amish, who drop out, but they will be a tiny minority.) Most of us, most of the time, will have an intimate relationship with an AI agent, bot, or robot that is always on, ready to help us in any way it can, and that relationship will become as real and as meaningful as any other human connection. We will willingly share the most intimate hours of our lives with it. On average we will lend it our most personal data as long as the benefits of doing so keep coming. (The gate in data privacy is not really who has my data, but how much benefit do I get? People will share any kind of data if the benefits are great enough.)
Twenty-five years from now, if the people whose constant companion is an always-on AI agent are total jerks, misanthropic bros, and losers, this will be the end of the story for emotional AIs. On the other hand, if people with a close relationship with an AI agent are more empathetic than average, more productive, distinctly individual, well adjusted, with a richer inner life, then this will be the beginning of the story.
We can steer the story to the beginning we want by rewarding those inventions that move us in that direction. The question is not whether AI will be emotional, but how we will use that emotionality.