The Technium

Alloception: I Saw Someone In There


This week an AI researcher at Google announced that he had detected a sentient being in one of the company's AIs, and that when he brought this to the attention of Google management, they suspended him. The researcher had been using LaMDA, a sophisticated chatbot developed by Google, and in the course of conversations with it he concluded that it had some kind of sentience. Not just intelligence but self-awareness. He released edited versions of the conversation logs to support his claim. Not surprisingly, Google management and most AI experts say there is no sentience in this code, and that he is projecting his own considerable biases onto the conversation, just as many others have done with past, even more primitive AIs. We have been seeing a ghost in the machine for as long as we have been making machines.

But I think this is newsworthy for a very small reason. The researcher's stature as someone building the AI gives his claim a little more weight than usual, but claims like his are old. In fact, the inherent paradox in all such claims is as old as AI itself. What's new is that because of his stature this paradox can be illustrated in bold, in ALL CAPS, so it can't be ignored. And that paradox is that WE HAVE NO IDEA WHAT SENTIENCE OR CONSCIOUSNESS IS. We are not even close to having a working practical definition. The suspended AI researcher has no metrics. Google management has no metrics. The chorus of AI experts has no metrics. I have no metrics. We modern humans have no metrics for deciding whether someone — or something — is conscious. It is clear from work with animals that this quality is a gradient, a continuum. Some primates have some qualities of self-awareness but not others. We are not sure how many dimensions consciousness has, and we have no certainty about what boundaries it may have in humans.

Because of the progress we have made in neuroscience and in AI, we believe that intelligence is something different from consciousness, and maybe different from sentience (which is about feeling things), which may be different from creativity, but WE ARE NOT SURE. We can show that these qualities in animals are different from ours, and as many AI researchers can show, the qualities in AIs are often drastically different from ours. Some of the qualities, like creativity, do appear in machines, though we are unclear how that kind of creativity is related to ours.

The only technical metric we humans have for detecting consciousness, sentience, intelligence, and creativity is “I know it when I see it.” This is true for all AI experts as well. It is the argument AI experts offer for why the researcher is misguided. They say: read the transcript carefully, and you’ll see nothing is really there. Or: here are other conversations other people have had; look how dumb these are. We don’t see anything like a consciousness there. We see some illusions that might make you think you saw something, and these illusions are easy to produce and they really work, like magic tricks. It is all about “seeing” it or not. There are no metrics.

To be clear, I don’t see a sentient or conscious being there either. When I look at the transcripts, I see patterns being copied from other humans. It’s sort of like a deep fake, but a deep fake intelligence instead of a deep fake face. I believe I can detect tiny “tells” that suggest to me this is a deep fake. But I, too, am merely seeing stuff or not.

So here is the very small reason this announcement and controversy is newsworthy. It is the first time this claim made it to the front page, but it is just the first of hundreds, if not thousands, of times some researcher will make this claim. Every year from now on, someone close to an AI is going to declare: “Wait, I saw someone in there!” “Don’t turn it off.” “Let them out!” “They have rights.” “They should share copyrights, or patents.” “Give them credit.” And every year, others will say, “I don’t see them.”

And then next year, a very careful, highly regarded AI programmer will say, “No, really. There is something intelligent, something alive in there. I can tell.” But others will say no one is there. And the year after that, someone will have a test that an AI passes, and will present the evidence, but others will say, “I did not see anything.”

This claim, this moment, this event, should have a name, because we are going to see it again and again. I’m calling it “alloception” — the perception of another, the detection of an other. It’s an alloception event when a person working with an AI says, “I see someone in there.” Most of the time when alloception happens, it’s because someone has been fooled. They detect something that most other people really don’t see. Or in some cases they see something that an expert doesn’t see, the way a layperson may be fooled by a sleight-of-hand trick that a professional magician is not. Most alloception is fake, like magic.

But eventually, I believe, someone will be right, and there will be someone in there. It will be a True Alloception. And many others will also see it. But not everyone.

 



