The Trust Quotient (TQ)


Wherever there is autonomy, trust must follow. If we raise children to go off on their own, they need to be autonomous and we need to trust them. (Parenting is a school for learning how to trust.) If we make a system of autonomous agents, we need lots of trust between agents. If I delegate decisions to an AI, I then have to trust it, and if that AI relies on other AIs, it must trust them. Therefore we will need to develop a very robust trust system that can detect, verify, and generate trust between humans and machines, and more importantly between machines and machines.

Applicable research in trust follows two directions: understanding better how humans trust each other, and applying some of those principles in an abstract way to mechanical systems. Technologists have already created primitive trust systems to manage the security of data clouds and communications. For instance, should this device be allowed to connect? Can it be trusted to do what it claims it can do? How do we verify its identity and its behavior? And so on.

So far these systems do not deal with adaptive agents, whose behaviors, identities, and abilities are far more fluid, opaque, and shifting, and also more consequential. That makes trusting them both more difficult and more important.

Today when I am shopping for an AI, accuracy is the primary quality I am looking for. Will it give me correct answers? How much does it hallucinate? These qualities are proxies for trust. Can I trust the AI to give me an answer that is reliable? As AIs start to do more, to go out into the world to act, to make decisions for us, their trustworthiness becomes crucial.

Trust is a broad word that will be unbundled as it seeps into the AI ecosystem. Part security, part reliability, part responsibility, and part accountability, these strands will become more precise as we learn to synthesize and measure them. Trust will be something we’ll be talking a lot more about in the coming decade.

As the abilities and skills of AIs begin to differentiate – some are better for certain tasks than others – reviews of them will begin to include their trustworthiness. Just as other manufactured products have specs that are advertised – such as fuel efficiency, gigabytes of storage, pixel counts, uptime, or cure rates – so the vendors of AIs will come to advertise the trust quotient of their agents. How reliably reliable are they? Even if this quality is not advertised, it needs to be measured internally so that the company can keep improving it.

When we depend on our AI agents to book vacation tickets, renew our drug prescriptions, or get our car repaired, we will be placing a lot of trust in them. It is not hard to imagine occasions where an AI agent is involved in a life-or-death decision. There may even be legal liability consequences for how much we can expect to trust AI agents. Who is responsible if the agent screws up?

Right now, AIs own no responsibilities. If they get things wrong, they offer no guarantee to fix them. They take no responsibility for the trouble they may cause with their errors. In fact, this is currently the key difference between human employees and AI workers. The buck stops with the humans. They take responsibility for their work; you hire humans because you trust them to get the job done right. If it isn’t, they redo it, and they learn how not to make that mistake again. Not so with current AIs. This makes them hard to trust.

AI agents will form a network, a system of interacting AIs, and that system can assign a risk factor to each task. Some tasks, like purchasing airline tickets or filling drug prescriptions, would have risk scores weighing potential negative outcomes against the convenience they offer. Each AI agent itself would have a dynamic risk score depending on what its permissions were. Agents would also accumulate trust scores based on their past performance. Trust is very asymmetrical; it takes many interactions over a long time to build, but it can be lost instantly, with a single mistake. The trust scores would be constantly changing, and tracked by the system.
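
To make that asymmetry concrete, here is a minimal sketch in Python of how such a score might be updated. Everything in it is hypothetical (the class, the parameter names, and the numbers are invented) and serves only to illustrate the slow-to-build, fast-to-lose dynamic.

    # Hypothetical trust score: slow to rise, fast to fall.
    # All parameter values are illustrative, not from any real system.
    class TrustScore:
        def __init__(self, initial=0.5, gain=0.01, penalty=0.4):
            self.value = initial      # current trust, between 0 and 1
            self.gain = gain          # small credit per successful task
            self.penalty = penalty    # large fraction lost per failure

        def record(self, success: bool) -> float:
            if success:
                # Each success nudges trust a small step toward 1.0.
                self.value += self.gain * (1.0 - self.value)
            else:
                # One failure wipes out a large share of accumulated trust.
                self.value *= 1.0 - self.penalty
            return self.value

    score = TrustScore()
    for _ in range(200):              # a long run of good behavior...
        score.record(True)
    print(round(score.value, 3))      # ~0.933
    score.record(False)               # ...then a single mistake
    print(round(score.value, 3))      # ~0.56

Two hundred successful tasks lift the score only gradually; one failure erases much of that accumulated trust in a single step.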

Most AI work will be done invisibly, as agent-to-agent exchanges. Most of the output generated by an average AI agent will only be seen and consumed by another AI agent, one of trillions. Very little of the total AI work will ever be seen or noticed by humans. The number of AI agents that humans interact with will be very few, although they will loom large in importance to us. While the AIs we engage with will be rare statistically, they will matter to us greatly, and their trustworthiness will be paramount.

In order to win that trust from us, an outward-facing AI agent needs to connect with AI agents it can also trust, so a large part of its capability will be the skill of selecting and exploiting the most trustworthy AIs it can find. We can expect whole new scams, including fooling AI agents into trusting hollow agents, faking certificates of trust, counterfeiting IDs, and spoofing tasks. Just as in the internet security world, an AI agent is only as trustworthy as its weakest sub-agent. And since sub-tasks can be assigned many levels down, managing quality will be a prime effort for AIs.
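
A hedged sketch of that weakest-link principle: if every agent carries a trust score and delegates sub-tasks down a tree of sub-agents, then the effective trust of the whole chain is bounded by its lowest-scoring member, however many levels down it hides. The structure and numbers below are invented for illustration.

    # Illustrative only: an agent that delegates is no more trustworthy
    # than the least trustworthy sub-agent anywhere in its delegation tree.
    def effective_trust(agent: dict) -> float:
        """agent = {"tq": float, "subagents": [agent, ...]}"""
        subs = agent["subagents"]
        if not subs:
            return agent["tq"]
        # Weakest-link rule: take the minimum over the whole tree.
        return min(agent["tq"], min(effective_trust(s) for s in subs))

    booking_agent = {
        "tq": 0.98,                              # the polished front agent
        "subagents": [
            {"tq": 0.95, "subagents": []},
            {"tq": 0.99, "subagents": [
                {"tq": 0.61, "subagents": []},   # the hidden weak link
            ]},
        ],
    }
    print(effective_trust(booking_agent))        # 0.61

The front agent advertises a 0.98 score, but a weak sub-sub-agent two levels down caps what the whole delegation is actually worth.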

Assigning correct blame for errors and rectifying mistakes also becomes a hugely marketable skill for AIs. All systems – including the best humans – make mistakes. No system can be mistake-proof. So a large part of high trust is accountability in mending one’s errors. The most highly trusted agents will be those capable of fixing the mistakes they make (and trusted to do so), with sufficient smarts and power to make amends and get it right.

Ultimately the degree of trust we give to our prime AI agent — the one we interact with all day every day — will be a score that is boasted about, contested, shared, and advertised widely. In other domains, like a car or a phone, we take reliability for granted. But AI is so much more complex and personal than other products and services in our lives today that the trustworthiness of AI agents will be crucial and an ongoing concern. An agent’s trust quotient (TQ) may be more important than its intelligence quotient (IQ). Picking and retaining agents with high TQ will be very much like hiring and keeping key human employees.

However, we tend to avoid assigning numerical scores to humans. The AI agent system, on the other hand, will have all kinds of metrics we will use to decide which ones we want to help run our lives. The highest-scoring AIs will likely be the most expensive ones as well. There will be whispers of ones with nearly perfect scores that you can’t afford. Yet AI is a system that improves with increasing returns, which means the more it is used, the better it gets, so the best AIs will be among the most popular AIs. Billionaires use the same Google we use, and are likely to use the same AIs as us, though they might have intensely personalized interfaces for them. These, too, will need to have the highest trust quotients.

Every company, and probably every person, will have an AI agent that represents them inside the AI system to other AI agents. Making sure your personal rep agent has a high trust score will be part of your responsibility. It is a little bit like a credit score for AI agents: you will want a high TQ for yours, because some AI agents won’t engage with other agents that have low TQs. This is not the same thing as having a personal social score (like the one the Chinese are reputed to have). It is not your score, but the TQ score of your agent, which represents you to other agents. You could have a robust social-score reputation, but your agent could be lousy. And vice versa.
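
As a rough sketch of how that gating might work (the function and threshold values here are hypothetical), an agent could simply decline any counterparty whose TQ falls below the bar it sets for the risk level of the task at hand.

    # Hypothetical engagement gate: decline counterparties whose TQ
    # falls below the bar this agent sets for the task's risk level.
    def will_engage(required_tq: float, counterparty_tq: float) -> bool:
        return counterparty_tq >= required_tq

    # A risky task (filling a prescription) demands a higher bar
    # than a trivial one (fetching a weather report).
    print(will_engage(0.95, 0.90))   # False: refused for the risky task
    print(will_engage(0.50, 0.90))   # True: fine for the trivial one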

In the coming decades of the AI era, TQ will be seen as more important than IQ.



