The Technium

What Everyone Knows


Most of “what everyone knows” is true. Most of our knowledge as modern human beings is shared with many others. Everyone knows the capital of France is Paris, and it is true. Everyone knows how many letters are in the alphabet, the color of stop lights, the shape of a rainbow. What everyone knows is usually correct. However, sometimes what everyone knows is wrong. Everyone knew humans could not fly, or build 100 stories into the air, or run a company renting out your extra bedroom. It turns out that what everyone knows is sometimes wrong, and it is very hard to tell the difference.

So we rely on experts. Usually experts are correct. Experts spend their lives getting to the bottom of one or two things, and because of that they really know the subject. Generally, what they think is true can be trusted. But sometimes experts are wrong. And very often there will be another expert who holds a different, even contrary, professional opinion on the same subject. So we non-experts are left having to decide which expert to believe.

There are two areas where experts are not 100% reliable: predicting the future, and when things are moving very fast. At Wired I was involved in a project called Reality Check, in which we surveyed informed people about the likely dates when a particular invention would arrive. Say we wanted to know when there would be laser dental drills. In that case dental experts all predicted that laser drills were far in the future, while non-dental futurists predicted them sooner. In general, experts were much more conservative about future inventions *in their field of expertise.* They were more aware of the problems and challenges than anyone else, and so found it hard to see improbable breakthroughs. In the years since those predictions, in the cases where the imagined inventions have actually been invented, the experts in that field were usually wrong about the dates.

The other realm where experts are often wrong is when things are moving fast. Experts, at least good ones, rely on the consensus of science, which takes time. There are provisional theories to weigh, experiments to verify, data to sort, and then pieces to integrate with the rest of science. When things are fast and new, there is not enough time for that consensus to build.

We are in that kind of time now, nearly all the time. In the early days of the Covid-19 virus, things were moving very fast. There was great ignorance and little certainty. While thousands of experiments took place, a consensus took time to emerge, and on many aspects of the virus it is still just emerging. That means that for every expert out there, there is an equally credentialed expert who disagrees on some point. Artificial intelligence is a very fast-moving frontier, and what (and who) to believe about it is hard for a non-expert to decide. Crypto is another big field that seems to contain conflicting experts. For the lay public it is very hard to know who to believe.

In the absence of consensus during turbulent times, humans turn to unorthodox ideas. This can be dangerous, because a lot of unorthodox ideas are very wrong, some are conspiratorial, and many are just silly. But we have to be open to contrarian ideas because sometimes what everyone knows is wrong. At the same time, when experts begin to agree, it is likely that “what everyone knows” is true. Two years into the Covid pandemic, we know A LOT about the virus, how it is transmitted, and what helps. Our ignorance about it is still vast, but we have enough consensus among experts to declare some things true.

We don’t have very highly evolved mechanisms for the early stages of knowledge, when things are moving too fast for the clumsy machinery of science to kick in and deliver a consensus of experts. We might use the Covid example to see what worked best in the first year of surprises. If we rank the first experts to weigh in on Covid, handing out points to those who came closest to the consensus that emerged, say, two years later, we might learn how to identify reliable experts early. Is there some trait or venue or method they used that might be transferable to other areas, to help us identify experts we can rely on? I am suggesting that fast science might run differently than normal science. There might be things scientists themselves can do to increase the likelihood of being right during very fast change. We know for sure that, given the nature of science, of good science, not everyone can be right. The system has to be open to unorthodox ideas. Is there a way to arrive at a proto-consensus quickly, without ruling out the real possibility that what everyone knows is wrong? Being able to do fast science would be a great civilizational skill.
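To make the scoring idea a little more concrete, here is a minimal sketch, assuming each expert's early claim could be reduced to a single number (a date, a rate) and compared to the consensus value the field settles on two years later. Every name and figure below is hypothetical, for illustration only.

```python
# Toy sketch: rank early experts by how close their first public estimates
# came to the consensus that emerged later. All data here is made up.

def rank_by_closeness(early_estimates, later_consensus):
    """Return (expert, absolute error) pairs, best scorers first."""
    scored = [(expert, abs(value - later_consensus))
              for expert, value in early_estimates.items()]
    return sorted(scored, key=lambda pair: pair[1])

# Hypothetical early estimates of a single quantity (say, a percentage),
# alongside the consensus figure reached two years later.
early_estimates = {"expert_a": 40, "expert_b": 15, "expert_c": 25}
later_consensus = 22

for expert, error in rank_by_closeness(early_estimates, later_consensus):
    print(expert, "was off by", error)
```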



