The Technium

Robots Will Make Us Better Humans


The paramount reason we put up with the churn of technology — always having to change and confront new problems — is that technology makes us better humans. It always has.

Our humanity is something we invented over the course of a million years. It’s our first and most important “tool”. In fact, we ourselves — humans — are the first wild creations we domesticated, before wheat, corn, dogs, cows and chickens. We’ve been modifying ourselves, and our genes, since day one. It’s true that most of our behavior is primitive, unchanged, ancient, and no different from that of our animal cousins. But not all of it. And it is these different bits that make us human.

The 8 billion people alive on the planet today are not the same beings who walked through the Rift Valley millennia ago. We’ve changed our bodies, our minds, and our society. We are more human.

When we domesticated fire by learning how to start it and manage it, we used it to cook our food. We took plants we had trouble digesting and figured out how to pre-digest them by cooking them with fire. Fire was among our very first tools. It was definitely a transforming invention. Over time this external stomach provided increased nutrition that helped our brains expand. It also changed our teeth and jaws.

Archeologists can identify skulls of humans by our teeth and jaws. But we would not say our teeth are what define us, nor that they make us better. We might argue that having a bigger brain does in part define us. (We named ourselves Homo sapiens, the brainy animal.)

When we make a list of those things that distinguish us from animals (and from machines), that becomes our working definition of human. If we can expand those same qualities, and maybe improve them, then they will make us better humans.

At our best, humans display the following qualities that are not found in abundance in other animals: fairness, justice, mercy, ingenuity, self-consciousness, long-term thinking, deductive logic, intuition, transcendence, gratitude, imagination, creativity, and most important, empathy.

At one time our ancestors did not possess many of these qualities, but now we do. So this is a work in progress, and there is no evidence we are done progressing along these dimensions.

Some of these attributes are held within us as individuals and some are held collectively and thus require a society to surface them. Over the span of many centuries, we have created systems that help us improve in those categories. We invented cities, societies, laws, and civilizations to build up trust, fairness, long-term thinking, and creativity. In that time frame we expanded our circle of empathy. We’ve gone from caring primarily about our clan, to caring about our tribe, then to caring about our nation, and recently to caring about other species, and maybe to caring about the planet.

We are accelerating this improvement with new inventions, new technologies. As we engineer ethics and morality into AIs and robots, we will come to see that our own ethics and moral notions are shallow and inconsistent. Teaching robots will be like teaching our children; it will make us better at the subject.

This is already happening. As we invent self-driving cars, we need principles to guide their driving decisions. Should a self-driving car favor the safety of its riders or the safety of a pedestrian? During a potential accident, should the car swerve away from a pedestrian at the risk of hurting the passenger, or should it swerve toward the pedestrian to ensure the safety of the passengers? You may recognize this as the classic Trolley Problem in ethics. You have to choose, yet we find the dilemma nearly impossible to answer, so we ignore it and leave what we do during an accident up to chance. We pretend we have no autonomy at that moment. We don’t have that privilege in robotics; we have to choose a priority beforehand, and so we are required to make that ethical choice now, as we program the cars. In this way, we are requiring that AIs and robots have better ethics than we do. And that is just one of many ethical decisions engineers have to settle on the way to giving full autonomy to a self-driving car. Should the car ever exceed the speed limit? What should it do in a minor accident if it is carrying a paying rider? There are hundreds of quandaries that engineers need to decide, and they are deciding them today. Incidentally, with respect to the Trolley Problem, self-driving cars are generally being taught to favor the passengers. After all, that is the choice anyone who purchases a self-driving car will want.
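To make that point concrete, here is a minimal, purely hypothetical sketch of what “choosing a priority beforehand” looks like once it has to live in code. None of these names, numbers, or risk estimates come from any real autonomous-vehicle system; the sketch only illustrates that the ethical choice must be written down explicitly rather than left to chance.

```python
# Hypothetical illustration only — not any real vehicle's code.
# The point: the passenger-versus-pedestrian priority must be an explicit,
# pre-decided parameter; the software cannot "ignore the dilemma".

from dataclasses import dataclass
from enum import Enum


class Priority(Enum):
    PASSENGER_FIRST = "passenger_first"
    PEDESTRIAN_FIRST = "pedestrian_first"


@dataclass
class Maneuver:
    name: str
    passenger_risk: float   # estimated risk (0.0-1.0) to people inside the car
    pedestrian_risk: float  # estimated risk (0.0-1.0) to people outside the car


def choose_maneuver(options: list[Maneuver], priority: Priority) -> Maneuver:
    """Pick a maneuver according to an explicit, pre-decided ethical priority."""
    if priority is Priority.PASSENGER_FIRST:
        # Minimize passenger risk first; pedestrian risk breaks ties.
        return min(options, key=lambda m: (m.passenger_risk, m.pedestrian_risk))
    # Otherwise minimize pedestrian risk first; passenger risk breaks ties.
    return min(options, key=lambda m: (m.pedestrian_risk, m.passenger_risk))


if __name__ == "__main__":
    options = [
        Maneuver("swerve away from pedestrian", passenger_risk=0.4, pedestrian_risk=0.1),
        Maneuver("swerve toward pedestrian", passenger_risk=0.1, pedestrian_risk=0.6),
    ]
    # Someone has to commit to a priority here, long before any accident happens.
    print(choose_maneuver(options, Priority.PASSENGER_FIRST).name)
```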

As we elevate AIs and robots with better and more deliberate choices than we have made in the past, we will be educated and pressured to elevate our own ethics. We’ll have to upgrade our own notions and practices to at least match our mind-children.

We are seeing a similar dynamic with games. On the fateful day when the IBM supercomputer Deep Blue first beat the best human chess player, many observers announced it was the end of chess for humans. In the succeeding years, computer chess apps became so good and so cheap that the chess program on your phone can crush almost any human player. Yet human chess playing is on the rise, and most importantly, computer chess programs have taught humans new ways to play chess. In every good sense, AIs have helped humans become better chess players.

The same goes for playing Go. After thousands of years of human play, computer Go programs are teaching humans how to play Go in more advanced ways than humans had ever thought of.

We can certainly expect AIs to teach us how to be better scientists and maybe even better artists. As we engineer creativity and ingenuity into AIs, they will force us to refine and develop our own creativity and ingenuity.

But as we fill the world with AI agents and AI companions, it may be that AIs can also teach us how to be better friends and better companions. We will want to engineer AIs to be the best friends, therapists, companions, teachers, and mentors we can possibly imagine, and if they succeed, this in turn may surround us with great examples and thus elevate our own behavior for the better. In the best-case scenario, imagine growing up with multiple AI friends, always nearby, that ceaselessly exhibit ideal friendship behavior, and AI teachers that are always patient, kind, understanding, and empathetic. Those qualities can easily become what you aspire toward. (The negative traits, of course, would manipulate us in the other direction.) We have the option of employing AI to help make us better friends and better companions.

For every trait we program into our machines, we have the option of making them not only match us but exceed us. We are deliberately creating them to be smarter than us, but also more ethical, more empathetic, friendlier, and more creative — better than us.

It is not inevitable, but as we make our mechanical and digital children better than us, and surround ourselves with their multitudes so that they are always with us, we have the option of using them to elevate ourselves.

There is a good chance AIs and robots will make us better humans. Rather than diminish our humanity, technology is on course to keep improving it.
