The Technium

Making the Inevitable Obvious

The Need For a Body


Artificial intelligence, in the classical image, will probably require a body. To make an intelligence congruent with ours — so that we can partner with it — its intangible mind needs some kind of physical form able to interact with the real world. Otherwise the AI won’t understand such fundamental concepts as cause and effect, which we gain from everyday reality. Most AIs today lack the common sense of a two-year-old human. The toddler understands gravity, continuity, near/far, and cause and effect, none of which any AI today knows.

A body provides a constant stream of sensory data that gives context to the current moment. These sensations are needed to operate in real time. Real-time behavior forces such traits as anticipation and prediction, key aspects of intelligence. It is not necessary that the body be a stand-alone humanoid robot. Its body could be spread over many machines, with thousands of sensors.

This is a minority view. Many AI researchers believe that with enough data to draw upon, like the petabytes of real-world scanning done by automobiles driving around and robots working in factories, unembodied minds will be able to master the logic of the physical world.

There is another argument that an AI needs a body only once: once it figures out the world, it can migrate that learning into all kinds of intangible minds. It can learn cause and effect, and near/far, as it would learn other things. In this way it has a memory of a body, in the same way we could imagine some mutant human living entirely in its severed head. Here a body may only be a scaffolding for intelligence, needed to create it but not needed to operate it.

I am skeptical that a disembodied human mind would remain sane for long, so I side with the minority who believe that embodied AIs will do more of what we want than disembodied ones. They will be more useful to us (which is why we will maintain them) if they operate with ongoing common sense about how the world goes.

The body forms of AIs will be diverse. There will certainly be humanoid-shaped robots because they are the easiest for us to relate to and interface with. The more they mirror our form, the easier it will be to work with them. But embodiment can resemble a vehicle (Transformers!), a building, or a vast network of small things.

Not all intelligences will need a body for what they do. But the ones we engineer to be close to us, to partner in our daily work, to engage with us, to be comfortable with, will probably have a sensor-rich active body that is able to navigate and interact with the world on its own. As we do.



Wishful Worries


What we think will happen is more important than ever.

But pictures of the future are just fictions; they don’t exist. Yet never before in the history of our species have we devoted so much of our time, energy, and attention to things that we agree don’t exist. In the past, societies might have devoted large portions of their resources to fund sacrifices to gods they believed to exist (but did not). Their belief that the gods were real was important. Today we expend resources on visions we would like to exist, but have to admit don’t.

For example, big-budget Hollywood science fiction films are all about things that don’t exist. Thanos, Darth Vader, Klingons — none of these exist. Huge spaceships and warp speed and “beam me up” don’t exist either. But it is not just fiction. Almost any detailed picture of the future, by definition, is giving a lot of attention to something that does not exist. Advertisements for new products often depict versions that don’t exist in order to sell the meager version that does. I’m thinking of the AT&T TV commercials in the early 1990s showcasing the digital online world that they wanted to make. The refrain after each new wishful product was introduced was to claim it was a near inevitability. They announced that today you can’t achieve these desires, but soon “You Will.”

Every start-up company is spending its resources on a vision that does not exist yet. Sometimes that vision is deliberately set decades hence, and sometimes they hope it is only a few years away. The more ambitious the vision is — that is, the farther it is from what is real now — the more likely it will be seen as hype. Hype is wishful thinking with the intent to make it happen.

We don’t think of the desirable futures described in Star Trek as hype because the creators are not necessarily trying to make them real. They want to make everything plausible, but not actual.

A certain amount of hype is needed to bring into reality anything complex that does not currently exist. You need a bit of hype to bring a product to market, so it can be used widely. You have to imagine it in great detail, get others to see it, and then get others to understand its value, which may be hard when it is new. To do this requires some degree of hype. Most founders deeply believe in something that does not exist at the moment, and they want to believe in it in order to make it real.

Inappropriate levels of hype arise when there is only hype, when the wishful thinking goes way beyond what is actually made, or can be made. There is a fine line between appropriate hype and inappropriate hype because often what is possible can only be realized in retrospect.

One significant consequence of hype is that this wishful picture is often the picture that critics of new technology have. When we naturally begin to think about the downsides of new things, we tend to imagine them as realer and more developed than they are. In other words, we tend to worry about things that don’t exist (yet). In fact, most of the popular technologies that people are worried about are versions of things that may not exist for decades, if ever. These negative visions are as unreal as the positive hype visions. Some call these critiques of technology “wishful worries.” The critics are worried about something that the inventors wish would be. But ultimately the critics are spending attention and resources on things that don’t exist.

There are many examples of this, past and current. Entire academic departments are devoted to studying the ethical implications of genetic engineering of humans, such as making clones. But the evidence so far is that no human clones exist to study. Human clones are a wishful worry. Designer babies are a wishful worry.

Just as there are appropriate levels of hype, there are appropriate levels of wishful worry — basically hype with an inverse charge. We absolutely need to imagine not just what benefits might come with new things, but what harms might come. Wishful worry becomes problematic when we act on it, writing laws or setting policies, while we have no evidence of actual harm.

Right now there is a lot of wishful worry about AI. Critics are worried about AIs that don’t exist right now and may not exist for a long time. While I think it is inevitable they will exist in the future, the problem with them not existing right now is that we have no evidence to base our response on.

As of 2022, no car drives itself. No driver has lost their job because of AI. In fact no one anywhere has been fired because of AI. Right now robots cannot flip hamburgers. They can’t clean your toilet. As of today we have no data on what life is like with working robots. We can make up stories (and do) but they are only fiction.

However, fictions about the future are good and important. The role and influence of science fiction — both utopias and dystopias — have been immeasurable in shaping modern life. We know for sure that science fiction never gets it exactly right. It is an unreliable prediction machine. So we should not decide on policies based on fiction. We need to run our lives based on evidence of how inventions are actually used.

It remains a remarkable fact that at no time in history have we thought about things so long before they happen. AI and genetic engineering will be the most rehearsed arrivals in the history of our species. We will have been thinking about them, arguing about them, and debating them for at least a century before they finally appear.

We need to keep in mind that what we are rehearsing has been shaped by storytellers, hype, and wishful worries. The reality will be very different and will tell a different story.



Robots Will Make Us Better Humans


The paramount reason we put up with the churn of technology — always having to change and confront new problems — is that technology makes us better humans. It always has.

Our humanity is something we invented over the course of a million years. It’s our first and most important “tool”. In fact, we ourselves — humans — are the first wild creatures we domesticated, before wheat, corn, dogs, cows, and chickens. We’ve been modifying ourselves, and our genes, since day one. It’s true that most of our behavior is primitive, unchanged, ancient, and no different from that of our animal cousins. But not all. And it is these different bits that make us human.

The 8 billion people alive on the planet today are not the same beings who walked through the Rift Valley millennia ago. We’ve changed our bodies, our minds, and our society. We are more human.

When we domesticated fire by learning how to start it and manage it, we used it to cook our food. We took plants we had trouble digesting and figured out how to pre-digest them by cooking them with fire. Fire was among our very first tools. It was definitely a transforming invention. Over time this external stomach provided increased nutrition that helped our brains expand. It also changed our teeth and jaws.

Archeologists can identify the skulls of humans by our teeth and jaws. But we would not say our teeth are what define us, nor that they make us better. We might argue that having a bigger brain does in part define us. (We named ourselves Homo sapiens, the brainy animal.)

When we make a list of those things that distinguish us from animals (and from machines), that becomes our working definition of human. If we can expand those same qualities, maybe improve them, then they would make us better humans.

At our best, humans display these qualities: fairness, justice, mercy, ingenuity, self-consciousness, long-term thinking, deductive logic, intuition, transcendence, gratitude, imagination, creativity, and most important, empathy.

Over the span of many centuries, we have created systems that help us improve in those categories. We invented cities, societies, laws, and civilizations to build up trust, fairness, long-term thinking, and creativity. In that time we expanded our circle of empathy. We’ve gone from caring primarily about our clan, to our tribe, to our nation, to other species, to a planet.

We are going to accelerate this improvement with new inventions, new technologies.

  1. As we engineer creativity and ingenuity into AIs, they will force us to refine and develop our own creativity and ingenuity. We will gain new understandings of how these traits work (in order to synthesize them), and that will spur us to refine what we do.
  2. As we engineer ethics and morality into AIs and robots we will come to see that our own ethics and moral notions are shallow and inconsistent. Teaching robots will be like teaching our children; it will make us better at the subject. We’ll have to upgrade our own notions and practices.
  3. As we invent new kinds of beings, perhaps even those with some degrees of self-awareness, we will continue to expand our empathy toward artificial minds.
  4. We will continue to weave ourselves together with communications, collaborating in the millions, which will create better ideas of equity and opportunity.
  5. New technologies of psychedelics and brain-computer interfaces will enable new kinds of transcendence and spirituality.

Rather than diminish our humanity, technology is on course to keep improving it. AIs and robots will make us better people (on average).



The Religions of Aliens


In all taxonomies, there are lumpers and splitters. Lumpers tend to lump categories together, to find similarities, to say “these are really the same,” while splitters tend to say, no, these are different and need to be counted separately.  I’ve been thinking about the taxonomy of religions on Earth in order to think about religions on other planets. In comparative religion studies, there are lumpers and splitters. The lumpers say, there are just a few basic religious beliefs that are shared by all religions, and the splitters say, no, there is a very wide diversity of beliefs born out of a wide diversity of cultures and environments, and those differences matter. Both are talking about religions on Earth. What about religions on distant civilized planets?

We know so little about what is possible for alien beings that we can’t even begin to theorize. The culture of intelligent life elsewhere might be so drastically different that we can’t speculate with any confidence. A better exercise might be a counterfactual: what kinds of religions might have arisen among our own species on this planet if we re-ran the tape of history? What if civilization began in the period before the last ice age, in a different river valley system? What might the course of religions look like? That’s a start in imagining alien religions, by imagining aliens not that different from us, but with a different set of initial conditions.

I find this exercise useful not because I expect we will encounter aliens in the near future; that is highly unlikely. It is helpful, first, because this type of self-distancing is handy in looking at our present set of religions and religious assumptions. Counterfactuals help illuminate outside conditions and other driving forces that might form the present religious regimes and continue into the future. Second, and more importantly, although contact with aliens from another planet is remote and unlikely, it is near certain we will create artificial aliens, otherwise known as AIs and robots, who might exhibit religious leanings. This exercise might hint at what those leanings could be. Although I expect our created AIs and robots to be aliens — that is, not human — we will constantly try to make them more human-like, so even if they are aliens, they may wind up more like us than, say, the AIs and robots produced by another galactic civilization. (A good question for interstellar AI experts: do the species of AI and robots tend to converge (lump) or diverge (split) throughout the galaxy?)

There seems to be an axial age in human history when most of the major religions started. Buddhism, Zoroastrianism, Confucianism, Taoism, and Judaism all started in roughly the same era of humanity’s history, approximately 3,000 years ago. Some scholars interpret this not as the first period of innovation in religious ideas so much as the first dissemination of religions. This was the age when global trading, money, empires, and writing first appeared, which made the spread of a few ideas possible. Suddenly the same belief could be shared by millions. It was the dawn of universal religions, beyond local religions. Nonetheless, while there was some geographical overlap, each of the axial religions was an independent creation. Besides these major religions there were hundreds of less durable gods, from Greek, Roman, Viking, Hindu, and Native American religions to shamanic and voodoo beliefs, that also have ancient roots.

So how often do religions re-invent the same thing? Popularizers (and lumpers) like Joseph Campbell extract the commonality among religions of the world. He might say there is one belief with a thousand faces (The Hero with a Thousand Faces). I too tend to be a lumper, and from my vantage I think most shamanistic religions are very similar. Generally, if you are familiar with the perspective and rituals of one shamanic religion, you’d get the essence of the rest, although the details vary tremendously. In fact, it is the details that make them beautiful. But shifting from a shamanic religion to one based on written scriptures is a big gap with fewer commonalities. So even a lumper like me recognizes that we have distinct species of religions by now.

As a lumper I think that if a universal religion had originated in a valley in the New World before the last ice age, maybe in an alternative history where agriculture got its flying start in the Amazon or the Mississippi, we’d soon have a monotheistic religion with a sky God. But instead of relying on harsh desert wisdom it would rely on lush jungle wisdom. The logic of plants and gardeners would rule instead of the logic of animals and shepherds. The great battles defending God would not be played out between armies on vast plains, but inside the skulls of individuals. In this religion, instead of a fixation on blood, there would be a fixation on identity and names.

My guess is that this New World religion would invent principles similar to those of its Old World counterpart. As different as some aspects of the New World were, its prehistory would feature mostly the same conditions: early agriculture, the transition from hunting and gathering, the very first cities and the problems of urbanity, and the new power of writing disrupting a previously oral culture. All that would form similar notions of God and the afterlife.

If we had to invent a religion today, from scratch, one rooted in today’s high technology, it could fork from our familiar spiritual paths. This is another way to speculate on the religions of aliens. Imagine robots had a religion. What would they want to believe? I don’t have any answers, and it will be a long time before asking them would get a meaningful answer. Nonetheless, I think this is a productive pursuit that could help prepare us if we should ever contact other civilizations. They will surely have their own notions of where they ultimately came from.



Future Scarcities


What kind of things might be scarce in a future world filled with advanced AIs and advanced science that could probably reverse-engineer most things? It is hard to think of something we could not synthesize.

In the sci-fi story Dune, the empire revolves around melange spice, which can give users prophetic abilities. Use of the spice is the only way to navigate through the stars. Therefore the planet where the spice is mined becomes a battleground, because the spice is so valuable.

Why can’t this spice be synthesized? If a civilization could make intergalactic travel possible, surely it could replicate the spice’s chemistry and physics. In any coherent, plausible future, it would have. It would synthesize the substance near where it was needed.

If it isn’t melange spice, is there anything, say, a thousand years from now, that would be worth traveling to another planet to retrieve? There are currently plans to mine asteroids. The idea is that the concentration of metals in them is so high that it is worth the cost of space construction and shipping, making it more economical than mining and refining on Earth. It is unclear if this will be true.

But what if you could hop around planets in our galaxy as easily as we hop around this planet? Is it possible there could be anything on a planet that might make it valuable, like Dune? That is, the conditions on the planet might generate something that could not (easily) be generated in a lab somewhere else?

Perhaps there is something about a planet’s position in space that is scarce? Maybe a nearby black hole is needed to produce some form of natural Unobtainium, which is beyond most civilizations’ skill set. So this material becomes an intergalactic scarcity. Except to the more advanced civilizations, to whom it is trivial because they have figured out how to mimic the conditions of a black hole. So if it is trivial, then why not manufacture it cheaply and make it widely available? If space hopping were easy, this material would soon be in stores all over the galaxy.

Possible scarcities might be found in:

– Materials needing rare planetary forces to make. Star-level energies?

– Materials requiring super levels of intelligence/knowledge, and kept somewhat secret

– Materials requiring vast collaborations of many minds to produce

– Living species that accomplish things, like spinning silk, that are not hard to synthesize but are rare and collectible.

– Materials requiring vast amounts of time to produce. Perhaps a natural substance needs a billion years to ripen, and this may be found in only a few places.

– Construction projects that need a long time to create, and so are hard to replicate. The chief hurdle in constructing a Dyson sphere is not the energy or materials or even the knowledge needed. It may be the very long time (millennia?) needed. A society can change its mind midway, or even forget why it is building it. I wonder if the galaxy is strewn not with the ruins of vanished civilizations but with the ruins of abandoned grand projects — half a Death Star, half a Dyson sphere. A complete Dyson sphere may be extremely valuable.

– Very odd physical circumstances might enable a service or a material — like, say, a wormhole — that could be controlled by a society. The planet happens to sit next to a wormhole, so anyone who wants to use this wormhole needs to stop at their planet first. This combination could be rare.

The most obvious answer to this question is that the holy grail may be the very conditions for higher life forms. Perhaps Goldilocks planets are rare in the universe. In other words, a planet with sufficient conditions to breed intelligent life — like the planet Earth — may be very rare. Life of some sort may be common in the universe, but highly developed intelligent life may be rare, because a planet with the right gravity, the right amount of water, the right distance from a star, the right orbit, the right magnetic field, and so on may be extremely rare; each additional requirement multiplies the odds against it. In which case this Goldilocks planet itself is the prize.

Beyond finding a whole Earth-like planet, there may not be much reason for space hauling. If no material is scarce, then there won’t be much interstellar freight. It seems the main attraction will not be materials but either planets or minds. It is possible that habitable planets might produce beings, ideas, and civilizations that are found nowhere else in the universe. Therefore there could be all kinds of technologies found only on certain planets. So if there were space hopping, then sailing around the universe looking for new technologies might be a viable enterprise.

The thing about advanced technologies is that they may be hard to export unless you export the whole system, and maybe the cultural system, that supports them. If your tech hunter found a planet with really cool inter-dimensional travel gear, you might need a whole colony of the beings to run it, and to explain it, and maybe to transfer it (or not).

Technological and cultural products may be the only scarcities in the galaxy. The only reason to hop around the galaxy would be to search for some new bizarre technology that your own civilization might never dream of making, no matter how long it lived. It literally could not think of it, because the very specific conditions that set of ideas needs are present only on that specific weird planet. Ideas are the only scarcities.





© 2022