The Technium

Making the Inevitable Obvious

Type 2 Growth


While technology has gotten us into this climate change mess, only technology can get us out of it. Only the technium (our technological system) is “big” enough to work at the global scale needed to fix this planetary-sized problem. Individual personal virtue (bicycling, using recycling bins) is not enough. However, the worry of some environmentalists is that technology can only contribute more to the problem and nothing to the solution. They believe that tech is incapable of being green because it is the source of relentless consumerism at the expense of diminishing nature, and that our technological civilization requires endless growth to keep the system going. I disagree.

In English there is a curious and unhelpful conflation of the two meanings of the word “growth.” The most immediate meaning is to increase in size, to increase in girth, to gain weight, to add numbers, to get bigger. In short, growth means “more.” More dollars, more people, more land, more stuff. More is fundamentally what biological, economic, and technological systems want to do: dandelions and parking lots tend to fill all available empty places. If that is all they did, we’d do well to worry. But there is another equally valid and common use of the word “growth” to mean develop, as in to mature, to ripen, to evolve. We talk about growing up, or about our own personal growth. This kind of growth is not about added pounds, but about betterment. It is what we might call evolutionary or developmental, or type 2 growth. It’s about using the same ingredients in better ways. Over time evolution arranges the same number of atoms in more complex patterns to yield more complex organisms, for instance producing an agile lemur the same size and weight as a jellyfish. We seek the same shift in the technium. Standard economic growth aims to get consumers to drink more wine. Type 2 growth aims to get them to drink not more wine, but better wine.

The technium, like nature, excels at both meanings of growth. It can produce more, rapidly, and it can produce better, slowly. Individually, corporately, and socially, we’ve tended to favor functions that produce more. For instance, to measure (and thus increase) productivity we count up the number of refrigerators manufactured and sold each year. More is generally better. But this counting tends to overlook the fact that refrigerators have gotten better over time. In addition to making cold, they now dispense ice cubes, or self-defrost, and they use less energy. And they may cost less in real dollars. This betterment is real value, but it is not accounted for in the “more” column. Indeed, a tremendous amount of the betterment in our lives brought about by new technology is difficult to measure, even though it feels evident. This “betterment surplus” is often slow-moving, wrapped up with new problems, and usually appears in intangibles, such as increased options, safety, choices, new categories, and self-actualization, which, like most intangibles, are very hard to pin down. The benefits only become obvious when we look back and realize what we have gained. Part of our growth as a civilization is moving from a system that favors more barrels of wine to one that favors the same barrels of better wine.

A major characteristic of sapiens has been our compulsion to invent things, which we have been doing for tens of thousands of years. But for most of history our betterment levels were flatlined, without much evidence of type 2 growth. That changed about 300 years ago when we invented our greatest invention: the scientific method. Once we had hold of this meta-invention we accelerated evolution. We turned up our growth rate in every dimension, inventing more tools, more food, more surplus, more population, more minds, more ideas, more inventions, in a virtuous spiral. Betterment began to climb. For several hundred years, and especially for the last hundred years, we have experienced steady betterment. But that betterment (the type 2 growth) has coincided with a massive expansion of “moreness.” We’ve exploded our human population by an order of magnitude, we’ve doubled our living space per person, and we have rooms full of stuff our ancestors did not. Our betterment, that is, our living standards, has increased alongside the expansion of the technium and our economy, and most importantly the expansion of our population. There is obviously some kind of feedback loop in which increased living standards enable yearly population increases and more people create the technology for higher living standards, but the causation is hard to parse. What we can say for sure is that as a species we don’t have much experience, if any, with increasing living standards and fewer people every year. We’ve only experienced increased living standards alongside increased population.

By their nature demographic changes unfold slowly because they run on generational time. Inspecting the demographic momentum today, it is very clear that human populations are headed for a reversal at the global scale within the next generation. After a peak population around 2070, the total human population on this planet will start to diminish each year. So far, nothing we have tried has reversed this decline locally. Individual countries can mask the global decline by stealing residents from each other via immigration, but the global total is what matters for our global economy. This means it is imperative that we figure out how to shift more of our type 1 growth to type 2 growth, because we won’t be able to keep expanding the usual “more.” We will have to perfect a system that can keep improving and getting better with fewer customers each year, smaller markets and audiences, and fewer workers. That is a huge shift from the past few centuries, during which every year there has been more of everything.

In this respect “degrowthers” are correct in that there are limits to bulk growth — and running out of humans may be one of them. But they don’t seem to understand that evolutionary growth, which includes the expansion of intangibles such as freedom, wisdom, and complexity, doesn’t have similar limits. We can always figure out a way to improve things, even without using more stuff — especially without using more stuff! There is no limit to betterment. We can keep growing (type 2) indefinitely.

The related concern about the adverse impact of technology on nature is understandable, but I believe it can also be solved. The first phases of agriculture and industrialization did indeed steamroll forests and wreck ecosystems. Industry often required colossal structures for high-temperature, high-pressure operations that did not work at human or biological scale. The work was done behind foot-thick safety walls and chain-link fences. But we have “grown.” We’ve learned the importance of the irreplaceable subsidy nature provides our civilizations, and we have begun to invent more suitable technologies. Industrial-strength nuclear fission power will eventually give way to less toxic nuclear fusion power. The work of this digital age is more accommodating to biological conditions. As a symbolic example, the raw ingredients for our most valuable products, like chips, require ultra cleanliness, and copious volumes of air and water cleaner than we’d ever need ourselves. The tech is becoming more aligned with our biological scale. In a real sense, much of the commercial work done today is done not by machines that could kill us, but by machines we carry right next to our skin in our pockets. We continue to create new technologies that are more aligned with our biosphere. We know how to make things with fewer materials. We know how to run things with less energy. We’ve invented energy sources that reduce warming. So far we’ve not invented any technology that we could not make greener.

We have a ways to go before we implement these at scale, economically, with consensus. And it is not at all inevitable that we will muster the political will to make these choices. But it is important to realize that the technium is not inherently contrary to nature; it is derived from evolution and thus inherently capable of being compatible with nature. We can choose to create versions of the technium that are aligned with the natural world. Or not. As a radical optimist, I work towards a civilization full of life-affirming high technology, because I think this is possible, and because imagining “what could be” gives us a much greater chance of making it real.

[This essay began as a response in an email interview with Noah Smith, published in his newsletter, here. His newsletter is great; I am a paying subscriber.]



Weekly Links, 02/23/2024




Rights / Responsibilities


We tend to talk about rights without ever mentioning their corresponding duties. Every right needs an obligation to support it. At the very least we have a duty to grant the same rights to others, but more consequentially, rights have trade-offs: things we have to do or surrender in order to earn the right. The right to vote entails the duty to pay taxes. The right of free speech entails the duty of using it responsibly: not inciting violence, harm, or insurrection.

So it is with rights in our information age. Every cyber right entails a cyber duty.

For example, security. Modern security is an attribute of systems. Your thing, your home, your company cannot maintain security alone, outside of a system. Once a digital device is connected, everything it connects with needs to be secure as well. In fact, any system will only be as secure as the weakest part of that system. If your part is 99% secure, but your colleague is only 90% secure, you are actually only 90% secure as well. This small difference is huge in security. There are documented cases where an insecure baby monitor in a household system became the back door for unauthorized entry into that family’s network. In this way, the lax security of one piece sets the security for the whole system.
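The weakest-link arithmetic above can be written as a toy model. This is only an illustrative sketch; the component names and numbers are hypothetical, not drawn from any real audit:

```python
# Toy model of weakest-link security: a connected system is only as
# secure as its least secure component, i.e. the minimum across parts.
def system_security(parts: dict[str, float]) -> float:
    """parts maps component name -> probability it resists attack (0.0-1.0)."""
    return min(parts.values())

home_network = {
    "laptop": 0.99,
    "phone": 0.99,
    "baby_monitor": 0.90,  # one lax device sets the level for everyone
}
print(system_security(home_network))  # 0.9
```

Under this model, improving your own laptop from 99% to 99.9% changes nothing; only raising the baby monitor moves the system number, which is exactly why every part can rightfully demand that all parts level up.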

Therefore, every part of a system has a duty to maintain the required level of security. Since one part’s security in part determines and impacts all parts’ security, every part of the system can rightfully demand that all parts level up. It is therefore the duty of each part (person, organization) to maintain the proper security. It is not hard to imagine protocols that say you and your devices can’t join a network unless you can demonstrate you have proper security. In short, you have the right to connect to the public commons (without permission), but you have the duty to ensure the security of your connection (and of the commons as a whole).

This is not much different from what most nations say, which is that you have a right to use any public road, but you must demonstrate you are responsible for its safety and security by passing a driver’s test. Your right is mobility on the roads; your duty is to drive responsibly (no drinking!), and to get a license to prove it.

There are other rights/duties animated by digital tech, such as your identity. You don’t really need a name yourself. Your name is most useful to other people, so they can identify you, and in turn, trust you. Like your face, your name is at once the most personal thing about you and the most public thing about you. Your name and face are both indisputably “yours” and also indisputably in the commonwealth. Our faces are so peculiarly public that we are spooked by people who hide their faces. And legally, many activities require that we keep our faces public, like getting a license, flying on a plane, entering a secure facility, or voting. We have a right to look however we like, but we also have an obligation to keep our face public.

Names are similar, both intensely personal and outright public. We can try to hide our names behind pseudonyms and anonymity, but that diminishes some of our power to effect change in the world, and also reduces trust from others. Privacy is part of a tradeoff. In order to be treated as an individual, with respect, we have to be transparent as an individual. We have a history, we have a story, we have context, we have needs and talents. All this is wrapped up in our identity. So in order to be treated as an individual we have to convey that identity. Personalized individuality is a right that demands a duty of transmitting our identity. We also have (and must have) the right to remain opaque, unknown, and hidden, but the tradeoff for that right is the duty to accept that we will be treated generically, not as an individual but as a number. The right of obscurity entails the duty of silence; the right of individuality entails the duty of transparency.

As more of our lives are connected constantly, the distinction between digital rights and rights in the rest of the world tends to vanish. I don’t find it useful to separate them. However, there will still be new “rights” and “responsibilities” arising from new technology. They will first appear in high tech, and then as that tech becomes the norm, so will the rights. Currently, generative AIs demand we think about the rights, and responsibilities, of referencing a creation. The right of copy, or copyright, addressed the need to govern copies of a creation. Generative AI does not make copies, so copyright norms are helpless here. We realize now that there might be a need to articulate a new right/duty around training an intelligence. If my creation is referenced to train a student, or to train an AI (that is, an agent that will go on to create things itself, influenced by my work), should I not receive something for that? Should I have any control over what is made from my influence? I can imagine an emerging system in which any creation that is granted copyright is duty-bound to be available for reference training by others, with the corresponding right for anyone to train upon a creation and a duty to pass some of the credit (status and monetary) back to its creators.

The mirrorworlds of AR and XR will necessitate further new pairs of rights/responsibilities around the messy concerns of common and shared works, for the mirrorworld is 100% a commons. What do I get from a collaborative creation, and what do I owe the others who contribute, when those boundaries are unclear? Because every new technology generates new possibilities, I expect it to also produce new pairs of duties and rights.



Weekly Links, 02/16/2024




The Scarcity of the Long-Term


The chief hurdle in constructing a Death Star is not the energy, materials, or even knowledge needed. It’s the long time needed. A society can change its mind midway over the centuries, and simply move on. The galaxy is likely strewn with abandoned Half-a-Death Stars.

Despite the acceleration of technological progress, indeed because of it, one of the few scarcities in the future will be a long attention span. There will be no shortage of amazing materials that can do magic, astounding sources of energy to make anything happen, and great reserves of knowledge and know-how to accomplish our dreams. But the gating factor for actually completing those big dreams may be the distractions that any society feels over the centuries. What your parents and grandparents found important you may find silly, or even embarrassing.

To build something that extends over multiples of individual lifespans requires a very good mechanism to transmit the priority of that mission. It is much easier to build a rocket that will sail 500 years to the nearest star than it is to ensure that the future generations of people born on board that 500-year rocket maintain the mission. It is very likely that before it reaches the 250-year halfway point, the people on board will turn it around and head back to a certain future. They did not sign up for this crazy idea!

It is certain that 250 years after the launch of that starship, the society that made it will have wondrous new technologies, new ideas, maybe even new ways to explore, and it could easily change its mind about the importance of sending flesh into space, or decide to explore other frontiers. That is not even to mention how the minds of those onboard could also be changed by new inventions in 250 years.

If left alone, an advanced civilization could, over many millennia, invent AIs and other tools that would allow it to invent almost any material it could imagine. There would be no resource in the universe it could not synthesize at home. In that sense, there would be no material or energy scarcity. Perhaps no knowledge scarcity either. The only real scarcity would be a long attention span. That is not something you can buy, or download. You’d need some new tools for transmitting values and missions into the future.

There’s an ethical dilemma around transmitting a mission into the far future. We don’t necessarily want to burden a future generation with obligations they had no choice in; we don’t want to rob them of their free will to choose their own destinations. We DO want to transmit them opportunities and tools, but it is very hard to predict from far away which are the gifts and which are the burdens. There’s a high probability they could come to regard our best intentions as misguided, and choose a very different path, leaving our mission behind.

In this way it is easy to break the chain of a long-term mission, one that is far longer than an individual lifespan, and perhaps even longer than a society’s lifespan. It may turn out that the most common scarcity among advanced galactic civilizations is a long-term attention span. Perhaps vanishingly few ever complete a project that lasts as long as 1,000 years. Perhaps few projects remain viable after 500 years.

I am asking myself, what would I need to hear and get from a past generation to convince me to complete a project they began?



Hill-Making vs Hill-Climbing


There are two modes of learning, two paths to improvement. One is to relentlessly, deliberately improve what you can already do, by trying to perfect your process. Focus on optimizing what works. The other way is to create new areas that can be exploited and perfected. Explore regions that are suboptimal with the hope you can make them work (and sometimes they will), giving you new territory to work in. Most attempts to get better are a mix of these methods, but in their extremes these two functions, exploit and explore, operate in different ways and require different strategies.

The first mode, exploiting and perfecting, rewards efficiency and optimization. It has diminishing returns, becoming more difficult as fitness and perfection increase. But it is reliable and low risk. The second mode, exploring and creating, on the other hand, is highly uncertain, with high risks, yet there is less competition in this mode and the yields of this approach are, in theory, unlimited.
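This exploit/explore trade-off has a standard algorithmic formalization, the epsilon-greedy rule. A minimal sketch (the function name and the 10% exploration rate are illustrative assumptions, not from the essay):

```python
import random

def choose(estimates: list[float], epsilon: float = 0.1) -> int:
    """Pick an option index: usually exploit the best-known option,
    occasionally explore a random one."""
    if random.random() < epsilon:
        # Explore: risky, usually wasteful, but the only route to new territory.
        return random.randrange(len(estimates))
    # Exploit: reliable, low risk, diminishing returns.
    return max(range(len(estimates)), key=estimates.__getitem__)

choose([1.0, 5.0, 2.0], epsilon=0.0)  # pure exploitation: returns index 1
```

The single `epsilon` parameter makes the tension explicit: set it to zero and you perfect what you have but never find anything new; set it to one and you wander forever without cashing in.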

This trade-off between exploiting and exploring is present in all domains, from the personal to the biggest systems. Balancing the desire to improve your own skills against being less productive while you learn new skills is one example at the personal level. The tradeoff between investing heavily in optimizing your production methods and investing in new technology that will obsolete your current methods is an example at the systems level. This particular tradeoff in the corporate world is sometimes called the “innovator’s dilemma.” The more efficient and excellent a company becomes, the less it can afford to dabble in some newfangled idea that is very likely to waste investments that could more profitably be used to maximize its strengths. Since statistically most new inventions will fail, and most experiments will not pan out, this reluctance toward the unknown is valid.

More importantly, if new methods do succeed, they will cannibalize the currently optimal products and processes. This dilemma makes it very hard to invest in and explore new products, when it is far safer, and more profitable, to optimize what already works. Toyota has spent many decades optimizing small efficient gasoline combustion engines; no one in the world is better. They are at the peak of gas car engines. Any money spent investing in unproven alternative engines, such as electric motors, would reduce their profits. They would be devolving their expertise and becoming less excellent. They cannot afford to be less profitable.

Successful artists, musicians and other creatives have a similar dilemma. Their fans want more of what they did to reach their success. They want them to play their old songs, paint the same style paintings, and make the same kind of great movies. But in order to keep improving, to maintain their reputation as someone creative, the artists need to try new stuff, which means they will make stuff that fails, that is almost crap, that is not excellent. To win future fans the artist has to explore rather than merely exploit. They have to take up a new instrument, or develop a new style, or invent new characters, which is risky. Sequels are just so much more profitable.

But even sequels – if they are to be perfect – are not easy. They take a kind of creativity to perfect. This kind of everyday creativity, the kind of problem solving that any decent art or innovation requires, is what we might call “lower-case” or base creativity. Devising a new logo, if done well, requires creativity. But designing logos is a well-trod domain, with hundreds of thousands of examples, with base creativity as the main ingredient. Designing another great book cover is primarily a matter of exploiting known processes and skills. Occasionally someone will step up and create a book cover so unusual and weird but cool that it creates a whole new way to make covers in the future. This is upper-case, disruptive Creativity.

Upper-case, disruptive Creativity discovers or creates new areas to be creative in. It alters the landscape. Most everyday base creativity is filling in the landscape rather than birthing it. Disruptive Creativity is like the discovery of DNA, or the invention of photography, or Impressionism in art. Rather than just solving for the best possibility, it is enlarging the space of all possibilities.

Both biologists and computer scientists use the same analogy when visualizing this inherent trade-off between optimization and exploration. Imagine a geological landscape with mountains and valleys. The elevation of a place on the landscape is reckoned as its fitness, or its technical perfection. The higher up, the more fit. If an organism, or a product, or an answer is as perfect as it can be, then it registers as being at the very peak of a mountain, because it cannot be any more fit or perfect for its environment. Evolution is pictured as the journey of an organism in this conceptual landscape. Over time, as the organism’s form adapts more and more to its environment, this change is represented as that organism going up (getting better), as it climbs toward a peak. In shorthand, the organism is said to be hill-climbing.

Computer scientists employ the same analogy to illustrate how an algorithm can produce the best answer. If higher up in the landscape represents better answers, then every slope has a gradient. If your program always rewards any answer higher up the gradient, then over time it will climb the hill, eventually reaching the optimal peak. The idea is that you don’t really have to know what direction to go, as long as you keep moving up. Computer science is filled with shortcuts and all kinds of tricks to speed up this hill-climbing.
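The procedure computer scientists describe can be sketched in a few lines. This is a minimal, illustrative version (the function names and the one-hill fitness landscape are assumptions for demonstration):

```python
import random

def hill_climb(fitness, x, steps=10_000, step_size=0.1):
    """Repeatedly try a small random step; keep it only if it moves up."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):  # only ever accept "up"
            x = candidate
    return x

# A single smooth hill peaked at x = 3: the climber never knows where
# the peak is; it just keeps moving up the gradient until it gets there.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
```

The same blindness that makes this work on a single hill is also its weakness: if the landscape held a second, taller hill across a valley, this climber would never go down to reach it.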

In the broadest sense, most every-day creativity, ordinary innovation, and even biological adaptation in evolution is hill climbing. Most new things, new ideas, most improvements are incremental advances, and for most of the time, this optimization is exactly what is needed.

But every now and then an upper-case, disruptive jump occurs, creating a whole new genre, a new territory, or a new way to improve. Instead of incrementally climbing up the gradient, this change creates a whole new hill to climb. This process is known as hill-making rather than hill-climbing.

Hill-making is much harder to do, but far more productive. The difficulty stems from the challenge of finding a territory that is different, yet plausible, inhabitable, coherent, rather than just different and chaotic, untenable, or random nonsense. It is very easy to make drastic change, but most drastic changes do not work. Hill-making entails occupying (or finding) an area where your work increases the possibilities for more work.

If a musician invents a new instrument, rather than just another song, that new instrument opens up a whole new area in which to write many new songs, new ways to make music that could be exploited and explored by many. Inventing the technology of cameras was more than merely an incremental step in optics; it opened up vast realms of visual possibilities, each of which, such as cinema or photojournalism, became a new hill that could be climbed. The first comedians to figure out stand-up comedy created a whole new hill, a new vocation that could be perfected.

The techniques needed for this upper-case Creativity, this disruptive innovation, this hill-making, are significantly different from the techniques for hill-climbing. To date, generative LLM-AI is capable of generating lower-case base creativity. It can create novel images, novel text, novel solutions that are predominately incremental. These products may not be things we’ve seen or heard before, or even things we would have thought of ourselves, but on average the LLMs do not seem to be creating entirely new ways to approach text, images, and tasks. So far they seem to lack the skill of making a new hill.

Hill-making demands huge inefficiencies. There is a lot of wandering around, exploring, with plenty of dead ends, detours, and dry deserts where things die. The cost of exploring is that you only discover something worthwhile occasionally. This is why the kinds of paths that tend toward hill-making such as science, entrepreneurship, art, and frontier exploration are inherently expensive. Most of the time nothing radically new is discovered. Not only is patience needed, but hill-making requires a high tolerance of failure. Wastage is the norm.

At the same time, hill-finding requires the ability to look way beyond current success, current fitness, current peaks. Hill-finding cannot be focused on “going up the gradient” – becoming excellent. In fact it requires going down the gradient, becoming less excellent, and in a certain sense, devolving. The kind of agents that are good at evolving towards excellence are usually horrible in devolving toward possible death. But the only way to reach a new hill – a hill that might potentially grow to be taller than the hill you are currently climbing – is to head down. That is extremely hard for any ambitious, hard working person, organism, organization, or AI.

At the moment generative AI is not good at this. At the moment AI and robots are very good at being efficient, answering questions, and going up gradients towards betterment, but not very good at following hunches, asking questions, and wondering. If, and when, they do that, that will be good. My hope is that they will help us humans hill-find better.



Weekly Links, 02/02/2024




Future Embarrassments


Because moral progress is real (we improve our morality over time), new generations will inevitably see moral deficiencies in previous generations. To some degree we can guess and anticipate what they will find embarrassing about us today, but it will be impossible to align ourselves fully with their view. The more we are immersed in today’s culture, the harder it is to see our future. So we just have to get used to the idea that our descendants will find many of the things we do and believe today eye-rollingly embarrassing. With the caveat that most of these guesses will be wrong, what are some possible future embarrassments?

Eating animal flesh with gusto

Believing gender is only binary

Denying consciousness in machines

Prohibiting euthanasia

Outlawing psychedelics

Hatred against engineering human genetics 

Acceptance of passports to prevent mobility

Tolerating destitute poverty

Employing capital punishment 

Belief that killing in war is not murder

Prison is justified punishment

Not choosing your own name

Human clones are diabolical

Assuming photographs are evidence



The Trust Flip


For most of human history, it was very hard to determine whether what someone told us was true. Should we believe them? The answer came down to several factors: Did the claim square with what we already believed to be true? Was the person who told us reliable? Were they truthful in the past? Were they gullible or skeptical themselves? Could anyone else confirm what they claimed? What did the evidence look like, and could we examine it?

It was easier to vet the claims of something that happened in the recent past. It was very hard to vet the claim of something that just happened, particularly if it happened far away. For that reason, rumors were rampant in the old days. Someone they trusted told them something they heard from someone they trusted, and they were now telling you. In fact, before the age of printing, this chain of communication was primarily how most information was conveyed, and it was extremely hard to weed out what was true from what was exaggerated or false.

The invention of photography changed this dynamic. We came to believe a photograph as evidence of truth. You might claim something, and I might not believe it, but if you showed me a photograph of it, I HAD to believe it. A photograph was inherently believable, unless obviously altered, in contrast to words, which were inherently malleable. When you viewed a photograph, it was innocent, unless proven guilty. Video had the same default. A video was inherently truthful, unless labeled otherwise.

The arrival of generative AI has flipped the polarity of truthfulness back to what it was in old times. Now when we see a photograph we assume it is fake, unless proven otherwise. When we see video, we assume it has been altered, generated, or enhanced with special effects, unless claimed otherwise. The new default for all images, including photographic ones, is that they are fiction, unless they expressly claim to be real.

This claim of veracity can come in several ways. Increasingly, the origins of an image will be embedded in its metadata. It will encode its origins either as a generated image or as an unaltered image from a trusted camera. Secondly, an image can claim its source. Is the person or institution who provides the image trustworthy, and reliable in the past?

We will come to see that our default of “trust first and check later” was only a short temporary anomaly in our long history. We are back to the state we have been in for most of our time as humans, where we “check first and trust later.” The trust flip happened. Just recently, our initial default response to a photograph was to believe it as real, unless proven a fake. Now our default response to an image is to disbelieve as a fake, unless proven to be real.



Things We Didn’t Know About Ourselves


The fact that everyone alive on our planet is now connected electronically is not a surprise. This universal connection had been a scifi theme for many decades before it happened, and this view of universal connection was not that difficult to imagine once radios were invented. Televisions connecting to each other seemed inevitable once we had telephones. They were called desktop picture phones at first, and they were long expected; in fact, by this century they were considered long overdue.

But the smartphone, a small pocketable screen, was not at all expected. It was a complete surprise because no one thought it would be possible to engage with such a tiny screen. It was a shock to everyone (including me) that a screen smaller than my palm would be enough to watch a movie, or read a book, or get your news. That kind of behavior seemed to go against “what everybody knows” about movie watching and book reading. In fact the idea of an appealing micro-window seemed contrary to what we thought we knew about our physiology: that we needed a wide view with high fidelity, and that it was unnatural and uncomfortable to restrict our gaze to such a tiny screen. It turns out we were very wrong. We have zero trouble watching hours of movies on this sliver of a screen. This comfort with a small screen was one of many things we did not know about ourselves.

There are so many other things we didn’t know about ourselves. We had been painting and observing images for thousands of years before we discovered that we can fool our own eyes and minds into perceiving motion by rapidly flicking through a series of images with minor alterations. These illusions are called movies. We didn’t know we had this ability to perceive motion until we had the technology to manifest it. In other words, we could not have known this about ourselves until we invented cinema.

We are discovering something similar with VR. We didn’t know we could be convinced of the presence of something by generating a volumetric, spatial image of it. Rendering an image spatially makes it feel like it is present, even when our logical mind knows it is not. This trick makes VR worlds feel real. We also could not have known this about our own eyes until we invented VR technology.

I am pretty sure that we did not know that we humans much prefer personal attention to personal privacy. Until we invented the technology of social media, we thought we naturally favored privacy over attention, but we were also wrong about that. We found out that when given a choice people prefer to reveal themselves to get personal attention rather than the obscurity of privacy.

All this should make us wonder how many other things we don’t know about ourselves, and what kind of technology we need to uncover them. It is possible that every bit of complex technology will in its turn reveal to us something about ourselves we did not know. Part of inventing and taming our inventions is coming to terms with the new things we learn about ourselves.





© 2023