The Technium

Making the Inevitable Obvious

Rights / Responsibilities


We tend to talk about rights without ever mentioning their corresponding duties. Every right needs an obligation to support it. At the very least we have a duty to grant the same rights to others, but more consequentially, rights have trade-offs: things we have to do or surrender in order to earn the right. The right to vote entails the duty to pay taxes. The right of free speech entails the duty to use it responsibly, not inciting violence, harm, or insurrection.

So it is with rights in our information age. Every cyber right entails a cyber duty.

For example, security. Modern security is an attribute of systems. Your thing, your home, your company cannot maintain security alone, outside of a system. Once a digital device is connected, everything it connects with needs to be secure as well. In fact, any system will only be as secure as the weakest part of that system. If your part is 99% secure but your colleague is only 90% secure, you are actually only 90% secure as well. This small difference is huge in security. There are documented cases where an insecure baby monitor in a household system became the back door for unauthorized entry into that family's network. In this way, the lax security of one piece sets the security for the whole system.

Therefore, every part of a system has a duty to maintain the required level of security. Since one part's security partly determines the security of every other part, each part of the system can rightfully demand that all parts level up, and each part (person, organization) has the duty to maintain proper security. It is not hard to imagine protocols that say you and your devices can't join this network unless you can demonstrate you have proper security. In short, you have the right to connect to the public commons (without permission), but you have the duty to ensure the security of your connection (and of the commons as a whole).
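The logic is simple enough to sketch in code. Below is a minimal illustration of the weakest-link rule and a hypothetical join-time gate; the security scores and the admission threshold are invented for the example, not drawn from any real protocol.

```python
def system_security(scores):
    # The weakest-link rule: a connected system is only as secure
    # as its least secure part.
    return min(scores.values())

def may_join(device_score, required_floor=0.99):
    # A hypothetical admission check: a device may connect only if it
    # meets the security floor the rest of the network maintains.
    return device_score >= required_floor

network = {"your laptop": 0.99, "colleague's laptop": 0.90}
print(system_security(network))  # 0.9 -- one lax part sets the level for all
print(may_join(0.50))            # False: the insecure baby monitor is refused at the door
```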

This is not much different from what most nations say: you have a right to use any public road, but you must demonstrate you can be responsible for its safety and security by passing a driver's test. Your right is mobility on the roads; your duty is to drive responsibly (no drinking!), and to get a license to prove it.

There are other rights/duties animated by digital tech, such as your identity. You don't really need a name for yourself. Your name is most useful to other people, so they can identify you and, in turn, trust you. Like your face, your name is at once the most personal thing about you and the most public thing about you. Your name and face are both indisputably "yours" and also indisputably in the commonwealth. Our faces are so peculiarly public that we are spooked by people who hide their faces. And legally, many activities require that we keep our face public, like getting a license, flying in a plane, entering a secure facility, or voting. We have a right to look however we like, but we also have an obligation to keep our face public.

Names are similar, both intensely personal and outright public. We can try to hide our names behind pseudonyms and anonymity, but that diminishes some of our power to effect change in the world, and also reduces trust from others. Privacy is part of a tradeoff. In order to be treated as an individual, with respect, we have to be transparent as an individual. We have a history, we have a story, we have context, we have needs and talents. All this is wrapped in our identity. So in order to be treated as an individual we have to convey that identity. Personalized individuality is a right that demands a duty of transmitting our identity. We also have (and must have) the right to remain opaque, unknown, and hidden, but the tradeoff for that right is the duty to accept that we will be treated generically, not as an individual but as a number. The right of obscurity is the duty of silence; the right of individuality is the duty of transparency.

As more of our lives are connected constantly, the distinction between digital rights and rights in the rest of the world tends to vanish. I don't find it useful to separate them. However, there will still be new "rights" and "responsibilities" arising from new technology. They will first appear in high tech, and then as that tech becomes the norm, so will the rights. Currently, generative AIs demand we think about the rights – and responsibilities – of referencing a creation. The right of copy, or copyright, addressed the need to govern copies of a creation. Generative AI does not make copies, so copyright norms are helpless here. We realize now that there may be a need to articulate a new right/duty around training an intelligence. If my creation is referenced in order to train a student, or to train an AI – that is, an agent that will go on to create things itself, influenced by my work – should I not receive something for that? Should I have any control over what is made from my influence? I can imagine an emerging system in which any creation granted copyright is duty-bound to be available for reference training by others, with the corresponding right for anyone to train upon a creation, bound by a duty to pass some of the credit (status and monetary) back to creators.

The mirrorworlds of AR and XR will necessitate further new pairs of rights/responsibilities around the messy concerns of common and shared works, for the mirrorworld is 100% a commons. What do I get from a collaborative creation, and what do I owe the others who contribute, when those boundaries are unclear? Because every new technology generates new possibilities, I expect it to also produce new pairs of duties and rights.



The Scarcity of the Long-Term


The chief hurdle in constructing a Death Star is not the energy, materials, or even knowledge needed. It's the long time needed. A society can change its mind midway over the centuries and simply move on. The galaxy is likely strewn with abandoned Half-a-Death Stars.

Despite the acceleration of technological progress, indeed because of it, one of the few scarcities in the future will be a long attention span. There will be no shortage of amazing materials that can do magic, astounding sources of energy to make anything happen, and great reserves of knowledge and know-how to accomplish our dreams. But the gating factor for actually completing those big dreams may be the distractions that any society feels over the centuries. What your parents and grandparents found important you may find silly, or even embarrassing.

To build something that extends over multiples of individual lifespans requires a very good mechanism to transmit the priority of that mission. It is much easier to build a rocket that will sail 500 years to the nearest star than it is to ensure that the future generations of people born on board that 500-year rocket maintain the mission. It is very likely that before it reaches the 250-year halfway point, the people on board will turn it around and head back to a certain future. They did not sign up for this crazy idea!

It is certain that 250 years after the launch of that starship, the society that made it will have wondrous new technologies, new ideas, maybe even new ways to explore, and it could easily change its mind about the importance of sending flesh into space, or decide to explore other frontiers. That is not even to mention how the minds of those on board could also be changed by new inventions over 250 years.

If left alone, an advanced civilization could, over many millennia, invent AIs and other tools that would allow it to invent almost any material it could imagine. There would be no resource in the universe it could not synthesize at home. In that sense, there would be no material and energy scarcity. Perhaps no knowledge scarcity either. The only real scarcity would be a long attention span. That is not something you can buy or download. You'd need some new tools for transmitting values and missions into the future.

There's an ethical dilemma around transmitting a mission into the far future. We don't necessarily want to burden a future generation with obligations they had no choice in; we don't want to rob them of their free will to choose their own destinations. We DO want to transmit them opportunities and tools, but it is very hard to predict from far away which are the gifts and which are the burdens. There's a high probability they will come to regard our best intentions as misguided, and choose a very different path, leaving our mission behind.

In this way it is easy to break the chain of a long-term mission, one that is far longer than an individual lifespan, and perhaps even longer than a society's lifespan. It may turn out that the most common scarcity among advanced galactic civilizations is a long-term attention span. Perhaps vanishingly few ever complete a project that lasts as long as 1,000 years. Perhaps few projects remain viable after 500 years.

I am asking myself, what would I need to hear and get from a past generation to convince me to complete a project they began?



Hill-Making vs Hill-Climbing


There are two modes of learning, two paths to improvement. One is to relentlessly, deliberately improve what you can do already, by trying to perfect your process. Focus on optimizing what works. The other way is to create new areas that can be exploited and perfected. Explore regions that are suboptimal in the hope you can make them work – and sometimes they will – giving you new territory to work in. Most attempts to get better are a mix of these methods, but in their extremes these two functions – exploit and explore – operate in different ways, and require different strategies.

The first mode, exploiting and perfecting, rewards efficiency and optimization. It has diminishing returns, becoming more difficult as fitness and perfection increase. But it is reliable and low risk. The second mode, exploring and creating, on the other hand, is highly uncertain, with high risks, yet there is less competition in this mode and the yields of this approach are, in theory, unlimited.

This tradeoff between exploiting and exploring is present in all domains, from the personal to the biggest systems. Balancing the desire to improve your own skills against being less productive while you learn new skills is one example at the personal level. The tradeoff between investing heavily in optimizing your production methods and investing in new technology that will obsolete your current methods is an example at the systems level. This particular tradeoff in the corporate world is sometimes called the "innovator's dilemma." The more efficient and excellent a company becomes, the less it can afford to dabble in some newfangled idea that is very likely to waste investments that could more profitably be used to maximize its strengths. Since statistically most new inventions will fail, and most experiments will not pan out, this reluctance toward the unknown is valid.

More importantly, if new methods do succeed, they will cannibalize the currently optimal products and processes. This dilemma makes it very hard to invest in and explore new products, when it is far safer, and more profitable, to optimize what already works. Toyota has spent many decades optimizing small, efficient gasoline combustion engines; no one in the world is better. They are at the peak of gas car engines. Any money spent on unproven alternative engines, such as electric motors, would reduce their profits. They would be devolving their expertise and becoming less excellent. They cannot afford to be less profitable.

Successful artists, musicians, and other creatives have a similar dilemma. Their fans want more of what brought them their success. They want them to play their old songs, paint the same style of paintings, and make the same kind of great movies. But in order to keep improving, to maintain their reputation as someone creative, the artists need to try new stuff, which means they will make stuff that fails, that is almost crap, that is not excellent. To win future fans the artist has to explore rather than merely exploit. They have to take up a new instrument, or develop a new style, or invent new characters, which is risky. Sequels are just so much more profitable.

But even sequels – if they are to be perfect – are not easy. They take a kind of creativity to perfect. This kind of everyday creativity, the kind of problem solving that any decent art or innovation requires, is what we might call “lower-case” or base creativity. Devising a new logo, if done well, requires creativity. But designing logos is a well-trod domain, with hundreds of thousands of examples, with base creativity as the main ingredient. Designing another great book cover is primarily a matter of exploiting known processes and skills. Occasionally someone will step up and create a book cover so unusual and weird but cool that it creates a whole new way to make covers in the future. This is upper-case, disruptive Creativity.

Upper-case, disruptive Creativity discovers or creates new areas to be creative in. It alters the landscape. Most everyday base creativity is filling in the landscape rather than birthing it. Disruptive Creativity is like the discovery of DNA, or the invention of photography, or Impressionism in art. Rather than just solving for the best possibility, it is enlarging the space of all possibilities.

Both biologists and computer scientists use the same analogy when visualizing this inherent trade-off between optimization and exploration. Imagine a geological landscape with mountains and valleys. The elevation of a place on the landscape is reckoned as its fitness, or its technical perfection. The higher up, the more fit. If an organism, or a product, or an answer is as perfect as it can be, then it registers as being at the very peak of a mountain, because it cannot be any more fit or perfect for its environment. Evolution is pictured as the journey of an organism in this conceptual landscape. Over time, as the organism’s form adapts more and more to its environment, this change is represented as that organism going up (getting better), as it climbs toward a peak. In shorthand, the organism is said to be hill-climbing.

Computer scientists employ the same analogy to illustrate how an algorithm can produce the best answer. If higher up in the landscape represents better answers, then every slope has a gradient. If your program always rewards any answer higher up the gradient, then over time it will climb the hill, eventually reaching the optimal peak. The idea is that you don't really have to know what direction to go, as long as you keep moving up. Computer science is filled with shortcuts and all kinds of tricks to speed up this hill-climbing.
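For the curious, here is a minimal sketch of that idea in code, assuming a toy one-dimensional landscape with a single hill; the fitness function and step sizes are invented for illustration.

```python
import random

def fitness(x):
    # A toy landscape: one hill, with its peak at x = 3.
    return -(x - 3) ** 2

def hill_climb(x, steps=10000, step_size=0.1):
    # Greedy hill-climbing: propose small random moves and keep
    # only the ones that go up the gradient.
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

print(hill_climb(0.0))  # converges near 3, the peak of the only hill in sight
```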

In the broadest sense, most everyday creativity, ordinary innovation, and even biological adaptation in evolution is hill-climbing. Most new things, new ideas, and improvements are incremental advances, and most of the time this optimization is exactly what is needed.

But every now and then an upper-case, disruptive jump occurs, creating a whole new genre, a new territory, or a new way to improve. Instead of incrementally climbing up the gradient, this change creates a whole new hill to climb. This process is known as hill-making rather than hill-climbing.

Hill-making is much harder to do, but far more productive. The difficulty stems from the challenge of finding a territory that is different, yet plausible, inhabitable, coherent, rather than just different and chaotic, untenable, or random nonsense. It is very easy to make a drastic change, but most drastic changes do not work. Hill-making entails occupying (or finding) an area where your work increases the possibilities for more work.

If a musician invents a new instrument, rather than just another song, that new instrument opens up a whole new area in which to write many new songs, new ways to make music that can be exploited and explored by many. Inventing the technology of cameras was more than merely an incremental step in optics; it opened up vast realms of visual possibilities, and each of those, such as cinema or photojournalism, became a new hill that could be climbed. The first comedians to figure out stand-up comedy created a whole new hill – a new vocation that could be perfected.

The techniques needed for this upper-case Creativity, this disruptive innovation, this hill-making, are significantly different from the techniques for hill-climbing. To date, generative LLM-AI is capable of generating lower-case base creativity. It can create novel images, novel text, novel solutions that are predominantly incremental. These products may not be things we've seen or heard before, or even things we would have thought of ourselves, but on average the LLMs do not seem to be creating entirely new ways to approach text, images, and tasks. So far they seem to lack the skill of making a new hill.

Hill-making demands huge inefficiencies. There is a lot of wandering around, exploring, with plenty of dead ends, detours, and dry deserts where things die. The cost of exploring is that you only occasionally discover something worthwhile. This is why the kinds of paths that tend toward hill-making, such as science, entrepreneurship, art, and frontier exploration, are inherently expensive. Most of the time nothing radically new is discovered. Not only is patience needed, but hill-making requires a high tolerance of failure. Wastage is the norm.

At the same time, hill-finding requires the ability to look way beyond current success, current fitness, current peaks. Hill-finding cannot be focused on "going up the gradient" – becoming excellent. In fact it requires going down the gradient, becoming less excellent, and in a certain sense, devolving. The kinds of agents that are good at evolving toward excellence are usually horrible at devolving toward possible death. But the only way to reach a new hill – a hill that might potentially grow to be taller than the hill you are currently climbing – is to head down. That is extremely hard for any ambitious, hard-working person, organism, organization, or AI.
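There is a classic algorithmic technique that does exactly this: simulated annealing, which occasionally accepts downhill moves so the search can escape a local peak. Below is a minimal sketch, with a toy two-hill landscape and parameters invented for the example.

```python
import math
import random

def fitness(x):
    # A toy landscape with two hills: a low one at x = 0 and a taller one at x = 6.
    return math.exp(-(x ** 2)) + 2 * math.exp(-((x - 6) ** 2))

def anneal(x, steps=20000, step_size=0.5, temp=1.0, cooling=0.9995):
    # Always accept uphill moves; accept downhill moves with a probability
    # that shrinks as the "temperature" cools. The early willingness to get
    # worse is what lets the search leave the first hill it finds.
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        delta = fitness(candidate) - fitness(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling
    return x

print(anneal(0.0))  # usually ends near 6, the taller hill, not the nearby one
```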

At the moment generative AI is not good at this. AI and robots are very good at being efficient, answering questions, and going up gradients toward betterment, but not very good at following hunches, asking questions, and wondering. If, and when, they do that, it will be good. My hope is that they will help us humans hill-find better.



Future Embarrassments


Because moral progress is real (we improve our morality over time), new generations will inevitably see moral deficiencies in previous generations. To some degree we can guess and anticipate what they will find embarrassing about us today, but it will be impossible to align ourselves fully with their view. The more we are immersed in today's culture, the harder it is to see our future. So we just have to get used to the idea that our descendants will find many of the things we do and believe today eye-rollingly embarrassing. With the caveat that most of these will be wrong, what are some possible future embarrassments?

Eating animal flesh with gusto

Believing gender is only binary

Denying consciousness in machines

Prohibiting euthanasia

Outlawing psychedelics

Hatred of engineering human genetics

Acceptance of passports to prevent mobility

Tolerating destitute poverty

Employing capital punishment 

Belief that killing in war is not murder

Prison is justified punishment

Not choosing your own name

Human clones are diabolical

Assuming photographs are evidence



The Trust Flip


For most of human history, it was very hard to determine whether what someone told us was true. Should we believe them? The answer came down to several factors: did the claim square with what we already believed to be true? Was the person who told us reliable? Were they truthful in the past? Were they gullible or skeptical themselves? Could anyone else confirm what they claimed? What did the evidence look like, and could we examine it?

It was easier to vet claims about things that had happened in the recent past. It was very hard to vet a claim about something that had just happened, particularly if it happened far away. For that reason, rumors were rampant in the old days. Someone you trusted told you something they had heard from someone they trusted, who had heard it from someone else. In fact, before the age of printing, this chain of communication was primarily how most information was conveyed, and it was extremely hard to weed out what was true from what was exaggerated or false.

The invention of photography changed this dynamic. We came to accept a photograph as evidence of truth. You might claim something, and I might not believe it, but if you showed me a photograph of it, I HAD to believe it. A photograph was inherently believable, unless obviously altered, in contrast to words, which were inherently malleable. When you viewed a photograph, it was innocent unless proven guilty. Video had the same default. A video was inherently truthful, unless labeled otherwise.

The arrival of generative AI has flipped the polarity of truthfulness back to what it was in old times. Now when we see a photograph we assume it is fake, unless proven otherwise. When we see video, we assume it has been altered, generated, or special-effected, unless claimed otherwise. The new default for all images, including photographic ones, is that they are fiction – unless they expressly claim to be real.

This claim of veracity can come in several ways. Increasingly, the origins of an image will be embedded in its metadata, encoding whether it is a generated image or an unaltered image from a trusted camera. Secondly, an image can claim its source: has the person or institution providing the image been trustworthy and reliable in the past?
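As a sketch of how that default might work in software, here is a hypothetical check; the metadata field names and the verification step are invented for illustration, not taken from any real provenance standard.

```python
def classify_image(metadata):
    # Default-distrust policy: an image is presumed fiction unless its
    # metadata carries a verified claim of capture by a trusted camera.
    # These field names are hypothetical.
    is_camera_original = metadata.get("provenance") == "camera"
    signer_verified = metadata.get("signature_valid", False)
    if is_camera_original and signer_verified:
        return "presumed real"
    return "presumed fiction"  # the new default: disbelieve until proven

print(classify_image({"provenance": "camera", "signature_valid": True}))  # presumed real
print(classify_image({}))                                                 # presumed fiction
```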

We will come to see that our default of "trust first and check later" was only a brief anomaly in our long history. We are back in the state we occupied for most of our time as humans, where we "check first and trust later." The trust flip has happened. Until recently, our default response to a photograph was to believe it real, unless proven a fake. Now our default response to an image is to disbelieve it as a fake, unless proven real.



Things We Didn’t Know About Ourselves


The fact that everyone alive on our planet is now connected electronically is not a surprise. Universal connection had been a sci-fi theme for many decades before it happened, and it was not that difficult to imagine once radios were invented. Televisions connecting to each other seemed inevitable once we had telephones. They were called desktop picture phones at first, and they were long expected; in fact, by this century they were considered long overdue.

But the smartphone – a small pocketable screen – was not at all expected. It was a complete surprise because no one thought it would be possible to engage with such a tiny screen. It was a shock to everyone (including me) that a screen smaller than my palm would be enough to watch a movie, or read a book, or get the news. That kind of behavior seemed to go against "what everybody knows" about movie watching and book reading. In fact the idea of an appealing micro-window seemed contrary to what we thought we knew about our physiology – that we needed a wide view with high fidelity, and that it was unnatural and uncomfortable to restrict our gaze to such a tiny screen. Turns out we were very wrong. We have zero trouble watching hours of movies on this sliver of a screen. This comfort with a small screen was one of many things we did not know about ourselves.

There are so many other things we didn't know about ourselves. We had been painting and observing images for thousands of years before we discovered that we can fool our own eyes and minds into perceiving motion by rapidly flipping through a series of images with minor alterations. These illusions are called movies. We didn't know we had this ability to perceive motion until we had the technology to manifest it. In other words, we could not have known this about ourselves until we invented cinema.

We are discovering something similar with VR. We didn't know we could be convinced of the presence of something by generating a volumetric, spatial image of it. Rendering an image spatially makes it feel present, even when our logical mind knows it is not. This trick makes VR worlds feel real. We also could not have known this about our own eyes until we invented VR technology.

I am pretty sure that we did not know that we humans much prefer personal attention to personal privacy. Until we invented the technology of social media, we thought we naturally favored privacy over attention, but we were wrong about that too. We found out that, given the choice, people prefer to reveal themselves for personal attention rather than accept the obscurity of privacy.

All this should make us wonder: how many other things don't we know about ourselves? And what kind of technology do we need to uncover them? It is possible that every bit of complex technology will in its turn reveal to us something about ourselves we did not know. Part of inventing and taming our inventions is coming to terms with the new things we learn about ourselves.



The Boredom Device


When it comes to the digital world we might need a device that reverses the usual charge of wants. Instead of a small thing in your pocket that gives you exactly what you want to see at any minute, this device gives you stuff you don’t want to see.

This is a personal device that uses AI to learn what you like to spend time on, and then deliberately gives you the opposite. Sort of like TikTok, but in reverse. It only shows you articles, social media posts, and advertisements that it knows you will never click on. And if you don't click, it will keep sending you more and more stuff like that. Entire streams of stuff you won't click on.

The idea is that you will pick up your phone to scan what's going on in the world, and not see anything worth reading. Bored, you put your phone away. You have satisfied the itch of "what's going on in the world" and been assured "not much, get back to what you were doing." This works not by giving you things that are your opposite, as in things you disagree with. Disagreement is actually engaging. Opposition is energizing. Hate commands your attention. The key is to deliver to your screen stuff that you are indifferent to. Dreadfully sleepy. For me the counter to right-wing madness is not progressive sanity but beauty tips. For someone else it would be theories about Rome, or the optimal method for calculating your taxes.
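Mechanically this is just an ordinary recommender turned upside down. A minimal sketch, assuming some engagement model already predicts a click probability per item; the items and scores below are invented for the example.

```python
def boredom_feed(items, predicted_click_probability, feed_size=3):
    # A normal feed ranks by descending click probability; this one ranks
    # ascending, surfacing exactly the items you are most likely to ignore.
    return sorted(items, key=predicted_click_probability)[:feed_size]

# Hypothetical scores, as if produced by a trained engagement model.
scores = {"beauty tips": 0.01, "theories about Rome": 0.02,
          "tax optimization methods": 0.03, "outrage bait": 0.95}
print(boredom_feed(scores.keys(), scores.get))
# -> the three most boring items; the engaging one never appears
```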

This device / phone / OS / browser / social media / API would be marketed to people who want to decouple themselves from their phone addictions, knowing that cold turkey will never work because of FOMO. You basically assuage the FOMO and replace it with digital ennui. It’s nicotine chewing gum for digital addiction. 

A lot of people would pay a subscription for this, which is the only monetary model that would work.

(This idea was first suggested to me by the brilliant science fiction author Hugh Howey.)




