The Technium

Making the Inevitable Obvious

Cyberweapons: A Real Worry


There is not much about technology that I worry about. But one technological area I do worry a lot about is cyber war, cyber security, and cyber conflict. My worry stems from the lack of accountability and the lack of consensus in this arena. It is devilishly difficult to discern what is being done cyberwise, and who is doing it. At the same time, there is no consensus about which actions need to be disclosed, monitored, or verified. Nor is there real consensus on what actions are allowed, permitted, prohibited, discouraged, or encouraged. Finally, there are no limits, remedies, or restrictions that can be enforced.

What this means is that right now there are huge cyber operations happening around the world every day. Some of these are defensive, but many are offensive attacks. Systems are breached and probed, potential damage is rehearsed, secret entrances are installed for future use, small things are broken. The US, China, Russia, Israel, Iran, North Korea — to name some of the most active countries — plus many more non-state, quasi-state, and organized-crime actors, like hacker groups, are involved in huge maneuvers that are invisible to the rest of the world. Increasingly these data-versus-data conflicts are touching physical infrastructure. The world’s electrical grids, transportation networks, hospitals, and water systems all depend on an intangible data structure, and that is where these skirmishes are taking place. So far only a few incursions have crippled physical civic services; a hospital is cut off from electricity, or traffic lights are disrupted. My worry is that because there is neither transparency nor agreed norms, these mutual attacks will escalate until something horrible happens. There is no push-back on this arms race. The public doesn’t see it, and the experts who do see it don’t agree on where to go.

We beings on this planet have evolved an elaborate set of rules about how to conduct war. Weirdly, we have agreed on how to kill each other. Some ways are okay and some are not. You can’t kill someone you take prisoner. You can’t intentionally kill children. You can’t torture. Etc. As new weapons were invented, we added them to our agreements. We have agreed to avoid using nuclear bombs (although some countries, including the US, still make them).

Cyber weapons are new, and have not been included in our agreements. In war, is it okay to take down a nation’s banking system? Is it permissible to disable everyone’s phones? Should the world accept hacking interference in another nation’s election?

Problematic weapons like nuclear, chemical, and biological ones have extensive, complicated programs of verification to make sure our collective agreements are adhered to. Part of that process is self-reporting and self-disclosure by those who possess these weapons. None of this disclosure is happening in cyberspace.

None of the countries actively using these new weapons will acknowledge they have them; they deny using them, and don’t even communicate when others use the weapons against them. There is a conspiracy of silence in cyberwar. That is the danger.

This silence and denial also creates cover for non-state attackers, such as criminals, rogue-state hackers, and naive teenagers, to do damage. They are hidden behind the same cloak that nations are hiding behind. Together, state and non-state hacking can add up to the potential for mutual destruction. Today every developed country is potentially very vulnerable to a cyber attack. And soon every developed country will be capable of delivering a crippling attack.

We have nuclear arms treaties because we realized we had the capability of mutual destruction. Our next step is to realize we have the capability of mutual CYBER destruction. The remedy is similar: a global agreement on the acceptable use of cyber weapons, and a public accounting of those weapons.

A significant hurdle for the accountability of cyber weapons is their close alignment with intelligence gathering. Cyberwar is fought with information, and information is the heart of intelligence. It is very difficult to unravel cyber weapons from cyber tools. There is the thinnest line between hacking a system to learn about it (intelligence gathering) and hacking it to learn how to damage it (reconnaissance) or hacking it to damage it (war). The same tools (weapons?) may be used in each case.

Understandably, the intelligence departments of nations are reluctant to reveal their methods, or share their tools, or in any way handicap themselves. Cyber-weapons derive from cyber spy tools, and it is a challenge to untangle the two. Knowledge and intelligence can be wielded as a weapon. It’s hard to see a way to account for information weapons that does not expose information spying.

But not impossible. We can regulate specific actions via treaties and agreements. Rather than outlaw tools (or weapons), we can outlaw outcomes. We might agree that taking a banking system down is not acceptable, whether you use a computer virus, a social media hack, or an EMP bomb blast. Interfering in an election should be prohibited via any method, even the most indirect.

The remaining challenge is mutual verification of the source of cyber actions. Tracking the source of actions is made difficult by the dark web. Much can be hidden by anonymizers and cleverness. But a lot online is hidden because the global internet is a patchwork of national networks, and because the actual humans creating attacks are shielded from inspection by national laws. Hackers in country X casting spells on country Y, even if proven bad, may be out of reach of country Y.

Part of the needed reform for a consensus on cyber war extends to making it harder to hide behind the walls erected by nations. I predict the nations will begin to cooperate more in disclosing the source of actions, including their own departments, for this simple reason: nations will come to understand that there is no national cyber security without global cyber security.

Rather than kumbaya global peace, pure self-interest will drive nations to be more cooperative in the cyber dimensions. When you have a global network, your security is only as reliable as the weakest link in that system. Attackers migrate to the least secure edges, where they can continue to cause damage. Ultimately security within your nation will fail unless the security of all the other nations is also maintained.

In addition to improving overt security in peacetime, this requirement for global mutual security can drive the transparency needed to regulate cyber weapons. My only worry is that it may take a huge cyber disaster, with many people dying, before nations come together to agree on how we should treat these new weapons.



Recent Readings, 8


IMHO, reading this subreddit written by an AI feels very similar to reading a subreddit written by humans. Link.

Pregnant women operate at the limit of human energy endurance, just slightly ahead of elite ultramarathoners. The limiting factor is not the heart, lungs, or muscles, but the number of calories your digestive system can process — about 4,000 calories per day. Link.

E-sports are huge, mostly in Asia, but worldwide. This illuminating video explains the financial landscape of e-sports.

The perennial question of why ancient China, which came up with most of the world’s important inventions centuries before the West did, never invented the most powerful invention of all, the scientific method, gets a summary answer here.

“The new American religion of UFOs. Belief in aliens is like faith in religion — and may come to replace it.” I believe this. This is the headline of a Vox article. Link.

To my ear these AI-generated voices of famous thinkers are a completely convincing simulation. You can make Bill Gates, or Jane Goodall, or Stephen Wolfram say anything you want. Go to the Select Speakers section and pull down a pundit’s voice sample. Link.

Awareness of Chinese science fiction is beginning to rise in the West, and this tide is swelling in China as well. At the forefront is the author of The Three Body Problem. Two articles delve into the new wave. A New Yorker profile of Liu Cixin is gracefully done and incredibly revealing about Chinese society. The second is an Economist roundup of other Chinese sci-fi just behind Mr. Liu.




Arrival of the Babel Fish


In the very near future, maybe in ten years, we’ll have earbuds that will do real-time language translation. Someone speaks Greek to you, and with the slightest delay, you’ll hear English. You respond in English, and they’ll hear Greek. It’ll work for most spoken languages, any language to any other. You might recognize this as the Babel fish in Douglas Adams’ fiction, but this one will be real. We are not far from it today. I’ve been using Google Translate on my phone when traveling in China. I can speak or write English through it, and listen to or read Chinese from it. It’s about 80-90% accurate, which is good enough to speak with taxi drivers or navigate as a tourist. I have also been using a couple of different AI transcription services, such as Trint, to create text transcripts from podcasts. They listen to the podcast audio file and put the words into text with about 95% accuracy. They do this in minutes and for a few dollars.
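
To make the plumbing concrete, here is a minimal sketch of the listen-translate-speak loop such an earbud would run. The three service functions (transcribe, translate, synthesize) are hypothetical placeholders, not real APIs; they stand in for whatever speech-recognition, machine-translation, and speech-synthesis models you plug in.

```python
# Toy sketch of the Babel fish loop: speech in one language goes in,
# speech in another language comes out. The three service calls below
# are placeholders, not real APIs.

from dataclasses import dataclass


@dataclass
class AudioClip:
    samples: bytes          # raw PCM audio
    sample_rate: int = 16000


def transcribe(clip: AudioClip, lang: str) -> str:
    """Placeholder: speech-to-text in the speaker's language."""
    raise NotImplementedError("plug in a speech-recognition service")


def translate(text: str, src: str, dst: str) -> str:
    """Placeholder: machine translation between two languages."""
    raise NotImplementedError("plug in a translation service")


def synthesize(text: str, lang: str) -> AudioClip:
    """Placeholder: text-to-speech in the listener's language."""
    raise NotImplementedError("plug in a speech-synthesis service")


def babel_fish(incoming: AudioClip, speaker_lang: str, listener_lang: str) -> AudioClip:
    """One conversational turn: Greek in, English out (or any X to X)."""
    heard = transcribe(incoming, lang=speaker_lang)
    translated = translate(heard, src=speaker_lang, dst=listener_lang)
    return synthesize(translated, lang=listener_lang)
```

In a real earbud the audio would arrive as a rolling buffer processed in small chunks, so the "slightest delay" stays slight; the pipeline itself is just these three stages chained together.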

When even more accurate machine translation becomes available in ever more handy forms — like earbuds, or embedded into smart glasses — I can imagine huge economic changes arising from this technology. The first thing it will do is enable people around the world who have very desirable skills, except the skill of English, to participate in the global economy. This Babel fish would permit a talented programmer in Jakarta who spoke no English to work for Google. It would allow a talented programmer in Utah to work for a Chinese company, in Chinese. Nor does the translation have to happen online. Two employees in the same room could each be wearing the Babel fish. Of course it is immensely effective combined with virtual telepresence. When a colleague is teleporting in from a remote place to appear virtually, it is relatively easy to translate what they are saying in real time because all that information is being captured anyway. For even greater verisimilitude, their mouth movements can be reconfigured to match what they are saying in translation, so it really feels like they are speaking your language. It might even be used to overcome heavy accents in the same language. Going further, the same technology could simply translate your voice into one that was a different gender, or more musical, or improved in some way. It would be your “best” voice. Some relationships might prefer to meet this way all the time because the ease of communication is greater than in real life.

This unleashing and liquidity of talent would be a huge boost to the global economy and would help in leveling some of the inequality between wages around the world.

There would be other effects: films, music, videos, and books would not need to be laboriously and expensively translated beforehand, or to reach some level of popularity before getting dubbed. Now with the Babel fish they would be instantly subtitled, dubbed, translated in real time, on demand. Over time, even regional differences (American vs Australian) could be accounted for. This universal translation-on-demand (UTOD) immediately increases the potential audience size for creative works, increasing the probability that works serving obscure interests can find the thousand true fans around the world they’ll need to be sustainable.

I can also imagine this UTOD technology aiding migration and human mobility. When the global population plunges later this century, mega-cities around the world will begin to compete for workers and citizens; removing the added hurdle of having to speak a new language will make it much easier to migrate. Many might move to Tokyo if they could virtually speak Japanese fluently.

UTOD might diminish the dominance of English as a second language. Why bother with it? On the other hand it is very possible that having simultaneous translation whispered into your ear all day for years would, over time, with the right attention, act as a teacher and help a person learn another language. Or the program could be modified to accelerate such learning if someone desired.

Today I can use Google Translate for free, just like other Google products. Ideally there would be a free version of the Babel fish so that those for whom it would make the most difference would have full access to it. But we know free has its own costs. There will be pressure to insert advertising into UTOD. One could imagine how annoying it would be to be conversing with someone when every now and then you are both interrupted with an ad that you each hear in your own language. Worse, the ad could be related to what you were talking about, since the machine would “know” exactly what you are talking about in order to translate it. Other biz models would not interrupt you in conversation, but would try to exploit that very specific data in other modes or parts of your life. The poor and desperate are likely to take that bargain, but their data is less valuable (being poor and desperate). Alternatively, there would be a paid (no ads, no tracking) version.

UTOD, encased in a wearable like a Babel fish, is almost here. If adopted widely, its consequences would be enormous and, I think, sudden. Even though it has been gradually improving, it might come as a huge “overnight” surprise to the world.



Dumbsmart


We need a better word than smart. Or dumb. I’m trying to come up with the word that we’ll use to describe the artificial intelligences that fuel our self-driving cars or enliven digital assistants. These agents will be incredibly smart and incredibly dumb at the same time. They will be able to solve a Rubik’s cube in a blink, but will be unable to tie a shoelace; they will recognize your face instantly, but never get that you wanted to hide from someone; they will crack the lock on a safe in a few seconds, but never be able to find the safe hidden in a room; they will beat you in chess, but always lose any other game your kids make up.

We’ll find this dumb-smartness infuriating. It will drive us crazy. How can it beat me here but be so dumb? There will be comedy sketches about this failure, whole movies based on this paradoxical combination of ultra brilliance and utter stupidity. We have some experience with this state in certain humans, who in the past were called idiot savants. I find that term degrading for humans. But there is a germ of truth in it for machines. They will be idiot-geniuses. Maybe we call them genidiots.

These everyday AIs will be brimming with dumbsmarts. They will be so dumbsmart they can actually be smart enough to know they are stupid! Or stupid enough to not know they are smart. Both at once.

It should be a short word because we’re going to use it in anger a lot. Sad to say, I predict the word will also be used about humans when they act like a machine this way. It will definitely become an insult. Perhaps languages other than English already have a word that means Dumbsmart. If so, post it in the comments.



Ingenic


Ingenic: Content created in the same medium that it is consumed in. As an example, if one uses VR tools within VR to create a VR world, that content is ingenic. That is, the world has been generated within the framework of its consumption. If one creates a VR world using standard PCs and 2D tools outside of VR, then that content is non-ingenic, or exgenic. Most of the VR content made today is constructed using tools on screens that are not 3D. It’s made with pens on a flat plane, or with images displayed on flat screens. The 3D nature of the constructed world has to be guessed at, approximated by moving and swirling the world.

Most of the VR content in the future will be constructed by makers inside of VR. The working interface to their tools will have volume, thickness, and spatial arrangements. The app Tiltbrush is a good example of an ingenic tool. To create with Tiltbrush, you enter VR and “paint” in three dimensions. You basically paint a sculpture, or sculpt a painting.

The old classical 2D interfaces of menus and windows aren’t adequate in VR. The new UIs will be volumetric and spatial. As one example, the two industry-standard tools for creating 3D worlds and models, the game engines Unity and Unreal, are most commonly used in desktop mode — that is, 2D. Their menus and palettes are definitely exgenic to the VR worlds they can make. Recently Unity and Unreal began offering ingenic versions of their editors, whereby developers can employ the engine within VR itself to create VR content. The user must don headgear, enter VR, and create inside this spatial world. However, these ingenic 3D editors carry over the old 2D metaphors of menus and palettes, so they are not yet ideal ingenic tools. Future versions of VR tools will have interfaces optimized for ingenic creation by inventing new organizing metaphors beyond windows and menus.

In a loose sense you could say that web-based tools (like, say, Google Docs) are ingenic for web-based content, whereas the classic Microsoft Word in desktop mode is exgenic. And for whatever new worlds come after the spatial world of 3D, the first tools for them will likely be exgenic 3D tools, and only later fully ingenic.

Ingenic means “the genesis happens within.”  Thank you to my son Tywen Kelly, who came up with this term.



Recent Readings, 7


This is a helpful summary of 5 lessons from history. Link.

With remarkable accuracy this AI neural net from MIT can guess what a person looks like based on a short clip of their voice. It’s interesting research, but not clear what the use case is. Link.

No need to freak out about rare earth elements. They aren’t rare. It’s just a matter of money. Link.

From a relationship tip by Julie Rice, during an interview by Tim Ferriss, suggested by a couples therapist, in a magazine by Oprah: how to listen actively and flood with praise. Link.

As a man who’s had a beard most of my adult life, I am tickled by the general shift toward hairiness in formerly clean shaven occupations like the military and sports. The trend toward our “hairy century” is explored here.



Recent Readings, 6


Sunsets on Mars are blue. Link.

I too laugh. “Once we build a generally intelligent system, basically we will ask [OpenAI] to figure out a way to make an investment return for you.” When the crowd erupted with laughter, Sam Altman himself offered that it sounds like an episode of “Silicon Valley.” Link.

I’ve long written about extensions to classical evolution theory, or revolutions in evolution. The biggest challenge is to describe how the natural variations that selection acts on are created. There is a good new name for this view, the Extended Evolutionary Synthesis (EES), and a well-written introductory article about it.

“The killer app for having the internet in your pocket was, well, having the internet in your pocket.”

I stopped eating mammals 15 years ago. I find the Impossible Burger’s vegetarian meat to be delicious, with that old burger taste that I do miss. I did not know it was also a GMO food, which I think is great. A good article on the company behind the clean meat. Link.

Human composting. I’m all for it once I’m dead. Now legal in Washington state. Link.

“Why do so many Egyptian statues have broken noses?” Long answer in this article. Link.

Academic paper on the difficult proposition of introducing AI into the “kill chain,” meaning giving machines the ability to decide to kill, or in other words, weaponizing AI. No answers, just a framework to discuss it. Link.



Recent Readings, 5


This is a fun online comic (from Google) which explains the idea of “federated learning” — part of a solution for privacy. It’s a way to share the advantages of aggregated data without aggregating the data, only aggregating the results. Cool.
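
For the curious, here is a minimal sketch of the simplest flavor of that idea, assuming nothing beyond numpy: each client fits a tiny model on its own private data, only the fitted parameters leave the device, and the server averages those parameters rather than pooling the data. The data and the one-line model are purely illustrative.

```python
# Minimal sketch of federated averaging: raw data stays on each client;
# only locally computed model parameters are shared and averaged.
# The data and model here are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)


def local_fit(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Each client fits y ~ a*x + b on its own private data."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params  # only this 2-vector ever leaves the device


# Three clients, each holding private samples from the same underlying line.
clients = []
for _ in range(3):
    x = rng.uniform(0, 10, size=50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)
    clients.append((x, y))

# Server step: average the parameters, never touch the data.
local_params = [local_fit(x, y) for x, y in clients]
global_params = np.mean(local_params, axis=0)

print("per-client fits:  ", [p.round(2) for p in local_params])
print("federated average:", global_params.round(2))  # close to [2.0, 1.0]
```

Real federated learning iterates this over many rounds with gradient updates and adds privacy protections on top, but the core move is the same: aggregate the results, not the data.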

This is true for me: Walking is the key to productivity. Link.

Backing up civilization on Earth by making copies of important texts and sending them into space, or depositing them on the Moon. It seems trivial, but if done iteratively, might be useful. Here is the account of putting the current “back up” on the Moon. What gets backed up is a question worth asking, and working on. Link.

This long book review of 3 books about vaccines and the culture around them helped me understand the current unsettling avoidance of vaccines by some.

Introducing risk-aversion (sometimes called fear) into AIs who are learning how to drive can accelerate their learning. Link.

The number of births in the US is the lowest in 32 years, and the US fertility rate is near a historic low of 1.763. Link.

“Cardboard is the gateway drug to making,” says Adam Savage. It’s versatile and free. Well, here is some massive art-making with tons of cardboard. Link.

The always insightful AI expert Rodney Brooks, inventor of the Roomba, says general artificial intelligence is probably a century away, and autonomous cars at least 30-50 years away. Link.

Most flying car prototypes to date have used multiple horizontal spinning blades — like a larger drone. Lilium is a new design, with wings, but without the other control surfaces of a plane. A novel 5-person air taxi. Link.



Progress and the Randomized Time Machine


Here is a thought experiment. I give you a ride in a time machine. It has only one lever. You can choose to go forward in time, or backwards. All trips are one-way. Whenever you arrive, you arrive as a newborn baby.  Where you land is random, and so are your parents. You might be born rich or poor, male or female, dark or light, healthy or sick, wanted or unwanted.

Your only choice is whether to be thrust forward in time, spending your new life in some random future in some random place, or thrust into the past, in some random time and random place. I have not met anyone yet who would point the lever to the past. (If you would, leave a comment explaining why.) Even if we constrain the time machine to jump mere decades away, everyone points it to the future. For while we can certainly select certain places and certain eras in the past that seem attractive, their attractiveness disappears if we arrive as a servant, a slave, a member of an outcast ethnicity, or even as a farmer during a drought or during never-ending raiding and wars.

The only argument I’ve heard for choosing the past is that the downsides are known; you have a randomized chance of being a slave, or the fourth wife, or a Roman miner, while the downsides of some future date are unknown and could possibly be worse. Perhaps there is no civilization at all in 500 years, and you therefore arrive in a toxic wasteland, or all humans are enslaved to robots. In this calculus the known horror is preferred to unknown horrors. The likelihood of self-eradication seems to some people, at this point in time, to increase the further out in history we might go. Five thousand years in the future may be as unappealing a destination to some as five thousand years in the past.

But since this is random placement, there is still a higher chance you’d get a bearable life in the future, even if you were at the bottom of that society, than you would get in a random draw from the past. If we have any sense of what the past was really like, we intuitively know that today is much better than the past. This difference will probably (though not certainly) hold for a future date as well; it is highly likely no one born in 2070 would want to be born in 2020.
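
To make the odds concrete, here is a toy Monte Carlo version of the thought experiment. Every probability in it is invented purely for illustration, not an estimate: the past gets a known, mostly grim lottery, and the future gets a better lottery plus a tail risk of catastrophe, matching the known-horror versus unknown-horror framing above.

```python
# Toy Monte Carlo of the randomized time machine. All probabilities are
# invented for illustration only; the point is the structure of the bet,
# not the specific numbers.

import random

random.seed(1)

# Chance that a random birth lands you a "bearable" life (illustrative).
PAST_BEARABLE = 0.15          # known, mostly grim lottery
FUTURE_BEARABLE_IF_OK = 0.70  # better lottery, if civilization holds
FUTURE_CATASTROPHE = 0.20     # chance the future is a wasteland


def one_life(lever: str) -> bool:
    """Simulate one random birth; return True if the life is bearable."""
    if lever == "past":
        return random.random() < PAST_BEARABLE
    # Future: first roll for catastrophe, then the ordinary lottery.
    if random.random() < FUTURE_CATASTROPHE:
        return False
    return random.random() < FUTURE_BEARABLE_IF_OK


def bearable_rate(lever: str, trials: int = 100_000) -> float:
    return sum(one_life(lever) for _ in range(trials)) / trials


print("past  :", round(bearable_rate("past"), 3))    # about 0.15
print("future:", round(bearable_rate("future"), 3))  # about 0.56
```

Under these made-up numbers the future lever wins even with a one-in-five chance of catastrophe; the argument only flips if you believe the future’s tail risk is far larger than its average improvement.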

The denial of progress is directly linked to ignorance of the past. There are romantic notions of the past that are not based on evidence; some of these lovely visions of the past are not untrue; it’s just that they are select, rare, privileged slivers that disregard the actual state of most humans for most times in most places, which any serious inquiry into global history will reveal. Today there is still a huge discrepancy between the well-off of the world and the bulk of humanity in most places. But the point of the time machine thought experiment is that virtually everyone would rather be at the bottom today than at the bottom 200+ years ago. Indeed, those most eager to point their time machine ride into the future are those who have the least today, which is to say most of humanity. We would rather inhabit a random future role than a random past role because progress (on average) is real.



Recent Readings, 4


Second thoughts on sunscreen toxicity. Link.

The verification paradox: the best perceiving machines (AI) may not be verifiable by humans; but the most “trustworthy” algos may not be the best. Good intro: Link.

This is very true: “Science Fiction Doesn’t Have to Be Dystopian.” Title of a long book review in the New Yorker by Joyce Carol Oates of new Ted Chiang short stories. Link.

I don’t worry about much, but I do worry about cyberwar because 1) we have no consensus on what is permitted, 2) it is hard to track, and 3) it can spill into the real world. Link.

Writers are more and more relying on paid newsletters for income. They find 1,000 true fans and get them to pay for their writing. Good intro to this emerging genre. Link.

Breaking up Facebook (into what?) will not remedy the issues of overuse, fake news, extremist views, or privacy. Breakup is a romantic fantasy. Study the long history of anti-trust and AT&T, beginning in 1913. For that I am reading “The Fall of Telecom” by Thomas Lauria.

This account of a tech demo that failed is a pretty good update on the business aspect of the coming Mirrorworld. Link.

Big, long-term, many-decade projects are out of fashion now. One exception is China’s Belt and Road Initiative. Here is a site that tracks its progress. Link.

With a contrary view, this article argues that China’s Belt and Road Initiative is One Big Mistake. It sees the initiative as merely atmospheric handwaving, a pretty name that lets regional states keep their state-owned industries and construction complexes going by building infrastructure further from home. Link.