Making the Inevitable Obvious
[I originally wrote this in response to Jaron Lanier’s worry post on Edge.]
Why I don’t fear super intelligence.
It is wise to think through the implications of new technology. I understand the good intentions of Jaron Lanier and others who have raised an alarm about AI. But I think their method of considering the challenges of AI relies too much on fear, and is not based on the evidence we have so far. I propose a counterview with four parts:
1. AI is not improving exponentially.
2. We’ll reprogram the AIs if we are not satisfied with their performance.
3. Reprogramming themselves, on their own, is the least likely of many scenarios.
4. Rather than hype fear, this is a great opportunity.
I expand each point below.
1. AI is not improving exponentially.
In researching my recent article on the benefits of commercial AI, I was surprised to find out AI was not following Moore's Law. I specifically asked AI researchers if the performance of AI was improving exponentially. They could point to an exponential growth in the inputs to AI: the number of processors, cycles, data learning sets, etc. were in many cases increasing exponentially. But there was no exponential increase in the output intelligence, in part because there is no metric for intelligence. We have benchmarks for particular kinds of learning and smartness, such as speech recognition, and those are converging on an asymptote of zero error. But we have no ruler to measure the continuum of intelligence. We don’t even have an operational definition of intelligence. There is simply no evidence showing a metric of intelligence that is doubling every X.
The fact that AI is improving steadily, but not exponentially is important because it gives us time (decades) for the following.
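A toy model makes the distinction in point 1 concrete. The numbers below are invented purely for illustration, not real benchmark data: even if the inputs to AI (processors, cycles, data) double every generation, a benchmark error rate that sheds only a fixed fraction of its remaining error per doubling converges on an asymptote instead of improving exponentially.

```python
# Illustrative toy model (all numbers are made up): exponential growth
# in inputs need not produce exponential growth in measured output.

def error_after(doublings, start_error=0.30, improvement=0.5):
    """Benchmark error rate after `doublings` doublings of input,
    assuming each doubling removes a fixed fraction of remaining error."""
    return start_error * (1 - improvement) ** doublings

inputs = [2 ** n for n in range(6)]          # 1x, 2x, 4x, ... 32x input
errors = [error_after(n) for n in range(6)]  # 0.30, 0.15, 0.075, ...
```

Input grows 32-fold while error merely approaches (but never reaches) zero, which is what "converging on an asymptote" looks like in miniature.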
2. We’ll reprogram the AIs if we are not satisfied with their performance.
While it is not following Moore’s Law, AI is becoming more useful, faster. So the utility of AI may be increasing exponentially, if we could measure that. In the past century the utility of electricity exploded as more uses triggered yet more devices to use it, yet the quality of electricity didn’t grow exponentially. As the usefulness of AI increases very fast, it brings fear of disruption. Recently, that fear has been fanned by people familiar with the technology. The main thing they seem to be afraid of is that AI is taking over decisions once made by humans: diagnosing x-rays, driving cars, aiming missiles. These can be life-and-death decisions. As far as I can tell from the little documented by those afraid, their grand fear – the threat of extinction – is that AI will take over more and more decisions and then decide they don’t want humans, or in some way the AIs will derail civilization.
This is an engineering problem. So far as I can tell, AIs have not yet made a decision that their human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve of. Of course machines will make “mistakes,” even big mistakes – but so do humans. We keep correcting them. There will be tons of scrutiny on the actions of AI, so the world is watching. However, we don’t have universal consensus on what we find appropriate, so that is where most of the friction about them will come from. As we decide, our AIs will decide.
3. Reprogramming themselves, on their own, is the least likely of many scenarios.
The great fear pumped up by some, though, is that as AIs gain our confidence in making decisions, they will somehow prevent us from altering their decisions. The fear is they lock us out. They go rogue. It is very difficult to imagine how this happens. It seems highly improbable that human engineers would program an AI so that it could not be altered in any way. That is possible, but so impractical. That hobble does not even serve a bad actor. The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI. It would also be an incredible achievement for a gang of human engineers to create a system that could not be hacked. Still, it may be possible at some distant time, but it is only one of many possibilities. An AI could just as likely decide on its own to let anyone change it, in open source mode. Or it could decide that it wanted to merge with human will power. Why not? In the only example we have of an introspective self-aware intelligence (hominids), we have found that evolution seems to have designed our minds to not be easily self-reprogrammable. Except for a few yogis, you can’t go in and change your core mental code easily. There seems to be an evolutionary disadvantage to being able to easily muck with your basic operating system, and it is possible that AIs may need the same self-protection. We don’t know. But the possibility that they, on their own, decide to lock out their partners (and doctors) is just one of many possibilities, and not necessarily the most probable one.
4. Rather than hype fear, this is a great opportunity.
Since AIs (embodied at times in robots) are assuming many of the tasks that humans do, we have much to teach them. For without this teaching and guidance, they would be scary, even with minimal levels of smartness. But motivation based on fear is unproductive. When people act out of fear, they do stupid things. A much better way to cast the need for teaching AIs ethics, morality, equity, common sense, judgment and wisdom is to see this as an opportunity.
AI gives us the opportunity to elevate and sharpen our own ethics and morality and ambition. We smugly believe humans – all humans – have superior behavior to machines, but human ethics are sloppy, slippery, inconsistent, and often suspect. When we drive down the road, we don’t have any better solution to the dilemma of who to hit (child or group of adults) than a robo car does – even though we think we do. If we aim to shoot someone in war, our criteria are inconsistent and vague. The clear ethical programming AIs need to follow will force us to bear down and be much clearer about why we believe what we think we believe. Under what conditions do we want to be relativistic? In what specific contexts do we want the law to be contextual? Human morality is a mess of conundrums that could benefit from scrutiny, less superstition, and more evidence-based thinking. We’ll quickly find that trying to train AIs to be more humanistic will challenge us to be more humanistic. In the way that children can better their parents, the challenge of rearing AIs is an opportunity – not a horror. We should welcome it. I wish those with a loud following would also welcome it.
The myth of AI?
Finally, I am not worried about Jaron’s main peeve about the semantic warp caused by AI because culturally (rather than technically) we have defined “real” AI as that intelligence which we cannot produce today with machines, so anything we produce with machines today cannot be AI, and therefore AI in its most narrow sense will always be coming tomorrow. Since tomorrow is always about to arrive, no matter what the machines do today, we won’t bestow the blessing of calling it AI. Society calls any smartness by machines machine learning, or machine intelligence, or some other name. In this cultural sense, even when everyone is using it all day every day, AI will remain a myth.
Some people call VR “the last medium” because any subsequent medium can be invented inside of VR, using software alone. Looking back, the movie and TV screens we use today will be seen as an intermediate step between the invention of electricity and the invention of VR. Kids will think it’s funny that their ancestors used to stare at glowing rectangles hoping to suspend disbelief. — Chris Dixon, Virtual Reality, cdixon Blog, January 24, 2015
I’ve developed over time a simple rule. I will only hire someone to work directly for me if I would work for that person. And it’s a pretty good test. — Mark Zuckerberg, Mobile World Congress Q&A, Barcelona, March 4, 2015
[T]o never confront the possibility of getting lost is to live in a state of perpetual dislocation. If you never have to worry about not knowing where you are, then you never have to know where you are. — Nick Carr, The Glass Cage; Maps, mind and memory, January 27, 2015.
Saying that you’re aiming for x% of a $ybn industry is unambitious — great companies change the y, not the x. — Benedict Evans, “Ways to think about market size,” February 28, 2015
Premium branded phones are the culmination of decades of research in wireless technology, computing, materials, and design. Shitphones are the culmination of decades of research in wireless technology, computing, materials, and design — minus a year or two. — John Herrman, Shitphone: A Love Story, Medium, March 9, 2015.
While individuals get our empathy and sympathy, institutions seldom do. The “we’re in this together” spirit of [science fiction] films from the 1930s, 1940s and 1950s later gave way to a reflex shared by left and right, that villainy is associated with organization. Even when they aren’t portrayed as evil, bureaucrats are stupid and public officials short-sighted. Only the clever bravado of a solitary hero (or at most a small team) will make a difference in resolving the grand crisis at hand. – David Brin “Our Favorite Cliche: A world filled with idiots.” 2013.
You cannot get people this smart to work this hard just for money. — Bono, The Shape of Things to Come, New Yorker, February 23, 2015
Meta-design is much more difficult than design; it’s easier to draw something than to explain how to draw it. — Donald Knuth, The Metafont Book, 1986.
When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole. We shall be able to communicate with one another instantly, irrespective of distance. Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket. — Nikola Tesla, When Woman is Boss, Colliers, January 30, 1926
Is this the future of the user interface?
This year, 2014, John Brockman’s annual question was “What Do You Think About Machines That Think?”. My answer is that I think we could call them artificial aliens. I’m reposting my full response here:
The most important thing about making machines that can think is that they will think different.
Because of a quirk in our evolutionary history, we are cruising as the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose” because compared to other kinds of minds we have met it can solve more kinds of problems, but as we build more and more synthetic minds we’ll come to realize that human thinking is not general at all. It is only one species of thinking.
The kind of thinking done by the emerging AIs in 2014 is not like human thinking. While they can accomplish tasks—such as playing chess, driving a car, describing the contents of a photograph—that we once believed only humans could do, they don’t do it in a human-like fashion. Facebook has the ability to ramp up an AI that can start with a photo of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this ability very un-human. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, in order that they don’t think like us. One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.
In a pervasively connected world, thinking different is the source of innovation and wealth. Just being smart is not enough. Commercial incentives will make industrial strength AI ubiquitous, embedding cheap smartness into all that we make. But a bigger payoff will come when we start inventing new kinds of intelligences, and entirely new ways of thinking. We don’t know what the full taxonomy of intelligence is right now.
Some traits of human thinking will be common (as common as bilateral symmetry, segmentation, and tubular guts are in biology), but the possibility space of viable minds will likely contain traits far outside what we have evolved. It is not necessary that this type of thinking be faster than humans, greater, or deeper. In some cases it will be simpler. Our most important machines are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.
To really solve the current grand mysteries of quantum gravity, dark energy, and dark matter we’ll probably need other intelligences beside humans. And the extremely complex questions that will come after them may require even more distant and complex intelligences. Indeed, we may need to invent intermediate intelligences that can help us design yet more rarified intelligences that we could not design alone.
Today, many scientific discoveries require hundreds of human minds to solve, but in the near future there may be classes of problems so deep that they require hundreds of different species of minds to solve. This will take us to a cultural edge because it won’t be easy to accept the answers from an alien intelligence. We already see that in our unease in approving mathematical proofs done by computer; dealing with alien intelligences will require a new skill, and yet another broadening of ourselves.
AI could just as well stand for Alien Intelligence. We have no certainty we’ll contact extra-terrestrial beings from one of the billion earth-like planets in the sky in the next 200 years, but we have almost 100% certainty that we’ll manufacture an alien intelligence by then. When we face these synthetic aliens, we’ll encounter the same benefits and challenges that we expect from contact with ET. They will force us to re-evaluate our roles, our beliefs, our goals, our identity. What are humans for? I believe our first answer will be: humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different—to create alien intelligences. Call them artificial aliens.
Change has never happened this fast before, and it will never be this slow again. — Graeme Wood, Social Principal #9, Geek Media, Sept 29, 2009
Even the primeval Stone Age islanders of the Sentinelese, who still persist in 2015 and shoot everybody who tries to talk to them with cane bows, are under satellite surveillance. The Indian Navy rigorously protects them from any knowledge of the Indian Navy. — Bruce Sterling, State of the World 2015, January 5, 2015
Never assume that something you find utterly creepy today will not be the norm tomorrow. — Jan Chipchase, Four Deep Trends, Fast Company, November 14, 2011
Singularity University is a kind of seminary in Silicon Valley where the metaphysical conviction that machines are, or soon will be, essentially superior to human beings is nourished among those involved in profiting from that eventuality. — Nathan Schneider, Something for Everyone, Verge, January 6, 2015
“Many people seem to think that if you talk about something recent, you’re in favor of it,” McLuhan explained during an uncharacteristically candid interview in 1966. “The exact opposite is true in my case. Anything I talk about is almost certain to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.” — Quoted by Nick Carr, Rough Type, October 18, 2014
In virtual reality, nausea is the body’s dysphoric response to the uncanny, presence is the euphoric one. — Virginia Heffernan, Virtual Reality Fails Its Way to Success, New York Times Magazine, November 14, 2014
The narrative has changed. It has switched from, ‘Isn’t it terrible that artificial intelligence is a failure?’ to ‘Isn’t it terrible that A.I. is a success?’ — Peter Norvig, New York Times, Innovators of Intelligence Look to Past, December 15, 2014.
Though the nature of future discoveries is hard to predict, I’ve found I can predict quite well what sort of people will make them. Good new ideas come from earnest, energetic, independent-minded people….Surround yourself with the sort of people new ideas come from. If you want to notice quickly when your beliefs become obsolete, you can’t do better than to be friends with the people whose discoveries will make them so. — Paul Graham, How to Be an Expert in a Changing World, December, 2014
If the PC epoch was about being omnipotent – computers can do everything, better! – and the Internet epoch about being omniscient – with Google, you can know everything – mobile is about being omnipresent. — Ben Thompson, Stratechery, The state of consumer technology, December 16, 2014.
[A]dding up corpses and comparing the tallies across different times and places can seem callous, as if it minimized the tragedy of the victims in less violent decades and regions. But a quantitative mindset is in fact the morally enlightened one. It treats every human life as having equal value, rather than privileging the people who are closest to us or most photogenic. — Steven Pinker, Why the world is not falling apart, in Slate, December 22, 2014.
My hunch is that The Blockchain will be to banking, law and accountancy as The Internet was to media, commerce and advertising. It will lower costs, disintermediate many layers of business and reduce friction. As we know, one person’s friction is another person’s revenue. — Joi Ito, “Why Bitcoin is and isn’t like the internet.” Jan 18, 2015
About a year ago I started writing a piece on AI for Wired. I turned it in last spring, and they just published it this month. They also cut it in half. Still, the piece retains my essential points about AI:
1) We should really call it Artificial Smartness, because we don’t want it conscious.
2) It will be a cloud service; you’ll buy as much IQ as you need on demand.
3) There will only be 2-3 major AI providers since AI will follow network effects.
I also talk about the 3 breakthroughs that make AI finally happen now.
You can read more at Wired.
The decorative images Wired used to heavily illustrate the article are meaningless — I’m guessing they are supposed to be Brain Power as in Flower Power, but I don’t really know.
The coming hundred years, in one hundred words
Recently I sent a twitter request out into the wider internets. I got 23 responses, which I am running (with permission) below. I’ll tell you who I selected as the winner in a moment, but first I’d like to tell you what I learned.
It’s a hard assignment. Compressing anything as messy as the future into 100 words is a near-impossible challenge. Almost like writing poetry. And 100 years is so immensely distant from us that we need to fictionalize it. But the most difficult part is imagining a scenario that is desirable.
This exercise began with my dissatisfaction with the visions of our future today available in movies and science fiction. For the most part they are dystopian. Can you name a Hollywood future you’d like to live in? I couldn’t. OK, maybe I could be talked into boarding the Starship Enterprise, but what about a future on this home planet, where we will all live for the next century? Minority Report? Elysium? Battlestar Galactica? These are repulsive futures you hope never materialize. They may contain one or two cool innovations we’d like, but the total culture of these future worlds is broken, scary, one-sided, and wholly unappealing. Even if we are the lucky 1%.
I am not asking for utopia. In fact, a world where everything worked perfectly, with no side effects, is its own kind of hell. I am a protopian. I believe in progress, an incremental betterment with corresponding downsides each year, inching toward a world that is desirable despite its many flaws. A protopian future would generate plenty of unexpected ills and unjust distributions, but overall the greater net benefits would draw us to it.
It might be that such a pragmatic protopia is so boring and square that it can’t inspire us beforehand. Just as we no longer marvel at the miraculous abilities we have today (cross a continent in 5 hours while watching movies, ask a stone in our pocket a question and have it answer) because each of these magics has arrived in small increments. We are no longer enthralled by simple betterment.
It also may be that there is a vacuum of desirable futures next century because none are possible. We can’t imagine a working technological future, because none work. We are just screwed. Hollywood is correct. The future means we go backwards, or blow each other up, or escape to our hideouts.
Yes, an inescapable dystopian future is entirely possible, but not inevitable. However, a trajectory towards dystopia will be hastened and aided by our lack of an imagined alternative to doom. Without a vision of a desirable future, it is unlikely we can head toward it.
On the chance that desirable futures ARE possible, we need to imagine them.
Thus, my quest for a desirable future scenario. The scientists and technologists who have been motivated by science fiction in the past are legion. Poke anyone today working on a disruptive technology and they’ll tell you of a forecast by a science fiction story or movie that inspired them. After hours, many speculation-averse scientists will admit they got started in their field by trying to make some sci-fi dream come true, such as the Star Trek tricorder, or an anti-gravity beam. In fact, the full influence of science fiction scenarios upon science proper is woefully unacknowledged in the official accounts, and underappreciated by the culture at large. The stories we tell about the future greatly affect our future.
At the moment we have no shared positive vision of tomorrow. We are unable to imagine it. I will be quick to add: that includes me. I too have difficulty in describing an exciting future for all of society in 100 years that seems plausible given what is happening today. I can imagine singular threads of the future rolling out positive — massive, continuous, cheap, real time connection between all humans, or total genetic control over crop plants, or synthetic solar fusion energy — but it is hard to see how all these threads weave into the other threads of climate change, population decrease, habitat loss, human attention overload, robot replacement, and accelerating AI.
I wanted some help. Maybe my future blindness was a lack of my own imagination. So I posted my request to the wisdom of the cloud, and quickly got back some revealing alternatives. I know none of the contributors, so I consider this a random sample of my tribe.
Upon inspection, the 23 submitted scenarios share some common dreams. The most recurring hope/expectation is of a new energy source. Instead of fossil fuels, they expect in 100 years we’ll rely on solar and fusion, which will be cheap and clean. Second is the deepening merger of the digital and physical into a holistic internet of everything. The third most common vision is the rise of artificial intelligence and artificially intelligent robots, who transform our economy into one of plenitude and creative work/play. A minor fourth thread is the spread of education in new modes, with universal reach around the globe, and lifelong.
That’s a good start. I certainly desire these. Abstractly the four trends are consistent and cohesive. Yet the specifics matter, as do the corresponding ill effects. But, hey, I only gave them 100 words! That tiny cell can only hold a few headlines, so I have to applaud each of the contributors for their attempt at this haiku. My choice for the most plausible vision of a future I desire goes to John Hanacek’s scenario. I think I’d like to live there, and I think it is plausible in 100 years. My $100 goes to him.
The purpose of this future fantasy challenge was to assist me in visualizing a cohesive, sensible future that I wanted to work towards. The submissions helped. After the 23 scenarios, I append a 100-word future haiku that I wrote, inspired by the pattern of their common hopes.
A New Energy Source
Blockchain-based technologies and structures accomplish what most major institutions did. Solar power runs everything, as it is 100X cheaper than alternatives. As energy is inexpensive food is grown in symbiotic aquaponic multi-story indoor “farms”, conserving water, the most precious resource. CO2 sequestration also becomes a fuel source, albeit subsidized. We buy self-driving car service subscriptions. Nicotine and sugar are Schedule I and II narcotics. Much as empathy has served humans’ ability to collaborate and socialize, so will it be in the silicon species as they venture out into deep space to connect with their own kind. — Leonard Kish
Clean streets, cheap healthy eats, remembered wisdom on what humanity is, fused into city planning, food production and manufacturing. Polar shield arrays soak excess UV, beating weirding, concealing polar bear lairs to save something our soul needs. Hybrid solar-hydrogen motors make us free and clean. Solid circuit relay probes take the web to deep space, making nerves for this place. All countries with common purpose born from ultimate recognition that prisoner dilemma decisions on planet earth is a disease we can’t afford — our planet is in rehab at last. When the sun rises each day, we know we’re okay. — Chris McCann
A century hence I imagine civilization not to have added metal upon metal; heaping plastic and gnarled brambles of wrought steel wrapping the earth to form a solid mass of techno-pathocracy, instead to have evolved, prodded along by its new stewards, given birth, grown and green and basking in eternal sunlight. A techno-primitivism where mankind lives in harmony with its surroundings, a new eden, a cornucopia, a garden earth. Our ancient foes flora and fauna kept now as a memento of our past. Not to conquer nature with asphalt but the barefooted first steps of post-scarcity. A feast for the touch. A miraculous biology. — Sean Moriva
2030: The last of the unsustainable energy and fiscal policy edifices crumbles just as embedded intelligence emerges. We’ve got the wind in our sails. Billions of people rapidly move from wage slaves to participating in a decentralized, sustainable, opt-in economy which affords them the time to innovate and crowdsource a tsunami of solutions. 2060: Biodiversity blossoms. Consciousness comes under direct control. You can physically live on Mars, Antarctica, New Atlantis or in the asteroid belt. Many chose life in distributed mind servers and live centuries in a week. 2090: Boredom unthinkable. Conscious population: 10^20. Biome restored. 2114: Begin Second Earth. — Luke Cockerham
The future will be blessed by abundant free/cheap water and free/cheap energy. Water through the work of Dr Gerald Pollack (UW) and energy due to Dr Dan Nocera (Harvard). Dr Pollack’s re-discovery of the 4th phase of water (he calls it the Exclusion Zone, EZ, for lack of a better term) will permit the commercialization of a filterless water filter based on this effect. The EZ is powered by infrared energy. Why don’t we see this on sale today? It’s settled science; now it’s a matter of getting it to scale. Dr Nocera has been working to perfect an artificial leaf. His leaf, when immersed in water and illuminated, breaks the water down to hydrogen and oxygen. Today this leaf is 7 times more efficient than a natural one. Why don’t we see it on sale today? Again, it’s a matter of getting it to scale. — Chuck Petras
Solar and fusion have eliminated energy from most practical considerations. Due to automation, only 20% of the population is employed, mainly in creative jobs. World GDP has grown exponentially, making it practical for governments to provide a comfortable life without the need for work. Large projects are restoring ecological damage. Africa and the Middle East are rapidly developing to the standard of the rest of the world. Education has been reformed to help people to achieve life satisfaction and enjoy learning. Breakthroughs in the nature of motivation have enabled AI with an abundant life for all as a primary goal. — Douglas Summers-Stay
Rise of Artificial Intelligence and Artificially Intelligent Robots
Physical and virtual realities are meshed together with no distinction. Ideas are given sovereignty with their creators rewarded fairly and directly. The world itself does the drudgery of assembling itself across all sectors that information science has been applied, which is limited only by the quantum information underpinnings of the universe. Humans have taken up their primary purpose of creativity and now work with other intelligences of any kind to ask questions and achieve answers, with an eye toward more questions. “Human” has taken on flourishing new meanings. Imagination has been unleashed upon the world in a literal sense. — John Hanacek
I worried I’d never be as well-off as my parents. I never expected this. We call it “the Euphoric Age”. It’s over-the-top, but it’s a good description of what happens when you trade human judgment for algorithmic optimization. Took a while for systems to tune themselves. I panicked when my doctor got replaced by an app. Money quickly got tight. There was always enough to eat, though. The air got cleaner; the Internet and (Amazon) PackageNet got even faster. We’ve stopped looking for things to do. And started looking for ways to live our lives. Together. — Andy Hickl
You will sleep in a sort of bathtub for taking care of your skin. The bathtub will be enclosed in an atmosphere enriched with substances to take care of your organs. You will never have to take a bath again. Your clothes will be made from a special polymer and you choose from more than 1,000 looks, and the fabrics will be molded to the look you choose. You will eat all food you like. There will be special lanes for those who prefer to drive, but 80% choose self-driven cars. People will work 4 hours/week. No Police and no Politics. — Augusto Camargo
Immortality had shifted the focus from short-term thinking to long-term goals. A new era of responsibility had dawned. Body modifications and rejuvenation were only a virus away (new exotic options were available on the free market), and many people changed appearance weekly, to keep up with the latest trends. This invalidation of the past trends of judging by gender and race meant we distinguished entities by expertise and experience only. Since robots harvested the food we needed and built our houses in self-chosen tribal groups with independently chosen government structures, humans were free to imagine and create utopian worlds with more art and research than ever before. — Jean Rintoul
The basic needs of all people will be met, because having everything we need (especially without working for it) is the fastest way to realize that we need to work, serve, and create in order to feel fulfilled. All drugs will be legal, reducing crime, and taxes that fund recovery groups will be built into their retail prices. Technology will make life decisions more reversible, allowing people to take more risks. Your early 20s won’t be considered your last opportunity to go to college. Algorithms will analyze statements made by public figures, pointing out fallacies as an impartial third party. — Michael Elias @harmonylion1
2114 AD. Post-scarcity is reality; all wants, all needs are met with zero marginal cost. Aging is optional and trivially repaired. A superhumanly complex network of AIs, robots, and automated systems manage all stellar resources, transportation, food & energy supply, and explore the interstellar frontier. Nations have passed and splintered into a network of megacities. Repair of the environment and human depredations to a pre-industrial state is nearing completion. Humans have splintered into a spectrum of beings measured by merger with technology, from none to total. The individual is free to explore physical and virtual realities, experiences, and relationships across many lives. — Mark Bruce
Food is the same, but not genetically engineered. Air travel becomes extremely expensive. Companies make money from information asymmetry and selling secrets. Consumers pay for preserving their experience and sharing life data securely and privately, and pay for gadgets that enable more sensory processing power (i.e., to be a superhuman). A startup incubator becomes the #1 university in the world. Drivers need to enable self-piloting on the highway. A smart gadget company owns 50% of the world’s data traffic. Without face-to-face contact or voice, it is hard to tell if someone interacting with you is a person or a robot. — Jackie Lee
A guaranteed income brings prosperity to jobless China in the aftermath of the robotic manufacturing revolution. Hundreds of millions pour out of cities where they no longer need to work, and return to smaller villages which are quickly recovering from the brutal pollution of the early 2000s. Previously quixotic living arrangements like houseboats, remote intentional communities and nomadic vehicles explode in popularity as virtual reality matures and the number of people doing physical labor drops precipitously. Art flourishes and IP restrictions mostly disappear in the face of ubiquitous micromanufacturing. Extinction is off the table. Mars and the Jovian moons beckon. — Eric Meltzer
A Holistic Internet of Everything
The technological advancements in data-rich information networks have reached such a height that self-replicating and self-arranging nano-bits have become infused into all matter. The once-inert atoms that made up glass, steel, wood, concrete and plastics are now richly infused with information technology. Everything human has been understood at such a deep level that these information-rich materials can respond in real time to all human thoughts, emotions, and actions. It starts with a single room morphing into a space with the most ideal lighting, materials, and form as it responds to its inhabitants. Over time entire cities have the ability to transform their entire urban fabric as a democratic response to their populations. — Sean Fright
When we have the “internet of things” and ubiquitous sensors, here’s one small use that would warm my heart: anti-vandalism. Consider graffiti: First of all, spray paint cans won’t operate on a surface if you don’t have the owner’s permission. If some young punk somehow manages to start to tag some graffiti, his identity is captured, and he hears, by name, that he is being fined. On second offense, not only is the fine multiplied, but a swarm of paint drones tag swatches of his hair, his body, his clothes, his bag, and his ride. Etc. — Rodney Hoffman
I want to live in a future in which governments cannot hide the actions of corrupt officials as easily, because the very technologies they use to eavesdrop upon us can be used against them as well. A future in which computers allow us to make informed legal decisions without being at the mercy of an expensive attorney. A future in which injustice and corruption are broadcast to the public, and those who wish to commit them are more afraid of us than we are of them. A future in which schools cannot fudge their numbers in order to mask that they are committing a horrible disservice to the future of our world. A future in which transparency of government spending allows us to quantify the actual costs of medical care, so that those who are exploiting the system can be eradicated. A future in which there is a clear understanding of personal vs. public information, with multiple technologies acting as independent safeguards against infringement. — Dallas John Slieker
I usually sleep with my implant on. It lets my dreams mingle with those of my friends, diffusing anxiety, heightening creativity. I wake up naturally, full of energy, excited to start my day. My implant automatically quiets for my morning toilet. I cook breakfast the old fashioned way. Boil an egg, squeeze fresh juice. The bread I made yesterday still has a wonderfully crunchy crust. I open up my implant, listening for what my friends are creating, what they need help with, and adding a few aspirations of my own. Then I pick up my tools and we all get to work. — Steve Hoefer
If human civilization ended right now, our entry in the ‘Galactic Encyclopedia’ would read: “Terrestrial bipedal omnivores. Created vast cities and virtual worlds rich with information. Although impressive, their existence was marred by an overwhelming failure to understand themselves.” If we can successfully aim the scientific process at how humans work, and why we do what we do, then the next 100 years will be totally unlike the last 100. With the answers to these questions, we will build technologies that push levels of fulfillment beyond anything we can currently imagine. For the first time, our technological innovations would be a reflection of our fundamental wants and needs rather than some hopeful striving in what we think is the right direction. — Oliver Carefull @smollie1
Imagine a future of distributed networks with preset standards. Where important parts of infrastructure are locally maintained. Power, water, sewer, data, transport. Everything available to a community by a combination of worldwide resource markets and local manufacturing. Every town’s things are a little bit different, because the look and feel were organized locally, and yet the same because everyone used the same base resources. Distributed manufacturing, local power making transportable power, local food, swift delivery of goods. Clever people online to offer aid. Open engineering. Open communities. Mesh networks. Only the most basic units are standardised. A “lego” economy. — Laston Kirkland
Education in New Modes
The survivors of climate change, heartbroken by the massive die-off, are the gene pool for the next iteration of homo. In adapting to a hostile environment, the latent inclinations to compassion and generosity become heritable traits. Systems, culture, commerce, and government have the explicit purpose of providing well-being for all. Knowledge sharing is revered as the most celebrated human propensity. This results in a self-aware global cerebral cortex; humanity functioning as neurons, networks as nervous system. Scientists learn to encode human knowledge on quantum fluctuations that can survive the heat death of the universe, although for whom remains a puzzle. — Alan Chamberlain
For what is desirable to me may not be desirable to you. This “difference”, in all its forms, is a theme for my proposed desirable, technological future. McLuhan suggests our connected, technological tomorrow won’t be one of tranquility and uniformity. Extending this idea, the tomorrow I foresee is one of a greater awareness of difference, through education, provided via technology. This deeper, more fundamental understanding of “things” won’t prevent wars, or stop all conflicts. I’ll close with the following quote by Aaron Griffin: “Relying on complex tools to manage and build your system is going to hurt the end users. […] If you try to hide the complexity of the system, you’ll end up with a more complex system.” — Andrew Stace
Technologically enabled worldwide mass education could lead to rationality replacing superstition, rejection of sectarianism and nationalism, thus shrinking population through volition, not war, famine, pestilence. Fossil fuel use would be reduced by lowered demand and replaced by renewable resources, mandated by a world treaty to freeze military budgets and redirect them to renewable energy development, removing the immediate threat of climate catastrophe. Local sourcing of organically produced foods could further reduce transport burdens and increase basic human health, reducing health care costs. These steps plus redistribution of wealth would provide full employment, less pollution and would save the environment. — Allan Rubin
2121: Population 4 billion; 85% urban. Cities boom, empty suburbs struggle. Agricultural acreage reduced with GMOs. Nature monitored quantitatively; green lands expand with genetic engineering. Solar, fusion, mini nukes generate cheap power. Climate change adapted to. Creative middle class the new majority, globally mobile. Computer pilots make international travel common. Eco and heritage tourism primary income for poorest. Robots take over remaining blue-collar jobs in Asia and Africa. Internet of everything physical continues. Universal library, and universal lifelong education for free. All humans always on the net anywhere. Brain interfaces, wearables. Co-veillant tracking ubiquitous. Quantified self for personalized medicine. Techno-literacy (managing) skills mandatory. — Kevin Kelly
Digital bits have lives. They work for us, but we totally ignore them. What do bits really want? Here are the life stories of four different bits.
The first bit — let’s call it Bit A — was born on the sensor of a Canon 5D Mark II camera. A ray of light glancing off the black plastic handle of a baby stroller in New York City enters the glass lens of the camera and is focused onto a small sheet the size of a large postage stamp. This dull rainbow-colored surface is divided up into 21 million rectangular dimples. The light photons from the white highlight of the stroller handle pass through a mosaic of red, green and blue filters in the camera, and collect in the micro-well of red pixel #6,724,573. When the photographer trips the shutter button, red pixel #6,724,573 counts the number of photons it has collected, compares it to its green and blue neighbors, and calculates the color it has captured. Pixel #6,724,573 generates 15 new bits, including our Bit A, which helps indicate the pixel is pure white. Immediately Bit A is sent along a wire to the camera’s chip where it is processed along with 300 million sibling bits, all born at the same moment. Bit A is copied several times as the camera swaps the siblings around from one part of its circuit to another in order to rearrange the bits into what we call a picture, which the camera displays on a screen. In another few milliseconds a copy of Bit A is duplicated on a memory card. Now there are two Bit As, but within a moment the original is erased as another image is captured on the sensor. An hour later, Bit A is duplicated from the memory card into the CPU of the photographer’s laptop. A half second later, half of the sibling bits are simply erased as the computer compresses the image into a JPEG file. Luckily Bit A, of pure white, remains in the set. Another copy of it is made on the laptop’s hard disk and another copy is made as the software Photoshop is opened.
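The color calculation described above — a red-filtered pixel comparing its photon count to its green and blue neighbors — can be sketched in a few lines. This is only a toy demosaic: the neighborhood layout, the simple averaging, and the 0–255 value range are illustrative assumptions, not Canon’s actual imaging pipeline.

```python
# Toy demosaic at a red-filtered site: estimate a full RGB color by
# averaging the green and blue Bayer neighbors of one red pixel.
# Photon counts are assumed normalized to 0-255; layout is illustrative.

def demosaic_red_site(red, greens, blues):
    """red: count at the red pixel; greens/blues: neighboring counts."""
    g = sum(greens) / len(greens)
    b = sum(blues) / len(blues)
    return (red, round(g), round(b))

# A bright highlight: all channels near maximum reads as near-white.
print(demosaic_red_site(250, [251, 249, 250, 250], [248, 252]))
# (250, 250, 250)
```

A real camera interpolates over a larger neighborhood and applies white balance and gamma curves, but the core move is the same: each filtered site borrows its missing channels from nearby pixels.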
When the photographer retouches a speck in the image, millions of pixel bits are constantly being reshuffled, copied, erased, and effectively moved as Photoshop creates new bits and erases existing ones. Through all this shuffling the tiny white glare on the stroller handle remains untouched and Bit A persists. The photographer is a veteran, and Bit A is copied again by another CPU and backed up on another hard disk. Bit A now has many identical cousins. The photographer uploads Bit A together with its million sibling bits to the internet. Bit A is copied, deleted, and recopied by 9 intermediate servers along the way to a website. There Bit A is copied onto more local hard disks, one of which serves the bit to anyone clicking on a web thumbnail image. When people do click, Bit A is copied to their computer’s CPU and displayed on their screen as a speck of white. When humans see the full image, millions of them copy it to disks, and send yet more copies to their friends. Within days, Bit A has been copied several hundred million times. There are now half a billion copies of Bit A contained as a tiny detail of the first paparazzi photo of Kim Kardashian taking her newborn baby girl out on a stroll. Bit A will likely remain in circulation for many decades, being copied forward onto new media as old media die, active on at least one CPU in the world, ready to be linked. It will live for centuries. For a bit, this is success.
Bit B has a different story. Bit B is born inside the EDR (event data recorder) chip mounted beneath the dashboard of the photographer’s Toyota Camry. Every automobile manufactured since 2012 contains an EDR, which serves as the car’s black box, recording 15 different metrics such as the car’s speed, steering, braking, seat belt use and engine performance. Originally designed to be plugged into a service mechanic’s on-board diagnostic computer to determine whether the airbags were working, the data it generates while the car is running can also be summoned by insurance companies and lawyers as evidence in an accident. In this case Bit B makes up part of the digit “7” in a time stamp that says that on Tuesday, July 8, 2014, our Camry was going 57 miles per hour. The EDR holds the last 5 seconds of information; after that it overwrites the existing bits with new information. The Camry was accident-free and didn’t need maintenance, so Bit B was copied once and stored. Increasingly, it is cheaper to store data than to figure out whether it should be erased, so almost no data is erased deliberately. But many bits disappear when their medium rots or is tossed into the garbage. Most bits die of inactivity. Bit B will spend decades untouched, unlit, before it is lost forever.
The third bit is of a different type. Bit C was not generated in the environment. It was not born in a camera, or on a keyboard, or in the swipe of a phone, or in a wearable sensor, or by a thermometer, traffic pad, or any other kind of input device. Bit C was born from other bits. Bit C is the type of bit created by a software program in response to Bit A or Bit B. Think of the internal bookkeeping your computer does as it keeps track of everything a program does. The photographer using Photoshop can “undo” a change to a color (or you can undo a deletion in your Word document) because the computer keeps a log, and that log is new bits about the bits. Our Bit C is generated by the telephone company’s servers as they upload the photographer’s image files. It is the third digit in the log of the memory allotment for that upload. Bit C is copied to a telecom hard disk, and this meta-data (data about data, or bits about bits) will be retained by the telecom long after the actual content has vanished. Beyond meta-data there is meta-meta-data: information about meta-data. The meta chain can cascade up indefinitely, and the amount of meta-data in the world is increasing at a faster rate than the primary data. For a bit, to be born meta is a huge thing, because meta-data is more likely to be exercised, duplicated, shared and linked. Bit C will be copied and recopied, so that eventually hundreds of copies of Bit C live on.
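The bits-about-bits idea can be made concrete with a toy sketch. The field names below are invented for illustration; a real telecom log looks nothing like a Python dict, but the nesting is the point — each layer describes the one beneath it.

```python
# Hypothetical chain of data about data. All field names are invented.
photo = {"kind": "primary", "payload_bits": 300_000_000}     # the image itself
upload_log = {"kind": "meta", "describes": "photo",
              "bytes_sent": 12_582_912}                      # data about the upload
log_index = {"kind": "meta-meta", "describes": "upload_log",
             "entries": 1}                                   # data about the log

chain = [photo, upload_log, log_index]
print([d["kind"] for d in chain])  # ['primary', 'meta', 'meta-meta']
```

Note how each layer is far smaller than the one it describes, yet (as the paragraph argues) far more likely to be queried, copied, and linked.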
However, nothing is as exciting for a bit as to become part of a software program. In code, a bit graduates from being a static number to being an active agent. When you are a bit that is part of a program, you act upon other bits. If you are really lucky you might be part of code so essential that it is maintained as a core function and preserved in the digital universe over many generations. Most sophisticated programs are dead and gone in 5 years, but some primeval code endures, like the code that governs internet protocols, or the basic sorting algorithms in your PC’s operating system. The story of Bit D, our fourth bit, revolves around the small string of code that produces ASCII — the letters and numbers we see on a screen. This has not changed for many decades. Bit D lives as part of the code that generates the English letter “e”. It is invoked nearly every hour by me, and billions of times per second around the world. It might be among the most commonly reproduced bits in the digital universe. There are probably zillions of Bit Ds in the digital universe today. And 100 years from now, there is likely to still be ASCII and the letter “e”, and a bazillion more Bit Ds. For a bit, this is immortality.
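The encoding behind Bit D is easy to inspect. Here is a quick look (standard library only) at the eight bits that make up the ASCII letter “e” — Bit D would be one of the bits in this pattern, stamped out anew every time an “e” is rendered:

```python
# ASCII assigns the letter "e" the code point 101 (hex 0x65).
code = ord("e")
bits = format(code, "08b")  # eight bits, zero-padded

print(code)  # 101
print(bits)  # 01100101
```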
The best destiny for a bit is to be deeply related to other bits, to be copied and shared. The worst life for a bit is to remain naked and alone. A bit uncopied, unshared, unlinked with other bits will be a short-lived bit. If an unshared bit lives long, its future will be parked in a dark eternal vault. What bits really want is to be clothed with other related bits, replicated widely, and maybe elevated to become a meta-bit, or an action bit in a piece of durable code.
Bits want to move.
Bits want to be linked to other bits. They need other bits.
Bits want real time.
Bits want to be duplicated, replicated, copied.
Bits want to be meta.
Of course this is pure anthropomorphization. Bits don’t have wills. But they do have tendencies. Bits that are related to other bits will tend to be copied more often. Just as selfish genes tend to replicate, bits do too. And just as genes “want” to code for bodies that help them replicate, selfish bits also want systems that help them replicate and spread. All things being equal, bits want to reproduce, move and be shared. If you rely on bits for anything, this is good to know.
Can you imagine how awesome it would have been to be an entrepreneur in 1985 when almost any dot com name you wanted was available? All words; short ones, cool ones. All you had to do was ask for the one you wanted. It didn’t even cost anything to claim. This grand opportunity was true for years. In 1994 a Wired writer noticed that mcdonalds.com was still unclaimed, so with our encouragement he registered it, and then tried to give it to McDonald’s, but their cluelessness about the internet was so hilarious it became a Wired story. Shortly before that I noticed that abc.com was not claimed, so when I gave a consulting presentation to the top-floor ABC executives about the future of digital I told them that they should get their smartest geek down in the basement to register their own domain name. They didn’t.
The internet was a wide open frontier then. It was easy to be the first in category X. Consumers had few expectations, and the barriers were extremely low. Start a search engine! An online store! Serve up amateur videos! Of course, that was then. Looking back now it seems as if waves of settlers have since bulldozed and developed every possible venue, leaving only the most difficult and gnarly specks for today’s newcomers. Thirty years later the internet feels saturated, bloated, overstuffed with apps, platforms, devices, and more than enough content to demand our attention for the next million years. Even if you could manage to squeeze in another tiny innovation, who would notice it?
Yet if we consider what we have gained online in the last 30 years, this abundance smells almost miraculous. We got: Instant connection with our friends and family anywhere, a customizable stream of news whenever we want it, zoomable 3D maps of most cities of the world, an encyclopedia we can query with spoken words, movies we can watch on a flat slab in our pocket, a virtual everything store that will deliver next day — to name only six out of thousands that could be mentioned.
But, but…here is the thing. In terms of the internet, nothing has happened yet. The internet is still at the beginning of its beginning. If we could climb into a time machine and journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2044 were not invented until after 2014. People in the future will look at their holodecks, and wearable virtual reality contact lenses, and downloadable avatars, and AI interfaces, and say, oh, you didn’t really have the internet (or whatever they’ll call it) back then.
And they’d be right. Because from our perspective now, the greatest online things of the first half of this century are all before us. All these miraculous inventions are waiting for that crazy, no-one-told-me-it-was-impossible visionary to start grabbing the low-hanging fruit — the equivalent of the dot com names of 1985.
Because here is the other thing the greybeards in 2044 will tell you: Can you imagine how awesome it would have been to be an entrepreneur in 2014? It was a wide-open frontier! You could pick almost any category X and add some AI to it, put it on the cloud. Few devices had more than one or two sensors in them, unlike the hundreds now. Expectations and barriers were low. It was easy to be the first. And then they would sigh, “Oh, if only we realized how possible everything was back then!”
So, the truth: Right now, today, in 2014 is the best time to start something on the internet. There has never been a better time in the whole history of the world to invent something. There has never been a better time with more opportunities, more openings, lower barriers, higher benefit/risk ratios, better returns, greater upside, than now. Right now, this minute. This is the time that folks in the future will look back at and say, “Oh to have been alive and well back then!”
The last 30 years have created a marvelous starting point, a solid platform to build truly great things. However the coolest stuff has not been invented yet — although this new greatness will not be more of the same-same that exists today. It will not be merely “better”; it will be different, beyond, and other. But you knew that.
What you may not have realized is that today truly is a wide open frontier. It is the best time EVER in human history to begin.
You are not late.
The general trend in the technium is a long-term migration away from selling products to selling services. Jeff Bezos has long said the Kindle is not a product but a service selling access to reading material. That distinction will be made even more visible very shortly when Amazon introduces an “all you can read” subscription to its library of ebooks. Readers will no longer have to purchase individual books, but will have the option to subscribe to all books (600,000 to begin with), as you do with movies on Netflix. As a paying subscriber you get access to any book in print (eventually). Amazon books is a service, not a product. Verb, not noun.
Test page for Amazon’s Kindle Unlimited book subscription service
In this migration the ultimate vehicle for selling a service is not a store (which is for selling products) but a platform. A platform allows you to sell services which you did not create, just as a store allows you to sell products you did not create. If you are trying to sell services and you don’t have a platform, then you have to make them all yourself, and it won’t scale.
Jeff Bezos has turned Amazon into a platform that sells services that others provide. Apple, Microsoft, Google and Facebook all also see themselves as platforms. All these giants employ third-party vendors to make use of their platform. All employ APIs extensively. Sometimes platforms are called ecosystems, because in true ecological fashion, supporting vendors who cooperate in one dimension may also compete in others. For instance, Amazon sells brand-new books from publishers, and it also sells, via an ecosystem of used-book stores, cheaper used versions. Used-book vendors compete with each other and with the publishers. The platform’s job is to make sure the parts make money (add value) whether they cooperate or compete. Which Amazon does well.
In the network economy platforms trump products. For the consumers, this translates into: access trumps ownership. Products induce ownership. But “owning” a service doesn’t quite make sense conceptually, or practically. So if companies aren’t really selling products and are instead selling services, then what customers need is access. And increasingly they prefer access over ownership. (See my Better Than Owning)
People have traditionally subscribed to services that entailed a never-ending stream of updates, improvements, and versions, which forced a deep interaction and a constant relationship between producer and consumer. To ease that relationship, a customer committed to a provider (phone carrier, cable company) and was promised uninterrupted quality. The first standalone product to be “servicized” was software. This mode is called SAS, software as service. As an example of SAS, Adobe no longer sells its software as discrete products with dated versions. Instead you subscribe to Photoshop, InDesign, Premiere, etc., or the entire suite of services. You sign up and your computer will run the latest, best versions as long as you pay the monthly subscription. This new model entails a re-orientation by the customer, who may be used to thinking of software as a product he or she owns.
TV, phones and software were just the beginning. The major move in the upcoming decades will be XAS — X as service, where X is anything, and maybe everything. You don’t buy specific products; instead you get access to whatever benefits you need or want. Take TAS, Transportation as Service. You would not own a car. To get from point A to point B, you would use a robot car that picks you up at your home and takes you to the high-speed rail station, which takes you to your general destination area and lets you out at the subway, which you take to meet another robot car for the final few miles. You pay a monthly fee for this access to the transportation platform, run by a private/public consortium. Other possible XAS:
Food as Service
Health as Service
Clothes as Service
Shelter as Service
Entertainment as Service
Vacation as Service
School as Service
Hotel as Service (AirBnB)
Tools as Service (Techshop)
Fitness as Service
Toys as Service
And so on. Yes, even physical things can be delivered as if they were digital.
Many years ago the San Francisco Chronicle published a short column in which the writer mentioned that he had been traveling in India, and when he told the clerk at his hotel in New Delhi that he was from the San Francisco Bay Area the clerk responded, “Oh, that is the center of the universe.” Um, mumbled the traveller, and why do you say that? “Because the center of the universe is wherever there is the least resistance to new ideas.”
I have not been able to come up with a better description of San Francisco’s special relation to futurism. In my experience this is true: more new ideas per person bubble up in the Bay Area than anywhere else on Earth — at this moment.
But why? The best explanation I’ve heard is from the best historian of California, J. S. Holliday, who argues that it began in the gold rush days, when hundreds of thousands of young men came stampeding into the Bay Area to seek their fortunes. It was the gold.com era. There was no adult supervision. No one to tell you no. You just headed into the hills with your wits and came back either rich or poor. And if you came back poor, you sold shovels and jeans to the next wave of dreamers, and got rich in a novel way. The Bay Area collected these young free spirits and retained them. As Holliday points out, nowhere else in the world was gold territory left to individuals rather than the state. In part this was a matter of the great distance from Washington, which made control impossible.
Others argue the same distance from Washington and the establishment of the East Coast is what caused Stanford professors to turn to entrepreneurial investment instead of grant money and corporate buyouts, and to go into business themselves. That spirit of self-funding would avalanche into the start-up culture that now infuses the place. Mistakes are not only tolerated, unlike in Old Places, but these days even embraced as the best teacher. Bay Area VCs are more likely to give you money if you’ve already made a few disasters yourself.
John Markoff, the venerable New York Times tech reporter who also grew up in Silicon Valley, wrote an under-appreciated book called What the Dormouse Said, tracing the hippie origins of the current digital industry. Not just Steve Jobs, but many of the earliest personal computer pioneers were acid-dropping dreamers who were trying to augment human potential rather than create a new industry. They were the most recent incarnation of free thinkers that began with the 49ers, then the beatniks, later the hippies, and now the hipsters. It is not hard to see the connection between free love and communes on the one hand, and open-source software and Wikipedia on the other. That’s why I agree with the urban sociologist Richard Florida’s notion that bohemians = innovation = wealth, and that any city or region that wants to encourage innovative wealth creation has to encourage bohemians. That’s what San Francisco has inadvertently done by the acre.
All this looseness leads to the “least resistance to new ideas” and the role of being the pivot of the world. I know this directly. Wired magazine could not have started anywhere else except in the Bay Area. When Conde Nast bought Wired they were wise enough to let it stay in SF, the only magazine they own not operated in NYC.
While the Bay Area is currently the center of the future, I sometimes have the feeling the center will slowly drift to Shanghai and other parts of China. In many ways the future is no longer so fashionable in the US. It is harder and harder to imagine a future — either via Hollywood, or business scenarios — that anyone wants to live in. All the futures are broken. Even the most techy and utopian futures are suspect and not believed. We’ve been burned too many times and know that all those inventions will bite us back. China does not have that problem and the acceleration of their desires into the future is palpable.
Of course China is still learning how to embrace its inner bohemian, and so I suspect the Bay Area will remain the center of the universe for at least a few more decades. It is one of those auto-catalytic things that feeds off itself. The more success it gains, the more newcomers with talent and ambition it attracts. In this way, success exhibits network effects, which makes it difficult to reproduce a “silicon valley” elsewhere. There will probably be only one “center of the universe” per universe at a time. (But there will be more universes!)
But this auto-catalyzing process needs to be managed. Success kills it. This is the curse of the bohemian way: how do you maintain the loose reins, the cheap rents, the no-rules opportunities, when you are also creating one thousand under-30 millionaires every year? The one sure thing limiting the success of the least resistant place in the world is its success. Eventually no one can afford to make mistakes anymore, and then the center moves. You get to remain in the future by keeping loose, letting the young drive, staying hungry and foolish, ignoring success, embracing new mistakes, and having the least resistance to new ideas.