The Technium

Was Moore’s Law Inevitable?



In the early 1950s the same thought occurred to many people at once: things are improving so fast and so regularly that there might be a pattern to the improvements. Maybe we could plot technological progress to date, then extrapolate the curves and see what the future holds. Among the first to do this systematically was the US Air Force. They needed a long-term schedule of what kinds of planes they should be funding, but aerospace was one of the fastest-moving frontiers in technology. Obviously they would build the fastest planes possible, but since it took decades to design, approve, and then deliver a new type of plane, the generals thought it prudent to glimpse what futuristic technologies they should be funding.


So in 1953 the Air Force Office of Scientific Research plotted out the history of the fastest air vehicles. The Wright Brothers’ first flight reached 6.8 kph in 1903, and the record jumped to 60 kph two years later. The air speed record kept increasing a bit each year, and in 1947 the fastest flight passed 1,000 kph in a Lockheed Shooting Star flown by Colonel Albert Boyd. The record was broken four times in 1953, ending with the F-100 Super Sabre doing 1,215 kph. Things were moving fast. And everything was pointed towards space. According to Damien Broderick, the author of “The Spike,” in 1953 the Air Force…

Charted the curves and metacurves of speed. It told them something preposterous. They could not believe their eyes. The curve said they could have machines that attained orbital speed… within four years. And they could get their payload right out of Earth’s immediate gravity well just a little later. They could have satellites almost at once, the curve insinuated, and if they wished — if they wanted to spend the money, and do the research and the engineering — they could go to the Moon quite soon after that.

It is important to remember that in 1953 none of the technology for these futuristic journeys existed. No one knew how to go that fast and survive. Even the most optimistic die-hard visionaries did not expect a lunar landing any sooner than the proverbial “Year 2000.” The only voice telling them they could do it was a curve on a piece of paper. But the curve was right. Just not politically correct. In 1957 the USSR launched Sputnik, right on schedule. Then US rockets zipped to the Moon 12 years later. As Broderick notes, humans arrived on the Moon “close to a third of a century sooner than loony space travel buffs like Arthur C. Clarke had expected it to occur.”

What did the curve know that Arthur C. Clarke did not? How did it account for the secretive efforts of the Russians as well as dozens of teams around the world? Was the curve a self-fulfilling prophecy, or a revelation of a trend rooted deep in the nature of the technium? The answer may lie in the many other trends plotted since then. The most famous of them all is the trend known as Moore’s Law. In brief, Moore’s Law predicts that computing chips shrink by half in size and cost every 18 to 24 months. For the past 50 years it has been astoundingly correct.

This trend was first noticed in 1960 by Doug Engelbart, a researcher at SRI in Menlo Park, California, who would later go on to invent the “windows and mouse” interface that is now ubiquitous on most computers. Engelbart began his engineering career in the aerospace industry, testing airplane models in wind tunnels, where he learned how systematic scaling down led to all kinds of benefits and unexpected consequences. The smaller the model, the easier it was to fly. Engelbart imagined how the benefits of scaling down, or as he called it “similitude,” might transfer to a new invention SRI was tracking — multiple transistors on one integrated silicon chip. Perhaps as they were made smaller, circuits too might deliver a similar kind of similitude magic: the smaller a chip, the better. Engelbart presented his ideas on similitude to an audience of engineers at the 1960 Solid State Circuits Conference that included Gordon Moore, a researcher at Fairchild Semiconductor.

In the following years Moore began tracking the actual statistics of the earliest prototype chips. By 1964 he had enough data points to extrapolate the slope of the curve thus far. As Moore recalls:

I was not alone in making projections. At a conference in New York City that same year [1964], the IEEE convened a panel of executives from leading semiconductor companies: Texas Instruments, Motorola, Fairchild, General Electric, Zenith, and Westinghouse. Several of the panelists made predictions about the semiconductor industry.

Patrick Haggerty of Texas Instruments, looking approximately ten years out, forecast that the industry would produce 750 million logic gates a year. I thought that was perceptive but a huge number, and puzzled, “Could we actually get to something like that?” Harry Knowles from Westinghouse, who was considered the wild man of the group, said, “We’re going to get 250,000 logic gates on a single wafer.” At the time, my colleagues and I at Fairchild were struggling to produce just a handful. We thought Knowles’s prediction was ridiculous. C. Lester Hogan of Motorola looked at expenses and said, “The cost of a fully processed wafer will be $10.”

When you combine these predictions, they make a forecast for the entire semiconductor industry [for 1974]. If Haggerty were on target, the industry would produce 750 million logic gates a year. Using Knowles’s “wild” figure of 250,000 logic gates per wafer meant that the industry would only use 3,000 wafers for this total output. If Hogan was correct, and the cost per processed wafer was $10, that would mean that the total manufacturing cost to produce the yearly output of the semiconductor industry would be $30,000! Somebody was wrong.

As it turned out, the person who was the “most wrong” was Haggerty, the panelist I considered the most perceptive. His prediction of the number of logic gates that would be used turned out to be a ridiculously large underestimation. On the other hand, the industry actually achieved what Knowles foresaw, while I had labeled his suggestion as the ridiculous one. Even Hogan’s forecast of $10 for a processed wafer was close to the mark, if you allow for inflation and make a cost-per-square-centimeter calculation.

The trends were telling them something no one else was, impossible as it seemed. Moore kept adding data points as the semiconductor industry grew. He was tracking all kinds of parameters — number of transistors made, cost per transistor, number of pins, logic speed, and components per wafer. But one of them was cohering into a nice curve: the number of components per chip. In 1965, at the invitation of the editor of the trade journal Electronics, Moore wrote a piece on “the future of microelectronics.” In this short article he pointed out that progress in chip fabrication was increasing exponentially, with the number of components per chip doubling every year. As Moore noted in his internal memo to the Fairchild patent officers, he took the current trend and “extrapolated into the wild blue yonder.” But in fact, how far would it really go?


Moore’s original plot

Moore hooked up with Carver Mead, a fellow Caltech alumnus. Mead was an electrical engineer and early transistor expert. In 1967 Moore asked Mead what kind of theoretical limits were in store for microelectronic miniaturization. Mead had no idea, but as he worked through the calculations he made an amazing discovery: the efficiency of a chip would increase by the cube of the scale’s reduction. The benefits from shrinking were “exponential.” Microelectronics would not only become cheaper, they would also become better. As Moore puts it, “By making things smaller, everything gets better simultaneously. There is little need for tradeoffs. The speed of our products goes up, the power consumption goes down, system reliability improves by leaps and bounds, but especially the cost of doing things drops as a result of the technology.” Carver Mead was so caught up in Moore’s curves that he began to formalize them with physics equations, and he named the trend Moore’s Law. He became an evangelist for the idea, traveling to electronics companies, the military, and academia, preaching that the future of electronics lay in ever-smaller blocks of silicon, and trying to “convince people that it really was possible to scale devices and get better performance and lower power” — and that there was no end in sight for this trend. “Every time I’d go out on the road,” Mead recalls, “I’d come to Gordon and get a new version of his plot.”
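A back-of-the-envelope version of Mead’s cube law (my simplified sketch, in the spirit of the constant-field scaling rules later formalized by Dennard, not a reconstruction of Mead’s actual derivation): shrink every linear dimension of a circuit by a factor k, and

$$\text{transistors per unit area} \propto k^{2}, \qquad \text{switching speed} \propto k, \qquad \text{computation per area} \propto k^{2}\cdot k = k^{3}.$$

Halve the feature size (k = 2) and the same patch of silicon does roughly eight times the work, which is why shrinking improves speed, cost, and power all at once.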

[Figure: price per transistor over time]

Today when we stare at the plot of Moore’s Law we can spot several striking characteristics of its 50-year run. The first surprise is that this is a picture of acceleration. The straight descending slope of the “curve” indicates a tenfold increase in goodness for every tick on the vertical log axis. Silicon computation is not simply getting better, but getting better faster. Relentless acceleration for five decades is rare in biology and was unknown in the technium before the last century. The explosion of good stuff is revealed in a line. So this graph is as much about the phenomenon of cultural acceleration as about silicon chips. In fact Moore’s Law has come to represent the principle of an accelerating future which underpins our expectations of the technium: the world of the made gets better, faster.
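The connection between a straight line on a log axis and acceleration is worth spelling out. If a quantity grows by a constant factor a each year, then

$$y(t) = y_0\,a^{t} \quad\Longrightarrow\quad \log y(t) = \log y_0 + t\,\log a,$$

so steady exponential growth plots as a straight line with slope log a. The line looks calm, but in linear terms each year adds more absolute improvement than the year before; that is why a straight line here is a picture of acceleration.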

Secondly, even a cursory glance reveals the astounding regularity of Moore’s line. From the earliest points its progress has been eerily mechanical. Without interruption for 50 years, chips have improved exponentially at the same rate, neither more nor less. The line could not be straighter if it had been engineered by a technological tyrant. Yet we are to believe that this strict, unwavering trajectory came about via the chaos of the global marketplace and uncoordinated, ruthless scientific competition. The line is so straight and unambiguous that it seems curious anyone would need convincing by Moore and Mead to “believe” in it. The question of faith is whether the force of this “law” lies within the technology itself, or in a self-fulfilling social prophecy. Is Moore’s Law inevitable, a direction pushed forward by the nature of matter and computation, independent of the society it was born into, or is it an artifact of self-organized scientific and economic ambition?

Moore and Mead themselves believe the latter. Writing in 2005, on the 40th anniversary of his law, Moore says, “Moore’s law is really about economics.” Carver Mead made it clearer yet: Moore’s Law, he says, “is really about people’s belief system, it’s not a law of physics, it’s about human belief, and when people believe in something, they’ll put energy behind it to make it come to pass.” In case that was not clear enough, he spells it out further:

After [it] happened long enough, people begin to talk about it in retrospect, and in retrospect it’s really a curve that goes through some points and so it looks like a physical law and people talk about it that way. But actually if you’re living it, which I am, then it doesn’t feel like a physical law. It’s really a thing about human activity, it’s about vision, it’s about what you’re allowed to believe. Because people are really limited by their beliefs, they limit themselves by what they allow themselves to believe is possible.

Finally, in another reference, Mead adds: “Permission to believe that [the Law] will keep going” is what keeps the Law going. Moore agrees in a 1996 article: “More than anything, once something like this gets established, it becomes more or less a self-fulfilling prophecy. The Semiconductor Industry Association puts out a technology road map, which continues this [generational improvement] every three years. Everyone in the industry recognizes that if you don’t stay on essentially that curve you will fall behind. So it sort of drives itself.”

The “technology road map” produced by the Semiconductor Industry Association in the 1990s was a major tool in cementing the role of Moore’s Law in chips and society. According to David Brock, author of Understanding Moore’s Law, the SIA road map “transformed Moore’s law from a prediction to a self-fulfilling prophecy. It spelled out what needed to be accomplished, and when.” A major factor in the semiconductor manufacturing process is the set of photoresist masks that define the thin etched conducting wires on a chip. The masks have to get smaller in order for the chip to get smaller. Elsa Reichmanis, one of the foremost photoresist experts in the industry, says, “Advances in the [process] technology today are largely driven by the Semiconductor Industry Association.” Raj Gupta, a materials scientist and CEO of Rohm and Haas, declares “They” — the SIA road map — “say what performance they need [for new electronic materials], and by which date.” Andrew Odlyzko from AT&T Bell Laboratories concurs: “Management is *not* telling a researcher, ‘You are the best we could find, here are the tools, please go off and find something that will let us leapfrog the competition.’ Instead, the attitude is, ‘Either you and your 999 colleagues double the performance of our microprocessors in the next 18 months, to keep up with the competition, or you are fired.’” Gordon Moore reiterated the importance of the SIA in a 2005 interview with Charlie Rose: “the Semiconductor Industry Association put out a roadmap for the technology for the industry that took into account these exponential growths to see what research had to be done to make sure we could stay on that curve. So it’s kind of become a self-fulfilling prophecy.”

Clearly, expectations of future progress guide current investments. The inexorable curve of Moore’s Law helps focus money and intelligence on very specific goals — keeping up with the Law. The only problem with accepting these self-constructed goals as the source of such regular progress is that other technologies which might benefit from the same belief do not show the same zooming curve. We witness steady, quantifiable progress in other solid-state technologies such as solar photovoltaic panels — which are also made of silicon. Their price per performance has been sinking for two decades, but not exponentially. Likewise the energy density of batteries has been increasing steadily for two decades, but again, nowhere near the rate of computer chips.

[Figure: battery energy density over time]

[Figure: solar photovoltaic price performance over time]

Why don’t we see a Moore’s Law type of growth in the performance of solar cells, if this is simply a matter of believing in a self-fulfilling prophecy? Surely such an acceleration would be ideal for investors and consumers. Why doesn’t everybody simply clap for Tinker Bell to live, to *really* believe, so that the hoped-for self-made fairy kicks in and solar cells double in efficiency and halve in cost every two years? That kind of consensual faith would generate billions of dollars. It would be easy to find entrepreneurs eager to genuinely believe in the prophecy. The usual argument applied against this challenge is that solar cells and batteries are governed primarily by chemical processes, which chips are not. As one expert explained the absence of exponential growth in batteries: “This is because battery technology is a prisoner of physics, the periodic table, manufacturing technology and economics.” That’s plain wrong. Manufacturing silicon integrated chips is an intensely chemical achievement, as much a prisoner of physics, the periodic table, and manufacturing as batteries are. Mead admits this: “It’s a chemical process that makes integrated circuits, through and through.” In fact the main technical innovation of Silicon Valley chip fabrication was to employ the chemical industry to make electronics instead of chemicals. Solar cells and batteries share the same chemical science as chips.

So what is the curve of Moore’s Law telling us that expert insiders don’t see? That this steady acceleration is more than an agreement. It originates within the technology. Other technologies, also grounded in solid-state materials science, exhibit a steady curve of progress, and just like Moore’s Law, their progress *is* exponential. They too seem to obey a rough law of remarkably steady exponential improvement.

Consider the recent history (the last 10 to 15 years) of the cost per performance of communication bandwidth and digital storage.

[Figure: cost per performance of communication bandwidth]

The picture of their exponential growth parallels the integrated circuit in every way except the slope — the rate at which they are speeding up. Why is the doubling period eight years in one technology and two in another? Aren’t the same financial system and the same expectations underpinning them?

Except for the slope, these graphs are so similar, in fact, that it is fair to ask whether these curves are just reflections of Moore’s Law. Telephones are heavily computerized, and storage disks are organs of computers. Since progress in the speed and cheapness of bandwidth and in storage capacity relies directly and indirectly on accelerating computing power, it may be impossible to untangle the destiny of bandwidth and storage from computer chips. Perhaps the curves of bandwidth and storage are simply derivatives of the one uber-Law? Without Moore’s Law ticking beneath them, would they even remain solvent?

[Figure: cost per base of DNA sequencing and synthesis (Rob Carlson), November 2008]

Consider another encapsulation of accelerating progress. For a decade or so biophysicist Rob Carlson has been tabulating progress in DNA sequencing and synthesis. Graphed similarly to Moore’s Law (above), in cost per base pair, this technology too displays a steady drop when plotted on a log axis. I asked Carlson how much of this gain is due to Moore’s Law. If computers did not get better, faster, and cheaper each year, would DNA sequencing and synthesis continue to accelerate? Carlson replied:

Most of the fall in costs of sequencing and synthesis has to do with parallelization, new methods, and falling costs for reagents. Moore’s law must have had some effect through cheap hardware that enables desktop CAD, but that is fairly tangential. If Moore’s Law stopped, I don’t think it would have much effect. The one area it might affect is processing the raw sequence information into something comprehensible by humans. Crunching the data of DNA is at least as expensive as getting the sequence of the physical DNA.

Larry Roberts, the principal architect of the ARPANET, the precursor of the internet, keeps detailed stats on communication improvements. He has noticed that communication technology in general also exhibits a Moore’s Law-like rise in quality. Might progress in wires also be correlated to progress in chips? Roberts says that the performance of communication technology “is strongly influenced by and very similar to Moore’s Law but not identical as might be expected.”

[Figure: IBM hard disk areal density history]

In the inner circle of the tech industry, the fast-paced drop in prices for magnetic storage is called Kryder’s Law. It’s the Moore’s Law of computer storage, named after Mark Kryder, the chief technology officer of Seagate, a major manufacturer of hard disks. Kryder’s Law says that the cost performance of hard disks improves exponentially at a steady rate of 40% per year. I asked Kryder the same question: is Kryder’s Law dependent upon Moore’s Law? If computers did not get better and cheaper every year, would storage continue to do so? Kryder responded: “There is no direct relationship between Moore’s law and Kryder’s law. The physics and fabrication processes are different for the semiconductor devices and magnetic storage. Hence, it is quite possible that semiconductor scaling could stop while scaling of disk drives continues. In fact, I believe that Flash [silicon chip-based storage such as memory cards and thumb drives] will hit a barrier well before hard drives do.” Of course, if computers were to cease getting more powerful, the need for extra storage or faster communications would also slow down. So indirect market forces entwine the Laws, but only to a secondary degree. Andrew Odlyzko, who now studies the growth of the internet at the University of Minnesota, says, “I would say Moore’s laws in their various disciplines are highly correlated and synergistic, drawing on the same pool of basic science and technology. It is hard (but not impossible) to imagine that if improvements in transistor density ceased, then photonics or wireless could progress for long.”
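A steady annual percentage gain and a doubling time are two ways of quoting the same rate, and it is handy to convert between them. A minimal sketch in Python (the 40% figure is Kryder’s; the helper function is mine):

```python
import math

def doubling_time_months(annual_improvement: float) -> float:
    """Months needed to double, given a steady fractional gain per year.

    Solves (1 + r)^t = 2 for t in years, then converts to months.
    """
    years = math.log(2) / math.log(1 + annual_improvement)
    return years * 12

# Kryder's Law: ~40% per year improvement in hard-disk cost performance
print(round(doubling_time_months(0.40), 1))  # ~24.7 months, close to Moore's pace
```

So a 40% annual gain is the same claim as a doubling roughly every two years.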

While expectations can certainly guide technological progress, consistent law-like improvement must be more than self-fulfilling prophecy. For one thing, this obedience to a curve often begins long before anyone notices there is a law, and way before anyone would be able to influence it. The exponential growth of magnetic storage began in 1956, almost a whole decade before Moore formulated his law for semiconductors and 50 years before Kryder formalized its existence. Rob Carlson says, “When I first published the DNA exponential curves, I got reviewers claiming that they were unaware of any evidence that sequencing costs were falling exponentially. In this way the trends were operative even when people disbelieved them.” Ray Kurzweil dug into the archives to show that something like Moore’s Law had its origins as far back as 1900, long before electronic computers existed, and of course long before the path could have been constructed by self-fulfillment. Kurzweil estimated the number of “calculations per second per $1,000” performed by turn-of-the-century analog machines, by mechanical calculators, and later by the first vacuum tube computers, and extended the same calculation to modern semiconductor chips. He established that this ratio has increased exponentially for the past 109 years. More importantly, the curve (let’s call it Kurzweil’s Law) transects five different technological species of computation: electromechanical, relay, vacuum tube, transistor, and integrated circuit. An unobserved constant operating in five distinct paradigms of technology for over a century must be more than an industry road map. It suggests that these ratios are baked deep into the fabric of the technium.

[Figure: Kurzweil’s plot of calculations per second per $1,000, 1900 to the present]

But in every contemporary case — DNA sequencing, magnetic storage, semiconductors, bandwidth, pixel density — once a fixed curve is revealed, scientists, investors, marketers, and journalists all grab hold of it and use it to guide experiments, investments, schedules, and publicity. The map becomes the territory. There is no doubt curves are used as tools, and that they can sway the rate of progress. Moore tells one story: “When the industry fully recognized that we were truly on a pace of a new-process-technology-generation-every-three-years, we started to shift to a new-generation-every-two-years to get a bit ahead of the competition. We changed the slope.” For three and a half decades, from 1956 to 1991, IBM set the pace of improvement in the density of hard disk drives at about 25% per year. That’s just below the rate of semiconductor progress. IBM owned the patents, and this rate allowed it to comfortably maintain high profit margins. In the early 1980s, many competitors sprang up making smaller disks with lower densities, but their densities were improving much faster than IBM’s schedule. So in 1990, IBM changed the slope. It mandated that henceforth its improvement would be 60% a year. This spurred an escalation of R&D investment, faster growth by the competitors, and more R&D by IBM, so that from the late 1990s onward the slope of growth exceeded 100% per year. The slope of progress can be changed by pouring money down it. Mark Kryder says, “My guess is that you could double the density growth rate with something less than double the R&D dollars.” The slope can also be changed by regulations. Larry Roberts offers this evidence for the effects of US telecommunications deregulation: “From before 1960 until de-regulation about 1993, the cost per terabit of communications [red line in the graph below; the blue is Moore's Law] dropped very slowly, halving every 79 months. Then, once fiber was in place, DWDM and free enterprise took the market price down fast, halving every 12 months.”

[Figure: cost per terabit of communications (red) versus Moore’s Law (blue)]

Since the rate of these explosions of innovation can be varied to some degree by applying money or laws, their trend lines cannot be fully inherent in the material itself. At the same time, since these curves begin and advance independent of our awareness, and do not waver from a straight line under enormous competition and investment pressures, their course must in some way be bound to the materials.

If you scour the technium for examples of enduring exponential progress, you’ll find most candidates within fields related to materials science. Most technical progress is neither exponential nor steady. The maximum rotational speed of an electric motor is not following an exponential curve, nor is the maximum miles-per-gallon performance of an automobile engine. Even most progress in materials science is not exponential. We are not exponentially increasing the hardness of steel. Nor are we exponentially increasing the percentage yield of, say, sulfuric acid, or petroleum distillates, from their precursors.

I gathered as many examples of current exponential progress as I could find. I was not seeking examples where the total quantity produced (watts, kilometers, bits, base pairs, traffic, etc.) was rising exponentially, since those quantities are skewed by our rising populations. More people use more stuff, even if it is not improving. Rather I looked for examples that showed performance ratios (such as pounds per inch, or illumination per dollar) steadily increasing, if not accelerating. Here is a set of quickly found examples, and the rate at which their performance is doubling. (For ratios measured as costs, a doubling of performance shows up as a halving of price.)

[Table: doubling times of various technological performance ratios, in months]
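For anyone who wants to reproduce or extend such a table, the doubling time implied by any two measurements falls out of one line of arithmetic. A minimal sketch (the figures in the usage example are invented for illustration):

```python
import math

def doubling_months(perf_start: float, perf_end: float, months_elapsed: float) -> float:
    """Implied doubling time in months, assuming smooth exponential
    improvement between two performance measurements."""
    doublings = math.log2(perf_end / perf_start)
    return months_elapsed / doublings

# Hypothetical: a performance ratio that improved 10x over five years
print(round(doubling_months(1.0, 10.0, 60), 1))  # ~18.1 months
```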

The first thing to notice is that all these examples demonstrate the effects of scaling down, of working with the small. In this microcosmic realm energy is not very important. We don’t see exponential improvement in efforts to scale up, to make ever-bigger things such as skyscrapers and space stations. Airplanes aren’t getting bigger, flying faster, and becoming more fuel efficient at an exponential rate. Gordon Moore jokes that if the technology of air travel experienced the same kind of progress as Intel chips, a modern-day commercial aircraft would cost $500, circle the earth in 20 minutes, and use only five gallons of fuel for the trip. However, the plane would only be the size of a shoebox! We don’t see a Moore’s Law type of progress at work in scaling up because energy needs scale up just as fast, and energy is a major limiting constraint, unlike information. So our entire new economy is built around technologies that scale down well — photons, electrons, bits, pixels, frequencies, and genes. As these inventions miniaturize, they reach closer to bare atoms, raw bits, and the essence of matter and information. And so the fixed and inevitable path of their progress derives from this elemental essence.


The second thing to notice about this set of examples is the narrow range of slopes, or doubling times (in months). The particular power being optimized in these technologies doubles every 8 to 30 months. Every one of them is getting twice as good every year or two, no matter the technology. What’s up with that? Engineer Mark Kryder’s explanation is that this “twice as good every two years” is an artifact of the corporate structure where most of these inventions happen. It just takes one to two years of calendar time to conceive, design, prototype, test, manufacture, and market a new improvement, and while a 5- or 10-fold increase is very difficult to achieve, almost any engineer can deliver a factor of two. Voilà! Twice as good every two years. Engineers unleashed equals Moore’s Law.

But, as mentioned earlier, we see engineers unleashed in other departments of the technium without the appearance of exponential growth. And in fact not every aspect of semiconductor extrapolation resolves into a handy “law.” Moore recalls that in a 1975 speech he forecast the expected growth of other attributes of silicon chips “just to demonstrate how ridiculous it is to extrapolate exponentials.” Extrapolating the maximum size of the silicon wafer used to grow the chips (which was increasing as fast as the number of components), he calculated it would reach nearly 2 meters (6 feet) in diameter by 2000, which just seemed unlikely. That did not happen; wafers reached 300 mm (1 foot).

[Figure: the widening gap between two exponential growth curves]

And as small as those differences in slopes are, say, between the 21-month doubling in the power of a CPU and the 16-month doubling in the density of RAM storage, the gap is significant. Curiously, the gap between two exponential curves growing at different rates is itself exponential. That means that over time, the performance of two technologies under the same financial regime, the same engineering society, the same technium, diverges at an exponential rate. This ever-widening gap must be due to an intrinsic quality of the technologies.
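The arithmetic behind that claim, using the two doubling times just cited: the ratio of a curve doubling every 16 months to one doubling every 21 months is

$$\frac{2^{t/16}}{2^{t/21}} \;=\; 2^{\,t\left(\frac{1}{16}-\frac{1}{21}\right)} \;=\; 2^{\,5t/336},$$

itself an exponential that doubles every 336/5 ≈ 67 months. Two technologies improving on those schedules drift apart by a factor of two every five and a half years or so.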

Should we ever arrive on other inhabited worlds in our galaxy, we should expect some of them to have reached the stage of microelectronics in their own technium. Once they discover the application of binary logic to microcircuits, they too will experience a version of Moore’s Law. The lessons of the microcosm will play out their inevitable course: as circuits get smaller, they get faster, more accurate, and cheaper. Their alien computers will quickly get better and more affordable at once, which in turn will propel innumerable other technological explosions, to their great delight. Whenever it begins, the steady acceleration of progress in solid-state computation should last for at least 25 doublings (what we’ve experienced so far), or a 33-million-fold increase in value. But while Moore’s Law is inevitable in its progression, its slope is not.

The slant of increase in a particular world may indeed be a matter of macroeconomics. Here Moore and Mead may be correct: the slope of the Law rests on economics. Whether computing power doubles every month or every decade will depend on many factors of that particular society: population size, volume of the economy, velocity of money, evolution of financial instruments. The constant speed of discovery might hinge in part on the total available pool of engineers, whether it is 10 thousand or 10 million. A faster planetary velocity of money may permit a faster doubling period. All these economic factors combine to produce a fixed constant for that world at that phase. If Moore’s Law turned out to be a universal fixture in the computational phase of civilization, this fixed constant might even be used as a classification marker. Hypothetical civilizations are currently classified by their energy use: the Russian astronomer Nikolai Kardashev specified that a class I civilization would leverage its home planet’s energy, a class II its star’s, and a class III its galaxy’s. In a Moore’s Slope scheme, a class I civilization would exhibit a chip-power doubling rate measured in “days,” while a class II civilization would show doublings measured in days squared, and a class III in days cubed, and so on.

In any case, at some point, on our planet or any planet, the curve will plateau. Moore’s Law will not continue forever. Any specific exponential growth will eventually smooth out into a typical S-shaped curve. This is the archetypal pattern of growth: after a slow ramp-up, gains take off straight up like a rocket, and then, after a long run, level out slowly. Back in 1830 only 37 kilometers of railroad track had been laid in the US. That count doubled roughly every four to five years for the next 60 years, reaching some 260,000 kilometers by 1890. Any reasonable railroad buff in 1890, extrapolating that curve, would have predicted hundreds of millions of kilometers of railroad a hundred years later. There would be railroad to everyone’s house. Instead there were fewer than 400,000 kilometers. However, Americans did not cease to be mobile. We merely shifted our mobility and transportation to other species of invention. We built automobile highways and airports. The miles we travel keep expanding, but the exponential growth of that particular technology peaked and plateaued.
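The S-curve the railroads traced can be sketched with the standard logistic function. A minimal sketch in Python (the parameters are invented to loosely mimic US rail mileage, not fitted to real data):

```python
import math

def s_curve(t: float, ceiling: float, midpoint: float, steepness: float) -> float:
    """Logistic growth: near-exponential at first, then leveling off
    as the value approaches its ceiling."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Early decades look like runaway exponential growth; later ones flatten out.
for year in (1830, 1850, 1870, 1890, 1910, 1930):
    print(year, round(s_curve(year, ceiling=400_000, midpoint=1885, steepness=0.1)))
```

Seen from inside the steep stretch, the curve is hard to distinguish from an exponential that will run forever; the ceiling only reveals itself later.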

Much of the churn in the technium is due to our tendency to shift what we care about. Mastering one technology engenders new technological desires. A recent example: the first digital cameras had very rough picture resolution. Then scientists began cramming more and more pixels onto one sensor to increase photo quality. Before they knew it, the number of pixels possible per array was on an exponential curve, heading into megapixel territory and beyond. But after a decade of acceleration, consumers shrugged off the increasing number of pixels; the current resolution was sufficient. Their concern shifted to the speed of the pixel sensors, or their response in low light — things no one cared about before. So a new metric is born, a new curve starts, and the exponential curve of ever-more pixels per array gradually abates.

Moore’s Law is headed for a similar fate. When, no one knows. “Moore’s Law, which has held as the benchmark for IC scaling for more than 40 years, will cease to drive semiconductor manufacturing after 2014,” Len Jelinek, a semiconductor industry analyst, claimed in 2009. Carl Anderson, an IBM Fellow, announced at an industry conference in April 2009 that the end of Moore’s Law was at hand: “There was exponential growth in the railroad industry in the 1800s; there was exponential growth in the automobile industry in the 1930s and 1940s; and there was exponential growth in the performance of aircraft until they reached the speed of sound. But eventually exponential growth always comes to an end.” But IBM has been wrong several times before. In 1978, IBM scientists predicted Moore’s Law had only 10 years left. Whoops. In 1988, they again said it would end in 10 years. Oops, again. Gordon Moore himself predicted his law would end when it reached 250-nanometer manufacturing, which it passed in 1997. Today the industry is aiming for 20 nanometers. In 2009, Intel chairman Craig Barrett said, “We can scale it down another 10 to 15 years. Nothing touches the economics of it.”

Whether Moore’s Law — as the count of transistor density — has one, two, or three decades left to zoom and drive our economy, we can be sure it will peter out as other past trends have: by being sublimated into another rising trend. When we reach the limits of miniaturization and can no longer cram more circuits onto one chip, we can just make the chip bigger (that’s Moore’s suggestion!). Carl Anderson of IBM cites three next-generation technologies that are candidates for the next round of exponential growth: piling transistors on top of each other (known as 3-D chips), optical computing, and making existing circuits work faster (accelerators). And then there is parallel processing, using many processor cores at once, lots of chips connected in parallel. In other words, maybe we don’t need more and more transistors on one chip. Maybe we need re-arrangements of the bits we have. We may consider ourselves to be a million times cleverer than a monkey, but we don’t have a million times as many genes, or a million times as many neurons. Our gene count is almost identical with that of all apes. The evolutionary growth in those trends stopped with Sapiens (us) and switched to increasing other factors. As Moore’s Law abates, we’ll find alternative solutions to making a million times more transistors. In fact, we may already have enough transistors per chip to do what we want, if only we knew how.

Moore began by measuring the number of “components” per square inch, then switched to transistors, and now we measure transistors per dollar. As one exponential trend (say, the density of transistors) decelerates, we begin caring about a new parameter (say, speed of operations, or number of connections) and so we begin measuring a new metric, and plotting a new graph. Suddenly, another “law” is revealed. As the character of this new technique is studied, exploited and optimized, its natural pace is revealed, and when this trajectory is extrapolated, it becomes the creators’ goal. In the case of computing, this newly realized attribute of microprocessors will become, over time, the new Moore’s Law.

Like the Air Force’s 1953 graph of top speed, the curve is one way the technium speaks to us. Carver Mead, who barnstormed the country waving plots of Moore’s Law, believes we need to “listen to the technology.” As one curve inevitably flattens out, its momentum is taken up by another S-curve. If we inspect any enduring curve closely we can see how definitions and metrics shift over time to accommodate new substitute technologies.

[Figure: overlapping hard disk trendlines (Christensen, 1997)]

For instance, a close scrutiny of Kryder’s Law in hard disk densities shows that it is composed of a sequence of overlapping smaller trendlines. These may have slightly different slopes, but in aggregate, calibrated with an appropriate common metric, they yield the unwavering trajectory.

[Figure: a stack of S-curves producing an emergent exponential (Christensen, 1997)]

This cartoon graph dissects what is happening. A stack of S-curves, each containing its own limited run of exponential growth, overlaps to produce an emergent long-run line of exponential growth. The trend bridges more than one technology, giving it a transcendent power. As one exponential boom is subsumed into the next, an established technology relays its momentum to the next paradigm and carries forward an unrelenting growth. The performance is measured at a higher emergent level, not seen at first in the specific technologies. It reveals itself as a long-term trans-technium benefit, a macro trend that continues indefinitely. In this way Moore’s Law — redefined — will never end.
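A toy model makes the mechanism concrete. A minimal sketch in Python (all parameters invented): each paradigm is a logistic S-curve whose ceiling is ten times the last, offset in time, and their sum climbs almost linearly on a log scale, which is the emergent straight line of the cartoon.

```python
import math

def logistic(t: float, ceiling: float, midpoint: float) -> float:
    """One S-curve: a single technology's limited run of growth."""
    return ceiling / (1 + math.exp(-(t - midpoint)))

def emergent(t: float) -> float:
    """Overlapping S-curves; each new paradigm has a 10x higher ceiling."""
    return sum(logistic(t, ceiling=10**k, midpoint=4 * k) for k in range(1, 6))

for t in range(0, 21, 4):
    print(t, round(math.log10(emergent(t)), 2))  # log10 rises roughly linearly
```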



But while the slow demise of the transistor trend is inevitable, if the larger meta-version of all the related Moore’s Laws — increasing, cheaper computer and internet power — were to suddenly cease on Earth in the next few years, it would be disastrous. These performance ratios improve by roughly 50% annually, doubling about every two years. That means the things we care about get half again better every year. Imagine if you got half again smarter every year, or could remember half again as much this year as last. Embedded deep in the technium (as we now know it) is the remarkable capacity for half-again annual improvement. The optimism of our age rests on the reliable advance of Moore’s promise: that stuff will get significantly, seriously, desirably better and cheaper tomorrow. If the things we make keep getting better, then the Golden Age is ahead of us, not in the past. With the meta Moore’s Law out of action, half or more of the optimism of our time would vanish.

But even if we wanted to, what on earth could derail Moore’s Law? Suppose we were part of a vast conspiracy to halt it. Maybe we believed it artificially elevates undue optimism and encourages misguided expectations of a Singularity. What could we do? How would you stop it? Those who believe its power rests primarily in its self-reinforced expectations would say: simply announce that it will end. If enough smart believers circulate declaring Moore’s Law over, then it will be over. The loop of self-fulfilling prophecy would be broken. But all it takes is one maverick to push ahead and make further progress, and the spell would be broken. The race would resume until the physics of scaling down gave out.

More clever folk might reason that since the economic regime as a whole determines the slope of Moore’s Law, you could keep degrading the economy until the Law stopped. Perhaps, through armed revolution, you install an authoritarian command-style economy (like the old state communisms), whose lackadaisical growth kills the infrastructure for exponential increases in computing power. I find that possibility intriguing, but I have my doubts. If, in a counterfactual history, communism had won the cold war, and microelectronics were invented in a global Soviet-style society, my guess is that even that alternative regime could not stifle Moore’s Law. Progress might roll out slower, but I don’t doubt Stalinist scientists would tap into the law of the microcosm, and soon marvel at the same technical wonder we do: chips improving exponentially as constant linear effort is applied.

I suspect Moore’s Law is something we don’t have much sway over, other than its doubling period. Moore’s Law is the Moira of our age. In Greek mythology the Moira were the three Fates, usually depicted as dour spinsters. One Moira spun the thread of a newborn’s life. Another counted out the thread’s length. And the third cut the thread at death. A person’s beginning and end were predetermined, but what happened in between was not inevitable. Humans and gods could work within the confines of one’s ultimate destiny. According to legend, Dionysos, the god of wine and parties, was unable to cry. But he loved a woodspirit, a half-satyr named Ampelos, who was killed by a wild bull. Dionysos was so stricken by Ampelos’ death that he finally wept. So to appease Dionysos’ unexpected grief, the Moira transformed Ampelos into the first grapevine. Then, according to the bards, “the inflexible threads of Moira were unloosened and turned back.” Ampelos’ blood became the wine that Dionysos loved. Fate was obeyed, yet Dionysos got what he chose. Within the inexorable flow of larger trends our free will flits, moving us in tandem with destiny.

The unbending trajectories uncovered by Moore, Kryder, Gilder, and Kurzweil spin through the technium forming a long thread. The thrust of the thread is inevitable, its course destined by the nature of matter and discovery. Once untied, the thread of Moore’s Law will unravel steadily, inexorably, toward its anchor at the bottom of physics. Along the way it unleashes other threads of technology we might wish to pull. Each of those threads, of communication, bandwidth, and storage, will unravel in its predetermined manner as well. We choose how fast to unzip them, and which ones to unloosen next. Collectively we push and pull with exceeding energy to wrench the threads from their place, but our efforts only serve to unravel them as they would anyway. The technium holds our Moira, and the Moira play out our inflexible threads. Like Dionysos, we can unloosen, but not remove, them.

Listen to the technology, Carver Mead says. What do the curves say? Imagine it is 1965. You’ve seen the curves Gordon Moore discovered. What if you believed the story they were trying to tell us: that each year, as surely as winter follows summer and day follows night, computers would get half again better, and half again smaller, and half again cheaper, year after year, and that in five decades they would be 30 million times more powerful than they were then, and cheap. If you were sure of that back then, or even mostly persuaded, and if a lot of others were as well, what good fortune you could have harvested. You would have needed no other prophecies, no other predictions, no other details. Just knowing that single trajectory of Moore’s, and none other, we would have educated differently, invested differently, and prepared more wisely to grasp the amazing powers it would sprout.

Moore’s Law is one of the few Moira threads we’ve teased out in our short history in the technium. There must be others. Most of the technium’s predetermined developments remain hidden, not yet uncovered by tools not yet invented. But we’ve learned to look for them. Searching, we can see similar laws peeking out now. These “laws” are reflexes of the technium that kick in regardless of the social climate. They too will spawn progress, and inspire new powers and new desires, as they unroll in ordered sequence. Perhaps these self-governing dynamics will appear in genetics, or in pharmaceuticals, or in cognition. Once a dynamic like Moore’s Law is launched and made visible, the fuels of finance, competition, and markets will push it to its limits and keep it riding along that curve until it has consumed its physical potential.

Our choice, and it is significant, is to prepare for the gift — and for the problems it will also bring. We can choose to get better at anticipating these inevitable surges. We can choose to educate ourselves and our children to become smartly literate and wise in their use. And we can choose to modify our legal, political, and economic assumptions to meet the ordained evolution ahead. But we cannot escape from them.

When we spy our technological fate in the distance, we should not reel back in horror at its inevitability; rather, we should lurch forward in preparation.




Comments
  • dr2chase

    The problem with scaling down cars is that we have in some sense already seen the endpoint — the streamlined bicycles used at the human powered vehicle races. That’s about as efficient as we are going to get, for moving a single human.

    This is generally true for energy-related improvements — we cannot get more power out than power in, and the best we can do is to cut out waste.

  • Joe

    As a photoresist chemist, I would also add that the small scale of microprocessors means that there is none of the dreaded ‘scale-up’ problem that prevents most lab ideas from ever being commercialized. If you can make a prototype batch, you can start selling immediately. I have worked in the adhesives market, where economic scale-up is more important than new features, and the prototype-to-market lag time is several years, sometimes just never. Also, since chip-makers use only pennies of resist per wafer, a chemist is free to use the most expensive and exotic chemicals he can think of without worrying about the economics.

    Joe

  • Seth

    This assumes a duality between technology and humanity, but I suppose that human invention is a function and fruition of humanity. So, observable matter can be changed by the observer into that which it desires to observe. The observable matter is delimited and defined by the imagination that observes it. Self-fulfilling prophecy is no less a technical marvel than the marvel which may be thought to be inherent in laws of physics in creating Moore’s law. The law cannot be considered apart from the self. So the process of creation will proceed to the limits of the self and not the material. As soon as the limits of material are reached the self will leap to a new paradigm for increasing the exponent, from train to automobile to airplane etc. The collapse of Moore’s law is really the collapse of desire, the ground of being and probably the universe.

  • RobertJ

    Great essay. Pointing out that the difference between two exponential curves is itself exponential, together with your table of doubling times, gives some intriguing extrapolations. Notice that bandwidth is increasing more slowly than most other components, and only a bit faster than the clock rate. The vision of hardware that is rented off-site and connected to your PC by fiber, instead of being installed as cards in your PC, seems near impossible. So too does the vision of escaping the von Neumann architecture of computers, which is based on cheap storage components and expensive computing components. Going beyond the von Neumann architecture is what e.g. neural networks do, so bad news for them: computing will keep being more expensive than storage. But it IS interesting that RAM seemingly overtakes hard disks in size in the future. This too is a vision in the computer manufacturing industry.

  • ottnott

    One explanation for the difference in the rate of improvement in ICs versus the rate of improvement in some other technologies of similar societal and economic interest is the compounding of benefits as noted by Moore:
    “As Moore puts it ‘By making things smaller, everything gets better simultaneously. There is little need for tradeoffs. The speed of our products goes up, the power consumption goes down, system reliability improves by leaps and bounds, but especially the cost of doing things drops as a result of the technology.’”

    For photovoltaic cells to be analogous to ICs, you’d need a situation where, as you made improvements of one sort (cell thickness, as a good hypothetical example) the energy density of sunlight increased, the electrical resistance of the cell material would decrease, and the energy of the photons would, rather than being dependent on wavelength, begin to converge toward the bandgap of the cell material.

    Obviously, that isn’t the case.

    It is correct to say that PV cells and batteries are prisoners of physics whereas ICs are not, because physics has long presented tradeoffs in PV cells and batteries such that you do not have the compounding improvements you garner as you shrink features in an IC.

    It is already true of batteries and PV cells that there are some tradeoffs imposed by physics. We cannot make electrons and photons and atoms do things that they simply refuse to do. At some point, the same situation will put the brakes on Moore’s law.

    Certainly, the long life of Moore’s law has required us to achieve a level of control over electrons and photons and atoms that seemed inconceivable only a few years earlier, but it has never required us to make those creatures of physics do things that they cannot do.

  • Bob Frankston

    I took a different view of Moore’s Law in http://rmf.vc/?n=BL arguing that it is about decoupling markets rather than about technology. Also check out Bob Seidensticker’s “Future Hype”.

  • Daryl Oster

    The air vehicle speed curve you started with is analogous to the much longer curves of water- and ground-based transport speeds. The curve is not locally smooth (Kryder’s law) but a series of superimposed steps of quantum improvements. A new ground-based transportation paradigm is just starting that will soon prove that the curve is very accurate, and this will lead to the wholesale move to orbit, and this will make galactic travel possible soon after.

  • mulp

    You need to read The Innovator’s Dilemma by Clayton Christensen.

    The length of the cycle time is intertwined with the time required to design and ship a new design in volume, and the ability of the market to take up the new design.

    Think about houses, or skyscrapers, if you will. If something about houses drove all of us to discard or give away a house after three years, then designers and builders would be coming up with new, better designs twice as often, so you would buy every other generation of house just as you bought every other generation of chips, or disks.

    Christensen called the disk drive the fruit fly of technology innovation, but he saw the same effect in excavators like steam shovels and backhoes. The difference is the number of each sold and how long before it makes sense to discard an “old” one.

    I’m pretty sure that the ramp up in speed of microprocessors has slowed because the demand has cut the number of competitors – about 15 years ago, there was Intel, AMD, IBM, Sun, DEC, HP, SGI, and a couple others competing to deliver the next Hot Chip. At the time, microprocessors couldn’t meet the demand just for existing customers and the market was expanding almost exponentially as new people realized they needed a computer. Now the market is dominated by Intel all up and down the spectrum, and frankly Intel is a plodder.

    As Intel originally made memory chips, that is where Moore’s law was most easily seen, and as flat panel displays and camera imagers are effectively memory chips, the gates per chip and dollars per gate are still moving along. And with both, the chips are getting larger physically, so the small size of semiconductors isn’t the reason for Moore’s Law.

    It all comes down to product life cycle, how long until the buyer is willing to discard it, and whether two or more competitors can make the buyer more likely to throw the old one away with some new feature.

    This is what is examined in The Innovator’s Dilemma.

  • Brad

    I find the curve showing Moore’s law extending back to 1900 to be the most instructive. The slope increases when “manifest destiny” kicks in, but the slope is amazingly constant for a century. If feature size really hits a wall, other cost reductions will keep the computations/sec/$ metric on the path.

    The point I would like to make is that Moore’s Law may be a spiritual phenomenon. Stuff happening because it’s supposed to happen. Smart people find a way to bring “their” ideas to life, whether the investment is there or not.

  • Lawrence de Martin

    I’ll go with the overlapping S curves, because I participated in one. The Anatel and Sievers Instrument carbon analyzers and airborne contamination laser scanners from Particle Measurement Systems were integral to the silicon manufacturing S curve in the mid-eighties to mid-nineties.

    I suggested in 1982 to Rick Blades and Rick Godec that their expertise in designing aqueous process flow sensors was directly applicable to semiconductor manufacturing and they started working on it in 1983.

    By 1984 Intel hit a plateau where 64Kb DRAM and 80286 processors were reliably producible, but 256Kb and 80386 were not, due to batch failures. Nothing measurable correlated to the bad batches until Blades & Godec released the Anatel A100 carbon analyzer. Intel then discovered that the remnant organic carbon at even 10 parts per billion (.000001%) was sufficient to power life in the form of aerobic bacteria in the water systems. Since wafers require hundreds of aqueous processing steps subject to contamination, reproducing bacteria destroyed the chemical integrity of the chips.

    High Technology is high purity. Carbon analysis of ultra-pure water proved essential for all silicon manufacturing, high density hard disk drives, flat panel displays and pharmaceuticals. The quantum physics was just a prediction, the reality is applied chemistry.

    As for me, I am a nexialist. I study all science to see where it intersects.

  • Arthur Smith

    Kevin – great essay, some good questions and comparisons there. What you’ve written suggests to me that the key criterion in whether a technological trend follows a “Moore’s law” of rapid growth or not is the level of continuing investment society puts into that technology, relative to the real costs and constraints of chemistry/physics/information.

    If the level of investment is high enough, rapid (2-year or faster doubling) growth comes naturally and is driven by continued demand against the falling cost. If the level of investment is lower than some threshold (solar cells, batteries, spaceflight, …) then growth in technical capability is forced to a much slower curve, constrained by the ratio of investment level to that threshold.

    I would guess the threshold investment level also grows exponentially with that capability level, and when the curves of available investment capital vs. threshold cross, you hit the top of that S curve and innovation slows again. But new technology ideas can drop that threshold and re-ignite growth again…

    The level of investment capital available would be constrained in a market economy by the basic level of demand for whatever product it is, and the availability of alternative technologies. For example, in energy markets the availability of fossil fuels as alternatives greatly suppresses the level of investment available in solar photovoltaics or batteries. An argument for government intervention?

  • Scott Hendrickson

    kk-

    Thanks for a great article! You hint at this point a number of times and say it plainly once: “chips improving exponentially as a constant linear effort is applied.” That is intriguing and begs for data. Simplistically, we might have:
    (1) resources applied to problem increasing exponentially => progress exponential, until we reach physical limits (size, energy, etc)
    or, as you say,
    (2) resources applied to problem increasing linearly => exponential progress. This kind of leverage is very attractive and it would be nice to understand the mechanism or applicable domains (e.g. information, miniaturization, etc.)
    or maybe somewhere between (1) and (2),
    (3) important network effect where primary resources grow linearly (engineers, schools, money, labs), but connections make secondary resources proliferate (ready access to info, tools, concepts…)?

    Some analogous general thinking on “how does progress scale with effort?” seems as interesting as – and maybe different from – “how does progress scale with time?”
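
    A toy contrast between cases (1) and (2), with made-up rates chosen only to show the shapes:

        for t in range(0, 31, 6):
            progress = 2 ** (t / 2)        # observed output: a 2-year doubling
            effort_exp = 2 ** (t / 3)      # case (1): effort doubles every 3 years
            effort_lin = 1 + t             # case (2): effort grows linearly
            print(f"t={t:2d}  progress={progress:9.1f}  "
                  f"leverage (1)={progress / effort_exp:6.1f}  "
                  f"(2)={progress / effort_lin:8.1f}")

    Case (1)’s progress-per-effort leverage grows slowly; case (2)’s explodes, which is what makes the linear-effort claim so striking.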

    Thanks again.

    • http://www.kk.org Kevin Kelly

      @Scott: I don’t have data on the linear increase in effort. Wish I did. But I agree it is a great place to do further research. Somebody should.

  • Alvass

    @ Arthur Smith – An argument for increased currency fluidity and compression (it too – the ability to achieve consensus in quantification – is linked to these other curves).

    We can choose to get better at anticipating these inevitable surges. We can choose to educate ourselves and children to become smartly literate and wise in their use. And we can choose to modify our legal/political/economic assumptions to meet the ordained evolution ahead.

    Hmmm. Is it free will – our choice? Or are our intelligence (g), social intelligence, communication, total knowledge generation curves similarly tied into this convergent yarnwad and destined to “get smarter”? If in fact all complex life is likely/destined to compute similarly and navigate developmental bottlenecks in similar ways (John Smart – other Evo Devo folks), then is there really choice at all? :) Are not competing agent-based systems that work to continually establish control over perceived environment (cope) fundamentally rigged to accomplish computation that benefits the system – in which case it’d make more logical sense to include us in the technium as embedded sensors, processors, and executors that serve as tools/extensions of deeper, potentially Gaiian (weak or strong – big distinctions) processing / COPE drive?

    If Moore’s law and these tech curves are deemed inevitable then, by extension, I think you’ve also argued that intel/comm/knowledge curves are also inevitable, thus bringing humans squarely into the technium, the technium into the human realm, and both into the natural evo/devo realm.

    That said, from our subjective viewpoint as agents who want to survive and thrive, and establish COPE in a universe probably teeming with life, we must indeed lurch – though there will also be those who feel equally compelled to lurch against it, largely due to other perceptions/drivers. Ultimately, these competing agents form the already existing social brain and will continue to perform computation/execution as they always have, until that system fundamentally changes or transitions to the next meta phase.

  • J S

    Your “stack of S-curves” reminds me of the TRIZ system (http://en.wikipedia.org/wiki/TRIZ) for solving engineering problems, begun in the 1940s.

    There are fairly standard solution steps for each technology level – for example, now that CPU chips have plateaued, makers are offering parallel processors, the TRIZ concept of going from single to multiples. The list of options: (http://www.triz40.com/aff_Principles.htm)

    The author of TRIZ put the theory together based on reviewing patents and seeing the relationships in technical problem solving.

    So Moore’s Law is inevitable – and it’s also forced by economics, as no company will continue on the improvement path if no customer wants or needs a device improved enough to pay for it.

  • Mark Essel

    While reading this post, Kevin, I was captivated by a background image of a great stonelike wheel with many great spindles – your technium – grinding forward inevitably. Have you ever commissioned artists to capture a fleeting image?

    The paths our society rifles down are chosen by us. Even accepting the unerring push of progress, we are free to choose which technologies to pursue through the resources allocated to their improvement. We may be able to steer the technium’s rudder more easily than that of our world’s societies.

  • Robbo

    What fascinates me most about this inexorable path of self-induced evolution is that our “choices” are determined by our needs – conscious or not – and we then devise the technologies required to satisfy those needs. Any portion of our technology is an extension of ourselves, so it’s as if we ourselves were ever growing outward, expanding in size and complexity and reach. As this growth occurs we discover not only new things now within our extended grasp but also where our grasp is lacking, and thus push forward with new devices, accelerating our growth and evolution further as we seek to fill the voids encountered. Absolutely stunning stuff to contemplate. Thanks so much for pursuing these lines of thought.

    Cheers.

  • Steve Jones

    I found the argument over the cost issues regarding photovoltaics plain wrong (unless I’m misunderstanding what was said). Moore’s law comes simply from the ability to manufacture ever smaller active elements. However, there’s no benefit to miniaturising the individual elements in a photovoltaic cell beyond a certain point. The amount of power produced by a solar cell is essentially dictated by the surface area of the receiver, light intensity, and efficiency. Leaving out the use of mirrors and lenses to increase light intensity, that leaves us with improving efficiency and reducing costs of fabrication for a given area of solar cells. Both of these are on a much lower curve than Moore’s law – in fact the cost drivers bear no resemblance at all to the essence of that law, which is the reduction in feature size.

    You can see this with disk capacity and performance. Disk capacity is very much on a similar curve to that for processor chips in that the feature sizes are being reduced: halve the feature size and you quadruple capacity. However, what does not scale with Moore’s law is disk performance, which only partly gains from decreasing feature sizes (sequential read rates go up inversely with feature size rather than with its square). Random access gains relatively little from decreased feature size – only the ability to use smaller form factors, and hence faster spin rates and lower seek times, improves, and on a much lower level than capacity. Measured by cost per GB, cost per MB/s of bandwidth, and random IOPS, you will get three very different curves on the same technology, all due to the different interplay between the improvements in technology (feature size, production technology, materials and so on). All are, in their own way, prisoners of the laws of physics (or whatever you care to call them).
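
    A sketch of those three curves, using the scalings stated above (the small per-generation random-IOPS gain is an assumed figure):

        feature, capacity, seq_bw, iops = 1.0, 1.0, 1.0, 1.0
        for gen in range(5):
            print(f"gen {gen}: feature {feature:6.4f}  capacity x{capacity:4.0f}  "
                  f"sequential x{seq_bw:3.0f}  random IOPS x{iops:4.2f}")
            feature /= 2      # each generation halves the feature size
            capacity *= 4     # areal density scales as 1/feature^2
            seq_bw *= 2       # linear bit rate scales as 1/feature
            iops *= 1.15      # modest gain from form factor / spin rate (assumed)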

    One of my pet hates in economics is the tendency of many practitioners to project trends indefinitely without bothering to understand the fundamental limits. No trend goes on forever – all have limits – and expecting all technology performance and cost trends to be exponential in nature is fundamentally wrong.

  • Alvuss

    @ JS – There’s also a nice cascading stack of ICT curves that get sharper over time in Diffusion of Innovations – seminal book – that Vijay Gurbaxani (currently at UC Irvine) plotted. Don’t know if the image is available via web.

    @ Robbo – Right-on-o. Tech is part of the ever-expanding Global Body, powered by the Global Mind (really like Nova Spivack’s presentation of those terms) – an autocatalytic system. I think the Technium archive has some posts related to that – the human drive to solve problems or such.

  • Robert de Forest

    This is the essay I wanted to write in response to “Five Unstoppables.” Thank you for expressing my thoughts better than I could and even including data to back them up.

  • Roland Hjerppe

    One of the underlying phenomena is the exponential growth of science since the 1600s as described by Derek J de Solla Price in his “Little Science, Big Science” (Columbia Univ. Press, New York, 1963). Well worth finding and reading.
    He was also the first to realize that the Antikythera mechanism was a calculator.

  • Jurvetson

    kk – Provocative and alluring as always. I liked your perspective in the movie Transcendent Man as well.

    I got an updated version of Ray Kurzweil’s abstraction of Moore’s Law, posted with permission: http://www.flickr.com/photos/jurvetson/3656849977

    I think this is the most important chart in technology business.

    It suggests that the pace of innovation is exogenous to the economy.

  • Joel

    I wonder how much of this has to do with the self-referential nature of these sorts of technology:

    The goal of a transistor in an IC is usually to influence the state of one of its peers, whereas a diode in a PV cell has goals outside of solar technology, so that better motors or lightbulbs are needed for the device in question to have more effect.

    Similarly, no matter how good a car battery becomes, it cannot reduce the weight of the passenger it carries nor make the hills any less steep.

  • Hernan

    “No end in sight” — not “site”

  • Patrick McNeely, PT

    Wow! Nice vision. One thought: in physical therapy, and medicine in general, we have developed the idea that dynamic systems show emergent properties. In other words, the chaos of the system itself drives the “behaviors” one expects, in ways such that the sum of the effects is greater than what would be expected by simple summation – or rather, a new system becomes emergent from the prior systems. I wonder if we are not seeing the same thing in electronics, and in information processing and manipulation in general. I also feel there would be potential benefit in studying the sociogenetic implications of these trends. You paint a quite vivid picture, but where are we in relation to the artwork itself? I wonder. Patrickpt@aol.com

  • Greg

    FYI, particle accelerators have been following a power law for some time, and you can’t get more large-scale than some of our experiments :-). (Though it seems we are beginning to slow down.)

    Here’s a (not amazing and a little old but there are better ones around) plot showing this:

    http://tesla.desy.de/~rasmus/media/Accelerator%20physics/slides/Livingston%20Plot%202.html

    taken from the interesting article: http://www.slac.stanford.edu/pubs/beamline/27/1/27-1-panofsky.pdf

  • Alexis Madrigal

    A beautiful essay on technological affordances that charts a nice path between hard determinism and a kinder, gentler determinism that lets us have control over the speed of the movie.

    But how to know when you’re (to take your opening example) just before moon rockets or permanently before interstellar spacecraft? How to make policy in the here and now when competing forecasts predict laws of all kinds, often conflicting? Making no policy isn’t really a serious option, as government is a palimpsest.

    Moore’s Law — and the associated cluster of technologies that you point out — are basically the exceptions that prove the rule that technology is not progressive in the way you describe. The technologies that are inevitable, in your view, only appear that way in hindsight, which is a quite sorry version of inevitability.

  • Greg

    Another thing about the accelerator physics community is that we don’t have the same sort of timescales as most industries. (20 years from start to finish producing 1 item anyone?) At the energy frontier the work is done for multinational labs and progress is not profit driven.

    Seriously interesting article by the way. :-)

  • joe

    first, great article.

    have a look at: www.uh.edu/engines/qualitytechnology.pdf

    it seems that exponential improvement is the rule in many technologies.
    they give many examples of such technologies.
    they also say the conditions for this are that there is a motivation to improve the technology, and that the technology is limited by our cleverness and not by some physical or other limit (e.g. an economic one).

    another point from another place:
    some technologies might require exponential improvement in a base technology to achieve linear gains: “HOWEVER, technology performance is not the same as knowledge. The next question is how does an improvement in the performance of computation (and other information processing technologies such as communication bandwidths) affect knowledge. There are many instances in which it requires exponential gains in information processing to achieve linear gains in knowledge. For example, it requires exponential gains in computation to see linearly further ahead in chess moves. Indeed, with exponential gains in computation, we’ve seen linear gains in machine chess ratings. We’ve also seen linear gains in pattern recognition performance (e.g., speech recognition accuracy) from exponential gains in processor speed, memory capacity, as well as the size of the research databases (of speech samples to train on). There are many examples of inherently exponential problems for which exponential gains in the power of the information processing technology produce linear gains in performance and/or knowledge.”
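
    The chess example in that passage, made concrete: search nodes grow exponentially with lookahead depth, so depth grows only logarithmically with computation (the branching factor of ~30 is a commonly cited rough figure, not from the quote):

        import math

        branching = 30                        # rough legal-move count per chess position
        for nodes in (1e6, 1e9, 1e12, 1e15):
            depth = math.log(nodes, branching)
            print(f"{nodes:.0e} nodes searched -> ~{depth:.1f} plies of lookahead")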

    it would be really interesting to understand more about the reasons that make some technologies exponential.

    • http://www.kk.org Kevin Kelly

      @Joe, that is an amazing paper. I wish I had seen it earlier. All kinds of goodies in it.

      I agree that one could devote a whole book, or research program, to sorting out even what has been collected over the years. I think we have lots to understand yet.

      Thanks for your comments.

  • Mike

    An area not discussed as a candidate for a Moore’s-type law is the hours of labor per week required to provide the necessities of food, shelter and clothing.

    I suggest that currently this could be as low as 10 hours per week if employers allowed it. (I’m self-employed and work about 15 hours per week, by choice, for a very comfortable suburban lifestyle w/ spouse and 3 children.) However most employers want more hours, so employees are tempted/encouraged to aspire to luxuries to justify 40+ hours/week.

  • michael schrage

    kk -

    i think this posting superb – and shame on me for not following your writing with greater rigor…

    i do have a (substantive) quibble, however…you take pains to point out how moore and mead really do believe that sliding down (up?) these exponential curves is more a socio-economic phenomenon than merely the manifestation of inevitable physics…later on, you counterfactual the triumph of communism and assert that even stalinists would have fun with silicon…(after all, they did pretty well with the space race…)

    i disagree…i think you really, really, really downplay the effect and influence both of ‘competition’ and of ‘markets’ in getting these curves to curve…this was something the air force visionaries surely understood (von karman, rand, etc.) and it would be ridiculous to look at, say, the history of aviation improvements divorced from the competitions and tournaments of racing, war, rivalry, etc…having been in the room when a few of them were discussed, i would also disagree with the ‘road map’ interpretation of silicon innovation – by having all the ‘competitors’ ‘see’ where the industry was ‘supposed’ to go, we created the counterpart to ‘rules’ in a baseball or football league to help determine ‘winners’…in other words, roadmaps were as much a framework for competition as a cartographic extension of anticipated cost/performance curves…

    i don’t mean to sound like a reflexive anti-communist/anti-statist type (although, yes, i am) but – you know – shockley ‘broke away’ from the bell system and the ‘traitorous 8’ broke away from shockley and – you know – everybody ended up competing…and i’m not even talking about what happened when the japanese came in…

    bottom line: competition is a better enzyme for accelerating the inevitable than centralized extrapolation of suspected trends…i don’t believe for a moment that societies predicated on centralized planning and control are capable of coping with the unintended effects of exponential innovation…i don’t think you do either…

    • http://www.kk.org Kevin Kelly

      @Michael: Yes, my counterfactual has the usual counterfactual weaknesses of not being very objective, but let me ask you this: what do you think would happen to Moore’s Law without the benefit of the intense pressure of capitalism and markets? I think it slows down, maybe by a magnitude, but it still runs as an invariant. Describe what you think would happen. They discover that smaller equals better, cheaper – then what? Uneven fits and starts? No advance at all? Linear progress? I really do have trouble seeing how the physics of it would not just propel it forward – how it would not happen. Enlighten me please!

  • MeToo

    “For photovoltaic cells to be analogous to ICs, you’d need a situation where, as you made improvements of one sort (cell thickness, as a good hypothetical example) the energy density of sunlight increased, the electrical resistance of the cell material would decrease, and the energy of the photons would, rather than being dependent on wavelength, begin to converge toward the bandgap of the cell material.”

    Good point. The main issue here is that one system is dealing with INFORMATION, while the other is dealing with a physical process. A physical process will be limited by the laws of physics. But what laws for information are equivalent to the behavior of atoms, photons, gravity, heat dissipation, etc.? Handling information is different from handling soil, as a backhoe does.

  • Gregory

    In fact, exponential transistor density growth started in 1947 with the invention of the first transistor. Meaning we are about to end the first half of the chessboard in 2011 and hit the second half of the chessboard in 2013. That’s 32 doublings since 1947, equaling about 4 billion transistors.
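
    Checking that arithmetic, assuming the two-year doubling period that fits those dates:

        doublings = 32
        print(2 ** doublings)            # 4,294,967,296 -- the "4 billion transistors"
        print(1947 + 2 * doublings)      # 2011, end of the first half of the chessboard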

  • joe

    @kevin:
    under totalitarian rule, the ruler just decides to stop using and developing the technology, and technological development stops for many years (look at China vs. Europe tech development – http://www.edge.org/3rd_culture/diamond_rich/rich_p6.html).

    it’s even possible that China wouldn’t have returned to the technologies she abandoned without external pressures.

  • Edward Kee

    Moore’s law applies to information, specifically to the number of transistors that will fit onto silicon chips.

    Moore’s law does not apply to power generation and especially not to nuclear power plants.

    Large amounts of power involve large power plants and large transmission wires; as the electricity industry developed, power plants achieved higher thermal efficiency and lower electricity cost through increases in size to benefit from scale economies.

    Economies of scale still apply and apply strongly to power plants and nuclear power plants:
    - Costs of developing and building a new nuclear power plant are relatively fixed, regardless of the plant size (geophysical assessment, site acquisition, environmental assessment, NRC permitting process)
    - Major nuclear plant components exhibit economies of scale (buildings, containment, piping systems, reactor pressure vessels)
    - Non-fuel operating costs largely consist of fixed annual O&M costs (people, security, etc.) that are not tightly linked to plant size

    These scale economies mean that larger nuclear power plants will have lower costs per unit of electricity produced; new Generation III designs have pushed scale economies to lower costs.

  • Bullseye

    “The Semiconductor Industry Association puts out a technology road map, which continues this [generational improvement] every three years. Everyone in the industry recognizes that if you don’t stay on essentially that curve they will fall behind. So it sort of drives itself.”

    …or maybe it’s driven by just a little planned obsolescence and collusion. ;^)

    It’s easier for me to believe the technology industries have market power driven by their significant economies of scale and that they throttle innovation to match the curve to enable their business model.

    Nice article nonetheless.

  • Stephanie Gerson

    “Is Moore’s law inevitable, a direction pushed forward by the nature of matter and computation, and independent of the society it was born into, or is it an artifact of self-organized scientific and economic ambition?”

    This is a classic nature-nurture problem. And the only difference between nature and nurture is time (nurture becomes nature). Call it techno-biofeedback. Or co-production (http://en.wikipedia.org/wiki/Coproduction_%28technology_and_society%29).

    You looked for other technologies with similar curves. What about the same technologies in different socio-economic systems? Or is this impossible, given globalization?

    What happens before the graph above of “different technological species of computation,” and what happens after? Is Moore’s Law a computational manifestation of a meta-law describing the change of change in the universe?

    The graph of many s-curves above seems to correlate with Gartner’s hype cycle (http://en.wikipedia.org/wiki/Hype_cycle).

    Given all of the above, it still might be worthwhile to ‘co-produce’ a Moore’s law for desirable technologies, like solar panels? (Please?)

  • Ken Wilson

    The families of S curves nail it. They may also help bridge the exponentially diverging performances of the different technologies.

  • AW

    You made the point that Moore’s law only applies to technologies that don’t involve a lot of energy. As I understand it, this means that these exponential gains occur because we do more with less – the fewer electrons we have to push around, the faster we can push them.
    At the other end of the scale (airplanes, space travel, and so on), there could be a way to take advantage of this: ultralight materials (nanotubes, carbon fibre, and so on). When we make cars out of these, for example, they become lighter and thus more energy-efficient – perhaps at a briefly exponential rate. We can only go so far, because we have to haul humans instead of raw data, but it’s a start.
    My point (if you’re still reading this!) is that energy efficiency can lead to growth in any industry, and should be aggressively pursued for multiple reasons – environmental considerations not least among them. We don’t consider this side of performance nearly enough – instead we throw energy at problems, which is cheaper, but leads to whole new problems at large scales.
    If you have to haul around bulk products like grain there isn’t much to be done – but things like packing materials can easily be reduced. I know it would never be as spectacular as gains in the microchip world, but I think it would help us solve a great many important problems by making them as ‘Moore-like’ as possible.
    Sorry if this is too long.

    • http://www.kk.org Kevin Kelly

      @AW: Yes, if you scale down – make cars smaller, lighter – then they do follow a Moore’s-Law-like improvement. This is Amory Lovins’s mantra, which he has been advocating for decades. The lighter the car, the smaller the engine; the lighter the car, the smaller the engine……

  • Jerry Leichter

    You discuss, as it were, the supply side of the technology. But the demand side is equally important: If no one needed faster chips or more disk space, Moore’s and Kryder’s Laws would come to a quick end. You give an example of this with camera sensors: If we choose to measure just pixels, then the growth trend has stopped because it got to the point where “more” lacked any economic value.

    If you look at how we’ve *used* the exponential growth of chip features, that’s changed over the years. Initially, the driving force was putting all the gates needed to build a whole CPU on a single chip. Speed of the individual chips didn’t matter much because once you need off-chip connections, they dominate the speed of the whole system. It took roughly until 1985 to get a full CPU, with all the features expected of it (virtual memory, floating point computation), onto a chip. Once we got there, the demand to shove more *function* onto the chip more or less stopped. We spent a couple of decades making the chip do the same thing much faster. Sure, some higher levels of integration appeared – so-called System-on-a-Chip devices (which were aimed at the low end), virtual machine support, etc. – but our idea of what a chip should do hasn’t really changed much in years.

    As we reached the limits of our ability to exponentially increase speeds, we’ve applied the same decreasing feature size to put multiple cores on a single chip. If we didn’t have parallelizable applications, no one would bother building such things. In fact, a driving force in the industry today is finding ways to use high levels of parallelism. Using 4 cores is not a big deal; using 16 is harder; using 64, much less 256, is moving into the unknown. Fortunately, the industry gets to fill another new demand: low-power chips for portable applications.

    You could make a similar analysis for disks. We long ago passed the point where anyone who wanted could buy enough disk space to store all the text they could ever read in their lives. Then we passed the point where anyone could store all the music or other audio they could ever listen to. We’re pretty much at a similar point for pictures. We’re not there yet for movies. Further, resolution and frame rates can still go up and produce significant improvements. We have yet to get to 3D. All of these guarantee a continued demand for bigger *personal* storage to match our technological ability to create such devices, so one can readily predict the continuation of the curve. (There are similar data points for business storage.)

    On the other hand, cell phones got smaller rapidly – and then reached a limit. Technologically, we could probably create a cell phone along the Star Trek The Next Generation model – a button on your shirt. (Power is probably the only real limitation.) But no one seems interested right now. In fact, the demand has moved in the opposite direction – bigger screens.

    Ultimately, “better” isn’t intrinsic to a design or technology. It’s an economic judgement. Our ingenuity in coming up with new ways to use what technical advances make possible is an equal partner in keeping those advances coming.

    — Jerry

  • Bob

    Let’s talk about what happens when you make circuitry really tiny – like in the human brain. You discover that processing clock speed slows down and that even the speed of transmission on wires slows down.

    Why? Because capacitance is inversely proportional to the thickness of the insulator between the two conductors. With the extremely thin cell walls of neurons the resulting capacitance is so high that it takes substantial time to send a signal change down the neuron.

    This is why long axons have thick insulating sheaths wound around them: to decrease the capacitance of the axon and speed up the signal transfer rate. Let us also remember that line resistance is inversely proportional to the square of the wire diameter. This means that tiny wires have high resistance and that the capacitance can only be charged so fast.

    In addition, the thickness of the cell walls limits the operating voltage to about 45 mV, which produces about 9,000,000 volts per meter across the thin insulator of the neuron wall. This is near the standoff ability of the best known insulators.
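
    A back-of-envelope check of that 9,000,000 V/m figure, assuming a neuron wall thickness of about 5 nm (no thickness is stated above; 5 nm is a typical lipid-bilayer value):

        membrane_voltage = 0.045        # the ~45 mV operating voltage cited above
        membrane_thickness = 5e-9       # ~5 nm neuron wall (assumed value)
        print(f"{membrane_voltage / membrane_thickness:.1e} V/m")   # 9.0e+06 V/m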

    I used to laugh at movies where carbon based brains were sneered at. Given the materials which exist in the physical universe you aren’t going to do much better than a human brain. Approximately 100,000,000,000 times the computing power of the original IBM PC all running on about 40 watts of power – with a mean time between system failures of 70+ years.

    Line capacitance alone is going to prevent any sort of electrical computing device from ever reaching the singularity.

  • Andrew Schmitt

    Great essay, and great charts. Do you have the sources for them? I would like to re-use them in my own presentations.

  • Arun

    It is interesting to note that we have already fallen off the doubling rate for microprocessors, and though we continue to decrease minimum feature size for chips, the effective rate of improvement has slowed because of other design rules.

    The ITRS (International Technology Roadmap for Semiconductors) has an increasing number of ‘red blocks’ (technology areas where there are no known manufacturing solutions) in the coming years. There is no clear winning solution for the ‘post-CMOS’ era, though there are many contenders.

  • Tzara V

    Vierck’s law, a corollary of Moore’s law, says software has a half-life of 18 months.

    From personal assessment and observation: if processing capability increases exponentially, the scope of what is possible also naturally increases under Moore’s Law. That allows new potential for software that takes advantage of the increased performance. The increase in capacity permits techniques that were previously not feasible, making them much more reasonable. By definition this process makes the value of a portion of the prior work obsolete. That ratio would follow Moore’s law.

  • Richard Freytag

    Kevin,

    Great paper. Was thinking about it today, especially your last point about overlapping sigmoids. Seems to me this is exactly how financial booms proceed. Sure, the capital is destroyed, but the product of the boom (housing stock, web infrastructure, 3rd-world development [in the case of the Little Dragons], etc.) remains.

    Progress can then be seen as a process of overlapping booms and the products thereof. The overall metric is then world GDP or perhaps median income.

    Thanks for a great essay.

  • Malcolm Knapp

    Following on from @AW, I think the dividing line between the technologies that exhibit exponential growth and those that do not is whether they move mass or move information. For technologies that are used to produce work (motors, planes, etc.), each unit of improvement in performance requires an exponential increase in energy input. Information technology performance, on the other hand, is basically independent of the amount of energy used to run it. The only energy it needs is for overcoming switching losses and other parasitics. Thus for each unit of improvement in performance, information technologies can use the same, or even less, energy. As I see it, this in the end is what powers the economics of Moore’s Law. It is why computers have shown such amazing growth in performance whereas flight has stagnated.

  • Laurence Gonsalves

    Sorry to pick nits, but the bit about Arthur C. Clarke is either an exaggeration, or a poor choice of examples. His first published novel, “Prelude to Space”, which was written in 1947 and published in 1951, placed the first Moon landing in 1977. The actual date was only 8 years earlier, not “close to a third of century sooner”. Maybe there were other people who would have placed the first Moon landing around the year 2000, but there is published evidence that Clarke was not one of them.

    • http://www.kk.org Kevin Kelly

      @Laurence: Perhaps my source was thinking of another Clarke novel. It is possible he had more than one fictional date for a moon landing. Also, my source may have been thinking not of Clarke’s fiction but perhaps one of his non-fiction predictions.

      • neilcraig

        A different case, where Clarke underestimated (& all right-thinking people thought he was impossibly optimistic): the space elevator in Fountains of Paradise is set in the mid-2100s because of the assumption that there would be no material to make it of any earlier. In fact we have buckytubes which, theoretically, could do it now.

  • Jack

    Note there is an error in the discussion immediately following “Exhibit 17”, regarding the cost scaling of interconnect hardware. The chart shows a reduction in cost from $1197 to $130 over 9 years ($/Gbps), which corresponds to a doubling time in performance per unit price of 2.8 years, not 8 as indicated in the text.
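
    The correction is easy to reproduce:

        import math

        improvement = 1197 / 130                    # ~9.2x better $/Gbps over 9 years
        doubling_time = 9 / math.log2(improvement)  # years per doubling of value
        print(f"{doubling_time:.1f} years")         # 2.8, not 8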

  • Sunik Lee

    It’s already the 9th year of the 21st century. How come we don’t have interstellar spacecraft as the USAF’s chart predicts?… My 2c is that extrapolation of this sort often leads nowhere.

    I work in the semiconductor industry – as do some, if not most, of the people who’ll read this comment – and we KNOW for a fact that Moore’s law is in its last days. The latest transistors are only tens of atoms long at the channel and 3 to 4 atoms thin at the gate. PC CPUs are stuck in the 3–4 GHz range because of the physical properties of silicon, and this is the main reason why they’re increasing the number of “cores” instead of jacking up clock speed as in the last decade. Even with cost-prohibitive materials such as gallium arsenide (GaAs), we’ll never see anything faster than hundreds of GHz. Experimental technologies such as carbon nanotubes and quantum computing won’t take us much further either (unless we find ways to shrink the size of atoms).

    And then there’s Gates’s Law: “Computer software gets twice as slow every 18 months.” I mean, have you tried their latest OSs (Vista, 7)? They come with fancy graphical effects but don’t really add any value. Yet they ARE noticeably slower and will require some people to buy new computers! The 2000-to-XP transition was a painful one, but going from XP to 7 just seems like nonsense. Cloud computing, here we come!

  • Caleb Mannan

    Reminds me of Omron’s ‘SINIC Theory’. http://omron.com/about/corporate/vision/sinic/theory.html

  • Lew Glendenning

    http://www.wired.com/wired/archive/10.01/gilder.html

    This 2002 article by George Gilder relates Moore’s Law to the ‘learning curve’ postulated by the Boston Consulting Group.

    The short doubling time of Moore’s Law is therefore at least partly due to the rapid increase in output of the semiconductor industry, which is possible because of the elastic market for powerful computers.
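
    For reference, the BCG learning curve in its textbook form (a standard formulation, not taken from Gilder’s article): unit cost falls by a fixed fraction with every doubling of cumulative output.

        import math

        def unit_cost(cumulative_units, first_unit_cost=100.0, learning=0.8):
            # learning=0.8: each doubling of cumulative output cuts cost to 80%
            return first_unit_cost * learning ** math.log2(cumulative_units)

        for units in (1, 2, 4, 8, 16, 32):
            print(f"cumulative output {units:2d}: unit cost {unit_cost(units):6.1f}")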

    Is it possible that the market for the various other items isn’t as elastic – i.e., doesn’t translate technology into benefit as directly as CPU performance does?

  • Scott Locklin

    Interesting essay, but you completely lost me with your idea of batteries and solar cells following a cost-efficiency curve the way Moore’s law did. While solar cells are likely to get cheaper, they simply will not get better. You can’t beat the laws of thermodynamics. You can’t make a solar cell that harvests more than 100% of the incident solar energy. Want a real upper limit to solar panel efficiency? Look outside: it’s plants. Batteries could get better, but the upper bound of performance for a given battery weight is *very much* “a prisoner of physics, the periodic table” – i.e., we know pretty close to exactly how much chemical potential energy we can practically stuff into a pound of matter. It’s about what a pound of diesel fuel contains. Batteries will never be higher in energy density than diesel fuel.

    There is no Moore’s law for internal combustion engines for the exact same reason there is no Moore’s law for batteries. Internal combustion engine technology peaked in the middle of the 20th century, and will never get much better than it is now no matter how much we tinkerbell about it.

    Just because crazy Carver Mead narcissistically insists his curves somehow *caused* the silicon revolution doesn’t make it true. There are laws of physics. Silicon technology doesn’t butt up against any right now, so it is likely to grow for a while longer. Same with Kryder’s law. That is not true of all technologies; thinking so is absurd.

  • Steve Witham

    Nice job of asking why and where Moore’s Law happens and surveying ideas about that from the people involved. I especially like your emphasizing both the temptation to think it’s created by our expectations, and the fact that it happens even when it contradicts expectations.

    I agree that explicit expectations, and industry roadmaps, aren’t the essence of the phenomenon. They may help an industry plan more accurately, but they aren’t the main driver.

    I think you’re looking at the ability to make something some percent smaller (meaning, based on familiarity with a somewhat similar situation) per R&D dollar. It’s a curve with diminishing returns. The nature of the problem creates the change-per-time-per-effort curve, and other things determine where on that curve an industry works, giving us change per time.

    It would be interesting if there were data on how much each company in the same industry spent on R&D per year vs. how far ahead or behind the competition they got.

    It would also probably be revelatory to watch the detailed history of particular developments in a field through the years. After all, all these quantitative improvements come from continual qualitative changes. So if we could see the pattern in how each generation of ideas compared to the previous, how the two were bridged, I think the process would be less mysterious.

    All these technologies have natural internal scaling laws, something like the law that relates bone thickness to height in an animal. More complex processors have more layers of organization that act as bottlenecks. Software for a given purpose accumulates less-often-used features at the expense of demanding more memory and a faster processor. Disks need more layers of directory structure as they get bigger, but not badly more, and their physical structure has stayed simple, so they follow a different law.

    Plug different technologies together, say, a disk and the computer that has to back it up overnight, and the combined system has new scaling laws, like the ratio of bone thickness to height in animals. Selling them in the same market is another way of plugging their scaling tradeoffs into the same equation.

    The crazy thing about information tech is that the same population of people can use more and vastly more of it, at least as long as the cost doesn’t go up. Part of that is that we started out so many orders of magnitude away from the physical efficiency limits, and we still have many to go. Also, Alan Kay said an order of magnitude quantitative improvement seems like one qualitative improvement, which is a logarithmic diminishing returns curve!

    This is all economics. Despite equilibrium sounding like stasis, economics is all about the spread of innovation. Why has the interest rate been 5% across industries for centuries? There are equilibrium arguments about that, but it’s an equilibrium creating an exponential curve.

    I think it was Hans Moravec rather than Ray Kurzweil who originally plotted calculations per second per dollar back to mechanical calculators.

  • Purple people eater

    “Bandwidth (kilobits per second per $) doubles only every 30 months, but communication (bits per dollar) doubles every 12 months.” Sure thing, Kevin. No wonder the chart has zero citations to sources. The meaningful rate is $ per byte, not a bit rate. Wireless carriers cap you at 5 GB a month. Watch how long that takes to double.

  • Crash

    Wonderful read. Thanks. I couldn’t help but think of Schrödinger: the laws/rules may or may not exist, but through the act of observing them we bring them into existence and are then seemingly bound by them.

  • Hal Varian

    Very nice essay. But here’s another angle on Moore’s Law: it is to some degree a coordination device. A computer system involves CPU, memory, disk, display, etc., and all the components have to improve at the same rate to avoid a bottleneck. The system is only as powerful as its weakest link. If the semiconductor industry tries to double performance every 2 years, then the disk, RAM, screens, etc. can design to that benchmark. By this view, Moore’s law could be 2 years, or 3 years, or whatever – what matters is the common expectation for component parts manufacture.

  • Mike Swayze

    So, where does ‘planned obsolescence’ fit into this? ($$$$???) It kind of makes you wonder about it, like ‘history’ in the stock market – once it’s a known cycle, it appears to disappear…

  • Camel

    Unfortunately the “risk of human extinction time” graph may very well be logarithmic in nature… ever more “great filters” hitting us as we invent more and more potent technologies.

  • Anand Manikutty

    The end of Moore’s “law” is already approaching. People are honestly reading too much into Moore’s law. This idea of exponential increases in computing power has been extended to the idea of the Technological Singularity, an idea that is deeply mistaken (please see my List for posts on that topic). There ain’t no such thing as the Singularity. And Moore’s “law” ain’t really a law.

  • http://twitter.com/Hwy41Revisited Michael Pardek

    Maybe Moore’s Law would fit nicely on the bottom of the stack of S-curves. What will the emergent “law” that surpasses it be?

  • neilcraig

    How to stop the equivalent of Moore’s Law? It has been done with nuclear power. Up till about 1970, the building of new reactors (& up to 1980, their completion) had been on a rising curve, both as a proportion of power use and in total.

    Then the ecofascists gained enough power to push through anti-nuclear scare stories, all of which were either wholly or largely false, and the industry was effectively halted. Since then the catastrophic global warming fraud has been used to increase the cost and reduce the usage of electricity across the developed world – hence the recent recession.

    An entirely different example was the way the 16th-century Japanese government rolled back the use of cannon by (1) requiring all cannon makers to be licensed, then (2) requiring them to live in the capital, then (3) issuing no new licences, then (4) inducting all cannon makers into the aristocracy but at the same time saying they couldn’t make cannon.

    A common trait is that both banned products (reactors and cannon) are big objects that are difficult to hide while other technologies not banned (computing capacity, muskets in Japan & crossbows in medieval Europe) are smaller and portable when government inspectors turn up.