The Technium

Speculations on the Future of Science


Science will continue to surprise us with what it discovers and creates; then it will astound us by devising new methods to surprise us. At the core of science’s self-modification is technology. New tools enable new structures of knowledge and new ways of discovery. The achievement of science is to know new things; the evolution of science is to know them in new ways. What evolves is less the body of what we know and more the nature of our knowing.

Technology is, in its essence, new ways of thinking. The most powerful type of technology, sometimes called enabling technology, is a thought incarnate which enables new knowledge to find and develop new ways to know. This kind of recursive bootstrapping is how science evolves. Like every type of knowledge, it accrues layers of self-reference to its former state.

New informational organizations are layered upon the old without displacement, just as in biological evolution. Our brains are good examples. We retain reptilian reflexes deep in our minds (fight or flight) while the more complex structuring of knowledge (how to do statistics) is layered over those primitive networks. In the same way, older methods of knowing (older scientific methods) are not jettisoned; they are simply subsumed by new levels of order and complexity. But the new tools of observation and measurement, and the new technologies of knowing, will alter the character of science, even while it retains the old methods.

I’m willing to bet the scientific method 400 years from now will differ from today’s understanding of science more than today’s scientific method differs from the proto-science used 400 years ago. A sensible forecast of technological innovations in the next 400 years is beyond our imaginations (or at least mine), but we can fruitfully envision technological changes that might occur in the next 50 years.

Based on the suggestions of the observers above, and my own active imagination, I offer the following as possible near-term advances in the evolution of the scientific method.

Compiled Negative Results – Negative results are saved, shared, compiled and analyzed, instead of being dumped. Positive results may increase their credibility when linked to negative results. We already have hints of this in the recent decision of biochemical journals to require investigators to register early phase 1 clinical trials. Usually phase 1 trials of a drug end in failure and their negative results are not reported. As a public health measure, these negative results should be shared. Major journals have pledged not to publish the findings of phase 3 trials if their earlier phase 1 results had not been reported, whether negative or not.

Triple Blind Experiments – In a double blind experiment neither researcher nor subject is aware of the controls, but both are aware of the experiment. In a triple blind experiment all participants are blind to the controls and to the very fact of the experiment itself. This way of science depends on cheap, non-invasive sensors running continuously for years, generating immense streams of data. While ordinary life continues for the subjects, massive amounts of constant data about their lifestyles are drawn and archived. Out of this huge database, specific controls, measurements and variables can be “isolated” afterwards. For instance, the vital signs and lifestyle metrics of a hundred thousand people might be recorded in dozens of different ways for 20 years, and then later analysis could find certain variables (smoking habits, heart conditions) and certain ways of measuring that would permit the entire 20 years to be viewed as an experiment – one that no one knew was even going on at the time. This post-hoc analysis depends on the pattern-recognition abilities of supercomputers. It removes one more variable (knowledge of the experiment) and permits greater freedom in devising experiments from the indiscriminate data.
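
A minimal sketch of the post-hoc analysis this implies, assuming the passively gathered records already sit in a table; the column names (smoker, resting_heart_rate) and the toy data are hypothetical illustrations, not a prescribed design:

```python
# A minimal sketch of a "triple blind" analysis: the grouping variable and the
# outcome are chosen only after the data were collected, with no experiment in
# mind at collection time. Column names here are hypothetical placeholders.
import pandas as pd

def post_hoc_experiment(records: pd.DataFrame, grouping: str, outcome: str) -> pd.DataFrame:
    """Treat an archive of continuous observations as an after-the-fact experiment."""
    return records.groupby(grouping)[outcome].agg(["mean", "std", "count"])

# Stand-in for twenty years of passively recorded lifestyle and vital-sign data.
records = pd.DataFrame({
    "smoker":             [True, True, False, False, False],
    "resting_heart_rate": [78,   81,   66,    70,    64],
})
print(post_hoc_experiment(records, grouping="smoker", outcome="resting_heart_rate"))
```

The essential move is that neither the grouping nor the outcome was specified when the data were gathered; the “experiment” is assembled entirely in retrospect.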

Combinatorial Sweep Exploration – Much of the unknown can be explored by systematically creating random varieties of it at a large scale. You can explore the composition of ceramics (or thin films, or rare-earth conductors) by creating all possible types of ceramic (or thin films, or rare-earth conductors), and then testing them in their millions. You can explore certain realms of proteins by generating all possible variations of that type of protein and then seeing if they bind to a desired disease-specific site. You can discover new algorithms by automatically generating all possible programs and then running them against the desired problem. Indeed all possible Xs of almost any sort can be summoned and examined as a way to study X. None of this combinatorial exploration was even thinkable before robotics and computers; now both of these technologies permit this brute-force style of science. The parameters of the emergent “library” of possibilities yielded by the sweep become the experiment. With sufficient computational power, together with a pool of proper primitive parts, vast territories unknown to science can be probed in this manner.
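
A minimal sketch of such a sweep, with a hypothetical score() function standing in for a robotic assay or a simulation, and a toy three-component mixture space as the domain:

```python
# A minimal sketch of a combinatorial sweep: enumerate every candidate in a
# parameter space, test each one, and let the resulting "library" become the
# experiment. score() is a hypothetical stand-in for an automated measurement.
from itertools import product

def score(candidate: tuple[float, ...]) -> float:
    return sum(candidate)                        # placeholder objective

fractions = [0.0, 0.25, 0.5, 0.75, 1.0]          # possible amounts of each component
library = {c: score(c) for c in product(fractions, repeat=3)}   # every 3-part mixture
best = max(library, key=library.get)
print(f"best candidate {best} with score {library[best]}")
```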

Evolutionary Search – A combinatorial exploration can be taken even further. If new libraries of variations can be derived from the best of a previous generation of good results, it is possible to evolve solutions. The best results are mutated and bred toward better results. The best-testing protein is mutated randomly in thousands of ways, and the best of that bunch kept and mutated further, until a lineage of proteins, each one more suited to the task than its ancestors, finally leads to one that works perfectly. This method can be applied to computer programs and even to the generation of better hypotheses.
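
A minimal sketch of the mutate-and-select loop, with a hypothetical fitness() function in place of a lab assay and arbitrary choices for population size and mutation rate:

```python
# A minimal sketch of evolutionary search: keep the best candidates from each
# generation, mutate them, and repeat. fitness() is a hypothetical stand-in for
# how well a variant (say, a protein) performs the desired task.
import random

def fitness(candidate: list[float]) -> float:
    return -sum(x * x for x in candidate)        # placeholder: higher is better

def mutate(candidate: list[float]) -> list[float]:
    return [x + random.gauss(0, 0.1) for x in candidate]

population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(100)]
for _generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # survivors of this generation
    population = parents + [mutate(random.choice(parents)) for _ in range(90)]

print(max(population, key=fitness))
```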

Multiple Hypothesis Matrix – Instead of proposing a series of single hypotheses, in which each hypothesis is falsified and discarded until one theory finally passes and is verified, a matrix of many hypothesis scenarios is proposed and managed simultaneously. An experiment travels through the matrix of multiple hypotheses, some of which are partially right and partially wrong. Veracity is statistical; more than one thesis is permitted to stand with partial results. Just as data are assigned a margin of error, so too will hypotheses be. An explanation may be stated as: 20% is explained by this theory, 35% by that theory, and 65% by another. A matrix also permits experiments with more variables and more complexity than before.
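
A minimal sketch of letting several hypotheses stand at once, each credited with the share of the observed variance it explains; the three candidate “theories” and the synthetic data are invented for illustration:

```python
# A minimal sketch of a multiple-hypothesis matrix: score every hypothesis
# against the same observations and report partial explanations side by side.
import numpy as np

def variance_explained(observed: np.ndarray, predicted: np.ndarray) -> float:
    return 1.0 - (observed - predicted).var() / observed.var()

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
observed = 3.0 * x + 2.0 * np.sin(x) + rng.normal(0, 1.0, x.size)

hypotheses = {
    "linear trend": 3.0 * x,
    "oscillation":  2.0 * np.sin(x),
    "constant":     np.full_like(x, observed.mean()),
}
for name, prediction in hypotheses.items():
    print(f"{name}: {variance_explained(observed, prediction):.0%} explained")
```

No single hypothesis is forced to win outright; each simply carries its own statistical weight.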

Pattern Augmentation – Pattern-seeking software will recognize patterns in noisy results. In large bodies of information with many variables, algorithmic discovery of patterns will become necessary and common. Such tools already exist in specialized niches of knowledge (such as particle smashing), but more general rules and general-purpose pattern engines will enable pattern-seeking tools to become part of all data treatment.
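
A minimal sketch of algorithmic pattern-seeking over a noisy table of variables; the planted relationship and the correlation cutoff are crude placeholders for a real significance test:

```python
# A minimal sketch of pattern augmentation: scan every pair of variables and
# flag relationships that rise above the noise.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 500
data = {"a": rng.normal(size=n), "b": rng.normal(size=n), "c": rng.normal(size=n)}
data["b"] = 0.8 * data["a"] + 0.5 * data["b"]    # plant one hidden relationship

for x, y in combinations(data, 2):
    r = np.corrcoef(data[x], data[y])[0, 1]
    if abs(r) > 0.3:                             # arbitrary "interesting pattern" cutoff
        print(f"possible pattern: {x} ~ {y}  (r = {r:.2f})")
```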

Adaptive Real Time Experiments – Results are evaluated, and large-scale experiments modified, in real time. What we have now is primarily batch-mode science. Traditionally, the experiment starts, the results are collected, and then conclusions are reached. After a pause the next experiment is designed in response, and then launched. In adaptive experiments, the analysis happens in parallel with collection, and the intent and design of the test is shifted on the fly. Some medical tests are already stopped or re-evaluated on the basis of early findings; this approach would extend to other realms. Proper protocols would be needed to keep the adaptive experiment objective.
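
A minimal sketch of an experiment that analyzes its results as they arrive and stops as soon as a preset rule is satisfied; the simulated result stream and the stopping rule are illustrative only:

```python
# A minimal sketch of an adaptive experiment: evaluation runs in parallel with
# collection, and the trial halts early once the evidence crosses a threshold.
import random

def run_adaptive_trial(stop_rate: float = 0.6, max_samples: int = 10_000) -> None:
    successes = 0
    for n in range(1, max_samples + 1):
        if random.random() < 0.65:               # stand-in for one incoming result
            successes += 1
        observed = successes / n
        if n >= 100 and observed > stop_rate:    # pre-registered early-stopping rule
            print(f"stopped early at n={n}, observed rate {observed:.2f}")
            return
    print(f"ran to completion: observed rate {successes / max_samples:.2f}")

run_adaptive_trial()
```

Fixing the stopping rule in advance is one simple way to preserve objectivity while adapting on the fly.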

AI Proofs – Artificial intelligence will derive and check the logic of an experiment. Ever more sophisticated and complicated science experiments become ever more difficult to judge. Artificial expert systems will at first evaluate the scientific logic of a paper to ensure the architecture of the argument is valid. They will also check that the paper includes the required types of data. This “proof review” will augment the peer review of editors and reviewers. Over time, as the protocols for an AI check become standard, AI can score papers and proposed experiments for certain consistencies and structure. This metric can then be used to categorize experiments, to suggest improvements and further research, and to facilitate comparisons and meta-analysis. A better way to inspect, measure and grade the structure of experiments would also help develop better kinds of experiments.

Wiki-Science – The average number of authors per paper continues to rise. With massive collaborations, the numbers will boom. Experiments involving thousands of investigators collaborating on a “paper” will be commonplace. The paper is ongoing, and never finished. It becomes a trail of edits and experiments posted in real time — an ever-evolving “document.” Contributions are not assigned. Tools for tracking credit and contributions will be vital. Responsibilities for errors will be hard to pin down. Wiki-science will often be the first word on a new area. Some researchers will specialize in refining ideas first proposed by wiki-science.

Defined Benefit Funding – Ordinarily science is funded by the experiment (results not guaranteed) or by the investigator (nothing guaranteed). The use of prize money for particular scientific achievements will play a greater role. A goal is defined, funding secured for the first to reach it, and the contest opened to all. A Turing Test prize, for instance, would be awarded to the first computer to pass the Turing Test as a passable intelligence. Defined Benefit Funding can also be combined with prediction markets, which set up a marketplace of bets on possible innovations. The bet winnings can encourage funding of specific technologies.

Zillionics – Ubiquitous always-on sensors in bodies and environment will transform medical, environmental, and space sciences. Unrelenting rivers of sensory data will flow day and night from zillions of sources. The exploding number of new, cheap, wireless, and novel sensing tools will require new types of programs to distill, index and archive this ocean of data, as well as to find meaningful signals in it. The field of “zillionics” — dealing with zillions of data flows — will be essential in health, natural sciences, and astronomy. This trend will require further innovations in statistics, math, visualizations, and computer science. More is different. Zillionics requires a new scientific perspective in terms of permissible errors, numbers of unknowns, probable causes, repeatability, and significant signals.
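
A minimal sketch of distilling such a stream without storing it: maintain running statistics (Welford’s method) and flag only the readings worth a closer look; the simulated sensor and the 4-sigma cutoff are arbitrary:

```python
# A minimal sketch of zillionics-style stream processing: summarize an endless
# sensor feed on the fly and surface candidate signals, discarding the raw data.
import math
import random

class StreamSummary:
    """Running mean and standard deviation (Welford's method); no raw data kept."""
    def __init__(self) -> None:
        self.count, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> tuple[float, float]:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)
        std = math.sqrt(self.m2 / self.count) if self.count > 1 else 0.0
        return self.mean, std

summary = StreamSummary()
for _ in range(100_000):
    reading = random.gauss(37.0, 0.3)            # e.g. a continuous body-temperature feed
    mu, sigma = summary.update(reading)
    if summary.count > 1_000 and abs(reading - mu) > 4 * sigma:
        print(f"signal worth a look: {reading:.2f} (running mean {mu:.2f})")
```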

Deep Simulations – As our knowledge of complex systems advances, we can construct more complex simulations of them. Both the successes and failures of these simulations will help us to acquire more knowledge of the systems. Developing a robust simulation will become a fundamental part of science in every field. Indeed the science of making viable simulations will become its own specialty, with a set of best practices, and an emerging theory of simulations. And just as we now expect a hypothesis to be subjected to the discipline of being stated in mathematical equations, in the future we will expect all hypotheses to be exercised in a simulation. There will also be the craft of taking things known only in simulation and testing them in other simulations—sort of a simulation of a simulation.
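
A minimal sketch of exercising a hypothesis in a simulation: state the hypothesis as a generative model, run it forward, and compare it with observation. The growth-rate hypotheses and the observed counts are invented for illustration:

```python
# A minimal sketch of hypothesis-as-simulation: each candidate growth rate is a
# hypothesis, simulated forward and scored against the observed trajectory.
def simulate_growth(initial: float, growth_rate: float, steps: int) -> list[float]:
    trajectory = [initial]
    for _ in range(steps):
        trajectory.append(trajectory[-1] * (1 + growth_rate))
    return trajectory

observed = [100, 112, 125, 140, 157, 176]        # illustrative measurements
for rate in (0.05, 0.12, 0.20):                  # competing hypotheses
    simulated = simulate_growth(observed[0], rate, len(observed) - 1)
    error = sum(abs(s - o) for s, o in zip(simulated, observed))
    print(f"growth rate {rate:.0%}: total error {error:.1f}")
```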

Hyper-analysis Mapping – Just as meta-analysis gathered diverse experiments on one subject and integrated their (sometimes contradictory) results into a large meta-view, hyper-analysis creates an extremely large-scale view by pulling together meta-analyses. The cross-links of references, assumptions, evidence and results are unraveled by computation, and then reviewed at a larger scale which may include data and studies adjacent but not core to the subject. Hyper-mapping tallies not only what is known in a particular wide field, but also emphasizes unknowns and contradictions based on what is known outside that field. It is used to integrate a meta-analysis with other meta-results, and to spotlight “white spaces” where additional research would be most productive.

Return of the Subjective – Science came into its own when it managed to refuse the subjective and embrace the objective. The repeatability of an experiment by another, perhaps less enthusiastic, observer was instrumental in keeping science rational. But as science plunges into the outer limits of scale – at the largest and smallest ends – and confronts the weirdness of the fundamental principles of matter/energy/information such as that inherent in quantum effects, it may not be able to ignore the role of the observer. Existence seems to be a paradox of self-causality, and any science exploring the origins of existence will eventually have to embrace the subjective, without becoming irrational. The tools for managing paradox are still undeveloped.




Comments
  • http://www.mty.itesm.mx/profesores/dia/noel_leon.html Noel Leon

    Hi Kevin,
    I saw your conference at video.google.com (The Next Fifty Years of Science) and I am very interested.
    As I am working on the concept of Computer Aided Innovation, many of the concepts you mentioned for science may also, from my point of view, apply to inventions and innovations: The next 50 years will change the way we make inventions and innovations more than the past 500 years did. Any comments?
    Best regards,

    Noel

  • http://www.researchcrossroads.com kyle brown

    I agree that this open, utopian view is very much needed. However, the current way public researchers are measured (grants, publications, citations) effectively acts as a huge barrier to academics sharing data (in particular negative results).

    Until academia and government funding agencies reward collaboration, scientific discoveries will continue at a steady pace. If more researchers would collaborate, discoveries would accelerate and research funding would be better utilized.

  • Stewart Brand

    Kevin Kelly will give a public talk on this very subject on Friday, March 10, in San Francisco, at the Cowell Theater, Fort Mason, 7pm, admission free.

  • Kevin Kelly

    Indeed I will. It will essentially be my first public talk on the subject. It could be wonderfully new, or embarrassingly shaky. Either way you are welcome to attend.

  • Michael Gruber

    Certainly you make a good statement of science optimism–onward and upward. What do you think of John Horgan’s position that science is doomed to spin wheels henceforth, that no breakthroughs of the Newton-Darwin-Einstein magnitude are in the offing? Or even conceivable over our lifetimes.

    I rather hope you’re right and he’s wrong–it’d make for a more interesting world–but I don’t know, the future could just as easily be ancient China or some version of Magister Ludi. I think there is such a thing as cultural exhaustion and I detect the early stages in America now. The iPod is cool, but it’s not the railroad, is it?

  • http://www.futuratronics.com Andres Hax

    In terms of short term forecasts I would be very interested to know your thoughts of Stephen Wolfram’s A New Kind of Science and Ray Kurzweil’s ideas of the singularity.

    Wolfram says, point blank, that his discoveries of the behavior of cellular automata will revolutionize science. I had the opportunity to interview him (here is the transcript: http://www.stephenwolfram.com/interviews/clarin05english.html) and I asked him if he thought his discoveries were on par with Newton and Darwin – and he basically said yes.

    Insofar as the singularity, Kurzweil dates it at approximately 2045.

    In terms of your argument in this post these two individuals are going completely in the opposite direction of collaborative scientific endeavor. They are solitary, visionary, and totally extreme in their predictions.

    I would enjoy knowing your thoughts on these two thinkers within the context of The Technium. Have they seen something that nobody else has seen? Could they be right?

  • http://www.altenergyaction.org/ Arthur Smith

    Kurzweil’s optimism depends on a “law of accelerating returns” that I find a bit dubious – you can read my comments on that here.

    Some of Kevin’s notes above are already taking hold in science today. I work for a major science publisher, and while we haven’t yet published 1000-author papers, we do have some with over 600 authors, and we’re expecting the first 1000-author paper in the next few years (from the new Large Hadron Collider physics experiments in Europe). Nevertheless, the odd thing about scientific papers (or perhaps any form of serious writing) is that they can’t continuously evolve – they evolve for a short period of time as the collaboration resolves near-term issues, and the Wiki approach is I believe actually used by some large collaborations already. But then over a few weeks they decide among themselves that it’s acceptable and send it off, and are done with it (except perhaps in response to reviewer comments before final publication). I’m not sure that’s ever going to change – there has to be a static record of that sort or else we’ll be lost in Borges’ “Library of Babel”, in my opinion!

    One thing you don’t mention is the nature of scientific fields. The historical pattern has been one of fragmentation, as “natural philosophy” split into mathematics, physical, and biological sciences, splitting further in a reductionist fashion so we now have dozens of specialties within what once were individual fields like physics and chemistry. But looking at recent patterns of publication, we’re seeing an odd sort of re-integration going on, through common computational and theoretical techniques at the least, and in many cases complementary experimental approaches. Nuclear physicists and molecular physicists work together since the physics of cluster and nucleus behavior shares many similarities. People look into “complexity” and see it all over the place. Physicists bring their perspective to fundamental biological research, looking at the mathematical behavior of cells under a variety of conditions for instance.

    So 400 years from now, is there any chance we’ll still have “physics”, “chemistry”, and “biology” as we now understand them? Will specialization be as important with vast computer resources available to every individual? Will there even still be a separation between applied and basic research, or will the totality of science be the domain of every engineer trying to do something new in the world?

  • http://www.altenergyaction.org/ Arthur Smith

    My Kurzweil review is here:

    http://www.sciscoop.com/story/2006/2/2/235543/1942

    I guess I should have checked how the link would work, sorry…

  • http://greg.abstrakt.ch/ Gregor J. Rothfuss

    One way to advance the scientific method might be to do more experiments along the lines of Scraping ArXiv: Kludging Open Scientific Hypertext. I liked his points about layering different discussion systems atop a common layer of scientific papers.

  • Kevin Kelly

    Michael,

    Of course I disagree with John Horgan on the future of science. I find it ludicrous that we should have discovered the major contours of everything in such a short time, especially since once you start to ask really hard questions, you find out that we really don’t know anything at all.

    Andres,

    I like the work of both Wolfram and Kurzweil, but I don’t think their ideas are singular. And BTW, mine aren’t either.

    Arthur,

    I think you are correct that the categories of disciplines we have now will not be useful 400 years from now. Not because specialization will go away (it won’t) but because science will transform what we mean by biology, or physics, so that in 400 years we will simply see it as something different. Imagine doing experiments on electrons totally in your computer. Is that physics or computer science or what?

  • http://drexel-coas-elearning.blogspot.com Jean-Claude Bradley

    Kevin – you make a number of good points about how science might change. I think in the near term, the real-time publication of experimental results using a blog/wiki approach has a chance of having an immediate impact on how science takes place. Although I think there is a place for publishing in the format of an article for human consumption, this will lag behind publication of the raw data that is more amenable to automated processing. We are attempting to make this shift in my organic chemistry lab by using blogs to record experimental data and a wiki to organize higher level concepts.
    http://usefulchem.blogspot.com
    http://usefulchem.wikispaces.com

  • Kevin Kelly

    Thanks, Jean-Claude.