The Reverend William Paley published Natural Theology: Or Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature in 1802, shortly after the French astronomer and mathematician Pierre-Simon Laplace brought out the first two volumes of his Traité de Mécanique Céleste. Twenty-three years later, Laplace published the fifth and final volume of his magnificent treatise, and so brought to a close the first and greatest of scientific revolutions. He was thereafter promoted to the scientific pantheon, where, dignified among the immortals, he has reposed ever since. Paley, by contrast, has been the victim of a number of near-death experiences, and like so many men who have survived close calls, he has remained eager to remind the world that he is still alive.
Natural Theology is an argument for the existence of the Christian deity, a defense of the faith; but its centerpiece—the argument from design—has long slipped its doctrinal noose. “When we come to inspect [a] watch,” Paley observed, we note that “its several parts are framed and put together for a purpose.” It follows that “the watch must have had a maker.” By the same token, living systems are very much like human artifacts in their complexity. Their own “marks of design,” Paley urged, “are too strong to be got over.” It follows again that this “design must have had a designer. The designer must have been a person. That person is God.”
This argument retains something of the clarifying effect of a hammer struck twice. The marks of design—bang! The existence of a designer—bang again! But the publication in 1859 of Charles Darwin’s The Origin of Species allowed biologists, who had long appreciated the force of Paley’s inference, to escape its conclusions. Coordination and design are required to explain the emergence of human artifacts—waterwheels as well as watches, sun dials as well as sun dresses—but it is an alliance between time and chance that explains the complexity of living creatures. No design is in place in the natural world; no designer is needed.
Darwin’s theory of evolution—random variation and natural selection—made it possible, as the contemporary biologist Richard Dawkins has written, to be “an intellectually fulfilled atheist.” And if not fulfilled, then certainly at ease.
The Reverend Revived
Within the past ten years or so, however, such satisfactions as atheism affords have come to seem a little premature. Paley’s claims have been pressed anew, as the argument from design, its revival widespread, has appeared in both evolutionary biology and (as we shall see) mathematical physics.
Within biology itself, the argument from design has become the gravamen of the “intelligent design” movement, which has engendered a current of sympathy among philosophers, biologists, and ordinary men and women long skeptical of Darwinian doctrine. The movement’s spiritual adviser has been Phillip E. Johnson, a professor of criminal law at the University of California, Berkeley. In 1991, Johnson published a shrewd and widely read critique of Darwinian theory under the title Darwin on Trial. Like Michael Denton’s Evolution: A Theory in Crisis, published five years earlier, the book had some of the effect of wet dynamite. It sputtered for a while and then exploded.
Johnson’s criticisms were not new; they were, in fact, part of a community of criticisms that had been marshaled against Darwin for many years.1 But Johnson went further than his predecessors. In what now seems a rhetorical masterstroke, he argued that biologists had embraced Darwin and rejected design by virtue of their allegiance to a worldview that was wholly extra-scientific: namely, philosophical naturalism. This is the view that, in Johnson’s words, “the entire realm of nature [is] a closed system of material causes and effects, which cannot be influenced by anything from the outside.”
Providence provided Johnson with opponents dutifully willing to confirm his diagnosis by exhibiting the disease that it signified. Thus, the literary critic Frederick Crews has argued recently in the New York Review of Books that “There is a fundamental principle in all science—only material and observable forces can count” (emphasis added). But if there is such a principle, Johnson has rejoined to critics like Crews, it is not to be found as the premise to any of the great physical theories; nor is it evident in their conclusions. It functions as an article of faith.
Although Johnson’s book did not elicit the assent of the biological establishment—hardly a group interested in searching self-criticism—it did something more important. It attracted the attention of a number of younger scholars—Michael Behe, William Dembski, Paul Nelson, Jonathan Wells, Stephen Meyer. These were men with very respectable training in biochemistry, molecular biology, genetics, philosophy, and mathematics. They found Johnson’s critique bracing in part because it confirmed an allegiance to Christian doctrine, and in part because, with the assumption of naturalism tentatively indicted in philosophy, they saw no reason to exclude the hypothesis of design in biology. In this they were correct. There is no reason.
Their decision to resurrect the design hypothesis represented a daring move on the chessboard, an unexpected flanking attack. Secular intellectuals have always affirmed their adherence to Darwin’s theory because they have felt that the alternative is the abyss. The abyss had now acquired a loud, lively, and unembarrassed voice: living creatures are designed, and if the designer’s identity is not obvious, it can easily be inferred. In the last years, advocates of intelligent design have put forward their views in books, conference reports, Internet publications, public lectures, scientific debates, and before various elementary-school boards—where, in a marvelous display of cheekiness, they have demanded that the public schools teach the controversy that they themselves have engendered.
Still, if the Reverend Paley has by their efforts been revived, he has been nearly dead before. Despite his reappearance on the intellectual scene, it is worth observing that a man who survives being drowned only to find himself in danger of being hanged may well possess a diminishing capacity to evade disaster.
Interrogating the Dead
Darwin’s theory of evolution and theories of intelligent design are in conceptual conflict. Although they may well both be false, they cannot both be true. Each party to the dispute has thus become persuaded that with the other in the same room, there is simply not enough oxygen to survive.
The fossil record is a preliminary witness for both evolution and design, the stacked dead endeavoring to convey some discernible signal to their various partisans amid a great deal of background noise. But the signal is often confusing.
Living systems, Darwin maintained, are organized continuously, one species trailing off into the next. But the dead have done little to confirm this thesis. “Most of the fossil record,” the paleontologist Robert Carroll has noted, “does not support a strictly gradualistic interpretation.” It does not, that is, support the obvious and intended interpretation of Darwin’s theory.
This circumstance is properly a source of satisfaction to proponents of intelligent design. In Icons of Evolution (2000), Jonathan Wells, who holds a Ph.D. in molecular and cell biology from Berkeley, summarizes the common view that the Cambrian explosion, the seemingly abrupt appearance of diverse and highly developed fauna in the early Paleozoic era, is a great imponderable in the geological record. The various life forms that erupted then look for all the world as if they were simply planted in the ground by some obviously competent designing intelligence.
But all that is on the one hand. On the other hand, if a great many species enter the fossil record fully formed, and then depart unchanged for Valhalla some million years later, the reverse is true as well, most notably in the evolution of the vertebrates. The margin between the reptiles and the mammals appears far more friable today than it did a century ago. Paleontologists now speak of mammal-like reptiles, chiefly the pelycosaurs and the therapsids. Within the amniotes—the broad class comprising the birds, the reptiles, and the mammals—we have the benefit of an unusually full record of transition sequences and can see them proceeding inexorably over the course of 150 million years, with a variety of organisms busily changing their skeleton, gross anatomy, way of life, feeding habits, digestion, locomotion, and even their cellular physiology. These changes are demonstrable through the fossil record to a fine level of detail, the delicate bones of the mammalian ear emerging in stages from the bones of the reptilian jaw. In the late Triassic and early Jurassic, one can almost see the sequences touching, Cynodont to Morganucodon, scales to fur, cold blood to hot.
When the dead are interrogated at length, then, oxygen levels do drop in certain chambers of conflict, but curiously enough, both Darwinian and design theorists seem to be turning blue as a result.
Interrogating the Living
If the dead are enigmatic and unavailing, what of the world of the living: the world in which creatures yawn, stretch, sleep, slither, and scramble for advantage? There, unceasing variety is the rule. Dogs divide themselves into more than 500 breeds, ranging from the very large to the very small; the domestic cat is found underfoot in only 50 or so flavors. The question is why.
Just why should dogs come in so many shapes and sizes, and not cats? Why in principle? Why, for that matter, should mimetic design in certain moths and butterflies exceed the observational powers of even the most capable predators, the design toppling over into a form of artistic ecstasy, while camouflage in other species is a matter of a few halfhearted disguises, a splash of stripes, a daub of pigment? Why do certain orchids involve themselves in an incredibly intricate reproductive routine while others, growing nearby, are content to slap a dusting of pollen on their shimmering petals and trust that their precious seed will be carried on the hind limbs of some heedless insect?
Why echolocation in the bats but not the buzzards? Why dancing among the bees but not the butterflies? Webs among the spiders but not the centipedes? Poison among the toads but not the frogs? Pouches among the possums but not the penguins?
And why the peacock’s tail?
In the face of such questions, Darwinian theorists may be observed standing in silence. They are looking upward, apparently much occupied in assessing cracks in the ceiling. Beyond saying that that is just the way things are, what could they say? Intelligent-design theorists, on the other hand, hear the heavy hammer-beats of Paley’s argument: the “marks of design” are too strong to be got over; such design must have a designer.
Yes? But what then follows? What follows precisely, once a designer has been acknowledged?
In commenting on the peacock’s tail, Phillip Johnson lightheartedly imagined that such preposterous contrivances were just what one might expect from a designer with a sense of whimsy. But why stop at whimsy? Why not inadvertence, anger, mockery, incompetence, colorblindness, bad taste, profligacy, or any other psychological disposition that can plausibly be connected to action?
And suppose the peacock’s tail were found mounted on the hindquarters of a donkey; or the garden shrew were given the power to speak Dutch. Suppose a world in which cats acquired the arts of ingratiation while dogs declined all further services to man—Sniff that suitcase? The one that might contain a bomb? I think not—and whales returned from the sea to take up residence in the rich pasture land from which they originally departed. Suppose the carnival of the animals and the world’s natural riot permutated in a thousand different ways. What then?
Why, nothing would change—save for the fact that it would be the donkey and not the peacock that prompted an inference to design. His choice, design theorists might say. But if His choice, why made, and according to which principles?
Design theorists may now be observed standing in companionable silence alongside Darwinian biologists. They, too, seem to be gazing upward, studying the same queer little cracks in the ceiling.
The Vexing Eye
In 1994, Dan-Eric Nilsson and Susanne Pelger published a paper in the Proceedings of the Royal Society entitled “A Pessimistic Estimate of the Time Required for an Eye to Evolve.” By “pessimistic,” they meant an estimate that, if anything, exaggerated the length of time required for the eye’s evolution. Even so, their conclusions were remarkable. “A light-sensitive patch,” they wrote, “will gradually turn into a focused-lens eye” in only a few hundred thousand years.
Darwin had himself been troubled by the existence of the mammalian eye, whose evolution by random mutation and natural selection has always seemed difficult to imagine. Nilsson and Pelger’s paper provided a welcome redemptive note. A few hundred thousand years and the job would be done. Authors have waited longer for their royalty checks.
As Nilsson and Pelger’s paper gained currency, it amassed content it did not actually possess. Biologists who failed to read what Nilsson and Pelger had written—the great majority, apparently—assumed that they had constructed a computer simulation of the eye’s evolution, a program that could frog-march those light-sensitive cells all the way to a functioning eye using nothing more than random variation and natural selection.2 This would have been an impressive and important achievement, a vivid demonstration that Darwinian principles can create simulated biological artifacts.
But no such demonstration has been achieved, and none is in prospect. Nilsson and Pelger’s computer simulation is a myth. In a private communication, Nilsson has indicated to me that the requisite simulation is in preparation; his assurances are a part of that large and generous family of promises of which “your check is in the mail” may be the outstanding example.
What Nilsson and Pelger in fact described was the evolution not of an eye but of an eyeball, and they described it using ordinary back-of-the-envelope calculations. Far from demonstrating the emergence of a complicated biological structure, what they succeeded in showing was simply that an imaginary population of light-sensitive cells could be flogged relentlessly up a simple adaptive peak, a point never at issue because never in doubt.
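The back-of-the-envelope character of the calculation is easily appreciated. A minimal sketch follows, assuming the figures commonly quoted from the 1994 paper: a total morphological change of roughly 80,129,540-fold, accumulated in one-percent steps, with 0.005 percent of heritable change per generation. The arithmetic is compound interest, nothing more.

```python
import math

# Back-of-the-envelope arithmetic in the spirit of Nilsson and Pelger's
# estimate. The three figures below are the ones commonly quoted from
# their 1994 paper; they are assumptions here, not a model of the eye.

total_change = 80_129_540        # total morphological change, as a multiple
step = 1.01                      # each "slight modification" is a 1% change
per_generation = 1.00005         # 0.005% heritable change per generation

steps = math.log(total_change) / math.log(step)
generations = math.log(total_change) / math.log(per_generation)

print(round(steps))              # about 1,800 one-percent steps
print(round(generations))        # about 364,000 generations
```

At a generation a year, the compounding runs its course in well under half a million years, which is the paper's celebrated conclusion.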
Despite a good deal of research conducted over the last twenty years, the mammalian visual system is still poorly understood, and in large measure not understood at all. The eye acts as a focusing lens and as a transducer, changing visual signals to electrical ones. Within the brain and nervous system, complicated algorithms must come into play before such signals may be interpreted. And no theory has anything whatsoever of interest to say about the fact that the visual system terminates its activities in a visual experience, an episode of consciousness. We cannot characterize the most obvious fact about sight—that it involves seeing something.
These are again circumstances that properly afford a measure of satisfaction to members of the intelligent-design community. But in what respect is our understanding improved by assuming that the visual system is the result of intelligent design? Unless very specific religious hypotheses are invoked, neither the identity nor the nature of the designer is known. The principles that he employs are a mystery, and the objects of his design are not well understood. Certain questions now reappear with unyielding insistence. Could a designer whose nature we cannot fathom, using principles we cannot specify, construct a system we cannot characterize?
If the question is unyielding, so, too, is its answer: who knows?
“If,” Darwin wrote, “it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” These are lapidary words. They suggest a man prepared to subject his ideas to the sternest possible test. And they issue in an exemplary challenge: show me that complex organ or organism.
Darwin’s challenge is as easy to state as it is difficult to meet—an example, perhaps, of the old boy’s skill in adaptive self-protection. How could the requisite demonstration be conducted? There is, on the one hand, the organ or organism as it now exists. And there is, on the other hand, its history, the path taking it from the past to the present. But the evolutionary history of a great many species has been lost, the butterfly, the beetle, and the bat all emerging from time’s endless fog as butterfly, beetle, and bat. To show that these organisms “could not possibly have been formed” (emphasis added) by a Darwinian mechanism demands a complicated argument, one that begins with their observable properties and then strikes negatively at every possible path by which they might have been created by “numerous, successive, slight modifications.”
In Darwin’s Black Box, published in 1996, the biochemist Michael Behe identified such an observable property—what he called “irreducible complexity”—and proposed precisely such a negative argument. Darwin’s challenge having been met to Behe’s satisfaction, logic then played its familiar role: if there is no Darwinian path to certain biological structures, they must have emerged by design.
Beyond being complex, just what is an irreducibly complex system? It is, Behe writes, “a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.” Irreducible complexity applies to both biological and nonbiological systems. As an example of the latter, Behe offers the mousetrap. Its parts—hammer, spring, holding bar, platform, catch—interact; they each contribute to the trap’s function, which is to catch or kill mice; and if any part of the trap is removed, the trap fails properly to function.
Behe’s definition of irreducible complexity specifies a concept while simultaneously providing a means for its detection. That is its great strength. Is this organ, organism, system, or creature irreducibly complex? Remove a part. The system’s subsequent failure is a sign of its irreducible complexity. The first steps in forging a scientific concept—make it clear, make it operational—have been taken.
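The operational character of the test can be made vivid in a few lines of code. The procedure schematized below is Behe's knockout test; the code itself, and the toy part list, are mine: a system is flagged irreducibly complex only if it functions whole and fails whenever any single part is removed.

```python
# A toy sketch of the knockout test. The part list and the predicate
# are illustrative assumptions, not drawn from Behe's book.

def trap_works(parts):
    # The trap functions only if every required part is present.
    required = {"platform", "spring", "hammer", "holding bar", "catch"}
    return required <= set(parts)

def irreducibly_complex(parts, works):
    if not works(parts):
        return False                      # no function to begin with
    return all(not works(parts - {p})     # knock out each part in turn
               for p in parts)

mousetrap = {"platform", "spring", "hammer", "holding bar", "catch"}
print(irreducibly_complex(mousetrap, trap_works))   # True

# A system carrying a redundant part is not irreducibly complex.
with_spare = mousetrap | {"decorative label"}
print(irreducibly_complex(with_spare, trap_works))  # False
```

The test says nothing about how the system arose; it inspects only the system as it stands, which is precisely its appeal and, as the sequel shows, its limitation.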
Of course, a mousetrap is a human artifact, and the argument that Behe makes in cooked-up cases needs to be ported to biological systems. Here, the structures that Behe considers are biochemical: their parts consist of molecules. (To his credit, Behe offers a powerful and convincing case that Darwinian theories that fail to encompass the details of biochemistry are exercises in self-deception.) He considers in turn the bacterial flagellum, which is a tiny but horribly efficient rotary motor found in very primitive bacteria, the blood-clotting cascade, systems of molecular transport, and the immuno-defense system—and he considers these systems on the level of their biochemical constituents.
As these examples reveal, and as readers of technical journals already know, biochemical systems display coordination, regulation, flexibility, delicacy of effect, unheard-of precision, efficiency, and great complexity. Behe’s astonishment at the vivid and miraculous world thus disclosed is hardly a matter of hyperbole. It reflects the living truth. And one further point, the obvious one: the systems that Behe discusses are irreducibly complex. Strip just one protein from the blood-clotting system, whether by design or by disease, and the normal cascade leading from a cut to a clot fails, the poor creature evacuating its life along with its blood.
It is precisely such irreducibly complex systems, Behe writes, that meet Darwin’s challenge. His argument now becomes conceptual rather than empirical. Some things cannot be done, and the “cannot” in this case has the full force of logical impossibility:
An irreducibly complex system cannot be produced directly . . . by slight successive modifications of precursor systems, because any precursor to an irreducibly complex system that is missing a part is by definition non-functional.
Just as an arbitrary angle cannot be trisected by means of a straightedge and compass, so an irreducibly complex system cannot be produced directly by “numerous, successive, slight modifications” of its precursor systems. A mousetrap—or any other irreducibly complex system—cannot be assembled from its precursors if those precursors are missing a part of the original system. “Any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.”
This is true. And it is clear. But it is not conclusive.
Behe’s argument succeeds in canceling one path from the past to the present: the familiar, assembly-line model in which a complex artifact like a mousetrap is assembled piece by piece, the base going first, the hammer, spring, holding bar, and catch attached in stages. But this mode of construction, which is common enough in the case of virtually every human artifact, is never seen in the biological world. That is in itself cause for suspicion. But the real objection is logical. If Behe’s argument were to meet Darwin’s challenge, it would have to cancel not one but every possible Darwinian path between the precursors of an irreducibly complex system and the system itself. He would thus have to show that irreducibly complex systems arise by means of an assembly line and only by such means. And that is plainly not so—even in the case of artifacts.
A three-legged stool, to choose a modest example, is irreducibly complex. Remove a leg and the stool topples. But a three-legged stool may be constructed by “numerous, successive, slight modifications” of a solid cylindrical block of wood, modifications that serve to hollow out three arches, leaving only the stool’s three legs and seat upon completion. The stool is irreducibly complex; its precursors are not. And the path to complexity just sketched falls entirely within Darwin’s ambit of small changes, each conferring some selective advantage. As those arches are carved, weight is reduced and stability increased.
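The logic of the stool can be made explicit with a toy calculation, every number in it an illustrative assumption of mine: at each slight modification a crude measure of fitness, stability per unit weight, rises, so the path is Darwinian throughout, even though the endpoint would fail the knockout test.

```python
# A toy version of the carved-stool path (all numbers illustrative).
# Each slight modification removes wood and widens the arches; a crude
# fitness -- stability per unit weight -- rises at every step, so the
# path is Darwinian throughout, though the finished stool is
# irreducibly complex.

weight, stability = 100.0, 1.0    # the solid cylindrical block
fitness_along_path = []

for modification in range(10):    # ten slight modifications
    weight -= 5.0                 # carving removes material
    stability += 0.2              # lower mass, wider stance

    fitness_along_path.append(stability / weight)

# Fitness increases monotonically from block to stool.
increasing = all(a < b for a, b in
                 zip(fitness_along_path, fitness_along_path[1:]))
print(increasing)                 # True
```

Nothing in the arithmetic is biological, of course; it shows only that irreducible complexity at the end of a path is compatible with selective advantage at every step along it.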
A gap has now opened between Behe’s definition and the conclusion he means to draw. With a few deft maneuvers, the gap may be further enlarged. An irreducibly complex system must fail if a part is removed; well and good. But from this it hardly follows that an irreducibly complex system must fail if its parts are modified, or if new parts are substituted for old.
The gap is large enough to have accommodated by now a number of Behe’s detractors. After the publication of his book, biologists wasted no time in demonstrating how the common mousetrap might be modified in successive stages by an adroit substitution of parts, the platform disappearing in favor of the ground itself or a table top, and so on to the hammer, spring, holding bar, and catch. They then imagined this backward tinkering going forward by evolutionary means, the mousetrap getting better and better as its parts became more and more refined. For a time, the Internet vibrated with imaginary mousetraps of varying degrees of ingenuity, with a number of biologists losing themselves entirely in detailed designs, their diagrams drawn to scale and an aspect of obsessive lunacy stealing over their work.
Still: mousetraps are one thing, three-legged stools another, and living systems yet another. Biologists who had mastered the mousetrap were rather less successful in explaining the origins of the biological systems Behe had described in patient detail. When he himself searched the existing literature for plausible models, Behe had found little of interest and nothing of value. In reviewing his book, biologists asserted that the problem he had identified did not exist, and would anyway soon be remedied. A number of biologists have in fact proposed biochemical pathways leading to the formation of complex biochemical systems, the Internet again vibrating with their proposals. But Behe has in turn criticized their suggestions, arguing generally that where the pathways are Darwinian, they do not succeed, and where they succeed, they are not Darwinian. In this, it seems to me, he has been entirely convincing. The origins of irreducibly complex biological systems remain what they have always been, and that is an utter mystery.
But these rattling exchanges of small-arms fire represent little more than tactical exercises, the usual (and usually hidden) pop and counter-pop of controversy within biology itself. Behe’s concerns were with what military theorists call grand strategy, and his ambition was to justify an inference to design by means of a logical resolution of Darwin’s challenge.
What Behe has shown is that, with respect to certain biochemical systems, Darwinian biologists do not have a clue. What he needed to show was something stronger. He needed to show that they do not have a prayer.
Chance and Necessity
As I intimated early on, the argument from design has been revived not only within biology but also within mathematical physics. Surveying the possibilities for scientific explanation, Jacques Monod, the eminent French molecular biologist, concluded rather grimly that the universe is ruled by necessity and by chance; there is nothing else. Monod made this claim in an elegant volume entitled Chance and Necessity (1970), a work that has come to play an important if often unacknowledged role in contemporary debate over these matters.
The laws of physics control the behavior of matter. It is there that necessity rules, its iron fist unyielding. Necessity explains the movement of the sun, the moon, and the planets; it explains the dancing play of atoms and molecules; it governs the very skin of creation. But chance—luck—is also a great governing force. It is chance that accounts for the origin of life, and chance again that has governed the emergence of human beings, with their complicated languages, their insatiable desires, and their doomed sense of curiosity.
But is this all there is? Do necessity and chance exhaust the explanatory options?
The biochemistry of living systems is based on carbon. The periodic table, which begins with hydrogen, and then straggles up and down the chart, covers more than 100 elements. Among them, only carbon can bond with other atoms in four different directions, and so only carbon has the flexibility to create ever larger and more complicated organic structures. As it happens, there is plenty of the stuff around, a state of affairs no less perplexing than the fact that apples unsupported fall downward. Why should this be so?
In 1946, the astronomer Fred Hoyle published the first of his pioneering studies, The Synthesis of the Elements from Hydrogen. The rich and complex panoply of chemical elements that is characteristic of the universe, Hoyle argued, must have been forged by a sequential process, one starting with hydrogen and then continuing step by step until the construction of carbon, the universe enlarging itself in stages. The process could not be chemical in any ordinary sense; chemistry leaves the interior of the atom untouched. The creation of matter must thus have been handled within the interior of the stars.
If hydrogen is the first step in the chain, deuterium is the next, the fused product of two hydrogen atoms and a vital link in the formation of carbon. The fusion that fashions deuterium depends crucially on the magnitude of the strong nuclear force. (There are four fundamental forces in nature: gravity, electromagnetism, and the strong and weak nuclear forces.) Like all forces, the strong nuclear force is expressed as a number. The value of that number is critical. Were it weaker than it is, hydrogen atoms would have found themselves unable to fuse; were it stronger, the stars would long ago have burned themselves out.
None of this took place. The strong nuclear force has the value that it does.
Although the next step is relatively simple—deuterium atoms fuse together to form helium—thereafter the story grows complicated. The laws of physics would normally prevent helium atoms from spontaneously fusing to form anything more interesting than helium. But two vagrant helium atoms meeting in the interior of a star can fuse together to form what is called a beryllium intermediate. It is an intricate celestial dance, and one that is highly unstable because beryllium intermediates are very short-lived.
In 1952, Edwin Salpeter discovered that, like an opera singer and the glass she shatters, the nuclear resonance between helium atoms and intermediate-beryllium atoms is precisely tuned to facilitate the creation of beryllium. (Nuclear resonance is the vibration produced when the frequency of the absorbing nucleus is identical to the frequency of the emitting nucleus.) The process is wonderfully elegant and entirely improbable. What would otherwise have been a process sputtering into the abyss has now been promoted to a process taking helium into a new element.
But there is yet no carbon—and here Hoyle entertained his most daring conjecture. Quite before any evidence was available, he predicted the existence of a second nuclear-resonance sounding directly between beryllium and helium, one that would in its turn enable the great stellar furnaces to produce carbon in abundance. This prediction was verified. The steps involved in the construction of carbon lay revealed—and so, too, the path to life.
Hoyle was deeply troubled by the specific nuclear-resonance levels that he had discovered. That they expressed physical properties of material objects was not in doubt. But what explained the appearance of those physical properties in the great causal chain stretching from the Big Bang to the emergence of life? On this point, the laws of physics and the vagaries of chance would both seem unavailing. “A commonsense interpretation of the facts,” Hoyle observed, “suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature.”
Against All the Odds
The facts are not in dispute. In both molecular biology and mathematical physics, things seem to have happened in defiance of all the odds. It is the analysis of these six ultimately crucial words that has occupied William Dembski, a young mathematician of impeccable academic training and a specialist in probability theory.
Dembski is the author of The Design Inference (1998) and No Free Lunch (2001). These are serious and difficult works. They merit respect. In much that he has written, Dembski’s proximate target has been Darwin’s theory of evolution, although, like so many of Darwin’s critics, Dembski has also discovered that Darwin is a most elusive target, bobbing and weaving around the ring and every now and then pausing to deliver a smart blow of his own. But if Darwin is Dembski’s proximate target, his real target all along has been Jacques Monod, and his aim has been to demonstrate that necessity and chance do not exhaust the options for scientific explanation. There is design as well.
Other members of the design-theory community have seen design emerging as Darwinian theory weakens. Dembski is interested in restoring design to the community of scientific properties—like mass, force, or momentum—and thus endowing the argument from design with a direct form of legitimacy.
Dembski begins with a very plausible general premise: events that can be explained neither by the laws of physics nor by chance must be explained by an appeal to the intervention of an intelligent agent.
It is surely plain that many events cannot be explained in terms of the laws of mathematical physics, or any other laws for that matter. The precise way in which Shakespeare arranged 113 words to fashion his eighteenth sonnet (“Shall I compare thee to a summer’s day . . .”) owes nothing to any system of physical laws. Shakespeare might, after all, have written, “I shall compare thee to a summer’s day,” losing poetical effectiveness but violating no physical constraints.
It is just as plain that many events cannot be explained by chance. The odds are just too small. “Phenomena with very small probabilities,” the French mathematician Émile Borel remarked, “do not occur.” They do not occur, that is, by chance.
Borel’s thesis, if not a tautology, is certainly a truism. And yet Dembski argues that Borel’s maxim, taken as it stands, is false. “Plenty of highly improbable events,” he writes in The Design Inference, “happen by chance all the time.”
The precise sequence of heads and tails in a long sequence of coin tosses, the precise configuration of darts from throwing darts at a dart board, and the precise seating arrangement of people at a cinema are all highly improbable events that, apart from any further information, can properly be explained by chance. [emphasis added]
Truisms thus seem to be in conflict. But of the two, it is Borel’s maxim, Dembski concludes, that requires revision. Certain kinds of improbable events happen “all the time.” Others do not. In the case of those that do not, an additional property has to be taken into account—the “further information” that Dembski mentions. They are specified.
A specification is something like a pattern, a description, or an identifying tag; it represents a selection from among alternatives, and so involves a division of the stream of possibilities. Dembski offers an example drawn from archery. If an archer is given a large target, and encouraged simply to let fly, then, no matter where his arrows land, they will have reached an improbable place and so have realized an unlikely event. Chance is still in charge. But if a target—say, a familiar bull’s-eye—has been fixed, then a halter has been placed on the riot of possibilities. The fixed target serves as a specification, picking out one set of places to the exclusion of all others, and the archer hitting this target repeatedly has defeated chance. He has done so by exhibiting skill and attention: marks of design, intimations of intelligence.
In Scrabble, any particular long sequence of letters forming a nonsensical word—PZQATRV—is improbable; but, conforming to no pattern, it is not specified. A short sequence of letters spelling out a particular English word (“dog,” say) is specified (since it conforms to a pattern), but not improbable; it might have appeared on the Scrabble board by chance. But Shakespeare’s eighteenth sonnet is both specified and improbable. It conforms to a pattern, and it is unlikely to have appeared by chance.
These observations suggest to Dembski a reformulation of Borel’s maxim: it is specified phenomena with very small probabilities that do not occur by chance.
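Dembski’s revised maxim can be made concrete with an elementary coin-toss sketch. This is my own illustration, not Dembski’s (though coin tosses figure in the passage quoted above): the pre-announced all-heads pattern stands in for a “specification,” and the numbers are pure arithmetic.

```python
import random
from fractions import Fraction

random.seed(0)  # fixed seed so the illustration is reproducible

N = 100  # number of coin tosses

# Any *particular* sequence of 100 tosses has probability 2^-100 ...
p_any_sequence = Fraction(1, 2**N)

# ... yet every run of the experiment produces one such sequence, which is
# Dembski's point that "highly improbable events happen by chance all the time."
observed = "".join(random.choice("HT") for _ in range(N))

# A *specified* outcome -- say, the pattern "all heads", fixed in advance --
# has exactly the same tiny probability; what distinguishes it is the
# pre-existing pattern, not the odds.
specified = "H" * N

print(p_any_sequence)        # 1/1267650600228229401496703205376
print(observed == specified)
```

The improbable-but-unspecified sequence duly occurs; the improbable-and-specified one does not. That asymmetry is what Dembski’s reformulation of Borel is meant to capture.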
Having offered Borel a face-saving detail, Dembski is now in a position to offer Paley a life-saving deal. Specified improbability is the very mark of design: the trace left in matter of a cognitive act. It is specified improbability that characterizes thimbles and thermostats and watches, bone-handled knives, computer programs, steam shovels, symphonic suites, lace lingerie and lipstick, vicuña coats, Etruscan vases, sand-castles, epic poems, French epigrams, German novels, mathematical theorems, paintings done in oil, and a sign found in Lubbock, Texas, spelling out the words Good Eats.
And it is specified improbability, again, that characterizes DNA and the various proteins, the bacterial flagellum, and the blood-clotting cascade. It is in fact ubiquitous, this property, a kind of steady note sounding throughout living systems, from the Tyrannosaurus roaring in the Cretaceous right down to the cellular telomeres currently vibrating under its influence. If the specified improbabilities in human artifacts suggest inexorably a human designer, the specified improbabilities in biological artifacts suggest inexorably a biological designer as well.
Nor is there any need to stop at the margins of the biological world. Nuclear-resonance levels are both improbable and specified—improbable because nuclear-resonance levels could have had other values, and specified because these nuclear-resonance levels and no others are needed in order for life to exist. The inference to a designer proceeds apace; it gathers force. It is by these means that Paley has been offered yet another lease on life.
The Hinge of Doubt
But the hinge of doubt begins to creak. Eliminating chance through small probabilities—these words are the subtitle of The Design Inference—is an enterprise that cannot begin if probabilities cannot be objectively assigned to various events. It is not always easy to see how this might be done. It is one thing to assign probabilities to a lottery, or a horse race; but just how might one go about assigning probabilities to, for example, the creation of Las Meninas, Velázquez’s great painting of the Spanish royal court—the first step in establishing its claim to design? Probabilities require a world of alternatives. There are either no alternatives to Las Meninas because it is unique, or infinitely many because it resembles every other splotch of paint on canvas.
What probability, for that matter, should we assign to the emergence of a chicken? Given an egg, the chicken’s appearance is almost inevitable; absent that egg, impossible. Which is it to be? What is the probability that Luxembourg might succeed in conquering Brazil? Or that Chinese officials might declare Basque the new language of China? Or that the queen of England might have been constructed of ice instead of flesh, her body kept at constant temperature by means of a far-reaching Masonic conspiracy?
It is not easy to say. And that, of course, is the problem.
If it is difficult to assign probabilities to a great many things, events, processes, or circumstances, it is still more difficult to determine whether they conform to a specification, the bridge to the bridge to agency. A specification is a human gesture. It may be offered or withheld, delayed or deferred; it may be precise or incomplete, partial or unique. Paley’s watch may thus be specified in terms of its timekeeping properties, but it may also be specified in terms of the arrangement or number of its parts; and its parts may be specified in horological, mechanical, molecular, atomic, or even sub-atomic terms. Each specification introduces a different calculation of probabilities. Paley’s watch may be improbable under one specification, probable under another.
And whatever the specification of specification, it is in any event hardly clear that intentional acts must be specified in the first place. A random walk, a careless gesture, the significant sigh, the soul-shattering sniffle—to what specification do they conform? The word basta overheard in a dark Italian alley, a woman’s voice conveying all the urgency of human intention—but a specification? A specification is, after all, a way of distinguishing among possibilities. What possibilities are at issue in these cases? And what choices from among a sample of possibilities do the events represent?
These examples could be multiplied at will. They suggest that there are plenty of intentional acts—acts reflecting a choice—that neither require nor conform to a specification. If that is so, then specified improbability is not necessary to trigger an inference to design. If it is not necessary, there then remains the question of whether it is sufficient.
According to Dembski, if and only if there is both specification and improbability can chance be eliminated in favor of design; otherwise, not. No one, for example, would attribute the origin of Shakespeare’s eighteenth sonnet to chance. That is true: the sonnet is indeed highly improbable. But what does specified improbability have to do with it? If 113 words are randomly rearranged, every possible arrangement of them has the same probability of occurrence, whether the arrangement represents Shakespeare’s limpid lines or sheer gibberish. No one would attribute any particular organization of Shakespeare’s words, whether specified or not, to chance. And for just the reason Borel offered: phenomena with very small probabilities do not occur. A specification does nothing to alter the odds. It is improbability that does the heavy lifting; the specifications have gone along for the ride.
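The arithmetic behind this point is elementary and can be checked directly. A minimal sketch follows; the 113-word count comes from the essay itself, while the uniform-shuffle framing (all rearrangements of distinct words equally likely) is my assumption.

```python
from math import factorial
from fractions import Fraction

# Under a uniform random rearrangement of 113 distinct words, every
# arrangement -- Shakespeare's limpid lines or sheer gibberish -- has
# exactly the same probability: 1/113!.
n_words = 113
p_shakespeare = Fraction(1, factorial(n_words))
p_gibberish = Fraction(1, factorial(n_words))

# The specification picks out one arrangement, but it does not change
# the odds of that arrangement occurring by chance.
assert p_shakespeare == p_gibberish

# 113! runs to 185 digits; Borel's maxim applies to both arrangements alike.
print(len(str(factorial(n_words))))
```

The specified arrangement and any unspecified one are equiprobable; whatever work is done in ruling out chance is done by the size of 113! alone.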
Once this is seen, Dembski’s reformulation of Borel’s principle must be discreetly withdrawn; it rests on a mistake. It is true that “plenty of highly improbable events happen by chance,” but false that they happen all the time. They happen, in fact, precisely as many times as one might expect, given their probabilities.
Adieu to specifications and adieu to Dembski’s initial premise. If specified improbabilities are not necessary to trigger the elimination of chance, and if the elimination of chance is not sufficient to trigger an inference to design, it follows that specified improbabilities are not sufficient, either. We are left with unlikely events and the sense of puzzlement that they inevitably provoke. In both molecular biology and mathematical physics, things seem to have happened in defiance of all the odds. And we do not know why.
The Ineffable Inimitable
If design is to be a scientific category, it must answer, as Dembski (following Paley) quite understands, to a scientific property. And if design leaves marks in the natural world, biologists and physicists dissatisfied with naturalism as a doctrine are committed to identifying those marks. But the curious thing is that the more these mysterious marks of design are sought, the more elusive they seem to be.
Nineteenth-century biologists often imagined that living systems were perfumed by an élan vital, which French military theorists proudly compared to the mysterious élan animating their soldiers. Although intellectual fashions have changed over the course of one hundred years, the élan vital has not disappeared. It has simply been renamed. Citing the evolutionary biologist George Williams, Phillip Johnson has argued that living systems comprise two domains, the material and the codical. The material domain is exhausted by certain material objects, chiefly DNA and the various proteins. The codical domain consists of something more. Many scientists have taken to identifying this “something more” with the concept of information, but since the usage of that term is so loose we might as well refer to it as the ineffable inimitable.3
Both Johnson and Williams agree that what I am calling the ineffable inimitable “is fundamentally distinct from the chemical medium in which it is recorded.” Thus Williams argues that Cervantes’s great novel Don Quixote just is the ineffable inimitable, “most often coded as a pattern of ink on paper” but also capable of existing “in many other media.” Less interested in metaphysical niceties, Johnson draws a more straightforward conclusion from Williams’s remarks: if matter and the ineffable inimitable are truly incommensurable, “then Darwinism cannot be true as a theory.” Random variation and natural selection are physical properties of material objects. Explaining the origin of the ineffable inimitable by an appeal to the material properties of living systems is rather like explaining the origin of Don Quixote by an appeal to the physical properties of ink and paper. Not to be outdone, Richard Dawkins has remarked that he has long admired the idea of information—the ineffable inimitable—and made it the foundation of his research on many occasions.4
George Williams is a prominent evolutionary biologist, and Phillip Johnson his critic. What the ineffable inimitable is, neither man is prepared precisely to say.
Over the years, other scholars have been more forthcoming. The ineffable inimitable has been identified with a fluid, an aura, a spirit, an entelechy, a soul, a field, and a force; it has been described in terms of entropy, energy, complexity, Kolmogorov complexity, organization, self-organization, hierarchical self-organization, organized hierarchies, catastrophe theory, graph theory, automata theory, and various theories of computation and control. Irreducible complexity and specified improbability are the latest incarnations of the ineffable inimitable.
The search for the ineffable inimitable continues, if only because it answers to a human need. But not every urgent human need is destined to be met, and not every anxiety of the heart has its causes in reality itself. Experience might suggest another counsel. The search for the ineffable inimitable is fruitless.
An inquiry of this sort must inevitably end by disappointing every party to the dispute while exhilarating none. Members of the intelligent-design community have been stalwart in their attack on Darwin’s theory of evolution. This has been their strength. Mindful of the old-fashioned wisdom that you can’t beat something with nothing, they have proposed design as an alternative, without ever quite realizing the extent to which Darwinism and design share very similar weaknesses.
Darwin’s theory of evolution makes use of a fantastic extrapolation in which the mechanism of random variation and natural selection, responsible for a number of trivial local effects, is read into the great global record of life itself, so that the development of antibiotic resistance in bacteria becomes the model for the development of such complex structures as the mammalian eye, or the immuno-defense system. As design theorists have noted, this is rather like arguing that since kangaroos can hop, they can, given time enough and chance, learn to fly.
The same faith in the same flaw—the fallacy of extrapolation, to give it a name—runs through design theories as well. Given the human ability to fashion objects and create artifacts, design theorists argue that some similar process is at work in the construction of biological structures. Is it? What justifies the assumption that a process that accounts for the construction of a pocket watch, a sundress, or a nation-state might also account for the emergence of the blood-clotting cascade or the immuno-defense system? It is entirely possible, after all, that complicated biological structures lie quite beyond the possibility of design, no matter the designer and regardless of the principles employed. Faith in a designed universe might well be rather like faith in a planned economy, a doctrinal commitment that cannot survive a confrontation with experience.
It has likewise been the ambition of the design-theory community to interpose design as a category that might occupy the contested territory between necessity and chance. No philosophical argument renders this ambition absurd; and no scientific theory suggests that it is impossible. But if theories of design cannot be ruled out of court by facile invocations of philosophical naturalism, design theorists have for their part tended to underestimate the enduring intellectual force behind Monod’s claim that the categories of chance and necessity are mutually exclusive and jointly exhaustive.
What interested him the most, Albert Einstein once remarked, was whether the deity had any choice in the creation of the universe. Einstein is very often presumed to have had a religious sensibility; this remark reveals its profoundly austere nature. For Einstein’s comment suggests two, and only two, possibilities: a deity with no choice, or a deity with a world to choose and then to make.
If the deity’s actions in creating the world are necessary, it is the principles that explain the nature of this world that are of interest and compel attention. In the end, I suspect, those principles will be matters of pure logic, simple and compelling, and if ever they are discovered, human beings will share with the deity pleasure in their contemplation. But this is a vision that subordinates the designer to the design, one in which the designer is bound to the same wheel of inexorability that binds the rest of us.
If, on the other hand, the universe represents a deity’s free creation, it is the principles that explain his choice that are of interest and compel attention. A universe in which necessity does not control creation must have presented itself to the deity as an immense ocean of possibilities, our own universe an island amid other possible islands, the whole archipelago arising from unfathomable nothingness. And those divine possibilities, so far as we can tell, must have been all equally likely.
God alone knows what God is thinking; as suffering men and women have long known, His inscrutability is one of His less attractive features. But if all options are equally likely, and no motives known, we are in the position of observers contemplating a vast, cosmic lottery, one whose outcome is neither favored by the odds nor specified by its designer. It is chance that now returns as the default hypothesis, if only because it is the only hypothesis that is completely consistent with our ignorance.
By this means, it might well seem, Monod’s resilient double distinction, chance and necessity, returns to haunt the intellectual scene, with design, having been admitted as a possibility, once more in danger of collapsing along with the contested space between the two categories.
And Paley, poor Paley? Dead at last, or at least not very vigorously alive.
1 For my own contribution to this literature, see “The Deniable Darwin,” COMMENTARY, June 1996.
2 The physicist Matt Young offers this inadvertently rich account of his own inability to read the literature: “Creationists used to argue . . . that there was not enough time for an eye to develop. A computer simulation by Dan Nilsson and Susanne Pelger gave the lie to that claim: Nilsson and Pelger estimated conservatively that 500,000 years was enough time.”
3 For a discussion of the explanatory value of information theory, see my “What Brings a World Into Being?” in the April 2001 COMMENTARY.
4 Dawkins’s remarks, as well as those of Johnson and Williams, appear in Intelligent Design Creationism and Its Critics (2001), edited by Robert T. Pennock.