Even as our understanding has advanced dramatically, our explanations of how life emerged on earth continue to rest on yet-to-be-proved…
For those who are studying aspects of the origin of life, the question no longer seems to be whether life could have originated by chemical processes involving non-biological components but, rather, what pathway might have been followed.
—National Academy of Sciences (1996)
It is 1828, a year that encompassed the death of Shaka, the Zulu king, the passage in the United States of the Tariff of Abominations, and the battle of Las Piedras in South America. It is, as well, the year in which the German chemist Friedrich Wöhler announced the synthesis of urea from cyanic acid and ammonia.
Discovered by H.M. Rouelle in 1773, urea is the chief constituent of urine. Until 1828, chemists had assumed that urea could be produced only by a living organism. Wöhler provided the most convincing refutation imaginable of this thesis. His synthesis of urea was noteworthy, he observed with some understatement, because “it furnishes an example of the artificial production of an organic, indeed a so-called animal substance, from inorganic materials.”
Wöhler's work initiated a revolution in chemistry; but it also initiated a revolution in thought. To the extent that living systems are chemical in their nature, it became possible to imagine that they might be chemical in their origin; and if chemical in their origin, then plainly physical in their nature, and hence a part of the universe that can be explained in terms of “the model for what science should be.”
In a letter written to his friend, Sir Joseph Hooker, several decades after Wöhler's announcement, Charles Darwin allowed himself to speculate. Invoking “a warm little pond” bubbling up in the dim inaccessible past, Darwin imagined that given “ammonia and phosphoric salts, light, heat, electricity, etc. present,” the spontaneous generation of a “protein compound” might follow, with this compound “ready to undergo still more complex changes” and so begin Darwinian evolution itself.
Time must now be allowed to pass. Shall we say 60 years or so? Working independently, J.B.S. Haldane in England and A.I. Oparin in the Soviet Union published influential studies concerning the origin of life. Before the era of biological evolution, they conjectured, there must have been an era of chemical evolution taking place in something like a pre-biotic soup. A reducing atmosphere prevailed, dominated by methane and ammonia, in which hydrogen atoms, by donating their electrons, reduced other compounds and so promoted various chemical reactions. Energy was at hand in the form of electrical discharges, and thereafter complex hydrocarbons appeared on the surface of the sea.
The publication of Stanley Miller's paper, “A Production of Amino Acids Under Possible Primitive Earth Conditions,” in the May 1953 issue of Science completed the inferential arc initiated by Friedrich Wöhler 125 years earlier. Miller, a graduate student, did his work at the instigation of Harold Urey. Because he did not contribute directly to the experiment, Urey insisted that his name not be listed on the paper itself. But their work is now universally known as the Miller-Urey experiment, providing evidence that a good deed can be its own reward.
By drawing inferences about pre-biotic evolution from ordinary chemistry, Haldane and Oparin had opened an imaginary door. Miller and Urey barged right through. Within the confines of two beakers, they re-created a simple pre-biotic environment. One beaker held water; the other, connected to the first by a closed system of glass tubes, held hydrogen, methane, and ammonia. The two beakers were thus assumed to simulate the pre-biotic ocean and its atmosphere. Water in the first could pass by evaporation to the gases in the second, with vapor returning to the original alembic by means of condensation.
Then Miller and Urey allowed an electrical spark to pass continually through the mixture of gases in the second beaker, the gods of chemistry controlling the reactions that followed with very little or no human help. A week after they had begun their experiment, Miller and Urey discovered that in addition to a tarry residue—its most notable product—their potent little planet had yielded a number of the amino acids found in living systems.
The effect among biologists (and the public) was electrifying—all the more so because of the experiment's methodological genius. Miller and Urey had done nothing. Nature had done everything. The experiment alone had parted the cloud of unknowing.
The Double Helix
In April 1953, just four weeks before Miller and Urey would report their results in Science, James Watson and Francis Crick published a short letter in Nature entitled “A Structure for Deoxyribose Nucleic Acid.” The letter is now famous, if only because the exuberant Crick, at least, was persuaded that he and Watson had discovered the secret of life. In this he was mistaken: the secret of life, along with its meaning, remains hidden. But in deducing the structure of deoxyribose nucleic acid (DNA) from X-ray diffraction patterns and various chemical details, Watson and Crick had discovered the way in which life at the molecular level replicates itself.
Formed as a double helix, DNA, Watson and Crick argued, consists of two twisted strings facing each other and bound together by struts. Each string comprises a series of four nitrogenous bases: adenine (A), guanine (G), thymine (T), and cytosine (C). The bases are nitrogenous because their chemical activity is determined by the electrons of the nitrogen atom, and they are bases because they are one of two great chemical clans—the other being the acids, with which they combine to form salts.
Within each strand of DNA, the nitrogenous bases are bound to a sugar, deoxyribose. Sugar molecules are in turn linked to each other by a phosphate group. A base, its sugar, and its phosphate together form a single nucleotide; when nucleotides are connected in a sugar-phosphate chain, they form a polynucleotide. In living DNA, two such chains face each other, their bases touching fingers, A matched to T and C to G. The coincidence between bases is known now as Watson-Crick base pairing.
“It has not escaped our notice,” Watson and Crick observed, “that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material” (emphasis added). Replication proceeds, that is, when a molecule of DNA is unzipped along its internal axis, breaking the hydrogen bonds between the bases. Base pairing then works to prompt both strands of a separated double helix to form a double helix anew.
So Watson and Crick conjectured, and so it has proved.
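The pairing rule at the heart of that conjecture can be put into a few lines of code. What follows is an illustrative sketch only (nothing in Watson and Crick's paper involved programming, and the names here are mine); it also ignores strand direction, since the biological complement is read antiparallel.

```python
# Watson-Crick base pairing as a mapping: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the strand that would pair with `strand`, base by base.
    (Direction is ignored; a real complement is read antiparallel.)"""
    return "".join(PAIR[base] for base in strand)

# Each base determines its partner, so a single strand fixes its complement,
# and the complement of the complement restores the original. That symmetry
# is what lets each separated strand serve as the template for a new helix.
strand = "ATGCCGTA"
partner = complement(strand)            # "TACGGCAT"
assert complement(partner) == strand
```

The two-way determinacy is the whole point: unzip the helix, and either half carries enough information to rebuild the other.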
The Synthesis of Protein
Together with Francis Crick and Maurice Wilkins, James Watson received the Nobel Prize for medicine in 1962. In his acceptance speech in Stockholm before the king of Sweden, Watson had occasion to explain his original research goals. The first was to account for genetic replication. This, he and Crick had done. The second was to describe the “way in which genes control protein synthesis.” This, he was in the course of doing.
DNA is a large, long, and stable molecule. As molecules go, it is relatively inert. It is the proteins, rather, that handle the day-to-day affairs of the cell. Acting as enzymes, and so as agents of change, proteins make possible the rapid metabolism characteristic of modern organisms.
Proteins are formed from the alpha-amino acids, of which there are twenty in living systems. The prefix “alpha” designates the position of the crucial carbon atom in the amino acid, indicating that it lies adjacent to (and is bound up with) a carboxyl group comprising carbon, two oxygen atoms, and hydrogen (COOH). And the proteins are polymers: like DNA, their amino-acid constituents are formed into molecular chains.
But just how does the cell manage to link amino acids to form specific proteins? This was the problem to which Watson alluded as the king of Sweden, lost in a fog of admiration, nodded amiably.
The success of Watson-Crick base pairing had persuaded a number of molecular biologists that DNA undertook protein synthesis by the same process—the formation of symmetrical patterns or “templates”—that governed its replication. After all, molecular replication proceeded by the divinely simple separation-and-recombination of matching (or symmetrical) molecules, with each strand of DNA serving as the template for another. So it seemed altogether plausible that DNA would likewise serve a template function for the amino acids.
It was Francis Crick who in 1955 first observed that this was most unlikely. In a note circulated privately, Crick wrote that “if one considers the physico-chemical nature of the amino-acid side chains, we do not find complementary features on the nucleic acids. Where are the knobby hydrophobic . . . surfaces to distinguish valine from leucine and isoleucine? Where are the charged groups, in specific positions, to go with acidic and basic amino acids?”
Should anyone have missed his point, Crick made it again: “I don't think that anyone looking at DNA or RNA [ribonucleic acid] would think of them as templates for amino acids.”
Had these observations been made by anyone but Francis Crick, they might have been regarded as the work of a lunatic; but in looking at any textbook in molecular biology today, it is clear that Crick was simply noticing what was under his nose. Just where are those “knobby hydrophobic surfaces”? To imagine that the nucleic acids form a template or pattern for the amino acids is a little like trying to imagine a glove fitting over a centipede. But if the nucleic acids did not form a template for the amino acids, then the information they contained—all of the ancient wisdom of the species, after all—could only be expressed by an indirect form of transmission: a code of some sort.
The idea was hardly new. The physicist Erwin Schrödinger had predicted in 1945 that living systems would contain what he called a “code script”; and his short, elegant book, What Is Life?, had exerted a compelling influence on every molecular biologist who read it. Ten years later, the ubiquitous Crick invoked the phrase “sequence hypothesis” to characterize the double idea that DNA sequences spell a message and that a code is required to express it. What remained obscure was both the spelling of the message and the mechanism by which it was conveyed.
The mechanism emerged first. During the late 1950's, François Jacob and Jacques Monod advanced the thesis that RNA acts as the first in a chain of intermediates leading from DNA to the amino acids.
Single- rather than double-stranded, RNA is a nucleic acid: a chip from the original DNA block. Instead of thymine (T), it contains the base uracil (U), and the sugar that it employs along its backbone features an atom of oxygen missing from deoxyribose. But RNA, Jacob and Monod argued, was more than a mere molecule: it was a messenger, an instrument of conveyance, “transcribing” in one medium a message first expressed in another. Among the many forms of RNA loitering in the modern cell, the RNA bound for duties of transcription became known, for obvious reasons, as “messenger” RNA.
In transcription, molecular biologists had discovered a second fundamental process, a companion in arms to replication. Almost immediately thereafter, details of the code employed by the messenger appeared. In 1961, Marshall Nirenberg and J. Heinrich Matthaei announced that they had discovered a specific point of contact between RNA and the amino acids. And then, in short order, the full genetic code emerged. RNA (like DNA) is organized into triplets, so that adjacent sequences of three bases are mapped to a single amino acid. Sixty-four triplets (or codons) govern twenty amino acids. The scheme is universal, or almost so.
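The arithmetic behind those numbers is easily checked. The sketch below is illustrative only (the variable names are mine, not the literature's); it enumerates every three-base word over the four-letter RNA alphabet:

```python
from itertools import product

BASES = "ACGU"  # the RNA alphabet: uracil (U) in place of thymine (T)

# Every ordered triplet over four bases: 4 ** 3 = 64 possible codons.
codons = ["".join(triplet) for triplet in product(BASES, repeat=3)]

assert len(codons) == 64
# Sixty-four codons for only twenty amino acids: the code must assign
# several codons to the same amino acid, i.e., it is degenerate.
AMINO_ACIDS = 20
assert len(codons) > AMINO_ACIDS
```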
The elaboration of the genetic code made possible a remarkably elegant model of the modern cell as a system in which sequences of codons within the nucleic acids act at a distance to determine sequences of amino acids within the proteins: commands issued, responses undertaken. A third fundamental biological process thus acquired molecular incarnation. If replication served to divide and then to duplicate the cell's ancestral message, and transcription to re-express it in messenger RNA, “translation” acted to convey that message from messenger RNA to the amino acids.
For all the boldness and power of this thesis, the details remained on the level of what bookkeepers call general accounting procedures. No one had established a direct—a physical—connection between RNA and the amino acids.
Having noted the problem, Crick also indicated the shape of its solution. “I therefore proposed a theory,” he would write retrospectively, “in which there were twenty adaptors (one for each amino acid), together with twenty special enzymes. Each enzyme would join one particular amino acid to its own special adaptor.”
In early 1969, at roughly the same time that a somber Lyndon Johnson was departing the White House to return to the Pedernales, the adaptors whose existence Crick had predicted came into view. There were twenty, just as he had suggested. They were short in length; they were specific in their action; and they were nucleic acids. Collectively, they are now designated “transfer” RNA (tRNA).
Folded like a cloverleaf, transfer RNA serves physically as a bridge between messenger RNA and an amino acid. One arm of the cloverleaf is the anticodon region. The three nucleotide bases that it contains are curved around the arm's bulb-end; they are matched by Watson-Crick base pairing to bases on the messenger RNA. The other end of the cloverleaf is an acceptor region. It is here that an amino acid must go, with the structure of tRNA suggesting a complicated female socket waiting to be charged by an appropriate male amino acid.
The adaptors whose existence Crick had predicted served dramatically to confirm his hypothesis that such adaptors were needed. But although they brought about a physical connection between the nucleic and the amino acids, the fact that they were themselves nucleic acids raised a question: in the unfolding molecular chain, just what acted to adapt the adaptors to the amino acids? And this, too, was a problem Crick both envisaged and solved: his original suggestion mentioned both adaptors (nucleic acids) and their enzymes (proteins).
And so again it proved. The act of matching adaptors to amino acids is carried out by a family of enzymes, and thus by a family of proteins: the aminoacyl-tRNA synthetases. There are as many such enzymes as there are adaptors. The prefix “aminoacyl” indicates a class of chemical reactions, and it is in aminoacylation that an amino acid, linked through its carboxyl group, is bonded to a molecule of transfer RNA.
Collectively, the enzymes known as synthetases have the power both to recognize specific codons and to select their appropriate amino acid under the universal genetic code. Recognition and selection are ordinarily thought to be cognitive acts. In psychology, they are poorly understood, but within the cell they have been accounted for in chemical terms and so in terms of “the model for what science should be.”
With tRNA appropriately charged, the molecule is conveyed to the ribosome, where the task of assembling sequences of amino acids is then undertaken by still another nucleic acid, ribosomal RNA (rRNA). By these means, the modern cell is at last subordinated to a rich narrative drama. To repeat:
Replication duplicates the genetic message in DNA.
Transcription copies the genetic message from DNA to RNA.
Translation conveys the genetic message from RNA to the amino acids—whereupon, in a fourth and final step, the amino acids are assembled into proteins.
The Central Dogma
It was once again Francis Crick, with his remarkable gift for impressing his authority over an entire discipline, who elaborated these facts into what he called the central dogma of molecular biology. The cell, Crick affirmed, is a divided kingdom. Acting as the cell's administrators, the nucleic acids embody all of the requisite wisdom—where to go, what to do, how to manage—in the specific sequence of their nucleotide bases. Administration then proceeds by the transmission of information from the nucleic acids to the proteins.
The central dogma thus depicts an arrow moving one way, from the nucleic acids to the proteins, and never the other way around. But is anything ever routinely returned, arrow-like, from its target? This is not a question that Crick considered, although in one sense the answer is plainly no. The modern genetic code maps sixty-four codons onto only twenty amino acids; the mapping is many-to-one, and a many-to-one mapping admits no single-valued inverse. There can thus be no code running in the opposite direction.
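A fragment of the standard code is enough to see why. In the sketch below (an illustration, though the six codon assignments shown are the standard ones for leucine), inverting the table collapses distinct codons into a single entry, which is exactly the information loss that makes a reverse code impossible:

```python
# Six codons of the standard genetic code, all specifying leucine.
CODE_EXCERPT = {
    "UUA": "Leu", "UUG": "Leu",
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
}

# A forward lookup is well defined: every codon names one amino acid.
assert CODE_EXCERPT["CUG"] == "Leu"

# Trying to invert the table collapses six entries into one, because a
# many-to-one mapping has no single-valued inverse: given "Leu", there is
# no way to recover which codon produced it.
inverse = {amino_acid: codon for codon, amino_acid in CODE_EXCERPT.items()}
assert len(CODE_EXCERPT) == 6 and len(inverse) == 1
```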
But there is another sense in which Crick's central dogma does engender its own reversal. If the nucleic acids are the cell's administrators, the proteins are its chemical executives: both the staff and the stuff of life. The molecular arrow goes one way with respect to information, but it goes the other way with respect to chemistry.
Replication, transcription, and translation represent the grand unfolding of the central dogma as it proceeds in one direction. The chemical activities initiated by the enzymes represent the grand unfolding of the central dogma as it goes in the other. Within the cell, the two halves of the central dogma combine to reveal a system of coded chemistry, an exquisitely intricate but remarkably coherent temporal tableau suggesting a great army in action.
From these considerations a familiar figure now emerges: the figure of a chicken and its egg. Replication, transcription, and translation are all under the control of various enzymes. But enzymes are proteins, and these particular proteins are specified by the cell's nucleic acids. DNA requires the enzymes in order to undertake the work of replication, transcription, and translation; the enzymes require DNA in order to initiate it. The nucleic acids and the proteins are thus profoundly coordinated, each depending upon the other. Without aminoacyl-tRNA synthetase, there is no translation from RNA; but without DNA, there is no synthesis of aminoacyl-tRNA synthetase.
If the nucleic acids and their enzymes simply chased each other forever around the same cell, the result would be a vicious circle. But life has elegantly resolved the circle in the form of a spiral. The aminoacyl-tRNA synthetase that is required to complete molecular translation enters a given cell from its progenitor or “maternal” cell, where it is specified by that cell's DNA. The enzymes required to make the maternal cell's DNA do its work enter that cell from its maternal line. And so forth.
On the level of intuition and experience, these facts suggest nothing more mysterious than the longstanding truism that life comes only from life. Omnia viva ex vivo, as Latin writers said. It is only when they are embedded in various theories about the origins of life that the facts engender a paradox, or at least a question: in the receding molecular spiral, which came first—the chicken in the form of DNA, or its egg in the form of various proteins? And if neither came first, how could life have begun?
The RNA World
It is 1967, the year of the Six-Day War in the Middle East, the proposal of electroweak unification in particle physics, and the completion of a twenty-year research program devoted to the effects of fluoridation on dental caries in Evanston, Illinois. It is also the year in which Carl Woese, Leslie Orgel, and Francis Crick introduced the hypothesis that “evolution based on RNA replication preceded the appearance of protein synthesis” (emphasis added).
By this time, it had become abundantly clear that the structure of the modern cell was not only more complex than other physical structures but complex in poorly understood ways. And yet no matter how far back biologists traveled into the tunnel of time, certain features of the modern cell were still there, a message sent into the future by the last universal common ancestor. Summarizing his own perplexity in retrospect, Crick would later observe that “an honest man, armed with all the knowledge available to us now, could only state that, in some sense, the origin of life appears at the moment to be almost a miracle.” Very wisely, Crick would thereupon determine never to write another paper on the subject—although he did affirm his commitment to the theory of “directed panspermia,” according to which life originated in some other portion of the universe and, for reasons that Crick could never specify, was simply sent here.
But that was later. In 1967, the argument presented by Woese, Orgel, and Crick was simple. Given those chickens and their eggs, something must have come first. Two possibilities were struck off by a process of elimination. DNA? Too stable and, in some odd sense, too perfect. The proteins? Incapable of dividing themselves, and so, like molecular eunuchs, useful without being fecund. That left RNA. While it was not obviously the right choice for a primordial molecule, it was not obviously the wrong choice, either.
The hypothesis having been advanced—if with no very great sense of intellectual confidence—biologists differed in its interpretation. But they did concur on three general principles. First: that at some time in the distant past, RNA rather than DNA controlled genetic replication. Second: that Watson-Crick base pairing governed ancestral RNA. And third: that RNA once carried on chemical activities of the sort that are now entrusted to the proteins. The paradox of the chicken and the egg was thus resolved by the hypothesis that the chicken was the egg.
The independent discovery in 1981 of the ribozyme—a ribonucleic enzyme—by Thomas Cech and Sidney Altman endowed the RNA hypothesis with the force of a scientific conjecture. Studying the ciliated protozoan Tetrahymena thermophila, Cech discovered to his astonishment a form of RNA capable of inducing cleavage. Where an enzyme might have been busy pulling a strand of RNA apart, there was a ribozyme doing the work instead. That busy little molecule served not only to give instructions: apparently it took them as well, and in any case it did what biochemists had since the 1920's assumed could only be done by an enzyme and hence by a protein.
In 1986, the biochemist Walter Gilbert was moved to assert the existence of an entire RNA “world,” an ancestral state promoted by the magic of this designation to what a great many biologists would affirm as fact. Thus, when the molecular biologist Harry Noller discovered that protein synthesis within the contemporary ribosome is catalyzed by ribosomal RNA (rRNA), and not by any of the familiar, old-fashioned enzymes, it appeared “almost certain” to Leslie Orgel that “there once was an RNA world” (emphasis added).
From Molecular Biology to the Origins of Life
It is perfectly true that every part of the modern cell carries some faint traces of the past. But these molecular traces are only hints. By contrast, to everyone who has studied it, the ribozyme has appeared to be an authentic relic, a solid and palpable souvenir from the pre-biotic past. Its discovery prompted even Francis Crick to the admission that he, too, wished he had been clever enough to look for such relics before they became known.
Thanks to the ribozyme, a great many scientists have become convinced that the “model for what science should be” is achingly close to encompassing the origins of life itself. “My expectation,” remarks David Liu, professor of chemistry and chemical biology at Harvard, “is that we will be able to reduce this to a very simple series of logical events.” Although often overstated, this optimism is by no means irrational. Looking at the modern cell, biologists propose to reconstruct in time the structures that are now plainly there in space.
Research into the origins of life has thus been subordinated to a rational three-part sequence, beginning in the very distant past. First, the constituents of the cell were formed and assembled. These included the nucleotide bases, the amino acids, and the sugars. There followed next the emergence of the ribozyme, endowed somehow with powers of self-replication. With the stage set, a system of coded chemistry then emerged, making possible what the molecular biologist Paul Schimmel has called “the theater of the proteins.” Thus did matters proceed from the pre-biotic past to the very threshold of the last universal common ancestor, whereupon, with inimitable gusto, life began to diversify itself by means of Darwinian principles.
This account is no longer fantasy. But it is not yet fact. That is one reason why retracing its steps is such an interesting exercise, to which we now turn.
It is perhaps four billion years ago. The first of the great eras in the formation of life has commenced. The laws of chemistry are completely in control of things—what else is there? It is Miller Time, the period marking the transition from inorganic to organic chemistry.
According to the impression generally conveyed in both the popular and the scientific literature, the success of the original Miller-Urey experiment was both absolute and unqualified. This, however, is something of an exaggeration. Shortly after Miller and Urey published their results, a number of experienced geochemists expressed reservations. Miller and Urey had assumed a pre-biotic atmosphere in which hydrogen atoms gave up their electrons, reducing other compounds and so promoting chemical activity. Not so, the geochemists contended. The pre-biotic atmosphere was far more nearly neutral than reductive, with little or no methane and a good deal of carbon dioxide.
Nothing in the intervening years has suggested that these sour geochemists were far wrong. Writing in Peptides in 1999, B.M. Rode observed blandly that “modern geochemistry assumes that the secondary atmosphere of the primitive earth (i.e., after diffusion of hydrogen and helium into space) . . . consisted mainly of carbon dioxide, nitrogen, water, sulfur dioxide, and even small amounts of oxygen.” This is not an environment calculated to induce excitement.
Until recently, the chemically unforthcoming nature of the early atmosphere remained an embarrassing secret among evolutionary biologists, like an uncle known privately to dress in women's underwear; if biologists were disposed in public to acknowledge the facts, they did so by remarking that every family has one. This has now changed. The issue has come to seem troubling. A recent paper in Science has suggested that previous conjectures about the pre-biotic atmosphere were seriously in error. A few researchers have argued that a reducing atmosphere is not, after all, quite so important to pre-biotic synthesis as previously imagined.
In all this, Miller himself has maintained a far more unyielding and honest perspective. “Either you have a reducing atmosphere,” he has written bluntly, “or you're not going to have the organic compounds required for life.”
If the composition of the pre-biotic atmosphere remains a matter of controversy, this can hardly be considered surprising: geochemists are attempting to revisit an era that lies four billion years in the past. The synthesis of pre-biotic chemicals is another matter. Questions about them come under the discipline of laboratory experiments.
Among the questions is one concerning the nitrogenous base cytosine (C). Not a trace of the stuff has been found in any meteorite. Nothing in comets, either, so far as anyone can tell. It is not buried in the Antarctic. Nor can it be produced by any of the common experiments in pre-biotic chemistry. Beyond the living cell, it has not been found at all.
When, therefore, M.P. Robertson and Stanley Miller announced in Nature in 1995 that they had specified a plausible route for the pre-biotic synthesis of cytosine from cyanoacetaldehyde and urea, the feeling of gratification was very considerable. But it has also been short-lived. In a lengthy and influential review published in the 1999 Proceedings of the National Academy of Sciences, the New York University chemist Robert Shapiro observed that the reaction on which Robertson and Miller had pinned their hopes, although active enough, ultimately went nowhere. All too quickly, the cytosine that they had synthesized transformed itself into the RNA base uracil (U) by a chemical reaction known as deamination, which is nothing more mysterious than the loss of an amino group.
The difficulty, as Shapiro wrote, was that “the formation of cytosine and the subsequent deamination of the product to uracil occur[red] at about the same rate.” Robertson and Miller had themselves reported that after 120 hours, half of their precious cytosine was gone—and it went faster when their reactions took place in saturated urea. In Shapiro's words, “It is clear that the yield of cytosine would fall to 0 percent if the reaction were extended.”
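The force of Shapiro's objection is easy to quantify. The sketch below assumes simple first-order decay (my simplification, not Shapiro's calculation) with the 120-hour half-life that Robertson and Miller themselves reported:

```python
def fraction_remaining(hours: float, half_life: float = 120.0) -> float:
    """Fraction of cytosine surviving after `hours`, assuming first-order
    decay by deamination with the stated half-life."""
    return 0.5 ** (hours / half_life)

# Half the cytosine is gone after 120 hours, as reported...
assert abs(fraction_remaining(120.0) - 0.5) < 1e-12
# ...and after fifty days (1,200 hours) less than a tenth of one percent
# survives. Extend the reaction, and the yield tends toward zero.
assert fraction_remaining(1200.0) < 0.001
```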
If the central chemical reaction favored by Robertson and Miller was self-defeating, it was also contingent on circumstances that were unlikely. Concentrated urea was needed to prompt their reaction; an outhouse whiff would not do. For this same reason, however, the pre-biotic sea, where concentrates disappear too quickly, was hardly the place to begin—as anyone who has safely relieved himself in a swimming pool might confirm with guilty satisfaction. Aware of this, Robertson and Miller posited a different set of circumstances: in place of the pre-biotic soup, drying lagoons. In a fine polemical passage, their critic Shapiro stipulated what would thereby be required:
An isolated lagoon or other body of seawater would have to undergo extreme concentration. . . .
It would further be necessary that the residual liquid be held in an impermeable vessel [in order to prevent cross-reactions].
The concentration process would have to be interrupted for some decades . . . to allow the reaction to occur.
At this point, the reaction would require quenching (perhaps by evaporation to dryness) to prevent loss by deamination.
At the end, one would have a batch of urea in solid form, containing some cytosine (and uracil).
Such a scenario, Shapiro remarked, “cannot be excluded as a rare event on early earth, but it cannot be termed plausible.”
Like cytosine, sugar must also make an appearance in Miller Time, and, like cytosine, it too is difficult to synthesize under plausible pre-biotic conditions.
In 1861, the Russian chemist Alexander Butlerov created a sugar-like substance from a mixture of formaldehyde and lime. Subsequently refined by a long line of organic chemists, Butlerov's so-called formose reaction has been an inspiration to origins-of-life researchers ever since.
The reaction is today initiated by an alkalizing agent, such as thallium or lead hydroxide. There follows a long induction period, with a number of intermediates bubbling up. The formose reaction is auto-catalytic in the sense that it keeps on going: the carbohydrates that it generates serve to prime the reaction in an exponentially growing feedback loop until the initial stock of formaldehyde is exhausted. With the induction over, the formose reaction yields a number of complex sugars.
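The autocatalytic feedback can be caricatured in a few lines. The model below is a toy (the rate constant, step size, and initial amounts are arbitrary assumptions of mine, with no chemical significance): product catalyzes its own formation, so growth is at first exponential and halts only when the formaldehyde feedstock is exhausted.

```python
def formose_toy(formaldehyde=1.0, sugar=1e-6, k=10.0, dt=0.001, steps=5000):
    """Euler integration of dS/dt = k * F * S: the product S feeds back
    as its own catalyst while the feedstock F is consumed."""
    for _ in range(steps):
        delta = min(k * formaldehyde * sugar * dt, formaldehyde)
        formaldehyde -= delta
        sugar += delta
    return formaldehyde, sugar

remaining, produced = formose_toy()
assert remaining < 0.01   # the feedstock is essentially exhausted...
assert produced > 0.99    # ...having been converted into product
```

A trace of product seeds the reaction; once underway, it consumes everything available, which is what “auto-catalytic” means here.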
Nonetheless, it is not sugars in general that are wanted from Miller Time but a particular form of sugar, namely, ribose—and not simply ribose but dextro ribose. Compounds of carbon are naturally right-handed or left-handed, depending on the direction in which they rotate polarized light. The ribose in living systems is right-handed, hence the prefix “dextro.” But the sugars exiting the formose reaction are racemic, that is, both left- and right-handed, and the yield of usable ribose is negligible.
While nothing has as yet changed the fundamental fact that it is very hard to get the right kind of sugar from any sort of experiment, in 1990 the Swiss chemist Albert Eschenmoser was able to change substantially the way in which the sugars appeared. Reaching with the hand of a master into the formose reaction itself, Eschenmoser altered two molecules by adding a phosphate group to them. This slight change prevented the formation of the alien sugars that cluttered the classical formose reaction. The products, Eschenmoser reported, included among other things a mixture of ribose-2,4-diphosphate. Although the mixture was racemic, it did contain a molecule close to the ribose needed by living systems. With a few chemical adjustments, Eschenmoser could plausibly claim, the pre-biotic route to the synthesis of sugar would lie open.
It remained for skeptics to observe that Eschenmoser's ribose reactions were critically contingent on Eschenmoser himself, and at two points: the first when he attached phosphate groups to a number of intermediates in the formose reaction, and the second when he removed them.
What had given the original Miller-Urey experiment its power to excite the imagination was the sense that, having set the stage, Miller and Urey exited the theater. By contrast, Eschenmoser remained at center stage, giving directions and in general proving himself indispensable to the whole scene.
Events occurring in Miller Time would thus appear to depend on the large assumption, still unproved, that the early atmosphere was reductive, while two of the era's chemical triumphs, cytosine and sugar, remain for the moment beyond the powers of contemporary pre-biotic chemistry.
From Miller Time to Self-Replicating RNA
In the grand progression by which life arose from inorganic matter, Miller Time has been concluded. It is now 3.8 billion years ago. The chemical precursors to life have been formed. A limpid pool of nucleotides is somewhere in existence. A new era is about to commence.
The historical task assigned to this era is a double one: forming chains of nucleic acids from nucleotides, and discovering among them those capable of reproducing themselves. Without the first, there is no RNA; and without the second, there is no life.
In living systems, polymerization or chain-formation proceeds by means of the cell's invaluable enzymes. But in the grim inhospitable pre-biotic, no enzymes were available. And so chemists have assigned their task to various inorganic catalysts. J.P. Ferris and G. Ertem, for instance, have reported that activated nucleotides bond covalently when embedded on the surface of montmorillonite, a kind of clay. This example, combining technical complexity with general inconclusiveness, may stand for many others.
In any event, polymerization having been concluded—by whatever means—the result was (in the words of Gerald Joyce and Leslie Orgel) “a random ensemble of polynucleotide sequences”: long molecules emerging from short ones, like fronds on the surface of a pond. Among these fronds, nature is said to have discovered a self-replicating molecule. But how?
Darwinian evolution is plainly unavailing in this exercise or that era, since Darwinian evolution begins with self-replication, and self-replication is precisely what needs to be explained. But if Darwinian evolution is unavailing, so, too, is chemistry. The fronds comprise “a random ensemble of polynucleotide sequences” (emphasis added); but no principle of organic chemistry suggests that aimless encounters among nucleic acids must lead to a chain capable of self-replication.
If chemistry is unavailing and Darwin indisposed, what is left as a mechanism? The evolutionary biologist's finest friend: sheer dumb luck.
Was nature lucky? It depends on the payoff and the odds. The payoff is clear: an ancestral form of RNA capable of replication. Without that payoff, there is no life, and obviously, at some point, the payoff paid off. The question is the odds.
For the moment, no one knows how precisely to compute those odds, if only because within the laboratory, no one has conducted an experiment leading to a self-replicating ribozyme. But the minimum length or “sequence” that is needed for a contemporary ribozyme to undertake what the distinguished geochemist Gustaf Arrhenius calls “demonstrated ligase activity” is known. It is roughly 100 nucleotides.
Whereupon, just as one might expect, things blow up very quickly. As Arrhenius notes, there are 4^100, or roughly 10^60, nucleotide sequences that are 100 nucleotides in length. This is an unfathomably large number. It exceeds the number of atoms contained in the earth, as well as the age of the universe in seconds. If the odds in favor of self-replication are 1 in 10^60, no betting man would take them, no matter how attractive the payoff, and neither presumably would nature.
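The arithmetic can be checked directly. A minimal sketch; the comparison figure (the age of the universe in seconds, taken as roughly 13.8 billion years) is a standard rough estimate:

```python
# Count the distinct nucleotide sequences of length 100:
# four possible bases at each of 100 positions.
sequences = 4 ** 100
print(f"{sequences:.1e}")   # about 1.6e+60

# The age of the universe in seconds is minute by comparison.
age_of_universe_s = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 seconds
print(sequences > age_of_universe_s)               # True
```

Even sampling one sequence per second for the entire history of the universe would touch only an infinitesimal fraction of the space.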
“Solace from the tyranny of nucleotide combinatorials,” Arrhenius remarks in discussing this very point, “is sought in the feeling that strict sequence specificity may not be required through all the domains of a functional oligomer, thus making a large number of library items eligible for participation in the construction of the ultimate functional entity.” Allow me to translate: why assume that self-replicating sequences are apt to be rare just because they are long? They might have been quite common.
They might well have been. And yet all experience is against it. Why should self-replicating RNA molecules have been common 3.6 billion years ago when they are impossible to discern under laboratory conditions today? No one, for that matter, has ever seen a ribozyme capable of any form of catalytic action that is not very specific in its sequence and thus unlike even closely related sequences. No one has ever seen a ribozyme able to undertake chemical action without a suite of enzymes in attendance. No one has ever seen anything like it.
The odds, then, are daunting; and when considered realistically, they are even worse than this already alarming account might suggest. The discovery of a single molecule with the power to initiate replication would hardly be sufficient to establish replication. What template would it replicate against? We need, in other words, at least two, causing the odds of their joint discovery to lengthen from 1 in 10^60 to 1 in 10^120. Those two sequences would have been needed in roughly the same place. And at the same time. And organized in such a way as to favor base pairing. And somehow held in place. And buffered against competing reactions. And productive enough so that their duplicates would not at once vanish in the soundless sea.
In contemplating the discovery by chance of two RNA sequences a mere 40 nucleotides in length, Joyce and Orgel concluded that the requisite “library” would require 10^48 possible sequences. Given the weight of RNA, they observed gloomily, the relevant sample space would exceed the mass of the earth. And this is the same Leslie Orgel, it will be remembered, who observed that “it was almost certain that there once was an RNA world.”
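Joyce and Orgel's estimate can be reproduced with back-of-the-envelope arithmetic. A sketch, assuming an average nucleotide mass of roughly 330 daltons, a standard figure that is not given in the text:

```python
# Mass of a "library" of 1e48 distinct RNA strands, each 40 nucleotides long.
DALTON_KG = 1.66054e-27      # one atomic mass unit in kilograms
NT_MASS_DA = 330.0           # assumed average mass of one RNA nucleotide

library_size = 10 ** 48      # sequences in the library, per Joyce and Orgel
strand_mass_kg = 40 * NT_MASS_DA * DALTON_KG
library_mass_kg = library_size * strand_mass_kg

EARTH_MASS_KG = 5.972e24
print(f"{library_mass_kg:.1e} kg")       # roughly 2e25 kg
print(library_mass_kg > EARTH_MASS_KG)   # True: several earth masses
```

On these assumptions the library weighs a few times the mass of the planet, which is the gloomy observation being reported.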
To the accumulating agenda of assumptions, then, let us add two more: that without enzymes, nucleotides were somehow formed into chains, and that by means we cannot duplicate in the laboratory, a pre-biotic molecule discovered how to reproduce itself.
From Self-Replicating RNA to Coded Chemistry
A new era is now in prospect, one that begins with a self-replicating form of RNA and ends with the system of coded chemistry characteristic of the modern cell—the cell that divides its labors by assigning to the nucleic acids the management of information and to the proteins the execution of chemical activity. It is 3.6 billion years ago.
It is with the advent of this era that distinctively conceptual problems emerge. The gods of chemistry may now be seen receding into the distance. The cell's system of coded chemistry is determined by two discrete combinatorial objects: the nucleic acids and the amino acids. These objects are discrete because, just as there are no fractional sentences containing three-and-a-half words, there are no fractional nucleotide sequences containing three-and-a-half nucleotides, or fractional proteins containing three-and-a-half amino acids. They are combinatorial because both the nucleic acids and the amino acids are combined by the cell into larger structures.
But if information management and its administration within the modern cell are determined by a discrete combinatorial system, the work of the cell is part of a markedly different enterprise. The periodic table notwithstanding, chemical reactions are not combinatorial, and they are not discrete. The chemical bond, as Linus Pauling demonstrated in the 1930's, is based squarely on quantum mechanics. And to the extent that chemistry is explained in terms of physics, it is encompassed not only by “the model for what science should be” but by the system of differential equations that play so conspicuous a role in every one of the great theories of mathematical physics.
What serves to coordinate the cell's two big shots of information management and chemical activity, and so to coordinate two fundamentally different structures, is the universal genetic code. To capture the remarkable nature of the facts in play here, it is useful to stress the word code.
By itself, a code is familiar enough: an arbitrary mapping or a system of linkages between two discrete combinatorial objects. The Morse code, to take a familiar example, coordinates dashes and dots with letters of the alphabet. To note that codes are arbitrary is to note the distinction between a code and a purely physical connection between two objects. To note that codes embody mappings is to embed the concept of a code in mathematical language. To note that codes reflect a linkage of some sort is to return the concept of a code to its human uses.
In every normal circumstance, the linkage comes first and represents a human achievement, something arising from a point beyond the coding system. (The coordination of dot-dot-dot-dash-dash-dash-dot-dot-dot with the distress signal S-O-S is again a familiar example.) Just as no word explains its own meaning, no code establishes its own nature.
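The arbitrariness that distinguishes a code from a merely physical connection can be made concrete. A minimal sketch; the two-letter table is an illustrative fragment of the Morse code, not the full system:

```python
# A code is an arbitrary mapping between two discrete combinatorial objects:
# here, letters on one side and dot-dash signals on the other.
morse = {"S": "...", "O": "---"}   # abbreviated table, for illustration

def encode(message, table):
    """Translate a message letter by letter through the given code table."""
    return " ".join(table[letter] for letter in message)

print(encode("SOS", morse))        # ... --- ...

# Swap the assignments and the result is a different but equally serviceable
# code: nothing in the physics of dots and dashes fixes which letter is which.
swapped = {"S": "---", "O": "..."}
print(encode("SOS", swapped))      # --- ... ---
```

The point of the swap is the one made above: the linkage between signal and letter is imposed from outside the system, not dictated by the signals themselves.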
The conceptual question now follows. Can the origins of a system of coded chemistry be explained in a way that makes no appeal whatsoever to the kinds of facts that we otherwise invoke to explain codes and languages, systems of communication, the impress of ordinary words on the world of matter?
In this regard, it is worth recalling that, as Hubert Yockey observes in Information Theory, Evolution, and the Origin of Life (2005), “there is no trace in physics or chemistry of the control of chemical reactions by a sequence of any sort or of a code between sequences.”
Writing in a 2001 issue of the journal RNA, the microbiologist Carl Woese referred ominously to the “dark side of molecular biology.” DNA replication, Woese wrote, is the extraordinarily elegant expression of the structural properties of a single molecule: zip down, divide, zip up. The transcription into RNA follows suit: copy and conserve. In each of these two cases, structure leads to function. But where is the coordinating link between the chemical structure of DNA and the third step, namely, translation? When it comes to translation, the apparatus is baroque: it is incredibly elaborate, and it does not reflect the structure of any molecule.
These reflections prompted Woese to a somber conclusion: if “the nucleic acids cannot in any way recognize the amino acids,” then there is no “fundamental physical principle” at work in translation (emphasis added).
But Woese's diagnosis of disorder is far too partial; the symptoms he regards as singular are in fact widespread. What holds for translation holds as well for replication and transcription. The nucleic acids cannot directly recognize the amino acids (and vice versa), but they cannot directly replicate or transcribe themselves, either. Both replication and translation are enzymatically driven, and without those enzymes, a molecule of DNA or RNA would do nothing whatsoever. Contrary to what Woese imagines, no fundamental physical principles appear directly at work anywhere in the modern cell.
The most difficult and challenging problem associated with the origins of life is now in view. One half of the modern system of coded chemistry—the genetic code and the sequences it conveys—is, from a chemical perspective, arbitrary. The other half of the system of coded chemistry—the activity of the proteins—is, from a chemical perspective, necessary. In life, the two halves are coordinated. The problem follows: how did that—the whole system—get here?
The prevailing opinion among molecular biologists is that questions about molecular-biological systems can only be answered by molecular-biological experiments. The distinguished molecular biologist Hiroaki Suga has recently demonstrated the strengths and the limitations of the experimental method when confronted by difficult conceptual questions like the one I have just posed.
The goal of Suga's experiment was to show that a set of RNA catalysts (or ribozymes) could well have played the role now played in the modern cell by the protein family of aminoacyl-tRNA synthetases. Until his work, Suga reports, there had been no convincing demonstration that a ribozyme was able to perform the double function of a synthetase—that is, recognizing both a form of transfer RNA and an amino acid. But in Suga's laboratory, just such a molecule made a now-celebrated appearance. With an amino acid attached to its tail, the ribozyme managed to cleave itself and, like a snake, affix its amino-acid cargo onto its head. What is more, it could conduct this exercise backward, shifting the amino acid from its head to its tail again. The chemical reactions involved acylation: precisely the reactions undertaken by synthetases in the modern cell.
Hiroaki Suga's experiment was both interesting and ingenious, prompting a reaction perhaps best expressed as, “Well, would you look at that!” It has altered the terms of debate by placing a number of new facts on the table. And yet, as so often happens in experimental pre-biotic chemistry, it is by no means clear what interpretation the facts will sustain.
Do Suga's results really establish the existence of a primitive form of coded chemistry? Although unexpected in context, the coordination he achieved between an amino acid and a form of transfer RNA was never at issue in principle. The question is whether what was accomplished in establishing a chemical connection between these two molecules was anything like establishing the existence of a code. If so, then organic chemistry itself could properly be described as the study of codes, thereby erasing the meaning of a code as an arbitrary mapping between discrete combinatorial objects.
Suga, in summarizing the results of his research, captures rhetorically the inconclusiveness of his achievement. “Our demonstration indicates,” he writes, “that catalytic precursor tRNA's could have provided the foundation of the genetic coding system.” But if the association at issue is not a code, however primitive, it could no more be the “foundation” of a code than a feather could be the foundation of a building. And if it is the foundation of a code, then what has been accomplished has been accomplished by the wrong agent.
In Suga's experiment, there was no sign that the execution of chemical routines fell under the control of a molecular administration, and no sign, either, that the missing molecular administration had anything to do with the execution of chemical routines. The missing molecular administrator was, in fact, Suga himself, as his own account reveals. The relevant features of the experiment, he writes, “allow[ed] us to select active RNA molecules with selectivity toward a desired amino acid” (emphasis added). Thereafter, it was Suga and his collaborators who “applied stringent conditions” to the experiment, undertook “selective amplification of the self-modifying RNA molecules,” and “screened” vigorously for “self-aminoacylation activity” (emphasis added throughout).
If nothing else, the advent of a system of coded chemistry satisfied the most urgent of imperatives: it was needed and it was found. It was needed because once a system of chemical reactions reaches a certain threshold of complexity, nothing less than a system of coded chemistry can possibly master the ensuing chaos. It was found because, after all, we are here.
Precisely these circumstances have persuaded many molecular biologists that the explanation for the emergence of a system of coded chemistry must in the end lie with Darwin's theory of evolution. As one critic has observed in commenting on Suga's experiments, “If a certain result can be achieved by direction in a laboratory by a Suga, surely it can also be achieved by chance in a vast universe.”
A self-replicating ribozyme meets the first condition required for Darwinian evolution to gain purchase. It is by definition capable of replication. And it meets the second condition as well, for, by means of mistakes in replication, it introduces the possibility of variety into the biological world. On the assumption that subsequent changes to the system follow a law of increasing marginal utility, one can then envisage the eventual emergence of a system of coded chemistry—a system that can be explained in terms of “the model for what science should be.”
It was no doubt out of considerations like these that, in coming up against what he called the “dark side of molecular biology,” Carl Woese was concerned to urge upon the biological community the benefits of “an all-out Darwinian perspective.” But the difficulty with “an all-out Darwinian perspective” is that it entails an all-out Darwinian impediment: notably, the assignment of a degree of foresight to a Darwinian process that the process could not possibly possess.
The hypothesis of an RNA world trades brilliantly on the idea that a divided modern system had its roots in some form of molecular symmetry that was then broken by the contingencies of life. At some point in the transition to the modern system, an ancestral form of RNA must have assigned some of its catalytic properties to an emerging family of proteins. This would have taken place at a given historical moment; it is not an artifact of the imagination. Similarly, at some point in the transition to a modern system, an ancestral form of RNA must have acquired the ability to code for the catalytic powers it was discarding. And this, too, must have taken place at a particular historical moment.
The question, of course, is which of the two steps came first. Without life acquiring some degree of foresight, neither step can be plausibly fixed in place by means of any schedule of selective advantages. How could an ancestral form of RNA have acquired the ability to code for various amino acids before coding was useful? But then again, why should “ribozymes in an RNA world,” as the molecular biologists Paul Schimmel and Shana O. Kelley ask, “have expedited their own obsolescence?”
Could the two steps have taken place simultaneously? If so, there would appear to be very little difference between a Darwinian explanation and the frank admission that a miracle was at work. If no miracles are at work, we are returned to the place from which we started, with the chicken-and-egg pattern that is visible when life is traced backward now appearing when it is traced forward.
It is thus unsurprising that writings embodying Woese's “all-out Darwinian perspective” are dominated by references to a number of unspecified but mysteriously potent forces and obscure conditional circumstances. I quote without attribution because the citations are almost generic (emphasis added throughout):
- The aminoacylation of RNA initially must have provided some selective advantage.
- The products of this reaction must have conferred some selective advantage.
- However, the development of a crude mechanism for controlling the diversity of possible peptides would have been advantageous.
- [P]rogressive refinement of that mechanism would have provided further selective advantage.
And so forth—ending, one imagines, in reduction to the all-purpose imperative of Darwinian theory, which is simply that what was must have been.
Now It Is Now
At the conclusion of a long essay, it is customary to summarize what has been learned. In the present case, I suspect it would be more prudent to recall how much has been assumed:
First, that the pre-biotic atmosphere was chemically reductive; second, that nature found a way to synthesize cytosine; third, that nature also found a way to synthesize ribose; fourth, that nature found the means to assemble nucleotides into polynucleotides; fifth, that nature discovered a self-replicating molecule; and sixth, that having done all that, nature promoted a self-replicating molecule into a full system of coded chemistry.
These assumptions are not only vexing but progressively so, ending in a serious impediment to thought. That, indeed, may be why a number of biologists have lately reported a weakening of their commitment to the RNA world altogether, and a desire to look elsewhere for an explanation of the emergence of life on earth. “It's part of a quiet paradigm revolution going on in biology,” the biophysicist Harold Morowitz put it in an interview in New Scientist, “in which the radical randomness of Darwinism is being replaced by a much more scientific law-regulated emergence of life.”
Morowitz is not a man inclined to wait for the details to accumulate before reorganizing the vista of modern biology. In a series of articles, he has argued for a global vision based on the biochemistry of living systems rather than on their molecular biology or on Darwinian adaptations. His vision treats the living system as more fundamental than its particular species, claiming to represent the “universal and deterministic features of any system of chemical interactions based on a water-covered but rocky planet such as ours.”
This view of things—metabolism first, as it is often called—is not only intriguing in itself but is enhanced by a firm commitment to chemistry and to “the model for what science should be.” It has been argued with great vigor by Morowitz and others. It represents an alternative to the RNA world. It is a work in progress, and it may well be right. Nonetheless, it suffers from one outstanding defect. There is as yet no evidence that it is true.
It is now more than 175 years since Friedrich Wöhler announced the synthesis of urea. It would be the height of folly to doubt that our understanding of life's origins has been immeasurably improved. But whether it has been immeasurably improved in a way that vigorously confirms the daring idea that living systems are chemical in their origin and so physical in their nature—that is another question entirely.
In “On the Origins of the Mind,” I tried to show that much can be learned by studying the issue from a computational perspective. Analogously, in contemplating the origins of life, much—in fact, more—can be learned by studying the issue from the perspective of coded chemistry. In both cases, however, what seems to lie beyond the reach of “the model for what science should be” is any success beyond the local. All questions about the global origins of these strange and baffling systems seem to demand answers that the model itself cannot by its nature provide.
It goes without saying that this is a tentative judgment, perhaps only a hunch. But let us suppose that questions about the origins of the mind and the origins of life do lie beyond the grasp of “the model for what science should be.” In that case, we must either content ourselves with its limitations or revise the model. If a revision also lies beyond our powers, then we may well have to say that the mind and life have appeared in the universe for no very good reason that we can discern.
Worse things have happened. In the end, these are matters that can only be resolved in the way that all such questions are resolved. We must wait and see.
1 I used this phrase, borrowed from the mathematicians J.H. Hubbard and B.H. West, in “On the Origins of the Mind” (COMMENTARY, November 2004). The idea that science must conform to a certain model of inquiry is familiar. Hubbard and West identify that model with differential equations, the canonical instruments throughout physics and chemistry.
But the essentials of the model, it seems to me, lie less with the particular means in which it is expressed and more with the constraints that it must meet. The idea behind the “model for what science should be” is that whatever may be a system's initial conditions, or starting point, the laws of its development must be both unique and stable. When they are, the system that results is well posed, and so a proper object of contemplation.
On the Origins of Life
Must-Reads from Magazine
Exactly one week later, a Star Wars cantina of the American extremist right featuring everyone from David Duke to a white-nationalist Twitter personality named “Baked Alaska” gathered in Charlottesville, Virginia, to protest the removal of a statue honoring the Confederate general Robert E. Lee. A video promoting the gathering railed against “the international Jewish system, the capitalist system, and the forces of globalism.” Amid sporadic street battles between far-right and “antifa” (anti-fascist) activists, a neo-Nazi drove a car into a crowd of peaceful counterprotestors, killing a 32-year-old woman.
Here, in the time span of just seven days, was the dual nature of contemporary American anti-Semitism laid bare. The most glaring difference between these two displays of hate lies not so much in their substance—both adhere to similar conspiracy theories articulating nefarious, world-altering Jewish power—but rather their self-characterization. The animosity expressed toward Jews in Charlottesville was open and unambiguous, with demonstrators proudly confessing their hatred in the familiar language of Nazis and European fascists.
The socialists in Chicago, meanwhile, though calling for a literal second Holocaust on the shores of the Mediterranean, would fervently and indignantly deny they are anti-Semitic. On the contrary, they claim the mantle of “anti-fascism” and insist that this identity naturally makes them allies of the Jewish people. As for those Jews who might oppose their often violent tactics, they are at best bystanders to fascism, at worst collaborators in “white supremacy.”
So, whereas white nationalists explicitly embrace a tribalism that excludes Jews regardless of their skin color, the progressives of the DSA and the broader “woke” community conceive of themselves as universalists—though their universalism is one that conspicuously excludes the national longings of Jews and Jews alone. And whereas the extreme right-wingers are sincere in their anti-Semitism, the socialists who called for the elimination of Israel are just as sincere in their belief that they are not anti-Semitic, even though anti-Semitism is the inevitable consequence of their rhetoric and worldview.
The sheer bluntness of far-right anti-Semitism makes it easier to identify and stigmatize as beyond the pale; individuals like David Duke and the hosts of the “Daily Shoah” podcast make no pretense of residing within the mainstream of American political debate. But the humanist appeals of the far left, whose every libel against the Jewish state is paired with a righteous invocation of “justice” for the Palestinian people, invariably trigger repetitive and esoteric debates over whether this or that article, allusion, allegory, statement, policy, or political initiative is anti-Semitic or just critical of Israel. What this difference in self-definition means is that there is rarely, if ever, any argument about the substantive nature of right-wing anti-Semitism (despicable, reprehensible, wicked, choose your adjective), while the very existence of left-wing anti-Semitism is widely doubted and almost always indignantly denied by those accused of practicing it.T o be sure, these recent manifestations of anti-Semitism occur on the left and right extremes. And statistics tell a rather comforting story about the state of anti-Semitism in America. Since the Anti-Defamation League began tracking it in 1979, anti-Jewish hate crime is at an historic low; indeed, it has been declining since a recent peak of 1,554 incidents in 2006. America, for the most part, remains a very philo-Semitic country, one of the safest, most welcoming countries for Jews on earth. A recent Pew poll found Jews to be the most admired religious group in the United States.1 If American Jews have anything to dread, it’s less anti-Semitism than the loss of Jewish peoplehood through assimilation, that is being “loved to death” by Gentiles.2 Few American Jews can say that anti-Semitism has a seriously deleterious impact on their life, that it has denied them educational or employment opportunities, or that they fear for the physical safety of themselves or their families because of their Jewish identity.
The question is whether the extremes are beginning to move in on the center. In the past year alone, the DSA’s rolls tripled from 8,000 to 25,000 dues-paying members, who have established a conspicuous presence on social media reaching far beyond what their relatively miniscule numbers attest. The DSA has been the subject of widespread media coverage, ranging from the curious to the adulatory. The white supremacists, meanwhile, found themselves understandably heartened by the strange difficulty President Donald Trump had in disavowing them. He claimed, in fact, that there had been some “very fine people” among their ranks. “Thank you President Trump for your honesty & courage to tell the truth about #Charlottesville,” tweeted David Duke, while the white-nationalist Richard Spencer said, “I’m proud of him for speaking the truth.”
Indeed, among the more troubling aspects of our highly troubling political predicament—and one that, from a Jewish perspective, provokes not a small amount of angst—is that so many ideas, individuals, and movements that could once reliably be categorized as “extreme,” in the literal sense of articulating the views of a very small minority, are no longer so easily dismissed. The DSA is part of a much broader revival of the socialist idea in America, as exemplified by the growing readership of journals like Jacobin and Current Affairs, the popularity of the leftist Chapo Trap House podcast, and the insurgent presidential campaign of self-described democratic socialist Bernie Sanders—who, according to a Harvard-Harris poll, is now the most popular politician in the United States. Since 2015, the average age of a DSA member dropped from 64 to 30, and a 2016 Harvard poll found a majority of Millennials do not support capitalism.
Meanwhile, the Republican Party of Donald Trump offers “nativism and culture war wedges without the Reaganomics,” according to Nicholas Grossman, a lecturer in political science at the University of Illinois. A party that was once reliably internationalist and assertive against Russian aggression now supports a president who often preaches isolationism and never has even a mildly critical thing to say about the KGB thug ruling over the world’s largest nuclear arsenal.
Like ripping the bandage off an ugly and oozing wound, Trump’s presidential campaign unleashed a bevy of unpleasant social forces that at the very least have an indirect bearing on Jewish welfare. The most unpleasant of those forces has been the so-called alternative right, or “alt-right,” a highly race-conscious political movement whose adherents are divided on the “JQ” (Jewish Question). Throughout last year’s campaign, Jewish journalists (this author included) were hit with a barrage of luridly anti-Semitic Twitter messages from self-described members of the alt-right. The tamer missives instructed us to leave America for Israel, others superimposed our faces onto the bodies of concentration camp victims.3
I do not believe Donald Trump is himself an anti-Semite, if only because anti-Semitism is mainly a preoccupation—as distinct from a prejudice—and Trump is too narcissistic to indulge any preoccupation other than himself. And there is no evidence to suggest that he subscribes to the anti-Semitic conspiracy theories favored by his alt-right supporters. But his casual resort to populism, nativism, and conspiracy theory creates a narrative environment highly favorable to anti-Semites.
Nativism, of which Trump was an early and active practitioner, is never good for the Jews, no matter how affluent or comfortable they may be and regardless of whether they are even the target of its particular wrath. Racial divisions, which by any measure have grown significantly worse in the year since Trump was elected, hurt all Americans, obviously, but they have a distinct impact on Jews, who are left in a precarious position as racial identities calcify. Not only are the newly emboldened white supremacists of the alt-right invariably anti-Semites, but in the increasingly racialist taxonomy of the progressive left—which more and more mainstream liberals are beginning to parrot—Jews are considered possessors of “white privilege” and, thus, members of the class to be divested of its “power” once the revolution comes. In the racially stratified society that both extremes envision, Jews lose out, simultaneously perceived (by the far right) as wily allies and manipulators of ethnic minorities in a plot to mongrelize America and (by the far left) as opportunistic “Zionists” ingratiating themselves with a racist and exploitative “white” establishment that keeps minorities down.

This politics is bad for all Americans, and for Jewish Americans in particular. More and more, one sees the racialized language of the American left being applied to the Middle East conflict, wherein Israel (which is, in point of fact, one of the most racially diverse countries in the world) is referred to as a “white supremacist” state no different from that of apartheid South Africa.
In a book just published by MIT Press, ornamented with a foreword by Cornel West and entitled “Whites, Jews, and Us,” a French-Algerian political activist named Houria Bouteldja asks, “What can we offer white people in exchange for their decline and for the wars that will ensue?” Drawing the Jews into her race war, Bouteldja, according to the book’s jacket copy, “challenges widespread assumptions among the left in the United States and Europe—that anti-Semitism plays any role in Arab–Israeli conflicts, for example, or that philo-Semitism doesn’t in itself embody an oppressive position.” Jew-hatred is virtuous, and appreciation of the Jews is racism.
Few political activists of late have done more to racialize the Arab–Israeli conflict—and, through insidious extension of the American racial hierarchy, designate American Jews as oppressors—than the Brooklyn-born Arab activist Linda Sarsour. An organizer of the Women’s March, Sarsour has seamlessly insinuated herself into a variety of high-profile progressive campaigns, an incongruous position given her reactionary views on topics like women’s rights in Saudi Arabia. (“10 weeks of PAID maternity leave in Saudi Arabia,” she tweets. “Yes PAID. And ur worrying about women driving. Puts us to shame.”) Sarsour, who is of Palestinian descent, claims that one cannot simultaneously be a feminist and a Zionist, when it is the exact opposite that is true: No genuine believer in female equality can deny the right of Israel to exist. The Jewish state respects the rights of women more than do any of its neighbors. In an April 2017 interview, Sarsour said that she had become a high-school teacher for the purpose of “inspiring young people of color like me.” Just three months earlier, however, in a video for Vox, Sarsour confessed, “When I wasn’t wearing hijab I was just some ordinary white girl from New York City.” The donning of Muslim garb, then, confers a racial caste of “color,” which in turn confers virtue, which in turn confers a claim on political power.
This attempt to describe the Israeli–Arab conflict in American racial vernacular marks Jews as white (a perverse mirror of Nazi biological racism) and thus implicates them as beneficiaries of “structural racism,” “white privilege,” and the whole litany of benefits afforded to white people at birth in the form of—to use Ta-Nehisi Coates’s abstruse phrase—the “glowing amulet” of “whiteness.” “It’s time to admit that Arthur Balfour was a white supremacist and an anti-Semite,” reads the headline of a recent piece in—where else?—the Forward, incriminating Jewish nationalism as uniquely perfidious by dint of the fact that, like most men of his time, a (non-Jewish) British official who endorsed the Zionist idea a century ago held views that would today be considered racist. Reading figures like Bouteldja and Sarsour brings to mind the French philosopher Pascal Bruckner’s observation that “the racialization of the world has to be the most unexpected result of the antidiscrimination battle of the last half-century; it has ensured that the battle continuously re-creates the curse from which it is trying to break free.”
If Jews are white, and if white people—as a group—enjoy tangible and enduring advantages over everyone else, then this racially essentialist rhetoric ends up with Jews accused of abetting white supremacy, if not being white supremacists themselves. This is one of the overlooked ways in which the term “white supremacy” has become devoid of meaning in the age of Donald Trump, with everyone and everything from David Duke to James Comey to the American Civil Liberties Union accused of upholding it. Take the case of Ben Shapiro, the Jewish conservative polemicist. At the start of the school year, Shapiro was scheduled to give a talk at UC Berkeley, his alma mater. In advance, various left-wing groups put out a call for protest in which they labeled Shapiro—an Orthodox Jew—a “fascist thug” and “white supremacist.” An inconvenient fact ignored by Shapiro’s detractors is that, according to the ADL, he was the top target of online abuse from actual white supremacists during the 2016 presidential election. (Berkeley ultimately had to spend $600,000 protecting the event from leftist rioters.)
A more pernicious form of this discourse is practiced by left-wing writers who, insincerely claiming to have the interests of Jews at heart, scold them and their communal organizations for not doing enough in the fight against anti-Semitism. Criticizing Jews for not fully signing up with the “Resistance” (which in form and function is fast becoming the 21st-century version of the interwar Popular Front), they then slyly indict Jews for being complicit in not only their own victimization but that of the entire country at the hands of Donald Trump. The first and foremost practitioner of this bullying and rather artful form of anti-Semitism is Jeet Heer, a Canadian comic-book critic who has achieved some repute on the American left due to his frenetic Twitter activity and availability when the New Republic needed to replace its staff that had quit en masse in 2014. Last year, when Heer came across a video of a Donald Trump supporter chanting “JEW-S-A” at a rally, he declared on Twitter: “We really need to see more comment from official Jewish groups like ADL on way Trump campaign has energized anti-Semitism.”
But of course “Jewish groups” have had plenty to say about the anti-Semitism expressed by some Trump supporters—too much, in the view of their critics. Just two weeks earlier, the ADL had released a report analyzing over 2 million anti-Semitic tweets targeting Jewish journalists over the previous year. This wasn’t the first time the ADL raised its voice against Trump and the alt-right movement he emboldened, nor would it be the last. Indeed, two minutes’ worth of Googling would have shown Heer that his pronouncements about organizational Jewish apathy were wholly without foundation.4
It’s tempting to dismiss Heer’s observation as mere “concern trolling,” a form of Internet discourse characterized by insincere expressions of worry. But what he did was nastier. Immediately presented with evidence for the inaccuracy of his claims, he sneered back with a bit of wisdom from the Jewish sage Hillel the Elder, recast as a mild threat: “If I am not for myself, who will be for me?” In other words: How can you Jews expect anyone to care about your kind if you don’t sufficiently oppose—as arbitrarily judged by moi, Jeet Heer—Donald Trump?
If this sort of critique were coming from a Jewish donor upset that his preferred organization wasn’t doing enough to combat anti-Semitism, or a Gentile with a proven record of concern for Jewish causes, it wouldn’t have turned the stomach. What made Heer’s interjection revolting is that, to put it mildly, he’s not exactly known for being sympathetic toward the Jewish plight. In 2015, Heer put his name to a petition calling upon an international comic-book festival to drop the Israeli company SodaStream as a co-sponsor because the Jewish state is “built on the mass ethnic cleansing of Palestinian communities and sustained through racism and discrimination.” Heer’s name appeared alongside that of Carlos Latuff, a Brazilian cartoonist who won second place in the Iranian government’s 2006 International Holocaust Cartoon Competition. For his writings on Israel, Heer has been praised as being “very good on the conflict” by none other than Philip Weiss, proprietor of the anti-Semitic hate site Mondoweiss.
In light of this track record, Heer’s newfound concern about anti-Semitism appeared rather dubious. Indeed, the bizarre way in which he expressed this concern—as, ultimately, a critique not of anti-Semitism per se but of the country’s foremost Jewish civil-rights organization—suggests he cares about anti-Semitism insofar as its existence can be used as a weapon to beat his political adversaries. And since the incorrigibly Zionist American Jewish establishment ranks high on that list (just below that of Donald Trump and his supporters), Heer found a way to blame it for anti-Semitism. And what does that tell you? It tells you that—presented with a 16-second video of a man chanting “JEW-S-A” at a Donald Trump rally—Heer’s first impulse was to condemn not the anti-Semite but the Jews.
Heer isn’t the only leftist (or New Republic writer) to assume this rhetorical cudgel. In a piece entitled “The Dismal Failure of Jewish Groups to Confront Trump,” one Stephen Lurie attacked the ADL for advising its members to stay away from the Charlottesville “Unite the Right Rally” and let police handle any provocations from neo-Nazis. “We do not have a Jewish organizational home for the fight against fascism,” he quotes a far-left Jewish activist, who apparently thinks that we live in the Weimar Republic and not a stable democracy in which law-enforcement officers and not the balaclava-wearing thugs of antifa maintain the peace. Like Jewish Communists of yore, Lurie wants to bully Jews into abandoning liberalism for the extreme left, under the pretext that mainstream organizations just won’t cut it in the fight against “white supremacy.” Indeed, Lurie writes, some “Jewish institutions and power players…have defended and enabled white supremacy.” The main group he fingers with this outrageous slander is the Republican Jewish Coalition, the implication being that this explicitly partisan Republican organization’s discreet support for the Republican president “enables white supremacy.”
It is impossible to imagine Heer, Lurie, or other progressive writers similarly taking the NAACP to task for its perceived lack of concern about racism, or castigating the Human Rights Campaign for insufficiently combating homophobia. No, it is only the cowardice of Jews that is condemned—condemned for supposedly ignoring a form of bigotry that, when expressed on the left, these writers themselves ignore or even defend. Such logical gymnastics are what happen when, at base, one fundamentally resents Jews: You end up blaming them for anti-Semitism. Blaming Jews for not caring enough about anti-Semitism is emotionally the same as claiming that Jews are to blame for anti-Semitism. Both signal an envy and resentment of Jews predicated upon a belief that they have some kind of authority that the claimant doesn’t and therefore needs to undermine.

This past election, one could not help but notice how the media seemingly discovered anti-Semitism when it emanated from the right, and then only when its targets were Jews on the left. It was enough to make one ask where they had been when left-wing anti-Semitism had been a more serious and pervasive problem. From at least 1996 (the year Pat Buchanan made his last serious attempt at securing the GOP presidential nomination) to 2016 (when the Republican presidential nominee did more to earn the support of white supremacists and neo-Nazis than any of his predecessors), anti-Semitism was primarily a preserve of the American left. In that two-decade period—spanning the collapse of the Oslo Accords and rise of the Second Intifada to the rancorous debate over the Iraq War and obsession with “neocons” to the presidency of Barack Obama and the 2015 Iran nuclear deal—anti-Israel attitudes and anti-Semitic conspiracy made unprecedented inroads into respectable precincts of the American academy, the liberal intelligentsia, and the Democratic Party.
The main form that left-wing anti-Semitism takes in the United States today is unhinged obsession with the wrongs, real or perceived, of the state of Israel, and the belief that its Jewish supporters in the United States exercise a nefarious control over the levers of American foreign policy. In this respect, contemporary left-wing anti-Semitism is not altogether different from that of the far right, though it usually lacks the biological component deeming Jews a distinct and inferior race. (Consider the left-wing anti-Semite’s eagerness to identify and promote Jewish “dissidents” who can attest to their co-religionists’ craftiness and deceit.) The unholy synergy of left and right anti-Semitism was recently epitomized by former CIA agent and liberal stalwart Valerie Plame’s hearty endorsement, on Twitter, of an article written for an extreme right-wing website by a fellow former CIA officer named Philip Giraldi: “America’s Jews Are Driving America’s Wars.” Plame eventually apologized for sharing the article with her 50,000 followers, but not before insisting that “many neocon hawks are Jewish” and that “just FYI, I am of Jewish descent.”
The main forum in which left-wing anti-Semitism appears is academia. According to the ADL, anti-Semitic incidents on college campuses doubled from 2014 to 2015, the most recent year for which data are available. Writing in National Affairs, Ruth Wisse observes that “not since the war in Vietnam has there been a campus crusade as dynamic as the movement of Boycott, Divestment, and Sanctions against Israel.” Every academic year, a seeming surfeit of controversies erupts on campuses across the country involving the harassment of pro-Israel students and organizations, the disruption of events involving Israeli speakers (even ones who identify as left-wing), and blatantly anti-Semitic outbursts by professors and student activists. There was the Oberlin professor of rhetoric, Joy Karega, who posted statements on social media claiming that Israel had created ISIS and had orchestrated the murderous attack on Charlie Hebdo in Paris. There is the Rutgers associate professor of women’s and gender studies, Jasbir Puar, who popularized the ludicrous term “pinkwashing” to defame Israel’s LGBT acceptance as a massive conspiracy to obscure its oppression of Palestinians. Her latest book, The Right to Maim, academically peer-reviewed and published by Duke University Press, attacks Israel for sparing the lives of Palestinian civilians, accusing its military of “shooting to maim rather than to kill” so that it may keep “Palestinian populations as perpetually debilitated, and yet alive, in order to control them.”
One could go on and on about such affronts not only to Jews and supporters of Israel but to common sense, basic justice, and anyone who believes in the prudent use of taxpayer dollars. That several organizations exist solely for the purpose of monitoring anti-Israel and anti-Semitic agitation on American campuses attests to the prevalence of the problem. But it’s unclear just how reflective these isolated examples of the college experience really are. A 2017 Stanford study purporting to examine the issue interviewed 66 Jewish students at five California campuses noted for “being particularly fertile for anti-Semitism and for having an active presence of student groups critical of Israel and Zionism.” It concluded that “contrary to widely shared impressions, we found a picture of campus life that is neither threatening nor alarmist…students reported feeling comfortable on their campuses, and, more specifically, comfortable as Jews on their campuses.” To the extent that Jewish students do feel pressured, the report attempted to spread the blame around, indicting pro-Israel activists alongside those agitating against it. “[Survey respondents] fear that entering political debate, especially when they feel the social pressures of both Jewish and non-Jewish activist communities, will carry social costs that they are unwilling to bear.”
Yet by its own admission, the report “only engaged students who were either unengaged or minimally engaged in organized Jewish life on their campuses.” Researchers made a study of anti-Semitism, then, by interviewing the Jews least likely to experience it. “Most people don’t really think I’m Jewish because I look very Latina…it doesn’t come up in conversation,” one such student said in an interview. Ultimately, the report revealed more about the attitudes of unengaged (and, thus, uninformed) Jews than about the state of anti-Semitism on college campuses. That may certainly be useful in its own right as a means of understanding how unaffiliated Jews view debates over Israel, but it is not an accurate marker of developments on college campuses more broadly.
A more extensive 2016 Brandeis study of Jewish students at 50 schools found 34 percent agreed at least “somewhat” that their campus has a hostile environment toward Israel. Yet the variation was wide; at some schools, only 3 percent agreed, while at others, 70 percent did. Only 15 percent reported a hostile environment toward Jews. Anti-Semitism was found to be more prevalent at public universities than private ones, with the determinative factor being the presence of a Students for Justice in Palestine chapter on campus. Important context often lost in conversations about campus anti-Semitism, and reassuring to those concerned about it, is that it is simply not the most important issue roiling higher education. “At most schools,” the report found, “fewer than 10 percent of Jewish students listed issues pertaining to either Jews or Israel as among the most pressing on campus.”

For generations, American Jews have depended on anti-Semitism’s remaining within a moral quarantine, a cordon sanitaire, and America has reliably kept this societal virus contained. While there are no major signs that this barricade is breaking down in the immediate future, there are worrying indications on the political horizon.
Surveying the situation at the international level, the declining global position of the United States—both in terms of its hard military and economic power relative to rising challengers and its position as a credible beacon of liberal democratic values—does not portend well for Jews, American or otherwise. American leadership of the free world has, in addition to ensuring Israel’s security, underwritten the postwar liberal world order. And it is the constituent members of that order, the liberal democratic states, that have served as the best guarantor of the Jews’ life and safety over their 6,000-year history. Were America’s global leadership role to diminish or evaporate, it would not only facilitate the rise of authoritarian states like Iran and terrorist movements such as al-Qaeda, committed to the destruction of Israel and the murder of Jews, but inexorably lead to a worldwide rollback of liberal democracy, an outcome that would inevitably redound to the detriment of Jews.
Domestically, political polarization and the collapse of public trust in every American institution save the military are demolishing what little confidence Americans have left in their system and governing elites, not to mention preparing the ground for some ominous political scenarios. Widely cited survey data reveal that the percentage of American Millennials who believe it “essential” to live in a liberal democracy hovers at just over 25 percent. If Trump is impeached or loses the next election, a good 40 percent of the country will be outraged and susceptible to belief in a stab-in-the-back theory accounting for his defeat. Whom will they blame? Perhaps the “neoconservatives,” who disproportionately make up the ranks of Trump’s harshest critics on the right?
Ultimately, the degree to which anti-Semitism becomes a problem in America hinges on the strength of the antibodies within the country’s communal DNA to protect its pluralistic and liberal values. But even if this resistance to tribalism and the cult of personality is strong, it may not be enough to arrest the rise of an intellectual and societal disease that, throughout history, thrives upon economic distress, xenophobia, political uncertainty, ethnic chauvinism, conspiracy theory, and weakening democratic norms.
1 Somewhat paradoxically, according to FBI crime statistics, the majority of religiously based hate crimes target Jews, more than double the number targeting Muslims. This reflects the commitment of the country’s relatively small number of hard-core anti-Semites more than it does pervasive anti-Semitism.
4 The ADL has had to maintain a delicate balancing act in the age of Trump, coming under fire by many conservative Jews for a perceived partisan tilt against the right. This makes Heer’s complaint all the more ignorant — and unhelpful.
Review of 'The Once and Future Liberal' by Mark Lilla
Lilla, a professor at Columbia University, tells us that “the story of how a successful liberal politics of solidarity became a failed pseudo-politics of identity is not a simple one.” And about this, he’s right. Lilla quotes from the feminist authors of the 1977 Combahee River Collective Manifesto: “The most profound and potentially most radical politics come directly out of our own identity, as opposed to working to end somebody else’s oppression.” Feminists sought to put into practice the “radical” and electrifying phrase insisting that “the personal is political.” The phrase, argues Lilla, was generally seen in “a somewhat Marxist fashion to mean that everything that seems personal is in fact political.”
The upshot was fragmentation. White feminists were deemed racist by black feminists—and both were found wanting by lesbians, who also had black and white contingents. “What all these groups wanted,” explains Lilla, “was more than social justice and an end to the [Vietnam] war. They also wanted there to be no space between what they felt inside and what they saw and did in the world.” He goes on: “The more obsessed with personal identity liberals become, the less willing they become to engage in reasoned political debate.” In the end, those on the left came to a realization: “You can win a debate by claiming the greatest degree of victimization and thus the greatest outrage at being subjected to questioning.”
But Lilla’s insights into the emotional underpinnings of political correctness are undercut by an inadequate, almost bizarre sense of history. He appears to be referring to the 1970s when, zigzagging through history, he writes that “no recognition of personal or group identity was coming from the Democratic Party, which at the time was dominated by racist Dixiecrats and white union officials of questionable rectitude.”
What is he talking about? Is Lilla referring to the Democratic Party of Lyndon Johnson, Hubert Humphrey, and George McGovern? Is he referring obliquely to George Wallace? If so, why is Wallace never mentioned? Lilla seems not to know that it was the 1972 McGovern Democratic Convention that introduced convention seats set aside for blacks and women.
At only 140 pages, this is a short book. But even so, Lilla could have devoted a few pages to Frankfurt ideologist Herbert Marcuse and his influence on the left. In the 1960s, Marcuse argued that leftists and liberals were entitled to restrain centrist and conservative speech on the grounds that the universities had to act as a counterweight to society at large. But this was not just rhetoric; in the campus disruption of the early 1970s at schools such as Yale, Cornell, and Amherst, Marcuse’s ideals were pushed to the fore.
If Lilla’s argument comes off as flaccid, perhaps that’s because the aim of The Once and Future Liberal is more practical than principled. “The only way” to protect our rights, he tells the reader, “is to elect liberal Democratic governors and state legislators who’ll appoint liberal state attorneys.” According to Lilla, “the paradox of identity liberalism” is that it undercuts “the things it professes to want,” namely political power. He insists, rightly, that politics has to be about persuasion but then contradicts himself in arguing that “politics is about seizing power to defend the truth.” In other words, Lilla wants a better path to total victory.
Given what Lilla, descending into hysteria, describes as “the Republican rage for destruction,” liberals and Democrats have to win elections lest the civil rights of blacks, women, and gays be rolled back. As proof of the ever-looming danger, he notes that when the “crisis of the mid-1970s threatened…the country turned not against corporations and banks, but against liberalism.” Yet he gives no hint of the trail of liberal failures that led to the crisis of the mid-’70s. You’d never know reading Lilla, for example, that the Black Power movement intensified racial hostilities that were then further exacerbated by affirmative action and busing. And you’d have no idea that, at considerable cost, the poverty programs of the Great Society failed to bring poorer African Americans into the economic mainstream. Nor does Lilla deal with the devotion to Keynesianism that produced inflation without economic growth during the Carter presidency.
Despite his discursive ambling through the recent history of American political life, Lilla has a one-word explanation for identity politics: Reaganism. “Identity,” he writes, is “Reaganism for lefties.” What’s crucial in combating Reaganism, he argues, is to concentrate on our “shared political” status as citizens. “Citizenship is a crucial weapon in the battle against Reaganite dogma because it brings home the fact that we are part of a legitimate common enterprise.” But then he asserts that the “American right uses the term citizenship today as a means of exclusion.” The passage might lead the reader to think that Lilla would take up the question of immigration and borders. But he doesn’t, and the closing passages of the book dribble off into characteristic zigzags. Lilla tells us that “Black Lives Matter is a textbook example of how not to build solidarity” but then goes on, without evidence, to assert the accuracy of the Black Lives Matter claim that African-Americans have been singled out for police mistreatment.
It would be nice to argue that The Once and Future Liberal is a near miss, a book that might have had enduring importance if only it went that extra step. But Lilla’s passing insights on the perils of a politically correct identity politics drown in the rhetoric of conventional bromides that fill most of the pages of this disappointing book.
In Athens several years ago, I had dinner with a man running for the national parliament. I asked him whether he thought he had a shot at winning. He was sure of victory, he told me. “I have hired a very famous political consultant from Washington,” he said. “He is the man who elected Reagan. Expensive. But the best.”
The political genius he then described was a minor political flunky I had met in Washington long ago, a more-or-less anonymous member of the Republican National Committee before he faded from view at the end of Ronald Reagan’s second term. Mutual acquaintances told me he still lived in a nice neighborhood in Northern Virginia, but they never could figure out what the hell he did to earn his money. (This is a recurring mystery throughout the capital.) I had to come to Greece to find the answer.
It is one of the dark arts of Washington, this practice of American political hacks traveling to faraway lands and suckering foreign politicians into paying vast sums for splashy, state-of-the-art, essentially worthless “services.” And it’s perfectly legal. Paul Manafort, who briefly managed Donald Trump’s campaign last summer, was known as a pioneer of the globe-trotting racket. If he hadn’t, as it were, veered out of his gutter into the slightly higher lane of U.S. presidential politics, he likely could have hoovered cash from the patch pockets of clueless clients from Ouagadougou to Zagreb for the rest of his natural life and nobody in Washington would have noticed.
But he veered, and now he and a colleague find themselves indicted by Robert Mueller, the Inspector Javert of the Russian-collusion scandal. When those indictments landed, they instantly set in motion the familiar scramble. Trump fans announced that the indictments were proof that there was no collusion between the Trump campaign and the Russians—or, in the crisp, emphatic phrasing of a tweet by the world’s Number One Trump Fan, Donald Trump: “NO COLLUSION!!!!” The Russian-scandal fetishists in the press corps replied in chorus: It’s still early! Javert required more time, and so will Mueller, and so will they.
A good Washington scandal requires a few essential elements. One is a superabundance of information. From these data points, conspiracy-minded reporters can begin to trace associations, warranted or not, and from the associations, they can infer motives and objectives with which, stretched together, they can limn a full-blown conspiracy theory. The Manafort indictment released a flood of new information, and at once reporters were pawing for nuggets that might eventually form a compelling case for collusion.
They failed to find any because Manafort’s indictment, in essence, involved his efforts to launder his profits from his international political work, not his work for the Trump campaign. Fortunately for the obsessives, another element is required for a good scandal: a colorful cast. The various Clinton scandals brought us Asian money-launderers and ChiCom bankers, along with an entire Faulkner novel’s worth of bumpkins, sharpies, and backwoods swindlers, plus that intern in the thong. Watergate, the mother lode of Washington scandals, featured a host of implausible characters, from the central-casting villain G. Gordon Liddy to Sam Ervin, a lifelong segregationist and racist who became a hero to liberals everywhere.
Here, at last, is one area where the Russian scandal has begun to show promise. Manafort and his business partner seem too banal to hold the interest of anyone but a scandal obsessive. Beneath the pile of paper Mueller dumped on them, however, another creature could be seen peeking out shyly. This would be the diminutive figure of George Papadopoulos. An unpaid campaign adviser to Trump, Papadopoulos pled guilty to lying to the FBI about the timing of his conversations with Russian agents. He is quickly becoming the stuff of legend.
Papadopoulos is an exemplar of a type long known to American politics. He is the nebbish bedazzled by the big time—achingly ambitious, though lacking the skill, or the cunning, to climb the greasy pole. So he remains at the periphery of the action, ever eager to serve. Papadopoulos’s résumé, for a man under 30, is impressively padded. He said he served as the U.S. representative to the Model United Nations in 2012, though nobody recalls seeing him there. He boasted of a four-year career at the Hudson Institute, though in fact he spent one year there as an unpaid intern and three doing contract research for one of Hudson’s scholars. On his LinkedIn page, he listed himself as a keynote speaker at a Greek American conference in 2008, but in fact he participated only in a panel discussion. The real keynoter was Michael Dukakis.
With this hunger for achievement, real or imagined, Papadopoulos could not let a presidential campaign go by without climbing aboard. In late 2015, he somehow attached himself to Ben Carson’s campaign. He was never paid and lasted four months. His presence went largely unnoticed. “If there was any work product, I never saw it,” Carson’s campaign manager told Time. The deputy campaign manager couldn’t even recall his name. Then suddenly, in April 2016, Papadopoulos appeared on a list of “foreign-policy advisers” to Donald Trump—and, according to Mueller’s court filings, resolved to make his mark by acting as a liaison between Trump’s campaign and the Russian government.
While Mueller tells the story of Papadopoulos’s adventures in the dry, Joe Friday prose of a legal document, it could easily be the script for a Peter Sellers movie from the Cold War era. The young man’s résumé is enough to impress the campaign’s impressionable officials as they scavenge for foreign-policy advisers: “Hey, Corey! This dude was in the Model United Nations!”
Papadopoulos (played by Sellers) sets about his mission. A few weeks after signing on to the campaign, he travels to Europe, where he meets a mysterious “Professor” (Peter Ustinov). “Initially the Professor seemed uninterested in Papadopoulos,” says Mueller’s indictment. A likely story! Yet when Papadopoulos lets drop that he’s an adviser to Trump, the Professor suddenly “appeared to take great interest” in him. They arrange a meeting in London to which the Professor invites a “female Russian national” (Elke Sommer). Without much effort, the femme fatale convinces Papadopoulos that she is Vladimir Putin’s niece. (“I weel tell z’American I em niece of Great Leader! Zat idjut belief ennytink!”) Over the next several months our hero sends many emails to campaign officials and to the Professor, trying to arrange a meeting between them. As far as we know from the indictment, nothing came of his mighty efforts.
And there matters lay until January 2017, when the FBI came calling. Agents asked Papadopoulos about his interactions with the Russians. Even though he must have known that hundreds of his emails on the subject would soon be available to the FBI, he lied and told the agents that the contacts had occurred many months before he joined the campaign. History will record Papadopoulos as the man who forgot that emails carry dates on them. After the FBI interview, according to the indictment, he tried to destroy evidence with the same competence he has brought to his other endeavors. He closed his Facebook account, on which several communications with the Russians had taken place. He threw out his old cellphone. (That should do it!) After that, he began wearing a blindfold, on the theory that if he couldn’t see the FBI, the FBI couldn’t see him.
I made that last one up, obviously. For now, the great hope of scandal hobbyists is that Papadopoulos was wearing a wire between the time he secretly pled guilty and the time his plea was made public. This would have allowed him to gather all kinds of incriminating dirt in conversations with former colleagues. And the dirt is there, all right, as the Manafort indictment proves. Unfortunately for our scandal fetishists, so far none of it shows what their hearts most desire: active collusion between Russia and the Trump campaign.
An Affair to Remember
All this changed with the release in 1967 of Arthur Penn’s Bonnie and Clyde and Mike Nichols’s The Graduate. These two films, made in nouveau European style, treated familiar subjects—a pair of Depression-era bank robbers and a college graduate in search of a place in the adult world—in an unmistakably modern manner. Both films were commercial successes that catapulted their makers and stars into the top echelon of what came to be known as “the new Hollywood.”
Bonnie and Clyde inaugurated a new era in which violence on screen simultaneously became bloodier and more aestheticized, and it has had enduring impact as a result. But it was The Graduate that altered the direction of American moviemaking with its specific appeal to younger and hipper moviegoers who had turned their backs on more traditional cinematic fare. When it opened in New York in December, the movie critic Hollis Alpert reported with bemusement that young people were lining up in below-freezing weather to see it, and that they showed no signs of being dismayed by the cold: “It was as though they all knew they were going to see something good, something made for them.”
The Graduate, whose aimless post-collegiate title character is seduced by the glamorous but neurotic wife of his father’s business partner, is part of the common stock of American reference. Now, a half-century later, it has become the subject of a book-length study, Beverly Gray’s Seduced by Mrs. Robinson: How The Graduate Became the Touchstone of a Generation.1 As is so often the case with pop-culture books, Seduced by Mrs. Robinson is almost as much about its self-absorbed Baby Boomer author (“The Graduate taught me to dance to the beat of my own drums”) as its subject. It has the further disadvantage of following in the footsteps of Mark Harris’s magisterial Pictures at a Revolution: Five Movies and the Birth of the New Hollywood (2008), in which the film is placed in the context of Hollywood’s mid-’60s cultural flux. But Gray’s book offers us a chance to revisit this seminal motion picture and consider just why it was that The Graduate spoke to Baby Boomers in a distinctively personal way.

The Graduate began life in 1963 as a novella of the same name by Charles Webb, a California-born writer who saw his book not as a comic novel but as a serious artistic statement about America’s increasingly disaffected youth. It found its way into the hands of a producer named Lawrence Turman who saw The Graduate as an opportunity to make the cinematic equivalent of Salinger’s The Catcher in the Rye. Turman optioned the book, then sent it to Mike Nichols, who in 1963 was still best known for his comic partnership with Elaine May but had just made his directorial debut with the original Broadway production of Barefoot in the Park.
Both men saw that The Graduate posed a problem to anyone seeking to put it on the screen. In Turman’s words, “In the book the character of Benjamin Braddock is sort of a whiny pain in the fanny [whom] you want to shake or spank.” To solve this problem, they turned to Buck Henry, who had co-created the popular TV comedy Get Smart with Mel Brooks, to write a screenplay that would retain much of Webb’s dryly witty dialogue (“I think you’re the most attractive of all my parents’ friends”) while making Benjamin less priggish.
Nichols’s first major act was casting Dustin Hoffman, an obscure New York stage actor pushing 30, for the title role. No one but Nichols seems to have thought him suitable in any way. Not only was Hoffman short and nondescript-looking, but he was unmistakably Jewish, whereas Benjamin is supposedly the scion of a newly monied WASP family from southern California. Nevertheless, Nichols decided he wanted “a short, dark, Jewish, anomalous presence, which is how I experience myself,” in order to underline Benjamin’s alienation from the world of his parents.
Nichols filled the other roles in equally unexpected ways. He hired the Oscar winner Anne Bancroft, only six years Hoffman’s senior, to play the unbalanced temptress who lures Benjamin into her bed, then responds with volcanic rage when he falls in love with her beautiful daughter Elaine. He and Henry also steered clear of on-screen references to the campus protests that had only recently started to convulse America. Instead, he set The Graduate in a timeless upper-middle-class milieu inhabited by people more interested in social climbing than self-actualization—the same milieu from which Benjamin is so alienated that he is reduced to near-speechlessness whenever his family and their friends ask him what he plans to do now that he has graduated.
The film’s only explicit allusion to its cultural moment is the use on the soundtrack of Simon & Garfunkel’s “The Sound of Silence,” the painfully earnest anthem of youthful angst that is for all intents and purposes the theme song of The Graduate. Nevertheless, Henry’s screenplay leaves little doubt that the film was in every way a work of its time and place. As he later explained to Mark Harris, it is a study of “the disaffection of young people for an environment that they don’t seem to be in sync with.…Nobody had made a film specifically about that.”
This aspect of The Graduate is made explicit in a speech by Benjamin that has no direct counterpart in the novel: “It’s like I was playing some kind of game, but the rules don’t make any sense to me. They’re being made up by all the wrong people. I mean, no one makes them up. They seem to make themselves up.”
The Graduate was Nichols’s second film, following his wildly successful movie version of Edward Albee’s Who’s Afraid of Virginia Woolf?. Albee’s play was a snarling critique of the American dream, which he believed to be a snare and a delusion. The Graduate had the same skeptical view of postwar America, but its pessimism was played for laughs. When Benjamin is assured by a businessman in the opening scene that the secret to success in America is “plastics,” we are meant to laugh contemptuously at the smugness of so blinkered a view of life. Moreover, the contempt is as real as the laughter: The Graduate has it both ways. For the same reason, the farcical quality of the climactic scene (in which Benjamin breaks up Elaine’s marriage to a handsome young WASP and carts her off to an unknown fate) is played without musical underscoring, a signal that what Benjamin is doing is really no laughing matter.
The youth-oriented message of The Graduate came through loud and clear to its intended audience, which paid no heed to the mixed reviews from middle-aged reviewers unable to grasp what Nichols and Henry were up to. Not so Roger Ebert, the newly appointed 25-year-old movie critic of the Chicago Sun-Times, who called The Graduate “the funniest American comedy of the year…because it has a point of view. That is to say, it is against something.”
Even more revealing was the response of David Brinkley, then the co-anchor of NBC’s nightly newscast, who dismissed The Graduate as “frantic nonsense” but added that his college-age son and his classmates “liked it because it said about the parents and others what they would have said about us if they had made the movie—that we are self-centered and materialistic, that we are licentious and deeply hypocritical about it, that we try to make them into walking advertisements for our own affluence.”
A year after the release of The Graduate, a film-industry report cited in Pictures at a Revolution revealed that “48 percent of all movie tickets in America were now being sold to filmgoers under the age of 24.” A very high percentage of those tickets were for The Graduate and Bonnie and Clyde. At long last, Hollywood had figured out what the Baby Boomers wanted to see.

And how does The Graduate look a half-century later? To begin with, it now appears to have been Mike Nichols’s creative “road not taken.” In later years, Nichols became less an auteur than a Hollywood director who thought like a Broadway director, choosing vehicles of solid middlebrow-liberal appeal and serving them faithfully without imposing a strong creative vision of his own. In The Graduate, by contrast, he revealed himself to be powerfully aware of the same European filmmaking trends that shaped Bonnie and Clyde. Within a naturalistic framework, he deployed non-naturalistic “new wave” cinematographic techniques with prodigious assurance—and he was willing to end The Graduate on an ambiguous note instead of wrapping it up neatly and pleasingly, letting the camera linger on the unsure faces of Hoffman and Ross as they ride off into an unsettling future.
It is this ambiguity, coupled with Nichols’s prescient decision not to allow The Graduate to become a literal portrayal of American campus life in the troubled mid-’60s, that has kept the film fresh. But The Graduate is fresh in a very particular way: It is a young person’s movie, the tale of a boy-man terrified by the prospect of growing up to be like his parents. Therein lay the source of its appeal to young audiences. The Graduate showed them what they, too, feared most, and hinted at a possible escape route.
In the words of Beverly Gray, who saw The Graduate when it first came out in 1967: “The Graduate appeared in movie houses just as we young Americans were discovering how badly we wanted to distance ourselves from the world of our parents….That polite young high achiever, those loving but smothering parents, those comfortable but slightly bland surroundings: They combined to form an only slightly exaggerated version of my own cozy West L.A. world.”
Yet to watch The Graduate today—especially if you first saw it when much younger—is also to be struck by the extreme unattractiveness of its central character. Hoffman plays Benjamin not as the comically ineffectual nebbish of Jewish tradition but as a near-catatonic robot who speaks by turns in a flat monotone and a frightened nasal whine. It is impossible to understand why Mrs. Robinson would want to go to bed with such a mousy creature, much less why Elaine would run off with him—an impression that has lately acquired an overlay of retrospective irony in the wake of accusations that Hoffman has sexually harassed female colleagues on more than one occasion. Precisely because Benjamin is so unlikable, it is harder for modern-day viewers to identify with him in the same way as did Gray and her fellow Boomers. To watch a Graduate-influenced film like Noah Baumbach’s Kicking and Screaming (1995), a poignant romantic comedy about a group of Gen-X college graduates who deliberately choose not to get on with their lives, is to see a closely similar dilemma dramatized in an infinitely more “relatable” way, one in which the crippling anxiety of the principal characters is presented as both understandable and pitiable, thus making it funnier.
Be that as it may, The Graduate is a still-vivid snapshot of a turning point in American cultural history. Before Benjamin Braddock, American films typically portrayed men who were not overgrown, smooth-faced children but full-grown adults, sometimes misguided but incontestably mature. After him, permanent immaturity became the default position of Hollywood-style masculinity.
For this reason, it will be interesting to see what the Millennials, so many of whom demand to be shielded from the “triggering” realities of adult life, make of The Graduate if and when they come to view it. I have a feeling that it will speak to a fair number of them far more persuasively than it did to those of us who—unlike Benjamin Braddock—longed when young to climb the high hill of adulthood and see for ourselves what awaited us on the far side.
1 Algonquin, 278 pages
“I think that’s best left to states and locales to decide,” DeVos replied. “If the underlying question is . . .”
Murphy interrupted. “You can’t say definitively today that guns shouldn’t be in schools?”
“Well, I will refer back to Senator Enzi and the school that he was talking about in Wapiti, Wyoming, I think probably there, I would imagine that there’s probably a gun in the school to protect from potential grizzlies.”
Murphy continued his line of questioning unfazed. “If President Trump moves forward with his plan to ban gun-free school zones, will you support that proposal?”
“I will support what the president-elect does,” DeVos replied. “But, senator, if the question is around gun violence and the results of that, please know that my heart bleeds and is broken for those families that have lost any individual due to gun violence.”
Because all this happened several million outrage cycles ago, you may have forgotten what happened next. Rather than mention DeVos’s sympathy for the victims of gun violence, or her support for federalism, or even her deference to the president, the media elite fixated on her hypothetical aside about grizzly bears.
“Betsy DeVos Cites Grizzly Bears During Guns-in-Schools Debate,” read the NBC News headline. “Citing grizzlies, education nominee says states should determine school gun policies,” reported CNN. “Sorry, Betsy DeVos,” read a headline at the Atlantic, “Guns Aren’t a Bear Necessity in Schools.”
DeVos never said that they were, of course. Nor did she “cite” the bear threat in any definitive way. What she did was decline the opportunity to make a blanket judgment about guns and schools because, in a continent-spanning nation of more than 300 million people, one standard might not apply to every circumstance.
After all, there might be—there are—cases when guns are necessary for security. Earlier this year, Virginia Governor Terry McAuliffe signed into law a bill authorizing some retired police officers to carry firearms while working as school guards. McAuliffe is a Democrat.
In her answer to Murphy, DeVos referred to a private meeting with Senator Enzi, who had told her of a school in Wyoming that has a fence to keep away grizzly bears. And maybe, she reasoned aloud, the school might have a gun on the premises in case the fence doesn’t work.
As it turns out, the school in Wapiti is gun-free. But we know that only because the Washington Post treated DeVos’s offhand remark as though it were the equivalent of Alexander Butterfield’s revealing the existence of the secret White House tapes. “Betsy DeVos said there’s probably a gun at a Wyoming school to ward off grizzlies,” read the Post headline. “There isn’t.” Oh, snap!
The article, like the one by NBC News, ended with a snarky tweet. The Post quoted user “Adam B.,” who wrote, “‘We need guns in schools because of grizzly bears.’ You know what else stops bears? Doors.” Clever.
And telling. It becomes more difficult every day to distinguish between once-storied journalistic institutions and the jabbering of anonymous egg-avatar Twitter accounts. The eagerness with which the press misinterprets and misconstrues Trump officials is something to behold. The “context” the best and brightest in media are always eager to provide us suddenly goes poof when the opportunity arises to mock, impugn, or castigate the president and his crew. This tendency is especially pronounced when the alleged gaffe fits neatly into a prefabricated media stereotype: that DeVos is unqualified, say, or that Rick Perry is, well, Rick Perry.
On November 2, the secretary of energy appeared at an event sponsored by Axios.com and NBC News. He described a recent trip to Africa:
It’s going to take fossil fuels to push power out to those villages in Africa, where a young girl told me to my face, “One of the reasons that electricity is so important to me is not only because I won’t have to try to read by the light of a fire, and have those fumes literally killing people, but also from the standpoint of sexual assault.” When the lights are on, when you have light, it shines the righteousness, if you will, on those types of acts. So from the standpoint of how you really affect people’s lives, fossil fuels is going to play a role in that.
This heartfelt story of the impact of electrification on rural communities was immediately distorted into a metaphor for Republican ignorance and cruelty.
“Energy Secretary Rick Perry Just Made a Bizarre Claim About Sexual Assault and Fossil Fuels,” read the Buzzfeed headline. “Energy Secretary Rick Perry Says Fossil Fuels Can Prevent Sexual Assault,” read the headline from NBC News. “Rick Perry Says the Best Way to Prevent Rape Is Oil, Glorious Oil,” said the Daily Beast.
“Oh, that Rick Perry,” wrote Gail Collins in a New York Times column. “Whenever the word ‘oil’ is mentioned, Perry responds like a dog on the scent of a hamburger.” You will note that the word “oil” is not mentioned at all in Perry’s remarks.
You will note, too, that what Perry said was entirely commonsensical. While the precise relation between public lighting and public safety is unknown, who can doubt that brightly lit areas feel safer than dark ones—and that, as things stand today, cities and towns are most likely to be powered by fossil fuels? “The value of bright street lights for dispirited gray areas rises from the reassurance they offer to some people who need to go out on the sidewalk, or would like to, but lacking the good light would not do so,” wrote Jane Jacobs in The Death and Life of Great American Cities. “Thus the lights induce these people to contribute their own eyes to the upkeep of the street.” But c’mon, what did Jane Jacobs know?
No member of the Trump administration so rankles the press as the president himself. On the November morning I began this column, I awoke to outrage that President Trump had supposedly violated diplomatic protocol while visiting Japan and its prime minister, Shinzo Abe. “President Trump feeds fish, winds up pouring entire box of food into koi pond,” read the CNN headline. An article on CBSNews.com headlined “Trump empties box of fish food into Japanese koi pond” began: “President Donald Trump’s visit to Japan briefly took a turn from formal to fishy.” A Bloomberg reporter traveling with the president tweeted, “Trump and Abe spooning fish food into a pond. (Toward the end, @potus decided to just dump the whole box in for the fish).”
Except that’s not what Trump “decided.” In fact, Trump had done exactly what Abe had done a few seconds before. That fact was buried in write-ups of the viral video of Trump and the fish. “President Trump was criticized for throwing an entire box of fish food into a koi pond during his visit to Japan,” read a tweet from the New York Daily News, linking to a report on phony criticism Trump received because of erroneous reporting from outlets like the News.
There’s an endless, circular, Möbius-strip-like quality to all this nonsense. Journalists are so eager to catch the president and his subordinates doing wrong that they routinely traduce the very canons of journalism they are supposed to hold dear. Partisan and personal animus, laziness, cynicism, and the oversharing culture of social media are a toxic mix. The press in 2017 is a lot like those Japanese koi fish: frenzied, overstimulated, and utterly mindless.
Review of Lessons in Hope, by George Weigel
Standing before the eternal flame, a frail John Paul shed silent tears for 6 million victims, including some of his own childhood friends from Krakow. Then, after reciting verses from Psalm 31, he began: “In this place of memories, the mind and heart and soul feel an extreme need for silence. … Silence, because there are no words strong enough to deplore the terrible tragedy of the Shoah.” Parkinson’s disease strained his voice, but it was clear that the pope’s irrepressible humanity and spiritual strength had once more stood him in good stead.
George Weigel watched the address from NBC’s Jerusalem studios, where he was providing live analysis for the network. As he recalls in Lessons in Hope, his touching and insightful memoir of his time as the pope’s biographer, “Our newsroom felt the impact of those words, spoken with the weight of history bearing down on John Paul and all who heard him: normally a place of bedlam, the newsroom fell completely silent.” The pope, he writes, had “invited the world to look, hard, at the stuff of its redemption.”
Weigel, a senior fellow at the Ethics and Public Policy Center, published his biography of John Paul in two volumes, Witness to Hope (1999) and The End and the Beginning (2010). His new book completes a John Paul triptych, and it paints a more informal, behind-the-scenes portrait. Readers, Catholic and otherwise, will finish the book feeling almost as though they knew the 264th successor of Peter. Lessons in Hope is also full of clerical gossip. Yet Weigel never loses sight of his main purpose: to illuminate the character and mind of the “emblematic figure of the second half of the twentieth century.”
The book’s most important contribution comes in its restatement of John Paul’s profound political thought at a time when it is sorely needed. Throughout, Weigel reminds us of the pope’s defense of the freedom of conscience; his emphasis on culture as the primary engine of history; and his strong support for democracy and the free economy.
When the Soviet Union collapsed, the pope continued to promote these ideas in such encyclicals as Centesimus Annus. The 1991 document reiterated the Church’s opposition to socialist regimes that reduce man to “a molecule within the social organism” and trample his right to earn “a living through his own initiative.” Centesimus Annus also took aim at welfare states for usurping the role of civil society and draining “human energies.” The pope went on to explain the benefits, material and moral, of free enterprise within a democratic, rule-of-law framework.
Yet a libertarian manifesto Centesimus Annus was not. It took note of free societies’ tendency to breed spiritual poverty, materialism, and social incohesion, which in turn could lead to soft totalitarianism. John Paul called on state, civil society, and people of God to supply the “robust public moral culture” (in Weigel’s words) that would curb these excesses and ensure that free-market democracies are ordered to the common good.
When Weigel emerged as America’s preeminent interpreter of John Paul, in the 1980s and ’90s, these ideas were ascendant among Catholic thinkers. In addition to Weigel, proponents included the philosopher Michael Novak and Father Richard John Neuhaus of First Things magazine (both now dead). These were faithful Catholics (in Neuhaus’s case, a relatively late convert) nevertheless at peace with the free society, especially the American model. They had many qualms with secular modernity, to be sure. But with them, there was no question that free societies and markets are preferable to unfree ones.
How things have changed. Today all the energy in those Catholic intellectual circles is generated by writers and thinkers who see modernity as beyond redemption and freedom itself as the problem. For them, the main question is no longer how to correct the free society’s course (by shoring up moral foundations, through evangelization, etc.). That ship has sailed or perhaps sunk, according to this view. The challenges now are to protect the Church against progressivism’s blows and to see beyond the free society as a political horizon.
Certainly the trends that worried John Paul in Centesimus Annus have accelerated since the encyclical was issued. “The claim that agnosticism and skeptical relativism are the philosophy and the basic attitude which correspond to democratic forms of political life” has become even more hegemonic than it was in 1991. “Those who are convinced that they know the truth and firmly adhere to it” increasingly get treated as ideological lepers. And with the weakening of transcendent truths, ideas are “easily manipulated for reasons of power.”
Thus a once-orthodox believer finds himself or herself compelled to proclaim that there is no biological basis to gender; that men can menstruate and become pregnant; that there are dozens of family forms, all as valuable and deserving of recognition as the conjugal union of a man and a woman; and that speaking of the West’s Judeo-Christian patrimony is tantamount to espousing white supremacy. John Paul’s warnings read like a description of the present.
The new illiberal Catholics—a label many of these thinkers embrace—argue that these developments aren’t a distortion of the idea of the free society but represent its very essence. This is a mistake. Basic to the free society is the freedom of conscience, a principle enshrined in democratic constitutions across the West and, I might add, in the Catholic Church’s post–Vatican II magisterium. Under John Paul, religious liberty became Rome’s watchword in the fight against Communist totalitarianism, and today it is the Church’s best weapon against the encroachments of secular progressivism. The battle is far from lost, moreover. There is pushback in the courts, at the ballot box, and online. Sometimes it takes demagogic forms that should discomfit people of faith. Then again, there is a reason such pushback is called “reaction.”
A bigger challenge for Catholics prepared to part ways with the free society as an ideal is this: What should Christian politics stand for in the 21st century? Setting aside dreams of reuniting throne and altar and similar nostalgia, the most cogent answer offered by Catholic illiberalism is that the Church should be agnostic with respect to regimes. As Harvard’s Adrian Vermeule has recently written, Christians should be ready to jettison all “ultimate allegiances,” including to the Constitution, while allying with any party or regime when necessary.
What at first glance looks like an uncompromising Christian politics—cunning, tactical, and committed to nothing but the interests of the Church—is actually a rather passive vision. For a Christianity that is “radically flexible” in politics is one that doesn’t transform modernity from within. In practice, it could easily look like the Vatican Ostpolitik diplomacy that sought to appease Moscow before John Paul was elected.
Karol Wojtyła discarded Ostpolitik as soon as he took the Petrine office. Instead, he preached freedom and democracy—and meant it. Already as archbishop of Krakow under Communism, he had created free spaces where religious and nonreligious dissidents could engage in dialogue. As pope, he expressed genuine admiration for the classically liberal and decidedly secular Václav Havel. He hailed the U.S. Constitution as the source of “ordered freedom.” And when, in 1987, the Chilean dictator Augusto Pinochet asked him why he kept fussing about democracy, seeing as “one system of government is as good as another,” the pope responded: No, “the people have a right to their liberties, even if they make mistakes in exercising them.”
The most heroic and politically effective Christian figure of the 20th century, in other words, didn’t follow the path of radical flexibility. His Polish experience had taught him that there are differences between regimes—that some are bound to uphold conscience and human dignity, even if they sometimes fall short of these commitments, while others trample rights by design. The very worst of the latter kind could even whisk one’s boyhood friends away to extermination camps. There could be no radical Christian flexibility after the Holocaust.