Must the West threaten to bomb innocent bystanders in order to deter nuclear war? Does the West itself need to be threatened with annihilation of its civil society in order to be deterred? President Reagan’s speech of March 23 proposing a decades-long research program to protect civilians against ballistic-missile attack revived these questions. The instant hoots of ridicule and references to Star Wars from many Senators and Congressmen suggest that holding out the nightmare vision of last things, the apocalypse, is now part of the nature of things; that the need to threaten the end of the earth must dominate earthly policy.
In fact, the West has for years used apocalyptic threats as a substitute for improving our capacity for discriminate response and in particular for a conventional reply to conventional attack. (The media hardly noticed the more immediate technical effort urged in the President’s speech—to improve conventional technology.) Reckless nuclear threats and the intimidating growth of both Soviet conventional and nuclear strength have had much to do with the rise of the anti-nuclear movement here and in Protestant Northern Europe. By revising many times in public their pastoral letter on war and peace, American Catholic bishops have dramatized the moral issues which statesmen, using empty threats to end the world, neglect or evade. For the bishops stand in a long moral tradition which condemns the threat to destroy innocents as well as their actual destruction. They try but do not escape reliance on threatening bystanders. Ironically, the view dominating all their revisions reflects an evasive secular extreme which, instead of speeding improvements in the ability to avoid bystanders, has tried to halt or curb them. But because the bishops must take threats seriously, they make more visible the essential evasions of Western statesmen. That, however, is a kind of virtue. The letter offers a unique opportunity to examine the moral, political, and military issues together, and to show that, as the President suggests, threatening to bomb innocents is not part of the nature of things. Nor has it been, as is now widely claimed, an essential of deterrence from the beginning. Nor is it the inevitable result of “modern technology.” It may be that our Senators and even some of our younger Congressmen haven’t watched Star Wars closely enough.
The bishops have been sending a message to strategists in Western foreign-policy establishments—and to strategists in Western anti-nuclear counter-establishments. It seems unequivocal: “Under no circumstances may nuclear weapons or other instruments of mass slaughter be used for the purpose of destroying population centers or other predominantly civilian targets.” Though that only restates an exemplary part of Vatican II two decades earlier, it is far from commonplace. Nonetheless it should be obvious to Catholics and non-Catholics alike. Informed realists in foreign-policy establishments as well as pacifists should oppose aiming to kill bystanders with nuclear or conventional weapons: indiscriminate Western threats paralyze the West, not the East. We have urgent political and military as well as moral grounds for improving our ability to answer an attack on Western military forces with less unintended killing, not to mention deliberate mass slaughter.
The bishops seem to be countering the perverse dogma which, after the Cuban missile crisis, came increasingly to be used by Western statesmen eager to spend less on defense: that the West should rely for deterring the Soviets on the ability to answer a nuclear military attack by assuring the deliberate destruction of tens or even hundreds of millions of Soviet civilians; and that the United States should also, for the supposed sake of “stability,” give up any defense of its own civilians and any attack on military targets in order to assure the Soviets that they could, in response, destroy a comparable number of American civilians. The long humanist as well as the religious tradition on “just war” stresses especially the need to avoid attacks on “open,” that is undefended, cities. The new doctrine exactly reversed this; it called both for leaving cities undefended and for threatening to annihilate them. John Newhouse succinctly stated this dogma, to which he was sympathetic, in the “frosty apothegm”: “Offense is defense, defense is offense. Killing people is good, killing weapons is bad.” The late Donald Brennan, a long-term advocate of arms control to defend people and restrain offense from killing innocents, was not sympathetic. He noted that the acronym for Mutual Assured Destruction—MAD—described that Orwellian dogma.
Having observed long ago that not even Genghis Khan avoided combatants in order to focus solely on destroying noncombatants, I was grateful, on a first look at this issue in the evolving pastoral letter, to find the bishops on the side of the angels. Unfortunately, a closer reading suggested that they were also on the other side. For, while they sometimes say that we should not threaten to destroy civilians, they say too that we may continue to maintain nuclear weapons—and so implicitly threaten their use as a deterrent—while moving toward permanent verifiable nuclear and general disarmament; yet we may not meanwhile plan to be able to fight a nuclear war even in response to a nuclear attack.
Before that distant millennial day when all the world disarms totally, verifiably, and irrevocably—at least in nuclear weapons—if we should not intend to attack noncombatants, as the letter says, what alternative is there to deter nuclear attack or coercion? Plainly only to be able to aim at the combatants attacking us, or at their equipment, facilities, or direct sources of combat supply. That, however, is what is meant by planning to be able to fight a nuclear war—which the letter rejects.
Perhaps the bishops can work this out in later statements. But a close reading of their changing text, their congressional testimony, and the writings of their associates suggests that this is unlikely. For their struggle with conscience has led them to make only more explicit the widespread confusions and evasions of many secular strategists—including many statesmen, scientists, Senators, editors, and business leaders. Take John Cardinal Krol and Father Brian Hehir, who was staff adviser to the ad-hoc committee drafting the pastoral letter. Cardinal Krol repeated in a sermon at the White House in 1979 what he and his associates had been saying in recent years: in brief, “possession, yes, for deterrence . . . but use, never.” It is all right for the United States implicitly to threaten the use of nuclear weapons, but “at the point of such decisions, . . . political and military authorities are responsible to a higher set of values” and so “must reject the actual use of such weapons, whatever the consequences.” Any consequence “whatever” includes giving up military resistance. But “the history of certain countries under Communist rule today shows that not only are human means of resistance available and effective but also that human life does not lose all meaning with replacement of one political system by another.”
Father Hehir elaborates this view: (A) We should not get or keep an ability to attack combatants. (B) We may maintain an ability to attack noncombatants while waiting for nuclear disarmament. (C) We may use that ability implicitly (though not explicitly) to threaten retaliation against noncombatants. (D) Indeed, to deter nuclear attack, we must convince other nations that our “determination to use nuclear weapons is beyond question.” (E) We should never intend to use nuclear weapons. (F) Nor should we (to make the deception harder) declare an intent to use them even in reply to a nuclear attack. (G) We should never actually use them; that is to say, we shouldn’t retaliate at all.
Precisely how this volubly revealed deception is to fool allies and adversaries “beyond question” has not itself been revealed. (Future sermons at the White House might have to be classified.) If the bishops could transmit that revelation, it would fortify a good many strategists in our foreign-affairs establishment who want fervently to believe that we can safely deter an adversary solely by threatening the nuclear extermination of his cities while making clear to the entire world that we would never use nuclear weapons at all; and who also want firmly to believe we needn’t spend much money on a less reckless defense. In sending that message to Western elites the letter only relays, amplifies, and broadcasts signals our elites have themselves been sending for years. The troubling obscurity of the letter reflects that establishment ambivalence and incoherence. On many matters of technical military and political fact the bishops derive their views not from sacred authority but from a more doubtful range of secular strategists than they realize. Much of the letter, for example, stems from the strategists who hold that defense is offense and that killing people is good and killing weapons bad—the very strategists who would rely exclusively on threatening to destroy cities.
The bishops’ power to invoke divine authority in sustaining such lay strategies seems dangerous to many Catholics who disagree. But their moral prestige alone gives weight to the bishops’ strategic views with Catholics and non-Catholics alike. They reinforce the impassioned pacifist and neutralist movements that have been growing in Europe and in the United States, as well as the establishment strategies which helped to generate these protest movements.
For the bishops pass lightly over or further confound many already muddled and controversial questions of fact and policy. In a world where so many intense, deep, and sometimes mutually reinforcing antagonisms divide regional powers as well as superpowers, are there serious early prospects for negotiating the complete, verifiable, and permanent elimination of nuclear or conventional arms? If antagonists don’t agree, should we disarm unilaterally? If we keep nuclear arms, how should we use them to deter their use against us or an ally? Might an adversary in some plausible circumstance make a nuclear attack on an element (perhaps a key non-nuclear element) of our military power or that of an ally to whom we have issued a nuclear guarantee? Might such an enemy nuclear attack (for example, one generated in the course of allied conventional resistance to a conventional invasion of NATO’s center or of a critical country on NATO’s northern or southern flank) have decisive military effects yet restrict side effects enough to leave us, and possibly our ally, a very large stake in avoiding “mutual mass slaughter”? Could some selective but militarily useful Western response to such a restricted nuclear attack destroy substantially fewer innocent bystanders than a direct attack on population centers? Would any discriminate Western response to a restricted nuclear attack—even one in an isolated area on a flank—inevitably (or more likely than not, or just possibly, or with some intermediate probability) lead to the destruction of humanity, or “something little better”? Or at least to an unprecedented catastrophe? Would it be less or more likely than an attack on population to lead to unrestricted attacks on populations? Can we deter a restricted nuclear attack better by threatening an “unlimited,” frankly suicidal, and therefore improbable attack on the aggressor’s cities, or by a limited but much more probable response suited to the circumstance?
The bishops’ authorities slip by or confuse almost all these questions. The bishops sometimes seem only to be saying that the extent of direct collateral harm done by a particular restricted attack is uncertain, quite apart from the possibilities of “escalation.” At other times they are certain that restricted attacks will lead to an entirely unrestricted war. And they then suggest that the chance that any Western nuclear response to a restricted attack would end short of ending humanity itself is “so infinitesimal” that we might better threaten directly to bring on the apocalypse. The bishops cite experts as authority for their judgment that any use whatever of nuclear weapons would, with overwhelming probability, lead to unlimited destruction. And some of their experts do seem to say just that. But some they cite appear only to say that we cannot be quite sure (that is, the probability is not equal to one) that any use of nuclear weapons would stay limited. If any response other than our surrender is to be believed, it makes a difference whether we talk of a probability not quite zero, or a probability not quite equal to one, that any nuclear response would bring on a suicidally total disaster. Yet two successive paragraphs in the 1982 Foreign Affairs article by McGeorge Bundy, George F. Kennan, Robert S. McNamara, and Gerard Smith proposing “no first use” of nuclear weapons, which the bishops cite, assert, without distinction, each of a wide range of such differing possibilities. Most authorities relied on by the bishops are themselves not very discriminating about which point they are trying to make.
Some important components of conventional military power vulnerable to nuclear attack are close to population centers. Others, however, may be very far from them—for example, naval forces at sea; or satellites in orbit hundreds or even a hundred thousand miles above the earth, which may be expected to perform, during a conventional war, the essential tasks of reconnaissance, surveillance, navigation, guidance, and communications. These are more vulnerable to nuclear than to conventional attack. If we have no way of discouraging a limited nuclear attack except by extracting a promise from an adversary that he will not attack, or by threatening that we will respond to such isolated attacks with a suicidal retaliation on his cities, an adversary might, in the course of a conventional war, chance a small but effective nuclear attack against such isolated military targets. Such an attack would do incomparably less damage to civilians in the West than any of the “limited” attacks discussed by the bishops’ authorities. Is it really so evident that a similarly restricted Western nuclear response to such a nuclear attack would be nearly certain to escalate to the end of humanity? Wouldn’t a restricted response doing minimal damage to civilians on either side be much less likely to escalate than an attack on cities? And wouldn’t the ability to respond in a proportionate way be a better deterrent to an adversary’s crossing the gap between nuclear and conventional weapons? The bishops’ lay experts tend to see the Soviets as mirror images of themselves, but sometimes diabolize them. They argue as if the Soviets would not continue during a war to have the strongest possible incentives to keep escalation within bounds; as if the Soviets would value every killing of a Western bystander exactly as much as the West values his survival; and as if the Soviet interest were in annihilating rather than dominating Western society.
In fact, calculations cited by the bishops’ authorities hardly probe the issue as to whether an adversary might use nuclear weapons that would destroy key components of a military force discriminately, leaving us a very large stake in making either a discriminate response or no response at all. The calculations published in 1979 by the Office of Technology Assessment (OTA), in answer to an inquiry by supporters of MAD on the Senate Foreign Relations Committee, deal with hypothetical “small” and supposedly “limited” attacks. However, OTA’s “limitations” were not seriously designed to test the feasibility, now or in the future, of destroying military targets and not population. One of their “limited” cases involves direct attacks on the populations of Detroit and Leningrad. And OTA’s most “limited” Soviet attack directed 100 one-megaton nuclear warheads at oil refineries, including some inside Philadelphia and Los Angeles, in order “to inflict as much economic damage as possible” and “without any effort to maximize or minimize human casualties” (emphasis added). No one should be surprised that such a “limited” attack might kill about 5 million bystanders; or that a similar attack on Soviet oil refineries might kill 840,000—a result which the influential English military historian, Michael Howard, describes as “little better” than “a genocidal pact” killing up to 160 million in each country and leaving the rest “to envy the dead.”
The bishops rely heavily on a three-and-a-half-page study embodying the views of fourteen scientists who seem mainly to be specialists in public health. The Papal Academy of Sciences convened this group from several countries, including the Soviet Union, “to examine the consequences of the use of nuclear weapons on the survival and the health of humanity.” Like the Physicians for Social Responsibility in this country, the group considers (except for one paragraph) only the effects of intentionally bombing cities. It says that the consequences of such an attack on the survival and health of humanity “appear obvious.” Indeed they have always been. That is the principal reason to reject MAD and avoid threatening cities.
The papal study devotes one paragraph to “a nuclear attack directed only at military facilities.” Like the pastoral letter, that paragraph assumes that any nuclear attack by an aggressor anywhere or any response by his victim would be directed at all the adversary’s military facilities, however minor or irrelevant to the immediate outcome of the conflict that generated the use of nuclear weapons. It also assumes there would be no attempt to explode the weapons at altitudes that avoided fallout and no attempt in any other way to confine destruction to targets critical to the conflict’s outcome.
But such analyses dodge all the serious issues as to whether an adversary might, in the course of a conventional war, use some nuclear weapons with substantial military effect and yet deliberately leave us and our allies with very strong incentives to avoid mutual mass slaughter; and as to whether we should have no response to such an attack except bringing on the mass slaughter or surrendering; and no better way of deterring it than promising one or the other or even, like the bishops’ strategists, both of these two incompatible bad alternatives.
Yet the problem of deterring nuclear coercion or attack on an ally will persist. Despite lip-service at Geneva and the United Nations, hardly anyone seriously expects that each and every one of the six or seven or eight nations that have made nuclear explosives will destroy all their nuclear arms irretrievably and verifiably in a future near enough to govern our present actions. (The uncertainty as to the number of present nuclear powers suggests some of the difficulty we would have in getting actionable evidence that all of the existing nuclear powers had destroyed all of their weapons.) Nor are all prospective nuclear powers likely or even able to surrender the possibility of making the bomb. Moreover, the harm that these weapons can do is so great that merely reducing them to the numbers talked of by “minimum deterrers,” who would use the remainder to threaten the mass slaughter of populations, would not remove and might increase the probability of an enormous catastrophe. And it would not prevent the potent use of threats of mass slaughter for coercing those who have disarmed. Pope John Paul II has observed that “a totally and permanently peaceful human society is unfortunately a utopia”; and that “pacifist declarations” frequently cloak plans for “aggression, domination, and manipulation of others” and could “lead straight to the false peace of totalitarian regimes.” (The Pope has known that false peace personally.)
It has been obvious since the 1950’s that the West needs to rely less on threats of nuclear destruction and much more on improving conventional defenses; to discourage the spread of nuclear weapons; and to continue making nuclear weapons less vulnerable to attack, safer from “accidental” detonation, and more secure against seizure and unauthorized or mistaken use. The Soviet Union has its own reasons, as have we, for undertaking such measures unilaterally, with or without formal agreements or even “understandings.” Formal agreements on these matters, in fact, have frequently defeated their overt purpose. Agreements, for example, that were supposed to encourage exclusively peaceful uses and research on nuclear energy have spread plutonium usable in explosives. The bishops call for “strengthening command and control over nuclear weapons” to make them more secure against unauthorized or inadvertent use, but call more strongly for agreement on a freeze—which would halt all current programs to replace aging nuclear weapons with ones that are not only more secure against seizure but safer against accidents, more discriminate, and less susceptible to attack.
What is more, the West has many excellent reasons for reducing the numbers and destructiveness of its nuclear weapons quite apart from any agreement. The indiscriminate destructiveness of the American stockpile (as measured in numbers of megatons) was four times higher in 1960 than in 1980. The number of weapons was one-third higher in 1967. The persistent failure of the bishops and other strategists who make a fetish of bilateral agreements to observe the unilateral decline in destructiveness and numbers in American nuclear stockpiles shows, at the very least, a certain lack of seriousness. In any case, if a freeze doesn’t stop it from doing so, the U.S. can further and drastically reduce the numbers and destructiveness of its nuclear stockpile by exploiting the improved accuracies possible today. Improved accuracies make feasible greater discrimination as well as effectiveness in the use of nuclear weapons, and they also make possible more extensive replacement of nuclear with conventional weapons.
My own research and that of others have for many years pointed to the need for a much higher priority on improving our ability to hit what we aim at and only what we aim at. That would mean, in particular, that effective conventional weapons could drastically reduce the West’s reliance on nuclear force. Moreover, for years now, the thrust of technology, as in the electronics revolution, has been to improve the possibilities of discrimination and control. It can increasingly provide us with just such intelligent choices between using conventional or nuclear weapons, and between killing innocent bystanders with nuclear weapons or attacking means of aggression and domination.
The danger of Soviet aggression is more likely to be lessened by a Western ability to threaten the military means of domination than by a Western ability to threaten bystanders. First, the Soviets value their military power, on the evidence, more than the lives of bystanders. Second, Western non-suicidal threats against legitimate military targets are more credible than threats to bring about the destruction of civil society on both sides. The latter have a negligible likelihood of being carried out by Western leaders, and therefore cannot be relied on to dissuade Soviet intimidation or aggression. Finally, it is even more absurd and dangerous to suppose that the only way to dissuade the U.S. from unleashing aggression is to help the Soviets threaten our civilians by leaving them defenseless and by leaving us no choices other than capitulation or an uncontrollably destructive offense against Soviet cities that would invite the reciprocal destruction of our own civil society.
Only some widely prevalent but shallow evasions and self-befuddlements, and not any deep moral dilemma or basic paradox, force us to threaten the annihilation of civilians in order to prevent nuclear or conventional war. The bishops are clear about rejecting the actual use of nuclear weapons to kill innocents. About threats to kill innocents, they are much less clear. Their obscurity mirrors an uneasy area of darkness at the core of establishment views.
Precisely because the bishops’ views do not come from on high but are shared by many in the establishment, and also in the anti-nuclear and pacifist movements that shake the establishment, it is worth looking at their arguments on the morality of nuclear deterrence in the context of changing defense policies. Anti-nuclear arguments proceed from premises about the inevitable dependence of deterrence on threats to kill innocents, whether deliberately or uncontrollably. To some degree, bluffs about bringing on the nuclear apocalypse helped generate the rise of the unilateral nuclear disarmers; and continuing reliance on such bluffs helps to disarm the establishment from answering the unilateral disarmers. The arguments of both undermine deterrence.
Many recent accounts of defense policy in the nuclear age rewrite history to lend an aura of inevitability to the extreme view that we can reliably deter a nuclear attack in any plausible circumstance solely by threats to kill innocents on both sides, threats which we plainly should never and would never carry out. Advocates of that dangerous self-paralyzing bluff claim that this extreme has been the essential base of Western defense policy since Hiroshima. It wasn’t at the beginning. Nor was it the meaning of the second-strike theory of deterrence that originated near the start of the 1950’s. The second-strike theory did not hold that we had to choose between deterring and being ready and willing to fight if deterrence failed. Americans who oppose unilateral disarmament have never split into a “party of deterrence” as distinct from a “war party” that prefers fighting to deterring a nuclear war. Advocates of MAD suggest as much. But MAD was not declaratory policy before the mid-1960’s. And it has never been operational policy. Yet many liberal and conservative critics of the bishops, like the bishops themselves, are under the impression that it always has been. Many believe that MAD has kept the nuclear peace and is therefore necessary, at least as myth. But the evolution of doctrines and policies of deterrence needs to be seen in relation to the changing technologies of discriminateness and control as well as the technologies of nuclear brute force.
Mass Destruction and Initial Doubts about Stability
Manhattan Project scientists assumed immediately that the least destructive fission (or atomic) bomb would affect so large an area, and that such bombs would always be so scarce, that they were suited only to attacks on large population centers rather than military forces or war plants directly supporting them. Hence the standard description—weapons of “mass destruction” or “mass slaughter.” Worse yet, the atomic scientists thought atomic deterrence extremely unstable. (Leo Szilard, for example, thought in 1945 that the odds for nuclear war in ten years were 9 in 10.) In short, total destruction seemed so imminent and so probable that nothing less than early world government and total disarmament would permit survival. It was—in a slogan common in 1945 to which Jonathan Schell might now subscribe—“One World or None.”
By the time it had become clear that we were not about to get one world, and that atomic weapons could be used effectively and in adequate numbers against military targets, the atomic scientists’ movement had come to the view that they should be used only against military targets. By then, fusion weapons were in prospect and many of the same scientists assumed, as they had at first about the A-bomb, that the new H-bomb was suited only to destroy population centers and, at that, offered a net advantage over A-bombs only against a few of the largest population centers. Therefore, they opposed the H-bomb and advocated a vast expansion of the A-bomb stockpile to be used in fighting a ground war in Europe, in anti-submarine warfare, in continental defense, and against enemy bomber bases.
In 1952, thoughtful analysts of the implications of thermonuclear weapons, like the economist Charles Hitch, found that—contrary to many claims—H-bombs were indeed much more effective than A-bombs against military targets and war-supporting industry; but, like the atomic scientists, Hitch was concerned that they raised the gravest problems of unintended collateral damage to noncombatants. To reduce civilian casualties one should give priority to targets outside cities and warn urban populations to evacuate. Like the physicists, Hitch considered mainly very large (25-megaton) H-bombs delivered with great inaccuracy, that is, with half the bombs missing by a radius of at least a half-mile and generally by well over a mile. Bernard Brodie, an international-relations theorist writing on the H-bomb at the time, who had once thought A-bombs were suited only to attack whole cities, sometimes agreed with Hitch that H-bombs made restraint essential and that war objectives had to be limited as well; at other times he talked of them as “city busters”; at still other times, he talked about their tactical advantage for use in Europe, where they could destroy so large an area as to frustrate dispersion and concealment of ground forces.
Yet, whether one considered H-bombs or A-bombs, the trend in NATO policy—if only to keep defense budgets within domestic political bounds—was to rely increasingly on nuclear weapons in large numbers and to neglect the unintended harm they would do. Churchill, who justified British nuclear weapons in part because they would be able to destroy military targets of special interest to Britain, was so impressed by the destructive side effects of the H-bomb soon to be acquired by both Britain and the U.S. that he talked vividly and hopefully of safety becoming the sturdy child of “terror.” The Republicans, coming to power at the end of an unpopular and costly conventional war in Korea, talked of nuclear weapons as simply “modern weapons” which furnished a “bigger bang for a buck.” They talked of massive retaliation against lesser threats; and the NATO Military Committee in 1957 formally adopted a strategy of threatening a “full” nuclear response even to a local persisting incursion into NATO territory.
Inevitably, uneasiness about the sturdiness as well as the morality of a balance based on threats of such massive destruction, however unintentional, led many sober critics to propose more limited applications of nuclear force, and especially the use of small nuclear weapons on the battlefield. But it soon became clear that nuclear weapons used on the battlefield in the center of Europe also had drawbacks as a replacement for adequate conventional force. The Carte Blanche exercise in 1955 indicated that the side effects of their early introduction might kill nearly 2 million West Germans and wound many others. Chancellor Konrad Adenauer therefore resisted an increased reliance on nuclear weapons and changed his mind only at the end of 1956, when he saw that a conventional build-up in West Germany would be drastically constrained by domestic political problems in getting eighteen-month terms for army conscripts. After that, the Germans and other West Europeans came, more than any American President since 1961, to favor relying on nuclear weapons as a cheap substitute for conventional force.
Operational plans, however, have always differed from the rhetoric of indiscriminate threats. Certainly NATO has never planned to avoid military targets in order deliberately to kill innocents at long or short range. NATO plans have always included various restraints on the size of weapons used against military targets in Eastern as well as Western Europe. Nonetheless the problem of unintended harm to noncombatants on both sides remained and always cast some doubt on the sturdiness of deterrence and especially on the Western will to respond to limited or isolated nuclear attacks against the military forces of an ally. (Where that ally is a country on the northern or southern flanks of Europe, the doubt is most obvious; yet these “flank countries” are at present more endangered and more critical for the Alliance than ever. Doubts have increased, especially about the effectiveness of massive nuclear threats as a substitute for conventional force.)
The Second-Strike Theory
Another line of research that was pursued intensively in classified form, beginning in 1951, disclosed a different but even more urgent range of problems about the sturdiness of nuclear deterrence. This research, which generated the second-strike theory of deterrence, looked at the vulnerabilities of all the essential elements of strategic nuclear forces under nuclear attack, and the problems these entailed for maintaining a convincing deterrent. These problems had been badly neglected, in part because the original belief after World War II that nuclear weapons could be used effectively only against cities predisposed political and military leaders, as well as scientists, to overlook the possibility that our own nuclear force might come under attack; and in part because bombing doctrines during and before World War II had stressed that the chief aim of strategic forces was to destroy the centers of war-supporting industry and not the military forces themselves.
As a result, the force we had planned for the mid- and late 1950’s, before the introduction of ballistic missiles, was much more vulnerable than is generally realized even today. That was dangerous in particular because NATO had always counted on the help of the Strategic Air Command (SAC) to deter or oppose an invasion of Western Europe and to reduce the intimidating political shadow cast by the possibility that an invasion might grow out of some future crisis. A strategic force, however powerful when left undisturbed to do its work, cannot deter an attack which it is unable itself to survive; and the studies showed that we needed to protect not only the vehicles but all the complex elements of an effective response, including in particular a politically responsible command-and-control. Moreover, preserving control required operating in peacetime in ways that avoided a large risk of lethal “accidents” or even more lethal mistakes in response to false alarms. It excluded, for example, “launching under attack,” a euphemism for launching ICBM’s on ambiguous electromagnetic signals.
Popularizations of the second-strike theory and some recent academic accounts distort history to make it seem essential deliberately to threaten innocents rather than military forces in order to deter. They frequently identify a second strike with attacks on civilians. In its origins the second-strike theory assumed no such identity. The study that generated the distinction and first specified requirements for a second strike, the Rand Base Study, in which I was engaged between 1951 and 1953 with Fred Hoffman, Harry Rowen, and Robert Lutz, made explicit that it would not deal with how to choose targets, but rather how to choose a protected mode of basing and operating a strategic force that would be best for any of several target systems. It looked at several target sets typical of the time: a quite limited number of key war plants supporting combat; military targets whose destruction might retard the advance of ground forces in Europe; and those that might blunt a continuing enemy strategic attack. It did so in order to show in all cases how best to reduce the vulnerability of our own strategic forces. That was the more important result, but the study also saved 9 billion 1953 dollars, showing that one does not have to aim to destroy cities only, or to destroy cities at all, to avoid “exponential” increases in defense spending, as one implausible rationalization for bombing innocents has it.
The authors’ next long study, started at the end of 1953, was about defending a strategic force in the coming ballistic-missile era of reduced warning—then seven years or so off. It paid particular attention to “fail-safe methods” of avoiding war through mistaken responses to ambiguous signals, and to the difficult issues of protecting political command-and-control. However, like the Base Study, it dealt only with the urgent problem of choosing responsible ways to protect SAC, not with choice among SAC’s targets. Separating military from civilian targets for SAC never looked harder than in the mid-1950’s, since our bombs were then at their most destructive and expected inaccuracies near their anguishing worst. However, in successive later studies of strategic aims, the authors became increasingly clear that to have only the alternative of indiscriminate attack would seriously compromise the credibility of any response at all. The two lines of research, one on targeting and reducing collateral damage and the other on protecting the strategic force, converged. It had become apparent that to have a persuasive deterrent, we had not only to be able to protect command-and-control, but also to have some alternatives which a responsible political leader would be willing to command.
Imprecision and Unintended Harm
The recognition at the end of 1953 that fusion warheads might be made small enough to be carried in ballistic missiles by the 1960’s might have seemed to hold out the prospect of reducing collateral damage somewhat. For these first ballistic-missile warheads were expected to be substantially smaller than the gravity bombs carried in aircraft. (Later Navy SLBM warheads were about the same size as some early A-bombs, 40 kilotons. Even the first SLBM and ICBM warheads were about a half-megaton, much smaller than the H-bombs contemplated in the initial debate.) In fact, however, the prospect of the ballistic missile worsened expectations about collateral damage, because the first generation of missiles was expected to be much more inaccurate than aircraft. The median miss distance then expected for the first ballistic missiles was anywhere from two to five miles. A five-mile median radius of inaccuracy meant that half the bombs would strike outside an 80-square-mile area!
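The arithmetic behind that figure is simple (a reconstruction, not given in the text): a five-mile median miss radius defines a circle that, by definition, half the warheads fall outside of, and the area of that circle is

\[
A \;=\; \pi r^{2} \;=\; \pi \times (5\ \text{miles})^{2} \;\approx\; 78.5 \;\approx\; 80\ \text{square miles}.
\]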
But inaccuracy determines the unintended harm done in destroying a small target more basically than does the explosive yield of individual bombs. It is the lack of technology smart enough, rather than the availability of large brute-force single weapons, that lies at the root of the problem of collateral damage. One makes up for incompetence in aiming by filling an enormous area of uncertainty either with a few large-yield nuclear weapons or, as the British did in World War II, with many thousands of small conventional bombs. When the British discovered in June 1941 that only a third of the bomber crews who thought they had bombed the target were within 80 square miles of it, they resorted to huge raids involving thousands of bombers with results that became visible in Hamburg and in Dresden. David Irving’s estimate of the dead in Dresden came to 135,000—much more than the official estimates of the Hiroshima dead. A single American conventional raid on Tokyo in March 1945 destroyed an area over three times that destroyed by the Hiroshima bomb (15.8 compared to 4.7 square miles) and nearly nine times that destroyed by the Nagasaki bomb (1.8 square miles). The average area destroyed in 93 conventional attacks against Japanese cities amounted to the same as that in Nagasaki.
During the postwar period the prospects for reducing collateral damage seemed at their worst in the late 1950’s when the average explosive yield of a bomb was ten times the present level and when anticipated missile inaccuracies were also at their maximum. Some of the most familiar and perverse current views on nuclear deterrence, including those that have shaped the pastoral letter, were formed at that time. Since then, the prospects of hitting only what one is aiming at have changed by several orders of magnitude. That implies improvements in effectiveness against small, hard fixed targets that are in some ways more revolutionary than the transition from conventional to fission explosives or even fusion weapons. The fission and fusion revolutions blasted themselves, so to speak, into public awareness. Revolutionary improvements in our ability to focus destruction on targets alone have proceeded quietly and attracted less public notice and understanding.
The fact is, however, that a tenfold improvement in accuracy is roughly equal in effectiveness to a thousandfold increase in the explosive energy released by a weapon; and improving accuracy by a factor of 100 improves blast effectiveness against a small, hard fixed military target about as much as multiplying the energy released a million times. The fission bomb at Hiroshima released about a thousand times more energy, and a 10-megaton fusion bomb can release a million times more energy, than a 10-ton conventional “block buster.”
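These equivalences follow from a standard rule of thumb (an assumption supplied here, not spelled out in the text): blast lethal radius grows roughly as the cube root of yield, so effectiveness against a small, hard point target varies as

\[
K \;\propto\; \frac{\left(Y^{1/3}\right)^{2}}{\mathrm{CEP}^{2}} \;=\; \frac{Y^{2/3}}{\mathrm{CEP}^{2}},
\]

where \(Y\) is the yield and CEP is the median miss distance. Holding \(K\) fixed, cutting the CEP by a factor of \(n\) is worth multiplying the yield by \(n^{3}\):

\[
\mathrm{CEP} \to \frac{\mathrm{CEP}}{10} \;\Longleftrightarrow\; Y \to 10^{3}\,Y; \qquad \mathrm{CEP} \to \frac{\mathrm{CEP}}{100} \;\Longleftrightarrow\; Y \to 10^{6}\,Y.
\]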
The Revolution in Precision
But while the improvement in effectiveness may be the same, these two technologies achieve it in essentially different ways. When one improves effectiveness by releasing more destructive energy, there is a corresponding increase in collateral damage. When one improves the ability to destroy a target by increasing one’s accuracy, there is a corresponding decrease in collateral damage.
Improvements in guidance using midcourse adjustments have already reduced cruise-missile inaccuracies to 200 feet from the 12,000-30,000-feet average misses expected for ballistic missiles in the late 1950’s. That improvement by a factor of 60 to 150 makes feasible radical reductions in collateral damage. Even more important, terminal guidance systems in development now that can be deployed in the late 1980’s could further reduce inaccuracies at extended ranges by another order of magnitude. That would permit a conventional weapon to replace nuclear bombs in a wide variety of missions with an essentially equal probability of destroying a fixed military target. It would drastically raise the threshold beyond which one would have to resort to nuclear weapons in order to be effective. It would mean a much smaller likelihood of “escalation” and incomparably smaller side effects.
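The stated factor is straightforward division; and, on the same arithmetic, the further order-of-magnitude improvement projected for terminal guidance would bring median misses down to a few tens of feet (the last figure is an extrapolation from the text’s numbers, not a number it gives):

\[
\frac{12{,}000\ \text{ft}}{200\ \text{ft}} = 60; \qquad \frac{30{,}000\ \text{ft}}{200\ \text{ft}} = 150; \qquad \frac{200\ \text{ft}}{10} = 20\ \text{ft}.
\]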
Destroying ground targets that might decide a conventional conflict could have much more troubling side effects even in relatively isolated areas than the destruction of equally decisive naval forces at sea or key satellites deep in space. Yet the situation has altered greatly here too. Most such land targets are less blast-resistant than ICBM silos. Yet attacking them effectively with the huge inaccuracies expected in the late 1950’s would have meant filling an enormous area of uncertainty with destruction. That might typically have subjected an area of 1000 square miles or so to unintended lethal effects. By contrast, a current cruise missile, with midcourse guidance and a small nuclear warhead, could be equally effective against a military target while confining lethal damage to less than one square mile. Most important, improved terminal guidance in the next few years could enable a cruise missile with a suitable non-nuclear warhead to destroy a military target and reduce the area of fatal collateral damage to about one-thousandth of a square mile—an enormous contrast with World War II.
Some conservative critics counter the bishops’ strictures against a nuclear response to conventional attack by suggesting that any “conventional war in Western Europe would almost certainly mean terror and destruction far in excess of World War II”—with perhaps 100 million dead; that, in short, any conventional conflict in Europe would bring on horrors hardly less terrible than nuclear war. Such expectations lead many Europeans to feel that even a conventional war would destroy Europe and end Western civilization. For the bishops, a policy of No-First-Use follows from the broader nuclear policy of “Use, Never.” And both are only part of Cardinal Krol’s injunction in his White House sermon against all war. (“No more war, war never again.”) Through all the political compromises in various drafts, the bishops support conventional alternatives only grudgingly. But estimates of conventional damage by the bishops’ critics have even less basis in evidence than those the bishops cite to show that nuclear damage would be unlimited. It is plain that the increasing advances in precision and control can be most fully exploited by suitably designed conventional weapons.
It is essential to emphasize that advances in our ability to reduce collateral damage and increase the effectiveness of conventional weapons do not blur the distinction between nuclear and conventional force. On the contrary, that remains vital. But these revolutionary changes make it much more feasible to avoid crossing the divide between nuclear and conventional weapons. They give us choices.
Discussions of the morality of bombing and deterrence today often proceed as if “the technical realities” foreclose choice (as one eminent physicist, Wolfgang Panofsky, suggests), as if “the mutual hostage relation” were not at all a “consequence of policy and therefore . . . subject to change,” but a matter of physics—permanently determined by the technology for releasing nuclear energy. Yet the evolution since the 1950’s of technologies other than the release of nuclear energy has altered the possibilities of discrimination and will not excuse us from the responsibility for preparing to keep violence from mounting without bounds.
With few exceptions, even the most thoughtful considerations of the morality of nuclear threats have been frozen in the technology of the late 1950’s and specifically that of nuclear brute force. This can be shown by referring to the evolution of NATO policy, to the development of technologies of destruction and of discrimination and control, and to a sequence of substantial analyses of the morality and prudence of threats to bomb innocents between the end of the 1950’s and the present.
Terror and Technology at the End of the 1950’s
Robert W. Tucker’s book, The Just War (1960), observed that the policy of nuclear deterrence in the 1950’s had demonstrated “at least a striking verbal insensitivity” to the consequences of the defensive use of nuclear force. Indeed, “the more extreme versions” were “obsessed” with the idea that the deterrent threat would never have to be carried out and therefore regarded “the effectiveness of deterrence as directly proportionate” to its horrors. If one accepted this extreme, then one had to acknowledge that “in the nuclear age . . . virtually no substantive restraints . . . need to be observed by those waging a defensive war.” But Tucker himself leaned toward the extreme, since he thought no restraints would be effective. He was writing when the average destructiveness of our weapons and the expected inaccuracies, and hence the probable unintended harm, were all near their peak.
Indiscriminateness, he suggested, is a “‘necessity’ that is inherent in technology.” He rejected the position taken by the World Council of Churches in 1958, against “all-out” use of nuclear weapons. As Paul Ramsey observed, Tucker agreed with the pacifists that statecraft in the nuclear age entails using evil means—threats whose execution would inevitably exterminate civilians. He parted company with the pacifists because the pacifists would abandon statecraft. Tucker would rather abandon morality. His concluding paragraph argued: “There is something patently absurd in the complaint that a threat of extermination, even when restricted to preventing one’s own annihilation, signifies a moral decline for which there is no explanation other than that men have deliberately chosen to abandon any sense of restraint. If men presently show less restraint in threatening their adversaries, it is largely because they are less secure than in an earlier age.” But during the 1950’s, doubts grew about the credibility and the political and military implications of threats of extermination and about whether there were no better choices.
The McNamara Doctrine of the First Two Years
The view dominant in the Kennedy administration and among its advisers during its first two years embodied the two converging lines of research on the protection of the strategic force and its targeting. It put into effect many of the criticisms of massive retaliation that had accumulated during the 1950’s. It stressed the importance of a second-strike capability, including a responsible command-and-control system with its vulnerabilities reduced, for example, by the use of airborne command posts. But it also called for a conventional build-up to reduce reliance on nuclear weapons and contemplated the use of nuclear force itself only with discrimination and restraint in the service of political ends. Both conventional and nuclear force, neither of which could substitute for the other, would have to be used in limited ways, if we were to deter aggression, or frustrate it should it occur.
Alain Enthoven defended the continuing relevance of the traditional Christian doctrine of “just war” in the context of the initial Kennedy policy. He explicitly rejected the “realist” and pacifist views of deterrence, both of which assume the incompatibility of morality and statecraft in the nuclear age. We do not, he said, have to choose one or the other. The realists would eliminate moral restraints because they believe them impossible or suicidal. The pacifists think that the impossibility of restraint in nuclear war proves what they had believed all along, that the only moral course is to disarm totally, even if unilaterally, and that this would bring universal peace.
Enthoven distinguished his view also from the obsessive extreme which Tucker had in mind—the position known sometimes by the euphemisms “Minimum Deterrence” or “Deterrence Only.” Enthoven noted that this view, which had begun to take hold among academics after Sputnik (1957), resembled that of the pacifists in its belief that a lasting peace was feasible in the short term. But Deterrence Only would base stability on threats to respond to an attack on our strategic force by deliberately bombing enemy civilians, by avoiding enemy military targets, and by exposing our own civilians to attack. The core of this newer view, as he might have noted, was therefore an antithesis both of pacifist nonviolence and of the Christian and other ethical traditions of humane warfare.
It also differed drastically from preceding U.S. policy. In one sense the new dogma seemed to return to the immediate postwar understanding of nuclear weapons. But the typical view after Hiroshima held that the number of either side’s nuclear weapons would be intrinsically so small and the individual bombs so destructive that they could be effective only against large population centers. An aggressor could effectively attack only cities. His victims could effectively retaliate only against the aggressor’s cities. Deterrence Only, on the other hand, accepted the fact that strategic forces could bomb military forces, but held that we should threaten to respond to a nuclear attack only by bombing cities, and that we should leave our own cities undefended. It was remarkable not only for its extreme departure from humane ethics, but also because it represented a 180-degree turn by many of its main proponents, who, for nearly a decade before they adopted this dogma, had proposed using nuclear weapons only against military targets—in continental defense against invading bombers, against ground forces in Europe, and against combat ships at sea—and who had recommended immense deep-shelter programs for civil defense. Deterrence Only was an extreme minority view at the time of Enthoven’s writing. After the Cuban missile crisis, it became an established ideology.
It was in a speech at Ann Arbor, Michigan in June 1962 that Robert McNamara made public that, in a nuclear war growing out of a major attack on NATO, our main goal would be to destroy enemy military forces, not civilians. He added that we could reserve enough power to destroy the enemy’s society, “if driven to it,” and that threat would give him “the strongest imaginable incentive to refrain from striking our own cities.” (This last resort, which some moralists questioned at the time, I believe was unnecessary: the Soviets have the strongest incentives to preserve their military power.) The part of McNamara’s speech about restricting, so far as feasible, the use of strategic forces to military rather than civilian targets, was embedded in statements stressing that American military force was designed only to discourage aggression, not to change the status quo and never to initiate a war; and that the United States was reducing reliance on nuclear weapons in general and wanted to discourage their spread.
Despite these cautions, his speech produced a strikingly negative response from conservatives as well as liberals both here and abroad, and from keepers of the traditional morality of “just war.” McNamara’s harsh didactic style can hardly explain it. Rather, a certain ambivalence about, if not affection for, nuclear terror had become nearly universal. Franz Josef Strauss, then West German Defense Minister, made clear that he continued to believe that deterrence depended on threatening the immediate use of tactical nuclear weapons at the battle line, to be followed quickly by massive strategic retaliation. Senator Richard Russell and Senator Margaret Chase Smith, Democratic and Republican stalwarts respectively on the Senate Armed Services Committee, denounced McNamara’s statement. Some scientists and engineers, who had only recently, in the aftermath of Sputnik, turned to relying on threats to bomb cities and away from advocating the use of nuclear weapons against military forces and from massive continental defense and deep-shelter programs, now pronounced any ability to attack military forces or to defend cities to be “destabilizing.” With a rancor suggesting a bad conscience, they said that the very modest Kennedy fallout-shelter program, and the new official focus on military targets rather than massive retaliation, might influence American leaders to initiate preventive nuclear war. This, though members of the administration had abundantly stated the very opposite and had explicitly recognized that any nuclear war would be an “unprecedented catastrophe.”
It was plainly silly to suppose that American political leaders would be eager to unleash such an unprecedented catastrophe simply because it might not be total. The reaction was all the more striking since neither these critics nor anyone else had ever suggested that the much more costly and supposedly more effective programs the critics had been backing a few years earlier (for nearly leakproof air defenses, a thick ballistic-missile defense of population as well as of strategic forces, extensive deep shelters for civilians, and the limitation of nuclear weapons to legitimate military targets) would induce American leaders to undertake preventive war. All in all, the venomous response, including that of the media, was shallow, partisan, and, not infrequently, in bad faith. Such venom unfortunately continues to poison current debate as to whether there is an alternative to suicide or surrender. It takes great civic courage to sustain that burden and, in the détente that started after the missile crisis, the administration did not show such courage. Nonetheless, every one of the last six Secretaries of Defense has found it essential both to rely less on nuclear weapons and to return to the subject of the limited use of long- as well as short-range nuclear forces against military targets. Much of Paul Ramsey’s work on “just war” (brought together in The Just War: Force and Political Responsibility, 1968) is related to such a policy.
Ramsey’s answer to Tucker states that the conduct of a nuclear war need not—and, if it is to be moral, must not—“violate the moral immunity of noncombatants from direct attack.” Any harm to noncombatants should at least be unintended. He implies, moreover, that the conduct of nuclear war should involve a serious effort to minimize such unintended damage. If he had been more aware of the possibilities implicit in the electronic revolution, he might have added that research and development need to aim at improving the ability to discriminate. He insists that attacks should not only attempt to discriminate but that the unintended damage should be proportionate to any good that would come out of the war.
In a chapter on “The Limits of Nuclear War,” Ramsey considers what actions in a nuclear war are “undo-able” even if they are “thinkable.” He notes that McNamara’s announcement at Ann Arbor that our main aim in responding to an attack on the Alliance should be to destroy the enemy’s forces, not his civilian population, had occasioned hardly a single amen on either side of the Atlantic. The only responses were stereotyped objections from defense establishments here and abroad, and the same from publications like the Christian Century, normally regarded as keepers of such a civilized rule. Ramsey proceeds with a brilliant defense of such limitation and with a sympathetic but penetrating critique of Thomas Schelling and Herman Kahn, who favored limiting nuclear war, but included under those limits attacks on cities, and who held that it might be rational to threaten such attacks even though it would be irrational to execute them. Limited attacks on military installations and forces are both thinkable and do-able, according to Ramsey; but a direct attack on innocent civilians to achieve some other goal, even a good goal, is wrong. Like art, a political action has consequences beyond itself, but, as Aristotle pointed out, an action is also right or wrong in itself. Attacking innocent civilians is wrong even to accomplish something else. Ramsey rejected the use of threats of even limited city attacks.
Enthoven criticized such threats also on the grounds that they would not be believed; that policies based on “the rationality of irrationality” (on which Father Hehir and the bishops also rely) are not viable in the long run for a democracy, especially one with allies: “Rather, the most credible kind of threat is the threat that we will do what in the event will be most in our interest to do.”
According to Michael Walzer, in Just and Unjust Wars, Ramsey relies on unintended “collateral civilian damage from counterforce warfare in its maximum form to deter potential aggressors.” Walzer himself believes that to deter one must intentionally or unintentionally threaten to kill innocents. But Ramsey was not referring in that context to deterrence of the initial outbreak of an aggression. He was talking of the possibility that, during a war waged against military targets on both sides, both sides might avoid attacking cities and also avoid a maximum counterforce attack—in order to prevent the collateral damage that would ensue from attacking even military targets that are closely co-located with population centers. That is very different from saying that to deter an initial attack one must threaten civilians—even unintentionally. Nor does selectivity in attacks on military targets during a war mean threatening civilians, but rather the opposite. Ramsey did sometimes falter by recommending a “studied ambiguity” about our intentions to retaliate in kind to an attack on cities. Michael Novak’s answer to the bishops also finds “the best of the ambiguous but morally good options . . . in a combination of counterforce and counter-value deterrence.” Yet even he is affected by the insidious semantics of MAD: “countervalue” suggests the Soviets value only bystanders, not military force. But to deter we need rely neither on unintended harm nor on ambiguous intentions.
McNamara, MAD, and MADCAP
One difficulty in getting straight the evolution of both official doctrines and operational policies on nuclear weapons is that the two have often diverged, and the statements of doctrine have often been designed for political combat within domestic bureaucracies rather than potential combat with the Soviets. McNamara in his first two years as Secretary of Defense sought options between suicide and surrender, according to Stewart Alsop, “as Parsifal sought the Grail.” Out of office, he has ended ironically by foreclosing all such options. With an intensity that dims his memory as well as his understanding, he doubts that any nuclear response to nuclear attack can limit destruction.
After the missile crisis, McNamara often talked of Assured Destruction—and later Mutual Assured Destruction—as if they were serious operational policies. Neither was. While Secretary, he never abandoned the goal of using strategic forces against Soviet military forces or the goal of limiting harm to American civilians. Even as declaratory doctrine he never stated MAD in the unqualified and brutal Orwellian form of the aphorism “killing people is good, killing weapons is bad.” When he talked about a capability for assured destruction of 20-25 percent of the Soviet population, he was thinking of deterring the Joint Chiefs of Staff from asking for higher budgets rather than the Soviets from attacking the U.S. It was his way, if not the best way, of winning a budget battle and putting a lower ceiling on the size of our strategic forces. He stressed that we would have the capability for destroying the Soviet population—and he expected that capacity to deter the Soviets; but if deterrence failed, we would use our strategic forces to destroy Soviet forces attacking the United States. Later, when he drifted toward regarding it as desirable for the Soviets to deter us, he was still talking about capabilities.
In short, the form of MAD doctrine he introduced can best be described by the acronym MADCAP rather than MAD. McNamara said we would use a MAD capability for deterrence without seriously intending to assure the destruction of enemy noncombatants. Nor was he entirely serious about attacks on combatants. MADCAP did not lead to any persistent thought about how to improve the force to make it increasingly discriminating, and it discouraged thinking about the selection of various theater and other military targets suited to proportionate responses. It led to slowing or stopping various programs that would have increased our ability to discriminate between military and civilian targets. It made us less serious about the problems of nuclear targeting of combatants or noncombatants: it avoided some of the obloquy of seriously threatening to do the cheap and easy job of killing large “soft” concentrations of civilians without forcing thought about the harder job of carefully selecting and, if necessary, destroying military targets without killing bystanders; or about the hard but feasible and necessary job of keeping violence under control.
The bishops, their defenders, and the strategists on whom they rely all talk of the uncontrollability of nuclear weapons as a deplorable but unavoidable fact of life. However, they make a virtue of this supposed necessity. John Garvey, columnist for the Catholic Commonweal, knows that one may not threaten what one does not intend to do, and grants that “if your enemy knows that you will absolutely refuse to use a weapon, what you have is no longer a weapon and is therefore useless”; but he claims that “it would be naive to think that we are so fully in control of ourselves that in the event of an attack we would not say, ‘What the hell,’ and hit them with everything we’ve got.” Which apparently would give the threat, however immoral, some use as a deterrent.
However, it would be naive or worse to suppose that we cannot impose controls over both initial and subsequent uses of nuclear weapons. “Permissive action links,” which we place on all our weapons overseas and which microchips and other electronic advances are constantly improving, can make it essentially infeasible for military commanders to use nuclear weapons without release by a remote political authority. Moreover, if we really thought political authority were reckless, we could make this release mechanism as elaborate as we liked and even divide the releasing codes so that they would require the agreement of many parties. But the processes of consultation in the Alliance are now complex, and would affect not only the initial, but also subsequent releases. It is most unlikely that we would simply say “Whee!” and let everything go. In Europe the problem is quite the opposite. We should not and do not rely on the threat of losing control to deter either nuclear or conventional attack. But MAD and the fictions of uncontrollability it has propagated encourage us to rely on the threat of losing control as a substitute for dealing with the dangers of conventional conflicts. In short, they have led us to be less serious about conventional war as well.
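To make the divided-release idea above concrete, here is a minimal sketch, in present-day terms, of a quorum scheme in which no single commander can enable a weapon. Everything in it—the parties, the quorum size, the hashing choices—is a hypothetical illustration of the principle, not a description of any actual permissive-action-link design, which remains classified.

```python
# A minimal sketch (hypothetical throughout) of a divided release code:
# enabling requires valid codes from a quorum of separately held authorities,
# so no single party can release alone.
import hashlib
import hmac
import secrets

QUORUM = 3  # hypothetical: any 3 of the 5 designated parties must concur

def _digest(code: bytes, salt: bytes) -> bytes:
    # Slow, salted hash so the stored verifiers do not reveal the codes.
    return hashlib.pbkdf2_hmac("sha256", code, salt, 100_000)

class ReleaseAuthority:
    """Stores salted digests of each party's code, never the codes themselves."""

    def __init__(self, codes: dict[str, bytes]):
        self._salt = secrets.token_bytes(16)
        self._digests = {party: _digest(c, self._salt) for party, c in codes.items()}

    def authorize(self, presented: dict[str, bytes]) -> bool:
        # Count the parties whose presented code matches, comparing in constant time.
        valid = sum(
            1
            for party, code in presented.items()
            if party in self._digests
            and hmac.compare_digest(_digest(code, self._salt), self._digests[party])
        )
        return valid >= QUORUM  # release only on the agreement of many parties

# Five hypothetical parties hold separate pieces of the releasing authority.
codes = {p: secrets.token_bytes(8) for p in ("A", "B", "C", "D", "E")}
pal = ReleaseAuthority(codes)
assert not pal.authorize({"B": codes["B"]})                   # one party: refused
assert pal.authorize({p: codes[p] for p in ("A", "B", "C")})  # quorum: enabled
```

The point of the sketch is the one in the text: the elaborateness of the release mechanism is a design choice, so “losing control” is not a fact of nature.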
The bishops’ strategists, who believe that one can deter even if one is plainly committed never to use nuclear weapons, first, second, or ever, would maintain a capability but never use nuclear weapons at all. McNamara, when he changed from the doctrine of his first two years to talk of capabilities for mutual assured destruction, said he would maintain the capability to kill Russian civilians but would actually use nuclear weapons against certain military targets. That’s rather different. Nonetheless it was a long step on the way to the present absurdities and evasions of the moral and prudential problems of discouraging a nuclear attack on the U.S. or one of its allies. Or a conventional attack.
Soviet Values and MAD Nuclear Threats to Deter Conventional Attack
Michael Walzer writes perceptively about the use of terror by guerrillas to provoke counterterror against innocents. But when it comes to nuclear weapons, he accepts the MAD stereotype about the use of threats of terror against innocents to deter attack. He doesn’t question the technical determinism of the nuclear technologists that limiting harm to civilians on either side is impossible. He advances comfortably the familiar paradox about “the monstrous immorality that our policy contemplates” but thinks it inevitable. “The unavoidable truth is that all of these policies rest ultimately on immoral threats.” Like Tucker, Walzer is unwilling to give up immoral threats because he thinks they are necessary for deterrence. Here he rests on the baseless judgment that the only thing that will deter Soviet aggression is the prospect that Russian bystanders will be killed.
To reject that view one need not assume that Soviet values are the same as our own; nor that the Soviets are simply monsters who don’t care or even like to see civilians killed. We need only observe that the Soviets value military power and the means of domination at least as much and possibly more than the lives of Russian civilians. This is surely evidenced by a long history documented by careful scholars like Adam Ulam, Robert Conquest, Nikolai Tolstoy, and many others, in which the Soviets have sacrificed civilian lives for the sake of Soviet power. Their collectivization program of the late 1920’s and early 1930’s gained control over the peasants at the expense of slaughtering some 12-15 million of them. (Stalin told Churchill that the great bulk of 10 million kulaks had to be wiped out or transferred to Siberia.) The Soviet government sharply increased grain exports during the famine year of 1933, when 5 million Ukrainian peasants were dying. If Robert Conquest is right, the Great Purge of the late 1930’s killed several million more Soviet citizens. If Nikolai Tolstoy is right, Stalin and the NKVD were responsible for more than half of the 20-30 million deaths suffered by the Soviets during World War II. Soviet refusal to abide by the Geneva Convention on Prisoners of War doomed many additional Soviet as well as German prisoners.
Whatever else one may say of these actions, they do not suggest that Soviet leaders value the life of Russian citizens above political and military power. If the West responded to Soviet military attack by destroying military targets, it would affect something on which Soviet leaders continue to lavish a huge part of their painfully scarce resources and which they appear to cherish quite as much as they do Russian citizens; and the prospect of such a Western response would be the best deterrent to their initiating war. Moreover, continued attacks during a war on elements of their military power and means of domination would appear to be the best way to bring the war to a rapid close. Prudence does not force us to rely for deterrence on even unintended damage done to civilians. Discrimination remains an important goal during the war—and an important capability to achieve in advance of the war. It helps deter the war or bring it to an end.
But Walzer believes that “counterpopulation deterrence” is basic. He also believes it is perfectly effective. It “rules out” (i.e., makes so unlikely as to be negligible) any nuclear war between the great powers; even though the Soviets know we believe that nuclear attacks on populations would be suicidal, our threat would be sure to deter them. And, typical of his time, he is also quite comfortable about the effectiveness of counterpopulation deterrence for forestalling a conventional invasion. His complacency here parallels that expressed in various British and American magisterial writings of the late 1960’s and 1970’s. He quotes with approval a passage from Bernard Brodie: “The spectacle of a large Soviet field army crashing across the line into Western Europe in the hope and expectation that nuclear weapons would not be used against it—thereby putting itself and the USSR totally at risk while leaving the choice of weapons to us—would seem to be hardly worth a second thought. . . .” One may surmise that if Brodie were alive he would be having second thoughts. Many who wrote that way in the late 1960’s and 1970’s are less comfortable today, in particular about threatening mutual annihilation as a way of deterring a conventional attack on Western Europe.
McGeorge Bundy illustrates the change in the American establishment. He had chided Henry Kissinger for expressing, at Brussels in 1979, public doubts about the credibility of American strategy for the protection of West Europe. “American strategy for the protection of West Europe,” he was satisfied, was “a classic case of doctrinal confusion and pragmatic success.” (He inserted the two words “so far,” suggesting he was not completely satisfied.) I cautioned at the time that it would be a great mistake to attribute the pragmatic success to the doctrinal confusion; and Bundy did not disagree. The protest movements in Europe were already visible, for one thing; for another, there were the Soviets, and they might not be confused just because we were. We cannot count on a Mutual Assured Confusion. In any case, Bundy, less confident now about MAD threats to deter conventional invasion, has joined Robert McNamara, George Kennan, and Gerard Smith in proposing that we exchange pledges with the Soviets that neither would be the first to use nuclear weapons. The four stress the No-First-Use pledge much more than any serious and extensive program to improve the size or quality of NATO conventional forces, so that NATO could depend less on nuclear threats to overcome Soviet advantages in the use of conventional force. These advantages have to do not only with the massive and increasing size and quality of the Soviet force, but with the Soviets’ geographical position and their relatively improving access to air space and bases near critical areas. Japan and Korea as well as all our European allies are within immediate range of Soviet, but far from the center of American, conventional power. So is Persian Gulf oil, on which they all have come to depend.
Indeed, it seems that Bundy and his three coauthors have not really abandoned an implicit threat of the first use of nuclear weapons to make up for our conventional disadvantage. For while the four may mean the Western pledge, they rely on the Soviets not trusting us to live up to our pledge and so continuing to keep their ground forces dispersed and less effective for conventional attack and defense. In short, the policy they advocate resembles the pastoral letter in explicitly abandoning a nuclear threat, while implicitly continuing to rely on it. In their case, the threat is implicit in NATO’s continued capability to use nuclear weapons first. If their policy led each side to believe the other’s pledge, the Soviet Union would be more likely to concentrate its conventional force effectively—and safely since, on their recommendation, we would keep our pledge. On the other hand, if we trusted the Soviet pledge, we might concentrate our defenses at the likely points of attack. That would not be safe since NATO has no way of enforcing such a Soviet pledge. It seems that the four want neither side to believe the other’s pledge. In sum, recommendations for exchanging unenforceable pledges about the first use of nuclear weapons in Europe do not reduce the doctrinal confusion that has been troubling NATO even on the subject of nuclear deterrence of conventional attack. They only alarm West European leaders who continue to rely excessively on nuclear weapons.
Many have observed that the four are rather perfunctory about a program to improve NATO conventional forces—in size, quality, method of deployment, or strategy—which would make it less necessary for European leaders to rely on nuclear weapons by making it more likely we could defeat by conventional means any of several plausible Soviet conventional attacks. They do talk of “maintaining and improving the specifically American conventional forces in Europe” but claim, in the face of much evidence of an unanticipated worsening in our ability to defend Europe’s interests in more than one critical area near the Soviet periphery, that we tend to exaggerate Soviet relative conventional strength. And they say we underestimate “Soviet awareness of the enormous costs and risks of any form of aggression against NATO”—which is to rely covertly on the threat of first use of nuclear weapons that they overtly abjure.
Recently Bundy and McNamara have joined Cyrus Vance and Elmo Zumwalt in a letter to the Congressional Budget Committees calling for large cuts in the administration’s FY ’84-FY ’89 defense budget—with two-thirds of the dollars cut coming out of conventional programs. Like some drafts of the pastoral letter warning that an “upward spiral even in conventional arms may lead to war,” and saying that “We do not in any way want to . . . [make] ‘the world safe for conventional war,’ which introduces its own horrors,” their budget letter warns of the dangers of “spurring the arms race.” What is more, the conventional arms cuts it recommends are squarely incompatible with reduced reliance on the early first use of nuclear weapons or indeed with any coherent view of potential critical conventional conflicts. It plans for only a short conventional war, cutting in half the program for increasing the number of days of stocks of “modern conventional munitions” in Europe. But it would cancel the C-5B program for rapid airlift and depend much more on the comparatively slow sealift that would be important in a long conventional war. It would focus the Navy largely on the defense of the sealines of communication in the North Atlantic, yet drastically cut Navy programs important for defending these sealines, such as those permitting long-range precise conventional attacks on the Soviet naval air bases from which Backfire bombers could menace both the sealines and ships defending them.
I do not doubt the earnestness of the authors’ desire for a more than nominal decrease in NATO’s reliance on nuclear weapons. I can testify that Robert McNamara’s interest goes back at least twenty-two years. I was his representative on the Acheson Committee which drafted the National Security Council decision formally to end the U.S. policy of massive retaliation in the spring of 1961. That decision called for raising the nuclear threshold by preparing a capability to defeat at its own level all but a very massive conventional attack, and for the use of nuclear weapons only if our increased conventional force did not suffice. But as the stormy reaction to the McNamara doctrine of his first two years indicated, NATO’s threats of first use reflected its reluctance to spend the resources needed for an adequate conventional defense rather than any convincing willingness actually to use nuclear weapons quickly or at all. Moreover, though McNamara doubted the utility of battlefield nuclear weapons, to quiet the political storm he did not resist sending several thousand more tactical nuclear weapons to Europe, making a stockpile there of 7,000. And contrary to his recent memory, he increased our total stock of nuclear weapons until it reached its peak in his last year as Secretary. When, six years after the Acheson Report, the Europeans did agree to “flexible response,” it was a grudging compromise—agreeing on the need for improved conventional forces but insisting that the main defense would be nuclear. That tended to undercut the seriousness with which they or we attended to the problem of improving NATO’s conventional ability to defend itself against conventional attack.
Carl Kaysen, McGeorge Bundy’s former deputy as National Security Adviser, in his influential contribution to the 1968 Brookings study, Agenda for the Nation, contemplated a No-First-Use pledge, but also called for large cuts in defense, including the halving of U.S. ground forces in Germany. Senator Mark Hatfield and Senator William Proxmire, eager to freeze nuclear weapons then as now, led the battle to cut conventional arms. All that may seem bizarre, but it is not. The wave of “study groups” that deplored “exaggerations” of the Soviet build-up and the supposed spiraling of U.S. strategic budgets that forced the Soviets unwillingly to follow our lead continued to set national priorities toward more social spending. But not much social spending could be got out of strategic budgets. They had been spiraling not up but down, at 8 percent a year. By the early 1970’s, they were less than 1 percent of GNP, and by FY ’76, less than one-half of 1 percent. The Soviet deployment of ICBM’s, SLBM’s, and heavy and medium bombers averaged twice as great as the “greater-than-expected” threat predicted by Defense Secretaries for ten years starting with Secretary McNamara. Now, once more with program cuts in mind, the Bundy et al. budget letter talks of “greater-than-expected” threats and, like the bishops, resurrects the old apparition of our spurring an arms race by doing too much.
From the beginning of the 1960’s to the late 1970’s, the U.S. and all its major allies, while prattling about a U.S.-driven arms race, halved defense budgets as a percentage of GNP, while the Soviets steadily spent more in real terms for conventional as well as nuclear forces. As a result, NATO found itself continuing to rely on the early and first use of nuclear weapons, while the “correlation of forces” was changing so as to make that less convincing than ever before.
If the anti-nuclear movement in West Europe has served any useful function at all, it has done so by making responsible West Europeans more aware of the recklessness of depending on apocalyptic nuclear threats to meet conventional attacks. And given Europe’s economic problems, key Western leaders are forced to think not merely of multiplying brute numbers but also of exploiting the new intelligent technologies to increase the effectiveness of the resources used. Such an effort has been hampered up to now by a kind of Luddite and moralistic resistance to qualitative improvement and by a particular antipathy to technologies that improve the possibility of discrimination and choice.
Moralists who have chosen to emphasize the shallow paradoxes associated with deterrence by immoral threats against population have been at their worst when they have opposed any attempts to improve the capability to attack targets precisely and discriminately. While they have thought of themselves as aiming their opposition at the dangers of bringing on nuclear mass destruction, they have often stopped research and engineering on ways to destroy military targets without mass destruction; and they have done collateral damage to the development of precise, long-range conventional weapons. (Junior Congressmen like Thomas Downey and Edward Markey, who had their fun with talk of Star Wars in March, might have benefited from observing that Luke Skywalker used one accurately placed weapon to destroy the indiscriminately destructive Death Star. And with advanced terminal guidance we need not rely on “The Force.”) They have tried to stop, and have slowed, the development of technologies which can free us from the loose and wishful paradoxes involved in efforts to save the peace with unstable threats to terrorize our own as well as adversary civilians.
The events leading to the destruction of German and Japanese cities in World War II offer parallels. British scientists, when the menace of Hitler overcame their natural distaste for arms research, formed a Committee for the Scientific Survey of Air Defense which backed Watson-Watt’s development of radar for the defense of Britain. Their distaste was not overcome enough for them to support as energetically the Committee for the Scientific Survey of Air Offense, whose work was quite desultory. The lag in developing radar for navigation and bombing, however, did not prevent the bombing of German targets. It only assured that the raids would destroy more German civilians. Some blame lies with the Royal Air Force’s failure to improve accuracy in the period between the wars. Marshal Trenchard, relying on the special experience of strategic bombing in clear weather against undefended targets in Iraq, thought British accuracy in general excellent. In 1928 he argued, “What is illegitimate, as being contrary to the dictates of humanity, is the indiscriminate bombing of a city for the sole purpose of terrorizing the civilian population.” Citing the draft code of rules for air war drawn up at the Hague in 1922-23, he held that air attacks were legitimate—“provided all reasonable care is taken to confine the scope of the bombing to the military objective. . . .” But he hardly took reasonable care to improve discriminateness before the war. (A minor fault, compared to that of religious strategists who testified to Congress against “targeting systems that minimize collateral damage to civilian life” and against any defense of U.S. civilians.) Trenchard’s opposite numbers in the British Army and Navy had doubted that the state of accuracy in 1928 would permit either the effectiveness or the discrimination that Trenchard claimed. During World War II, when he found how poor Bomber Command’s aim was, Trenchard advised that if it missed its intended targets it would still kill Germans and so do good work.
Declaratory doctrine for the American defense of Europe started in the 1950’s with the belief that strategic and tactical nuclear weapons could replace the conventional firepower which our NATO allies hesitated to supply against conventional invasion. It went through a phase in which many of the present advocates of MAD entertained exaggerated hopes for limiting the harm done by the large-scale use of tactical nuclear weapons on European battlefields; and for using massive active and civil defenses to limit to quite small amounts the damage done by a large raid on U.S. cities. When their hopes began to seem excessive, they switched to the view that the threat of unlimited mutual destruction was actually good, since it was nearly sure to deter even a conventional invasion. The last year or two have seen signs of a renewed serious interest in improving NATO’s ability to meet a conventional invasion in Europe on its own terms. Manfred Woerner, the current Minister of Defense in the German Federal Republic, has set forth a program which is designed not only to discourage a Soviet conventional invasion, but to do it responsibly in a way that will also put to rest the growing West German anti-nuclear movement. He would exploit the advanced technologies that are coming to be available for that purpose.
Woerner’s view stands in contrast to that of his predecessor, who held that even a conventional war in Europe would be “the end of Europe,” and that it was essential that tactical nuclear weapons be used quickly but only as a link to the “intercontinental exchange”—which would be “the end of the world.” But anyone who relies on such threats to deter a conventional attack is likely to threaten up to the last minute and then, when it would have become clear that the Soviets did not believe that NATO leaders would consciously bring on the end of Europe and then the end of the world, rush to reassure the Soviets that they did not really mean to execute the “threat.” Such a policy, Herman Kahn accurately labeled “preemptive surrender.” It differs from the policy advocated by West Germany’s party of the Greens in the anti-nuclear movement who would make their accommodation with the Soviets now, in time of peace, safely in advance of a threatened Soviet attack. Pierre Hassner has characterized the difference between the leaders of the anti-nuclear movement and some leading figures in the West European establishment who rely on suicidal threats: it is the difference between “preventive surrender” and “preemptive surrender.”
Deterring Nuclear Attack on an Ally
Bundy, McNamara, Kennan, and Smith have lost their faith in suicidal threats as a way of deterring a conventional invasion. Yet they continue to believe in the necessity and adequacy of such threats for deterring nuclear attacks. However, the hope that an adversary can safely be deterred by our threat to blow him up along with ourselves is unfounded not only for a conventional attack but also for a nuclear attack on an ally.
Consider a strategically placed ally like Norway with an American nuclear guarantee and no nuclear weapons of its own. How would a capability to destroy Soviet civilians, along with American civilians and possibly the civilization of Europe itself, discourage Soviet use of nuclear weapons against military targets in the course of an attack aimed at seizing the sparsely populated but strategic northernmost counties of Norway? No one—no Norwegian, no American leader, and no Soviet leader—would seriously expect us to respond to such an attack by consciously initiating the killing of 100 million or so innocent Soviet civilians and a corresponding number of Americans and/or West Europeans. That is one reason why some believers in MAD are explicitly for threats and against their execution. But a capability which plainly will never be used to initiate a chain of events we believe would lead to the end of civilization will terrify an adversary no more than a capability that would destroy half, or a tenth, or a millionth the number of civilians, or no civilians at all. The only way weapons can inspire concern is by the likelihood that they will be used. The residual fear that the West might deliberately blow up the world tends to terrify some in our own elites much more than the Soviets who chatter less on this subject.
The Incoherence of “Deterrence Only” Even for Deterring Nuclear Attack on Oneself
Dogmas of “Minimum Deterrence” and “Deterrence Only” had their origins in the late 1950’s in the writings of General Pierre Gallois. Gallois believed that nuclear weapons spelled the end of alliance: no nuclear guarantee to a non-nuclear ally was credible since no nation would commit suicide for another. His version of Minimum Deterrence formed the center of his justification for the spread of nuclear weapons to any nation, even very small ones that wanted protection against nuclear attack or coercion. Initial American variants of the Minimum-Deterrence doctrine in 1958 cited some of Gallois’s principal arguments and the calculations he had designed in order to prove the necessity for targeting cities rather than opposing military forces; and some 1958 American writings on Minimum Deterrence recommended distributing Polaris submarines to NATO allies to replace the American guarantee. However, the incoherence of the Deterrence Only view is thorough and applies to deterring an attack on oneself. If it is true that a nation will not commit suicide for another, neither can it commit suicide to assure its own survival. Suicidal threats are in general not a reliable means of dissuasion.
Yet the total separation of threat from any possibility of execution has been common in establishments abroad as well as here, even among those who would maintain the Alliance. A former associate director of that pillar of the European establishment, the International Institute for Strategic Studies (IISS), talked in much the way Father Hehir does. Father Hehir holds that nuclear weapons exist “to be not used; their purpose is to threaten, not to strike.” Ian Smart, then of IISS, has said that “nuclear weapons are exclusively destined to deter” and suggested that only certain misguided American hawks view them “as reasonable and effective” for fighting. An instrument that destiny or purpose plainly made unreasonable and ineffective for actual use, and thus sure to remain unused, could hardly deter. It would make war more likely, not less.
William O’Brien’s 1981 book, The Conduct of Just and Limited War, while a painstakingly honest and informed inquiry into the circumstances in which war is justified and into its discriminate and proportionate conduct in a wide range of historical conflicts, is less incisive on MAD. He gives a little credence to the possibility that at least a one-sided abandonment of the threat against innocents might be destabilizing, and, though he is aware of the possibilities, he appears to underestimate the actual progress in technologies that gives us a choice between destroying military targets and destroying innocents. However, he is right on the mark in his more recent writings answering the Deterrence Only version of the pastoral letter proposed by Father Hehir and the Jesuit Father Francis Winters.
O’Brien is blunt about the insanity of a deception that labels itself a deception, as the doctrine of Deterrence Only does. Father Winters offers an enthusiastic explication of the pastoral letter as opting “with notable casuistic ingenuity for possession of the strategic arsenal along with renunciation of the intention to employ it.” O’Brien responds that, “given the centrality of credibility to deterrence . . . this proposition is insane. What is needed is not casuistic ingenuity, but a serious commitment to face the dilemmas of nuclear deterrence without recourse to escapist diversions.”
As for Father Hehir, he is aware of but troubled by the fact that some nuclear weapons are less destructive than some conventional ones. He has argued on the basis of “psychological criteria” that we may continue to threaten to use nuclear weapons but should ban their actual use because he wants to solidify in our minds the dangers of crossing the gap between conventional and nuclear weapons. He wants to set up a psychological barrier against our ever using them. Unfortunately, like the lay strategists who are his model, he is less concerned to set up a psychological barrier against the use of nuclear weapons by our adversaries. Assuring them that we would never use nuclear weapons, even in response to a nuclear attack, cancels the deterrent and, for them, opens up a psychological expressway.
One can see why “casuistry,” which once meant dealing with cases of conscience and the resolution of questions of right or wrong in conduct, acquired a bad name and came to refer to the trivial and false application of moral principles to make things seem like their opposite. The upholders of the bishops’ doctrine of “Use, Never” (i.e., No Use—First, Second, or Ever) seem unaware that an adversary might be concerned not only about the magnitude of the harm we threaten but about the likelihood that we will inflict it.
However, it is a familiar fact of everyday life that we consider implicitly in our behavior not only the size of the assorted catastrophes we might conceivably face when we get up each morning but also their likelihood. Blizzards in August might find us peculiarly unequipped to survive them. So also sunstroke in December. Neither bothers us much, nor leads us to wear furs in summer and carry parasols in winter. Even when we face adversaries and not merely environmental dangers, we have a way of arraying threats according to the probability that they will be carried out and not only in terms of the damage they would do if they were. When a threatener can execute a terrible threat to us with little harm to himself, we worry more than when he would suffer at least as much as we would. Moreover, when a threatener, who expects to destroy himself and his allies along with the aggressor, says that he has no intention whatsoever and, in fact, would regard it as immoral to execute his threat, this can only be reassuring to a potential aggressor. It is an invitation rather than a deterrent. Somehow it does not occur to those who hope to deter by a suicidal threat (which they loudly proclaim they will never execute) that they may be doing the opposite of deterring. Their policy is—to use that dread catchword—“destabilizing.”
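The everyday reasoning here can be made explicit in rough expected-value terms; the formalization below is ours, added only to state the comparison compactly, not the author’s:

```latex
% A rough formalization (ours) of arraying threats by likelihood as well as
% size: the weight an aggressor gives a threat is roughly
\[
  W \;\approx\; p_{\text{execute}} \times C_{\text{harm}} ,
\]
% so an apocalyptic threat (C enormous) announced as one that will never be
% carried out (p near zero) can weigh less with an aggressor than a
% proportionate response (C moderate) that would plainly be carried out
% (p near one).
```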
Soviet leaders who were not deterred by a threat they knew would never be executed would not, as Cardinal Krol suggests, have to be insane. It seems more nearly insane, as O’Brien says, to hold that in all circumstances, even during a stalled conventional invasion when all alternatives looked risky to them, the Soviets would be deterred “beyond question” from using nuclear weapons by our self-confessed suicidal bluff. Nonetheless, the doctrine of “Use, Never” advanced by the bishops merely makes more explicit the operational meaning of secular strategies of Deterrence Only. The Stanford physicist Sidney Drell has recently repeated the standard jumble about deterrence and fighting: instead of observing that our threat to fight back will dissuade an opponent only if he thinks we are able and if necessary willing to fight back, Drell says deterring and fighting are incompatible goals.
Deterrence Only focuses on deterring Western responses rather than Soviet attacks. It assumes that it is really the West, and especially the United States, in its misunderstanding of the Soviets, that menaces the nuclear peace and not the Soviets. This is an assumption widely held, even by those who oppose the disarmers. Michael Howard of Oxford tells us that the Soviets are entirely satisfied with the present division of Europe and that only Western extremists are not. He grants that the Soviets would revise the rest of the world, but doesn’t notice that in that process they might effectively alter the division of power within West Europe too. It would be hard for the Soviet Union to avoid altering the division of power in Europe, even if unintentionally, if it seized some future opportunity to satisfy its long expressed interest in expanding toward the Persian Gulf and the Eastern Mediterranean. (England is said to have acquired its empire in a fit of absent-mindedness.) Moreover, from the Soviet point of view, the destruction of the Western alliance that would result would surely be a bonus in defense of Soviet Western borders. George Kennan draws rather more satisfaction than is warranted from Soviet paranoid defensiveness. Paranoids can be dangerous.
But Michael Howard isn’t terribly worried about the Soviets beginning a war. He worries about Americans. Though he has been subject to attack by E.P. Thompson and the nuclear disarmers, he sometimes sounds a little like them. He says: “Whether I could encounter the same phenomenon in the Soviet Union, I do not know. But wars begin in the minds of men, and in many American minds the flames of war seem already to have taken a very firm hold.” And: “When I hear some of my American friends speak of that country [the Soviet Union], when I note how their eyes glaze over, their voices drop an octave, and they grind out the words ‘the Soviets’ in tones of gravelly hatred, I become really frightened; far more frightened than I am by the nuclear arsenals themselves or the various proposals for their use.” I know some of Howard’s American friends (indeed have counted myself as one), but none resembling that description. If such glazed-eyed monsters controlled the U.S. arsenal, instead of planning proportionate Western responses that might credibly discourage Soviet attack, the West might focus its attention entirely on stopping us and let the credibility of U.S. guarantees erode.
Unfortunately, the reactions to the President’s speech of March 23 on protecting civilians showed that the view of some Americans, indeed of some former Cabinet officers firmly attached to MAD doctrine, resembles that of Michael Howard. These Americans, like their British counterparts, may deplore the “oversimple” view of Soviet leaders which they attribute to American “hawks.” But when seized by MAD dogmas their view of U.S. leaders is more outrageously simple. They suppose American leaders to be so wantonly unconcerned about the unprecedented catastrophe of nuclear war that they are very likely to start one in any grave crisis. Anyone professing to believe that finds it even easier to believe that an American President would casually unleash nuclear war if he thought that American civil society had some substantial protection. But it is absurd to think that American or Soviet leaders are straining at the nuclear leash.
Former Defense Secretary Harold Brown answered the President with a variant of the fantasy that American hawks are likely to unleash nuclear war if they think the U.S. has a fair chance of coming out gravely but not totally ruined. The bishops cite him in support of their view that there is “an overwhelming probability that a nuclear exchange would have no limits.” While in office, Brown was torn between, on the one hand, the view forced upon him by evidence that Soviet arms had been going up while ours went down and, on the other hand, the view that both superpowers are engaged in a spiraling buildup incapable of yielding either side the ability to fight, to coerce, or even to gain some political advantage. Thus “the Soviets have as great an interest and should have as great an interest in strategic arms limitations as we do.” And he oscillated between the MAD dogma that all either side needs is to be able to destroy the other as a “functioning modern society”—an implicit pact for mutual suicide—and the recognition embodied in Presidential Directive 59 that the Soviets have made no such pact and shown no desire to make any possible Soviet attack an act of suicide. Like Hamlet (and McNamara) he is “but MAD north-north-west; when the wind is southerly, he knows a hawk from a handsaw.” But now the political winds blow more from the north and Brown’s American leaders are amazingly susceptible to clever briefers:
Deterrence must leave no doubt that an all-out nuclear war would destroy the nation—and the leadership—that launched it. Realistically we must contemplate deployments by both super-powers, investing huge amounts in such defensive systems. If a clever military briefer, in a time of grave crisis, with such systems in place, can persuade the political decision-makers that the defensive systems, operating together with other strategic forces, had a reasonable chance to function well enough to result in even a severely damaged “victor,” the scene will have been set for the ultimate disaster.
One might suppose that leaders on either side might be given pause if they thought that they would be completely destroyed even if the nation were not. But evidently the American leaders Brown contemplates wouldn’t mind that and would be easily swayed by a military briefer who told them that the nation would have a reasonable chance of coming out only “severely damaged.”
The United States could have launched a nuclear attack on the Soviet Union during any of several crises that came up while we had nuclear weapons and they did not. For example, we had 50 nuclear weapons and they had none in 1948 at the time of the Berlin crisis. It would not have taken a very clever military briefer to convince our leadership that the United States would not be destroyed by a nuclear attack in 1948. Yet since McNamara introduced the notion that it was very important for the U.S. that the Soviets be able to threaten the U.S. with annihilation of its cities, the absurdities implicit in MAD have become gospel even with intelligent men like Harold Brown.
The United States never seriously considered an attack on the Soviets when it had a nuclear monopoly; nor for many years after, while Soviet nuclear forces were extremely vulnerable. The idea that it would launch nuclear aggression now is a fantasy worthy only of the conspiracy theorists in the disarmament movement. Nor should we take seriously the idea that the Soviets tremble in fear that the United States might launch a nuclear attack simply because it had deployed some defense of innocent bystanders.
Many analyses in the 1960’s related the use of our strategic forces to the objective of limiting harm done to ourselves and our allies in case deterrence should fail; and they related deterring an adversary to the ability to harm him if we responded. McNamara’s Annual Posture Statements after the missile crisis, for example, tended to treat these two aims as independent. However, the separation misconstrues the problem of deterring. In a war, when all alternatives may be extremely risky to an adversary, we may not convince him that the alternative of nuclear attack is riskier than the others if we have persuaded him also that it can be done safely because we won’t retaliate for fear of the unlimited harm we would bring on ourselves. We only complete the absurdity and undermining of deterrence when we say that we have no intention to fight, that is, to use nuclear weapons if deterrence fails. Unfortunately, the principle of deterrence and the principle of “Use, Never” annihilate each other.
Declaring—or telling oneself—that one does not really mean to use nuclear weapons if deterrence fails is one way of stilling uneasiness about threatening to kill innocents in order to deter. Another standard way of softening guilt is to say that the West should continue to raise such a threat even implicitly only if it is making serious progress toward the total elimination of nuclear weapons. That, however, does not lie solely within the West’s power. It depends on others who have or may acquire nuclear weapons, and in particular it depends on the disposition of the deeply suspicious, hostile leadership of the Soviet Union.
For a brief time in the immediate aftermath of Hiroshima, some Western leaders talked fervently about world government and the need to sacrifice national sovereignties to assure world peace. British Prime Minister Clement Attlee invoked “an act of faith” by the United States, the United Kingdom, and other nations, and “a new valuation of what are called national interests.” Secretary of War Henry L. Stimson “spoke continuously about a way to use nuclear energy for other things ‘than killing people’” and of “the changed relation of man to his universe.” It is easy to understand and sympathize with their initial emotional reaction to the enormous destruction released at Hiroshima and to feel their disappointment as Soviet behavior made evident that such hopes were utopian. But thirty-eight years later, the utopian hopes expressed by Jonathan Schell and others are more obviously groundless. Since then, Soviet behavior has made clear many times that Soviet versions of utopia differ from our own. The Soviets see the lasting independence of Western democracies side by side with their own system as a permanent danger to its maintenance, not to say its expansion toward an international utopia. Meanwhile, there is little evidence that some plausible arrangement would lead them to surrender so powerful an instrument of coercion or defense. That, after all, was indicated in their rejection of the Baruch-Acheson-Lilienthal plan for international control of atomic energy. Stalin exhibited none of the anguish sincerely felt by Western leaders and none of their momentary hopes for a world authority governing Communist and non-Communist nations side by side. The contrast of his private view with that of Western leadership is illustrated by the accounts of such privileged and reliable witnesses as Milovan Djilas: “He spoke of the A-bomb, ‘that is a powerful thing, powerful!’ His expression was full of admiration. . . .”
Nor have Soviet leaders since Stalin shown any lesser awareness of the value of nuclear weapons as an implicit or explicit means of intimidation in a hostile world they do not dominate. Their value is only enhanced by the contrasting Western scruples on the same subject. If Western political as well as religious leaders take Western possession of nuclear weapons as justified only if there is progress toward agreement with the Russians to eliminate them altogether, they place in Soviet hands the decision as to whether the West will continue to maintain a nuclear deterrent.
Not all differences are negotiable. Pretending that they are suggests a willingness to disarm unilaterally—either because the Soviets prevent agreement or because they agree only to a disarmament which would be purely nominal for them but real for the West. The Greens in West Germany look forward to the total elimination of nuclear weapons and their immediate withdrawal from Eastern and Western Europe. They are not noted for their realism. However, they reject Reagan’s zero option for intermediate nuclear forces in Europe as “unrealistic,” even though it would seem to be a substantial step on the way to their own goal. Petra Kelly and Manon Maren-Grisebach, two of their principal leaders, explain that the zero option is “unrealistic” because the Russians would never agree to it. It is therefore “not even an honest step toward arms reduction.” But the inconsistency of the Greens and their willingness to see the West accommodate to an unwavering Soviet aim to increase Soviet advantage does not differ substantially from that of many in the West who complain that the American government has not been able to convince the Soviets that we are sincere.
Paul Ramsey has understood very well what was involved in Western tendencies to take agreement with adversaries as an absolute essential. He questions the “omnicompetence of negotiation” and observes, of statements in Pacem in Terris to the effect that there can be hope in negotiations only if these proceed “from inner conviction,” that if such statements mean “the way to conduct negotiations is not to permit them to fail,” then for any single nation to adopt that way of negotiating would mean “its premature surrender. . . . It takes two to negotiate in any such fashion.”
The view of the present administration on this subject is, at best, mixed and sometimes lacks conviction. The President has said “it takes two to tango.” But when the New York Times editorialist, who apparently thinks the impulse for social dancing is universal, said “So Tango!,” and when the American Catholic bishops proposed negotiating rather than responding to the Soviet build-up, the administration tended mainly to justify its programs as the best way to get agreements. Implicitly, the administration, then, seems to see no escape from the holocaust except by agreeing with the Soviets. But this particular apocalyptic view also has no basis in fact.
We should recognize that utopian hopes for total nuclear disarmament cannot excuse a Western failure to defend its independence soberly without using reckless threats. Unfortunately, our elites now link the phrase “arms control” not only to millennial dreams of early complete nuclear disarmament, but to the strategy of using threats to annihilate cities as a way of deterring attack; and to a perverse myth of the “arms race” that suggests that nuclear war is imminent because our nuclear arms have been spiraling exponentially and will continue to do so unless we limit our objectives to the destruction of a fixed small number of vulnerable population centers. (No one has ever suggested that the only way to avoid an exponential race in conventional arms is to train our fire on villages rather than enemy tanks. But when it comes to nuclear arms our elites will believe almost anything.) That is not the “arms control” Donald Brennan had in mind. “Arms control,” as he and the Princeton physicist, Freeman Dyson, have understood it, should aim at the more traditional and more sensible goal of restraining the bombardment of civilians. But the phrase is now loaded with wishful and mistaken prejudices. It suggests that without arms agreements our spending on defense inevitably will rise exponentially and uncontrollably; and that with arms agreements Soviet arms efforts will diminish. Experience for nearly two decades after the Cuban missile crisis illustrates the opposite.
A serious effort to negotiate agreements with the Soviets might enable us to achieve our objectives at lower levels of armaments than might otherwise be possible. (Improved active defenses, as J. Robert Oppenheimer observed, could facilitate such bilateral agreements since they would make us safer from cheating or assaults by third countries.) Being serious about arms agreements, however, is not the same as being desperate. Even without agreements the West is quite able to deter war and defend its independence against a formidable and persistently hostile adversary committed, as the Soviet Union has been, to changing the “correlation of forces” in its favor. The contrary view is deeply pessimistic and ultimately irresponsible, leading easily to treaties and “understandings” which only worsen the situation of the West.
For a serious and indeed sincere pursuit of arms negotiation by the West calls for a sober assessment of how any arrangements contemplated in an agreement are likely to affect the West’s long-term objectives of security and independence, and its intermediate objective of redressing the balance which worsened during the period of détente. These are not merely technical matters. The actual results of arms negotiations have, in the past, contrasted sharply with our expectations and desires. The negotiations of the last two decades started with Western expectations that the agreements achieved would reduce arms spending on both sides without any change in the balance. We assumed that the Soviets, like ourselves, had, as a principal objective, the desire to reduce the percentage of their resources devoted to arms spending and that they would choose “arms control” rather than arms competition. The record plainly shows that Western assumptions were wishful. The Soviets pursued arms agreements as a method of limiting Western spending—which did decline as a proportion of GNP by nearly half in the period after the missile crisis—while they themselves steadily increased their spending and did succeed in changing the balance. Now the West has the problem of catching up, and that is especially hard to negotiate.
Serious negotiations today must recognize the limits to what they can accomplish. We and the Soviets share an interest in avoiding mutual suicide, an interest which each of us will pursue whether or not we reach genuine agreement in various understandings and formal treaties. But the Soviets also have interests in expanding their influence and control and, in the process, destabilizing the West, if necessary by the use of external force rather than simply by manipulating internal dissension. Arms agreements might temper, but are unlikely to eliminate, this reality. In particular, there seems scant basis to hope for major economies in our security effort through negotiated limits or reductions.
Experience suggests that when the Soviets agree to close off one path of effort, they redirect their resources to other projects posing differing but no lesser dangers. On the other hand, many of the ostensible goals of arms agreements are best achieved through measures which we can and should implement on our own. Our current efforts—which a freeze would stop—to design and deploy nuclear weapons which are more accident-proof and more secure against theft or unauthorized use, are a good example. Measures to improve the safety, security, and invulnerability of nuclear weapons can be implemented by both sides individually because they make sense for each side independently of formal treaties or elaborate verification measures. These need not mean a net increase in the numbers or destructiveness of nuclear weapons in our stockpile. The United States has already greatly reduced both the megatonnage and the numbers of its nuclear weapons. It recently removed 1,000 weapons from Europe and has said that if, in accordance with NATO’s decision in 1979, it installs 572 intermediate-range nuclear missiles, it will withdraw an equal number of warheads. If we increase precision further, we can drastically further reduce the number and destructiveness of our nuclear weapons. Increased precision can also improve the effectiveness of conventional weapons so that they may increasingly replace nuclear brute force. And it would improve our ability to avoid the unintended bombing of innocents with nuclear or conventional warheads. It would enlarge rather than foreclose our freedom to choose.
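The claim that increased precision can drastically reduce required numbers and yields can be illustrated with a standard back-of-the-envelope relation. The sketch below is our illustration, not the author’s: lethal blast radius grows roughly as the cube root of yield, so, holding the chance of destroying a given target constant, the required yield falls roughly as the cube of the improvement in accuracy (circular error probable, or CEP). Every constant in it is hypothetical.

```python
# A rough illustration (ours, with hypothetical constants) of precision
# substituting for destructive power. Standard approximations: lethal radius
# LR scales as yield**(1/3), and single-shot kill probability against a point
# target is SSKP = 1 - 0.5 ** ((LR / CEP) ** 2).
import math

def required_yield_kt(cep_m: float, sskp: float, k: float = 100.0) -> float:
    """Yield (kilotons) needed for kill probability sskp at accuracy cep_m.

    k is a hypothetical scaling constant: LR = k * yield**(1/3) meters.
    """
    lr = cep_m * math.sqrt(math.log(1.0 - sskp) / math.log(0.5))
    return (lr / k) ** 3

for cep in (1000, 300, 100):  # accuracy improving roughly tenfold
    print(f"CEP {cep:4d} m -> ~{required_yield_kt(cep, 0.9):8.1f} kt")
# A tenfold improvement in accuracy cuts the required yield about a
# thousandfold -- the sense in which precision can replace brute force.
```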
But many strategists in our foreign-policy establishment prefer to foreclose choice. The orthodox view, expressed by editors of our magazines dealing with foreign affairs, liberal Senators, scientists, and many former officials, holds that any use of nuclear weapons by us will almost surely end in a disaster leaving almost everybody dead or worse than dead; yet that we have no alternative other than to threaten the bombing of cities; and that we should therefore make clear to our adversaries and allies that we will never fight a nuclear war. Anyone who holds that as the true faith will want to believe that he has no other choice. If he cannot say, like Flip Wilson, “The Devil made me do it,” he can introduce the deus ex machina of technology: Nuclear Technology makes me do it. He is likely to be outraged by any heretic who dares suggest we might have choices.
The grand inquisitors on the Senate Foreign Relations Committee had Kenneth Adelman on the rack recently during the hearings on his appointment as director of the Arms Control and Disarmament Agency. They probed for some trace of doubt in him about whether we should try to be able to limit nuclear destruction. Dostoevsky would have been fascinated. His Grand Inquisitor, a venerable Jesuit who had had Christ seized on the streets of Seville, argued with the savior that his mistake was not to recognize that men cannot bear the burden of free choice. That’s a point on which many in our establishment have impaled themselves.
It can be said that the Book of Samuel launched the American Revolution. Though antagonistic to traditional faith, Thomas Paine understood that it was not Montesquieu, or Locke, who was inscribed on the hearts of his fellow Americans. Paine’s pamphlet Common Sense is a biblical argument against British monarchy, drawing largely on the text of Samuel.
Today, of course, universal biblical literacy no longer exists in America, and sophisticated arguments from Scripture are all too rare. It is therefore all the more distressing when public intellectuals, academics, or religious leaders engage in clumsy acts of exegesis and political argumentation by comparing characters in the Book of Samuel to modern political leaders. The most common victim of this tendency has been the central character in the Book of Samuel: King David.
Most recently, this tendency was made manifest in the writings of Dennis Prager. In a recent defense of his own praise of President Trump, Prager wrote that “as a religious Jew, I learned from the Bible that God himself chose morally compromised individuals to accomplish some greater good. Think of King David, who had a man killed in order to cover up the adultery he committed with the man’s wife.” Prager similarly argued that those who refuse to vote for a politician whose positions are correct but whose personal life is immoral “must think God was pretty flawed in voting for King David.”
Prager’s invocation of King David was presaged on the left two decades ago. The records of the Clinton Presidential Library reveal that at the height of the Lewinsky scandal, an email from Dartmouth professor Susannah Heschel made its way into the inbox of an administration policy adviser with a similar comparison: “From the perspective of Jewish history, we have to ask how Jews can condemn President Clinton’s behavior as immoral, when we exalt King David? King David had Batsheva’s husband, Uriah, murdered. While David was condemned and punished, he was never thrown off the throne of Israel. On the contrary, he is exalted in our Jewish memory as the unifier of Israel.”
One can make the case for supporting politicians who have significant moral flaws. Indeed, America’s political system is founded on an awareness of the profound tendency to sinfulness not only of its citizens but also of its statesmen. “If men were angels, no government would be necessary,” James Madison informs us in the Federalist. At the same time, anyone who compares King David to the flawed leaders of our own age reveals a profound misunderstanding of the essential nature of David’s greatness. David was not chosen by God despite his moral failings; rather, David’s failings are the lens that reveal his true greatness. It is in the wake of his sins that David emerges as the paradigmatic penitent, whose quest for atonement is utterly unlike that of any other character in the Bible, and perhaps in the history of the world.
While the precise nature of David’s sins is debated in the Talmud, there is no question that they are profound. Yet when David is compared with other faltering figures—in the Bible or in our own day—the comparison falls flat. This point is stressed by the very Jewish tradition in whose name Prager claimed to speak.
It is the rabbis who note that David’s predecessor, Saul, lost the kingship when he failed to fulfill God’s command to destroy the egregiously evil nation of Amalek, whereas David commits more severe sins and yet remains king. The answer, the rabbis suggest, lies not in the sin itself but in the response. Saul, when confronted by the prophet Samuel, offers obfuscations and defensiveness. David, meanwhile, is similarly confronted by the prophet Nathan: “Thou hast killed Uriah the Hittite with the sword, and hast taken his wife to be thy wife, and hast slain him with the sword of the children of Ammon.” David’s immediate response is clear and complete contrition: “I have sinned against the Lord.” David’s penitence, Jewish tradition suggests, sets him apart from Saul. Soon after, David gave voice to what was in his heart, offering the world one of the most stirring of the Psalms:
Have mercy upon me, O God, according to thy lovingkindness: according unto the multitude of thy tender mercies blot out my transgressions.
Wash me thoroughly from mine iniquity, and cleanse me from my sin. For I acknowledge my transgressions: and my sin is ever before me.
. . . Deliver me from bloodguiltiness, O God, thou God of my salvation: and my tongue shall sing aloud of thy righteousness.
O Lord, open thou my lips; and my mouth shall shew forth thy praise.
For thou desirest not sacrifice; else would I give it: thou delightest not in burnt offering.
The sacrifices of God are a broken spirit: a broken and a contrite heart, O God, thou wilt not despise.
The tendency to link David to our current age stems from the fact that we know more about David than about any other biblical figure. The author Thomas Cahill has noted that in a certain literary sense, David is the only biblical figure who is like us at all. Prior to the humanist autobiographies of the Renaissance, he notes, “we can count only a few isolated instances of this use of ‘I’ to mean the interior self. But David’s psalms are full of I’s.” In David’s Psalms, Cahill writes, we “find a unique early roadmap to the inner spirit—previously mute—of ancient humanity.”
At the same time, a study of the Book of Samuel and of the Psalms reveals how utterly incomparable David is to anyone alive today. Haym Soloveitchik has noted that even the most observant Jews today fail to feel the constant intimacy with God that the simplest Jew of the premodern age might have felt, writing that “while there are always those whose spirituality is one apart from that of their time, nevertheless I think it safe to say that the perception of God as a daily, natural force is no longer present to a significant degree in any sector of modern Jewry, even the most religious.” Yet for David, such intimacy with the divine was central to his existence, and the Book of Samuel and the Psalms are an eternal testament to this fact. This is why simple comparisons between David and ourselves, as tempting as they are, must be resisted. David Wolpe, in his book about David, attempts to make the case that King David’s life speaks to us today: “So versatile and enduring is David in our culture that rare is the week that passes without some public allusion to his life…We need to understand David better because we use his life to comprehend our own.”
The truth may be the opposite. We need to understand David better because we can use his life to comprehend what we are missing, and how utterly unlike his own our lives are. For even the most religious among us have lost the profound faith and intimacy with God that David had. It is therefore incorrect to assume that because of David’s flaws it would have been, as Amos Oz has written, “fitting for him to reign in Tel Aviv.” The modern State of Israel has been blessed with brilliant leaders, but to which of its modern warriors or statesmen should David be compared? To Ben Gurion, who stripped any explicit invocation of the Divine from Israel’s Declaration of Independence? To Moshe Dayan, who oversaw the reconquest of Jerusalem, and then immediately handed back the Temple Mount, the locus of King David’s dreams and desires, to the administration of the enemies of Israel? David’s complex humanity invites comparison to modern figures, but his faith, contrition, and repentance—which lie at the heart of his story and success—defy it.
And so, to those who seek comparisons to modern leaders from the Bible, the best rule may be: Leave King David out of it.
Three attacks in Britain highlight the West’s inability to see the threat clearly
This lack of seriousness manifests itself in several ways. It’s perhaps most obvious in the failure to reform Britain’s chaotic immigration and dysfunctional asylum systems. But it’s also abundantly clear from the grotesque underfunding and under-resourcing of domestic intelligence. In MI5, Britain has an internal security service that is simply too small to do its job effectively, even if it were not handicapped by an institutional culture that can seem willfully blind to the ideological roots of the current terrorism problem.
In 2009, Jonathan Evans, then head of MI5, confessed at a parliamentary hearing about the London bus and subway attacks of 2005 that his organization only had sufficient resources to “hit the crocodiles close to the boat.” It was an extraordinary metaphor to use, not least because of the impression of relative impotence that it conveys. MI5 had by then doubled in size since 2001, but it still boasted a staff of only 3,500. Today it’s said to employ between 4,000 and 5,000, an astonishingly, even laughably, small number given a UK population of 65 million and the scale of the security challenges Britain now faces. (To be fair, the major British police forces all have intelligence units devoted to terrorism, and the UK government’s overall counterterrorism strategy involves a great many people, including social workers and schoolteachers.)
You can also see that unseriousness at work in the abject failure to coerce Britain’s often remarkably sedentary police officers out of their cars and stations and back onto the streets. Most of Britain’s big-city police forces have adopted a reactive model of policing (consciously rejecting both the New York Compstat model and British “bobby on the beat” traditions) that cripples intelligence-gathering and frustrates good community relations.
If that weren’t bad enough, Britain’s judiciary is led by jurists who came of age in the 1960s, and who have been inclined since 2001 to treat terrorism as an ordinary criminal problem being exploited by malign officials and politicians to make assaults on individual rights and to take part in “illegal” foreign wars. It has long been almost impossible to extradite ISIS or al-Qaeda–linked Islamists from the UK. This is partly because today’s English judges believe that few if any foreign countries—apart from perhaps Sweden and Norway—are likely to give terrorist suspects a fair trial, or able to guarantee that such suspects will be spared torture and abuse.
We have a progressive metropolitan media elite whose primary, reflexive response to every terrorist attack, even before the blood on the pavement is dry, is to express worry about an imminent violent anti-Muslim “backlash” on the part of a presumptively bigoted and ignorant indigenous working class. Never mind that no such “backlash” has yet occurred, not even when the young off-duty soldier Lee Rigby was hacked to death in broad daylight on a South London street in 2013.
Another sign of this lack of seriousness is the choice by successive British governments to deal with the problem of internal terrorism with marketing and “branding.” You can see this in the catchy consultant-created acronyms and pseudo-strategies that are deployed in place of considered thought and action. After every atrocity, the prime minister calls a meeting of the COBRA unit—an acronym that merely stands for Cabinet Office Briefing Room A but sounds like a secret organization of government superheroes. The government’s counterterrorism strategy is called CONTEST, which has four “work streams”: “Prevent,” “Pursue,” “Protect,” and “Prepare.”
Perhaps the ultimate sign of unseriousness is the fact that police, politicians, and government officials have all displayed more fear of being seen as “Islamophobic” than of any carnage that actual terror attacks might cause. Few are aware that this short-term, cowardly, and trivial tendency may ultimately foment genuine, dangerous popular Islamophobia, especially if attacks continue.
Recently, three murderous Islamist terror attacks in the UK took place in less than a month. The first and third were relatively primitive improvised attacks using vehicles and/or knives. The second was a suicide bombing that probably required relatively sophisticated planning, technological know-how, and the assistance of a terrorist infrastructure. As they were the first such attacks in the UK, the vehicle and knife killings came as a particular shock to the British press, public, and political class, despite the fact that non-explosive and non-firearm terror attacks have become common in Europe and are almost routine in Israel.
The success of all three plots indicates troubling problems in British law-enforcement practice and culture, quite apart from any other failings on the parts of the state in charge of intelligence, border control, and the prevention of radicalization. At the time of writing, the British media have been full of encomia to police courage and skill, not least because it took “only” eight minutes for an armed Metropolitan Police team to respond to and confront the bloody mayhem being wrought by the three Islamist terrorists (who had ploughed their rented van into people on London Bridge before jumping out to attack passersby with knives). But the difficult truth is that all three attacks would be much harder to pull off in Manhattan, not just because all NYPD cops are armed, but also because there are always police officers visibly on patrol at the New York equivalents of London’s Borough Market on a Saturday night. By contrast, London’s Metropolitan police is a largely vehicle-borne, reactive force; rather than use a physical presence to deter crime and terrorism, it chooses to monitor closed-circuit street cameras and social-media postings.
Since the attacks in London and Manchester, we have learned that several of the perpetrators were “known” to the police and security agencies that are tasked with monitoring potential terror threats. That these individuals were nevertheless able to carry out their atrocities is evidence that the monitoring regime is insufficient.
It also seems clear that there were failures on the part of those institutions that come under the leadership of the Home Office and are supposed to be in charge of the UK’s border, migration, and asylum systems. Journalists and think tanks like Policy Exchange and Migration Watch have for years pointed out that these systems are “unfit for purpose,” but successive governments have done little to take responsible control of Britain’s borders. When she was home secretary, Prime Minister Theresa May did little more than jazz up the name, logo, and uniforms of what is now called the “Border Force,” and she notably failed to put in place long-promised passport checks for people flying out of the country. This dereliction means that it is impossible for the British authorities to know who has overstayed a visa or whether individuals who have been denied asylum have actually left the country.
It seems astonishing that Youssef Zaghba, one of the three London Bridge attackers, was allowed back into the country. The Moroccan-born Italian citizen (his mother is Italian) had been arrested by Italian police in Bologna, apparently on his way to Syria via Istanbul to join ISIS. When questioned by the Italians about the ISIS decapitation videos on his mobile phone, he declared that he was “going to be a terrorist.” The Italians lacked sufficient evidence to charge him with a crime but put him under 24-hour surveillance, and when he traveled to London, they passed on information about him to MI5. Nevertheless, he was not stopped or questioned on arrival and had not become one of the 3,000 official terrorism “subjects of interest” for MI5 or the police when he carried out his attack. One reason Zaghba was not questioned on arrival may have been that he used one of the new self-service passport machines installed in UK airports in place of human staff after May’s cuts to the border force. Apparently, the machines are not yet linked to any government watch lists, thanks to the general chaos and ineptitude of the Home Office’s efforts to use information technology.
The presence in the country of Zaghba’s accomplice Rachid Redouane is also an indictment of the incompetence and disorganization of the UK’s border and migration authorities. He had been refused asylum in 2009, but as is so often the case, Britain’s Home Office never got around to removing him. Three years later, he married a British woman and was therefore able to stay in the UK.
But it is the failure of the authorities to monitor ringleader Khuram Butt that is the most baffling. He was a known and open associate of Anjem Choudary, Britain’s most notorious terrorist supporter, ideologue, and recruiter (he was finally imprisoned in 2016 after 15 years of campaigning on behalf of al-Qaeda and ISIS). Butt even appeared in a 2016 TV documentary about ISIS supporters called The Jihadist Next Door. In the same year, he assaulted a moderate imam at a public festival, after calling him a “murtad” or apostate. The imam reported the incident to the police—who took six months to track Butt down and then let him off with a caution. It is not clear if Butt was one of the 3,000 “subjects of interest” or the additional 20,000 former subjects of interest who continue to be the subject of limited monitoring. If he was not, it raises the question of what a person has to do to get British security services to take him seriously as a terrorist threat; if he was in fact on the list of “subjects of interest,” one has to wonder if being so designated is any barrier at all to carrying out terrorist atrocities. It’s worth remembering, as few do here in the UK, that terrorists who carried out previous attacks were also known to the police and security services and nevertheless enjoyed sufficient liberty to go at it again.
But the most important reason for the British state’s ineffectiveness in monitoring terror threats, which May addressed immediately after the London Bridge attack, is a deeply rooted institutional refusal to deal with or accept the key role played by Islamist ideology. For more than 15 years, the security services and police have chosen to take note only of people and bodies that explicitly espouse terrorist violence or have contacts with known terrorist groups. The fact that a person, school, imam, or mosque endorses the establishment of a caliphate, the stoning of adulterers, or the murder of apostates has not been considered a reason to monitor them.
This seems to be why Salman Abedi, the Manchester Arena suicide bomber, was not being watched by the authorities as a terror risk, even though he had punched a girl in the face for wearing a short skirt while at university, had attended the Muslim Brotherhood-controlled Didsbury Mosque, was the son of a Libyan man whose militia is banned in the UK, had himself fought against the Qaddafi regime in Libya, had adopted the Islamist clothing style (trousers worn above the ankle, beard but no moustache), was part of a druggy gang subculture that often feeds individuals into Islamist terrorism, and had been banned from a mosque after confronting an imam who had criticized ISIS.
It was telling that the day after the Manchester Arena suicide-bomb attack, you could hear a security official tell the audience of the BBC’s flagship morning-radio news show that it’s almost impossible to predict and stop such attacks because the perpetrators “don’t care who they kill.” They just want to kill as many people as possible, he said.
Surely, anyone with even a basic familiarity with Islamist terror attacks over the last 15 or so years and a nodding acquaintance with Islamist ideology could see that the terrorist hadn’t just chosen the Ariana Grande concert in Manchester Arena because a lot of random people would be crowded into a conveniently small area. Since the Bali bombings of 2002, nightclubs, discotheques, and pop concerts attended by shameless unveiled women and girls have been routinely targeted by fundamentalist terrorists, including in Britain. Among the worrying things about the opinion offered on the radio show was that it suggested that even in the wake of the horrific Bataclan attack in Paris during a November 2015 concert, British authorities may not have been keeping an appropriately protective eye on music venues and other places where our young people hang out in their decadent Western way. Such dereliction would make perfect sense given the resistance on the part of the British security establishment to examining, confronting, or extrapolating from Islamist ideology.
The same phenomenon may explain why authorities did not follow up on community complaints about Abedi. All too often when people living in Britain’s many and diverse Muslim communities want to report suspicious behavior, they have to do so through offices and organizations set up and paid for by the authorities as part of the overall “Prevent” strategy. Although criticized by the left as “Islamophobic” and inherently stigmatizing, Prevent has often brought the government into cooperative relationships with organizations even further to the Islamic right than the Muslim Brotherhood. This means that if you are a relatively secular Libyan émigré who wants to report an Abedi and you go to your local police station, you are likely to find yourself speaking to a bearded Islamist.
From its outset in 2003, the Prevent strategy was flawed. Its practitioners, in their zeal to find and fund key allies in “the Muslim community” (as if there were just one), routinely made alliances with self-appointed community leaders who represented the most extreme and intolerant tendencies in British Islam. Both the Home Office and MI5 seemed to believe that only radical Muslims were “authentic” and would therefore be able to influence young potential terrorists. Moderate, modern, liberal Muslims who are arguably more representative of British Islam as a whole (not to mention sundry Shiites, Sufis, Ahmadis, and Ismailis) have too often found it hard to get a hearing.
Sunni organizations that openly supported suicide-bomb attacks in Israel and India and that justified attacks on British troops in Iraq and Afghanistan nevertheless received government subsidies as part of Prevent. The hope was that in return, they would alert the authorities if they knew of individuals planning attacks in the UK itself.
It was a gamble reminiscent of British colonial practice in India’s northwest frontier and elsewhere. Not only were there financial inducements in return for grudging cooperation; the British state offered other, symbolically powerful concessions. These included turning a blind eye to certain crimes and antisocial practices such as female genital mutilation (there have been no successful prosecutions relating to the practice, though thousands of cases are reported every year), forced marriage, child marriage, polygamy, the mass removal of girls from school soon after they reach puberty, and the epidemic of racially and religiously motivated “grooming” rapes in cities like Rotherham. (At the same time, foreign jihadists—including men wanted for crimes in Algeria and France—were allowed to remain in the UK as long as their plots did not include British targets.)
This approach, simultaneously cynical and naive, was never as successful as its proponents hoped. Again and again, Muslim chaplains approved to work in prisons and other institutions have turned out to be Islamist extremists whose words have inspired inmates to join terrorist organizations.
Much to his credit, former Prime Minister David Cameron fought hard to change this approach, even though it meant difficult confrontations with his home secretary (Theresa May), as well as police and the intelligence agencies. However, Cameron’s efforts had little effect on the permanent personnel carrying out the Prevent strategy, and cooperation with Islamist but currently nonviolent organizations remains the default setting within the institutions on which the United Kingdom depends for security.
The failure to understand the role of ideology is one of imagination as well as education. Very few of those who make government policy or write about home-grown terrorism seem able to escape the limitations of what used to be called “bourgeois” experience. They assume that anyone willing to become an Islamist terrorist must perforce be materially deprived, or traumatized by the experience of prejudice, or provoked to murderous fury by oppression abroad. They have no sense of the emotional and psychic benefits of joining a secret terror outfit: the excitement and glamor of becoming a kind of Islamic James Bond, bravely defying the forces of an entire modern state. They don’t get how satisfying or empowering the vengeful misogyny of ISIS-style fundamentalism might seem for geeky, frustrated young men. Nor can they appreciate the appeal to the adolescent mind of apocalyptic fantasies of power and sacrifice (mainstream British society does not have much room for warrior dreams, given that its tone is set by liberal pacifists). Finally, they have no sense of why the discipline and self-discipline of fundamentalist Islam might appeal so strongly to incarcerated lumpen youth who have never experienced boundaries or real belonging. Their understanding is an understanding only of themselves, not of the people who want to kill them.
Review of ‘White Working Class’ by Joan C. Williams
Williams is a prominent feminist legal scholar with degrees from Yale, MIT, and Harvard. Unbending Gender, her best-known book, is the sort of tract you’d expect to find at an intersectionality conference or a Portlandia bookstore. This is why her insightful, empathic book comes as such a surprise.
Books and essays on the topic have accumulated into a highly visible genre since Donald Trump came on the American political scene; J.D. Vance’s Hillbilly Elegy planted itself at the top of bestseller lists almost a year ago and still isn’t budging. As with Vance, Williams’s interest in the topic is personal. She fell “madly in love with” and eventually married a Harvard Law School graduate who had grown up in an Italian neighborhood in pre-gentrification Brooklyn. Williams, on the other hand, is a “silver-spoon girl.” Her father’s family was moneyed, and her maternal grandfather was a prominent Reform rabbi.
The author’s affection for her “class-migrant” spouse and respect for his family’s hardships—“My father-in-law grew up on blood soup,” she announces in her opening sentence—add considerable warmth to what is at bottom a political pamphlet. Williams believes that elite condescension and “cluelessness” played a big role in Trump’s unexpected and dreaded victory. Enlightening her fellow elites is essential to the task of returning Trump voters to the progressive fold where, she is sure, they rightfully belong.
Liberals were not always so dense about the working class, Williams observes. WPA murals and movies like On the Waterfront showed genuine fellow feeling for the proletariat. In the 1970s, however, the liberal mood changed. Educated boomers shifted their attention to “issues of peace, equal rights, and environmentalism.” Instead of feeling the pain of Arthur Miller and John Steinbeck characters, they began sneering at the less enlightened. These days, she notes, elite sympathies are limited to the poor, people of color (POC), and the LGBTQ population. Despite clear evidence of suffering—stagnant wages, disappearing manufacturing jobs, declining health and well-being—the working class gets only fly-over snobbery at best and, more often, outright loathing.
Williams divides her chapters into a series of explainers addressing questions she has heard from her clueless friends and colleagues: “Why Does the Working Class Resent the Poor?” “Why Does the Working Class Resent Professionals but Admire the Rich?” “Why Doesn’t the Working Class Just Move to Where the Jobs Are?” “Is the Working Class Just Racist?” She weaves her answers into a compelling picture of a way of life and worldview foreign to her targeted readers. Working-class Americans have had to struggle for whatever stability and comfort they have, she explains. Clocking in for midnight shifts year after year, enduring capricious bosses, plant closures, and layoffs, they’re reliant on tag-team parenting and stressed-out relatives for child care. The campus go-to word “privileged” seems exactly wrong.
Proud of their own self-sufficiency and success, however modest, they don’t begrudge the self-made rich. It’s snooty professionals and the dysfunctional poor who get their goat. From their vantage point, subsidizing day care for a welfare mother while they themselves struggle to pay for care on their own dime mocks both their hard work and their beliefs. And since, unlike most professors, they shop in the same stores as the dependent poor, they’ve seen that some of them game the system. Of course that stings.
White Working Class is especially good at evoking the alternate economic and mental universe experienced by Professional and Managerial Elites, or “PMEs.” PMEs see their non-judgment of the poor, especially those who are “POC,” as a mark of their mature understanding that we live in an unjust, racist system whose victims require compassion regardless of whether they have committed any crime. At any rate, their passions lie elsewhere. They define themselves through their jobs and professional achievements, hence their obsession with glass ceilings.
Williams tells the story of her husband’s faux pas at a high-school reunion. Forgetting his roots for a moment, the Ivy League–educated lawyer asked one of his Brooklyn classmates a question that is the go-to opener in elite social settings: “What do you do?” Angered by what must have seemed like deliberate humiliation by this prodigal son, the man hissed: “I sell toilets.”
Instead of stability and backyard barbecues with family and long-time neighbors and maybe the occasional Olive Garden celebration, PMEs are enamored of novelty: new foods, new restaurants, new friends, new experiences. The working class chooses to spend its leisure in comfortable familiarity; for the elite, social life is a lot like networking. Members of the professional class may view themselves as sophisticated or cosmopolitan, but, Williams shows, to the blue-collar worker their glad-handing is closer to phony social climbing and their abstract, knowledge-economy jobs more like self-important pencil-pushing.
White Working Class has a number of proposals for creating the progressive future Williams would like to see. She wants to get rid of college-for-all dogma and improve training for middle-skill jobs. She envisions a working-class coalition of all races and ethnicities bolstered by civics education with a “distinctly celebratory view of American institutions.” In a saner political environment, some of this would make sense; indeed, she echoes some of Marco Rubio’s 2016 campaign themes. It’s little wonder White Working Class has already gotten the stink eye from liberal reviewers for its purported sympathies for racists.
Alas, impressive as Williams’s insights are, they do not always allow her to transcend her own class loyalties. Unsurprisingly, her own PME biases mostly come to light in her chapters on race and gender. She reduces immigration concerns to “fear of brown people,” even as she notes elsewhere that a quarter of Latinos also favor a wall at the southern border. This contrasts startlingly with her succinct observation that “if you don’t want to drive working-class whites to be attracted to the likes of Limbaugh, stop insulting them.” In one particularly obtuse moment, she asserts: “Because I study social inequality, I know that even Malia and Sasha Obama will be disadvantaged by race, advantaged as they are by class.” She relies on dubious gender theories to explain why the majority of white women voted for Trump rather than for his unfairly maligned opponent. That Hillary Clinton epitomized every elite quality Williams has just spent more than a hundred pages explicating escapes her notice. Williams’s own reflexive retreat into identity politics is itself emblematic of our toxic divisions, but it does not invalidate the power of this astute book.
When music could not transcend evil
The story of European classical music under the Third Reich is one of the most squalid chapters in the annals of Western culture, a chronicle of collective complaisance that all but beggars belief. Without exception, all of the well-known musicians who left Germany and Austria in protest when Hitler came to power in 1933 were either Jewish or, like the violinist Adolf Busch, Rudolf Serkin’s father-in-law, had close family ties to Jews. Moreover, most of the small number of non-Jewish musicians who emigrated later on, such as Paul Hindemith and Lotte Lehmann, are now known to have done so not out of principle but because they were unable to make satisfactory accommodations with the Nazis. Everyone else—including Karl Böhm, Wilhelm Furtwängler, Walter Gieseking, Herbert von Karajan, and Richard Strauss—stayed behind and served the Reich.
The Berlin and Vienna Philharmonics, then as now Europe’s two greatest orchestras, were just as willing to do business with Hitler and his henchmen, firing their Jewish members and ceasing to perform the music of Jewish composers. Even after the war, the Vienna Philharmonic was notorious for being the most anti-Semitic orchestra in Europe, and it was well known in the music business (though never publicly discussed) that Helmut Wobisch, the orchestra’s principal trumpeter and its executive director from 1953 to 1968, had been both a member of the SS and a Gestapo spy.
The management of the Berlin Philharmonic made no attempt to cover up the orchestra’s close relationship with the Third Reich, no doubt because the Nazi ties of Karajan, who was its music director from 1956 until shortly before his death in 1989, were a matter of public record. Yet it was not until 2007 that a full-length study of its wartime activities, Misha Aster’s The Reich’s Orchestra: The Berlin Philharmonic 1933–1945, was finally published. As for the Vienna Philharmonic, its managers long sought to quash all discussion of the orchestra’s Nazi past, steadfastly refusing to open its institutional archives to scholars until 2008, when Fritz Trümpi, an Austrian scholar, was given access to its records. Five years later, the Viennese, belatedly following the precedent of the Berlin Philharmonic, added a lengthy section to their website called “The Vienna Philharmonic Under National Socialism (1938–1945),” in which the damning findings of Trümpi and two other independent scholars were made available to the public.
Now Trümpi has published The Political Orchestra: The Vienna and Berlin Philharmonics During the Third Reich, in which he tells how they came to terms with Nazism, supplying pre- and postwar historical context for their transgressions. Written in a stiff mixture of academic jargon and translatorese, The Political Orchestra is ungratifying to read. Even so, the tale that it tells is both compelling and disturbing, especially to anyone who clings to the belief that high art is ennobling to the spirit.
Unlike the Vienna Philharmonic, which has always doubled as the pit orchestra for the Vienna State Opera, the Berlin Philharmonic started life in 1882 as a fully independent, self-governing entity. Initially unsubsidized by the state, it kept itself afloat by playing a grueling schedule of performances, including “popular” non-subscription concerts for which modest ticket prices were levied. In addition, the orchestra made records and toured internationally at a time when neither was common.
These activities made it possible for the Berlin Philharmonic to develop into an internationally renowned ensemble whose fabled collective virtuosity was widely seen as a symbol of German musical distinction. Furtwängler, the orchestra’s principal conductor, declared in 1932 that the German music in which it specialized was “one of the very few things that actually contribute to elevating [German] prestige.” Hence, he explained, the need for state subsidy, which he saw as “a matter of [national] prestige, that is, to some extent a requirement of national prudence.” By then, though, the orchestra was already heavily subsidized by the city of Berlin, thus paving the way for its takeover by the Nazis.
The Vienna Philharmonic, by contrast, had always been subsidized. Founded in 1842 when the orchestra of what was then the Vienna Court Opera decided to give symphonic concerts on its own, it performed the Austro-German classics for an elite cadre of longtime subscribers. By restricting membership to local players and their pupils, the orchestra cultivated what Furtwängler, who spent as much time conducting in Vienna as in Berlin, described as a “homogeneous and distinct tone quality.” At once dark and sweet, it was as instantly identifiable—and as characteristically Viennese—as the strong, spicy bouquet of a Gewürztraminer wine.
Unlike the Berlin Philharmonic, which played for whoever would pay the tab and programmed new music as a matter of policy, the Vienna Philharmonic chose not to diversify either its haute-bourgeois audience or its conservative repertoire. Instead, it played Beethoven, Brahms, Haydn, Mozart, and Schubert (and, later, Bruckner and Richard Strauss) in Vienna for the Viennese. Starting in the ’20s, the orchestra’s recordings consolidated its reputation as one of the world’s foremost instrumental ensembles, but its internal culture remained proudly insular.
What the two orchestras had in common was a nationalistic ethos, a belief in the superiority of Austro-German musical culture that approached triumphalism. One of the darkest manifestations of this ethos was their shared reluctance to hire Jews. The Berlin Philharmonic employed only four Jewish players in 1933, while the Vienna Philharmonic contained only 11 Jews at the time of the Anschluss, none of whom was hired after 1920. To be sure, such popular Jewish conductors as Otto Klemperer and Bruno Walter continued to work in Vienna for as long as they could. Two months before the Anschluss, Walter led and recorded a performance of the Ninth Symphony of Gustav Mahler, his musical mentor and fellow Jew, who from 1897 to 1907 had been the director of the Vienna Court Opera and one of the Philharmonic’s most admired conductors. But many members of both orchestras were open supporters of fascism, and not a few were anti-Semites who ardently backed Hitler. By 1942, 62 of the 123 active members of the Vienna Philharmonic were Nazi party members.
The admiration that Austro-German classical musicians had for Hitler is not entirely surprising since he was a well-informed music lover who declared in 1938 that “Germany has become the guardian of European culture and civilization.” He made the support of German art, music very much included, a key part of his political program. Accordingly, the Berlin Philharmonic was placed under the direct supervision of Joseph Goebbels, who ensured the cooperation of its members by repeatedly raising their salaries, exempting them from military service, and guaranteeing their old-age pensions. But there had never been any serious question of protest, any more than there would be among the members of the Vienna Philharmonic when the Nazis gobbled up Austria. Save for the Jews and one or two non-Jewish players who were fired for reasons of internal politics, the musicians went along unhesitatingly with Hitler’s desires.
With what did they go along? Above all, they agreed to the scrubbing of Jewish music from their programs and the dismissal of their Jewish colleagues. Some Jewish players managed to escape with their lives, but seven of the Vienna Philharmonic’s 11 Jews were either murdered by the Nazis or died as a direct result of official persecution. In addition, both orchestras performed regularly at official government functions and made tours and other public appearances for propaganda purposes, and both were treated as gems in the diadem of Nazi culture.
As for Furtwängler, the most prominent of the Austro-German orchestral conductors who served the Reich, his relationship to Nazism continues to be debated to this day. He had initially resisted the firing of the Berlin Philharmonic’s Jewish members and protected them for as long as he could. But he was also a committed (if woolly-minded) nationalist who believed that German music had “a different meaning for us Germans than for other nations” and notoriously declared in an open letter to Goebbels that “we all welcome with great joy and gratitude . . . the restoration of our national honor.” Thereafter he cooperated with the Nazis, by all accounts uncomfortably but—it must be said—willingly. A monster of egotism, he saw himself as the greatest living exponent of German music and believed it to be his duty to stay behind and serve a cause higher than what he took to be mere party politics. “Human beings are free wherever Wagner and Beethoven are played, and if they are not free at first, they are freed while listening to these works,” he naively assured a horrified Arturo Toscanini in 1937. “Music transports them to regions where the Gestapo can do them no harm.”
Once the war was over, the U.S. occupation forces decided to enlist the Berlin Philharmonic in the service of a democratic, anti-Soviet Germany. Furtwängler and Herbert von Karajan, who succeeded him as principal conductor, were officially “de-Nazified” and their orchestra allowed to function largely undisturbed, though six Nazi Party members were fired. The Vienna Philharmonic received similarly privileged treatment.
Needless to say, there was more to this decision than Cold War politics. No one questioned the unique artistic stature of either orchestra. Moreover, the Vienna Philharmonic, precisely because of its insularity, was now seen as a living museum piece, a priceless repository of 19th-century musical tradition. Still, many musicians and listeners, Jews above all, looked askance at both orchestras for years to come, believing them to be tainted by Nazism.
Indeed they were, so much so that they treated many of their surviving Jewish ex-members in a way that can only be described as vicious. In the most blatant individual case, the violinist Szymon Goldberg, who had served as the Berlin Philharmonic’s concertmaster under Furtwängler, was not allowed to reassume his post in 1945 and was subsequently denied a pension. As for the Vienna Philharmonic, the fact that it made Helmut Wobisch its executive director says everything about its deep-seated unwillingness to face up to its collective sins.
Be that as it may, scarcely any prominent musicians chose to boycott either orchestra. Leonard Bernstein went so far as to affect a flippant attitude toward the morally equivocal conduct of the Austro-German artists whom he encountered in Europe after the war. Upon meeting Herbert von Karajan in 1954, he actually told his wife Felicia that he had become “real good friends with von Karajan, whom you would (and will) adore. My first Nazi.”
At the same time, though, Bernstein understood what he was choosing to overlook. When he conducted the Vienna Philharmonic for the first time in 1966, he wrote to his parents:
I am enjoying Vienna enormously—as much as a Jew can. There are so many sad memories here; one deals with so many ex-Nazis (and maybe still Nazis); and you never know if the public that is screaming bravo for you might contain someone who 25 years ago might have shot me dead. But it’s better to forgive, and if possible, forget. The city is so beautiful, and so full of tradition. Everyone here lives for music, especially opera, and I seem to be the new hero.
Did Bernstein sell his soul for the opportunity to work with so justly renowned an orchestra—and did he get his price by insisting that its members perform the symphonies of Mahler, with which he was by then closely identified? It is a fair question, one that does not lend itself to easy answers.
Even more revealing is the case of Bruno Walter, who never forgave Furtwängler for staying behind in Germany, informing him in an angry letter that “your art was used as a conspicuously effective means of propaganda for the regime of the Devil.” Yet Walter’s righteous anger did not stop him from conducting in Vienna after the war. Born in Berlin, he had come to identify with the Philharmonic so closely that it was impossible for him to seriously consider quitting its podium permanently. “Spiritually, I was a Viennese,” he wrote in Theme and Variations, his 1946 autobiography. In 1952, he made a second recording with the Vienna Philharmonic of Mahler’s Das Lied von der Erde, whose premiere he had conducted in 1911 and which he had recorded in Vienna 15 years earlier. One wonders what Walter, who had converted to Christianity but had been driven out of both his native lands for the crime of being Jewish, made of the text of the last movement: “My friend, / On this earth, fortune has not been kind to me! / Where do I go?”
As for the two great orchestras of the Third Reich, both have finally acknowledged their guilt and been forgiven, at least by those who know little of their past. It would occur to no one to decline on principle to perform with either group today. Such a gesture would surely be condemned as morally ostentatious, an exercise in what we now call virtue-signaling. Yet it is impossible to forget what Samuel Lipman wrote in 1993 in Commentary apropos the wartime conduct of Furtwängler: “The ultimate triumph of totalitarianism, I suppose it can be said, is that under its sway only a martyred death can be truly moral.” For the only martyrs of the Berlin and Vienna Philharmonics were their Jews. The orchestras themselves live on, tainted and beloved.
James Comey knows what to reveal and what to conceal, understands the importance of keeping the semblance of distance between oneself and the story of the day, and comprehends the ins and outs of anonymous sourcing. Within days of his being fired by President Trump on May 9, for example, little green men and women, known only as his “associates,” began appearing in the pages of the New York Times and Washington Post to dispute key points of the president’s account of his dismissal and to promote Comey’s theory of the case.
“In a Private Dinner, Trump Demanded Loyalty,” the New York Times reported on May 11. “Comey Demurred.” The story was a straightforward narrative of events from Comey’s perspective, capped with an obligatory denial from the White House. The next day, the Washington Post reported, “Comey associates dispute Trump’s account of conversations.” The Post did not identify Comey’s associates, other than saying that they were “people who have worked with him.”
Maybe they were the same associates who had gabbed to the Times. Or maybe they were different ones. Who can tell? Regardless, the story these particular associates gave to the Post was readable and gripping. Comey, the Post reported, “was wary of private meetings and discussions with the president and did not offer the assurance, as Trump has claimed, that Trump was not under investigation as part of the probe into Russian interference in last year’s election.”
On May 16, Michael S. Schmidt of the Times published his scoop, “Comey Memo Says Trump Asked Him to End Flynn Investigation.” Schmidt didn’t see the memo for himself. Parts of it were read to him by—you guessed it—“one of Mr. Comey’s associates.” The following day, Robert Mueller was appointed special counsel to oversee the Russia investigation. On May 18, the Times, citing “two people briefed” on a call between Comey and the president, reported, “Comey, Unsettled by Trump, Is Said to Have Wanted Him Kept at a Distance.” And by the end of that week, Comey had agreed to testify before the Senate Intelligence Committee.
As his testimony approached, Comey’s people became more aggressive in their criticisms of the president. “Trump Should Be Scared, Comey Friend Says,” read the headline of a CNN interview with Brookings Institution fellow Benjamin Wittes. This “Comey friend” said he was “very shocked” when he learned that President Trump had asked Comey for loyalty. “I have no doubt that he regarded the group of people around the president as dishonorable,” Wittes said.
Comey, Wittes added, was so uncomfortable at the White House reception in January honoring law enforcement—the one where Comey lumbered across the room and Trump whispered something in his ear—that, as CNN paraphrased it, he “stood in a position so that his blue blazer would blend in with the room’s blue drapes in an effort for Trump to not notice him.” The integrity, the courage—can you feel it?
On June 6, the day before Comey’s prepared testimony was released, more “associates” told ABC that the director would “not corroborate Trump’s claim that on three separate occasions Comey told the president he was not under investigation.” And a “source with knowledge of Comey’s testimony” told CNN the same thing. In addition, ABC reported that, according to “a source familiar with Comey’s thinking,” the former director would say that Trump’s actions stopped short of obstruction of justice.
Maybe those sources weren’t as “familiar with Comey’s thinking” as they thought or hoped? To maximize the press coverage he already dominated, Comey had authorized the Senate Intelligence Committee to release his testimony ahead of his personal interview. That testimony told a different story than what had been reported by CNN and ABC (and by the Post on May 12). Comey had in fact told Trump the president was not under investigation—on January 6, January 27, and March 30. Moreover, the word “obstruction” did not appear at all in his written text. The senators asked Comey if he felt Trump obstructed justice. He declined to answer either way.
My guess is that Comey’s associates lacked Comey’s scalpel-like, almost Jesuitical ability to make distinctions, and therefore misunderstood what he was telling them to say to the press. Because it’s obvious Comey was the one behind the stories of Trump’s dishonesty and bad behavior. He admitted as much in front of the cameras in a remarkable exchange with Senator Susan Collins of Maine.
Comey said that, after Trump tweeted on May 12 that he’d better hope there aren’t “tapes” of their conversations, “I asked a friend of mine to share the content of the memo with a reporter. Didn’t do it myself, for a variety of reasons. But I asked him to, because I thought that might prompt the appointment of a special counsel. And so I asked a close friend of mine to do it.”
Collins asked whether that friend had been Wittes, known to cable news junkies as Comey’s bestie. Comey said no. The source for the New York Times article was “a good friend of mine who’s a professor at Columbia Law School,” Daniel Richman.
Every time I watch or read that exchange, I am amazed. Here is the former director of the FBI just flat-out admitting that, for months, he wrote down every interaction he had with the president of the United States because he wanted a written record in case the president ever fired or lied about him. And when the president did fire and lie about him, that director set in motion a series of public disclosures with the intent of not only embarrassing the president, but also forcing the appointment of a special counsel who might end up investigating the president for who knows what. And none of this would have happened if the president had not fired Comey or tweeted about him. He told the Senate that if Trump hadn’t dismissed him, he most likely would still be on the job.
Rarely, in my view, are high officials so transparent in describing how Washington works. Comey revealed to the world that he was keeping a file on his boss, that he used go-betweens to get his story into the press, that “investigative journalism” is often just powerful people handing documents to reporters to further their careers or agendas or even to get revenge. And as long as you maintain some distance from the fallout and stick to the absolute letter of the law, you will come out on top, provided you have a small army of nightingales singing to reporters on your behalf.
“It’s the end of the Comey era,” A.B. Stoddard said on Special Report with Bret Baier the other day. On the contrary: I have a feeling that, as the Russia investigation proceeds, we will be hearing much more from Comey. And from his “associates.” And his “friends.” And persons “familiar with his thinking.”
In April, COMMENTARY asked a wide variety of writers, thinkers, and broadcasters to respond to this question: Is free speech under threat in the United States? We received twenty-seven responses. We publish them here in alphabetical order.
Floyd Abrams
Free expression threatened? By Donald Trump? I guess you could say so.
When a president engages in daily denigration of the press, when he characterizes it as the enemy of the people, when he repeatedly says that the libel laws should be “loosened” so he can personally commence more litigation, when he says that journalists shouldn’t be allowed to use confidential sources, it is difficult even to suggest that he has not threatened free speech. And when he says to the head of the FBI (as former FBI director James Comey has said that he did) that Comey should consider “putting reporters in jail for publishing classified information,” it is difficult not to take those threats seriously.
The harder question, though, is this: How real are the threats? Or, as Michael Gerson put it in the Washington Post: Will Trump “go beyond mere Twitter abuse and move against institutions that limit his power?” Some of the president’s threats against the institution of the press, wittingly or not, have been simply preposterous. Surely someone has told him by now that neither he nor Congress can “loosen” libel laws; while each state has its own libel law, there is no federal libel law and thus nothing for him to loosen. What he obviously takes issue with is the impact that the Supreme Court’s 1964 First Amendment opinion in New York Times v. Sullivan has had on state libel laws. The case determined that public officials who sue for libel may not prevail unless they demonstrate that the statements made about them were false and were made with actual knowledge or suspicion of that falsity. So his objection to the rules governing libel law is to nothing less than the application of the First Amendment itself.
In other areas, however, the Trump administration has far more power to imperil free speech. We live under an Espionage Act, adopted a century ago, which is both broad in its language and uncommonly vague in its meaning. As such, it remains a half-open door through which an administration that is hostile to free speech might walk. Such an administration could initiate criminal proceedings against journalists who write about defense- or intelligence-related topics on the basis that classified information was leaked to them by present or former government employees. No such action has ever been commenced against a journalist. Press lawyers and civil-liberties advocates have strong arguments that the law may not be read so broadly and still be consistent with the First Amendment. But the scope of the Espionage Act and the impact of the First Amendment upon its interpretation remain unknown.
A related area in which an administration’s attitude toward the press may affect the press’s ability to function as a check on government is the protection of journalists’ confidential sources. The Obama administration prosecuted more Espionage Act cases against sources of information to journalists than all prior administrations combined. After a good deal of deserved press criticism, it agreed to expand the internal Department of Justice guidelines designed to limit the circumstances under which such source revelation is demanded. But the guidelines are none too protective and are, after all, simply guidelines. A new administration is free to change or limit them or, in fact, abandon them altogether. In this area, as in so many others, it is too early to judge the ultimate treatment of free expression by the Trump administration. But the threats are real, and there is good reason to be wary.
Floyd Abrams is the author of The Soul of the First Amendment (Yale University Press, 2017).
Ayaan Hirsi Ali
Freedom of speech is being threatened in the United States by a nascent culture of hostility to different points of view. As political divisions in America have deepened, a conformist mentality of “right thinking” has spread across the country. Increasingly, American universities, where no intellectual doctrine ought to escape critical scrutiny, are some of the most restrictive domains when it comes to asking open-ended questions on subjects such as Islam.
Legally, speech in the United States is protected to a degree unmatched in almost any industrialized country. The U.S. has avoided unpredictable Canadian-style restrictions on speech, for example. I remain optimistic that as long as we have the First Amendment in the U.S., any attempt at formal legal censorship will be vigorously challenged.
Culturally, however, matters are very different in America. The regressive left is at the forefront of the threat to free speech on any issue that is important to progressives. The current pressure coming from those who call themselves “social-justice warriors” is unlikely to lead to successful legislation to curb the First Amendment. Instead, censorship is spreading in the cultural realm, particularly at institutions of higher learning.
The way activists of the regressive left achieve silence or censorship is by creating a taboo, and one of the most pernicious taboos in operation today is the word “Islamophobia.” Islamists are similarly motivated to rule any critical scrutiny of Islamic doctrine out of order. There is now a university center (funded by Saudi money) in the U.S. dedicated to monitoring and denouncing incidents of “Islamophobia.”
The term “Islamophobia” is used against critics of political Islam, but also against progressive reformers within Islam. The term implies an irrational fear that is tainted by hatred, and it has had a chilling effect on free speech. In fact, “Islamophobia” is a poorly defined term. Islam is not a race, and it is very often perfectly rational to fear some expressions of Islam. No set of ideas should be beyond critical scrutiny.
To push back in this cultural realm—in our universities, in public discourse—those favoring free speech should focus more on the message of dawa, the set of ideas that the Islamists want to promote. If the aims of dawa are sufficiently exposed, ordinary Americans and Muslim Americans will reject it. The Islamist message is a message of divisiveness, misogyny, and hatred, an anachronism that demands people live by tribal norms dating from the seventh century. The best antidote to Islamic extremism is the revelation of what its primary objective is: a society governed by Sharia. This is the opposite of censorship: It is documenting reality. What is life like in Saudi Arabia, Iran, the northern Nigerian states? What is the true nature of Sharia law?
Islamists want to hide the true meaning of Sharia, Jihad, and the implications for women, gays, religious minorities, and infidels under the veil of “Islamophobia.” Islamists use “Islamophobia” to obfuscate their vision and imply that any scrutiny of political Islam is hatred and bigotry. The antidote to this is more exposure and more speech.
As pressure on freedom of speech increases from the regressive left, we must reject the notions that only Muslims can speak about Islam, and that any critical examination of Islamic doctrines is inherently “racist.”
Instead of contorting Western intellectual traditions so as not to offend our Muslim fellow citizens, we need to defend the Muslim dissidents who are risking their lives to promote the human rights we take for granted: equality for women, tolerance of all religions and orientations, our hard-won freedoms of speech and thought.
It is by nurturing and protecting such speech that progressive reforms can emerge within Islam. By accepting the increasingly narrow confines of acceptable discourse on issues such as Islam, we do dissidents and progressive reformers within Islam a grave disservice. For truly progressive reforms within Islam to be possible, full freedom of speech will be required.
Ayaan Hirsi Ali is a research fellow at the Hoover Institution, Stanford University, and the founder of the AHA Foundation.
Lee C. Bollinger
I know it is too much to expect that political discourse mimic the measured, self-questioning, rational, footnoting standards of the academy, but there is a difference between robust political debate and political debate infected with fear or panic. The latter introduces a state of mind that is visceral and irrational. In the realm of fear, we move beyond the reach of reason and a sense of proportionality. When we fear, we lose the capacity to listen and can become insensitive and mean.
Our Constitution is well aware of this fact about the human mind and of its negative political consequences. In the First Amendment jurisprudence established over the past century, we find many expressions of the problematic state of mind that is produced by fear. Among the most famous and potent is that of Justice Brandeis in Whitney v. California in 1927, one of the many cases involving aggravated fears of subversive threats from abroad. “It is the function of (free) speech,” he said, “to free men from the bondage of irrational fears.” “Men feared witches,” Brandeis continued, “and burned women.”
Today, our “witches” are terrorists, and Brandeis’s metaphorical “women” include the refugees (mostly children) and displaced persons, immigrants, and foreigners whose lives have been thrown into suspension and doubt by policies of exclusion.
The same fears of the foreign that take hold of a population inevitably infect our internal interactions and institutions, yielding suppression of unpopular and dissenting voices, victimization of vulnerable groups, attacks on the media, and the rise of demagoguery, with its disdain for facts, reason, expertise, and tolerance.
All of this imposes a very special obligation on those of us within universities. Not only must we make the case in every venue for the values that form the core of who we are and what we do, but we must also live up to our own principles of free inquiry and fearless engagement with all ideas. This is why the recent incidents on a handful of college campuses disrupting and effectively censoring speakers are so alarming. Such acts not only betray a basic principle but also inflame a rising prejudice against the academic community, and they feed efforts to delegitimize our work, at the very moment when it’s most needed.
I do not for a second support the view that this generation has an unhealthy aversion to engaging differences of opinion. That is a modern trope of polarization, as is the portrayal of universities as hypocritical about academic freedom and political correctness. But now, in this environment especially, universities must be at the forefront of defending the rights of all students and faculty to listen to controversial voices, to engage disagreeable viewpoints, and to make every effort to demonstrate our commitment to the sort of fearless and spirited debate that we are simultaneously asking of the larger society. Anyone with a voice can shout over a speaker; but being able to listen to and then effectively rebut those with whom we disagree—particularly those who themselves peddle intolerance—is one of the greatest skills our education can bestow. And it is something our democracy desperately needs more of. That is why, I say to you now, if speakers who are being denied access to other campuses come here, I will personally volunteer to introduce them, and listen to them, however much I may disagree with them. But I will also never hesitate to make clear why I disagree with them.
Lee C. Bollinger is the 19th president of Columbia University and the author of Uninhibited, Robust, and Wide-Open: A Free Press for a New Century. This piece has been excerpted from President Bollinger’s May 17 commencement address.
Richard A. Epstein
Today, the greatest threat to the constitutional protection of freedom of speech comes from campus rabble-rousers who invoke this very protection. In their book, the speech of people like Charles Murray and Heather Mac Donald constitutes a form of violence, bordering on genocide, that receives no First Amendment protection. Enlightened protestors are both bound and entitled to shout them down, by force or other disruptive actions, if their universities are so foolish as to extend them an invitation to speak. Any indignant minority may take the law into its own hands to eradicate the intellectual cancer before it spreads on its own campus.
By such tortured logic, a new generation of vigilantes distorts First Amendment doctrine: Speech becomes violence, and violence becomes a heroic act of self-defense. The standard First Amendment interpretation emphatically rejects that view. Of course, the First Amendment doesn’t let you say whatever you want, whenever and wherever you want. Your freedom of speech is subject to the same limitations as your freedom of action. So you have no constitutional license to assault other people, to lie to them, or to form cartels to bilk them in the marketplace. But folks such as Murray, Mac Donald, and even Yiannopoulos do not come close to crossing into that forbidden territory. They are not using, for example, “fighting words,” a category rightly limited to words or actions calculated to provoke immediate aggression against a known target. Fighting words are worlds apart from speech that provokes a negative reaction in listeners who find it offensive solely because of the content of its message.
This distinction is central to the First Amendment. Fighting words have to be blocked by well-tailored criminal and civil sanctions lest some people gain license to intimidate others from speaking or peaceably assembling. The remedy for mere offense is to speak one’s mind in response. But mere offense never gives anyone the right to block the speech of others, lest everyone be able to increase his sphere of action unilaterally by getting really angry about the beliefs of others. No one has the right to silence others by working himself into a fit of rage.
Obviously, it is intolerable to let mutual animosity generate factional warfare, whereby everyone can use force to silence rivals. To avoid this war of all against all, each side claims that only its actions are privileged. These selective claims quickly degenerate into a form of viewpoint discrimination, which undermines one of the central protections that traditional First Amendment law erects: a wall against each and every group out to destroy the level playing field on which robust political debate rests. Every group should be at risk for having its message fall flat. The new campus radicals want to upend that understanding by shutting down their adversaries if their universities do not. Their aggression must be met, if necessary, by counterforce. Silence in the face of aggression is not an acceptable alternative.
Richard A. Epstein is the Laurence A. Tisch Professor of Law at the New York University School of Law.
David French
We’re living in the midst of a troubling paradox. At the exact same time that First Amendment jurisprudence has arguably never been stronger and more protective of free expression, millions of Americans feel they simply can’t speak freely. Indeed, talk to Americans living and working in the deep-blue confines of the academy, Hollywood, and the tech sector, and you’ll get a sense of palpable fear. They’ll explain that they can’t say what they think and keep their jobs, their friends, and sometimes even their families.
The government isn’t cracking down or censoring; instead, Americans are using free speech to destroy free speech. For example, a social-media shaming campaign is an act of free speech. So is an economic boycott. So is turning one’s back on a public speaker. So is a private corporation firing a dissenting employee for purely political reasons. Each of these actions is largely protected from government interference, and each one represents an expression of the speaker’s ideas and values.
The problem, however, is obvious. The goal of each of these kinds of actions isn’t to persuade; it’s to intimidate. The goal isn’t to foster dialogue but to coerce conformity. The result is a marketplace of ideas that has been emptied of all but the approved ideological vendors—at least in those communities that are dominated by online thugs and corporate bullies. Indeed, this mindset has become so prevalent that in places such as Portland, Berkeley, Middlebury, and elsewhere, the bullies and thugs have crossed the line from protected—albeit abusive—speech into outright shout-downs and mob violence.
But there’s something else going on, something that’s insidious in its own way. While politically correct shaming still has great power in deep-blue America, its effect in the rest of the country is to trigger a furious backlash, one characterized less by a desire for dialogue and discourse than by its own rage and scorn. So we’re moving toward two Americas—one that ruthlessly (and occasionally illegally) suppresses dissenting speech and the other that is dangerously close to believing that the opposite of political correctness isn’t a fearless expression of truth but rather the fearless expression of ideas best calculated to enrage your opponents.
The result is a partisan feedback loop where right-wing rage spurs left-wing censorship, which spurs even more right-wing rage. For one side, a true free-speech culture is a threat to feelings, sensitivities, and social justice. The other side waves high the banner of “free speech” to sometimes elevate the worst voices to the highest platforms—not so much to protect the First Amendment as to infuriate the hated “snowflakes” and trigger the most hysterical overreactions.
The culturally sustainable argument for free speech is something else entirely. It reminds the cultural left of its own debt to free speech while reminding the political right that a movement allegedly centered around constitutional values can’t abandon the concept of ordered liberty. The culture of free speech thrives when all sides remember their moral responsibilities—to both protect the right of dissent and to engage in ideological combat with a measure of grace and humility.
David French is a senior writer at National Review.
Pamela Geller
The real question isn’t whether free speech is under threat in the United States but whether it’s irretrievably lost. Can we get it back? Not without war, I suspect, as is evidenced by the violence that erupts at colleges on the shamefully rare occasions when a conservative speaker appears on campus.
Free speech is the soul of our nation and the foundation of all our other freedoms. If we can’t speak out against injustice and evil, those forces will prevail. Freedom of speech is the foundation of a free society. Without it, a tyrant can wreak havoc unopposed, while his opponents are silenced.
With that principle in mind, I organized a free-speech event in Garland, Texas. The world had recently been rocked by the murder of the Charlie Hebdo cartoonists. My version of “Je Suis Charlie” was an event here in America to show that we can still speak freely and draw whatever we like in the Land of the Free. Yet even after jihadists attacked our event, I was blamed—by Donald Trump among others—for provoking Muslims. And if I tried to hold a similar event now, no arena in the country would allow me to do so—not just because of the security risk, but because of the moral cowardice of all intellectual appeasers.
Under what law is it wrong to depict Muhammad? Under Islamic law. But I am not a Muslim, I don’t live under Sharia. America isn’t under Islamic law, yet for standing for free speech, I’ve been:
- Prevented from running our advertisements in every major city in this country. We have won free-speech lawsuits all over the country, which officials circumvent by prohibiting all political ads (while making exceptions for ads from Muslim advocacy groups);
- Shunned by the right, shut out of the Conservative Political Action Conference;
- Shunned by Jewish groups at the behest of terror-linked groups such as the Council on American-Islamic Relations;
- Blacklisted from speaking at universities;
- Prevented from publishing books, for security reasons and because publishers fear shaming from the left;
- Banned from Britain.
A Seattle court accused me of trying to shut down free speech after we merely tried to run an FBI poster on global terrorism. Authorities had banned all political ads in other cities to avoid running ours, and Seattle blamed us for that, which was like blaming a woman for being raped because she was wearing a short skirt.
This kind of vilification and shunning is key to the left’s plan to shut down all dissent from its agenda—it makes legislation restricting speech unnecessary.
The same refusal to allow our point of view to be heard has manifested itself elsewhere. The foundation of my work is individual rights and equality for all before the law. These are the foundational principles of our constitutional republic. That is now considered controversial. Truth is the new hate speech. Truth is going to be criminalized.
The First Amendment doesn’t only protect ideas that are sanctioned by the cultural and political elites. If “hate speech” laws are enacted, who would decide what’s permissible and what’s forbidden? The government? The gunmen in Garland?
There has been an inversion of the founding premise of this nation. No longer is it the subordination of might to right, but right to might. History is repeatedly deformed with the bloody consequences of this transition.
Pamela Geller is the editor in chief of the Geller Report and president of the American Freedom Defense Initiative.
Jonah Goldberg
Of course free speech is under threat in America. Frankly, it’s always under threat in America because it’s always under threat everywhere. Ronald Reagan was right when he said in 1961, “Freedom is never more than one generation away from extinction. We didn’t pass it on to our children in the bloodstream. It must be fought for, protected, and handed on for them to do the same.”
This is more than political boilerplate. Reagan identified the source of the threat: human nature. God may have endowed us with a right to liberty, but he didn’t give us all a taste for it. As with most finer things, we must work to acquire a taste for it. That is what civilization—or at least our civilization—is supposed to do: cultivate attachments to certain ideals. “Cultivate” shares the same Latin root as “culture,” cultus, and properly understood they mean the same thing: to grow, nurture, and sustain through labor.
In the past, threats to free speech have taken many forms—nationalist passion, Comstockery (both good and bad), political suppression, etc.—but the threat to free speech today is different. It is less top-down and more bottom-up. We are cultivating a generation of young people to reject free speech as an important value.
One could mark the beginning of the self-esteem movement with Nathaniel Branden’s 1969 paper, “The Psychology of Self-Esteem,” which claimed that “feelings of self-esteem were the key to success in life.” This understandable idea ran amok in our schools and in our culture. When I was a kid, Saturday-morning cartoons were punctuated with public-service announcements telling kids: “The most important person in the whole wide world is you, and you hardly even know you!”
The self-esteem craze was just part of the cocktail of educational fads. Other ingredients included multiculturalism, the anti-bullying crusade, and, of course, that broad phenomenon known as “political correctness.” Combined, they’ve produced a generation that rejects the old adage “sticks and stones can break my bones but words can never harm me” in favor of the notion that “words hurt.” What we call political correctness has been on college campuses for decades. But it lacked a critical mass of young people who were sufficiently receptive to it to make it a fully successful ideology. The campus commissars welcomed the new “snowflakes” with open arms; truly, these are the ones we’ve been waiting for.
“Words hurt” is a fashionable concept in psychology today. (See Psychology Today: “Why Words Can Hurt at Least as Much as Sticks and Stones.”) But it’s actually a much older idea than the “sticks and stones” aphorism. For most of human history, it was a crime to say insulting or “injurious” things about aristocrats, rulers, the Church, etc. That tendency didn’t evaporate with the Divine Right of Kings. Jonathan Haidt has written at book length about our natural capacity to create zones of sanctity, immune from reason.
And that is the threat free speech faces today. Those who inveigh against “hate speech” are in reality fighting “heresy speech”—ideas that do “violence” to sacred notions of self-esteem, racial or gender equality, climate change, and so on. Put whatever label you want on it, contemporary “social justice” progressivism acts as a religion, and it has no patience for blasphemy.
When Napoleon’s forces converted churches into stables, the clergy did not object on the grounds that regulations regarding the proper care and feeding of animals had been violated. They complained of sacrilege and blasphemy. When Charles Murray or Christina Hoff Sommers visits college campuses, the protestors are behaving like the zealous acolytes of St. Jerome. Appeals to the First Amendment have as much power over the “antifa” fanatics as appeals to Odin did to champions of the New Faith.
That is the real threat to free speech today.
Jonah Goldberg is a senior editor at National Review and a fellow at the American Enterprise Institute.
KC Johnson
In early May, the Washington Post urged universities to make clear that “racist signs, symbols, and speech are off-limits.” Given the extraordinarily broad definition of what constitutes “racist” speech at most institutions of higher education, this demand would single out most right-of-center (and, in some cases, even centrist and liberal) discourse on issues of race or ethnicity. The editorial provided the highest-profile example of how hostility to free speech, once confined to the ideological fringe on campus, has migrated to the liberal mainstream.
The last few years have seen periodic college protests—featuring claims that significant amounts of political speech constitute “violence,” thereby justifying censorship—followed by even more troubling attempts to appease the protesters. After the mob scene that greeted Charles Murray upon his visit to Middlebury College, for instance, the student government criticized any punishment for the protesters, and several student leaders wanted to require that future speakers conform to the college’s “community standard” on issues of race, gender, and ethnicity. In the last few months, similar attempts to stifle the free exchange of ideas in the name of promoting diversity occurred at Wesleyan, Claremont McKenna, and Duke. Offering an extreme interpretation of this point of view, one CUNY professor recently dismissed dialogue as “inherently conservative,” since it reinforced the “relations of power that presently exist.”
It’s easy, of course, to dismiss campus hostility to free speech as affecting only a small segment of American public life—albeit one that trains the next generation of judges, legislators, and voters. But, as Jonathan Chait observed in 2015, denying “the legitimacy of political pluralism on issues of race and gender” has broad appeal on the left. It is only most apparent on campus because “the academy is one of the few bastions of American life where the political left can muster the strength to impose its political hegemony upon others.” During his time in office, Barack Obama generally urged fellow liberals to support open intellectual debate. But the current campus environment previews the position of free speech in a post-Obama Democratic Party, increasingly oriented around identity politics.
Waning support on one end of the ideological spectrum for this bedrock American principle should provide a political opening for the other side. The Trump administration, however, seems poorly suited to make the case. Throughout his public career, Trump has rarely supported free speech, even in the abstract, and has periodically embraced legal changes to facilitate libel lawsuits. Moreover, the right-wing populism that motivates Trump’s base has a long tradition of ideological hostility to civil liberties of all types. Even in campus contexts, conservatives have defended free speech inconsistently, as seen in recent calls that CUNY disinvite anti-Zionist fanatic Linda Sarsour as a commencement speaker.
In a sharply polarized political environment, awash in dubiously sourced information, free speech is all the more important. Yet this same environment has seen both sides, most blatantly elements of the left on campuses, demand restrictions on their ideological foes’ free speech in the name of promoting a greater good.
KC Johnson is a professor of history at Brooklyn College and the CUNY Graduate Center.
Laura Kipnis
I find myself with a strange-bedfellows problem lately. Here I am, a left-wing feminist professor invited onto the pages of COMMENTARY—though I’d be thrilled if it were still 1959—while fielding speaking requests from right-wing think tanks and libertarians who oppose child-labor laws.
Somehow I’ve ended up in the middle of the free-speech-on-campus debate. My initial crime was publishing a somewhat contentious essay about campus sexual paranoia that put me on the receiving end of Title IX complaints. Apparently I’d created a “hostile environment” at my university. I was investigated (for 72 days). Then I wrote up what I’d learned about these campus inquisitions in a second essay. Then I wrote about it all some more, in a book exposing the kangaroo-court elements of the Title IX process—and the extra-legal gag orders imposed on everyone caught in its widening snare.
I can’t really comment on whether more charges have been filed against me over the book. I’ll just say that writing about being a Title IX respondent could easily become a life’s work. I learned, shortly after writing this piece, that I and my publisher were being sued for defamation, among other things.
Is free speech under threat on American campuses? Yes. We know all about student activists who wish to shut down talks by people with opposing views. I got smeared with a bit of that myself, after a speaking invitation at Wellesley—some students made a video protesting my visit before I arrived. The talk went fine, though a group of concerned faculty circulated an open letter afterward also protesting the invitation: My views on sexual politics were too heretical, and might have offended students.
I didn’t take any of this too seriously, even as right-wing pundits crowed, with Wellesley as their latest outrage bait. It was another opportunity to mock student activists, and the fact that I was myself a feminist rather than a Charles Murray or a Milo Yiannopoulos made them positively gleeful.
I do find myself wondering where all my new free-speech pals were when another left-wing professor, Steven Salaita, was fired (or, if you prefer euphemism, “his job offer was withdrawn”) from the University of Illinois after he tweeted criticism of Israel’s Gaza policy. Sure, the tweets were hyperbolic, but hyperbole and strong opinions are protected speech, too.
I guess free speech is easy to celebrate until it actually challenges something. Funny, I haven’t seen Milo around lately—so beloved by my new friends when he was bashing minorities and transgender kids. Then he mistakenly said something authentic (who knew he was capable of it!), reminiscing about an experience a lot of gay men have shared: teenage sex with older men. He tried walking it back—no, no, he’d been a victim, not a participant—but his fan base was shrieking about pedophilia and fleeing in droves. Gee, they were all so against “political correctness” a few minutes before.
It’s easy to be a free-speech fan when your feathers aren’t being ruffled. No doubt what makes me palatable to the anti-PC crowd is having thus far failed to ruffle them enough. I’m just going to have to work harder.
Laura Kipnis’s latest book is Unwanted Advances: Sexual Paranoia Comes to Campus.
Eugene Kontorovich
The free and open exchange of views—especially politically conservative or traditionally religious ones—is being challenged. This is taking place not just at college campuses but throughout our public spaces and cultural institutions. James Watson was fired from the lab he had led since 1968 and could not speak at New York University because of petty, censorious students who would not know DNA from LSD. Our nation’s founders and heroes are being “disappeared” from public commemoration, like Trotsky from a photograph of Soviet rulers.
These attacks on “free speech” are not the result of government action. They are not what the First Amendment protects against. The current methods—professional and social shaming, exclusion, and employment termination—are more inchoate, and their effects are multiplied by self-censorship. A young conservative legal scholar might find himself thinking: “If the late Justice Antonin Scalia can posthumously be deemed a ‘bigot’ by many academics, what chance have I?”
Ironically, artists and intellectuals have long prided themselves on being the first defenders of free speech. Today, it is the institutions of both popular and high culture that are the censors. Is there one poet in the country who would speak out for Ann Coulter?
The inhibition of speech at universities is part of a broader social phenomenon of making longstanding, traditional views and practices sinful overnight. Conservatives have not put up much resistance to this. To paraphrase Martin Niemöller’s famous dictum: “First they came for Robert E. Lee, and I said nothing, because Robert E. Lee meant nothing to me.”
The situation with respect to Israel and expressions of support for it deserves separate discussion. Even as university administrators give political power to favored ideologies by letting them create “safe spaces” (safe from opposing views), Jews find themselves and their state at the receiving end of claims of apartheid—modern-day blood libels. It is not surprising if Jewish students react by demanding that they get a safe space of their own. It is even less surprising if their parents, paying $65,000 a year, want their children to have a nicer time of it. One frequently hears Jewish groups express concern about Jewish students feeling increasingly isolated and uncomfortable on campus.
But demanding selective protection from the new ideological commissars is unlikely to bring the desired results. First, this new ideology, even if it can be harnessed momentarily to give respite to harassed Jews on campus, is ultimately illiberal and will be controlled by “progressive” forces. Second, it is not so terrible for Jews in the Diaspora to feel a bit uncomfortable. It has been the common condition of Jews throughout the millennia. The social awkwardness that Jews at liberal arts schools might feel in being associated with Israel is of course one of the primary justifications for the Jewish State. Facing the snowflakes incapable of hearing a dissonant view—but who nonetheless, in the grip of intersectional ecstasy, revile Jewish self-determination—Jewish students should toughen up.
Eugene Kontorovich teaches constitutional law at Northwestern University and heads the international law department of the Kohelet Policy Forum in Jerusalem.
Nicholas Lemann
There’s an old Tom Wolfe essay in which he describes being on a panel discussion at Princeton in 1965 and provoking the other panelists by announcing that America, rather than being in crisis, was in the middle of a “happiness explosion.” He was arguing that the mass effects of 20 years of post–World War II prosperity made for a larger phenomenon than the Vietnam War, the racial crisis, and the other primary concerns of intellectuals at the time.
In the same spirit, I’d say that we are in the middle of a free-speech explosion, because of 20-plus years of the Internet and 10-plus years of social media. If one understands speech as disseminated individual opinion, then surely we live in the free-speech-est society in the history of the world. Anybody with access to the unimpeded World Wide Web can say anything to a global audience, and anybody can hear anything, too. All threats to free speech should be understood in the context of this overwhelming reality.
It is a comforting fantasy that a genuine free-speech regime will empower mainly “good,” but previously repressed, speech. Conversely, repressive regimes that are candid enough to explain their anti-free-speech policies usually say that they’re not against free speech, just “bad” speech. We have to accept that more free speech probably means, in the aggregate, more bad speech, and also a weakening of the power, authority, and economic support for information professionals such as journalists. Welcome to the United States in 2017.
I am lucky enough to live and work on the campus of a university, Columbia, that has been blessedly free of successful attempts to repress free speech. Just in the last few weeks, Charles Murray and Dinesh D’Souza have spoken here without incident. But, yes, the evidently growing popularity of the idea that “hate speech” shouldn’t be permitted on campuses is a problem, especially, it seems, at small private liberal-arts colleges. We should all do our part, and I do, by frequently and publicly endorsing free-speech principles. Opposing the BDS movement falls squarely into that category.
It’s not just on campuses that free-speech vigilance is needed, though. The number-one threat to free speech, to my mind, is that the wide-open Web has been replaced by privately owned platforms such as Facebook and Google as the way most people experience the public life of the Internet. These companies are committed to banning “hate speech,” and they are eager to operate freely in countries, like China, that don’t permit free political speech. That makes for a far more consequential constrained environment than any campus’s speech code.
Also, Donald Trump regularly engages in presidentially unprecedented rhetoric demonizing people who disagree with him. He seems to think this is all in good fun, but, as we have already seen at his rallies, not everybody hears it that way. The place where Trumpism will endanger free speech isn’t in the center—the White House press room—but at the periphery, for example in the way that local police handle bumptious protestors and the journalists covering them. This is already happening around the country. If Trump were as disciplined and knowledgeable as Vladimir Putin or Recep Tayyip Erdogan, which so far he seems not to be, then free speech could be in even more serious danger from government, which in most places is its usual main enemy.
Nicholas Lemann is a professor at Columbia Journalism School and a staff writer for the New Yorker.
Michael J. Lewis
Free speech is a right but it is also a habit, and where the habit shrivels so will the right. If free speech today is in headlong retreat—everywhere threatened by regulation, organized harassment, and even violence—it is in part because our political culture allowed the practice of persuasive oratory to atrophy. The process began in 1973, an unforeseen side effect of Roe v. Wade. Legislators were delighted to learn that by relegating this divisive matter of public policy to the Supreme Court and adopting a merely symbolic position, they could sit all the more safely in their safe seats.
Since then, one crucial question of public policy after another has been punted out of the realm of politics and into the judicial one. Issues that might have been debated with all the rhetorical agility of a Lincoln and a Douglas, and then subjected to a process of negotiation, compromise, and voting, have instead been settled by decree: e.g., Chevron, Kelo, Obergefell. The consequences for speech have been pernicious. Since the time of Pericles, deliberative democracy has been predicated on the art of persuasion, which demands the forceful clarity of thought and expression without which no one has ever been persuaded. But a legislature that delegates its authority to judges and regulators will awaken to discover its oratorical culture has been stunted. When politicians, rather than seeking to convince and win over, prefer to project a studied and pleasant vagueness, debate withers into tedious defensive performance. It has been decades since any presidential debate has seen any sustained give and take over a matter of policy. If there is any suspense at all, it is only the possibility that a fatigued or peeved candidate might blurt out that tactless shard of truth known as a gaffe.
A generation accustomed to hearing platitudes smoothly dispensed from behind a teleprompter will find the speech of a fearless extemporaneous speaker to be startling, even disquieting; unfamiliar ideas always are. Unhappily, they have been taught to interpret that disquiet as an injury done to them, rather than as a premise offered to them to consider. All this would not have happened—certainly not to this extent—had not our deliberative democracy decided a generation ago that it preferred the security of incumbency to the risks of unshackled debate. The compulsory contraction of free speech on college campuses is but the logical extension of the voluntary contraction of free speech in our political culture.
Michael J. Lewis’s new book is City of Refuge: Separatists and Utopian Town Planning (Princeton University Press).
Heather Mac Donald
The answer to the symposium question depends on how powerful the transmission belt is between academia and the rest of the country. On college campuses, violence and brute force are silencing speakers who challenge left-wing campus orthodoxies. These totalitarian outbreaks have been met with listless denunciations by college presidents, followed by . . . virtually nothing. As of mid-May, the only discipline imposed for 2017’s mass attacks on free speech at UC Berkeley, Middlebury, and Claremont McKenna College was a letter of reprimand inserted—sometimes only temporarily—into the files of several dozen Middlebury students, accompanied by a brief period of probation. Previous outbreaks of narcissistic incivility, such as the screaming-girl fit at Yale and the assaults on attendees of Yale’s Buckley program, were discreetly ignored by college administrators.
Meanwhile, the professoriate unapologetically defends censorship and violence. After the February 1 riot in Berkeley to prevent Milo Yiannopoulos from speaking, Déborah Blocker, associate professor of French at UC Berkeley, praised the rioters. They were “very well-organized and very efficient,” Blocker reported admiringly to her fellow professors. “They attacked property but they attacked it very sparingly, destroying just enough University property to obtain the cancellation order for the MY event and making sure no one in the crowd got hurt” (emphasis in original). (In fact, perceived Milo and Donald Trump supporters were sucker-punched and maced; businesses downtown were torched and vandalized.) New York University’s vice provost for faculty, arts, humanities, and diversity, Ulrich Baer, displayed Orwellian logic by claiming in a New York Times op-ed that shutting down speech “should be understood as an attempt to ensure the conditions of free speech for a greater group of people.”
Will non-academic institutions take up this zeal for outright censorship? Other ideological products of the left-wing academy have been fully absorbed and operationalized. Racial victimology, which drives much of the campus censorship, is now standard in government and business. Corporate diversity trainers counsel that bias is responsible for any lack of proportional racial representation in the corporate ranks. Racial disparities in school discipline and incarceration are universally attributed to racism rather than to behavior. Public figures have lost jobs for violating politically correct taboos.
Yet Americans possess an instinctive commitment to the First Amendment. Federal judges, hardly an extension of the Federalist Society, have overwhelmingly struck down campus speech codes. It is hard to imagine that they would be any more tolerant of the hate-speech legislation so prevalent in Europe. So the question becomes: At what point does the pressure to conform to the elite worldview curtail freedom of thought and expression, even without explicit bans on speech?
Social stigma against conservative viewpoints is not the same as actual censorship. But the line can blur. The Obama administration used regulatory power to impose a behavioral conformity on public and private entities. School administrators may have technically still possessed the right to dissent from novel theories of gender, but they had to behave as if they were fully on board with the transgender revolution when it came to allowing boys to use girls’ bathrooms and locker rooms.
Had Hillary Clinton been elected president, the federal bureaucracy would have mimicked campus diversocrats with even greater zeal. That threat, at least, has been avoided. Heresies against left-wing dogma may still enter the public arena, if only by the back door. The mainstream media have lurched even further left in the Trump era, but the conservative media, however mocked and marginalized, are expanding (though Twitter and Facebook’s censorship of conservative speakers could be a harbinger of more official silencing).
Outside the academy, free speech is still legally protected, but its exercise requires ever greater determination.
Heather Mac Donald is a fellow at the Manhattan Institute and the author of The War on Cops.
John McWhorter
There is a certain mendacity, as Brick put it in Cat on a Hot Tin Roof, in our discussion of free speech on college campuses. Namely, none of us genuinely wishes that absolutely all issues be aired in the name of education and open-mindedness. To insist that we do is to pretend that civilized humanity makes nothing we could call advancement in philosophical consensus.
I doubt we need “free speech” on issues such as whether slavery and genocide are okay, whether it has been a mistake to view women as men’s equals, or whether we should banish as antique the idea that whites are a master race while other peoples represent a lower rung on the Darwinian scale. With all due reverence for John Stuart Mill’s advocacy of the regular airing of even noxious views in order to reinforce clarity on why they were rejected, we are also human beings with limited time. A commitment to the Enlightenment justifiably will decree that certain views are, indeed, no longer in need of discussion.
However, our modern social-justice warriors are claiming that this no-fly zone of discussion is vaster than any conception of logic or morality justifies. We are being told that questions regarding the modern proposals about cultural appropriation, about whether even passing infelicitous statements constitute racism in the way that formalized segregation and racist disparagement did, or about whether social disparities can be due to cultural legacies rather than structural impediments, are as indisputably egregious, backwards, and abusive as the benighted views of the increasingly distant past.
That is, the new idea is not only that discrimination and inequality still exist, but that even to question the left’s utopian expectations on such matters justifies the same furious, sloganistic, and even physically violent resistance that was once leveled against those designated heretics by a Christian hegemony.
Of course the protesters in question do not recognize themselves in a portrait as opponents of something called heresy. They suppose that Galileo’s opponents were clearly wrong but that they, today, are actually correct in a way that no intellectual or moral argument could coherently deny.
As such, we have students allowed to declare college campuses “racist” when they are the least racist spaces on the planet—because they are, predictably given the imperfection of humans, not perfectly free of passingly unsavory interactions. Thinkers from the right rather than the left who are invited to talk for a portion of an hour, have dinner with a few people, and fly home are treated as if they were reanimated Hitlers. The student of color who hears a few white students venturing polite questions about the leftist orthodoxy is supported in casting these questions as “racist” rhetoric.
The people on college campuses who openly and aggressively spout this new version of Christian (or even Islamist) crusading—ironically justifying it as a barricade against “fascist” muzzling of freedom when the term applies ominously well to the regime they are fostering—are a minority. However, the spinning sawmill blade of their rhetoric has succeeded in rendering opposition as risky as espousing pedophilia, such that only those natively open to violent criticism dare speak out. The latter group is small. The campus consensus thereby becomes, if only at moralistic gunpoint à la the ISIS victim video, a strangled hard-leftism.
Hence freedom of speech is indeed threatened on today’s college campuses. I have lost count of how many of my students, despite being liberal Democrats (many of whom sobbed at Hillary Clinton’s loss last November), have told me that they are afraid to express their opinions about issues that matter, despite the fact that their opinions are ones that any liberal or even leftist person circa 1960 would have considered perfectly acceptable.
Something has shifted of late, and not in a direction we can legitimately consider forwards.
John McWhorter teaches linguistics, philosophy, and music history at Columbia University and is the author of The Language Hoax, Words on the Move, and Talking Back, Talking Black.
Kate Bachelder Odell
It’s 2021, and Harvard Square has devolved into riots: Some 120 people are injured in protests, and the carnage includes fire-consumed cop cars and smashed-in windows. The police discharge canisters of tear gas and, after apprehending dozens of protesters, enforce a 1:45 A.M. curfew. Anyone roaming the streets after hours is subject to arrest. About 2,000 National Guardsmen are prepared to intervene. Such violence and disorder is also roiling Berkeley and other elite and educated areas.
Oh, that’s 1970. The details are from the Harvard Crimson’s account of “anti-war” riots that spring. The episode is instructive in considering whether free speech is under threat in the United States. Almost daily, there’s a new YouTube installment of students melting down over viewpoints of speakers invited to one campus or another. Even amid speech threats from government—for example, the IRS’s targeting of political opponents—nothing has captured the public’s attention like the end of free expression at America’s institutions of higher learning.
Yet disruption, confusion, and even violence are not new campus phenomena. And it’s hard to imagine that young adults who deployed brute force in the 1960s and ’70s were deeply committed to the open and peaceful exchange of ideas.
There may also be reason for optimism. The rough and tumble on campus in the 1960s and ’70s produced a more even-tempered ’80s and ’90s, and colleges are probably heading for another course correction. In covering the ruckuses at Yale, Missouri, and elsewhere, I’ve talked to professors and students who are figuring out how to respond to the illiberalism, even if the reaction is delayed. The University of Chicago put out a set of free-speech principles last year, and other schools such as Princeton and Purdue have endorsed them.
The NARPs—Non-Athletic Regular People, as they are sometimes known on campus—still outnumber the social-justice warriors, who appear to be overplaying their hand. A case in point is the University of Missouri, which experienced a precipitous drop in enrollment after instructor Melissa Click and her ilk stoked racial tensions last spring. The university has closed dorms and trimmed budgets. Which brings us to another silver lining: The economic model of higher education (exorbitant tuition to pay ever more administrators) may blow up traditional college before the fascists can.
Note also that the anti-speech movement is run by rich kids. A Brookings Institution analysis from earlier this year discovered that “the average enrollee at a college where students have attempted to restrict free speech comes from a family with an annual income $32,000 higher than that of the average student in America.” Few rank higher in average income than those at Middlebury College, where students evicted scholar Charles Murray in a particularly ugly scene. (The report notes that Murray was received respectfully at Saint Louis University, “where the median income of students’ families is half Middlebury’s.”) The impulses of over-adulated 20-year-olds may soon be tempered by the tyranny of having to show up for work on a daily basis.
None of this is to suggest that free speech is enjoying some renaissance either on campus or in America. But perhaps as the late Wall Street Journal editorial-page editor Robert Bartley put it in his valedictory address: “Things could be worse. Indeed, they have been worse.”
Kate Bachelder Odell is an editorial writer for the Wall Street Journal.
Jonathan Rauch
Is free speech under threat? The one-syllable answer is “yes.” The three-syllable answer is: “Yes, of course.” Free speech is always under threat, because it is not only the single most successful social idea in all of human history, it is also the single most counterintuitive. “You mean to say that speech that is offensive, untruthful, malicious, seditious, antisocial, blasphemous, heretical, misguided, or all of the above deserves government protection?” That seemingly bizarre proposition is defensible only on the grounds that the marketplace of ideas turns out to be the most powerful engine of knowledge, prosperity, liberty, social peace, and moral advancement that our species has had the good fortune to discover.
Every new generation of free-speech advocates will need to get up every morning and re-explain the case for free speech and open inquiry—today, tomorrow, and forever. That is our lot in life, and we just need to be cheerful about it. At discouraging moments, it is helpful to remember that the country has made great strides toward free speech since 1798, when the Adams administration arrested and jailed its political critics; and since the 1920s, when the U.S. government banned and burned James Joyce’s great novel Ulysses; and since 1954, when the government banned ONE, a pioneering gay journal. (The cover article was a critique of the government’s indecency censors, who censored it.) None of those things could happen today.
I suppose, then, the interesting question is: What kind of threat is free speech under today? In the present age, direct censorship by government bodies is rare. Instead, two more subtle challenges hold sway, especially, although not only, on college campuses. The first is a version of what I called, in my book Kindly Inquisitors, the humanitarian challenge: the idea that speech that is hateful or hurtful (in someone’s estimation) causes pain and thus violates others’ rights, much as physical violence does. The other is a version of what I called the egalitarian challenge: the idea that speech that denigrates minorities (again, in someone’s estimation) perpetuates social inequality and oppression and thus also is a rights violation. Both arguments call upon administrators and other bureaucrats to defend human rights by regulating speech rights.
Both doctrines are flawed to the core. Censorship harms minorities by enforcing conformity and entrenching majority power, and it no more ameliorates hatred and injustice than smashing thermometers ameliorates global warming. If unwelcome words are the equivalent of bludgeons or bullets, then the free exchange of criticism—science, in other words—is a crime. I could go on, but suffice it to say that the current challenges are new variations on ancient themes—and they will be followed, in decades and centuries to come, by many, many other variations. Memo to free-speech advocates: Our work is never done, but the really amazing thing, given the proposition we are tasked to defend, is how well we are doing.
Jonathan Rauch is a senior fellow at the Brookings Institution and the author of Kindly Inquisitors: The New Attacks on Free Thought.
Nicholas Quinn Rosenkranz
Speech is under threat on American campuses as never before. Censorship in various forms is on the rise. And this year, the threat to free speech on campus took an even darker turn, toward actual violence. The prospect of Milo Yiannopoulos speaking at Berkeley provoked riots that caused more than $100,000 worth of property damage on the campus. The prospect of Charles Murray speaking at Middlebury led to a riot that put a liberal professor in the hospital with a concussion. Ann Coulter’s speech at Berkeley was cancelled after the university determined that none of the appropriate venues could be protected from “known security threats” on the date in question.
The free-speech crisis on campus is caused, at least in part, by a more insidious campus pathology: the almost complete lack of intellectual diversity on elite university faculties. At Yale, for example, the number of registered Republicans in the economics department is zero; in the psychology department, there is one. Overall, there are 4,410 faculty members at Yale, and the total number of those who donated to a Republican candidate during the 2016 primaries was three.
So when today’s students purport to feel “unsafe” at the mere prospect of a conservative speaker on campus, it may be easy to mock them as “delicate snowflakes,” but in one sense, their reaction is understandable: If students are shocked at the prospect of a Republican behind a university podium, perhaps it is because many of them have never before laid eyes on one.
To see the connection between free speech and intellectual diversity, consider the recent commencement speech of Harvard President Drew Gilpin Faust:
Universities must be places open to the kind of debate that can change ideas. . . . Silencing ideas or basking in intellectual orthodoxy independent of facts and evidence impedes our access to new and better ideas, and it inhibits a full and considered rejection of bad ones. . . . We must work to ensure that universities do not become bubbles isolated from the concerns and discourse of the society that surrounds them. Universities must model a commitment to the notion that truth cannot simply be claimed, but must be established—established through reasoned argument, assessment, and even sometimes uncomfortable challenges that provide the foundation for truth.
Faust is exactly right. But, alas, her commencement audience might be forgiven a certain skepticism. After all, the number of registered Republicans in several departments at Harvard—e.g., history and psychology—is exactly zero. In those departments, the professors themselves may be “basking in intellectual orthodoxy” without ever facing “uncomfortable challenges.” This may help explain why some students will do everything in their power to keep conservative speakers off campus: They notice that faculty hiring committees seem to do exactly the same thing.
In short, it is a promising sign that true liberal academics like Faust have started speaking eloquently about the crucial importance of civil, reasoned disagreement. But they will be more convincing on this point when they hire a few colleagues with whom they actually disagree.
Nicholas Quinn Rosenkranz is a professor of law at Georgetown. He serves on the executive committee of Heterodox Academy, which he co-founded, on the board of directors of the Federalist Society, and on the board of directors of the Foundation for Individual Rights in Education (FIRE).
Ben Shapiro

In February, I spoke at California State University, Los Angeles. Before my arrival, professors informed students that a white supremacist would be descending on the school to preach hate; threats of violence soon prompted the administration to cancel the event. I vowed to show up anyway. One hour before the event, the administration backed down and promised to guarantee that the event could go forward, but police officers were told not to stop the 300 students, faculty, and outside protesters who blocked and assaulted those who attempted to attend the lecture. We ended up trapped in the auditorium, with the authorities telling students not to leave for fear of physical violence. I was rushed from campus under armed police guard.
Is free speech under assault?
Of course it is.
On campus, free speech is under assault thanks to a perverse ideology of intersectionality that claims victim identity is of primary value and that views are merely a secondary concern. As a corollary, if your views offend someone who outranks you on the intersectional hierarchy, your views are treated as violence—threats to identity itself. Statements that offend an individual’s identity have been treated as “microaggressions”: actual aggressions against another, ostensibly worthy of violence. Words, students have been told, may not break bones, but they will prompt sticks and stones, and rightly so.
Thus, protesters around the country—leftists who see verbiage as violence—have, in turn, used violence in response to ideas they hate. Leftist local authorities then use the threat of violence as an excuse to discriminate ideologically against conservatives. This means public intellectuals like Charles Murray being run off campus and the leftist professor accompanying him viciously assaulted; it means Ann Coulter being targeted for violence at Berkeley; it means universities preemptively banning me and Ayaan Hirsi Ali and Condoleezza Rice and even Jason Riley.
The campus attacks on free speech are merely the most extreme iteration of an ideology that spans from left to right: the notion that your right to free speech ends where my feelings begin. Even Democrats who say that Ann Coulter should be allowed to speak at Berkeley say that nobody should be allowed to contribute to a super PAC (unless you’re a union member, naturally).
Meanwhile, on the right, the president’s attacks on the press have convinced many Republicans that restrictions on the press wouldn’t be altogether bad. A Vanity Fair/60 Minutes poll in late April found that 36 percent of Americans thought freedom of the press “does more harm than good.” Undoubtedly, some of that is due to the media’s obvious bias. CNN’s Jeff Zucker has targeted the Trump administration for supposedly quashing journalism, but he was silent when the Obama administration’s Department of Justice cracked down on reporters from the Associated Press and Fox News, and when hacks like Deputy National Security Adviser Ben Rhodes openly sold lies regarding Iran. But for some on the right, the response to press falsities hasn’t been to call for truth, but to instead echo Trumpian falsehoods in the hopes of damaging the media. Free speech is only important when people seek the truth. Leftists traded truth for tribalism long ago; in response, many on the right seem willing to do the same. Until we return to a common standard under which facts matter, free speech will continue to rest on tenuous grounds.
Ben Shapiro is the editor in chief of The Daily Wire and the host of The Ben Shapiro Show.
Judith Shulevitz

It’s tempting to blame college and university administrators for the decline of free speech in America, and for years I did just that. If the guardians of higher education won’t inculcate the habits of mind required for serious thinking, I thought, who will? The unfettered but civil exchange of ideas is the basic operation of education, just as addition is the basic operation of arithmetic. And universities have to teach both the unfettered part and the civil part, because arguing in a respectful manner isn’t something anyone does instinctively.
So why change my mind now? Schools still cling to speech codes, and there still aren’t enough deans like the one at the University of Chicago who declared his school a safe-space-free zone. My alma mater just handed out prizes for “enhancing race and/or ethnic relations” to two students caught on video harassing the dean of their residential college, one screaming at him that he’d created “a space for violence to happen,” the other placing his face inches away from the dean’s and demanding, “Look at me.” All this because they deemed a thoughtful if ill-timed letter about Halloween costumes written by the dean’s wife to be an act of racist aggression. Yale should discipline students who behave like that, even if they’re right on the merits (I don’t think they were, but that’s not the point). They certainly don’t deserve awards. I can’t believe I had to write that sentence.
But in abdicating their responsibilities, the universities have enabled something even worse than an attack on free speech. They’ve unleashed an assault on themselves. There’s plenty of free speech around; we know that because so much bad speech—low-minded nonsense—tests our constitutional tolerance daily, and that’s holding up pretty well. (As Nicholas Lemann observes elsewhere in this symposium, Facebook and Google represent bigger threats to free speech than students and administrators.) What’s endangered is good speech.
Universities have set themselves up to be used. Provocateurs exploit the atmosphere on campus to goad overwrought students, then gleefully trash the most important bastion of our crumbling civil society. Higher education and everything it stands for—logical argument, the scientific method, epistemological rigor—start to look illegitimate. Voters perceive tenure and research and higher education itself as hopelessly partisan and unworthy of taxpayers’ money.
The press is a secondary victim of this process of delegitimization. If serious inquiry can be waved off as ideology, then facts won’t be facts and reporting can’t be trusted. All journalism will be equal to all other journalism, and all journalists will be reduced to pests you can slam to the ground with near impunity. Politicians will be able to say anything and do just about anything and there will be no countervailing authority to challenge them. I’m pretty sure that that way lies Putinism and Erdoganism. And when we get to that point, I’m going to start worrying about free speech again.
Judith Shulevitz is a critic in New York.
Harvey Silverglate

Free speech is, and has always been, threatened. The title of Nat Hentoff’s 1993 book Free Speech for Me—But Not for Thee is no less true today than at any time, even as the Supreme Court has accorded free speech a more absolute degree of protection than in any previous era.
Since the 1980s, the high court has decided most major free-speech cases in favor of speech, with most of the major decisions being unanimous or nearly so.
Women’s-rights advocates were turned back by the high court in 1986 when they sought to ban the sale of printed materials that, because some deemed them pornographic, were alleged to promote violence against women. Censorship in the name of gender-based protection thus failed to gain traction.
Despite the demands of civil-rights activists, the Supreme Court in 1992 declared cross-burning to be a protected form of expression in R.A.V. v. City of St. Paul, a decision later refined to strengthen a narrow exception for when cross-burning occurs primarily as a physical threat rather than merely an expression of hatred.
Other attempts at First Amendment circumvention have been met with equally decisive rebuff. When the Reverend Jerry Falwell sued Hustler magazine publisher Larry Flynt for defamation growing out of a parody depicting Falwell’s first sexual encounter as a drunken tryst with his mother in an outhouse, a unanimous Supreme Court lectured on the history of parody as a constitutionally protected, even if cruel, form of social and political criticism.
When the South Boston Allied War Veterans, sponsor of Boston’s Saint Patrick’s Day parade, sought to exclude a gay veterans’ group from marching under its own banner, the high court unanimously held that as a private entity, even though marching in public streets, the Veterans could exclude any group marching under a banner conflicting with the parade’s socially conservative message, notwithstanding public-accommodations laws. The gay group could have its own parade but could not rain on that of the conservatives.
Despite such legal clarity, today’s most potent attacks on speech are coming, ironically, from liberal-arts colleges. Ubiquitous “speech codes” limit speech that might insult, embarrass, or “harass,” in particular, members of “historically disadvantaged” groups. “Safe spaces” and “trigger warnings” protect purportedly vulnerable students from hearing words and ideas they might find upsetting. Student demonstrators and threats of violence have forced the cancellation of controversial speakers, left and right.
It remains unclear how much campus censorship results from politically correct faculty, control-obsessed student-life administrators, or students socialized and indoctrinated into intolerance. My experience suggests that the bureaucrats are primarily, although not entirely, to blame. When sued, colleges either lose or settle, pay a modest amount, and then return to their censorious ways.
This trend threatens the heart and soul of liberal education. Eventually it could infect the entire society as these students graduate and assume influential positions. Whether a resulting flood of censorship ultimately overcomes legal protections and weakens democracy remains to be seen.
Harvey Silverglate, a Boston-based lawyer and writer, is the co-author of The Shadow University: The Betrayal of Liberty on America’s Campuses (Free Press, 1998). He co-founded the Foundation for Individual Rights in Education in 1999 and is on FIRE’s board of directors. He spent some three decades on the board of the ACLU of Massachusetts, two of those years as chairman. Silverglate taught at Harvard Law School for a semester during a sabbatical he took in the mid-1980s.
Christina Hoff Sommers

When Heather Mac Donald’s “blue lives matter” talk was shut down by a mob at Claremont McKenna College, the president of neighboring Pomona College sent out an email defending free speech. Twenty-five students shot back a response: “Heather Mac Donald is a fascist, a white supremacist . . . classist, and ignorant of interlocking systems of domination that produce the lethal conditions under which oppressed peoples are forced to live.”
Some blame the new campus intolerance on hypersensitive, over-trophied millennials. But the students who signed that letter don’t appear to be fragile. Nor do those who recently shut down lectures at Berkeley, Middlebury, DePaul, and Cal State LA. What they are is impassioned. And their passion is driven by a theory known as intersectionality.
Intersectionality is the source of the new preoccupation with microaggressions, cultural appropriation, and privilege-checking. It’s the reason more than 200 colleges and universities have set up Bias Response Teams. Students who overhear potentially “otherizing” comments or jokes are encouraged to make anonymous reports to their campus BRTs. A growing number of professors and administrators have built their careers around intersectionality. What is it exactly?
Intersectionality is a neo-Marxist doctrine that views racism, sexism, ableism, heterosexism, and all forms of “oppression” as interconnected and mutually reinforcing. Together these “isms” form a complex arrangement of advantages and burdens. A white woman is disadvantaged by her gender but advantaged by her race. A Latino is burdened by his ethnicity but privileged by his gender. According to intersectionality, American society is a “matrix of domination,” with affluent white males in control. Not only do they enjoy most of the advantages, they also determine what counts as “truth” and “knowledge.”
But marginalized identities are not without resources. According to one of intersectionality’s leading theorists, Patricia Hill Collins (a former president of the American Sociological Association), disadvantaged groups have access to deeper, more liberating truths. To find their voice, and to enlighten others to the true nature of reality, they require a safe space—free of microaggressive put-downs and imperious cultural appropriations. Here they may speak openly about their “lived experience.” Lived experience, according to intersectional theory, is a better guide to the truth than self-serving Western and masculine styles of thinking. So don’t try to refute intersectionality with logic or evidence: That only proves that you are part of the problem it seeks to overcome.
How could comfortably ensconced college students be open to a convoluted theory that describes their world as a matrix of misery? Don’t they flinch when they hear intersectional scholars like bell hooks refer to the U.S. as an “imperialist, white-supremacist, capitalist patriarchy”? Most take it in stride because such views are now commonplace in high-school history and social studies texts. And the idea that knowledge comes from lived experience rather than painstaking study and argument is catnip to many undergrads.
Silencing speech and forbidding debate is not an unfortunate by-product of intersectionality—it is a primary goal. How else do you dismantle a lethal system of oppression? As the protesting students at Claremont McKenna explained in their letter: “Free speech . . . has given those who seek to perpetuate systems of domination a platform to project their bigotry.” To the student activists, thinkers like Heather Mac Donald and Charles Murray are agents of the dominant narrative, and their speech is “a form of violence.”
It is hard to know how our institutions of higher learning will find their way back to academic freedom, open inquiry, and mutual understanding. But as long as intersectional theory goes unchallenged, campus fanaticism will intensify.
Christina Hoff Sommers is a resident scholar at the American Enterprise Institute. She is the author of several books, including Who Stole Feminism? and The War Against Boys. She also hosts The Factual Feminist, a video blog. @Chsommers
John Stossel

Yes, some college students do insane things. Some called police when they saw “Trump 2016” chalked on sidewalks. The vandals at Berkeley and the thugs who assaulted Charles Murray are disgusting. But they are a minority. And these days people fight back.
Someone usually videotapes the craziness. Yale’s “Halloween costume incident” drove away two sensible instructors, but videos mocking Yale’s snowflakes, like “Silence U,” make such abuse less likely. Groups like Young America’s Foundation (YAF) publicize censorship, and the Foundation for Individual Rights in Education (FIRE) sues schools that restrict speech.
Consciousness has been raised. On campus, the worst is over. Free speech has always been fragile. I once took cameras to Seton Hall Law School right after a professor gave a lecture on free speech. Students seemed to get the concept. Sean, now a lawyer, said, “Protect freedom for thought we hate; otherwise you never have a society where ideas clash, and we come up with the best idea.” So I asked, “Should there be any limits?” Students listed “fighting words,” “shouting fire in a theater,” malicious libel, etc.—reasonable court-approved exceptions. But then they went further. Several wanted bans on “hate” speech. “No value comes out of hate speech,” said Javier. “It inevitably leads to violence.”
“No, it doesn’t,” I argued. “Also, doesn’t hate speech bring ideas into the open, so you can better argue about them, bringing you to the truth?”
“No,” replied Floyd. “With hate speech, more speech is just violence.”
So I pulled out a big copy of the First Amendment and wrote, “exception: hate speech.”
Two students wanted a ban on flag desecration “to respect those who died to protect it.”
One wanted bans on blasphemy:
“Look at the gravity of the harm versus the value in blasphemy—the harm outweighs the value.”
Several wanted a ban on political speech by corporations because of “the potential for large corporations to improperly influence politicians.”
Finally, Jillian, also now a lawyer, wanted hunting videos banned.
“It encourages harm down the road.”
I asked her, incredulously, “You’re comfortable locking up people who make a hunting film?”
“Oh, yeah,” she said. “It’s unnecessary cruelty to feeling and sentient beings.”
So, I picked up my copy of the Bill of Rights again. After “no law . . . abridging freedom of speech,” I added: “Except hate speech, flag burning, blasphemy, corporate political speech, depictions of hunting . . . ”
That embarrassed them. “We may have gone too far,” said Sean. Others agreed. One said, “Cross out the exceptions.” Free speech survived, but it was a close call. Respect for unpleasant speech will always be thin. Then-Senator Hillary Clinton wanted violent video games banned. John McCain and Russ Feingold tried to ban political speech. Donald Trump wants new libel laws, and if you burn a flag, he tweeted, consequences might be “loss of citizenship or a year in jail!” Courts or popular opinion killed those bad ideas.
Free speech will survive, assuming those of us who appreciate it use it to fight those who would smother it.
John Stossel is a FOX News/FOX Business Network Contributor.
Warren Treadgold

Even citizens of dictatorships are free to praise the regime and to talk about the weather. The only speech likely to be threatened anywhere is the sort that offends an important and intolerant group. What is new in America today is a leftist ideology that threatens speech precisely because it offends certain important and intolerant groups: feminists and supposedly oppressed minorities.
So far this new ideology is clearly dominant only in colleges and universities, where it has become so strong that most controversies concern outside speakers invited by students, not faculty speakers or speakers invited by administrators. Most academic administrators and professors are either leftists or have learned not to oppose leftism; otherwise they would probably never have been hired. Administrators treat even violent leftist protestors with respect and are ready to prevent conservative and moderate outsiders from speaking rather than provoke protests. Most professors who defend conservative or moderate speakers argue that the speakers’ views are indeed noxious but say that students should be exposed to them to learn how to refute them. This is very different from encouraging a free exchange of ideas.
Although the new ideology began on campuses in the ’60s, it gained authority outside them largely by means of several majority decisions of the Supreme Court, from Roe (1973) to Obergefell (2015). The Supreme Court decisions that endanger free speech are based on a presumed consensus of enlightened opinion that certain rights favored by activists have the same legitimacy as rights explicitly guaranteed by the Constitution—or even more legitimacy, because the rights favored by activists are assumed to be so fundamental that they need no grounding in specific constitutional language. The Court majorities found restricting abortion rights or homosexual marriage, as large numbers of Americans wish to do, to be constitutionally equivalent to restricting black voting rights or interracial marriage. Any denial of such equivalence therefore opposes fundamental constitutional rights and can be considered hate speech, advocating psychological and possibly physical harm to groups like women seeking abortions or homosexuals seeking approval. Such speech may still be constitutionally protected, but acting upon it is not.
This ideology of forbidding allegedly offensive speech has spread to most of the Democratic Party and the progressive movement. Rather than seeing themselves as taking one side in a free debate, progressives increasingly argue (for example) that opposing abortion is offensive to women and supporting the police is offensive to blacks. Some politicians object so strongly to such speech that despite their interest in winning votes, they attack voters who disagree with them as racists or sexists. Expressing views that allegedly discriminate against women, blacks, homosexuals, and various other minorities can now be grounds for a lawsuit.
Speech that supposedly offends women or minorities has already cost some people their careers, their businesses, and their opportunities to deliver or hear speeches. Such intimidation is the intended result of an ideology that threatens free speech.
Warren Treadgold is a professor of history at Saint Louis University.
Matt Welch

Like a sullen zoo elephant rocking back and forth from leg to leg, there is an oversized paradox we’d prefer not to see standing smack in the sightlines of most of our policy debates. Day by day, even minute by minute, America simultaneously gets less free in the laboratory, but more free in the field. Individuals are constantly expanding the limits and applications of their own autonomy, even as government transcends prior restraints on how far it can reach into our intimate business.
So it is that the Internal Revenue Service can charge foreign banks with collecting taxes on U.S. citizens (therefore causing global financial institutions to shun many of the estimated 6 million-plus Americans who live abroad), even while block-chain virtuosos make illegal transactions wholly undetectable to authorities. It has never been easier for Americans to travel abroad, and it’s never been harder to enter the U.S. without showing passports, fingerprints, retinal scans, and even social-media passwords.
What’s true for banking and tourism is doubly true for free speech. Social media has given everyone not just a platform but a megaphone (as unreadable as our Facebook timelines have all become since last November). At the same time, the federal government during this unhappy 21st century has continuously ratcheted up prosecutorial pressure against leakers, whistleblowers, investigative reporters, and technology companies.
A hopeful bulwark against government encroachment unique to the free-speech field is the Supreme Court’s very strong First Amendment jurisprudence in the past decade or two. Donald Trump, like Hillary Clinton before him, may prattle on about locking up flag-burners, but Antonin Scalia and the rest of SCOTUS protected such expression back in 1990. Barack Obama and John McCain (and Hillary Clinton—she’s as bad as any recent national politician on free speech) may lament the Citizens United decision, but it’s now firmly legal to broadcast unfriendly documentaries about politicians without fear of punishment, no matter the electoral calendar.
But in this very strength lies what might be the First Amendment’s most worrying vulnerability. Barry Friedman, in his 2009 book The Will of the People, made the persuasive argument that the Supreme Court typically ratifies, post facto, where public opinion has already shifted. Today’s culture of free speech could be tomorrow’s legal framework. If so, we’re in trouble.
For evidence of free-speech slippage, just read around you. When both major-party presidential nominees react to terrorist attacks by calling to shut down corners of the Internet, and when their respective supporters are actually debating the propriety of sucker punching protesters they disagree with, it’s hard to escape the conclusion that our increasingly shrill partisan sorting is turning the very foundation of post-1800 global prosperity into just another club to be swung in our national street fight.
In the eternal cat-and-mouse game between private initiative and government control, the former is always advantaged by the latter’s fundamental incompetence. But what if the public willingly hands government the power to muzzle? It may take a counter-cultural reformation to protect this most noble of American experiments.
Matt Welch is the editor at large of Reason.
Adam J. White

Free speech is indeed under threat on our university campuses, but the threat did not begin there and it will not end there. Rather, the campus free-speech crisis is a particularly visible symptom of a much more fundamental crisis in American culture.
The problem is not that some students, teachers, and administrators reject traditional American values and institutions, or even that they are willing to menace or censor others who defend those values and institutions. Such critics have always existed, and they can be expected to use the tools and weapons at their disposal. The problem is that our country seems to produce too few students, teachers, and administrators who are willing or able to respond to them.
American families produce children who arrive on campus unprepared for, or uninterested in, defending our values and institutions. For our students who are focused primarily on their career prospects (if on anything at all), “[c]ollege is just one step on the continual stairway of advancement,” as David Brooks observed 16 years ago. “They’re not trying to buck the system; they’re trying to climb it, and they are streamlined for ascent. Hence they are not a disputatious group.”
Meanwhile, parents bear incomprehensible financial burdens to get their kids through college, without a clear sense of precisely what their kids will get out of these institutions in terms of character formation or civic virtue. With so much money at stake, few can afford for their kids to pursue more than career prospects.
Those problems are not created on campus, but they are exacerbated there, as too few college professors and administrators see their institutions as cultivators of American culture and republicanism. Confronted with activists’ rage, they offer no competing vision of higher education—let alone a compelling one.
Ironically, we might borrow a solution from the Left. Where progressives would leverage state power in service of their health-care agenda, we could do the same for education. State legislatures and governors, recognizing the present crisis, should begin to reform and renegotiate the fundamental nature of state universities. By making state universities more affordable, more productive, and more reflective of mainstream American values, they will attract students—and create incentives for competing private universities to follow suit.
Let’s hope they do it soon, for what’s at stake is much more than just free speech on campus, or even free speech writ large. In our time, as in Tocqueville’s, “the instruction of the people powerfully contributes to the support of a democratic republic,” especially “where instruction which awakens the understanding is not separated from moral education which amends the heart.” We need our colleges to cultivate—not cut down—civic virtue and our capacity for self-government. “Republican government presupposes the existence of these qualities in a higher degree than any other form,” Madison wrote in Federalist 55. If “there is not sufficient virtue among men for self-government,” then “nothing less than the chains of despotism” can restrain us “from destroying and devouring one another.”
Adam J. White is a research fellow at the Hoover Institution.
Cathy Young

A writer gets expelled from the World Science Fiction Convention for criticizing the sci-fi community’s preoccupation with racial and gender “inclusivity” while moderating a panel. An assault on free speech, or an exercise of free association? How about when students demand the disinvitation of a speaker—or disrupt the speech? When a critic of feminism gets banned from a social-media platform for unspecified “abuse”?
Such questions are at the heart of many recent free-speech controversies. There is no censorship by government; but how concerned should we be when private actors effectively suppress unpopular speech? Even in the freest society, some speech will—and should—be considered odious and banished to unsavory fringes. No one weeps for ostracized Holocaust deniers or pedophilia apologists.
But shunned speech needs to remain a narrow exception—or acceptable speech will inexorably shrink. As current Federal Communications Commission chairman Ajit Pai cautioned last year, First Amendment protections will be hollowed out unless undergirded by cultural values that support a free marketplace of ideas.
Sometimes, attacks on speech come from the right. In 2003, an Iraq War critic, reporter Chris Hedges, was silenced at Rockford College in Illinois by hecklers who unplugged the microphone and rushed the stage; some conservative pundits defended this as robust protest. Yet the current climate on the left—in universities, on social media, in “progressive” journalism, in intellectual circles—is particularly hostile to free expression. The identity-politics left, fixated on subtle oppressions embedded in everyday attitudes and language, sees speech-policing as the solution.
Is hostility to free-speech values on the rise? New York magazine columnist Jesse Singal argues that support for restrictions on public speech offensive to minorities has remained steady, and fairly high, since the 1970s. Perhaps. But the range of what qualifies as offensive—and which groups are to be shielded—has expanded dramatically. In our time, a leading liberal magazine, the New Republic, can defend calls to destroy a painting of lynching victim Emmett Till because the artist is white and guilty of “cultural appropriation,” and a feminist academic journal can be bullied into apologizing for an article on transgender issues that dares to mention “male genitalia.”
There is also a distinct trend of “bad” speech being squelched by coercion, not just disapproval. That includes the incidents at Middlebury College in Vermont and at Claremont McKenna in California, where mobs not only prevented conservative speakers—Charles Murray and Heather Mac Donald—from addressing audiences but physically threatened them as well. It also includes the use of civil-rights legislation to enforce goodthink in the workplace: Businesses may face stiff fines if they don’t force employees to call a “non-binary” co-worker by the singular “they,” even when talking among themselves.
These trends make a mockery of liberalism and enable the kind of backlash we have seen with Donald Trump’s election. But the backlash can bring its own brand of authoritarianism. It’s time to start rebuilding the culture of free speech across political divisions—a project that demands, above all, genuine openness and intellectual consistency. Otherwise it will remain, as the late, great Nat Hentoff put it, a call for “free speech for me, but not for thee.”
Cathy Young is a contributing editor at Reason.
Robert J. Zimmer

Free speech is not a natural feature of human society. Many people are comfortable with free expression for views they agree with but would withhold this privilege for those they deem offensive. People justify such restrictions by various means: the appeal to moral certainty, political agendas, demand for change, opposing change, retaining power, resisting authority, or, more recently, not wanting to feel uncomfortable. Moral certainty about one’s views or a willingness to indulge one’s emotions makes it easy to assert that others are doing true damage or creating unacceptable offense simply by presenting a fundamentally different perspective.
The resulting challenges to free expression may come in the form of laws, threats, pressure (whether societal, group, or organizational), or self-censorship in the face of a prevailing consensus. Specific forms of challenge may be more or less pronounced as circumstances vary. But the widespread temptation to consider the silencing of “objectionable” viewpoints as acceptable implies that the challenge to free expression is always present.
The United States today is no exception. We benefit from the First Amendment, which asserts that the government shall make no law abridging the freedom of speech. However, fostering a society supporting free expression involves matters far beyond the law. The ongoing and increasing demonization of one group by another creates a political and social environment conducive to suppressing speech. Even violent acts opposing speech can become acceptable or encouraged. Such behavior is evident at both political rallies and university events. Our greatest current threat to free expression is the emergence of a national culture that accepts the legitimacy of suppression of speech deemed objectionable by a segment of the population.
University and college campuses present a particularly vivid instance of this cultural shift. There have been many well-publicized episodes of speakers being disinvited or prevented from speaking because of their views. However, the problem is much deeper, as there is significant self-censorship on many campuses. Both faculty and students sometimes find themselves silenced by social and institutional pressures to conform to “acceptable” views. Ironically, the very mission of universities and colleges to provide a powerful and deeply enriching education for their students demands that they embrace and protect free expression and open discourse. Failing to do so significantly diminishes the quality of the education they provide.
My own institution, the University of Chicago, through the words and actions of its faculty and leaders since its founding, has asserted the importance of free expression and its essential role in embracing intellectual challenge. We continue to do so today as articulated by the Chicago Principles, which strongly affirm that “the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.” It is only in such an environment that universities can fulfill their own highest aspirations and provide leadership by demonstrating the value of free speech within society more broadly. A number of universities have joined us in reinforcing these values. But it remains to be seen whether the faculty and leaders of many institutions will truly stand up for these values, and in doing so provide a model for society as a whole.
Robert J. Zimmer is the president of the University of Chicago.