In a recent interview with the New Republic, Paul Warnke, the newly appointed head of the Arms Control and Disarmament Agency, responded as follows to the question of how the United States ought to react to indications that the Soviet leadership thinks it possible to fight and win a nuclear war. “In my view,” he replied, “this kind of thinking is on a level of abstraction which is unrealistic. It seems to me that instead of talking in those terms, which would indulge what I regard as the primitive aspects of Soviet nuclear doctrine, we ought to be trying to educate them into the real world of strategic nuclear weapons, which is that nobody could possibly win.”1
Even after allowance has been made for Mr. Warnke’s notoriously careless syntax, puzzling questions remain. On what grounds does he, a Washington lawyer, presume to “educate” the Soviet general staff, composed of professional soldiers who thirty years ago defeated the Wehrmacht—and, of all things, about the “real world of strategic nuclear weapons,” of which they happen to possess a considerably larger arsenal than we? Why does he consider them children who ought not to be “indulged”? And why does he chastise for what he regards as a “primitive” and unrealistic strategic doctrine not those who hold it, namely the Soviet military, but Americans who worry about their holding it?
Be all that as it may, even if Mr. Warnke refuses to take Soviet strategic doctrine seriously, it behooves us to take Mr. Warnke’s views of Soviet doctrine seriously. Not only will he head our SALT II team; his thinking, as articulated in the above statement and on other occasions, reflects all the conventional wisdom of the school of strategic theory dominant in the United States, one of whose leading characteristics is scorn for Soviet views on nuclear warfare.
American and Soviet nuclear doctrines, it needs stating at the outset, are starkly at odds. The prevalent U.S. doctrine holds that an all-out war between countries in possession of sizable nuclear arsenals would be so destructive as to leave no winner; thus resort to arms has ceased to represent a rational policy option for the leaders of such countries vis-à-vis one another. The classic dictum of Clausewitz, that war is politics pursued by other means, is widely believed in the United States to have lost its validity after Hiroshima and Nagasaki. Soviet doctrine, by contrast, emphatically asserts that while an all-out nuclear war would indeed prove extremely destructive to both parties, its outcome would not be mutual suicide: the country better prepared for it and in possession of a superior strategy could win and emerge a viable society. “There is profound erroneousness and harm in the disorienting claims of bourgeois ideologies that there will be no victor in a thermonuclear world war,” thunders an authoritative Soviet publication.2 The theme is mandatory in the current Soviet military literature. Clausewitz, buried in the United States, seems to be alive and prospering in the Soviet Union.
The predisposition of the American strategic community is to shrug off this fundamental doctrinal discrepancy. American doctrine has been and continues to be formulated and implemented by and large without reference to its Soviet counterpart. It is assumed here that there exists one and only one “rational” strategy appropriate to the age of thermonuclear weapons, and that this strategy rests on the principle of “mutual deterrence” developed in the United States some two decades ago. Evidence that the Russians do not share this doctrine, which, as its name indicates, postulates reciprocal attitudes, is usually dismissed with the explanation that they are clearly lagging behind us: given time and patient “education,” they will surely come around.
It is my contention that this attitude rests on a combination of arrogance and ignorance; that it is dangerous; and that it is high time to start paying heed to Soviet strategic doctrine, lest we end up deterring no one but ourselves. There is ample evidence that the Soviet military say what they mean, and usually mean what they say. When the recently deceased Soviet Minister of Defense, Marshal Grechko, assures us: “We have never concealed, and do not conceal, the fundamental, principal tenets of our military doctrine,”3 he deserves a hearing. This is especially true in view of the fact that Soviet military deployments over the past twenty years make far better sense in the light of Soviet doctrine, “primitive” and “unrealistic” as the latter may appear, than when reflected in the mirror of our own doctrinal assumptions.
Mistrust of the military professional, combined with a pervasive conviction, typical of commercial societies, that human conflicts are at bottom caused by misunderstanding and ought to be resolved by negotiations rather than force, has worked against serious attention to military strategy by the United States. We have no general staff; we grant no higher degrees in “military science”; and, except for Admiral Mahan, we have produced no strategist of international repute. America has tended to rely on its insularity to protect it from aggressors, and on its unique industrial capacity to help crush its enemies once war was under way. The United States is accustomed to waging wars of its own choosing and on its own terms. It lacks an ingrained strategic tradition. In the words of one historian, Americans tend to view both military strategy and the armed forces as something to be “employed intermittently to destroy occasional and intermittent threats posed by hostile powers.”4
This approach to warfare has had a number of consequences. The United States wants to win its wars quickly and with the smallest losses in American lives. It is disinclined, therefore, to act on protracted and indirect strategies, or to engage in limited wars and wars of attrition. Once it resorts to arms, it prefers to mobilize the great might of its industrial plant to produce vast quantities of the means of destruction with which in the shortest possible time to undermine the enemy’s will and ability to continue the struggle. Extreme reliance on technological superiority, characteristic of U.S. warfare, is the obverse side of America’s extreme sensitivity to its own casualties; so is indifference to the casualties inflicted on the enemy. The strategic bombing campaigns waged by the U.S. Air Force and the RAF against Germany and Japan in World War II excellently implemented this general attitude. Paradoxically, America’s dread of war and casualties pushes it to adopt some of the most brutal forms of warfare, involving the indiscriminate destruction of the enemy’s homeland with massive civilian deaths.
These facts must be borne in mind to understand the way the United States reacted to the advent of the nuclear bomb. The traditional military services—the army and the navy—whose future seemed threatened by the invention of a weapon widely believed to have revolutionized warfare and rendered conventional forces obsolete, resisted extreme claims made on behalf of the bomb. But they were unable to hold out for very long. An alliance of politicians and scientists, backed by the Air Force, soon overwhelmed them. “Victory through Air Power,” a slogan eminently suited to the American way of war, carried all before it once bombs could be devised whose explosive power was measured in kilotons and megatons.
The U.S. Army tried to argue after Hiroshima and Nagasaki that the new weapons represented no fundamental breakthrough. No revolution in warfare had occurred, its spokesmen claimed: atomic bombs were merely a more efficient species of the aerial bombs used in World War II, and in themselves no more able to ensure victory than the earlier bombs had been. As evidence, they could point to the comprehensive U.S. Strategic Bombing Surveys carried out after the war to assess the effects of the bombing campaigns. These had demonstrated that saturation raids against German and Japanese cities had neither broken the enemy’s morale nor paralyzed his armaments industry; indeed, German productivity kept on rising in the face of intensified Allied bombing, attaining its peak in the fall of 1944, on the eve of capitulation.
And when it came to horror, atomic bombs had nothing over conventional ones: as against the 72,000 fatalities caused by the atomic bomb in Hiroshima, conventional raids carried out against Tokyo and Dresden in 1945 had caused 84,000 and 135,000 fatalities, respectively. Furthermore, those who sought to minimize the impact of the new weapon argued, atomic weapons in no sense obviated the need for sizable land and sea forces. For example, General Ridgway, as Chief of Staff in the early 1950’s, maintained that war waged with tactical nuclear weapons would demand larger rather than smaller field armies since these weapons were more complicated, since they would produce greater casualties, and since the dispersal of troops required by nuclear tactics called for increasing the depth of the combat zone.5
As we shall note below, similar arguments disputing the revolutionary character of the nuclear weapon surfaced in the Soviet Union, and there promptly came to dominate strategic theory. In the United States, they were just as promptly silenced by a coalition of groups each of which it suited, for its own reasons, to depict the atomic bomb as the “absolute weapon” that had, in large measure, rendered traditional military establishments redundant and traditional strategic thinking obsolete.
Once World War II was over, the United States was most eager to demobilize its armed forces. Between June 1945 and June 1946, the U.S. Army reduced its strength from 8.3 to 1.9 million men; comparable manpower cuts were achieved in the navy and air force. Little more than a year after Germany’s surrender, the military forces of the United States, which at their peak had stood at 12.3 million men, were cut down to 3 million; two years later they declined below 2 million. The demobilization proceeded at a pace (if not in a manner) reminiscent of the dissolution of the Russian army in the revolutionary year of 1917. Nothing could have stopped this mass of humanity streaming homeward. To most Americans peacetime conditions meant reversion to a skeletal armed force.
Yet, at the same time, growing strains in the wartime alliance with the Soviet Union, and mounting evidence that Stalin was determined to exploit the chaotic conditions brought about by the collapse of the Axis powers to expand his domain, called for an effective military force able to deter the Soviets. The United States could not fulfill its role as leader of the Western coalition without an ability to project its military power globally.
In this situation, the nuclear weapon seemed to offer an ideal solution: the atomic bomb could hardly have come at a better time from the point of view of U.S. international commitments. Here was a device so frighteningly destructive, it was believed, that the mere threat of its employment would serve to dissuade would-be aggressors from carrying out their designs. Once the Air Force received the B-36, the world’s first intercontinental bomber, the United States acquired the ability to threaten the Soviet Union with devastating punishment without, at the same time, being compelled to maintain a large and costly standing army.
Reliance on the nuclear deterrent became more imperative than ever after the conclusion of the Korean war, in the course of which U.S. defense expenditures had been sharply driven up. President Eisenhower had committed himself to a policy of fiscal restraint. He wanted to cut the defense budget appreciably, and yet he had to do so without jeopardizing either America’s territorial security or its worldwide commitments. In an effort to reconcile these contradictory desires, the President and his Secretary of State, John Foster Dulles, enunciated in the winter of 1953-54 a strategic doctrine which to an unprecedented degree based the country’s security on a single weapon, the nuclear deterrent. In an address to the United Nations in December 1953, Eisenhower argued that since there was no defense against nuclear weapons (i.e., thermonuclear or hydrogen bombs, which both countries were then beginning to produce), war between the two “atomic colossi” would leave no victors and probably cause the demise of civilization. A month later, Dulles enunciated what came to be known as the doctrine of “massive retaliation.” The United States, he declared, had decided “to depend primarily upon a great capacity to retaliate, instantly, by means and at places of our choosing.” Throughout his address, Dulles emphasized the fiscal benefits of such a strategy, “more basic security at less cost.”
The Eisenhower-Dulles formula represented a neat compromise between America’s desires to reduce the defense budget and simultaneously to retain the capacity to respond to Soviet threats. The driving force was not, however, military but budgetary: behind “massive retaliation” (as well as its offspring, “mutual deterrence”) lay fiscal imperatives. In the nuclear deterrent, the United States found a perfect resolution of the conflicting demands of domestic and foreign responsibilities. For this reason alone its adoption was a foregone conclusion: the alternatives were either a vast standing army or forfeiture of status as a leading world power. The Air Force enthusiastically backed the doctrine of massive retaliation. As custodian of the atomic bomb, it had a vested interest in a defense posture of which that weapon was the linchpin. And since in the first postwar decade the intercontinental bomber was the only available vehicle for delivering the bomb against an enemy like the Soviet Union, the Air Force could claim a goodly share of the defense budget built around the retaliation idea.
Although the Soviet Union exploded a fission bomb in 1949 and announced the acquisition of a fusion (or hydrogen) bomb four years later, the United States still continued for a while longer to enjoy an effective monopoly on nuclear retaliation, since the Soviet Union lacked the means of delivering quantities of such bombs against U.S. territory. That situation changed dramatically in 1957 when the Soviets launched the Sputnik. This event, which their propaganda hailed as a great contribution to the advancement of science (and ours as proof of the failures of the American educational system!), represented in fact a significant military demonstration, namely, the ability of the Russians to deliver nuclear warheads against the United States homeland, until then immune from direct enemy threats. At this point massive retaliation ceased to make much sense and before long yielded to the doctrine of “mutual deterrence.” The new doctrine postulated that inasmuch as both the Soviet Union and the United States possessed (or would soon possess) the means of destroying each other, neither country could rationally contemplate resort to war. The nuclear stockpiles of each were an effective deterrent which ensured that they would not be tempted to launch an attack.
This doctrine was worked out in great and sophisticated detail by a bevy of civilian experts employed by various government and private organizations. These physicists, chemists, mathematicians, economists, and political scientists came to the support of the government’s fiscally driven imperatives with scientific demonstrations in favor of the nuclear deterrent. Current U.S. strategic theory was thus born of a marriage between the scientist and the accountant. The professional soldier was jilted.
A large part of the U.S. scientific community had been convinced as soon as the first atomic bomb was exploded that the nuclear weapon, which that community had conceived and helped to develop, had accomplished a complete revolution in warfare. This conclusion was reached without much reference to the analysis of the effects of atomic weapons carried out by the military, and indeed without consideration of the traditional principles of warfare. It represented, rather, an act of faith on the part of an intellectual community which held strong pacifist convictions and felt deep guilt at having participated in the creation of a weapon of such destructive power. As early as 1946, in an influential book sponsored by the Yale Institute of International Studies, under the title The Absolute Weapon, a group of civilian strategic theorists enunciated the principles of the mutual-deterrence theory which subsequently became the official U.S. strategic doctrine. The principal points made in this work may be summarized as follows:
- Nuclear weapons are “absolute weapons” in the sense that they can cause unacceptable destruction, but also and above all because there exists against them no possible defense. When the aggressor is certain to suffer the same punishment as his victim, aggression ceases to make sense. Hence war is no longer a rational policy option, as it had been throughout human history. In the words of Bernard Brodie, the book’s editor: “Thus far the chief purpose of our military establishment has been to win wars. From now on its chief purpose must be to avert them. It can have almost no other useful purpose” (p. 76).
- Given the fact that the adjective “absolute” means, by definition, incapable of being exceeded or surpassed, in the nuclear age military superiority has become meaningless. As another contributor to the book, William T.R. Fox, expressed it: “When dealing with the absolute weapon, arguments based on relative advantage lose their point” (p. 181). From which it follows that the objective of modern defense policy should be not superiority in weapons, traditionally sought by the military, but “sufficiency”: just enough nuclear weapons to be able to threaten a potential aggressor with unacceptable retaliation—in other words, an “adequate” deterrent, no more, no less.
- Nuclear deterrence can become effective only if it restrains mutually—i.e., if the United States and the Soviet Union each can deter the other from aggression. An American monopoly on nuclear weapons would be inherently destabilizing, both because it could encourage the United States to launch a nuclear attack, and, at the same time, by making the Russians feel insecure, cause them to act aggressively. “Neither we nor the Russians can expect to feel even reasonably safe unless an atomic attack by one were certain to unleash a devastating atomic counterattack by the other,” Arnold Wolfers maintained (p. 135). In other words, to feel secure the United States actually required the Soviet Union to have the capacity to destroy it.
Barely one year after Hiroshima and three years before the Soviets were to acquire a nuclear bomb, The Absolute Weapon articulated the philosophical premises underlying the mutual-deterrence doctrine which today dominates U.S. strategic thinking. Modern strategy, in the opinion of its contributors, involved preventing wars rather than winning them, securing sufficiency in decisive weapons rather than superiority, and even ensuring the potential enemy’s ability to strike back. Needless to elaborate, these principles ran contrary to all the tenets of traditional military theory, which had always called for superiority in forces and viewed the objective of war to be victory. But then, if one had decided that the new weapons marked a qualitative break with all the weapons ever used in combat, one could reasonably argue that past military experience, and the theory based on it, had lost relevance. Implicit in these assumptions was the belief that Clausewitz and his celebrated formula proclaiming war an extension of politics were dead. Henry Kissinger, who can always be counted upon to utter commonplaces in the tone of prophetic revelation, announced Clausewitz’s obituary nearly twenty years after The Absolute Weapon had made the point, in these words: “The traditional mode of military analysis which saw in war a continuation of politics but with its own appropriate means is no longer applicable.”6
American civilian strategists holding such views gained the dominant voice in the formulation of U.S. strategic doctrine with the arrival in Washington in 1961 of Robert S. McNamara as President Kennedy’s Secretary of Defense. A prominent business executive specializing in finance and accounting, McNamara applied to the perennial problem of American strategy—how to maintain a credible global military posture without a large and costly military establishment—the methods of cost analysis. These had first been applied by the British during World War II under the name “operations research” and subsequently came to be adopted here as “systems analysis.” Weapons procurement was to be tested and decided by the same methods used to evaluate returns on investment in ordinary business enterprises. Mutual deterrence was taken for granted: the question of strategic posture reduced itself to the issue of which weapons systems would provide the United States with effective deterrence at the least expense. Under McNamara the procurement of weapons, decided on the basis of cost effectiveness, came in effect to direct strategy, rather than the other way around, as had been the case through most of military history. It is at this point that applied science in partnership with budgetary accountancy—a partnership which had developed U.S. strategic theory—also took charge of U.S. defense policy.
As worked out in the 1960’s, and still in effect today, American nuclear theory rests on these propositions: All-out nuclear war is not a rational policy option, since no winner could possibly emerge from such a war. Should the Soviet Union nevertheless launch a surprise attack on the United States, the latter would emerge with enough of a deterrent to devastate the Soviet Union in a second strike. Since such a retaliatory attack would cost the Soviet Union millions of casualties and the destruction of all its major cities, a Soviet first strike is most unlikely. Meaningful defenses against a nuclear attack are technically impossible and psychologically counterproductive; nuclear superiority is meaningless.
In accord with these assumptions, the United States in the mid-1960’s unilaterally froze its force of ICBM’s at 1,054 and dismantled nearly all its defenses against enemy bombers. Civil defense was all but abandoned, as was in time the attempt to create an ABM system which held out the possibility of protecting American missile sites against a surprise enemy attack. The Russians were watched benignly as they moved toward parity with the United States in the number of intercontinental launchers, and then proceeded to attain numerical superiority. The expectation was that as soon as the Russians felt themselves equal to the United States in terms of effective deterrence, they would stop further deployments. The frenetic pace of the Soviet nuclear build-up was explained first on the ground that the Russians had a lot of catching up to do, then on the ground that they had to consider the Chinese threat, and finally on the ground that they are inherently a very insecure people and should be allowed an edge in deterrent capability.
Whether mutual deterrence deserves the name of a strategy at all is a real question. As one student of the subject puts it:
Although commonly called a “strategy,” “assured destruction” was by itself an antithesis of strategy. Unlike any strategy that ever preceded it throughout the history of armed conflict, it ceased to be useful precisely where military strategy is supposed to come into effect: at the edge of war. It posited that the principal mission of the U.S. military under conditions of ongoing nuclear operations against [the continental United States] was to shut its eyes, grit its teeth, and reflexively unleash an indiscriminate and simultaneous reprisal against all Soviet aim points on a preestablished target list. Rather than deal in a considered way with the particular attack on hand so as to minimize further damage to the United States and maximize the possibility of an early settlement on reasonably acceptable terms, it had the simple goal of inflicting punishment for the Soviet transgression. Not only did this reflect an implicit repudiation of political responsibility, it also risked provoking just the sort of counterreprisal against the United States that a rational wartime strategy should attempt to prevent.7
I cite this passage merely to indicate that the basic postulates of U.S. nuclear strategy are not as self-evident and irrefutable as its proponents seem to believe; and that, therefore, their rejection by the Soviet military is not, in and of itself, proof that Soviet thinking is “primitive” and devoid of a sense of realism.
The principal differences between American and Soviet strategies are traceable, first, to different conceptions of the role of conflict and its inevitable concomitant, violence, in human relations; and, second, to the different functions which the military establishment performs in the two societies.
In the United States, the consensus of the educated and affluent holds all recourse to force to be the result of an inability or an unwillingness to apply rational analysis and patient negotiation to disagreements: the use of force is prima facie evidence of failure. Some segments of this class not only refuse to acknowledge the existence of violence as a fact of life, they have even come to regard fear—the organism’s biological reaction to the threat of violence—as inadmissible. “The notion of being threatened has acquired an almost class connotation,” Daniel P. Moynihan notes in connection with the refusal of America’s “sophisticated” elite to accept the reality of a Soviet threat. “If you’re not very educated, you’re easily frightened. And not being ever frightened can be a formula for self-destruction.”8
Now this entire middle-class, commercial, essentially Protestant ethos is absent from Soviet culture, whose roots feed on another kind of soil, and which has had for centuries to weather rougher political climes. The Communist revolution of 1917, by removing from positions of influence what there was of a Russian bourgeoisie (a class Lenin was prone to define as much by cultural as by socioeconomic criteria), in effect installed in power the muzhik, the Russian peasant. And the muzhik had been taught by long historical experience that cunning and coercion alone ensured survival: one employed cunning when weak, and cunning coupled with coercion when strong. Not to use force when one had it indicated some inner weakness. Marxism, with its stress on class war as a natural condition of mankind so long as the means of production were privately owned, has merely served to reinforce these ingrained convictions. The result is an extreme Social-Darwinist outlook on life which today permeates the Russian elite as well as the Russian masses, and which only the democratic intelligentsia and the religious dissenters oppose to any significant extent.
The Soviet ruling elite regards conflict and violence as natural regulators of all human affairs: wars between nations, in its view, represent only a variant of wars between classes, recourse to the one or the other being dependent on circumstances. A conflictless world will come into being only when the socialist (i.e., Communist) mode of production spreads across the face of the earth.
The Soviet view of armed conflict can be illustrated with another citation from the writings of the late Marshal Grechko, one of the most influential Soviet military figures of the post-World War II era. In his principal treatise, Grechko refers to the classification of wars formulated in 1972 by his U.S. counterpart, Melvin Laird. Laird divided wars according to engineering criteria—in terms of weapons employed and the scope of the theater of operations—to come up with four principal types of wars: strategic-nuclear, theater-nuclear, theater-conventional, and local-conventional. Dismissing this classification as inadequate, Grechko applies quite different standards to come up with his own typology:
Proceeding from the fundamental contradictions of the contemporary era, one can distinguish, according to sociopolitical criteria, the following types of wars: (1) wars between states (coalitions) of two contrary social systems—capitalist and socialist; (2) civil wars between the proletariat and the bourgeoisie, or between the popular masses and the forces of the extreme reaction supported by the imperialists of other countries; (3) wars between imperialist states and the peoples of colonial and dependent states fighting for their freedom and independence; and (4) wars among capitalist states.9
This passage contains many interesting implications. For instance, it makes no allowance for war between two Communist countries, like the Soviet Union and China, though such a war seems greatly to preoccupy the Soviet leadership. Nor does it provide for war pitting a coalition of capitalist and Communist states against another capitalist state, such as actually occurred during World War II when the United States and the Soviet Union joined forces against Germany. But for our purposes, the most noteworthy aspect of Grechko’s system of classification is the notion that social and national conflicts within the capitalist camp (that is, in all countries not under Communist control) are nothing more than a particular mode of class conflict of which all-out nuclear war between the superpowers is a conceivable variant. In terms of this typology, an industrial strike in the United States, the explosion of a terrorist bomb in Belfast or Jerusalem, the massacre by Rhodesian guerrillas of a black village or a white farmstead, differ from nuclear war between the Soviet Union and the United States only in degree, not in kind. All such conflicts are calibrations on the extensive scale by which to measure the historic conflict which pits Communism against capitalism and imperialism. Such conflicts are inherent in the stage of human development which precedes the final abolition of classes.
Middle-class American intellectuals simply cannot assimilate this mentality, so alien is it to their experience and view of human nature. Confronted with the evidence that the most influential elements in the Soviet Union do indeed hold such views, they prefer to dismiss the evidence as empty rhetoric, and to regard with deep suspicion the motives of anyone who insists on taking it seriously. Like some ancient Oriental despots, they vent their wrath on the bearers of bad news. How ironic that the very people who have failed so dismally to persuade American television networks to eliminate violence from their programs, nevertheless feel confident that they can talk the Soviet leadership into eliminating violence from its political arsenal!
Solzhenitsyn grasped the issue more profoundly as well as more realistically when he defined the antithesis of war not as the absence of armed conflict between nations—i.e., “peace” in the conventional meaning of the term—but as the absence of all violence, internal as well as external. His comprehensive definition, drawn from his Soviet experience, obversely matches the comprehensive Soviet definition of warfare.
We know surprisingly little about the individuals and institutions whose responsibility it is to formulate Soviet military doctrine. The matter is handled with the utmost secrecy, which conceals from the eyes of outsiders the controversies that undoubtedly surround it. Two assertions, however, can be made with confidence.
First, because of Soviet adherence to the Clausewitzian principle that warfare is always an extension of politics—i.e., subordinate to overall political objectives (about which more below)—Soviet military planning is carried out under the close supervision of the country’s highest political body, the Politburo. Thus military policy is regarded as an intrinsic element of “grand strategy,” whose arsenal also includes a variety of non-military instrumentalities.
Second, the Russians regard warfare as a science (nauka, in the German sense of Wissenschaft). Instruction in the subject is offered at a number of university-level institutions, and several hundred specialists, most of them officers on active duty, have been accorded the Soviet equivalent of the Ph.D. in military science. This means that Soviet military doctrine is formulated by full-time specialists: it is as much the exclusive province of the certified military professional as medicine is that of the licensed physician. The civilian strategic theorist who since World War II has played a decisive role in the formulation of U.S. strategic doctrine is not in evidence in the Soviet Union, and probably performs at best a secondary, consultative function.
Its penchant for secrecy notwithstanding, the Soviet military establishment does release a large quantity of unclassified literature in the form of books, specialist journals, and newspapers. Of the books, the single most authoritative work at present is unquestionably the collective study, Military Strategy, edited by the late Marshal V. D. Sokolovskii, which summarizes Soviet warfare doctrine of the nuclear age.10 Although published fifteen years ago, Sokolovskii’s volume remains the only Soviet strategic manual publicly available—a solitary monument confronting a mountain of Western works on strategy. A series called “The Officer’s Library” brings out important specialized studies.11 The newspaper Krasnaia zvezda (“Red Star”) carries important theoretical articles which, however, vie for the reader’s attention with heroic pictures of Soviet troops storming unidentified beaches and firing rockets at unnamed foes. The flood of military works has as its purpose indoctrination, an objective to which the Soviet high command attaches the utmost importance: indoctrination both in a psychological sense, designed to persuade the Soviet armed forces that they are invincible, and in a technical sense, to impress upon the officers and ranks the principles of Soviet tactics and the art of operations.
To a Western reader, most of this printed matter is unadulterated rubbish. It not only lacks the sophistication and intellectual elegance which he takes for granted in works on problems of nuclear strategy; it is also filled with a mixture of pseudo-Marxist jargon and the crudest kind of Russian jingoism. Which is one of the reasons why it is hardly ever read in the West, even by people whose business it is to devise a national strategy against a possible Soviet threat. By and large the material is ignored. Two examples must suffice. Strategy in the Missile Age, an influential work by Bernard Brodie, one of the pioneers of U.S. nuclear doctrine, which originally came out in 1959 and was republished in 1965, makes only a few offhand allusions to Soviet nuclear strategy, and then either to note with approval that it is “developing along lines familiar in the United States” (p. 171), or else, when the Russians prefer to follow their own track, to dismiss it as a “ridiculous and reckless fantasy” (p. 215). Secretary of Defense McNamara perused Sokolovskii and “remained unimpressed,” for nowhere in the book did he find “a sophisticated analysis of nuclear war.”12
The point to bear in mind, however, is that Soviet military literature, like all Soviet literature on politics broadly defined, is written in an elaborate code language. Its purpose is not to dazzle with originality and sophistication but to convey to the initiates messages of grave importance. Soviet policy-makers may speak to one another plainly in private, but when they take pen in hand they invariably resort to an “Aesopian” language, a habit acquired when the forerunner of today’s Communist party had to function in the Czarist underground. Buried in the flood of seemingly meaningless verbiage, nuggets of precious information on Soviet perceptions and intentions can more often than not be unearthed by a trained reader. In 1958-59 two American specialists employed by the Rand Corporation, Raymond L. Garthoff and Herbert S. Dinerstein, by skillfully deciphering Soviet literature on strategic problems and then interpreting this information against the background of the Soviet military tradition, produced a remarkably prescient forecast of actual Soviet military policies of the 1960’s and 1970’s.13 Unfortunately, their findings were largely ignored by U.S. strategists from the scientific community who had convinced themselves that there was only one strategic doctrine appropriate to the age of nuclear weapons, and that therefore evidence indicating that the Soviets were adopting a different strategy could be safely disregarded.
This predisposition helps explain why U.S. strategists persistently ignored signs indicating that those who had control of Soviet Russia’s nuclear arsenal were not thinking in terms of mutual deterrence. The calculated nonchalance with which Stalin at Potsdam reacted to President Truman’s confidences about the American atomic bomb was a foretaste of things to come. Initial Soviet reactions to Hiroshima and Nagasaki were similar in tone: the atomic weapon had not in any significant manner altered the science of warfare or rendered obsolete the principles which had guided the Red Army in its victorious campaigns against the Wehrmacht. These basic laws, known as the five “constant principles” that win wars, had been formulated by Stalin in 1942. They were, in declining order of importance: “stability of the home front,” followed by morale of the armed forces, quantity and quality of the divisions, military equipment, and, finally, ability of the commanders.14 There was no such thing as an “absolute weapon”—weapons altogether occupied a subordinate place in warfare; defense against atomic bombs was entirely possible.15 This was disconcerting, to be sure, but it could be explained away as a case of sour grapes. After all, the Soviet Union had no atomic bomb, and it was not in its interest to seem overly impressed by a weapon on which its rival enjoyed a monopoly.16
In September 1949 the Soviet Union exploded a nuclear device. Disconcertingly, its attitude to nuclear weapons did not change, at any rate not in public. For the remaining four years, until Stalin’s death, the Soviet high command continued to deny that nuclear weapons required fundamental revisions of accepted military doctrine. With a bit of good will, this obduracy could still have been rationalized: for although the Soviet Union now had the weapon, it still lacked adequate means of delivering it across continents insofar as it had few intercontinental bombers (intercontinental rockets were regarded in the West as decades away). The United States, by contrast, possessed not only a fleet of strategic bombers but also numerous air bases in countries adjoining Soviet Russia. So once again one could find a persuasive explanation of why the Russians refused to see the light. It seemed reasonable to expect that as soon as they had acquired both a stockpile of atomic bombs and a fleet of strategic bombers, they would adjust their doctrine to conform with the American.
Events which ensued immediately after Stalin’s death seemed to lend credence to these expectations. Between 1953 and 1957 a debate took place in the pages of Soviet publications which, for all its textual obscurity, indicated that a new school of Soviet strategic thinkers had arisen to challenge the conventional wisdom. The most articulate spokesman of this new school, General N. Talenskii, argued that the advent of nuclear weapons, especially the hydrogen bomb which had just appeared on the scene, did fundamentally alter the nature of warfare. The sheer destructiveness of these weapons was such that one could no longer talk of a socialist strategy automatically overcoming the strategy of capitalist countries: the same rules of warfare now applied to both social systems. For the first time doubt was cast on the immutability of Stalin’s “five constant principles.” In the oblique manner in which Soviet debates on matters of such import are invariably conducted, Talenskii was saying that perhaps, after all, war had ceased to represent a viable policy option. More important yet, speeches delivered by leading Soviet politicians in the winter of 1953-54 seemed to support the thesis advanced by President Eisenhower in his United Nations address of December 1953 that nuclear war could spell the demise of civilization. In an address delivered on March 12, 1954, and reported the following day in Pravda, Stalin’s immediate successor, Georgii Malenkov, echoed Eisenhower’s sentiments: a new world war would unleash a holocaust which “with the present means of warfare, means the destruction of world civilization.”17
This assault on its traditional thinking—and, obliquely, on its traditional role—engendered a furious reaction from the Soviet military establishment. The Red Army was not about to let itself be relegated to the status of a militia whose principal task was averting war rather than winning it. Malenkov’s unorthodox views on war almost certainly contributed to his downfall; at any rate, his dismissal as Premier in February 1955 was accompanied by a barrage of press denunciations of the notion that war had become unfeasible. There are strong indications that Malenkov’s chief rival, Khrushchev, capitalized on the discontent of the military to form with it an alliance with whose help he eventually rode to power. The successful military counterattack seems to have been led by the World War II hero, Marshal Georgii Zhukov, whom Khrushchev made his Minister of Defense and brought into the Presidium. The guidelines of Soviet nuclear strategy, still in force today, were formulated during the first two years of Khrushchev’s tenure (1955-57), under the leadership of Zhukov himself. They resulted in the unequivocal rejection of the notion of the “absolute weapon” and all the theories that U.S. strategists had deduced from it. Stalin’s view of the military “constants” was implicitly reaffirmed. Thus the re-Stalinization of Soviet life, so noticeable in recent years, manifested itself first in military doctrine.
To understand this unexpected turn of events—so unexpected that most U.S. military theorists thus far have not been able to come to terms with it—one must take into account the function performed by the military in the Soviet system.
Unlike the American government, the Soviet government needs and wants a large military force. It has many uses for it, at home and abroad. As a regime which rests neither on tradition nor on a popular mandate, it sees in its military the most effective manifestation of government omnipotence, the very presence of which discourages any serious opposition from raising its head in the country as well as in its dependencies. It is, after all, the Red Army that keeps Eastern Europe within the Soviet camp. Furthermore, since the regime is driven by ideology, internal politics, and economic exigencies steadily to expand, it requires an up-to-date military force capable of seizing opportunities which may present themselves along the Soviet Union’s immensely long frontier or even beyond. The armed forces of the Soviet Union thus have much more to do than merely protect the country from potential aggressors: they are the mainstay of the regime’s authority and a principal instrumentality of its internal and external policies. Given the shaky status of the Communist regime internally, the declining appeal of its ideology, and the non-competitiveness of its goods on world markets, a persuasive case can even be made that, ruble for ruble, expenditures on the military represent for the Soviet leadership an excellent and entirely “rational” capital investment.
For this reason alone (and there were other compelling reasons too, as we shall see), the Soviet leadership could not accept the theory of mutual deterrence.18 After all, this theory, pushed to its logical conclusion, means that a country can rely for its security on a finite number of nuclear warheads and on an appropriate quantity of delivery vehicles; so that, apart perhaps from some small mobile forces needed for local actions, the large and costly traditional military establishments can be disbanded. Whatever the intrinsic military merits of this doctrine may be, its broader implications are entirely unacceptable to a regime like the Soviet one, for which military power serves not only (or even primarily) to deter external aggressors, but also and above all to ensure internal stability and permit external expansion. Thus, ultimately, it is political rather than strictly strategic or fiscal considerations that may be said to have determined Soviet reactions to nuclear weapons and shaped the content of Soviet nuclear strategy. As a result, Soviet advocates of mutual deterrence like Talenskii were gradually silenced. By the mid-1960’s the country adopted what in military jargon is referred to as a “war-fighting” and “war-winning” doctrine.
Given this fundamental consideration, the rest followed with a certain inexorable logic. The formulation of Soviet strategy in the nuclear age was turned over to the military, who are in complete control of the Ministry of Defense. (Two American observers describe this institution as a “uniformed empire.”19) The Soviet General Staff had only recently emerged from winning one of the greatest wars in history. Immensely confident of their own abilities, scornful of what they perceived as the minor contribution of the United States to the Nazi defeat, inured to casualties running into tens of millions, the Soviet generals tackled the task with relish. Like their counterparts in the U.S. Army, they were professionally inclined to denigrate the exorbitant claims made on behalf of the new weapon by strategists drawn from the scientific community; unlike the Americans, however, they did not have to pay much heed to the civilians. In its essentials, Soviet nuclear doctrine as it finally emerged is not all that different from what American doctrine might have been had military and geopolitical rather than fiscal considerations played the decisive role here as they did there.
Soviet military theorists reject the notion that technology (i.e., weapons) decides strategy. They perceive the relationship to be the reverse: strategic objectives determine the procurement and application of weapons. They agree that the introduction of nuclear weapons has profoundly affected warfare, but deny that nuclear weapons have altered its essential quality. The novelty of nuclear weapons consists not in their destructiveness—that is, after all, a matter of degree, and a country like the Soviet Union which, as Soviet generals proudly boast, suffered in World War II the loss of over 20 million lives, as well as the destruction of 1,710 towns, over 70,000 villages, and some 32,000 industrial establishments, to win the war and emerge as a global power, is not to be intimidated by the prospect of destruction.20 Rather, the innovation consists of the fact that nuclear weapons, coupled with intercontinental missiles, can by themselves carry out strategic missions which previously were accomplished only by means of prolonged tactical operations:
Nuclear missiles have altered the relationship of tactical, operational, and strategic acts of the armed conflict. If in the past the strategic end-result was secured by a succession of sequential, most often long-term, efforts [and] comprised the sum of tactical and operational successes, strategy being able to realize its intentions only with the assistance of the art of operations and tactics, then today, by means of powerful nuclear strikes, strategy can attain its objectives directly.21
In other words, military strategy, rather than a casualty of technology, has, thanks to technology, become more central than ever. By adopting this view, Soviet theorists believe themselves to have adapted modern technological innovations in weaponry to the traditions of military science.
Implicit in all this is the idea that nuclear war is feasible and that the basic function of warfare, as defined by Clausewitz, remains permanently valid, whatever breakthroughs may occur in technology. “It is well known that the essential nature of war as a continuation of politics does not change with changing technology and armament.”22 This code phrase from Sokolovskii’s authoritative manual was certainly hammered out with all the care that in the United States is lavished on an amendment to the Constitution. It spells the rejection of the whole basis on which U.S. strategy has come to rest: thermonuclear war is not suicidal, it can be fought and won, and thus resort to war must not be ruled out.
In addition (though we have no solid evidence to this effect) it seems likely that Soviet strategists reject the mutual-deterrence theory on several technical grounds of a kind that have been advanced by American critics of this theory like Albert Wohlstetter, Herman Kahn, and Paul Nitze.
- Mutual deterrence postulates a certain finality about weapons technology: it does not allow for further scientific breakthroughs that could result in the deterrent’s becoming neutralized. On the offensive side, for example, there is the possibility of significant improvements in the accuracy of ICBM’s or of striking innovations in anti-submarine warfare; on the defensive side, satellites, which are essential for early warning of an impending attack, could be blinded, and lasers could be put to use to destroy incoming missiles.
- Mutual deterrence constitutes “passive defense” which usually leads to defeat. It threatens punishment to the aggressor after he has struck, which may or may not deter him from striking; it cannot prevent him from carrying out his designs. The latter objective requires the application of “active defense”—i.e., nuclear preemption.
- The threat of a second strike, which underpins the mutual-deterrence doctrine, may prove ineffectual. The side that has suffered the destruction of the bulk of its nuclear forces in a surprise first strike may find that it has so little of a deterrent left, and the enemy so much, that the price of striking back in retaliation would be the exposure of its own cities to total destruction by the enemy’s third strike. The result could be a paralysis of will, and capitulation instead of a second strike.
Soviet strategists make no secret of the fact that they regard the U.S. doctrine (with which, judging by the references in their literature, they are thoroughly familiar) as second-rate. In their view, U.S. strategic doctrine is obsessed with a single weapon which it “absolutizes” at the expense of everything else that military experience teaches soldiers to take into account. Its philosophical foundations are “idealism” and “metaphysics”—i.e., currents which engage in speculative discussions of objects (in this case, weapons) and of their “intrinsic” qualities, rather than relying on pragmatic considerations drawn from experience.23
Since the mid-1960’s, the proposition that thermonuclear war would be suicidal for both parties has been used by the Russians largely as a commodity for export. Its chief proponents include staff members of the Moscow Institute of the USA and Canada, and Soviet participants at Pugwash, Dartmouth, and similar international conferences, who are assigned the task of strengthening the hand of anti-military intellectual circles in the West. Inside the Soviet Union, such talk is generally denounced as “bourgeois pacifism.”24
In the Soviet view, a nuclear war would be total and go beyond formal defeat of one side by the other: “War must not simply [be] the defeat of the enemy, it must be his destruction. This condition has become the basis of Soviet military strategy,” according to the Military-Historical Journal.25 Limited nuclear war, flexible response, escalation, damage limiting, and all the other numerous refinements of U.S. strategic doctrine find no place in its Soviet counterpart (although, of course, they are taken into consideration in Soviet operational planning).
For Soviet generals the decisive influence on the formulation of nuclear doctrine was the experience of World War II, with which, for understandable reasons, they are virtually obsessed. This experience they seem to have supplemented with knowledge gained from professional scrutiny of the record of Nazi and Japanese offensive operations, as well as the balance sheet of British and American strategic-bombing campaigns. More recently, the lessons of the Israeli-Arab wars of 1967 and 1973, in which they indirectly participated, seem also to have impressed Soviet strategists, reinforcing previously held convictions. They also follow the Western literature, tending to side with the critics of mutual deterrence. The result of all these diverse influences is a nuclear doctrine which assimilates into the main body of the Soviet military tradition the technical implications of nuclear warfare without surrendering any of the fundamentals of this tradition.
The strategic doctrine adopted by the USSR over the past two decades calls for a policy diametrically opposite to that adopted in the United States by the predominant community of civilian strategists: not deterrence but victory, not sufficiency in weapons but superiority, not retaliation but offensive action. The doctrine has five related elements: (1) preemption (first strike), (2) quantitative superiority in arms, (3) counterforce targeting, (4) combined-arms operations, and (5) defense. We shall take up each of these elements in turn.
Preemption. The costliest lesson which the Soviet military learned in World War II was the importance of surprise. Because Stalin thought he had an understanding with Hitler, and because he was afraid to provoke his Nazi ally, he forbade the Red Army to mobilize for the German attack of which he had had ample warning. As a result of this strategy of “passive defense,” Soviet forces suffered frightful losses and were nearly defeated. This experience etched itself very deeply on the minds of the Soviet commanders: in their theoretical writings no point is emphasized more consistently than the need never again to allow themselves to be caught in a surprise attack. Nuclear weapons make this requirement especially urgent because, according to Soviet theorists, the decision in a nuclear conflict in all probability will be arrived at in the initial hours. In a nuclear war the Soviet Union, therefore, would not again have at its disposal the time which it enjoyed in 1941-42 to mobilize reserves for a victorious counteroffensive after absorbing devastating setbacks.
Given the rapidity of modern warfare (an ICBM can traverse the distance between the USSR and the United States in thirty minutes), not to be surprised by the enemy means, in effect, to inflict surprise on him. Once the latter’s ICBM’s have left their silos, once his bombers have taken to the air and his submarines to sea, a counterattack is greatly reduced in effectiveness. These considerations call for a preemptive strike. Soviet theorists draw an insistent, though to an outside observer very fuzzy, distinction between “preventive” and “preemptive” attacks. They claim that the Soviet Union will never start a war—i.e., it will never launch a preventive attack—but once it had concluded that an attack upon it was imminent, it would not hesitate to preempt. They argue that historical experience indicates outbreaks of hostilities are generally preceded by prolonged diplomatic crises and military preparations which signal to an alert command an imminent threat and the need to act. Though the analogy is not openly drawn, the action which Soviet strategists seem to have in mind is that taken by the Israelis in 1967, a notably successful example of “active defense” involving a well-timed preemptive strike. (In 1973, by contrast, the Israelis pursued the strategy of “passive defense,” with unhappy consequences.) The Soviet doctrine of nuclear preemption was formulated in the late 1950’s, and described at the time by Garthoff and Dinerstein in the volumes cited above.
A corollary of the preemption strategy holds that a country’s armed forces must always be in a state of high combat readiness so as to be able to go over to active operations with the least delay. Nuclear warfare grants no time for mobilization. Stress on the maintenance of a large ready force is one of the constant themes of Soviet military literature. It helps explain the immense land forces which the USSR maintains at all times and equips with the latest weapons as they roll off the assembly lines.
Quantitative superiority. There is no indication that the Soviet military share the view prevalent in the U.S. that in the nuclear age numbers of weapons cease to matter once a certain quantity has been attained. They like to pile up all sorts of weapons, new on top of old, throwing away nothing that might come in handy. This propensity to accumulate hardware is usually dismissed by Western observers with contemptuous references to a Russian habit dating back to Czarist days. It is not, however, as mindless as it may appear. For although Soviet strategists believe that the ultimate outcome of a nuclear war will be decided in the initial hours of the conflict, they also believe that such a war will be of long duration: to consummate victory—that is, to destroy the enemy—may take months or even longer. Under these conditions, the possession of a large arsenal of nuclear delivery systems, as well as of other types of weapons, may well prove to be of critical importance. Although prohibited by self-imposed limitations agreed upon in 1972 at SALT I from exceeding a set number of intercontinental ballistic-missile launchers, the Soviet Union is constructing large numbers of so-called Intermediate Range Ballistic Missile launchers (i.e., launchers of less than intercontinental range), which are not covered by SALT. Some of these could be rapidly converted into regular intercontinental launchers, should the need arise.26
Reliance on quantity has another cause, namely, the peculiarly destructive capability of modern missiles equipped with Multiple Independently-targetable Reentry Vehicles, or MIRV’s. The nose cone of a MIRVed missile (both super-powers possess such weapons) splits in mid-course like a peapod, releasing several warheads, each aimed at a separate target. A single missile equipped with three MIRV’s of sufficient accuracy, yield, and reliability can destroy up to three of the enemy’s missiles—provided, of course, it catches them in their silos, before they have been fired (which adds another inducement to preemption). Theoretically, assuming high accuracy and reliability, should the entire American force of 1,054 ICBM’s be MIRVed (so far only half of them have been), it would take only 540 American ICBM’s, each with three MIRV’s, to attack the entire Soviet force of 1,618 ICBM’s. Such a strike would leave the United States with 514 ICBM’s in reserve and the USSR with few survivors. Unlikely as the possibility of an American preemptive strike may be, Soviet planners apparently prefer to take no chances; they want to be in a position rapidly to replace ICBM’s lost to a sudden enemy first strike. Conversely, given its doctrine of preemption, the Soviet Union wants to be in a position to destroy the largest number of American missiles with the smallest number of its own, so as to be able to face down the threat of a U.S. second strike. Its most powerful ICBM, the SS-18, is said to have been tested with up to 10 MIRV’s (compared with the 3 of the Minuteman-3, America’s only MIRVed ICBM). It has been estimated that 300 of these giant Soviet missiles, a number authorized under SALT I, could seriously threaten the American arsenal of ICBM’s.
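The arithmetic behind these figures is worth setting out explicitly. It is a back-of-envelope check only, resting on the figures just cited and on the stated (and generous) assumption of near-perfect accuracy and reliability:

$$540 \times 3 = 1{,}620 \ \text{warheads} \;\geq\; 1{,}618 \ \text{Soviet ICBM's}, \qquad 1{,}054 - 540 = 514 \ \text{American ICBM's held in reserve}.$$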
Counterforce. Two terms commonly used in the jargon of modern strategy are “counterforce” and “countervalue.” Both refer to the nature of the target of a strategic nuclear weapon. Counterforce means that the principal objective of one’s nuclear missiles is the enemy’s forces—i.e., his launchers as well as the related command and communication facilities. Countervalue means that one’s principal targets are objects of national “value,” namely the enemy’s population and industrial centers.
Current U.S. strategy, predominantly defensive (retaliatory) in character, is naturally predisposed to a countervalue targeting policy. The central idea of the U.S. strategy of deterrence holds that should the Soviet Union dare to launch a surprise first strike at the United States, the latter would use its surviving missiles to lay waste Soviet cities. It is taken virtually for granted in this country that no nation would consciously expose itself to the risk of having its urban centers destroyed—an assumption which derives from British military theory of the 1920’s and 1930’s, and which influenced the RAF to concentrate on strategic bombing raids on German cities in World War II.
The Soviet high command has never been much impressed with the whole philosophy of countervalue strategic bombing, and during World War II it resisted the temptation to attack German cities. This negative attitude toward the bombing of civilians is conditioned not by humanitarian considerations but by cold, professional assessment of the effects of that kind of strategic bombing as revealed by the Allied Strategic Bombing Surveys. The findings of these surveys were largely ignored in the United States, but they seem to have made a strong impression in the USSR. Not being privy to the internal discussions of the Soviet military, we can do no better than consult the writings of an eminent British scientist, P.M.S. Blackett, noted for his pro-Soviet sympathies, whose remarkable book Fear, War and the Bomb, published in 1948-49, indicated with great prescience the lines which Soviet strategic thinking was subsequently to take.
Blackett, who won the Nobel Prize for Physics in 1948, had worked during the war in British Operations Research. He concluded that strategic bombing was ineffective, and wrote his book as an impassioned critique of the idea of using atomic weapons as a strategic deterrent. Translating the devastation wrought upon Germany into nuclear terms, he calculated that it represented the equivalent of the destruction that would have been caused by 400 “improved” Hiroshima-type atomic bombs. Yet despite such punishment, Nazi Germany did not collapse. Given the much greater territory of the Soviet Union and a much lower population density, he argued, it would require “thousands” of atomic bombs to produce decisive results in a war between America and Russia.27 Blackett minimized the military effects of the atomic bombing on Japan. He recalled that in Hiroshima trains were operating forty-eight hours after the blast; that industries were left almost undamaged and could have been back in full production within a month; and that if the most elementary civil-defense precautions had been observed, civilian casualties would have been substantially reduced. Blackett’s book ran so contrary to prevailing opinion and was furthermore so intemperately anti-American in tone that its conclusions were rejected out of hand in the West.
Too hastily, it appears in retrospect. For while it is true that the advent of hydrogen bombs a few years later largely invalidated the estimates on which he had relied, Blackett correctly anticipated Soviet reactions. Analyzing the results of Allied saturation bombing of Germany, Soviet generals concluded that it was largely a wasted effort. Sokolovskii cites in his manual the well-known figures showing that German military productivity rose throughout the war until the fall of 1944, and concludes: “It was not so much the economic struggle and economic exhaustion [i.e., countervalue bombing] that were the causes for the defeat of Hitler’s Germany, but rather the armed conflict and the defeat of its armed forces [i.e., the counterforce strategy pursued by the Red Army.]”28
Soviet nuclear strategy is counterforce oriented. It targets for destruction—at any rate, in the initial strike—not the enemy’s cities but his military forces and their command and communication facilities. Its primary aim is to destroy not civilians but soldiers and their leaders, and to undermine not so much the will to resist as the capability to do so. In the words of Grechko:
The Strategic Rocket Forces, which constitute the basis of the military might of our armed forces, are designed to annihilate the means of the enemy’s nuclear attack, large groupings of his armies, and his military bases; to destroy his military industries; [and] to disorganize the political and military administration of the aggressor as well as his rear and transport.29
Any evidence that the United States may contemplate switching to a counterforce strategy, such as occasionally crops up, throws Soviet generals into a tizzy of excitement. It clearly frightens them far more than the threat to Soviet cities posed by the countervalue strategic doctrine.
Combined-arms operations. Soviet theorists regard strategic nuclear forces (organized since 1960 into a separate arm, the Strategic Rocket Forces) as the decisive branch of the armed services, in the sense that the ultimate outcome of modern war would be settled by nuclear exchanges. But since nuclear war, in their view, must lead not only to the enemy’s defeat but also to his destruction (i.e., his incapacity to offer further resistance), they consider it necessary to make preparations for the follow-up phase, which may entail a prolonged war of attrition. At this stage of the conflict, armies will be needed to occupy the enemy’s territory, and navies to interdict his lanes of communication. “In the course of operations [battles], armies will basically complete the final destruction of the enemy brought about by strikes of nuclear rocket weapons.”30 Soviet theoretical writings unequivocally reject reliance on any one strategy (such as the Blitzkrieg) or on any one weapon to win wars. They believe that a nuclear war will require the employment of all arms to attain final victory.
The large troop concentrations of Warsaw Pact forces in Eastern Europe—well in excess of reasonable defense requirements—make sense if viewed in the light of Soviet combined-arms doctrine. They are there not only to be able to launch a surprise land attack against NATO, but also to seize Western Europe with a minimum of damage to its cities and industries after the initial strategic nuclear exchanges have taken place, partly to keep Europe hostage, partly to exploit European productivity as a replacement for that of which the Soviet Union would have been deprived by an American second strike.
As for the ocean-going navy which the Soviet Union has now acquired, it consists primarily of submarines and ground-based naval air forces, and apparently would have the task of cleaning the seas of U.S. ships of all types and cutting the sea lanes connecting the United States with allied powers and sources of raw materials.
The notion of an extended nuclear war is deeply embedded in Soviet thinking, despite its being dismissed by Western strategists who think of war as a one-two exchange. As Blackett noted sarcastically as early as 1948-49: “Some armchair strategists (including some atomic scientists) tend to ignore the inevitable counter-moves of the enemy. More chess playing and less nuclear physics might have instilled a greater sense of the realities.”31 He predicted that a World War III waged with the atomic bombs then available would last longer than either of its predecessors and require combined-arms operations—which seems to be the current Soviet view of the matter.
Defense. As noted, the U.S. theory of mutual deterrence postulates that no effective defense can be devised against an all-out nuclear attack: it is this postulate that makes such a war appear totally irrational. In order to make this premise valid, American civilian strategists have argued against a civil-defense program, against the ABM, and against air defenses.
Nothing illustrates better the fundamental differences between the two strategic doctrines than their attitudes to defense against a nuclear attack. The Russians agreed to certain imprecisely defined limitations on ABM after they had initiated a program in this direction, apparently because they were unable to solve the technical problems involved and feared the United States would forge ahead in this field. However, they then proceeded to build a tight ring of anti-aircraft defenses around the country while also developing a serious program of civil defense.
Before one dismisses Soviet civil-defense efforts as wishful thinking, as is customary in Western circles, two facts must be emphasized.
One is that the Soviet Union does not regard civil defense as exclusively a matter of protecting ordinary civilians. Its chief function seems to be to protect what in Russia are known as the “cadres,” that is, the political and military leaders as well as industrial managers and skilled workers—those who could reestablish the political and economic system once the war was over. Judging by Soviet definitions, civil defense has as much to do with the proper functioning of the country during and immediately after the war as with holding down casualties. Its organization, presently headed by Deputy Minister of Defense Colonel-General A. Altunin, seems to be a kind of shadow government charged with responsibility for administering the country under the extreme stresses of nuclear war and its immediate aftermath.32
Secondly, the Soviet Union is inherently less vulnerable than the United States to a countervalue attack. According to the most recent Soviet census (1970), the USSR had only nine cities with a population of one million or more; the aggregate population of these cities was 20.5 million, or 8.5 per cent of the country’s total. The 1970 United States census, by contrast, showed thirty-five metropolitan centers with over one million inhabitants, totaling 84.5 million people, or 41.5 per cent of the country’s aggregate. It takes no professional strategist to visualize what these figures mean. In World War II, the Soviet Union lost 20 million inhabitants out of a population of 170 million—i.e., 12 per cent; yet the country not only survived but emerged politically and militarily stronger than it had ever been. Allowing for the population growth which has occurred since then, this experience suggests that as of today the USSR could absorb the loss of 30 million of its people and be no worse off, in terms of human casualties, than it had been at the conclusion of World War II. In other words, all of the USSR’s multimillion cities could be destroyed without trace or survivors and, provided that its essential cadres had been saved, the country would emerge less hurt in terms of casualties than it was in 1945.
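The scaling implicit in this claim can be checked from the figures just cited; the only unstated step is backing out the implied 1970 Soviet population from the 8.5 per-cent figure:

$$\frac{20.5\ \text{million}}{0.085} \approx 241\ \text{million}, \qquad \frac{20\ \text{million}}{170\ \text{million}} \approx 12\ \text{per cent}, \qquad 0.12 \times 241\ \text{million} \approx 29\ \text{million} \approx 30\ \text{million}.$$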
Such figures are beyond the comprehension of most Americans. But clearly a country that since 1914 has lost, as a result of two world wars, a civil war, famine, and various “purges,” perhaps up to 60 million citizens, must define “unacceptable damage” differently from the United States, which has known no famines or purges and whose deaths from all the wars waged since 1775 are estimated at 650,000—fewer casualties than Russia suffered in the 900-day siege of Leningrad in World War II alone. Such a country also tends to assess the rewards of defense in much more realistic terms.
How significant are these recondite doctrinal differences? It has been my invariable experience when lecturing on these matters that during the question period someone in the audience will get up and ask: “But is it not true that we and the Russians already possess enough nuclear weapons to destroy each other ten times over” (or fifty, or a hundred—the figures vary)? My temptation is to reply: “Certainly. But we also have enough bullets to shoot every man, woman, and child, and enough matches to set the whole world on fire. The point lies not in our ability to wreak total destruction: it lies in intent.” And insofar as military doctrine is indicative of intent, what the Russians think to do with their nuclear arsenal is a matter of utmost importance that calls for close scrutiny.
Enough has already been said to indicate the disparities between American and Soviet strategic doctrines of the nuclear age. These differences may be most pithily summarized by stating that whereas we view nuclear weapons as a deterrent, the Russians see them as a “compellent”—with all the consequences that follow. Now it must be granted that the actual, operative differences between the two doctrines may not be quite as sharp as they appear in the public literature: it is true that our deterrence doctrine leaves room for some limited offensive action, just as the Russians include elements of deterrence in their “war-fighting” and “war-winning” doctrine. Admittedly, too, a country’s military doctrine never fully reveals how it would behave under actual combat conditions. And yet the differences here are sharp and fundamental enough, and the relationship of Soviet doctrine to Soviet deployments sufficiently close, to suggest that ignoring or not taking seriously Soviet military doctrine may have very detrimental effects on U.S. security. There is something innately destabilizing in the very fact that we consider nuclear war unfeasible and suicidal for both sides, while our chief adversary views it as feasible and winnable for himself.
SALT misses the point at issue so long as it addresses itself mainly to the question of numbers of strategic weapons: equally important are qualitative improvements within the existing quotas, and the size of regular land and sea forces. Above all, however, looms the question of intent: as long as the Soviets persist in adhering to the Clausewitzian maxim on the function of war, mutual deterrence does not really exist. And unilateral deterrence is feasible only if we understand the Soviet war-winning strategy and make it impossible for them to succeed.
1 “The Real Paul Warnke,” the New Republic, March 26, 1977, p. 23.
2 N.V. Karabanov in N.V. Karabanov, et al., Filosofskoe nasledie V. I. Lenina i problemy sovremennoi voiny (“The Philosophical Heritage of V.I. Lenin and the Problems of Contemporary War”) (Moscow, 1972), pp. 18-19, cited in Leon Gouré, Foy D. Kohler, and Mose L. Harvey, eds., The Role of Nuclear Forces in Current Soviet Strategy (1974), p. 60.
3 A.A. Grechko, Vooruzhonnye sily sovetskogo gosudarstva (“The Armed Forces of the Soviet State”) (Moscow, 1975), p. 345.
4 Russell F. Weigley, The American Way of War (1973), p. 368.
5 Matthew B. Ridgway, Soldier (1956), pp. 296-97.
6 In Michael Howard, ed., The Theory and Practice of War (London, 1965), p. 291.
7 Benjamin S. Lambeth, Selective Nuclear Options in American and Soviet Strategic Policy (Rand Corporation, R-2034-DDRE, December 1976), p. 14. This study analyzes and approves of the refinement introduced into the U.S. doctrine by James R. Schlesinger as Secretary of Defense in the form of “limited-response options.”
8 Interview with Playboy, March 1977, p. 72.
9 Grechko, Vooruzhonnye sily sovetskogo gosudarstva, pp. 347-48, emphasis added.
10 Voennaia strategiia (Moscow, 1962). Since 1962 there have been two revised editions (1963 and 1968). The 1962 edition was immediately translated into English; but currently the best version is that edited by Harriet Fast Scott (Crane-Russak, 1975), which renders the third edition but collates its text with the preceding two.
11 To date, twelve volumes in this series have been translated into English and made publicly available through the U.S. Government Printing Office.
12 William W. Kaufmann, The McNamara Strategy (1964), p. 97.
13 Garthoff's principal works are Soviet Military Doctrine (1953), Soviet Strategy in the Nuclear Age (1958), and The Soviet Image of Future War (1959). Dinerstein wrote War and the Soviet Union (1959).
14 Cited in J.M. Mackintosh, The Strategy and Tactics of Soviet Foreign Policy (London, 1962), pp. 90-91, emphasis added.
15 Articles in the New Times for 1945-46 cited in P.M.S. Blackett, Fear, War and the Bomb (1949), pp. 163-65.
16 We now know that orders to proceed with the development of a Soviet atomic bomb were issued by Stalin in June 1942, probably as a result of information relayed by Klaus Fuchs concerning the Manhattan Project, on which he was working at Los Alamos. See Bulletin of the Atomic Scientists, XXIII, No. 10, December 1967, p. 15.
17 Dinerstein, War and the Soviet Union, p. 71.
18 I would like to stress the word “theory,” for the Russians certainly accept the fact of deterrence. The difference is that whereas American theorists of mutual deterrence regard this condition as mutually desirable and permanent, Soviet strategists regard it as undesirable and transient: they are entirely disinclined to allow us the capability of deterring them.
19 Matthew P. Gallagher and Karl F. Spielmann, Jr., Soviet Decision-Making for Defense (1972), p. 39.
20 The figures are from Grechko, Vooruzhonnye sily, p. 97.
21 Metodologicheskie problemy voennoi teorii i praktiki (“Methodological Problems of Military Theory and Practice”) (Moscow, Ministry of Defense of the USSR, 1969), p. 288.
22 V.D. Sokolovskii, Soviet Military Strategy (Rand Corporation, 1963), p. 99, emphasis added.
23 See, e.g., Metodologicheskie problemy, pp. 289-90.
24 Gouré et al., The Role of Nuclear Forces, p. 9.
25 Cited ibid., p. 106.
26 I have in mind the SS-20, a recently developed Soviet rocket. This is a two-stage version of the intercontinental SS-16 which can be turned into an SS-16 with the addition of a third booster and fired from the same launcher. Its production is not restricted by SALT I and not covered by the Vladivostok Accord.
27 Blackett, Fear, p. 88. As a matter of fact, recent unofficial Soviet calculations stress that the United States dropped on Vietnam the TNT equivalent of 650 Hiroshima-type bombs—also without winning the war: Kommunist Vooruzhonnykh Sil (“The Communist of the Armed Forces”), No. 24, December 1973, p. 27, cited in Gouré et al., The Role of Nuclear Forces, p. 104.
28 Sokolovskii, Soviet Military Strategy (3rd edition), p. 21.
29 A.A. Grechko, Na strazhe mira i stroitel'stva Kommunizma (“Guarding Peace and the Construction of Communism”) (Moscow, 1971), p. 41.
30 Metodologicheskie problemy, p. 288.
31 Blackett, Fear, p. 79.
32 On the subject of civil defense, see Leon Gouré, War Survival in Soviet Strategy (1976).
Can it be reversed?
Writing in these pages last year (“Illiberalism: The Worldwide Crisis,” July/August 2016), I described this surge of intemperate politics as a global phenomenon, a crisis of illiberalism stretching from France to the Philippines and from South Africa to Greece. Donald Trump and Bernie Sanders, I argued, were articulating American versions of this growing challenge to liberalism. By “liberalism,” I was referring not to the left or center-left but to the philosophy of individual rights, free enterprise, checks and balances, and cultural pluralism that forms the common ground of politics across the West.
Less a systematic ideology than a posture or sensibility, the new illiberalism nevertheless has certain core planks. Chief among these are a conspiratorial account of world events; hostility to free trade and finance capital; opposition to immigration that goes beyond reasonable restrictions and bleeds into virulent nativism; impatience with norms and procedural niceties; a tendency toward populist leader-worship; and skepticism toward international treaties and institutions, such as NATO, that provide the scaffolding for the U.S.-led postwar order.
The new illiberals, I pointed out, all tend to admire established authoritarians to varying degrees. Trump, along with France’s Marine Le Pen and many others, looks to Vladimir Putin. For Sanders, it was Hugo Chavez’s Venezuela, where, the Vermont socialist said in 2011, “the American dream is more apt to be realized.” Even so, I argued, the crisis of illiberalism traces mainly to discontents internal to liberal democracies.
Trump’s election and his first eight months in office have confirmed the thrust of my predictions, if not all of the policy details. On the policy front, the new president has proved too undisciplined, his efforts too wild and haphazard, to reorient the U.S. government away from postwar liberal order.
The courts blunted the “Muslim ban.” The Trump administration has reaffirmed Washington’s commitment to defend treaty partners in Europe and East Asia. Trumpian grumbling about allies not paying their fair share—a fair point in Europe’s case, by the way—has amounted to just that. The president did pull the U.S. out of the Trans-Pacific Partnership, but even the ultra-establishmentarian Hillary Clinton went from supporting to opposing the pact once she figured out which way the Democratic winds were blowing. The North American Free Trade Agreement, which came into being nearly a quarter-century ago, does look shaky at the moment, but there is no reason to think that it won’t survive in some modified form.
Yet on the cultural front, the crisis of illiberalism continues to rage. If anything, it has intensified, as attested by the events surrounding the protest over a Robert E. Lee statue in Charlottesville, Virginia. The president refused to condemn unequivocally white nationalists who marched with swastikas and chanted “Jews will not replace us.” Trump even suggested there were “very fine people” among them, thus winking at the so-called alt-right as he had during the campaign. In the days that followed, much of the left rallied behind so-called antifa (“anti-fascist”) militants who make no secret of their allegiance to violent totalitarian ideologies at the other end of the political spectrum.
Disorder is the new American normal, then. Questions that appeared to have been settled—about the connection between economic and political liberty, the perils of conspiracism and romantic politics, America’s unique role on the world stage, and so on—are unsettled once more. Serious people wonder out loud whether liberal democracy is worth maintaining at all, with many of them concluding that it is not. The return of ideas that for good reason were buried in the last century threatens the decent political order that has made the U.S. an exceptionally free and prosperous civilization.

For many leftists, America’s commitment to liberty and equality before the law has always masked despotism and exploitation. This view long predated Trump’s rise, and if they didn’t subscribe to it themselves, too often mainstream Democrats and progressives treated its proponents—the likes of Noam Chomsky and Howard Zinn—as beloved and respectable, if slightly eccentric, relatives.
This cynical vision of the free society (as a conspiracy against the dispossessed) was a mainstay of Cold War–era debates about the relative merits of Western democracy and Communism. Soviet apologists insisted that Communist states couldn’t be expected to uphold “merely” formal rights when they had set out to shape a whole new kind of man. That required “breaking a few eggs,” in the words of the Stalinist interrogators in Arthur Koestler’s Darkness at Noon. Anyway, what good were free speech and due process to the coal miner, when under capitalism the whole social structure was rigged against him?
That line worked for a time, until the scale of Soviet tyranny became impossible to justify by anyone but its most abject apologists. It became obvious that “bourgeois justice,” however imperfect, was infinitely preferable to the Marxist alternative. With the Communist experiment discredited, and Western workers uninterested in staging world revolution, the illiberal left began shifting instead to questions of identity. In race-gender-sexuality theory and the identitarian “subaltern,” it found potent substitutes for dialectical materialism and the proletariat. We are still living with the consequences of this shift.
Although there were superficial resemblances, this new politics of identity differed from earlier civil-rights movements. Those earlier movements had sought a place at the American table for hitherto entirely or somewhat excluded groups: blacks, women, gays, the disabled, and so on. In doing so, they didn’t seek to overturn or radically reorganize the table. Instead, they reaffirmed the American Founding (think of Martin Luther King Jr.’s constant references to the Declaration of Independence). And these movements succeeded, owing to America’s tremendous capacity for absorbing social change.
Yet for the new identitarians, as for the Marxists before them, liberal-democratic order was systematically rigged against the downtrodden—now redefined along lines of race, gender, and sexuality, with social class quietly swept under the rug. America’s strides toward racial progress, not least the election and re-election of an African-American president, were dismissed. The U.S. still deserved condemnation because it fell short of perfect inclusion, limitless autonomy, and complete equality—conditions that no free society can achieve given the root fact of human nature. The accidentals had changed from the Marxist days, in other words, but the essentials remained the same.
In one sense, though, the identitarians went further. The old Marxists still claimed to stand on objectively accessible truth. Not so their successors. Following intellectual lodestars such as the gender theorist Judith Butler, the identity left came to reject objective truth—and with it, biological sex differences, aesthetic standards in art, the possibility of universal moral precepts, and much else of the kind. All of these things, the left identitarians said, were products of repressive institutions, hierarchies, and power.
Today’s “social-justice warriors” are heirs to this sordid intellectual legacy. They claim to seek justice. But, unmoored from any moral foundations, SJW justice operates like mob justice and revolutionary terror, usually carried out online. SJWs claim to protect individual autonomy, but the obsession with group identity and power dynamics means that SJW autonomy claims must destroy the autonomy of others. Self-righteousness married to total relativism is a terrifying thing.
It isn’t enough to have legalized same-sex marriage in the U.S. via judicial fiat; the evangelical baker must be forced to bake cakes for gay weddings. It isn’t enough to have won legal protection and social acceptance for the transgendered; the Orthodox rabbi must use preferred trans pronouns on pain of criminal prosecution. Likewise, since there is no objective truth to be gained from the open exchange of ideas, any speech that causes subjective discomfort among members of marginalized groups must be suppressed, if necessary through physical violence. Campus censorship that began with speech codes and mobs that prevented conservative and pro-Israel figures from speaking has now evolved into a general right to beat anyone designated as a “fascist,” on- or off-campus.
For the illiberal left, the election of Donald Trump was indisputable proof that behind America’s liberal pieties lurks, forever, the beast of bigotry. Trump, in this view, wasn’t just an unqualified vulgarian who nevertheless won the decisive backing of voters dissatisfied with the alternative or alienated from mainstream politics. Rather, a vote for Trump constituted a declaration of war against women, immigrants, and other victims of American “structures of oppression.” There would be no attempt to persuade Trump supporters; war would be answered by war.
This isn’t liberalism. Since it can sometimes appear as an extension of traditional civil-rights activism, however, identity leftism has glommed itself onto liberalism. It is frequently impossible to tell where traditional autonomy- and equality-seeking liberalism ends and repressive identity leftism begins. Whether based on faulty thinking or out of a sense of weakness before an angry and energetic movement, liberals have too often embraced the identity left as their own. They haven’t noticed how the identitarians seek to undermine, not rectify, liberal order.
Some on the left, notably Columbia University’s Mark Lilla, are sounding the alarm and calling on Democrats to stress the common good over tribalism. Yet these are a few voices in the wilderness. Identitarians of various stripes still lord over the broad left, where it is fashionable to believe that the U.S. project is predatory and oppressive by design. If there is a viable left alternative to identity on the horizon, it is the one offered by Sanders and his “Bernie Bros”—which is to say, a reversion to the socialism and class struggle of the previous century.
Americans, it seems, will have to wait a while for reason and responsibility to return to the left.

Then there is the illiberal fever gripping American conservatives. Liberal democracy has always had its critics on the right, particularly in Continental Europe, where statist, authoritarian, and blood-and-soil accounts of conservatism predominate. Mainstream Anglo-American conservatism took a different course. It has championed individual rights, free enterprise, and pluralism while insisting that liberty depends on public virtue and moral order, and that sometimes the claims of liberty and autonomy must give way to those of tradition, state authority, and the common good.
The whole beauty of American order lies in keeping in tension these rival forces that are nevertheless fundamentally at peace. The Founders didn’t adopt wholesale Enlightenment liberalism; rather, they tempered its precepts about universal rights with the teachings of biblical religion as well as Roman political theory. The Constitution drew from all three wellsprings. The product was a whole, and it is a pointless and ahistorical exercise to elevate any one source above the others.
American conservatism and liberalism, then, are in fact branches of each other, the one (conservatism) invoking tradition and virtue to defend and, when necessary, discipline the regime of liberty; the other (liberalism) guaranteeing the open space in which churches, volunteer organizations, philanthropic activity, and other sources of tradition and civic virtue flourish, in freedom, rather than through state establishment or patronage.
One result has been long-term political stability, a blessing that Americans take for granted. Another has been the transformation of liberalism into the lingua franca of all politics, not just at home but across a world that, since 1945, has increasingly reflected U.S. preferences. The great French classical liberal Raymond Aron noted in 1955 that the “essentials of liberalism—the respect for individual liberty and moderate government—are no longer the property of a single party: they have become the property of all.” As Aron archly pointed out, even liberalism’s enemies tend to frame their objections using the rights-based talk associated with liberalism.
Under Trump, however, some in the party of the right have abdicated their responsibility to liberal democracy as a whole. They have reduced themselves to the lowest sophistry in defense of the New Yorker’s inanities and daily assaults on presidential norms. Beginning when Trump clinched the GOP nomination last year, a great deal of conservative “thinking” has amounted to: You did X to us, now enjoy it as we dish it back to you and then some. Entire websites and some of the biggest stars in right-wing punditry are singularly devoted to making this rather base point. If Trump is undermining this or that aspect of liberal order that was once cherished by conservatives, so be it; that 63 million Americans supported him and that the president “drives the left crazy”—these are good enough reasons to go along.
Some of this is partisan jousting that occurs with every administration. But when it comes to Trump’s most egregious statements and conduct—such as his repeated assertions that the U.S. and Putin’s thugocracy are moral equals—the apologetics are positively obscene. Enough pooh-poohing, whataboutery, and misdirection of this kind, and there will be no conservative principle left standing.
More perniciously, as once-defeated illiberal philosophies have returned with a vengeance to the left, so have their reactionary analogues to the right. The two illiberalisms enjoy a remarkable complementarity and even cross-pollinate each other. This has developed to the point where it is sometimes hard to distinguish Tucker Carlson from Chomsky, Laura Ingraham from Julian Assange, the Claremont Review from New Left Review, and so on.
Two slanders against liberalism in particular seem to be gathering strength on the thinking right. The first is the tendency to frame elements of liberal democracy, especially free trade, as a conspiracy hatched by capitalists, the managerial class, and others with soft hands against American workers. One needn’t renounce liberal democracy as a whole to believe this, though believers often go the whole hog. The second idea is that liberalism itself was another form of totalitarianism all along and, therefore, that no amount of conservative course correction can set right what is wrong with the system.
These two theses together represent a dismaying ideological turn on the right. The first—the account of global capitalism as an imposition of power over the powerless—has gained currency in the pages of American Affairs, the new journal of Trumpian thought, where class struggle is a constant theme. Other conservatives, who were always skeptical of free enterprise and U.S.-led world order, such as the Weekly Standard’s Christopher Caldwell, are also publishing similar ideas to a wider reception than perhaps greeted them in the past.
In a March 2017 essay in the Claremont Review of Books, for example, Caldwell flatly described globalization as a “con game.” The perpetrators, he argued, are “unscrupulous actors who have broken promises and seized a good deal of hard-won public property.” These included administrations of both parties that pursued trade liberalization over decades, people who live in cities and therefore benefit from the knowledge-based economy, American firms, and really anyone who has ever thought to capitalize on global supply chains to boost competitiveness—globalists, in a word.
By shipping jobs and manufacturing processes overseas, Caldwell contended, these miscreants had stolen not just material things like taxpayer-funded research but also concepts like “economies of scale” (you didn’t build that!). Thus, globalization in the West differed “in degree but not in kind from the contemporaneous Eastern Bloc looting of state assets.”
That comparison with predatory post-Communist privatization is a sure sign of ideological overheating. It is somewhat like saying that a consumer bank’s lending to home buyers differs in degree but not in kind from a loan shark’s racket in a housing project. Well, yes, in the sense that the underlying activity—moneylending, the purchase of assets—is the same in both cases. But the context makes all the difference: The globalization that began after World War II and accelerated in the ’90s took place within a rules-based system, which duly elected or appointed policymakers in Western democracies designed in good faith and for a whole host of legitimate strategic and economic reasons.
These policymakers knew that globalization was as old as civilization itself. It would take place anyway, and the only question was whether it would be rules-based and efficient or the kind of globalization that would be driven by great-power rivalry and therefore prone to protectionist trade wars. And they were right. What today’s anti-trade types won’t admit is that defeating the Trans-Pacific Partnership and a proposed U.S.-European trade pact known as TTIP won’t end globalization as such; instead, it will cede the game to other powers that are less concerned about rules and fair play.
The postwar globalizers may have gone too far (or not far enough!). They certainly didn’t give sufficient thought to the losers in the system, or how to deal with the de-industrialization that would follow when information became supremely mobile and wages in the West remained too high relative to skills and productivity gains in the developing world. They muddled and compromised their way through these questions, as all policymakers in the real world do.
The point is that these leaders—the likes of FDR, Churchill, JFK, Ronald Reagan, Margaret Thatcher, and, yes, Bill Clinton—acted neither with malice aforethought nor anti-democratically. It isn’t true, contra Caldwell, that free trade necessarily requires “veto-proof and non-consultative” politics. The U.S., Britain, and other members of what used to be called the Free World have respected popular sovereignty (as understood at the time) for as long as they have been trading nations. Put another way, you were far more likely to enjoy political freedom if you were a citizen of one of these states than of countries that opposed economic liberalism in the 20th century. That remains true today. These distinctions matter.
Caldwell and like-minded writers of the right, who tend to dwell on liberal democracies’ crimes, are prepared to tolerate far worse if it is committed in the name of defeating “globalism.” Hence the speech on Putin that Caldwell delivered this spring at a Hillsdale College gathering in Phoenix. Promising not to “talk about what to think about Putin,” he proceeded to praise the Russian strongman as the “preeminent statesman of our time” (alongside Turkish strongman Recep Tayyip Erdogan). Putin, Caldwell said, “has become a symbol of national self-determination.”
Then Caldwell made a remark that illuminates the link between the illiberalisms of yesterday and today. Putin is to “populist conservatives,” he declared, what Castro once was to progressives. “You didn’t have to be a Communist to appreciate the way Castro, whatever his excesses, was carving out a space of autonomy for his country.”
Whatever his excesses, indeed.

The other big idea is that today’s liberal crises aren’t a bug but a core feature of liberalism. This line of thinking is particularly prevalent among some Catholic traditionalists and other orthodox Christians (both small- and capital-“o”). The common denominator, it seems to me, is having grown up as a serious believer at a time when many liberals—to their shame—have declared war on faith generally and social conservatism in particular.
The argument essentially is this:
We (social conservatives, traditionalists) saw the threat from liberalism coming. With its claims about abstract rights and universal reason, classical liberalism had always posed a danger to the Church and to people of God. We remembered what those fired up by the new ideas did to our nuns and altars in France. Still we made peace with American liberal order, because we were told that the Founders had “built on low but solid ground,” to borrow Leo Strauss’s famous formulation, or that they had “built better than they knew,” as American Catholic hierarchs in the 19th century put it.
Maybe these promises held good for a couple of centuries, the argument continues, but they no longer do. Witness the second sexual revolution under way today. The revolutionaries are plainly telling us that we must either conform our beliefs to Herod’s ways or be driven from the democratic public square. Can it still be said that the Founding rested on solid ground? Did the Founders really build better than they knew? Or is what is passing now precisely what they intended, the rotten fruit of the Enlightenment universalism that they planted in the Constitution? We don’t love Trump (or Putin, Hungary’s Viktor Orbán, etc.), but perhaps he can counter the pincer movement of sexual and economic liberalism, and restore a measure of solidarity and commitment to the Western project.
The most pessimistic of these illiberal critics go so far as to argue that liberalism isn’t all that different from Communism, that both are totalitarian children of the Enlightenment. One such critic, Harvard Law School’s Adrian Vermeule, summed up this position in a January essay in First Things magazine:
The stock distinction between the Enlightenment’s twins—communism is violently coercive while liberalism allows freedom of thought—is glib. Illiberal citizens, trapped [under liberalism] without exit papers, suffer a narrowing sphere of permitted action and speech, shrinking prospects, and increasing pressure from regulators, employers, and acquaintances, and even from friends and family. Liberal society celebrates toleration, diversity, and free inquiry, but in practice it features a spreading social, cultural, and ideological conformism.1
I share Vermeule’s despair and that of many other conservative-Christian friends, because there have been genuinely alarming encroachments against conscience, religious freedom, and the dignity of life in Western liberal democracies in recent years. Even so, despair is an unhelpful companion to sober political thought, and the case for plunging into political illiberalism is weak, even on social-conservative grounds.
Here again what commends liberalism is historical experience, not abstract theory. Simply put, in the real-world experience of the 20th century, the Church, tradition, and religious minorities fared far better under liberal-democratic regimes than they did under illiberal alternatives. Are coercion and conformity targeting people of faith under liberalism? To be sure. But these don’t take the form of the gulag or the concentration camp or the soccer stadium–cum-killing field. Catholic political practice knows well how to draw such moral distinctions between regimes: Pope John Paul II befriended Reagan. If liberal democracy and Communism were indeed “twins” whose distinctions are “glib,” why did he do so?
And as Pascal Bruckner wrote in his essay “The Tyranny of Guilt,” if liberal democracy does trap or jail you (politically speaking), it also invariably slips the key under your cell door. The Swedish midwives driven out of the profession over their pro-life views can take their story to the media. The Down syndrome advocacy outfit whose anti-eugenic advertising was censored in France can sue in national and then international courts. The Little Sisters of the Poor can appeal to the Supreme Court for a conscience exemption to Obamacare’s contraceptives mandate. And so on.
Conversely, once you go illiberal, you don’t just rid yourself of the NGOs and doctrinaire bureaucrats bent on forcing priests to perform gay marriages; you also lose the legal guarantees that protect the Church, however imperfectly, against capricious rulers and popular majorities. And if public opinion in the West is turning increasingly secular, indeed anti-Christian, as social conservatives complain and surveys seem to confirm, is it really a good idea to militate in favor of a more illiberal order rather than defend tooth and nail liberal principles of freedom of conscience? For tomorrow, the state might fall into Elizabeth Warren’s hands.
Nor, finally, is political liberalism alone to blame for the Church’s retreat on various fronts. There have been plenty of wounds inflicted by churchmen and laypeople who believed that they could best serve the faith by conforming its liturgy, moral teaching, and public presence to liberal order. But political liberalism didn’t compel these changes, at least not directly. In the space opened up by liberalism, and amid the kaleidoscopic lifestyles that left millions of people feeling empty and confused, it was perfectly possible to propose tradition as an alternative. It is still possible to do so.

None of this is to excuse the failures of liberals. Liberals and mainstream conservatives must go back to the drawing board, to figure out why it is that thoughtful people have come to conclude that their system is incompatible with democracy, nationalism, and religious faith. Traditionalists and others who see Russia’s mafia state as a defender of Christian civilization and national sovereignty have been duped, but liberals bear some blame for driving large numbers of people in the West to that conclusion.
This is a generational challenge for the liberal project. So be it. Liberal societies like America’s by nature invite such questioning. But before we abandon the 200-and-some-year-old liberal adventure, it is worth examining the ways in which today’s left-wing and right-wing critiques of it mirror bad ideas that were overcome in the previous century. The ideological ferment of the moment, after all, doesn’t relieve the illiberals of the responsibility to reckon with the lessons of the past.
1 Vermeule was reviewing The Demon in Democracy, a 2015 book by the Polish political theorist and parliamentarian Ryszard Legutko that makes the same case. Fred Siegel’s review of the English edition appeared in our June 2016 issue.
How the courts are intervening to block some of the most unjust punishments of our time
Barrett’s decision marked the 59th judicial setback for a college or university since 2013 in a due-process lawsuit brought by a student accused of sexual assault. (In four additional cases, the school settled a lawsuit before any judicial decision occurred.) This body of law serves as a towering rebuke to the Obama administration’s reinterpretation of Title IX, the 1972 law barring sex discrimination in schools that receive federal funding.
Beginning in 2011, the Education Department’s Office for Civil Rights (OCR) issued a series of “guidance” documents pressuring colleges and universities to change how they adjudicated sexual-assault cases in ways that increased the likelihood of guilty findings. Amid pressure from student and faculty activists, virtually all elite colleges and universities have gone far beyond federal mandates and have even further weakened the rights of students accused of sexual assault.
Like all extreme victims’-rights approaches, the new policies had their greatest impact on the wrongly accused. A 2016 study by UCLA public-policy professor John Villasenor used just one of the changes—schools employing the lowest standard of proof, a preponderance of the evidence—to predict that campus Title IX tribunals would return guilty findings against innocent students as often as 33 percent of the time. Villasenor’s study could not measure the impact of other Obama-era policy demands—such as allowing accusers to appeal not-guilty findings, discouraging cross-examination of accusers, and urging schools to adjudicate claims even when a criminal inquiry found no wrongdoing.
In a September 7 address at George Mason University, Education Secretary Betsy DeVos stated that “no student should be forced to sue their way to due process.” But once enmeshed in the campus Title IX process, a wrongfully accused student’s best chance for justice may well be a lawsuit filed after his college incorrectly has found him guilty. (According to data from United Educators, a higher-education insurance firm, 99 percent of students accused of campus sexual assault are male.) The Foundation for Individual Rights in Education has identified more than 180 such lawsuits filed since the 2011 policy changes. That figure, obviously, excludes students with equally strong claims whose families cannot afford to go to court. These students face life-altering consequences. As Judge T.S. Ellis III noted in a 2016 decision, it is “so clear as to be almost a truism” that a student will lose future educational and employment opportunities if his college wrongly brands him a rapist.
“It is not the role of the federal courts to set aside decisions of school administrators which the court may view as lacking in wisdom or compassion.” So wrote the Supreme Court in a 1975 case, Wood v. Strickland. While the Supreme Court has made clear that colleges must provide accused students with some rights, especially when dealing with nonacademic disciplinary questions, courts generally have not been eager to intervene in such matters.
This is what makes the developments of the last four years all the more remarkable. The process began in May 2013, in a ruling against St. Joseph’s University, and has lately accelerated (15 rulings in 2016 and 21 thus far in 2017). Of the 40 setbacks for colleges in federal court, 14 came from judges nominated by Barack Obama, 11 from Clinton nominees, and nine from selections of George W. Bush. Brown University has been on the losing side of three decisions; Duke, Cornell, and Penn State, two each.
Court decisions since the expansion of Title IX activism have not all gone in one direction. In 36 of the due-process lawsuits, courts have permitted the university to maintain its guilty finding. (In four other cases, the university settled despite prevailing at a preliminary stage.) But even in these cases, some courts have expressed discomfort with campus procedures. One federal judge was “greatly troubled” that Georgia Tech veered “very far from an ideal representation of due process” when its investigator “did not pursue any line of investigation that may have cast doubt on [the accuser’s] account of the incident.” Another went out of his way to say that he considered it plausible that a former Case Western Reserve University student was actually “innocent of the charges levied against him.” And one state appellate judge opened oral argument by bluntly informing the University of California’s lawyer, “When I . . . finished reading all the briefs in this case, my comment was, ‘Where’s the kangaroo?’”
Judges have, obviously, raised more questions in cases where the college has found itself on the losing side. Those lawsuits have featured three common areas of concern: bias in the investigation, resulting in a college decision based on incomplete evidence; procedures that prevented the accused student from challenging his accuser’s credibility, chiefly through cross-examination; and schools utilizing a process that seemed designed to produce a predetermined result, in response to real or perceived pressure from the federal government.

Colleges and universities have proven remarkably willing to act on incomplete information when adjudicating sexual-assault cases. In December 2013, for example, Amherst College expelled a student for sexual assault despite text messages (which the college investigator failed to discover) indicating that the accuser had consented to sexual contact. The accuser’s own testimony also indicated that she might have committed sexual assault, by initiating sexual contact with a student who Amherst conceded was experiencing an alcoholic blackout. When the accused student sued Amherst, the college said its failure to uncover the text messages had been irrelevant because its investigator had only sought texts that portrayed the incident as nonconsensual. In February, Judge Mark Mastroianni allowed the accused student’s lawsuit to proceed, commenting that the texts could raise “additional questions about the credibility of the version of events [the accuser] gave during the disciplinary proceeding.” The two sides settled in late July.
Amherst was hardly alone in its eagerness to avoid evidence that might undermine the accuser’s version of events; the same happened at Penn State, St. Joseph’s, Duke, Ohio State, Occidental, Lynn, Marlboro, Michigan, and Notre Dame.
Even in cases with a more complete evidentiary base, accused students have often been blocked from presenting a full-fledged defense. As part of its reinterpretation of Title IX, the Obama administration sought to shield campus accusers from cross-examination. OCR’s 2011 guidance “strongly” discouraged direct cross-examination of accusers by the accused student—a critical restriction, since most university procedures require the accused student, rather than his lawyer, to defend himself in the hearing. OCR’s 2014 guidance suggested that this type of cross-examination in and of itself could create a hostile environment. The Obama administration even spoke favorably about the growing trend among schools to abolish hearings altogether and allow a single official to serve as investigator, prosecutor, judge, and jury in sexual-assault cases.
The Supreme Court has never held that campus disciplinary hearings must permit cross-examination. Nonetheless, the recent attack on the practice has left schools struggling to explain why they would not want to utilize what the Court has described as the “greatest legal engine ever invented for the discovery of truth.” In June 2016, the University of Cincinnati found a student guilty of sexual assault after a hearing at which neither his accuser nor the university’s Title IX investigator appeared. In an unintentionally comical line, the hearing chair noted the absent witnesses before asking the accused student if he had “any questions of the Title IX report.” The student, befuddled, replied, “Well, since she’s not here, I can’t really ask anything of the report.” (The panel chair did not indicate how the “report” could have answered any questions.) Cincinnati found the student guilty anyway.1
Limitations on full cross-examination also played a role in judicial setbacks for Middlebury, George Mason, James Madison, Ohio State, Occidental, Penn State, Brandeis, Amherst, Notre Dame, and Skidmore.
Finally, since 2011, more than 300 students have filed Title IX complaints with the Office for Civil Rights, alleging that their colleges mishandled their sexual-assault allegations. OCR’s leadership seemed to welcome the complaints, which allowed Obama officials to inspect not only the individual case but all sexual-assault claims at the school in question over a three-year period. Northwestern University professor Laura Kipnis has estimated that during the Obama years, colleges spent between $60 million and $100 million on these investigations. If OCR finds a Title IX violation, the school risks losing its federal funding. This threat, as Harvard Law professors Jeannie Suk Gersen, Janet Halley, Elizabeth Bartholet, and Nancy Gertner observed in a white paper submitted to OCR, gives universities “strong incentives to ensure the school stays in OCR’s good graces.”
One of the earliest lawsuits after the Obama administration’s policy shift, involving former Xavier University basketball player Dez Wells, demonstrated how an OCR investigation can affect the fairness of a university inquiry. The accuser’s complaint had been referred both to Xavier’s Title IX office and the Cincinnati police. The police concluded that the allegation was meritless; Hamilton County Prosecuting Attorney Joseph Deters later said he considered charging the accuser with filing a false police report.
Deters asked Xavier to delay its proceedings until his office completed its investigation. School officials refused. Instead, three weeks after the initial allegation, the university expelled Wells. He sued and speculated that Xavier’s haste came not from a quest for justice but instead from a desire to avoid difficulties in finalizing an agreement with OCR to resolve an unrelated complaint filed by two female Xavier students. (In recent years, OCR has entered into dozens of similar resolution agreements, which bind universities to policy changes in exchange for removing the threat of losing federal funds.) In a July 2014 ruling, Judge Arthur Spiegel observed that Xavier’s disciplinary tribunal, however “well-equipped to adjudicate questions of cheating, may have been in over its head with relation to an alleged false accusation of sexual assault.” Soon thereafter, the two sides settled; Wells transferred to the University of Maryland.
Ohio State, Occidental, Cornell, Middlebury, Appalachian State, USC, and Columbia have all found themselves on the losing side of court decisions arising from cases that originated during a time in which OCR was investigating or threatening to investigate the school. (In the Ohio State case, one university staffer testified that she didn’t know whether she had an obligation to correct a false statement by an accuser to a disciplinary panel.) Pressure from OCR can be indirect, as well. The Obama administration interpreted federal law as requiring all universities to have at least one Title IX coordinator; larger universities now employ dozens of Title IX personnel who, as the Harvard Law professors explained, “have reason to fear for their jobs if they hold a student not responsible or if they assign a rehabilitative or restorative rather than a harshly punitive sanction.”

Amid the wave of judicial setbacks for universities, two decisions in particular stand out. Easily the most powerful opinion in a campus due-process case came in March 2016 from Judge F. Dennis Saylor. While the stereotypical campus sexual-assault allegation results from an alcohol-filled, one-night encounter between a male and a female student, a case at Brandeis University involved a long-term monogamous relationship between two male students. A bad breakup led to the accusing student’s filing the following complaint, against which his former boyfriend was expected to provide a defense: “Starting in the month of September, 2011, the Alleged violator of Policy had numerous inappropriate, nonconsensual sexual interactions with me. These interactions continued to occur until around May 2013.”
To adjudicate, Brandeis hired a former OCR staffer, who interviewed the two students and a few of their friends. Since the university did not hold a hearing, the investigator decided guilt or innocence on her own. She treated each incident as if the two men were strangers to each other, which allowed her to determine that sexual “violence” had occurred in the relationship. The accused student, she found, sometimes looked at his boyfriend in the nude without permission and sometimes awakened his boyfriend with kisses when the boyfriend wanted to stay asleep. The university’s procedures prevented the student from seeing the investigator’s report, with its absurdly broad definition of sexual misconduct, in preparing his appeal. “In the context of American legal culture,” Boston Globe columnist Dante Ramos later argued, denying this type of information “is crazy.” “Standard rules of evidence and other protections for the accused keep things like false accusations or mistakes by authorities from hurting innocent people.” When the university appeal was denied, the student sued.
At an October 2015 hearing to consider the university’s motion to dismiss, Saylor seemed flabbergasted at the unfairness of the school’s approach. “I don’t understand,” he observed, “how a university, much less one named after Louis Brandeis, could possibly think that that was a fair procedure to not allow the accused to see the accusation.” Brandeis’s lawyer cited pressure to conform to OCR guidance, but the judge deemed the university’s procedures “closer to Salem 1692 than Boston, 2015.”
The following March, Saylor issued an 89-page opinion that has been cited in virtually every lawsuit subsequently filed by an accused student. “Whether someone is a ‘victim’ is a conclusion to be reached at the end of a fair process, not an assumption to be made at the beginning,” Saylor wrote. “If a college student is to be marked for life as a sexual predator, it is reasonable to require that he be provided a fair opportunity to defend himself and an impartial arbiter to make that decision.” Saylor concluded that Brandeis forced the accused student “to defend himself in what was essentially an inquisitorial proceeding that plausibly failed to provide him with a fair and reasonable opportunity to be informed of the charges and to present an adequate defense.”
The student, vindicated by the ruling’s sweeping nature, then withdrew his lawsuit. He currently is pursuing a Title IX complaint against Brandeis with OCR.
Four months later, a three-judge panel of the Second Circuit Court of Appeals produced an opinion that lacked Saylor’s rhetorical flourish or his understanding of the basic unfairness of the campus Title IX process. But by creating a more relaxed standard for accused students to make federal Title IX claims, the Second Circuit’s decision in Doe v. Columbia carried considerable weight.
Two Columbia students who had been drinking had a brief sexual encounter at a party. More than four months later, the accuser claimed she was too intoxicated to have consented. Her allegation came in an atmosphere of campus outrage about the university’s allegedly insufficient toughness on sexual assault. In this setting, the accused student found Columbia’s Title IX investigator uninterested in hearing his side of the story. He cited witnesses who would corroborate his belief that the accuser wasn’t intoxicated; the investigator declined to speak with them. The student was found guilty, although for reasons differing from the initial claim; the Columbia panel ruled that he had “directed unreasonable pressure for sexual activity toward the [accuser] over a period of weeks,” leaving her unable to consent on the night in question. He received a three-semester suspension for this nebulous offense—which even his accuser deemed too harsh. He sued, and the case was assigned to Judge Jesse Furman.
Furman’s opinion provided a ringing victory for Columbia and the Obama-backed policies it used. As Title IX litigator Patricia Hamill later observed, Furman’s “almost impossible standard” required accused students to have inside information about the institution’s handling of other sexual-assault claims—information they could plausibly obtain only through the legal process known as discovery, which happens at a later stage of litigation—in order to survive a university’s initial motion to dismiss. Furman suggested that, to prevail, an accused student would need to show that his school treated a female student accused of sexual assault more favorably, or at least provide details about how cases against other accused students showed a pattern of bias. But federal privacy law keeps campus disciplinary hearings private, leaving most accused students with little opportunity to uncover the information before their case is dismissed.
At the same time, the opinion excused virtually any degree of unfairness by the institution. Furman reasoned that taking “allegations of rape on campus seriously and . . . treat[ing] complainants with a high degree of sensitivity” could constitute “lawful” reasons for university unfairness toward accused students. Samantha Harris of the Foundation for Individual Rights in Education detected the decision’s “immediate and nationwide impact” in several rulings against accused students. It also played the same role in university briefs that Saylor’s Brandeis opinion did in filings by accused students.
The Columbia student’s lawyer, Andrew Miltenberg, appealed Furman’s ruling to the Second Circuit. The stakes were high, since a ruling affirming the lower court’s reasoning would have all but foreclosed Title IX lawsuits by accused students in New York, Connecticut, and Vermont. But a panel of three judges, all nominated by Democratic presidents, overturned Furman’s decision. In the opinion’s crucial passage, Judge Pierre Leval held that a university “is not excused from liability for discrimination because the discriminatory motivation does not result from a discriminatory heart, but rather from a desire to avoid practical disadvantages that might result from unbiased action. A covered university that adopts, even temporarily, a policy of bias favoring one sex over the other in a disciplinary dispute, doing so in order to avoid liability or bad publicity, has practiced sex discrimination, notwithstanding that the motive for the discrimination did not come from ingrained or permanent bias against that particular sex.” Before the Columbia decision, courts almost always had rebuffed Title IX pleadings from accused students. More recently, judges have allowed Title IX claims to proceed against Amherst, Cornell, California–Santa Barbara, Drake, and Rollins.
After the Second Circuit’s decision, Columbia settled with the accused student, sparing its Title IX decision-makers from having to testify at a trial. James Madison was one of the few universities to take a different course, with disastrous results. A lawsuit from an accused student survived a motion to dismiss, but the university refused to settle, allowing the student’s lawyer to depose the three school employees who had decided his client’s fate. One unintentionally revealed that he had misapplied the university’s own definition of consent. Another cited the accuser’s slurred words on a voicemail as proof of her extreme intoxication on the night of the alleged assault. It was left to the accused student’s lawyer, at a deposition months after the decision had been made, to note that the voicemail in question actually was received on a different night. In December 2016, Judge Elizabeth Dillon, an Obama nominee, granted summary judgment to the accused student, concluding that “significant anomalies in the appeal process” violated his due-process rights under the Constitution.

Universities were on the losing side of 36 due-process rulings while Obama appointee Catherine Lhamon presided over the Office for Civil Rights between 2013 and 2016; no record exists of her publicly acknowledging any of them. In June 2017, however, Lhamon suddenly rejoiced that “yet another federal court” had found that students disciplined for sexual misconduct “were not denied due process.” That Fifth Circuit decision, involving two former students at the University of Houston, was an odd case for her to celebrate. The majority cabined its findings to the “unique facts” of the case—that the accused students likely would have been found guilty even under the fairest possible process. And the dissent, from Judge Edith Jones, denounced the procedures championed by Lhamon and other Obama officials as “heavily weighted in favor of finding guilt,” predicting “worse to come if appellate courts do not step in to protect students’ procedural due process right where allegations of quasi-criminal sexual misconduct arise.”
At this stage, Lhamon, who now chairs the U.S. Commission on Civil Rights, cannot be taken seriously when it comes to questions of campus due process. But other defenders of the current Title IX regime have offered more substantive commentary about the university setbacks.
Legal scholar Michelle Anderson was one of the few to even discuss the due-process decisions. “Colleges and universities do not always adjudicate allegations of sexual assault well,” she noted in a 2016 law review article defending the Obama-era policies. Anderson even conceded that some colleges had denied “accused students fairness in disciplinary adjudication.” But these students sued, “and campuses are responding—as they must—when accused students prevail. So campuses face powerful legal incentives on both sides to address campus sexual assault, and to do so fairly and impartially.”
This may be true, but Anderson does not explain why wrongly accused students should bear the financial and emotional burden of inducing their colleges to implement fair procedures. More important, scant evidence exists that colleges have responded to the court victories of wrongly accused students by creating fairer procedures. Some have even made it more difficult for wrongly accused students to sue. After losing a lawsuit in December 2014, Brown eliminated the right of students accused of sexual assault to have “every opportunity” to present evidence. That same year, an accused student showed how Swarthmore had deviated from its own procedures in his case. The college quickly settled the lawsuit—and then added a clause to its procedures immunizing it from similar claims in the future. Swarthmore currently informs accused students that “rules of evidence ordinarily found in legal proceedings shall not be applied, nor shall any deviations from any of these prescribed procedures alone invalidate a decision.”
Many lawsuits are still working their way through the judicial system; three cases are pending at federal appellate courts. Of the two that address substantive matters, oral arguments seemed to reveal skepticism of the university’s position. On July 26, a three-judge panel of the First Circuit considered a case at Boston College, where the accused student plausibly argued that someone else had committed the sexual assault (which occurred on a poorly lit dance floor). Judges Bruce Selya and William Kayatta seemed troubled that a Boston College dean had improperly intruded on the hearing board’s deliberations. At the Sixth Circuit a few days later, Judges Richard Griffin and Amul Thapar both expressed concerns about the University of Cincinnati’s downplaying the importance of cross-examination in campus-sex adjudications. Judge Eric Clay was quieter, but he wondered about the tension between the university’s Title IX and truth-seeking obligations.
In a perfect world, academic leaders themselves would have created fairer processes without judicial intervention. But in the current campus environment, such an approach is impossible. So, at least for the short term, the courts remain the best, albeit imperfect, option for students wrongly accused of sexual assault. Meanwhile, every year, young men entrust themselves and their family’s money to institutions of higher learning that are indifferent to their rights and unconcerned with the injustices to which these students might be subjected.
1 After a district court placed that finding on hold, the university appealed to the Sixth Circuit.
Review of 'Terror in France' By Gilles Kepel
Kepel is particularly knowledgeable about the history and process of radicalization that takes place in his nation’s heavily Muslim banlieues (the depressed housing projects ringing Paris and other major cities), and Terror in France is informed by decades of fieldwork in these volatile locales. What we have been witnessing for more than a decade, Kepel argues, is the “third wave” of global jihadism, which is not so much a top-down, doctrinally inspired campaign (as were the 9/11 attacks, directed from afar by the oracular figure of Osama bin Laden) as a bottom-up insurgency with an “enclave-based ethnic-racial logic of violence” to it. Kepel traces the phenomenon back to 2005, a convulsive year that saw the second-generation descendants of France’s postcolonial Muslim immigrants confront a changing socio-political landscape.
That was the year of the greatest riots in modern French history, involving mostly young Muslim men. It was also the year that Abu Musab al-Suri, the Syrian-born Islamist then serving as al-Qaeda’s operations chief in Europe, published The Global Islamic Resistance Call. This 1,600-page manifesto combined pious imprecations against the West with do-it-yourself ingenuity, an Anarchist’s Cookbook for the Islamist set. In Kepel’s words, the manifesto preached a “jihadism of proximity,” the brand of civil war later adopted by the Islamic State. It called for ceaseless, mass-casualty attacks in Western cities—attacks which increase suspicion and regulation of Muslims and, in turn, drive those Muslims into the arms of violent extremists.
The third-generation jihad has been assisted by two phenomena: social-networking sites that easily and widely disseminate Islamist propaganda (thus increasing the rate of self-radicalization) and the so-called Arab Spring, which led to state collapse in Syria and Libya, providing “an exceptional site for military training and propaganda only a few hours’ flight from Europe, and at a very low cost.”
Kepel’s book is not just a study of the ideology and tactics of Islamists but a sociopolitical overview of how this disturbing phenomenon fits within a country on the brink. For example, Kepel finds that jihadism is emerging in conjunction with developments such as the “end of industrial society.” A downturn in work has led to an ominous situation in which a “right-wing ethnic nationalism” preying on the economically anxious has risen alongside Islamism as “parallel conduits for expressing grievances.” Filling a space left by the French Communist Party (which once brought the ethnic French working class and Arab immigrants together), these two extremes leer at each other from opposite sides of a societal chasm, signaling the potentially cataclysmic future that awaits France if both mass unemployment and Islamist terror continue undiminished.
The French economy has also had a more direct inciting effect on jihadism. Overregulated labor markets make it difficult for young Muslims to get jobs, thus exacerbating the conditions of social deprivation and exclusion that make individuals susceptible to radicalization. The inability to tackle chronic unemployment has led to widespread Muslim disillusionment with the left (a disillusionment aggravated by another, often glossed over, factor: widespread Muslim opposition to the Socialist Party’s championing of same-sex marriage). Essentially, one left-wing constituency (unions) has made the unemployment of another constituency (Muslim youth) the mechanism for maintaining its privileges.
Kepel does not, however, cite deprivation as the sole or even main contributing factor to Islamist radicalization. One Parisian banlieue that has sent more than 80 residents to fight in Syria, he notes, has “attractive new apartment buildings” built by the state and features a mosque “constructed with the backing of the Socialist mayor.” It is also the birthplace of well-known French movie stars of Arab descent, and thus hardly a place where ambition goes to die. “The Islamophobia mantra and the victim mentality it reinforces makes it possible to rationalize a total rejection of France and a commitment to jihad by making a connection between unemployment, discrimination, and French republican values,” Kepel writes. Indeed, Kepel is refreshingly derisive of the term “Islamophobia” throughout the book, excoriating Islamists and their fellow travelers for “substituting it for anti-Semitism as the West’s cardinal sin.” These are meaningful words coming from Kepel, a deeply learned scholar of Islam who harbors great respect for the faith and its adherents.
Kepel also weaves the saga of jihadism into the ongoing “kulturkampf within the French left.” Arguments about Islamist terrorism demonstrate a “divorce between a secular progressive tradition” and the children of the Muslim immigrants this tradition fought to defend. The most ironically perverse manifestation of this divorce was ISIS’s kidnapping of Didier François, co-founder of the civil-rights organization SOS Racisme. Kepel recognizes the origins of this divorce in the “red-green” alliance formed decades ago between Islamists and elements of the French intellectual left, such as Michel Foucault, a cheerleader of the Iranian revolution.
Though he offers a rigorous history and analysis of the jihadist problem, Kepel is generally at a loss for solutions. He decries a complacent French elite, with its disregard for genuine expertise (evidenced by the decline in institutional academic support for Islamicists and Arabists) and the narrow, relatively impenetrable way in which it perpetuates itself, chiefly with a single school (the École nationale d’administration) that practically every French politician must attend. Despite France’s admirable republican values, this insularity has made the process of assimilation rather difficult. But other than wishing that the public education system become more effective and inclusive at instilling republican values, Kepel provides little in the way of suggestions as to how France might emerge from this mess. That a scholar of such erudition and humanity can do little but throw up his hands and issue a sigh of despair cannot bode well. The third-generation jihad owes as much to the political breakdown in France as it does to the meltdown in the Middle East. Defeating this two-headed beast requires a new and comprehensive playbook: the West’s answer to The Global Islamic Resistance Call. That book has yet to be written.
President Trump, in case you haven’t noticed, has a tendency to exaggerate. Nothing is “just right” or “meh” for him. Buildings, crowds, election results, and military campaigns are always outsized, gargantuan, larger, and more significant than you might otherwise assume. “People want to believe that something is the biggest and the greatest and the most spectacular,” he wrote 30 years ago in The Art of the Deal. “I call it truthful hyperbole. It’s an innocent form of exaggeration—and a very effective form of promotion.”
So effective, in fact, that the press has picked up the habit. Reporters and editors agree with the president that nothing he does is ordinary. After covering Trump for more than two years, they still can’t accept him as a run-of-the-mill politician. And while there are aspects of Donald Trump and his presidency that are, to say the least, unusual, the media seem unable to distinguish between the abnormal and significant—firing the FBI director in the midst of an investigation into one’s presidential campaign, for example—and the commonplace.
Consider the fiscal deal President Trump struck with Democratic leaders in early September.
On September 6, the president held an Oval Office meeting with Vice President Pence, Treasury Secretary Mnuchin, and congressional leaders of both parties. He had to find a way to (a) raise the debt ceiling, (b) fund the federal government, and (c) spend money on hurricane relief. The problem is that a bloc of House Republicans won’t vote for (a) unless the increase is accompanied by significant budget cuts, which interferes with (b) and (c). To raise the debt ceiling, then, requires Democratic votes. And the debt ceiling must be raised. “There is zero chance—no chance—we will not raise the debt ceiling,” Senate Majority Leader Mitch McConnell said in August.
The meeting went like this. First, House Speaker Paul Ryan asked for an 18-month increase in the debt ceiling so Republicans wouldn’t have to vote again on the matter until after the midterm elections. Democrats refused. The bargaining continued until Ryan asked for a six-month increase. The Democrats remained stubborn. So Trump, always willing to kick a can down the road, interrupted Mnuchin to offer a three-month increase, a continuing resolution that will keep the government open through December, and about $8 billion in hurricane money. The Democrats said yes.
That, anyway, is what happened. But the media are not satisfied to report what happened. They want—they need—to tell you what it means. And what does it mean? Well, they aren’t really sure. But it’s something big. It’s something spectacular. For example:
1. “Trump Bypasses Republicans to Strike Deal on Debt Limit and Harvey Aid” was the headline of a story for the New York Times by Peter Baker, Thomas Kaplan, and Michael D. Shear. “The deal to keep the government open and paying its debts until Dec. 15 represented an extraordinary public turn for the president, who has for much of his term set himself up on the right flank of the Republican Party,” their article began. Fair enough. But look at how they import speculation and opinion into the following sentence: “But it remained unclear whether Mr. Trump’s collaboration with Democrats foreshadowed a more sustained shift in strategy by a president who has presented himself as a master dealmaker or amounted to just a one-time instinctual reaction of a mercurial leader momentarily eager to poke his estranged allies.”
2. “The decision was one of the most fascinating and mysterious moves he’s made with Congress during eight months in office,” reported Jeff Zeleny, Dana Bash, Deirdre Walsh, and Jeremy Diamond for CNN. Thanks for sharing!
3. “Trump budget deal gives GOP full-blown Stockholm Syndrome,” read the headline of Tina Nguyen’s piece for Vanity Fair. “Donald Trump’s unexpected capitulation to new best buds ‘Chuck and Nancy’ has thrown the Grand Old Party into a frenzy as Republicans search for explanations—and scapegoats.”
4. “For Conservatives, Trump’s Deal with Democrats Is Nightmare Come True,” read the headline for a New York Times article by Jeremy W. Peters and Maggie Haberman. “It is the scenario that President Trump’s most conservative followers considered their worst nightmare, and on Wednesday it seemed to come true: The deal-making political novice, whose ideology and loyalty were always fungible, cut a deal with Democrats.”
5. “Trump sides with Democrats on fiscal issues, throwing Republican plans into chaos,” read the Washington Post headline the day after the deal was announced. “The president’s surprise stance upended sensitive negotiations over the debt ceiling and other crucial policy issues this fall and further imperiled his already tenuous relationships with Senate Majority Leader Mitch McConnell and House Speaker Paul Ryan.” Yes, the negotiations were upended. Then they made a deal.
6. “Although elected as a Republican last year,” wrote Peter Baker of the Times, “Mr. Trump has shown in the nearly eight months in office that he is, in many ways, the first independent to hold the presidency since the advent of the two-party system around the time of the Civil War.” The title of Baker’s news analysis: “Bound to No Party, Trump Upends 150 Years of Two-Party Rule.” One hundred and fifty years? Why not 200?
The journalistic rule of thumb used to be that an article describing a political, social, or cultural trend requires at least three examples. Not while covering Trump. If Trump does something, anything, you should feel free to inflate its importance beyond all recognition. And stuff your “reporting” with all sorts of dramatic adjectives and frightening nouns: fascinating, mysterious, unexpected, extraordinary, nightmare, chaos, frenzy, and scapegoats. It’s like a Vince Flynn thriller come to life.
The case for the significance of the budget deal would be stronger if there were a consensus about whom it helped. There isn’t one. At first the press assumed Democrats had won. “Republicans left the Oval Office Wednesday stunned,” reported Rachael Bade, Burgess Everett, and Josh Dawsey of Politico. Another trio of Politico reporters wrote, “In the aftermath, Republicans seethed privately and distanced themselves publicly from the deal.” Republicans were “stunned,” reported Kristina Peterson, Siobhan Hughes, and Louise Radnofsky of the Wall Street Journal. “Meet the swamp: Donald Trump punts September agenda to December after meeting with Congress,” read the headline of Charlie Spiering’s Breitbart story.
By the following week, though, these very outlets had decided the GOP was looking pretty good. “Trump’s deal with Democrats bolsters Ryan—for now,” read the Politico headline on September 11. “McConnell: No New Debt Ceiling Vote until ‘Well into 2018,’” reported the Washington Post. “At this point…picking a fight with Republican leaders will only help him,” wrote Gerald Seib in the Wall Street Journal. “Trump has long warned that he would work with Democrats, if necessary, to fulfill his campaign promises. And Wednesday’s deal is a sign that he intends to follow through on that threat,” wrote Breitbart’s Joel Pollak.
The sensationalism, the conflicting interpretations, and the visceral language are dizzying. We have so many reporters chasing the same story that each feels compelled to gussy up a quotidian budget negotiation until it resembles the Ribbentrop–Molotov pact, and none feel it necessary to apply to their own reporting the scrutiny and incredulity they apply to Trump. The truth is that no one knows what this agreement portends. Nor is it the job of a reporter to divine the meaning of current events like an augur of Rome. Sometimes a cigar is just a cigar. And a deal is just a deal.
Remembering something wonderful
Not surprisingly, many well-established performers were left in the lurch by the rise of the new media. Moreover, some vaudevillians who, like Fred Allen, had successfully reinvented themselves for radio were unable to make the transition to TV. But a handful of exceptionally talented performers managed to move from vaudeville to radio to TV, and none did it with more success than Jack Benny, whose feigned stinginess, scratchy violin playing, slightly effeminate demeanor, and preternaturally exact comic timing made him one of the world’s most beloved performers. After establishing himself in vaudeville, he became the star of a comedy series, The Jack Benny Program, that aired continuously, first on radio and then TV, from 1932 until 1965. Save for Bob Hope, no other comedian of his time was so popular.
With the demise of nighttime network radio as an entertainment medium, the 931 weekly episodes of The Jack Benny Program became the province of comedy obsessives—and because Benny’s TV series was filmed in black-and-white, it is no longer shown in syndication with any regularity. And while he also made Hollywood films, some of which were box-office hits, only one, Ernst Lubitsch’s To Be or Not to Be (1942), is today seen on TV other than sporadically.
Nevertheless, connoisseurs of comedy still regard Benny, who died in 1974, as a giant, and numerous books, memoirs, and articles have been published about his life and art. Most recently, Kathryn H. Fuller-Seeley, a professor at the University of Texas at Austin, has brought out Jack Benny and the Golden Age of Radio Comedy, the first book-length primary-source academic study of The Jack Benny Program and its star.1 Fuller-Seeley’s genuine appreciation for Benny’s work redeems her anachronistic insistence on viewing it through the fashionable prism of gender- and race-based theory, and her book, though sober-sided to the point of occasional starchiness, is often quite illuminating.
Most important of all, off-the-air recordings of 749 episodes of the radio version of The Jack Benny Program survive in whole or part and can easily be downloaded from the Web. As a result, it is possible for people not yet born when Benny was alive to hear for themselves why he is still remembered with admiration and affection—and why one specific aspect of his performing persona continues to fascinate close observers of the American scene.

Born Benjamin Kubelsky in Chicago in 1894, Benny was the son of Eastern European émigrés (his father was from Poland, his mother from Lithuania). He started studying violin at six and had enough talent to pursue a career in music, but his interests lay elsewhere, and by the time he was a teenager, he was working in vaudeville as a comedian who played the violin as part of his act. Over time he developed into a “monologist,” the period term for what we now call a stand-up comedian, and he began appearing in films in 1929 and on network radio three years after that.
Radio comedy, like silent film, is now an obsolete art form, but the program formats that it fostered in the ’20s and ’30s all survived into the era of TV, and some of them flourish to this day. One, episodic situation comedy, was developed in large part by Jack Benny and his collaborators. Benny and Harry Conn, his first full-time writer, turned his weekly series, which started out as a variety show, into a weekly half-hour playlet featuring a regular cast of characters augmented by guest stars. Such playlets, relying as they did on a setting that was repeated from week to week, were easier to write than the free-standing sketches favored by Allen, Hope, and other ex-vaudevillians, and by the late ’30s, the sitcom had become a staple of radio comedy.
The process, as documented by Fuller-Seeley, was a gradual one. The Jack Benny Program never broke entirely with the variety format, continuing to feature both guest stars (some of whom, like Ronald Colman, ultimately became semi-regular members of the show’s rotating ensemble of players) and songs sung by Dennis Day, a tenor who joined the cast in 1939. Nor was it the first radio situation comedy: Amos ’n’ Andy, launched in 1928, was a soap-opera-style daily serial that also featured regular characters. Nevertheless, it was Benny who perfected the form, and his own character would become the prototype for countless later sitcom stars.
The show’s pivotal innovation was to turn Benny and the other cast members into fictionalized versions of themselves—they were the stars of a radio show called “The Jack Benny Program.” Sadye Marks, Benny’s wife, played Mary Livingstone, his sharp-tongued secretary, with three other characters added as the self-reflexive concept took shape. Don Wilson, the stout, genial announcer, came on board in 1934. He was followed in 1936 by Phil Harris, Benny’s roguish bandleader, and, in 1939, by Day, Harris’s simple-minded vocalist. To this team was added a completely fictional character, Rochester Van Jones, Benny’s raspy-voiced, outrageously impertinent black valet, played by Eddie Anderson, who joined the cast in 1938.
As these five talented performers coalesced into a tight-knit ensemble, the jokey, vaudeville-style sketch comedy of the early episodes metamorphosed into sitcom-style scripts that portrayed their offstage lives, as well as the making of the show itself. Scarcely any conventional jokes were told, nor did Benny’s writers employ the topical and political references in which Allen and Hope specialized. Instead, the show’s humor arose almost entirely from the close interplay of character and situation.
Benny was not solely responsible for the creation of this format, which was forged by Conn and perfected by his successors. Instead, he doubled as the star and producer—or, to use the modern term, show runner—closely supervising the writing of the scripts and directing the performances of the other cast members. In addition, he and Conn turned the character of Jack Benny from a sophisticated vaudeville monologist into the hapless butt of the show’s humor, a vain, sexually inept skinflint whose character flaws were ceaselessly twitted by his colleagues, who in turn were given most of the biggest laugh lines.
This latter innovation was a direct reflection of Benny’s real-life personality. Legendary for his voluble appreciation of other comedians, he was content to respond to the wisecracking of his fellow cast members with exquisitely well-timed interjections like “Well!” and “Now, cut that out,” knowing that the comic spotlight would remain focused on the man of whom they were making fun and secure in the knowledge that his own comic personality was strong enough to let them shine without eclipsing him in the process.
And with each passing season, the fictional personalities of Benny and his colleagues became ever more firmly implanted in the minds of their listeners, thus allowing the writers to get laughs merely by alluding to their now-familiar traits. At the same time, Benny and his writers never stooped to coasting on their familiarity. Even the funniest of the “cheap jokes” that were their stock-in-trade were invariably embedded in carefully honed dramatic situations that heightened their effectiveness.
A celebrated case in point is the best-remembered laugh line in the history of The Jack Benny Program, heard in a 1948 episode in which a burglar holds Benny up on the street. “Your money or your life,” the burglar says—to which Jack replies, after a very long pause, “I’m thinking it over!” What makes this line so funny is, of course, our awareness of Benny’s stinginess, reinforced by a decade and a half of constant yet subtly varied repetition. What is not so well remembered is that the line is heard toward the end of an episode that aired shortly after Ronald Colman won an Oscar for his performance in A Double Life. Inspired by this real-life event, the writers concocted an elaborately plotted script in which Benny talks Colman (who played his next-door neighbor on the show) into letting him borrow the Oscar to show to Rochester. It is on his way home from this errand that Benny is held up, and the burglar not only robs him of his money but also steals the statuette, a situation that was resolved to equally explosive comic effect in the course of two subsequent episodes.
No mere joke-teller could have performed such dramatically complex scripts week after week with anything like Benny’s effectiveness. The secret of The Jack Benny Program was that its star, fully aware that he was not “being himself” but playing a part, did so with an actor’s skill. This was what led Ernst Lubitsch to cast him in To Be or Not to Be, in which he plays a mediocre Shakespearean tragedian, a character broadly related to but still quite different from the one who appeared on his own radio show. As Lubitsch explained to Benny, who was skeptical about his ability to carry off the part:
A clown—he is a performer what is doing funny things. A comedian—he is a performer what is saying funny things. But you, Jack, you are an actor, you are an actor playing the part of a comedian and this you are doing very well.
To Be or Not to Be also stands out from the rest of Benny’s work because he plays an identifiably Jewish character. The Jack Benny character that he played on radio and TV, by contrast, was never referred to or explicitly portrayed as Jewish. To be sure, most listeners were in no doubt of his Jewishness, and not merely because Benny made no attempt in real life to conceal his ethnicity, of which he was by all accounts proud. The Jack Benny Program was written by Jews, and the ego-puncturing insults with which their scripts were packed, as well as the schlemiel-like aspect of Benny’s “fall guy” character, were quintessentially Jewish in style.
As Benny explained in a 1948 interview cited by Fuller-Seeley:
The humor of my program is this: I’m a big shot, see? I’m fast-talking. I’m a smart guy. I’m boasting about how marvelous I am. I’m a marvelous lover. I’m a marvelous fiddle player. Then, five minutes after I start shooting off my mouth, my cast makes a shmo out of me.
Even so, his avoidance of specific Jewish identification on the air is noteworthy precisely because his character was a miser. At a time when overt anti-Semitism was still common in America, it is remarkable that Benny’s comic persona was based in large part on an anti-Semitic stereotype—yet one that seems not to have inspired any anti-Semitic attacks on Benny himself. When, in 1945, his writers came up with the idea of an “I Can’t Stand Jack Benny Because . . . ” write-in campaign, they received 270,000 entries. Only three made mention of his Jewishness.
As for the winning entry, submitted by a California lawyer, it says much about what insulated Benny from such attacks: “He fills the air with boasts and brags / And obsolete, obnoxious gags / The way he plays his violin / Is music’s most obnoxious sin / His cowardice alone, indeed, / Is matched by his obnoxious greed / And all the things that he portrays / Show up MY OWN obnoxious ways.” It is clear that Benny’s foibles were seen by his listeners not as particular but universal, just as there was no harshness in the razzing of his fellow cast members, who very clearly loved the Benny character in spite of his myriad flaws. So, too, did the American people. Several years after his TV series was canceled, a corporation that was considering using him as a spokesman commissioned a national poll to find out how popular he was. It learned that only 3 percent of the respondents disliked him.
Therein lay Benny’s triumph: He won total acceptance from the American public and did so by embodying a Jewish stereotype from which the sting of prejudice had been leached. Far from being a self-hating whipping boy for anti-Semites, he turned himself into WASP America’s Jewish uncle, preposterous yet lovable.

When the bottom fell out of network radio, Benny negotiated the move to TV without a hitch, debuting on the small screen in 1950 and bringing the radio version of The Jack Benny Program to a close five years later, making it one of the very last radio comedy series to shut up shop. Even after his weekly TV series was finally canceled by CBS in 1965, he continued to star in well-received one-shot specials on NBC.
But Benny’s TV appearances, for all their charm, were never quite equal in quality to his radio work, which is why he clung to the radio version of The Jack Benny Program until network radio itself went under: Better than anyone else, he knew how good the show had been. For the rest of his life, he lived off the accumulated comic capital built up by 21 years of weekly radio broadcasts.
Now, at long last, he belongs to the ages, and The Jack Benny Program is a museum piece. Yet it remains hugely influential, albeit at one or more removes from the original. From The Dick Van Dyke Show and The Danny Thomas Show to Seinfeld, Everybody Loves Raymond, and The Larry Sanders Show, every ensemble-cast sitcom whose central character is a fictionalized version of its star is based on Benny’s example. And now that the ubiquity of the Web has made the radio version of his series readily accessible for the first time, anyone willing to make the modest effort necessary to seek it out is in a position to discover that The Jack Benny Program, six decades after it left the air, is still as wonderfully, benignly funny as it ever was, a monument to the talent of the man who, more than anyone else, made it so.
Review of 'The Transferred Life of George Eliot' By Philip Davis
Not that there’s any danger these theoretically protesting students would have read George Eliot’s works—not even the short one, Silas Marner (1861), which in an earlier day was assigned to high schoolers. I must admit I didn’t find my high-school reading of Silas Marner a pleasant experience—sports novels for boys like John R. Tunis’s The Kid from Tomkinsville were inadequate preparation. I must confess, too, that when I was in graduate school, determined to study 17th-century English verse, my reaction to the suggestion that I should also read Middlemarch (1871–72) was “What?! An 800-page novel by the guy who wrote Silas Marner?” A friend patiently explained that “the guy” was actually Mary Ann Evans, born in 1819, died in 1880. Partly because she was living in sin with the literary jack-of-all-trades George Henry Lewes (legally and irrevocably bound to his estranged wife), she adopted “George Eliot” as a protective pseudonym when, in her 1857 debut, she published Scenes of Clerical Life.
I did, many times over and with awe and delight, go on to read Middlemarch and the seven other novels, often in order to teach them to college students. Students have become less and less receptive over the years. Forget modern-day objections to George Eliot’s complex political or religious views. Adam Bede (1859) and The Mill on the Floss (1860) were too hefty, and the triple-decker Middlemarch and Daniel Deronda, even if I set aside three weeks for them, rarely got finished.
The middle 20th century was perhaps a more propitious time for appreciating George Eliot, Henry James, and other 19th-century English and American novelists. Influential teachers like F.R. Leavis at Cambridge and Lionel Trilling at Columbia were then working hard to persuade students that the study of literature, not just poetry and drama but also fiction, matters both to their personal lives—the development of their sensibility or character—and to their wider society. The “moral imagination” that created Middlemarch enriches our minds by dramatizing the complications—the frequent blurring of good and evil—in our lives. Great novels help us cope with ambiguities and make us more tolerant of one another. Many of Leavis’s and Trilling’s students became teachers themselves, and for several decades the feeling of cultural urgency was sustained. In the 1970s, though, between the leftist emphasis on literature as “politics by other means” and the deconstructionist denial of the possibility of any knowledge, literary or otherwise, independent of political power, the high seriousness of Leavis and Trilling began to fade.
The study of George Eliot and her life has gone through many stages. Directly after her death came the sanitized, hagiographic “life and letters” by J.W. Cross, the much younger man she married after Lewes’s death. Gladstone called it “a Reticence in three volumes.” The three volumes helped spark, if they didn’t cause, the long reaction against the Victorian sages generally that culminated in the dismissively satirical work of the Bloomsbury biographer and critic Lytton Strachey in his immensely influential Eminent Victorians (1918). Strachey’s mistreatment of his forebears was, with regard to George Eliot at least, tempered almost immediately by Virginia Woolf. It was Woolf who in 1919 provocatively said that Middlemarch had been “the first English novel for adults.” Eventually, the critical tide against George Eliot was decisively reversed in the ’40s by Joan Bennett and Leavis, who made the inarguable case for her genuine and lasting achievement. That period of correction culminated in the 1960s with Gordon S. Haight’s biography and with interpretive studies by Barbara Hardy and W.J. Harvey. Books on George Eliot over the last four decades have largely been written by specialists for specialists—on her manuscripts or working notes, and on her affiliations with the scientists, social historians, and competing novelists of her day.
The same is true, only more so, of the books written, with George Eliot as the ostensible subject, to promote deconstructionist or feminist agendas. Biographies have done a better job appealing to the common reader, not least because the woman’s own story is inherently compelling. The question right now is whether a book combining biographical and interpretive insight—one “pitched,” as publishers like to say, not just at experts but at the common reader—is past praying for.
Philip Davis, a Victorian scholar and an editor at Oxford University Press, hopes not. His The Transferred Life of George Eliot—transferred, that is, from her own experience into her letters, journals, essays, and novels, and beyond them into us—deserves serious attention. Davis is conscious that George Eliot called biographies of writers “a disease of English literature,” both overeager to discover scandals and too inclined to substitute day-to-day travels, relationships, dealings with publishers and so on, for critical attention to the books those writers wrote. Davis therefore devotes himself to George Eliot’s writing. Alas, he presumes rather too much knowledge on the reader’s part of the day-to-day as charted in Haight’s marvelous life. (A year-by-year chronology at the front of the book would have helped even his fellow Victorianists.)
As for George Eliot’s writing, Davis is determined to refute “what has been more or less said . . . in the schools of theory for the last 40 years—that 19th-century realism is conservatively bland and unimaginative, bourgeois and parochial, not truly art at all.” His argument for the richness, breadth, and art of George Eliot’s realism—her factual and sympathetic depiction of poor and middling people, without omitting a candid representation of the rich—is most convincing. What looms largest, though, is the realist, the woman herself—the Mary Ann Evans who, from the letters to the novels, became first Marian Evans the translator and essayist and then later “her own greatest character”: George Eliot the novelist. Davis insists that “the meaning of that person”—not merely the voice of her omniscient narrators but the omnipresent imagination that created the whole show—“has not yet exhausted its influence nor the larger future life she should have had, and may still have, in the world.”
The transference of George Eliot’s experience into her fiction is unquestionable: In The Mill on the Floss, for example, Mary Ann is Maggie, and her brother Isaac is Tom Tulliver. Davis knows that a better word might be transmutation, as George Eliot had, in Henry James’s words, “a mind possessed,” for “the creations which brought her renown were of the incalculable kind, shaped themselves in mystery, in some intellectual back-shop or secret crucible, and were as little as possible implied in the aspect of her life.” No data-accumulating biographer, even the most exhaustive, can account for that “incalculable . . . mystery.”
Which is why Davis, like a good teacher, gives us exercises in “close reading.” He pauses to consider how a George Eliot sentence balances or turns on an easy-to-skip-over word or phrase—the balance or turn often representing a moment when the novelist looks at what’s on the underside of the cards.
George Eliot’s style is subtle because her theme is subtle. Take D.H. Lawrence’s favorite heroine, the adolescent Maggie Tulliver. The external event in The Mill on the Floss may be the girl’s impulsively cutting off her unruly hair to spite her nagging aunts, or the young woman’s drifting down the river with a superficially attractive but truly impossible boyfriend. But the real “action” is Maggie’s internal self-blame and self-assertion. No Victorian novelist was better than George Eliot at tracing the psychological development of, say, a husband and wife who realize they married each other for shallow reasons, are unhappy, and now must deal with the ordinary necessities of balancing the domestic budget—Lydgate and Rosamond in Middlemarch—or, in the same novel, the religiously inclined Dorothea’s mistaken marriage to the old scholar Casaubon. That mistake precipitates not merely disenchantment and an unconscious longing for love with someone else, but (very finely) a quest for a religious explanation of and guide through her quandary.
It’s the religio-philosophical side of George Eliot about which Davis is strongest—and weakest. Her central theological idea, if one may simplify, was that the God of the Bible didn’t exist “out there” but was a projection of the imagination of the people who wrote it. Jesus wasn’t, in Davis’s characterization of her view, “the impervious divine, but [a man who] shed tears and suffered,” and died feeling forsaken. “This deep acceptance of so-called weakness was what most moved Marian Evans in her Christian inheritance. It was what God was for.” That is, the character of Jesus, and the dramatic play between him and his Father, expressed the human emotions we and George Eliot are all too familiar with. The story helps reconcile us to what is, finally, inescapable suffering.
George Eliot came to this demythologized understanding not only of Judaism and Christianity but of all religions through her contact first with a group of intellectuals who lived near Coventry, then with two Germans she translated: David Friedrich Strauss, whose 1,500-page Life of Jesus Critically Examined (1835–36) was for her a slog, and Ludwig Feuerbach, whose Essence of Christianity (1841) was for her a joy. Also, in the search for the universal morality that Strauss and Feuerbach believed Judaism and Christianity expressed mythically, there was Spinoza’s utterly non-mythical Ethics (1677). It was seminal for her—offering, as Davis says, “the intellectual origin for freethinking criticism of the Bible and for the replacement of religious superstition and dogmatic theology by pure philosophic reason.” She translated it into English, though her version did not appear until 1981.
I wish Davis had left it there, but he takes it too far. He devotes more than 40 pages—a tenth of the whole book—to her three translations, taking them as a mother lode of ideational gold whose tailings glitter throughout her fiction. These 40 pages are followed by 21 devoted to Herbert Spencer, the Victorian hawker of theories-of-everything (his 10-volume System of Synthetic Philosophy addresses biology, psychology, sociology, and ethics). She threw herself at the feet of this intellectual huckster, and though he rebuffed her painfully amorous entreaties, she never ceased revering him. Alas, Spencer was a stick—the kind of philosopher who was incapable of emotion. And she was his intellectual superior in every way. The chapter is largely unnecessary.
The book comes back to life when Davis turns to George Henry Lewes, the man who gave Mary Ann Evans the confidence to become George Eliot—perhaps the greatest act of loving mentorship in all of literature. Like many prominent Victorians, Lewes dabbled in all the arts and sciences, publishing highly readable accounts of them for a general audience. His range was as wide as Spencer’s, but his personality and writing had an irrepressible verve that Spencer could only have envied. Lewes was a sort of Stephen Jay Gould yoked to Daniel Boorstin, popularizing other people’s findings and concepts, and coming up with a few of his own. He regarded his Sea-Side Studies (1860) as “the book . . . which was to me the most unalloyed delight,” not least because Marian, whom he called Polly, had helped gather the data. She told a friend, “There is so much happiness condensed in it! Such scrambles over rocks, and peeping into clear pool [sic], and strolls along the pure sands, and fresh air mingling with fresh thoughts.” In his remarkably intelligent 1864 biography of Goethe, Lewes remarks that the poet “knew little of the companionship of two souls striving in emulous spirit of loving rivalry to become better, to become wiser, teaching each other to soar.” Such a companionship Lewes and George Eliot had in spades, and some of Davis’s best passages describe it.
Regrettably, Davis also offers many passages well below the standard of his best—needlessly repeating an already established point or obfuscating the obvious. Still, The Transferred Life is the most formidably instructive, and certainly the most complete, life-and-works treatment of George Eliot we have.