In a recent interview with the New Republic, Paul Warnke, the newly appointed head of the Arms Control and Disarmament Agency, responded as follows to the question of how the United States ought to react to indications that the Soviet leadership thinks it possible to fight and win a nuclear war. “In my view,” he replied, “this kind of thinking is on a level of abstraction which is unrealistic. It seems to me that instead of talking in those terms, which would indulge what I regard as the primitive aspects of Soviet nuclear doctrine, we ought to be trying to educate them into the real world of strategic nuclear weapons, which is that nobody could possibly win.”1
Even after allowance has been made for Mr. Warnke’s notoriously careless syntax, puzzling questions remain. On what grounds does he, a Washington lawyer, presume to “educate” the Soviet general staff, composed of professional soldiers who thirty years ago defeated the Wehrmacht—and, of all things, about the “real world of strategic nuclear weapons,” of which they happen to possess a considerably larger arsenal than we? Why does he consider them children who ought not to be “indulged”? And why does he chastise for what he regards as a “primitive” and unrealistic strategic doctrine not those who hold it, namely the Soviet military, but Americans who worry about their holding it?
Be all that as it may, even if Mr. Warnke refuses to take Soviet strategic doctrine seriously, it behooves us to take Mr. Warnke’s views of Soviet doctrine seriously. He not only will head our SALT II team; his thinking as articulated in the above statement and on other occasions reflects all the conventional wisdom of the school of strategic theory dominant in the United States, one of whose leading characteristics is scorn for Soviet views on nuclear warfare.
American and Soviet nuclear doctrines, it needs stating at the outset, are starkly at odds. The prevalent U.S. doctrine holds that an all-out war between countries in possession of sizable nuclear arsenals would be so destructive as to leave no winner; thus resort to arms has ceased to represent a rational policy option for the leaders of such countries vis-à-vis one another. The classic dictum of Clausewitz, that war is politics pursued by other means, is widely believed in the United States to have lost its validity after Hiroshima and Nagasaki. Soviet doctrine, by contrast, emphatically asserts that while an all-out nuclear war would indeed prove extremely destructive to both parties, its outcome would not be mutual suicide: the country better prepared for it and in possession of a superior strategy could win and emerge a viable society. “There is profound erroneousness and harm in the disorienting claims of bourgeois ideologies that there will be no victor in a thermonuclear world war,” thunders an authoritative Soviet publication.2 The theme is mandatory in the current Soviet military literature. Clausewitz, buried in the United States, seems to be alive and prospering in the Soviet Union.
The predisposition of the American strategic community is to shrug off this fundamental doctrinal discrepancy. American doctrine has been and continues to be formulated and implemented by and large without reference to its Soviet counterpart. It is assumed here that there exists one and only one “rational” strategy appropriate to the age of thermonuclear weapons, and that this strategy rests on the principle of “mutual deterrence” developed in the United States some two decades ago. Evidence that the Russians do not share this doctrine which, as its name indicates, postulates reciprocal attitudes, is usually dismissed with the explanation that they are clearly lagging behind us: given time and patient “education,” they will surely come around.
It is my contention that this attitude rests on a combination of arrogance and ignorance; that it is dangerous; and that it is high time to start paying heed to Soviet strategic doctrine, lest we end up deterring no one but ourselves. There is ample evidence that the Soviet military say what they mean, and usually mean what they say. When the recently deceased Soviet Minister of Defense, Marshal Grechko, assures us: “We have never concealed, and do not conceal, the fundamental, principal tenets of our military doctrine,”3 he deserves a hearing. This is especially true in view of the fact that Soviet military deployments over the past twenty years make far better sense in the light of Soviet doctrine, “primitive” and “unrealistic” as the latter may appear, than when reflected in the mirror of our own doctrinal assumptions.
Mistrust of the military professional, combined with a pervasive conviction, typical of commercial societies, that human conflicts are at bottom caused by misunderstanding and ought to be resolved by negotiations rather than force, has worked against serious attention to military strategy by the United States. We have no general staff; we grant no higher degrees in “military science”; and, except for Admiral Mahan, we have produced no strategist of international repute. America has tended to rely on its insularity to protect it from aggressors, and on its unique industrial capacity to help crush its enemies once war was under way. The United States is accustomed to waging wars of its own choosing and on its own terms. It lacks an ingrained strategic tradition. In the words of one historian, Americans tend to view both military strategy and the armed forces as something to be “employed intermittently to destroy occasional and intermittent threats posed by hostile powers.”4
This approach to warfare has had a number of consequences. The United States wants to win its wars quickly and with the smallest losses in American lives. It is disinclined, therefore, to act on protracted and indirect strategies, or to engage in limited wars and wars of attrition. Once it resorts to arms, it prefers to mobilize the great might of its industrial plant to produce vast quantities of the means of destruction with which in the shortest possible time to undermine the enemy’s will and ability to continue the struggle. Extreme reliance on technological superiority, characteristic of U.S. warfare, is the obverse side of America’s extreme sensitivity to its own casualties; so is indifference to the casualties inflicted on the enemy. The strategic bombing campaigns waged by the U.S. Air Force and the RAF against Germany and Japan in World War II excellently implemented this general attitude. Paradoxically, America’s dread of war and casualties pushes it to adopt some of the most brutal forms of warfare, involving the indiscriminate destruction of the enemy’s homeland with massive civilian deaths.
These facts must be borne in mind to understand the way the United States reacted to the advent of the nuclear bomb. The traditional military services—the army and the navy—whose future seemed threatened by the invention of a weapon widely believed to have revolutionized warfare and rendered conventional forces obsolete, resisted extreme claims made on behalf of the bomb. But they were unable to hold out for very long. An alliance of politicians and scientists, backed by the Air Force, soon overwhelmed them. “Victory through Air Power,” a slogan eminently suited to the American way of war, carried all before it once bombs could be devised whose explosive power was measured in kilotons and megatons.
The U.S. Army tried to argue after Hiroshima and Nagasaki that the new weapons represented no fundamental breakthrough. No revolution in warfare had occurred, its spokesmen claimed: atomic bombs were merely a more efficient species of the aerial bombs used in World War II, and in themselves no more able to ensure victory than the earlier bombs had been. As evidence, they could point to the comprehensive U.S. Strategic Bombing Surveys carried out after the war to assess the effects of the bombing campaigns. These had demonstrated that saturation raids against German and Japanese cities had neither broken the enemy’s morale nor paralyzed his armaments industry; indeed, German productivity kept on rising in the face of intensified Allied bombing, attaining its peak in the fall of 1944, on the eve of capitulation.
And when it came to horror, atomic bombs had nothing over conventional ones: as against the 72,000 casualties caused by the atomic bomb in Hiroshima, conventional raids carried out against Tokyo and Dresden in 1945 had caused 84,000 and 135,000 fatalities, respectively. Furthermore, those who sought to minimize the impact of the new weapon argued, atomic weapons in no sense obviated the need for sizable land and sea forces. For example, General Ridgway, as Chief of Staff in the early 1950’s, maintained that war waged with tactical nuclear weapons would demand larger rather than smaller field armies since these weapons were more complicated, since they would produce greater casualties, and since the dispersal of troops required by nuclear tactics called for increasing the depth of the combat zone.5
As we shall note below, similar arguments disputing the revolutionary character of the nuclear weapon surfaced in the Soviet Union, and there promptly came to dominate strategic theory. In the United States, they were just as promptly silenced by a coalition of groups each of which it suited, for its own reasons, to depict the atomic bomb as the “absolute weapon” that had, in large measure, rendered traditional military establishments redundant and traditional strategic thinking obsolete.
Once World War II was over, the United States was most eager to demobilize its armed forces. Between June 1945 and June 1946, the U.S. Army reduced its strength from 8.3 to 1.9 million men; comparable manpower cuts were achieved in the navy and air force. Little more than a year after Germany’s surrender, the military forces of the United States, which at their peak had stood at 12.3 million men, were cut down to 3 million; two years later they declined below 2 million. The demobilization proceeded at a pace (if not in a manner) reminiscent of the dissolution of the Russian army in the revolutionary year of 1917. Nothing could have stopped this mass of humanity streaming homeward. To most Americans peacetime conditions meant reversion to a skeletal armed force.
Yet, at the same time, growing strains in the wartime alliance with the Soviet Union, and mounting evidence that Stalin was determined to exploit the chaotic conditions brought about by the collapse of the Axis powers to expand his domain, called for an effective military force able to deter the Soviets. The United States could not fulfill its role as leader of the Western coalition without an ability to project its military power globally.
In this situation, the nuclear weapon seemed to offer an ideal solution: the atomic bomb could hardly have come at a better time from the point of view of U.S. international commitments. Here was a device so frighteningly destructive, it was believed, that the mere threat of its employment would serve to dissuade would-be aggressors from carrying out their designs. Once the Air Force received the B-36, the world’s first intercontinental bomber, the United States acquired the ability to threaten the Soviet Union with devastating punishment without, at the same time, being compelled to maintain a large and costly standing army.
Reliance on the nuclear deterrent became more imperative than ever after the conclusion of the Korean war, in the course of which U.S. defense expenditures had been sharply driven up. President Eisenhower had committed himself to a policy of fiscal restraint. He wanted to cut the defense budget appreciably, and yet he had to do so without jeopardizing either America’s territorial security or its worldwide commitments. In an effort to reconcile these contradictory desires, the President and his Secretary of State, John Foster Dulles, enunciated in the winter of 1953-54 a strategic doctrine which to an unprecedented degree based the country’s security on a single weapon, the nuclear deterrent. In an address to the United Nations in December 1953, Eisenhower argued that since there was no defense against nuclear weapons (i.e., thermonuclear or hydrogen bombs, which both countries were then beginning to produce), war between the two “atomic colossi” would leave no victors and probably cause the demise of civilization. A month later, Dulles enunciated what came to be known as the doctrine of “massive retaliation.” The United States, he declared, had decided “to depend primarily upon a great capacity to retaliate, instantly, by means and at places of our choosing.” Throughout his address, Dulles emphasized the fiscal benefits of such a strategy, “more basic security at less cost.”
The Eisenhower-Dulles formula represented a neat compromise between America’s desires to reduce the defense budget and simultaneously to retain the capacity to respond to Soviet threats. The driving force was not, however, military but budgetary: behind “massive retaliation” (as well as its offspring, “mutual deterrence”) lay fiscal imperatives. In the nuclear deterrent, the United States found a perfect resolution of the conflicting demands of domestic and foreign responsibilities. For this reason alone its adoption was a foregone conclusion: the alternatives were either a vast standing army or forfeiture of status as a leading world power. The Air Force enthusiastically backed the doctrine of massive retaliation. As custodian of the atomic bomb, it had a vested interest in a defense posture of which that weapon was the linchpin. And since in the first postwar decade the intercontinental bomber was the only available vehicle for delivering the bomb against an enemy like the Soviet Union, the Air Force could claim a goodly share of the defense budget built around the retaliation idea.
Although the Soviet Union exploded a fission bomb in 1949 and announced the acquisition of a fusion (or hydrogen) bomb four years later, the United States still continued for a while longer to enjoy an effective monopoly on nuclear retaliation, since the Soviet Union lacked the means of delivering quantities of such bombs against U.S. territory. That situation changed dramatically in 1957 when the Soviets launched the Sputnik. This event, which their propaganda hailed as a great contribution to the advancement of science (and ours as proof of the failures of the American educational system!), represented in fact a significant military demonstration, namely, the ability of the Russians to deliver nuclear warheads against the United States homeland, until then immune from direct enemy threats. At this point massive retaliation ceased to make much sense and before long yielded to the doctrine of “mutual deterrence.” The new doctrine postulated that inasmuch as both the Soviet Union and the United States possessed (or would soon possess) the means of destroying each other, neither country could rationally contemplate resort to war. The nuclear stockpiles of each were an effective deterrent which ensured that they would not be tempted to launch an attack.
This doctrine was worked out in great and sophisticated detail by a bevy of civilian experts employed by various government and private organizations. These physicists, chemists, mathematicians, economists, and political scientists came to the support of the government’s fiscally-driven imperatives with scientific demonstrations in favor of the nuclear deterrent. Current U.S. strategic theory was thus born of a marriage between the scientist and the accountant. The professional soldier was jilted.
A large part of the U.S. scientific community had been convinced as soon as the first atomic bomb was exploded that the nuclear weapon, which that community had conceived and helped to develop, had accomplished a complete revolution in warfare. This conclusion was reached without much reference to the analysis of the effects of atomic weapons carried out by the military, and indeed without consideration of the traditional principles of warfare. It represented, rather, an act of faith on the part of an intellectual community which held strong pacifist convictions and felt deep guilt at having participated in the creation of a weapon of such destructive power. As early as 1946, in an influential book sponsored by the Yale Institute of International Affairs, under the title The Absolute Weapon, a group of civilian strategic theorists enunciated the principles of the mutual-deterrence theory which subsequently became the official U.S. strategic doctrine. The principal points made in this work may be summarized as follows:
- Nuclear weapons are “absolute weapons” in the sense that they can cause unacceptable destruction, but also and above all because there exists against them no possible defense. When the aggressor is certain to suffer the same punishment as his victim, aggression ceases to make sense. Hence war is no longer a rational policy option, as it had been throughout human history. In the words of Bernard Brodie, the book’s editor: “Thus far the chief purpose of our military establishment had been to win wars. From now on its chief purpose must be to avert them. It can have almost no other useful purpose” (p. 76).
- Given the fact that the adjective “absolute” means, by definition, incapable of being exceeded or surpassed, in the nuclear age military superiority has become meaningless. As another contributor to the book, William T.R. Fox, expressed it: “When dealing with the absolute weapon, arguments based on relative advantage lose their point” (p. 181). From which it follows that the objective of modern defense policy should be not superiority in weapons, traditionally sought by the military, but “sufficiency”; just enough nuclear weapons to be able to threaten a potential aggressor with unacceptable retaliation—in other words, an “adequate” deterrent, no more, no less.
- Nuclear deterrence can become effective only if it restrains mutually—i.e., if the United States and the Soviet Union each can deter the other from aggression. An American monopoly on nuclear weapons would be inherently destabilizing, both because it could encourage the United States to launch a nuclear attack, and, at the same time, by making the Russians feel insecure, cause them to act aggressively. “Neither we nor the Russians can expect to feel even reasonably safe unless an atomic attack by one were certain to unleash a devastating atomic counterattack by the other,” Arnold Wolfers maintained (p. 135). In other words, to feel secure the United States actually required the Soviet Union to have the capacity to destroy it.
Barely one year after Hiroshima and three years before the Soviets were to acquire a nuclear bomb, The Absolute Weapon articulated the philosophical premises underlying the mutual-deterrence doctrine which today dominates U.S. strategic thinking. Modern strategy, in the opinion of its contributors, involved preventing wars rather than winning them, securing sufficiency in decisive weapons rather than superiority, and even ensuring the potential enemy’s ability to strike back. Needless to elaborate, these principles ran contrary to all the tenets of traditional military theory, which had always called for superiority in forces and viewed the objective of war to be victory. But then, if one had decided that the new weapons marked a qualitative break with all the weapons ever used in combat, one could reasonably argue that past military experience, and the theory based on it, had lost relevance. Implicit in these assumptions was the belief that Clausewitz and his celebrated formula proclaiming war an extension of politics were dead. Henry Kissinger, who can always be counted upon to utter commonplaces in the tone of prophetic revelation, announced Clausewitz’s obituary nearly twenty years after The Absolute Weapon had made the point, in these words: “The traditional mode of military analysis which saw in war a continuation of politics but with its own appropriate means is no longer applicable.”6
American civilian strategists holding such views gained the dominant voice in the formulation of U.S. strategic doctrine with the arrival in Washington in 1961 of Robert S. McNamara as President Kennedy’s Secretary of Defense. A prominent business executive specializing in finance and accounting, McNamara applied to the perennial problem of American strategy—how to maintain a credible global military posture without a large and costly military establishment—the methods of cost analysis. These had first been applied by the British during World War II under the name “operations research” and subsequently came to be adopted here as “systems analysis.” Weapons procurement was to be tested and decided by the same methods used to evaluate returns on investment in ordinary business enterprises. Mutual deterrence was taken for granted: the question of strategic posture reduced itself to the issue of which weapons systems would provide the United States with effective deterrence at the least expense. Under McNamara the procurement of weapons, decided on the basis of cost effectiveness, came in effect to direct strategy, rather than the other way around, as had been the case through most of military history. It is at this point that applied science in partnership with budgetary accountancy—a partnership which had developed U.S. strategic theory—also took charge of U.S. defense policy.
As worked out in the 1960’s, and still in effect today, American nuclear theory rests on these propositions: All-out nuclear war is not a rational policy option, since no winner could possibly emerge from such a war. Should the Soviet Union nevertheless launch a surprise attack on the United States, the latter would emerge with enough of a deterrent to devastate the Soviet Union in a second strike. Since such a retaliatory attack would cost the Soviet Union millions of casualties and the destruction of all its major cities, a Soviet first strike is most unlikely. Meaningful defenses against a nuclear attack are technically impossible and psychologically counterproductive; nuclear superiority is meaningless.
In accord with these assumptions, the United States in the mid-1960’s unilaterally froze its force of ICBM’s at 1,054 and dismantled nearly all its defenses against enemy bombers. Civil defense was all but abandoned, as was in time the attempt to create an ABM system which held out the possibility of protecting American missile sites against a surprise enemy attack. The Russians were watched benignly as they moved toward parity with the United States in the number of intercontinental launchers, and then proceeded to attain numerical superiority. The expectation was that as soon as the Russians felt themselves equal to the United States in terms of effective deterrence, they would stop further deployments. The frenetic pace of the Soviet nuclear build-up was explained first on the ground that the Russians had a lot of catching up to do, then on the ground that they had to consider the Chinese threat, and finally on the ground that they were inherently a very insecure people and should be allowed an edge in deterrent capability.
Whether mutual deterrence deserves the name of a strategy at all is a real question. As one student of the subject puts it:
Although commonly called a “strategy,” “assured destruction” was by itself an antithesis of strategy. Unlike any strategy that ever preceded it throughout the history of armed conflict, it ceased to be useful precisely where military strategy is supposed to come into effect: at the edge of war. It posited that the principal mission of the U.S. military under conditions of ongoing nuclear operations against [the continental United States] was to shut its eyes, grit its teeth, and reflexively unleash an indiscriminate and simultaneous reprisal against all Soviet aim points on a preestablished target list. Rather than deal in a considered way with the particular attack on hand so as to minimize further damage to the United States and maximize the possibility of an early settlement on reasonably acceptable terms, it had the simple goal of inflicting punishment for the Soviet transgression. Not only did this reflect an implicit repudiation of political responsibility, it also risked provoking just the sort of counterreprisal against the United States that a rational wartime strategy should attempt to prevent.7
I cite this passage merely to indicate that the basic postulates of U.S. nuclear strategy are not as self-evident and irrefutable as its proponents seem to believe; and that, therefore, their rejection by the Soviet military is not, in and of itself, proof that Soviet thinking is “primitive” and devoid of a sense of realism.
The principal differences between American and Soviet strategies are traceable to different conceptions of the role of conflict and its inevitable concomitant, violence, in human relations; and secondly, to different functions which the military establishment performs in the two societies.
In the United States, the consensus of the educated and affluent holds all recourse to force to be the result of an inability or an unwillingness to apply rational analysis and patient negotiation to disagreements: the use of force is prima facie evidence of failure. Some segments of this class not only refuse to acknowledge the existence of violence as a fact of life, they have even come to regard fear—the organism’s biological reaction to the threat of violence—as inadmissible. “The notion of being threatened has acquired an almost class connotation,” Daniel P. Moynihan notes in connection with the refusal of America’s “sophisticated” elite to accept the reality of a Soviet threat. “If you’re not very educated, you’re easily frightened. And not being ever frightened can be a formula for self-destruction.”8
Now this entire middle-class, commercial, essentially Protestant ethos is absent from Soviet culture, whose roots feed on another kind of soil, and which has had for centuries to weather rougher political climes. The Communist revolution of 1917, by removing from positions of influence what there was of a Russian bourgeoisie (a class Lenin was prone to define as much by cultural as by socioeconomic criteria), in effect installed in power the muzhik, the Russian peasant. And the muzhik had been taught by long historical experience that cunning and coercion alone ensured survival: one employed cunning when weak, and cunning coupled with coercion when strong. Not to use force when one had it indicated some inner weakness. Marxism, with its stress on class war as a natural condition of mankind so long as the means of production were privately owned, has merely served to reinforce these ingrained convictions. The result is an extreme Social-Darwinist outlook on life which today permeates the Russian elite as well as the Russian masses, and which only the democratic intelligentsia and the religious dissenters oppose to any significant extent.
The Soviet ruling elite regards conflict and violence as natural regulators of all human affairs: wars between nations, in its view, represent only a variant of wars between classes, recourse to the one or the other being dependent on circumstances. A conflictless world will come into being only when the socialist (i.e., Communist) mode of production spreads across the face of the earth.
The Soviet view of armed conflict can be illustrated with another citation from the writings of the late Marshal Grechko, one of the most influential Soviet military figures of the post-World War II era. In his principal treatise, Grechko refers to the classification of wars formulated in 1972 by his U.S. counterpart, Melvin Laird. Laird divided wars according to engineering criteria—in terms of weapons employed and the scope of the theater of operations—to come up with four principal types of wars: strategic-nuclear, theater-nuclear, theater-conventional, and local-conventional. Dismissing this classification as inadequate, Grechko applies quite different standards to come up with his own typology:
Proceeding from the fundamental contradictions of the contemporary era, one can distinguish, according to sociopolitical criteria, the following types of wars: (1) wars between states (coalitions) of two contrary social systems—capitalist and socialist; (2) civil wars between the proletariat and the bourgeoisie, or between the popular masses and the forces of the extreme reaction supported by the imperialists of other countries; (3) wars between imperialist states and the peoples of colonial and dependent states fighting for their freedom and independence; and (4) wars among capitalist states.9
This passage contains many interesting implications. For instance, it makes no allowance for war between two Communist countries, like the Soviet Union and China, though such a war seems greatly to preoccupy the Soviet leadership. Nor does it provide for war pitting a coalition of capitalist and Communist states against another capitalist state, such as actually occurred during World War II when the United States and the Soviet Union joined forces against Germany. But for our purposes, the most noteworthy aspect of Grechko’s system of classification is the notion that social and national conflicts within the capitalist camp (that is, in all countries not under Communist control) are nothing more than a particular mode of class conflict of which all-out nuclear war between the superpowers is a conceivable variant. In terms of this typology, an industrial strike in the United States, the explosion of a terrorist bomb in Belfast or Jerusalem, the massacre by Rhodesian guerrillas of a black village or a white farmstead, differ from nuclear war between the Soviet Union and the United States only in degree, not in kind. All such conflicts are calibrations on the extensive scale by which to measure the historic conflict which pits Communism against capitalism and imperialism. Such conflicts are inherent in the stage of human development which precedes the final abolition of classes.
Middle-class American intellectuals simply cannot assimilate this mentality, so alien is it to their experience and view of human nature. Confronted with the evidence that the most influential elements in the Soviet Union do indeed hold such views, they prefer to dismiss the evidence as empty rhetoric, and to regard with deep suspicion the motives of anyone who insists on taking it seriously. Like some ancient Oriental despots, they vent their wrath on the bearers of bad news. How ironic that the very people who have failed so dismally to persuade American television networks to eliminate violence from their programs, nevertheless feel confident that they can talk the Soviet leadership into eliminating violence from its political arsenal!
Solzhenitsyn grasped the issue more profoundly as well as more realistically when he defined the antithesis of war not as the absence of armed conflict between nations—i.e., “peace” in the conventional meaning of the term—but as the absence of all violence, internal as well as external. His comprehensive definition, drawn from his Soviet experience, obversely matches the comprehensive Soviet definition of warfare.
We know surprisingly little about the individuals and institutions whose responsibility it is to formulate Soviet military doctrine. The matter is handled with the utmost secrecy, which conceals from the eyes of outsiders the controversies that undoubtedly surround it. Two assertions, however, can be made with confidence.
Because of Soviet adherence to the Clausewitzian principle that warfare is always an extension of politics—i.e., subordinate to overall political objectives (about which more below)—Soviet military planning is carried out under the close supervision of the country’s highest political body, the Politburo. Thus military policy is regarded as an intrinsic element of “grand strategy,” whose arsenal also includes a variety of non-military instrumentalities.
Secondly, the Russians regard warfare as a science (nauka, in the German sense of Wissenschaft). Instruction in the subject is offered at a number of university-level institutions, and several hundred specialists, most of them officers on active duty, have been accorded the Soviet equivalent of the Ph.D. in military science. This means that Soviet military doctrine is formulated by full-time specialists: it is as much the exclusive province of the certified military professional as medicine is that of the licensed physician. The civilian strategic theorist who since World War II has played a decisive role in the formulation of U.S. strategic doctrine is not in evidence in the Soviet Union, and probably performs at best a secondary, consultative function.
Its penchant for secrecy notwithstanding, the Soviet military establishment does release a large quantity of unclassified literature in the form of books, specialist journals, and newspapers. Of the books, the single most authoritative work at present is unquestionably the collective study, Military Strategy, edited by the late Marshal V. D. Sokolovskii, which summarizes Soviet warfare doctrine of the nuclear age.10 Although published fifteen years ago, Sokolovskii’s volume remains the only Soviet strategic manual publicly available—a solitary monument confronting a mountain of Western works on strategy. A series called “The Officer’s Library” brings out important specialized studies.11 The newspaper Krasnaia zvezda (“Red Star”) carries important theoretical articles which, however, vie for the reader’s attention with heroic pictures of Soviet troops storming unidentified beaches and firing rockets at unnamed foes. The flood of military works has as its purpose indoctrination, an objective to which the Soviet high command attaches the utmost importance: indoctrination both in the psychological sense, designed to persuade the Soviet armed forces that they are invincible, as well as of a technical kind, to impress upon the officers and ranks the principles of Soviet tactics and the art of operations.
To a Western reader, most of this printed matter is unadulterated rubbish. It not only lacks the sophistication and intellectual elegance which he takes for granted in works on problems of nuclear strategy; it is also filled with a mixture of pseudo-Marxist jargon and the crudest kind of Russian jingoism. This is one of the reasons why it is hardly ever read in the West, even by people whose business it is to devise a national strategy against a possible Soviet threat. By and large the material is ignored. Two examples must suffice. Strategy in the Missile Age, an influential work by Bernard Brodie, one of the pioneers of U.S. nuclear doctrine, which originally came out in 1959, and was republished in 1965, makes only a few offhand allusions to Soviet nuclear strategy, and then either to note with approval that it is “developing along lines familiar in the United States” (p. 171), or else, when the Russians prefer to follow their own track, to dismiss it as a “ridiculous and reckless fantasy” (p. 215). Secretary of Defense McNamara perused Sokolovskii and “remained unimpressed,” for nowhere in the book did he find “a sophisticated analysis of nuclear war.”12
The point to bear in mind, however, is that Soviet military literature, like all Soviet literature on politics broadly defined, is written in an elaborate code language. Its purpose is not to dazzle with originality and sophistication but to convey to the initiates messages of grave importance. Soviet policy-makers may speak to one another plainly in private, but when they take pen in hand they invariably resort to an “Aesopian” language, a habit acquired when the forerunner of today’s Communist party had to function in the Czarist underground. Buried in the flood of seemingly meaningless verbiage, nuggets of precious information on Soviet perceptions and intentions can more often than not be unearthed by a trained reader. In 1958-59 two American specialists employed by the Rand Corporation, Raymond L. Garthoff and Herbert S. Dinerstein, by skillfully deciphering Soviet literature on strategic problems and then interpreting this information against the background of the Soviet military tradition, produced a remarkably prescient forecast of actual Soviet military policies of the 1960’s and 1970’s.13 Unfortunately, their findings were largely ignored by U.S. strategists from the scientific community who had convinced themselves that there was only one strategic doctrine appropriate to the age of nuclear weapons, and that therefore evidence indicating that the Soviets were adopting a different strategy could be safely disregarded.
This predisposition helps explain why U.S. strategists persistently ignored signs indicating that those who had control of Soviet Russia’s nuclear arsenal were not thinking in terms of mutual deterrence. The calculated nonchalance with which Stalin at Potsdam reacted to President Truman’s confidences about the American atomic bomb was a foretaste of things to come. Initial Soviet reactions to Hiroshima and Nagasaki were similar in tone: the atomic weapon had not in any significant manner altered the science of warfare or rendered obsolete the principles which had guided the Red Army in its victorious campaigns against the Wehrmacht. These basic laws, known as the five “constant principles” that win wars, had been formulated by Stalin in 1942. They were, in declining order of importance: “stability of the home front,” followed by morale of the armed forces, quantity and quality of the divisions, military equipment, and, finally, ability of the commanders.14 There was no such thing as an “absolute weapon”—weapons altogether occupied a subordinate place in warfare; defense against atomic bombs was entirely possible.15 This was disconcerting, to be sure, but it could be explained away as a case of sour grapes. After all, the Soviet Union had no atomic bomb, and it was not in its interest to seem overly impressed by a weapon on which its rival enjoyed a monopoly.16
In September 1949 the Soviet Union exploded a nuclear device. Disconcertingly, its attitude to nuclear weapons did not change, at any rate not in public. For the remaining four years, until Stalin’s death, the Soviet high command continued to deny that nuclear weapons required fundamental revisions of accepted military doctrine. With a bit of good will, this obduracy could still have been rationalized: for although the Soviet Union now had the weapon, it still lacked adequate means of delivering it across continents insofar as it had few intercontinental bombers (intercontinental rockets were regarded in the West as decades away). The United States, by contrast, possessed not only a fleet of strategic bombers but also numerous air bases in countries adjoining Soviet Russia. So once again one could find a persuasive explanation of why the Russians refused to see the light. It seemed reasonable to expect that as soon as they had acquired both a stockpile of atomic bombs and a fleet of strategic bombers, they would adjust their doctrine to conform with the American.
Events which ensued immediately after Stalin’s death seemed to lend credence to these expectations. Between 1953 and 1957 a debate took place in the pages of Soviet publications which, for all its textual obscurity, indicated that a new school of Soviet strategic thinkers had arisen to challenge the conventional wisdom. The most articulate spokesman of this new school, General N. Talenskii, argued that the advent of nuclear weapons, especially the hydrogen bomb which had just appeared on the scene, did fundamentally alter the nature of warfare. The sheer destructiveness of these weapons was such that one could no longer talk of a socialist strategy automatically overcoming the strategy of capitalist countries: the same rules of warfare now applied to both social systems. For the first time doubt was cast on the immutability of Stalin’s “five constant principles.” In the oblique manner in which Soviet debates on matters of such import are invariably conducted, Talenskii was saying that perhaps, after all, war had ceased to represent a viable policy option. More important yet, speeches delivered by leading Soviet politicians in the winter of 1953-54 seemed to support the thesis advanced by President Eisenhower in his United Nations address of December 1953 that nuclear war could spell the demise of civilization. In an address delivered on March 12, 1954, and reported the following day in Pravda, Stalin’s immediate successor, Georgii Malenkov, echoed Eisenhower’s sentiments: a new world war would unleash a holocaust which “with the present means of warfare, means the destruction of world civilization.”17
This assault on its traditional thinking—and, obliquely, on its traditional role—engendered a furious reaction from the Soviet military establishment. The Red Army was not about to let itself be relegated to the status of a militia whose principal task was averting war rather than winning it. Malenkov’s unorthodox views on war almost certainly contributed to his downfall; at any rate, his dismissal in February 1955 as party leader was accompanied by a barrage of press denunciations of the notion that war had become unfeasible. There are strong indications that Malenkov’s chief rival, Khrushchev, capitalized on the discontent of the military to form with it an alliance with whose help he eventually rode to power. The successful military counterattack seems to have been led by the World War II hero, Marshal Georgii Zhukov, whom Khrushchev made his Minister of Defense and brought into the Presidium. The guidelines of Soviet nuclear strategy, still in force today, were formulated during the first two years of Khrushchev’s tenure (1955-57), under the leadership of Zhukov himself. They resulted in the unequivocal rejection of the notion of the “absolute weapon” and all the theories that U.S. strategists had deduced from it. Stalin’s view of the military “constants” was implicitly reaffirmed. Thus the re-Stalinization of Soviet life, so noticeable in recent years, manifested itself first in military doctrine.
To understand this unexpected turn of events—so unexpected that most U.S. military theorists thus far have not been able to come to terms with it—one must take into account the function performed by the military in the Soviet system.
Unlike the government of the United States, the Soviet government needs and wants a large military force. It has many uses for it, at home and abroad. As a regime which rests neither on tradition nor on a popular mandate, it sees in its military the most effective manifestation of government omnipotence, the very presence of which discourages any serious opposition from raising its head in the country as well as in its dependencies. It is, after all, the Red Army that keeps Eastern Europe within the Soviet camp. Furthermore, since the regime is driven by ideology, internal politics, and economic exigencies steadily to expand, it requires an up-to-date military force capable of seizing opportunities which may present themselves along the Soviet Union’s immensely long frontier or even beyond. The armed forces of the Soviet Union thus have much more to do than merely protect the country from potential aggressors: they are the mainstay of the regime’s authority and a principal instrumentality of its internal and external policies. Given the shaky status of the Communist regime internally, the declining appeal of its ideology, and the non-competitiveness of its goods on world markets, a persuasive case can even be made that, ruble for ruble, expenditures on the military represent for the Soviet leadership an excellent and entirely “rational” capital investment.
For this reason alone (and there were other compelling reasons too, as we shall see), the Soviet leadership could not accept the theory of mutual deterrence.18 After all, this theory, pushed to its logical conclusion, means that a country can rely for its security on a finite number of nuclear warheads and on an appropriate quantity of delivery vehicles; so that, apart perhaps from some small mobile forces needed for local actions, the large and costly traditional military establishments can be disbanded. Whatever the intrinsic military merits of this doctrine may be, its broader implications are entirely unacceptable to a regime like the Soviet one for whom military power serves not only (or even primarily) to deter external aggressors, but also and above all to ensure internal stability and permit external expansion. Thus, ultimately, it is political rather than strictly strategic or fiscal considerations that may be said to have determined Soviet reactions to nuclear weapons and shaped the content of Soviet nuclear strategy. As a result, Soviet advocates of mutual deterrence like Talenskii were gradually silenced. By the mid-1960’s the country adopted what in military jargon is referred to as a “war-fighting” and “war-winning” doctrine.
Given this fundamental consideration, the rest followed with a certain inexorable logic. The formulation of Soviet strategy in the nuclear age was turned over to the military who are in complete control of the Ministry of Defense. (Two American observers describe this institution as a “uniformed empire.”19 ) The Soviet General Staff had only recently emerged from winning one of the greatest wars in history. Immensely confident of their own abilities, scornful of what they perceived as the minor contribution of the United States to the Nazi defeat, inured to casualties running into tens of millions, the Soviet generals tackled the task with relish. Like their counterparts in the U.S. Army, they were professionally inclined to denigrate the exorbitant claims made on behalf of the new weapon by strategists drawn from the scientific community; unlike the Americans, however, they did not have to pay much heed to the civilians. In its essentials, Soviet nuclear doctrine as it finally emerged is not all that different from what American doctrine might have been had military and geopolitical rather than fiscal considerations played the decisive role here as they did there.
Soviet military theorists reject the notion that technology (i.e., weapons) decides strategy. They perceive the relationship to be the reverse: strategic objectives determine the procurement and application of weapons. They agree that the introduction of nuclear weapons has profoundly affected warfare, but deny that nuclear weapons have altered its essential quality. The novelty of nuclear weapons consists not in their destructiveness—that is, after all, a matter of degree, and a country like the Soviet Union which, as Soviet generals proudly boast, suffered in World War II over 20 million casualties, as well as the destruction of 1,710 towns, over 70,000 villages, and some 32,000 industrial establishments to win the war and emerge as a global power, is not to be intimidated by the prospect of destruction.20 Rather, the innovation consists of the fact that nuclear weapons, coupled with intercontinental missiles, can by themselves carry out strategic missions which previously were accomplished only by means of prolonged tactical operations:
Nuclear missiles have altered the relationship of tactical, operational, and strategic acts of the armed conflict. If in the past the strategic end-result was secured by a succession of sequential, most often long-term, efforts [and] comprised the sum of tactical and operational successes, strategy being able to realize its intentions only with the assistance of the art of operations and tactics, then today, by means of powerful nuclear strikes, strategy can attain its objectives directly.21
In other words, military strategy, rather than a casualty of technology, has, thanks to technology, become more central than ever. By adopting this view, Soviet theorists believe themselves to have adapted modern technological innovations in weaponry to the traditions of military science.
Implicit in all this is the idea that nuclear war is feasible and that the basic function of warfare, as defined by Clausewitz, remains permanently valid, whatever breakthroughs may occur in technology. “It is well known that the essential nature of war as a continuation of politics does not change with changing technology and armament.”22 This code phrase from Sokolovskii’s authoritative manual was certainly hammered out with all the care that in the United States is lavished on an amendment to the Constitution. It spells the rejection of the whole basis on which U.S. strategy has come to rest: thermonuclear war is not suicidal, it can be fought and won, and thus resort to war must not be ruled out.
In addition (though we have no solid evidence to this effect) it seems likely that Soviet strategists reject the mutual-deterrence theory on several technical grounds of a kind that have been advanced by American critics of this theory like Albert Wohlstetter, Herman Kahn, and Paul Nitze.
- Mutual deterrence postulates a certain finality about weapons technology: it does not allow for further scientific breakthroughs that could result in the deterrent’s becoming neutralized. On the offensive side, for example, there is the possibility of significant improvements in the accuracy of ICBM’s or striking innovations in anti-submarine warfare; on the defensive, satellites which are essential for early warning of an impending attack could be blinded and lasers could be put to use to destroy incoming missiles.
- Mutual deterrence constitutes “passive defense” which usually leads to defeat. It threatens punishment to the aggressor after he has struck, which may or may not deter him from striking; it cannot prevent him from carrying out his designs. The latter objective requires the application of “active defense”—i.e., nuclear preemption.
- The threat of a second strike, which underpins the mutual-deterrence doctrine, may prove ineffectual. The side that has suffered the destruction of the bulk of its nuclear forces in a surprise first strike may find that it has so little of a deterrent left and the enemy so much, that the cost of striking back in retaliation would be to expose its own cities to total destruction by the enemy’s third strike. The result could be a paralysis of will, and capitulation instead of a second strike.
Soviet strategists make no secret of the fact that they regard the U.S. doctrine (with which, judging by the references in their literature, they are thoroughly familiar) as second-rate. In their view, U.S. strategic doctrine is obsessed with a single weapon which it “absolutizes” at the expense of everything else that military experience teaches soldiers to take into account. Its philosophical foundations are “idealism” and “metaphysics”—i.e., currents which engage in speculative discussions of objects (in this case, weapons) and of their “intrinsic” qualities, rather than relying on pragmatic considerations drawn from experience.23
Since the mid-1960’s, the proposition that thermonuclear war would be suicidal for both parties has been used by the Russians largely as a commodity for export. Its chief proponents include staff members of the Moscow Institute of the USA and Canada, and Soviet participants at Pugwash, Dartmouth, and similar international conferences, who are assigned the task of strengthening the hand of anti-military intellectual circles in the West. Inside the Soviet Union, such talk is generally denounced as “bourgeois pacifism.”24
In the Soviet view, a nuclear war would be total and go beyond formal defeat of one side by the other: “War must not simply [be] the defeat of the enemy, it must be his destruction. This condition has become the basis of Soviet military strategy,” according to the Military-Historical Journal.25 Limited nuclear war, flexible response, escalation, damage limiting, and all the other numerous refinements of U.S. strategic doctrine find no place in its Soviet counterpart (although, of course, they are taken into consideration in Soviet operational planning).
For Soviet generals the decisive influence in the formulation of nuclear doctrine was exerted by the lessons of World War II, with which, for understandable reasons, they are virtually obsessed. This experience they seem to have supplemented with knowledge gained from professional scrutiny of the record of Nazi and Japanese offensive operations, as well as the balance sheet of British and American strategic-bombing campaigns. More recently, the lessons of the Israeli-Arab wars of 1967 and 1973 in which they indirectly participated seem also to have impressed Soviet strategists, reinforcing previously held convictions. They also follow the Western literature, tending to side with the critics of mutual deterrence. The result of all these diverse influences is a nuclear doctrine which assimilates into the main body of the Soviet military tradition the technical implications of nuclear warfare without surrendering any of the fundamentals of this tradition.
The strategic doctrine adopted by the USSR over the past two decades calls for a policy diametrically opposite to that adopted in the United States by the predominant community of civilian strategists: not deterrence but victory, not sufficiency in weapons but superiority, not retaliation but offensive action. The doctrine has five related elements: (1) preemption (first strike), (2) quantitative superiority in arms, (3) counterforce targeting, (4) combined-arms operations, and (5) defense. We shall take up each of these elements in turn.
Preemption. The costliest lesson which the Soviet military learned in World War II was the importance of surprise. Because Stalin thought he had an understanding with Hitler, and because he was afraid to provoke his Nazi ally, he forbade the Red Army to mobilize for the German attack of which he had had ample warning. As a result of this strategy of “passive defense,” Soviet forces suffered frightful losses and were nearly defeated. This experience etched itself very deeply on the minds of the Soviet commanders: in their theoretical writings no point is emphasized more consistently than the need never again to allow themselves to be caught in a surprise attack. Nuclear weapons make this requirement especially urgent because, according to Soviet theorists, the decision in a nuclear conflict in all probability will be arrived at in the initial hours. In a nuclear war the Soviet Union, therefore, would not again have at its disposal the time which it enjoyed in 1941-42 to mobilize reserves for a victorious counteroffensive after absorbing devastating setbacks.
Given the rapidity of modern warfare (an ICBM can traverse the distance between the USSR and the United States in thirty minutes), not to be surprised by the enemy means, in effect, to inflict surprise on him. Once the latter’s ICBM’s have left their silos, once his bombers have taken to the air and his submarines to sea, a counterattack is greatly reduced in effectiveness. These considerations call for a preemptive strike. Soviet theorists draw an insistent, though to an outside observer very fuzzy, distinction between “preventive” and “preemptive” attacks. They claim that the Soviet Union will never start a war—i.e., it will never launch a preventive attack—but once it had concluded that an attack upon it was imminent, it would not hesitate to preempt. They argue that historical experience indicates outbreaks of hostilities are generally preceded by prolonged diplomatic crises and military preparations which signal to an alert command an imminent threat and the need to act. Though the analogy is not openly drawn, the action which Soviet strategists seem to have in mind is that taken by the Israelis in 1967, a notably successful example of “active defense” involving a well-timed preemptive strike. (In 1973, by contrast, the Israelis pursued the strategy of “passive defense,” with unhappy consequences.) The Soviet doctrine of nuclear preemption was formulated in the late 1950’s, and described at the time by Garthoff and Dinerstein in the volumes cited above.
A corollary of the preemption strategy holds that a country’s armed forces must always be in a state of high combat readiness so as to be able to go over to active operations with the least delay. Nuclear warfare grants no time for mobilization. Stress on the maintenance of a large ready force is one of the constant themes of Soviet military literature. It helps explain the immense land forces which the USSR maintains at all times and equips with the latest weapons as they roll off the assembly lines.
Quantitative superiority. There is no indication that the Soviet military share the view prevalent in the U.S. that in the nuclear age numbers of weapons do not matter once a certain quantity has been attained. They do like to pile up all sorts of weapons, new on top of old, throwing away nothing that might come in handy. This propensity to accumulate hardware is usually dismissed by Western observers with contemptuous references to a Russian habit dating back to Czarist days. It is not, however, as mindless as it may appear. For although Soviet strategists believe that the ultimate outcome in a nuclear war will be decided in the initial hours of the conflict, they also believe that a nuclear war will be of long duration: to consummate victory—that is, to destroy the enemy—may take months or even longer. Under these conditions, the possession of a large arsenal of nuclear delivery systems, as well as of other types of weapons, may well prove to be of critical importance. Although prohibited by self-imposed limitations agreed upon in 1972 at SALT I from exceeding a set number of intercontinental ballistic-missile launchers, the Soviet Union is constructing large numbers of so-called Intermediate Range Ballistic Missile launchers (i.e., launchers of less than intercontinental range), not covered by SALT. Some of these could be rapidly converted into regular intercontinental launchers, should the need arise.26
Reliance on quantity has another cause, namely, the peculiarly destructive capability of modern missiles equipped with Multiple Independently Targetable Reentry Vehicles, or MIRV’s. The nose cones of MIRVed missiles, which both super-powers possess, when in mid-course, split like a peapod to launch several warheads, each aimed at a separate target. A single missile equipped with three MIRV’s of sufficient accuracy, yield, and reliability can destroy up to three of the enemy’s missiles—provided, of course, it catches them in their silos, before they have been fired (which adds another inducement to preemption). Theoretically, assuming high accuracy and reliability, should the entire American force of 1,054 ICBM’s be MIRVed (so far only half of them have been MIRVed), it would take only 540 American ICBM’s, each with three MIRV’s, to attack the entire Soviet force of 1,618 ICBM’s. The result would leave the United States with 514 ICBM’s and the USSR with few survivors. Unlikely as the possibility of an American preemptive strike may be, Soviet planners apparently prefer to take no chances; they want to be in a position rapidly to replace ICBM’s lost to a sudden enemy first strike. Conversely, given its doctrine of preemption, the Soviet Union wants to be in a position to destroy the largest number of American missiles with the smallest number of its own, so as to be able to face down the threat of a U.S. second strike. Its most powerful ICBM, the SS-18, is said to have been tested with up to 10 MIRV’s (compared to 3 of the Minuteman-3, America’s only MIRVed ICBM). It has been estimated that 300 of these giant Soviet missiles, authorized under SALT I, could seriously threaten the American arsenal of ICBM’s.
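The arithmetic behind these figures can be checked directly. The following sketch uses only the force levels quoted above, together with the idealized assumption (made explicit in the text) of perfect accuracy and reliability, with one warhead allotted to each target silo:

```python
import math

# Back-of-the-envelope check of the MIRV arithmetic quoted in the text.
# Force figures are those cited for the mid-1970's; the one-warhead-per-silo
# allocation is the idealized assumption stated in the passage.
US_ICBMS = 1054
SOVIET_ICBMS = 1618
MIRVS_PER_US_MISSILE = 3  # Minuteman-3 carried three warheads

# How many MIRVed U.S. missiles would be needed to cover every Soviet silo?
attackers_needed = math.ceil(SOVIET_ICBMS / MIRVS_PER_US_MISSILE)

# How many U.S. ICBM's would remain in reserve after such a strike?
remaining_us_force = US_ICBMS - attackers_needed

print(attackers_needed, remaining_us_force)
```

Dividing 1,618 targets among three-warhead missiles yields the 540 attackers and the residual force of 514 cited in the passage, which is what makes the asymmetry so attractive to a counterforce planner.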
Counterforce. Two terms commonly used in the jargon of modern strategy are “counterforce” and “countervalue.” Both terms refer to the nature of the target of a strategic nuclear weapon. Counterforce means that the principal objective of one’s nuclear missiles are the enemy’s forces—i.e., his launchers as well as the related command and communication facilities. Countervalue means that one’s principal targets are objects of national “value,” namely the enemy’s population and industrial centers.
Given the predominantly defensive (retaliatory) character of current U.S. strategy, it is naturally predisposed to a countervalue targeting policy. The central idea of the U.S. strategy of deterrence holds that should the Soviet Union dare to launch a surprise first strike at the United States, the latter would use its surviving missiles to lay waste Soviet cities. It is taken virtually for granted in this country that no nation would consciously expose itself to the risk of having its urban centers destroyed—an assumption which derives from British military theory of the 1920’s and 1930’s, and which influenced the RAF to concentrate on strategic bombing raids on German cities in World War II.
The Soviet high command has never been much impressed with the whole philosophy of counter-value strategic bombing, and during World War II resisted the temptation to attack German cities. This negative attitude to bombing of civilians is conditioned not by humanitarian considerations but by cold, professional assessments of the effects of that kind of strategic bombing as revealed by the Allied Strategic Bombing Surveys. The findings of these surveys were largely ignored in the United States, but they seem to have made a strong impression in the USSR. Not being privy to the internal discussions of the Soviet military, we can do no better than consult the writings of an eminent British scientist, P.M.S. Blackett, noted for his pro-Soviet sympathies, whose remarkable book Fear, War and the Bomb, published in 1948-49, indicated with great prescience the lines which Soviet strategic thinking was subsequently to take.
Blackett, who won the Nobel Prize for Physics in 1948, had worked during the war in British Operations Research. He concluded that strategic bombing was ineffective, and wrote his book as an impassioned critique of the idea of using atomic weapons as a strategic deterrent. Translating the devastation wrought upon Germany into nuclear terms, he calculated that it represented the equivalent of the destruction that would have been caused by 400 “improved” Hiroshima-type atomic bombs. Yet despite such punishment, Nazi Germany did not collapse. Given the much greater territory of the Soviet Union and a much lower population density, he argued, it would require “thousands” of atomic bombs to produce decisive results in a war between America and Russia.27 Blackett minimized the military effects of the atomic bombing on Japan. He recalled that in Hiroshima trains were operating forty-eight hours after the blast; that industries were left almost undamaged and could have been back in full production within a month; and that if the most elementary civil-defense precautions had been observed, civilian casualties would have been substantially reduced. Blackett’s book ran so contrary to prevailing opinion and was furthermore so intemperately anti-American in tone that its conclusions were rejected out of hand in the West.
Too hastily, it appears in retrospect. For while it is true that the advent of hydrogen bombs a few years later largely invalidated the estimates on which he had relied, Blackett correctly anticipated Soviet reactions. Analyzing the results of Allied saturation bombing of Germany, Soviet generals concluded that it was largely a wasted effort. Sokolovskii cites in his manual the well-known figures showing that German military productivity rose throughout the war until the fall of 1944, and concludes: “It was not so much the economic struggle and economic exhaustion [i.e., countervalue bombing] that were the causes for the defeat of Hitler’s Germany, but rather the armed conflict and the defeat of its armed forces [i.e., the counterforce strategy pursued by the Red Army.]”28
Soviet nuclear strategy is counterforce oriented. It targets for destruction—at any rate, in the initial strike—not the enemy’s cities but his military forces and their command and communication facilities. Its primary aim is to destroy not civilians but soldiers and their leaders, and to undermine not so much the will to resist as the capability to do so. In the words of Grechko:
The Strategic Rocket Forces, which constitute the basis of the military might of our armed forces, are designed to annihilate the means of the enemy’s nuclear attack, large groupings of his armies, and his military bases; to destroy his military industries; [and] to disorganize the political and military administration of the aggressor as well as his rear and transport.29
Any evidence that the United States may contemplate switching to a counterforce strategy (and such evidence occasionally crops up) throws Soviet generals into a tizzy of excitement. It clearly frightens them far more than the threat to Soviet cities posed by the countervalue strategic doctrine.
Combined-arms operations. Soviet theorists regard strategic nuclear forces (organized since 1960 into a separate arm, the Strategic Rocket Forces) as the decisive branch of the armed services, in the sense that the ultimate outcome of modern war would be settled by nuclear exchanges. But since nuclear war, in their view, must lead not only to the enemy’s defeat but also to his destruction (i.e., his incapacity to offer further resistance), they consider it necessary to make preparations for the follow-up phase, which may entail a prolonged war of attrition. At this stage of the conflict, armies will be needed to occupy the enemy’s territory, and navies to interdict his lanes of communication. “In the course of operations [battles], armies will basically complete the final destruction of the enemy brought about by strikes of nuclear rocket weapons.”30 Soviet theoretical writings unequivocally reject reliance on any one strategy (such as the Blitzkrieg) or on any one weapon to win wars. They believe that a nuclear war will require the employment of all arms to attain final victory.
The large troop concentrations of Warsaw Pact forces in Eastern Europe—well in excess of reasonable defense requirements—make sense when viewed in the light of Soviet combined-arms doctrine. They are there not only to be able to launch a surprise land attack against NATO, but also to seize Western Europe with a minimum of damage to its cities and industries after the initial strategic nuclear exchanges have taken place, partly to keep Europe hostage, partly to exploit European productivity as a replacement for that of which the Soviet Union would have been deprived by an American second strike.
As for the ocean-going navy which the Soviet Union has now acquired, it consists primarily of submarines and ground-based naval air forces, and apparently would have the task of clearing the seas of U.S. ships of all types and cutting the sea lanes connecting the United States with allied powers and sources of raw materials.
The notion of an extended nuclear war is deeply embedded in Soviet thinking, despite its being dismissed by Western strategists who think of war as a one-two exchange. As early as 1948-49, Blackett noted sarcastically: “Some armchair strategists (including some atomic scientists) tend to ignore the inevitable counter-moves of the enemy. More chess playing and less nuclear physics might have instilled a greater sense of the realities.”31 He predicted that a World War III waged with the atomic bombs then available would last longer than either of its predecessors, and require combined-arms operations—which seems to be the current Soviet view of the matter.
Defense. As noted, the U.S. theory of mutual deterrence postulates that no effective defense can be devised against an all-out nuclear attack: it is this postulate that makes such a war appear totally irrational. In order to make this premise valid, American civilian strategists have argued against a civil-defense program, against the ABM, and against air defenses.
Nothing illustrates better the fundamental differences between the two strategic doctrines than their respective attitudes toward defense against a nuclear attack. The Russians agreed to certain imprecisely defined limitations on the ABM after they had initiated a program in this direction, apparently because they were unable to solve the technical problems involved and feared the United States would forge ahead in this field. However, they then proceeded to build a tight ring of anti-aircraft defenses around the country while also developing a serious program of civil defense.
Before Soviet civil-defense efforts are dismissed as wishful thinking, as is customary in Western circles, two facts must be emphasized.
One is that the Soviet Union does not regard civil defense as intended exclusively for the protection of ordinary civilians. Its chief function seems to be to protect what in Russia are known as the “cadres,” that is, the political and military leaders as well as industrial managers and skilled workers—those who could reestablish the political and economic system once the war was over. Judging by Soviet definitions, civil defense has as much to do with the proper functioning of the country during and immediately after the war as with holding down casualties. Its organization, presently headed by Deputy Minister of Defense Colonel-General A. Altunin, seems to be a kind of shadow government charged with responsibility for administering the country under the extreme stresses of nuclear war and its immediate aftermath.32
Secondly, the Soviet Union is inherently less vulnerable than the United States to a counter-value attack. According to the most recent Soviet census (1970), the USSR had only nine cities with a population of one million or more; the aggregate population of these cities was 20.5 million, or 8.5 per cent of the country’s total. The United States 1970 census showed thirty-five metropolitan centers with over one million inhabitants, totaling 84.5 million people, or 41.5 per cent of the country’s aggregate. It takes no professional strategist to visualize what these figures mean. In World War II, the Soviet Union lost 20 million inhabitants out of a population of 170 million—i.e., 12 per cent; yet the country not only survived but emerged stronger politically and militarily than it had ever been. Allowing for the population growth which has occurred since then, this experience suggests that as of today the USSR could absorb the loss of 30 million of its people and be no worse off, in terms of human casualties, than it had been at the conclusion of World War II. In other words, all of the USSR’s multimillion cities could be destroyed without trace or survivors, and, provided that its essential cadres had been saved, it would emerge less hurt in terms of casualties than it was in 1945.
Such figures are beyond the comprehension of most Americans. But clearly a country that since 1914 has lost, as a result of two world wars, a civil war, famine, and various “purges,” perhaps up to 60 million citizens, must define “unacceptable damage” differently from the United States which has known no famines or purges, and whose deaths from all the wars waged since 1775 are estimated at 650,000—fewer casualties than Russia suffered in the 900-day siege of Leningrad in World War II alone. Such a country tends also to assess the rewards of defense in much more realistic terms.
How significant are these recondite doctrinal differences? It has been my invariable experience when lecturing on these matters that during the question period someone in the audience will get up and ask: “But is it not true that we and the Russians already possess enough nuclear weapons to destroy each other ten times over?” (or fifty, or a hundred—the figures vary). My temptation is to reply: “Certainly. But we also have enough bullets to shoot every man, woman, and child, and enough matches to set the whole world on fire. The point lies not in our ability to wreak total destruction: it lies in intent.” And insofar as military doctrine is indicative of intent, what the Russians intend to do with their nuclear arsenal is a matter of utmost importance that calls for close scrutiny.
Enough has already been said to indicate the disparities between American and Soviet strategic doctrines of the nuclear age. These differences may be most pithily summarized by stating that whereas we view nuclear weapons as a deterrent, the Russians see them as a “compellant”—with all the consequences that follow. Now it must be granted that the actual, operative differences between the two doctrines may not be quite as sharp as they appear in the public literature: it is true that our deterrence doctrine leaves room for some limited offensive action, just as the Russians include elements of deterrence in their “war-fighting” and “war-winning” doctrine. Admittedly, too, a country’s military doctrine never fully reveals how it would behave under actual combat conditions. And yet the differences here are sharp and fundamental enough, and the relationship of Soviet doctrine to Soviet deployments sufficiently close, to suggest that ignoring or not taking seriously Soviet military doctrine may have very detrimental effects on U.S. security. There is something innately destabilizing in the very fact that we consider nuclear war unfeasible and suicidal for both, and our chief adversary views it as feasible and winnable for himself.
SALT misses the point at issue so long as it addresses itself mainly to the question of numbers of strategic weapons: equally important are qualitative improvements within the existing quotas, and the size of regular land and sea forces. Above all, however, looms the question of intent: as long as the Soviets persist in adhering to the Clausewitzian maxim on the function of war, mutual deterrence does not really exist. And unilateral deterrence is feasible only if we understand the Soviet war-winning strategy and make it impossible for them to succeed.
1 “The Real Paul Warnke,” the New Republic, March 26, 1977, p. 23.
2 N.V. Karabanov in N.V. Karabanov, et al., Filosofskoe nasledie V. I. Lenina i problemy sovremennoi voiny (“The Philosophical Heritage of V.I. Lenin and the Problems of Contemporary War”) (Moscow, 1972), pp. 18-19, cited in Leon Gouré, Foy D. Kohler, and Mose L. Harvey, eds., The Role of Nuclear Forces in Current Soviet Strategy (1974), p. 60.
3 A.A. Grechko, Vooruzhonnye sily sovetskogo gosudarstva (“The Armed Forces of the Soviet State”) (Moscow, 1975), p. 345.
4 Russell F. Weigley, The American Way of War (1973), p. 368.
5 Matthew B. Ridgway, Soldier (1956), pp. 296-97.
6 In Michael Howard, ed., The Theory and Practice of War (London, 1965), p. 291.
7 Benjamin S. Lambeth, Selective Nuclear Options in American and Soviet Strategic Policy (Rand Corporation, R-2034-DDRE, December 1976), p. 14. This study analyzes and approves of the refinement introduced into the U.S. doctrine by James R. Schlesinger as Secretary of Defense in the form of “limited-response options.”
8 Interview with Playboy, March 1977, p. 72.
9 Grechko, Vooruzhonnye sily sovetskogo gosudarstva, pp. 347-48, emphasis added.
10 Voennaia strategiia (Moscow, 1962). Since 1962 there have been two revised editions (1963 and 1968). The 1962 edition was immediately translated into English; but currently the best version is that edited by Harriet Fast Scott (Crane-Russak, 1975) which renders the third edition but collates its text with the preceding two.
11 To date, twelve volumes in this series have been translated into English and made publicly available through the U.S. Government Printing Office.
12 William W. Kaufmann, The McNamara Strategy (1964), p. 97.
13 Garthoff's principal works are Soviet Military Doctrine (1953), Soviet Strategy in the Nuclear Age (1958), and The Soviet Image of Future War (1959). Dinerstein wrote War and the Soviet Union (1959).
14 Cited in J.M. Mackintosh, The Strategy and Tactics of Soviet Foreign Policy (London, 1962), pp. 90-91, emphasis added.
15 Articles in the New Times for 1945-46 cited in P.M.S. Blackett, Fear, War and the Bomb (1949), pp. 163-65.
16 We now know that orders to proceed with the development of a Soviet atomic bomb were issued by Stalin in June 1942, probably as a result of information relayed by Klaus Fuchs concerning the Manhattan Project, on which he was working at Los Alamos. See Bulletin of the Atomic Scientists, XXIII, No. 10, December 1967, p. 15.
17 Dinerstein, War and the Soviet Union, p. 71.
18 I would like to stress the word “theory,” for the Russians certainly accept the fact of deterrence. The difference is that whereas American theorists of mutual deterrence regard this condition as mutually desirable and permanent, Soviet strategists regard it as undesirable and transient: they are entirely disinclined to allow us the capability of deterring them.
19 Matthew P. Gallagher and Karl F. Spielmann, Jr., Soviet Decision-Making for Defense (1972), p. 39.
20 The figures are from Grechko, Vooruzhonnye sily, p. 97.
21 Metodologicheskie problemy voennoi teorii i praktiki (“Methodological Problems of Military Theory and Practice”) (Moscow, Ministry of Defense of the USSR, 1969), p. 288.
22 V.D. Sokolovskii, Soviet Military Strategy (Rand Corporation, 1963), p. 99, emphasis added.
23 See, e.g., Metodologicheskie problemy, pp. 289-90.
24 Gouré et al., The Role of Nuclear Forces, p. 9.
25 Cited ibid., p. 106.
26 I have in mind the SS-20, a recently developed Soviet rocket. This is a two-stage version of the intercontinental SS-16 which can be turned into an SS-16 with the addition of a third booster and fired from the same launcher. Its production is not restricted by SALT I and not covered by the Vladivostok Accord.
27 Blackett, Fear, p. 88. As a matter of fact, recent unofficial Soviet calculations stress that the United States dropped on Vietnam the TNT equivalent of 650 Hiroshima-type bombs—also without winning the war: Kommunist Vooruzhonnykh Sil (“The Communist of the Armed Forces”), No. 24, December 1973, p. 27, cited in Gouré et al., The Role of Nuclear Forces, p. 104.
28 Sokolovskii, Soviet Military Strategy (3rd edition), p. 21.
29 A.A. Grechko, Na strazhe mira i stroitel'stva Kommunizma (“Guarding Peace and the Construction of Communism”) (Moscow, 1971), p. 41.
30 Metodologicheskie problemy, p. 288.
31 Blackett, Fear, p. 79.
32 On the subject of civil defense, see Leon Gouré, War Survival in Soviet Strategy (1976).
Why the Soviet Union Thinks It Could Fight & Win a Nuclear War
Must-Reads from Magazine
f all the surprises of the Trump era, none is more notable than the pronounced shift toward Israel. Such a shift was not predictable from Donald Trump’s conduct on the campaign trail; as he sought the Republican nomination, Trump distinguished himself by his refusal to express unqualified support for Israel and his airy conviction that his business experience gave him unique insight into how to strike “a real-estate deal” to resolve the Israeli–Palestinian conflict. In addition, his isolationist talk alarmed Israel’s friends in the United States and elsewhere if for no other reason than that isolationism, anti-Zionism, and anti-Semitism often go hand in hand in hand.
But shift he did. In the 14 months since his inauguration, the new president has announced that the United States accepts Jerusalem as Israel’s capital and has declared his intention to build a new U.S. Embassy in Jerusalem, first mandated by U.S. law in 1996. He has installed one of his Orthodox Jewish lawyers as the U.S. ambassador and another as his key envoy on Israeli–Palestinian issues. America’s ambassador to the United Nations has not only spoken out on Israel’s behalf forcefully and repeatedly; Nikki Haley has also led the way in cutting the U.S. stipend to the refugee relief agency that is an effective front for the Palestinian terror state in Gaza. And, as Meir Y. Soloveichik and Michael Medved both detail elsewhere in this issue, his vice president traveled to Israel in January and delivered the most pro-Zionist speech any major American politician has ever given.
Part of this shift can also be seen in what Trump has not done. He has not signaled, in interviews or in policy formulations, that the United States views Israeli actions in and around Gaza and the West Bank as injurious to a future peace. And his administration has not complained about Israeli actions taken in self-defense in Lebanon and Syria but has, instead, supported Israel’s right to defend itself.
This marks a breathtaking contrast with the tone and spirit of the relationship between the two countries during the previous administration. The eight Obama years were characterized by what can only be called a gut hostility rooted in the president’s own ideological distaste for the Jewish state.
The intensity of that hostility ebbed and flowed depending on circumstances, but from early 2009, it kept the relationship between the United States and Israel in a condition of low-grade fever throughout Barack Obama’s tenure—never comfortable, never easy, always a bit off-kilter, always with a bit of a headache that never went away, and always in danger of spiking into a dangerous pyrexia. That fever spike happened no fewer than five times during the Obama presidency. Although these spikes were usually portrayed as the consequences of the personal friction between Obama and Israeli Prime Minister Benjamin Netanyahu, that friction was itself the result of the ideas about the Middle East and the world in general Obama had brought with him to the White House. In this case, the political became the personal, not the other way around.
Given the general leftish direction of his foreign-policy views from college onward, it would have been a miracle had Obama felt kindly disposed toward the Jewish state’s own understanding of its tactical and strategic condition. And Netanyahu spoke out openly and forcefully to kindly disposed Americans—from evangelical Christians to congressional Republicans—about the threats to his country from nearby terrorism and rockets, and a developing nuclear Iran 900 miles away. His candor proved a perpetual irritant to a president whose opening desire was to see “daylight” (as he said in February 2009) between the two countries. Obama caused one final fever spike as he left office by refusing to veto a hostile United Nations resolution. This appeared churlish but was, in fact, Obama allowing himself the full rein of his true and long-standing convictions on his way out the door.T
he things Trump both has and has not done should not seem startling. They constitute the baseline of what we ought to expect one ally would say and not say about the behavior of another ally. But as Obama’s disgraceful conduct demonstrated, Israel is not just another ally and never has been. It is a unique experiment in statehood—a Western country on Mideast soil, born from an anti-colonialist movement that is now viewed by many former colonial powers as an unjust colonial power, created by an international organization that is now largely organized as a means of expressing rage against it.
Historically, American leaders have had to reckon with these unique realities—and the fact that the hostile nations surrounding Israel and hungering for its destruction happen to sit atop the lifeblood of the industrial economy. The so-called realists who claim to view the world and the pursuit of America’s interests through cold and unsentimental eyes have experienced Israel mostly as a burden.
Through many twists and turns over the seven decades of Israel’s existence, they have felt that America’s support for Israel is mostly the result of short-sighted domestic political concerns for which they have little patience—the wishes of Jewish voters, or the religious concerns of evangelical voters, or post-Holocaust sympathy that has required (though they would never say it aloud) an unnatural suspension of our pursuit of the American national interest.
Israel created problems with oil countries, and with the United Nations, and with those who see the claims for the necessity of a Jewish state as a form of special pleading. As a result, the realists have spent the past seven decades whispering in the ears of America’s leaders that they have the right to expect Israel to do things we would not expect of another ally and to demand it behave in ways we would not demand of any other friendly country.
The realists and others have spent nearly 50 years propounding a unified-field theory of Middle East turmoil according to which many if not all of the region’s problems are the result of Israel’s existence. Were it not for Israel, there would not have been regional wars in 1956, 1967, 1973, and 1982—no matter who might have borne the greatest degree of responsibility for them. There would have been other conflicts, but not this one. There would have been no world-recession-inducing oil embargo in 1973 because there would have been no response to the Yom Kippur War. Were it not for Israel, for example, there would be no Israeli–Palestinian problem; there would have been some other version of the problem, but not this one.
Unhappiness about the condition of the Palestinians in a world with Israel was held to be the cause of existential unhappiness on the Arab street and therefore of instability in friendly authoritarian regimes throughout the Middle East. Meanwhile, Israel’s own pursuit of what it and its voting populace took to be their national interests was usually treated with disdain at the very least and outright fury at moments of crisis.
It was therefore axiomatic that the solution to many if not most of the region’s problems ran right though the center of Jerusalem. It would take a complex process, a peace process, that would lead to a deal—a deal no one who believed in this magical process could actually describe honestly and forthrightly or give a sense as to what its final contours would be. If you could create a peace process leading to a deal, though, that deal itself would work like a bone-marrow transplant—through a mysterious process spreading new immunities to instability in the Middle East that would heal the causes of conflict and bring about a new era.
Again, this was the view of the realists. With Israel’s 70th anniversary coming hard upon us, the question one needs to ask is this: What if the realists were nothing but fantasists? What if their approach to the Middle East from the time of Israel’s founding was based in wildly unrealistic ideas and emotions? Central to their gullibility was the wild and irrational idea that peace was or ever could be the result of a process. No, peace is a condition of soul, an exhaustion from the impact of conflict, born of a desire to end hostilities. Only after this state is achieved can there be a workable process, because both parties would already have crossed the Rubicon dividing them and would only then need to work out the details of coexistence.
There was no peace to be had. The Arab states didn’t want it. The Palestinians didn’t want it. The Israelis did and do, but not at the expense of their existence. The Arabs demanded concessions, and the Israelis have made many over the years, but they could not concede the security of the millions of Israel’s citizens who had made this miracle of a country an enduring reality. The realists fetishized “process” because it seemed the only way to compel change from the outside. And so Israel has borne the brunt of the anger that follows whenever a fantasist is forced to confront a reality he would rather close his eyes to.
That is why I think what Trump and his people have done over the past 14 months represents a new and genuine realism. They are dealing with Israel and its relationships in the region as they are, not as they would wish them to be. They are seeing how the government of Egypt under Abdel Fattah el-Sisi is making common cause with Israel against the Hamas entity in Gaza and against ISIS forces in the Suez. They are witness to the effort at radical reformation in Saudi Arabia under Muhammad bin-Salman—and how that seems to be going hand in hand with an astonishing new concord between Israel and the Desert Kingdom over the common threat from Iran. This is a harmonizing of interests that would have seemed positively science-fictional in living memory.
Mostly, what they are seeing is that an ally is an ally. Israel’s intelligence agencies are providing the kind of information America cannot get on its own about Syria and Iran and the threat from ISIS. Israel is a technological powerhouse whose innovations are already helping to revolutionize American military know-how. Israel’s army is the strongest in the world apart from the regional superpowers—and the only one outside Western Europe and the United States firmly locked in alliance with the West. Things are changing radically in the Middle East, and as the 21st century progresses it is possible that Israel will play a constructive and influential role outside its borders in helping to maintain and strengthen a Pax Americana.
Donald Trump is a flighty man. All of this could change. But for now, the replacement of the false realism of the past with a new realism for the 21st century seems like a revolutionary development that needs to be taken very, very seriously.
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
f the making of Washington movies, there is no end. Kohelet said this in Ecclesiastes, I think. Or maybe it was Gene Shalit on the Today Show. It’s a truism in any case. Steven Spielberg’s latest entry in the genre, The Post, is for many Washingtonians the most powerful example in the long line. When the movie opened here in late December, there were reports of audiences cheering lustily and even dissolving in tears at the movie’s end, as if they were watching a speech by President Obama. The local paper ran news articles about it, along with numberless feature stories, interviews, op-eds, fact-checks, reviews, and reviews of reviews.
Which is excusable, I guess, since the movie is about the Washington Post. But then The Post is supposed to be about so many things. It’s about the First Amendment, depicting the agonies of the Post’s editor Ben Bradlee, and its owner, Katherine Graham, as they defy the Nixon administration to publish the top-secret Pentagon Papers. It’s about feminism and the personal evolution of Mrs. Graham from an insecure Georgetown socialite to Master of the Boardroom. It’s the story of the lonely courage of the leaker/whistleblower/traitor (your call) Daniel Ellsberg. It is also, so I read in the Post, a warning about the imperial designs of President Trump to smother a free press. And it’s been understood as a straightforward tale of political history, though the liberties Spielberg takes with his based-on-a-true-story are so extreme as to render it useless as a guide to what happened in the summer of 1971.
Running beneath it all is the motive that animates so many Washington movies: an impatience with the stuttering, halting processes of self-government. The wellspring from which the Washington movie flows is Frank Capra’s Mr. Smith Goes to Washington. The plot is familiar to everyone. Mr. Smith, a small-town bumpkin played by Jimmy Stewart—talk about stuttering and halting!—is appointed by sinister political bosses to a vacant Senate seat, on the assumption that he will be easily manipulated, like a movie audience. Instead, Smith stumbles upon an illicit land deal and exposes the Senate as a den of thieves. His filibustering floor speech rouses a populist outpouring from an army of alarmingly cute children. By the end of the movie, Mr. Smith has restored the nation to its democratic ideals.
Capra intended his movie to be a hymn to those ideals, and for nearly 80 years that’s what audiences have taken it to be. It is no such thing. Mr. Smith seethes with contempt for the raw materials of democracy: debate, quid pro quo deal-making, back-scratching compromise—all the tedious, unsightly mechanics that turn democratic ideals into functioning self-government. In Capra’s telling, democracy can be rescued only by anti-democratic means. An appointed charismatic savior (he’s not even elected!) uses a filibuster (favorite parliamentary trick of bullies and autocrats) to release the volatile pressure of a disenfranchised mob (the great fear of every democratic theorist since Aristotle). From Mr. Smith to Legally Blonde 2, the point of the Washington movie is clear: Left to its own devices, without an outside agent to penetrate it and cleanse it of its sins, self-government sinks into corruption and despotism.
Steven Spielberg is the closest thing we have to Capra’s successor. Like all his movies, The Post has many charms: a running visual joke about Bradlee’s daughter making a killing with her lemonade stand threads in and out of the heavier moments like a rope light. On the other hand, his painstaking obsession with period detail often fails: A hippie demonstration against the Vietnam War looks as if it’s been staged by the cast of Hair. The set-piece speeches are insufferable, an icky glue of sanctimony and sentimentality. What we call the Pentagon Papers was a classified history of the lies, misjudgments, and incompetence of four presidents, from Harry Truman to Lyndon Johnson, ending in 1968. Sometimes the speechifying is directed at the malfeasance of these men, as when Bradlee bellows: “The way they lied—those days have to be over!”
Weirdly, though, the full force of the movie’s indignation is aimed at Richard Nixon. Historians might point out that Nixon wasn’t even president during the period covered by the Pentagon Papers. Intelligence officials told the president that the release of the papers would pose an unprecedented threat to national security. He ordered the Justice Department to sue to prevent the New York Times and the Post from publishing the top-secret material. In the movie’s account, this ill-judged if understandable response is equivalent to the official, strategic lies that accompanied tens of thousands of American soldiers to their deaths.
A particularly rich moment comes when Robert McNamara warns Mrs. Graham about Nixon’s capacity for evil. As Kennedy and Johnson’s defense secretary, McNamara was an early version of Saturday Night Live’s Tommy Flanagan, Pathological Liar: The Viet Cong are on the run! Yeah, sure, that’s the ticket! As much as anyone, McNamara, with his stupidity and dishonesty, guaranteed the tragedy of Vietnam. And yet here he is, issuing a clarion call to Mrs. Graham. “Nixon will muster the full power of the presidency, and if there’s a way to destroy you, by God, he’ll find it!” Later Bradlee compares Nixon to his predecessors: “He’s doing the same thing!”
Um, no. From his inauguration in 1969 onward, Nixon’s every move in Vietnam was intended to extricate the U.S. from the quicksand previous presidents had led us (and him) into. In this case, if in no other, Nixon was the good guy. He had nothing to lose, personally, from the publication of the Pentagon Papers, and maybe a lot to gain. After all, they demonstrated the villainy of his predecessors, not his own. (That came later.)
Yet the movie can’t entertain the possibility that Nixon could act on anything but the basest motives. He is a sinister presence. We see him through the Oval Office window, always alone, with his back turned, stabbing the air with a pudgy finger and cursing the Washington Post to subordinates over the phone. It’s actually Nixon’s voice in the movie, taken from the infamous tapes. Unfortunately, the actor’s movements don’t synchronize with the words; in such a somber thriller, the effect is inadvertently comic. It reminded me of watching the back of George Steinbrenner’s head in Seinfeld while Larry David spoke the Yankee owner’s dialogue. And Nixon was no Steinbrenner.
The most plausible explanation is that Nixon, in trying to stop publication of the Pentagon Papers, was doing what he said he was doing: his job. American voters had elected him to protect national security and, not incidentally, the prerogative of the president and the federal government to determine how best to protect it, including determining whether sensitive information should be kept secret. If he didn’t do his job the way voters wanted him to, they could get rid of him next time. You know, like in a democracy.
Ben Bradlee, Katharine Graham, and Steven Spielberg, not to mention those teary audiences, have no patience with such niceties. As it happens, in the end, the Pentagon Papers were a bust. The sickening detail they disclosed deepened but did not broaden the historical record, and by all accounts their impact on national security was negligible. Those facts don’t alter the creepiness of The Post’s premise—that the antagonists of an elected regime are allowed to go outside the law when it suits their view of the national interest. Charismatic saviors (and few people were more charismatic than Ben Bradlee) can save democracy from itself, but only by ignoring the requirements of democracy. Spielberg continues the tradition of the Washington movie. The Post is Capraesque—in the only true sense of the word.
Is Harvard assaulting the rights of students to free association in the name of a diversity standard it doesn’t live up to itself?
Harvard College is home to six all-male “final clubs.” Their members have access to houses in which they eat, socialize, and form bonds with their fellows. These clubs are as historic as they are renowned; most were formed in the 19th century and have counted Kennedys, Roosevelts, and an endless procession of politicians, writers, and businessmen among their members. From the time of their origination, these exclusive institutions have been an object of fascination. When doors are closed, and only a small, elite group selected from an already hyper-elite campus has been invited inside, jealousy, curiosity, and frustration are sure to prevail.
The final clubs are financially independent from Harvard and have been entirely unaffiliated with the university since the 1980s, when the administration and the clubs clashed over the latter’s refusal to admit women. But that conflict, which had cooled over time, has recently resurfaced in a new and heightened manner.
In March 2016 Rakesh Khurana, the dean of Harvard College, set an April 15 deadline for the final clubs, at which time they were to inform the administration whether they would change course and become co-ed. Two forces drove Khurana’s action. The first was a report by Harvard’s Task Force on Sexual Assault Prevention released days earlier, after years of research. The report indicated that students who were involved with the final clubs were significantly more likely to have experienced some form of assault than those who were not. The second impetus was the administration’s position that the final clubs—and the ways in which they screened members—were in direct conflict with the ethos of the university.
The deadline passed without response from the clubs. On May 6, 2016, Dean Khurana wrote a letter to Harvard President Drew Faust. He proposed that, beginning with incoming freshmen who would matriculate in the fall of 2017, students who became members of what he termed “unrecognized single-gender social organizations” should be ineligible for leadership positions in Harvard organizations—meaning they could not serve as publication editors, captains of sports teams, leaders of theatrical troupes, and the like. And they would also be ineligible for letters of recommendation from the dean, necessary for many prestigious postgraduate opportunities such as the Rhodes and Marshall scholarships.
Khurana’s letter, and the sanctions proposed within, quickly became a cause célèbre. Harry R. Lewis, a professor of computer science and himself a former dean of the college, wrote Khurana a letter expressing his concern that “by asserting, for the first time, such broad authority over Harvard students’ off-campus associations, the good you may achieve will in the long run be eclipsed by the bad: a College culture of fear and anxiety about nonconformity.” Lewis went on to note:
The reliance on your judgement of what count[s] as Harvard’s values, and using that judgment to decide which students will receive institutional support, is a frightening prospect….The discretion exercised by the dean and his representatives will chill the activism of students in causes that might also be considered noncompliant with Harvard standards—for example, advocacy for a religion that does not allow women to be full participants, or a political party that opposes affirmative action. Such groups are excluded from your mandate, but only as a matter of your discretion. Why wouldn’t activism for such organizations color the support the College would offer their members, on the basis that such students are showing that their true colors are not pure Crimson?
Lewis also referenced the faculty’s responsibilities and noted that there was no precedent in Harvard’s Handbook for Students for the sanctions, thus suggesting that Khurana’s proposals might be outside the administration’s jurisdiction.
In September 2016, Khurana detailed the responsibilities of the “Single-Gender Social Organizations Implementation Committee.” The committee was tasked with
consulting broadly with the College community to address the following questions: 1) What leadership roles and endorsements are affected by the policy; 2) How organizations can transition to fulfill the expectations of inclusive membership practices; and 3) How the College should handle transgressions of the policy.
In addition to the committee’s work, the faculty went through several rounds of motions and debate, discussing myriad permutations of the sanctions, as well as the validity of the sanctions themselves.
In December 2017, the discussions came to a halt. Harvard’s administration flatly announced it would impose sanctions on students who joined those “unrecognized single-gender social organizations,” or USGSOs. This ostensibly final decision has provoked renewed outrage from students, faculty, and alumni, who have grounded their varied objections in ethical, philosophical, and legal concerns.
Until the 1960s shattered the American elite consensus on such matters, the collegiate experience was vastly different for students. Universities used to view their role as being in loco parentis—serving in place of the parents from whom their charges had recently separated. Today, on Harvard’s enchanting campus, teenagers and twentysomethings tend to rule the roost. Students have tremendous flexibility in building their course schedules, and rare is the lecture professor who takes attendance. Undergraduates come and go as they please, to and from wherever they please, with whomever they please, from the darkest hours of the night to the earliest hours of the morning.
But from the time America’s colleges came into being in the 17th and 18th centuries until just a few decades ago, these institutions imposed rules and regulations, curtailed freedoms, and designed a microcosmic world in which young adults would—in theory—learn how to navigate the reality that awaited them after graduation. They were eased into the world in a setting that constricted their choices and where the powers that be very consciously, and intentionally, refrained from treating them like adults. This was most evident in the controls placed on contact between the sexes.
A 1989 Harvard Crimson article by Katherine E. Bliss detailed the so-called parietal rules of the 1960s. It noted that “in 1964, the primary goal of College administrators was maintaining ‘an open door and one foot on the floor’ policy for students entertaining guests of the opposite sex in their rooms.” At that time, the student body and the administration were in conflict over the right to do as they pleased in their own dorms: “Students in 1964 were concerned with lengthening the number of hours they were allowed to spend with members of the opposite sex in the privacy of their own rooms.” If this sounds quaint, consider Bliss’s next point. “Few,” she observed, “could appreciate the fact that only a decade earlier, men and women were not allowed to enter the dormitories of the opposite sex at all.”
The original parietal rules meant that the women of Radcliffe, Harvard’s sister college, could have been in the Harvard Houses only between the hours of 4 and 7 p.m. Robert Watson, a Harvard dean, explained at the time: “We have to watch the mores of our students. I do not want to see Harvard play a leading role in relaxing the moral code of college youth.” Indeed, he went on to say that “the college must follow the customs of the time and the community.…We cannot have rules more liberal than a standard generally accepted by the American public.”
Is there a single standard generally accepted by the American public today? For most of the country—with exceptions in deeply religious Jewish, Christian, and Islamic communities—ours is not an age that concerns itself with the amount of time that men and women spend together in solitude. But that doesn’t mean our era isn’t concerned with the moral development of our youth. On the contrary, leaders of America’s elite institutions today are as preoccupied with strengthening the souls of their charges as were the men who designed the parietal codes all those years ago. Only their aim is not sexual purity anymore, but rather social diversity. It is the heart and soul of the moral vision of our times, and administrators today are no less determined to see that students hew to that standard. But in their effort to serve in loco parentis in this fashion, educators are leaping across ethical—and possibly, legal—lines.
The fraternity-like final clubs have always been difficult to get into, much like Harvard itself. And for many years, the all-male final clubs were certainly characterized by discrimination. In a 1965 piece for the Crimson, Herbert H. Denton Jr., then an undergraduate, noted that while “the tacit ban on Jews has been relaxed in most clubs,” the “ban on Negroes is still in effect.” The same cannot be said today; while several of the final clubs are trying to retain their character by remaining single-gender organizations, they do not screen would-be members on the basis of race or religion.
Nonetheless, the administration has determined that they espouse values and ideas contrary to the Harvard spirit and must consequently be treated as an anachronistic wrong to be extirpated. In a statement issued in December, President Faust (along with William F. Lee, senior fellow of the Harvard Corporation) declared that
the final clubs in particular are a product of another era, a time when Harvard’s student body was all male, culturally homogenous, and overwhelmingly white and affluent. Our student body today is significantly different. We self-consciously seek to admit a class that is diverse on many dimensions, including on gender, race, and socioeconomic status.
The clubs have strict rules about speaking with the press, and every member I spoke with—both former and current students—did so on the condition of anonymity. Many brought up the topic of diversity, noting that in their experience, the members of their clubs were diverse in both ethnic and socioeconomic respects. Members of multiple clubs told me about policies under which an inability to pay club dues has no bearing on whether or not a student will be accepted. Indeed, one went so far as to note that the financial-aid offer is blatantly highlighted during the initiation process, so that those lower on the socioeconomic ladder are not even temporarily burdened by the misconception that their financial status might affect their membership.
The final clubs, like Harvard itself, may indeed be a product of another era. But just as Harvard has evolved, the final clubs have changed. Faust, Lee, and all of the actors in the anti-final-clubs camp ignore this. They also espouse a position that is as illogical as it is incoherent: Faust and Lee claim both that “students may decide to join a USGSO and remain in good standing” and that “decisions often have consequences, as they do here in terms of students’ eligibility for decanal1 endorsements and leadership positions supported by institutional resources.”
Most parents would not believe that their sons and daughters were in “good standing” if they came home from campus for winter break and told them they would be unable to be editor of the newspaper, captain of the debate team, or eligible for a Rhodes or Marshall scholarship. Yet Faust and Lee insist that “the policy does not discipline or punish the students.” It merely “recognizes that students who serve as leaders of our community should exemplify the characteristics of non-discrimination and inclusivity that are so important to our campus.” It’s hard to believe that Faust and Lee might honestly think that excluding students from leadership roles or prestigious postgrad opportunities would be construed as anything other than a punishment.
So why the insistence to the contrary? If the final clubs are, in the administration’s eyes, archaic, narrow-minded, discriminatory organizations, why not come out with an honest statement that calls for disciplining the students who dare to participate in these institutions? Lewis, the former dean, has explained this by making reference to what Faust and Lee do not mention—namely, Harvard’s Statutes—the internal bylaws governing the institution. Lewis cites part of the 12th statute, which lays out that “the several faculties have authority…to inflict at their discretion, all proper means of discipline.” He notes that “by declaring that ineligibility for honors and distinctions are ‘not discipline,’ what President Faust and Mr. Lee are saying is that the Statutes are not implicated, the matter is not one for the Faculty to decide, and no Faculty vote is needed to carry out the policy.” Indeed, Lewis notes that “it is important that the…policy not be discipline, because if it were discipline, and disciplinary action were taken against a student without a Faculty vote authorizing that policy, that student could challenge the action as not properly authorized.”
There is something else the Faust-Lee statement does not reference—and tellingly. In the beginning of the Harvard administration’s war on final clubs, concerns over sexual assault seemed to form the core of the issue. The Task Force on Sexual Assault Prevention reported that 47 percent of female college seniors who were in some way involved in final clubs—either because they attend events at the male clubs, or because they themselves are members of female clubs—said they had experienced “nonconsensual sexual contact since entering college.” Since “31 percent of female Harvard seniors reported nonconsensual sexual contact since entering college,” the report said, the data proved that “a Harvard College woman is half again more likely to experience sexual assault if she is involved with a Club than the average female Harvard College senior.” But Harvard’s sexual assault survey also found that 75 percent of “incidents of nonconsensual complete and attempted penetration reported by Harvard College females” happened in…Harvard dorms.
The report is sloppy and lumps together things that are not alike. For example, the Porcellian—Harvard’s oldest final club—does not allow any nonmembers through its doors. Charles Storey, who was then the Porcellian’s graduate president, provided a statement to the Crimson in which, among other things, he claimed that the club was “being used as a scapegoat for the sexual assault problem at Harvard despite its policies to help avoid the potential for sexual assault.” The Porcellian, he said, was “mystified as to why the current administration feels that forcing our club to accept female members would reduce the incidence of sexual assault on campus.” Indeed, Storey said, “forcing single gender organizations to accept members of the opposite sex could potentially increase, not decrease the potential for sexual misconduct.”
A day later, Storey apologized for his statement. A few days after that, he resigned as the Porcellian’s graduate president. His reasoning was admittedly inelegant, as it could be interpreted to suggest that club members would be unable to restrain themselves from committing sexual assault should women enter their domain. But Storey was not incorrect in pointing out that, by definition, women could not be subjected to unwanted touching in the Porcellian clubhouse if they were not allowed inside. For a club like the Porcellian, then, where instances of male-on-female sexual assault within the house are currently nonexistent, going co-ed would inherently guarantee that the opportunity for assault would expand. And that is why it is noteworthy (Storey’s humiliation notwithstanding) that the Faust-Lee declaration eliminated the attack on the final clubs for their ostensibly heightened role in unwanted sexual conduct. And why the entirety of the case against them now rests on their failure to hew to the administration’s convictions on gender egalitarianism.
The role that final clubs play in Harvard social life has been a contentious topic for decades. The perception has long been that socially, the members of Harvard’s male final clubs have too much power. On a campus with limited space for social gathering, the final-club mansions are often the source of the college’s most sought-after nightlife. Arguments have been made consistently over time that the exclusionary practices of the clubs—they typically accept only 10 to 25 new members a year—make for unpleasant and unfair campus social dynamics. But again, this conversation is happening at Harvard, an institution that prides itself on its prestige and exclusivity, and which accepted a mere 5.2 percent of applicants to its class of 2021.
Lewis, the former dean, is not exactly a natural ally for the clubs. He told me that he was “pretty tough with them” during his tenure, and that he was “instrumental in trying to get some of the bad behavior of some of the final clubs under control.” The issues that arose during his time as dean seem to have mostly been related to parties that grew too loud or students who became too drunk. But confronting specific problems as they arise is an approach entirely different from issuing an all-encompassing sanction on free association. At Harvard, specifically, the implications of such a policy could have long-term ramifications. “As an educational institution that, for better or worse, graduates more than its fair share of the leadership of the country, in both industry and technology, and government and law,” Lewis said, “we should not be teaching students that the way you control social problems is by creating bans and penalties against joining organizations.” His “bigger worry,” he said, is that “students will come to think it’s a reasonable thing to do.”
Beyond all these considerations lies an additional layer of complication: legality. Even as a private institution, Harvard’s autonomy may not be as absolute as it seems to believe. I spoke by phone with Harvey Silverglate, a lawyer who is currently representing the Fly, one of the clubs. He told me that “Harvard is misinformed if it has been told by its lawyers or by the office of the general counsel that it can do what it is trying to do, that is to say, punish a private off-campus club, punish Harvard students for joining a legal off-campus club, that is not on Harvard property, and over which the university has no control.” If Harvard goes forward with its plan, Silverglate noted, it will have “overstepped its legal powers.” He spoke extensively about the specific challenges that Harvard would face under Massachusetts state law, explaining that there are free-speech provisions in the Massachusetts constitution that are more protective of speech than the First Amendment to the U.S. Constitution. In fact, Silverglate noted, the state’s supreme court has ruled in several instances that Massachusetts’s declaration of rights “limits the power of private institutions over the people it governs.”
In its desire to avoid a lawsuit, the Harvard administration—or the team of lawyers that doubtlessly advised it—carefully crafted a rule that would apply equally to men and women. Had the sanctions applied solely to male-only clubs, the university would likely have been faced with a federal lawsuit or investigation into gender discrimination. Yet despite the male final clubs being the primary target of the sanctions, they seem to have done the most harm so far to Harvard’s fraternities, sororities, and female final clubs.
One female student I spoke with is a member of one of the originally all-female final clubs that has recently gone co-ed rather than face the sanctions. She explained that within the club, there is a “feeling of resentment.” The USGSOs were all given the choice to either go co-ed or face the sanctions. “The girls clubs,” she told me, “have accepted it because they don’t have a lot of money.” While the male clubs have old and powerful alumni—and the money that comes with them—the female clubs are young and, by comparison, poor. “The boys can all sue,” she said, but “the girls clubs don’t have that privilege.” Having men in the club has certainly changed things for her. She explained: “It’s definitely different—I loved having an all-female space, and there was lots of merit to that socially and even in terms of networking.… I had this strong female network, and that was kind of eroded by going co-ed.”
Sorority members are facing similar challenges, but unlike the male and female final clubs that do not answer to a national body, they are unable to adapt as they see fit. Sororities and fraternities are unable to go co-ed without violating the rules of their national charters; the sanctions policy therefore affects their organizations most.
I spoke by phone with Evan Ribot, a Harvard alumnus from the class of 2014 who was president of the fraternity AEPI while on campus. Stressing that he could speak only for himself, and not on behalf of AEPI or the AEPI alumni network, he told me there was a “tenuous relationship between the administration and the fraternities” when he was on campus. “There was a sense that we operated in a gray zone because the university knew we existed,” he told me. “So we weren’t underground, but we also were not a recognized group.” As a result of the sanctions, AEPI at Harvard has dissolved itself and become a new organization, the gender-neutral “Aleph.” The organization is no longer affiliated with AEPI national.
“It’s a shame,” he said, “because some of my best friends were looking to join AEPI not because they wanted to be in an exclusionary single-sex organization but because they were looking for a place to fit in on a challenging campus.” The same is true for women. Ribot noted: “The sororities were an avenue for women to find their own spaces—not because they were looking to exclude men but because there is an inherent value to a group of women hanging out, just like there can be an inherent value to have men hanging out.… It’s not rooted in exclusion.”
In some circumstances, it appears, Faust agrees. She herself attended Bryn Mawr—a women’s college—and serves as a special representative on the board of trustees of her alma mater. “It is impossible to figure out how Faust can reconcile helping to provide that singular experience to women while at the same time denying any portion of that experience to the women she is responsible for at Harvard,” said Richard Porteus, graduate president of the Fly Club. He graduated from Harvard in 1978 and was elected a member of the Fly Club in 1976. He spoke of the diversity of his club class and reflected that while “there were some people whose names also appeared on Harvard buildings,” he “didn’t come from wealth” and was not only elected to the club but became an officer. Porteus explained that “one’s socioeconomic standing did not matter.” All that mattered, he said, was “the potential for forming life-long friendships.”
The debate over Harvard’s final clubs would have taken place in an entirely different framework if we were still living in a time when university administrators saw their role as fill-in parents—and if that role were viewed as a comfort by the parents themselves. But today’s universities are, for better or worse, largely a free-for-all. The curtailing of certain freedoms thus becomes all the more apparent, and all the more disturbing, when measured against the backdrop of a prevailing “you do you” attitude. The core of the administration’s position seems to be reinforced by an overwhelming need to groom a student body that shares all the same beliefs and values—those that echo the principles that the administration itself espouses. If it deems single-sex social groups discriminatory, then there is no room for those students who see them not as beacons of gender exclusivity but as opportunities for friendship and support. In an educational institution, the only kind of diversity that should matter is diversity of thought. That’s a lesson the Harvard administration desperately needs to learn.
Harvard’s own questionable record on diversity is currently under harsh scrutiny—and not because of the behavior of clubs that have a tenuous connection to the university’s educational mission. Research has demonstrated that to gain entry into an institution like Harvard, Asian-American applicants must score an average of 140 points higher on their SATs than white applicants, 270 points higher than Hispanic applicants, and an astonishing 450 points higher than African-American applicants. The Justice Department has taken note and is investigating the matter. In December, the New York Times reported that the university has agreed to give the DOJ access to applicant and student records. That Harvard’s administration has become consumed with the goal of bringing an end to institutions that fail to meet a 21st-century standard for diversity is not without its savage ironies.
1 Meaning something a dean does.
Review of 'In the Enemy’s House' by Howard Blum
Nearly a decade would pass before the FBI and NSA began to release the actual Venona transcripts in 1995. In the years since, a number of books (including several co-authored by me) have analyzed the Venona revelations, while others have mined Communist International files and the KGB archives. Virtually all the major mysteries about Soviet espionage in the United States have been resolved by these once-secret documents. In addition to confirming the guilt of the Rosenbergs, Alger Hiss, Harry Dexter White, and virtually every other person accused of spying in the 1940s by the ex-spies Whittaker Chambers and Elizabeth Bentley, these books have exposed several important and previously unknown agents such as Theodore Hall, Russell McNutt, and I.F. Stone. Indeed, the only accused spy who turns out to have been innocent (although he was a secret Communist almost up until the day he took charge of developing an atomic bomb) was J. Robert Oppenheimer.
A handful of espionage deniers, centered around the Nation magazine, continue to argue, against all evidence and logic, that Alger Hiss is still innocent. The Rosenberg children continue to distort their mother’s role in espionage. And some hard-core McCarthyites still demonize Oppenheimer. But in truth, the bloody battle over who spied is over.
Lamphere’s book emphasized his collaboration with the Army cryptographer Meredith Gardner in the hard work of unraveling the spy rings using the Venona cables. Employing those 1986 recollections as a template, the Vanity Fair contributor Howard Blum has now given us In the Enemy’s House, an overly dramatized but largely accurate account of the friendship between the outgoing, hard-driving, atypical G-man Lamphere and the shy, scholarly, soft-spoken Gardner as they worked together to find and prosecute those Americans who had betrayed their nation.
Blum intersperses the American hunt for spies with the recollections of Julius Rosenberg’s KGB controller, Alexander Feklisov, who ran Rosenberg in 1944 and 1945 and supervised Fuchs in Great Britain from 1947 to 1949. Feklisov watched with mounting dread as the KGB’s atomic spy networks were exposed, both because of Venona and the KGB’s own blunders—most notably because the Russians used Harry Gold, Fuchs’s contact, to pick up espionage material from David Greenglass, who was Julius Rosenberg’s brother-in-law and part of his spy ring.
Blum also uses information from many of the scholarly accounts that have already appeared, although not always carefully. His only new sources of data come from interviews with members of the Lamphere and Gardner families and access to their personal notebooks. But while he provides a list of his sources for each chapter, Blum does not use footnotes, so that although many of the personal and emotional reactions to the investigation he attributes to people, and especially to Lamphere, presumably come from these sources, it is never clear whether they are based on contemporaneous written notes or third-party recollections of events more than 50 years in the past.
Such objections are not mere academic carping. While Blum successfully turns this oft-told story into an interesting and suspenseful narrative, his approach comes at a cost. For example: He is eager to transform Lamphere from a diligent and resourceful FBI investigator who often chafed at the bureaucracy and petty rules that governed the agency into a full-blown rebel who almost singlehandedly forced the FBI to take up the problem of Soviet espionage. To do so, Blum suggests that until the FBI received an anonymous letter in Russian in August 1943 alleging widespread spying and naming KGB operatives, the Bureau regarded the investigation of potential Soviet spies as useless because allies did not spy on each other.
This is wrong. In fact, the FBI had already mounted two large-scale investigations—one of Comintern activities in the United States undertaken in 1940 and the other of attempted espionage directed at atomic-bomb research at the Radiation Laboratory in Berkeley, which began in early 1943. Both had unearthed information on atomic espionage. These included discomfiting details about Robert Oppenheimer’s Communist connections; efforts by Steve Nelson, a CPUSA leader in the Bay Area in contact with known Soviet spies, to obtain atomic information; and contacts between a Soviet spy and Clarence Hiskey, a chemist on the Manhattan Project.
At one point, Blum renders one of Hiskey’s contacts, Zalmond Franklin, as Franklin Zelman and mischaracterizes him as “a KGB spook working under student cover.” In fact, Franklin was a veteran of the Abraham Lincoln Brigade working as a KGB courier. In any event, the FBI neutralized this threat by transferring Hiskey from Chicago to a military base near the Arctic Circle, thereby scaring his scientific contacts (whom he had introduced to a Soviet agent) into cooperating with the Bureau.
There are other occasions where Blum demonstrates an uncertain grasp of the history of Soviet intelligence. He misstates Elizabeth Bentley’s motives for defecting: in fact, angry at being pushed aside by the Soviets, she feared she was under FBI surveillance. And he claims that only three witnesses testified against the Rosenbergs (Ethel’s brother and sister-in-law and Harry Gold), which leaves off others (Bentley, Max Elitcher, and the photographer who had taken passport photos for the family just prior to their arrests).
Blum’s account of the way the KGB encoded and enciphered its messages is oversimplified. The mistake that made it possible for American counterintelligence to break into the Soviet messages was the Soviet intelligence services’ reuse of some supposedly one-time-only pads. Not all of the one-time pads were used twice, and only if such a pad was used twice could the FBI strip the random numbers from the message sent by Western Union. That process allowed Gardner to attempt to break the underlying code. The vast majority of the Soviet cables remained unbreakable, and many could be only partially decrypted. And most of the decrypted cables had nothing to do with atomic espionage but concerned the stealing of diplomatic, political, industrial, and other military secrets.
Partly to heighten suspense, Blum misrepresents or distorts the timelines on matters involving Klaus Fuchs and the Rosenberg ring. He harps on Lamphere’s frustration about not being able to use the decrypts in court, but the FBI had concluded it was highly unlikely that they could be legally introduced into evidence without exposing valuable cryptological techniques, a conflict Lamphere surely understood. That very problem helps explain the FBI’s inability to prosecute Theodore Hall, the youngest physicist at Los Alamos, who had been exposed as a Soviet spy. Blum mistakenly suggests that the FBI agent in Chicago who investigated Hall was unaware of Venona. But that agent did know; the problem was that when the FBI began its investigation in the spring of 1950, Hall had temporarily ceased spying. He was eventually brought in for questioning, but neither he nor his one-time courier and friend, Saville Sax, broke and confessed. Lacking independent evidence, the FBI was stymied.

The most significant flaw of In the Enemy’s House is its assertion that Ethel Rosenberg’s conviction and execution were monumental acts of injustice that disillusioned both Lamphere and Gardner, soured their sense of accomplishment, and left them consumed by guilt. It is true that Lamphere had opposed Ethel’s execution and had drafted a memo that J. Edgar Hoover sent to the judge urging she be spared as the mother of two young sons. Gardner had translated one Venona message that indicated Ethel knew of her husband’s espionage but because of her delicate health “did not work,” which Gardner interpreted to mean she was not part of the spy ring. But, as Lamphere pointed out in his own book, her brother David Greenglass had testified to her involvement in his recruitment. And KGB messages available following the collapse of the Soviet Union now make clear that Ethel had played a key role in persuading her sister-in-law, Ruth Greenglass, to urge her husband to spy.
In The FBI-KGB War, Lamphere never evinced deep moral qualms about their fate. He expressed a more complex set of emotions. “I knew the Rosenbergs were guilty,” he writes, “but that did not lessen my sense of grim responsibility at their deaths.” And he calls claims that the case was a mockery of freedom and justice both “abominable and untruthful.” Blum insists that Gardner was “stunned” by their deaths and quotes him as saying somewhere: “I never wanted to get anyone in trouble” (which would suggest a monumental naiveté if true).
Blum’s claim that Lamphere and Gardner had condemned themselves “to another sort of death sentence” for their roles is a wild exaggeration. So, too, is his charge that Lamphere believed that in the Rosenberg case the United States “might prove to be as ruthless and vindictive as its enemies.”
Finally, Blum links Lamphere’s decision to leave the FBI for a high-level position in the Veterans Administration to a sense of lingering guilt. But in his own book, Lamphere attributes the move to the frustration he felt once he realized he would be stuck as a Soviet espionage supervisor for years to come. Blum links Gardner’s brief posting to Great Britain to work with its code-breaking agency to an effort to escape his guilt, but he never mentions that Gardner returned to work at the National Security Agency for many years.
Retired intelligence agents friendly with both men have no recollection of their expressing regret about their role in the Rosenberg case. It is possible that they made some such comment to a family member or jotted down something in a notebook, but without very specific and sourced comments, the idea that they ever regretted their work exposing Soviet spies is nonsense that mars Blum’s otherwise entertaining account.
What we got instead was a combination of celebrity puffery and partisan cheap shots at the Trump administration. The politics of North and South Korea, and the equally complex and intricate relations between these two countries and China, Japan, Russia, and the United States, were reduced to just another amateur sport. Ignorant and supercilious reporters transposed the clichés of the electoral horse race, complete with winners, losers, buzz, and sick burns, to nuclear brinkmanship. Major news organizations could not have done Kim’s job any better for her.
A representative example was written by no less than seven CNN reporters and researchers who concluded, “Kim Jong Un’s sister is stealing the show at the Winter Olympics.” The lead of this news article—I repeat, news article—was the following: “If ‘diplomatic dance’ were an event at the Winter Olympics, Kim Jong Un’s younger sister would be favored to win gold.” Gag me.
Then the authors let loose this howler: “Seen by some as her brother’s answer to American first daughter Ivanka Trump, Kim, 30, is not only a powerful member of Kim Jong Un’s kitchen cabinet but also a foil to the perception of North Korea as antiquated and militaristic.” Kim’s “Kitchen Cabinet”—why, he’s just like Andrew Jackson. And how could anyone have the “perception” that North Korea is “antiquated” and “militaristic”? Sure, they might threaten the world with nuclear annihilation. But have you seen Donald Trump’s latest tweet?
New York Times reporters are either smarter or more efficient than their peers at CNN, because it took only two of them to write “Kim Jong-Un’s sister turns on the charm, taking Pence’s spotlight.” Motoko Rich and Choe Sang-Hun described Kim’s “sphinx-like smile” and “no-nonsense hairstyle and dress, her low-key makeup, and the sprinkle of freckles on her cheeks.” They contrasted the “old message” of Vice President Pence, who has no freckles, with Kim’s “messages of reconciliation.” They cited one Mintaro Oba, a “former diplomat at the State Department specializing in the Koreas, who now works as a speechwriter in Washington.” What they did not mention is that Oba worked at Barack Obama’s State Department and writes speeches for a Democratic firm. Not that he has an axe to grind or anything.
The typical Kim puff piece began with her charm, grace, poise, statesmanship, and desire for unity and peace. Then, 10 paragraphs later, the journalist would mention that oh, by the way, North Korea is a totalitarian hellscape that Kim’s family has been plundering for over half a century. For instance, describing the South Korean reaction to Kim, Anna Fifield of the Washington Post wrote,
They marveled at her barely-there makeup and her lack of bling. They commented on her plain black outfits and simple purse. They noted the flower-shaped clip that kept her hair back in a no-nonsense style. Here she was, a political princess, but the North Korean “first sister” had none of the hallmarks of power and wealth that Koreans south of the divide have come to expect.
A political princess! It’s like Enchanted, except with gulags and famine.
Deep in Fifield’s article, however, we come across this sentence: “Certainly, Kim, who is under U.S. sanctions for human rights abuses related to her role in censoring information, was treated like royalty during her visit.” Just thinking out loud here, but maybe human-rights abuses and censorship deserve more than a glancing reference in a subordinate clause. Fifield went on to say that “Vice President Pence, who was also in South Korea for the opening of the Winter Olympics but studiously avoided Kim, had worried in advance that North Korea would ‘hijack’ the Olympic Games with its ‘propaganda.’” Now where could he have gotten that idea?
The fascination with Kim revealed both the superficiality and condescension of much of our press. Fifield’s colleague, national correspondent Philip Bump, tweeted out (and later deleted) a photo of Kim sitting behind Pence at the opening ceremonies with the comment, “Kim Jong Un’s sister with deadly side-eye at Pence,” as if he were being snarky about an episode of Real Housewives.
When Kim departed the Olympics, Christine Kim of Reuters wrote an article headlined, “Head held high, Kim’s sister returns to North Korea.” Here’s how it began:
A prim, young woman with a high forehead and hair half-swept back quietly gazes at the throngs of people pushing for a glimpse of her, a faint smile on her lips and eyelids low as four bodyguards jostle around her.
The Reuters piece ends this way: “Her big smiles and relaxed manner left a largely positive impression on the South Korean public. But her sometimes aloof expression and high-tilted chin also spoke of someone who sees herself ‘of royalty’ and ‘above anyone else,’ leadership experts and some critics said.” Thank goodness for the experts.
Kim Jong Un could not have anticipated more glowing coverage for his sister, for the robot-like cheerleaders he sent alongside her, or for his transparent attempt to drive a wedge between South Korea and its democratic allies. “North Korea has emerged as the early favorite to grab one of the Winter Olympics’ most important medals: the diplomatic gold,” wrote Soyoung Kim and James Pearson of Reuters, who called Pence “one of the loneliest figures at the opening event.” Quoting on background “a senior diplomatic source close to North Korea,” Will Ripley of CNN wrote an article headlined, “Pence’s Olympic trip a ‘missed opportunity’ for North Korea diplomacy.” But who was Ripley’s source? Dennis Rodman?
What most disturbed me was the difference in coverage of Kim Yo Jong and Fred Warmbier, whose son Otto died last year after being tortured and held captive in North Korea. Fred Warmbier accompanied Pence to the Olympics as a reminder of the North’s inhumanity and menace. Journalists ignored, dismissed, and even criticized this grieving man. Among many examples of thoughtlessness and callousness was a Politico tweet that read: “Fred Warmbier criticizes North Korean Olympic spirit.” He must have missed Kim’s freckles.
Washington Post columnist Christine Emba asked: “Is Otto Warmbier a symbol, or a prop?” You see, Emba wrote, “Otto’s father may want his son to be a symbol. But the nature of his escort risks turning him into a prop.” Why? Well, because “symbols stand for something” while “props are used by someone.” And “the Trump administration, which hosted Warmbier, is made up of shameless instrumentalizers who have made clear that they stand for very little.” So there you go. We should be skeptical of Fred Warmbier because Trump.
Emba’s not all wrong. There were a lot of props and tools at the Olympics. You could find them in the press box.
I was nine when I made my first trip to Israel in June of 1968, almost exactly a year after the Six-Day War. My parents had been in Italy the autumn before, and while vacationing in Rome they learned that there were inexpensive flights leaving twice a week for Tel Aviv. The whole of Israel was giddy at the time, momentarily unburdened of its insecurities by the stunning success of the Six-Day War, which had increased the total size of the young, besieged nation by more than two-thirds.
My mother finally found a use for the crumpled phone numbers of distant Israeli relatives she’d been carrying in her purse for the past several months, relatives on both her father’s and her mother’s side, Romanians all. Osnat, my mother’s second cousin once removed, had had the misfortune of remaining in Europe while the Nazis were on the move. She spoke of having spent five days hiding from the Germans in the liquid filth of an outhouse and breathing through a tube when they came near.
Meeting scores of warm and loving relatives and being feted by them as “our dear American Mishpacha” was partly why my parents were both so taken with Israel—that and the Israeli people themselves, the Sabras, so proud and brash, and the ancient beauty of the land. With some talk of perhaps making Aliyah, or at least exploring the idea of our moving to Israel, my parents, my siblings, my first cousins, and my Grandma Rose and her younger brother, Uncle Sol, gathered up a month’s worth of warm-weather clothing and flew en masse to Tel Aviv. We were greeted at Lod Airport by a crush of relations, all of them clamoring to hug and kiss us. And then as the sun descended into the Mediterranean and night fell over the coastal plain, they drove us all north in a rag-tag caravan of tiny old Fiats, Renaults, and Peugeots to the beach town of Netanya, where we stayed for the entire summer in a tiny flat just behind the home Osnat shared with her husband, Shlomo.
Days later, I’m with my father and my brother Paul at the Wailing Wall. It’s weird to think that only a week ago I was at home watching Gilligan’s Island and looking for my dad’s Japanese Playboys in the bottom drawer of his bedroom closet during the commercials. Now, I’m in Jerusalem, in the glaring sun beneath this gigantic wall of stone. When I’m sure no one’s looking, I put both hands on the wall, and then I touch my forehead to it. The stones are colder than you’d think they’d be in all this heat.
For reasons I don’t understand, I start to cry. I’d be embarrassed if my brother or my dad saw me like this, so I pretend that I’m praying. I wonder, though, am I just crying because you’re supposed to cry here? If the rabbis from the Talmud Torah had shown me pictures of some random bridge in Saint Paul from the time I was in nursery school, would I have cried at that, too?
When I look up at the wall again, I see some birds’ nests and a million pieces of paper with people’s prayers in them, all stuffed into the cracks between the stones. Everyone who comes here wants God’s attention. I’ll bet He loves all the notes. They probably make Him feel like someone gives a shit about the cool stuff He does.
I had been born a Jew in Minneapolis. Growing up Jewish there wasn’t a good or a bad thing any more than growing up with snow was good or bad. It just was. Because we Jews were so few, being one made us all feel different. It wasn’t a difference we’d asked for or earned either. It, too, just was. Becoming somewhat Jew-centric was natural for us. We were fond of staying close to one another, close to our causes and to our history; it was just a natural reaction to being the “other.”
It’s 1970 and I’m in junior high, on my way to English, when I see Nelson Gomez, Stuey Nyberg, and Craig Walner. They’re hip-checking kids into the tall metal lockers that line the hall. They are the three kings of Westwood Junior High’s dirtball dynasty, young hoodlums who regularly and without fear skip school, smoke filterless Marlboros, and shout “Fuck you, faggot” to students and staff members alike, save perhaps for Mr. H, the anti-Semitic shop teacher with whom they have forged an abiding friendship.
To the left and right of me, hapless students fly, body-slammed with alarming speed into the lockers by the three of them. It doesn’t escape my notice that these unfortunates have not been chosen randomly. There goes Brian Resnick. Next it’s Shelly Abramovitz and then Alvin Fishbein. As I round the corner, Stuey Nyberg grabs my second cousin, Elaine Kamel, by the shoulders and slams her face-first into her own locker. She and they were selected for no other reason than their Jewishness.
I grab Stuey by his neck with both hands and I claw at him until my fingernails pierce his pale skin and blood spurts from his jugular. Now I take the clear plastic aquarium algae scraper that I made in Mr. H’s shop class this very morning and use it to gouge out one of Nelson Gomez’s eyeballs, making sure he can see it in the palm of my hand with his remaining eye. Craig Walner tries to run, but I catch him by his mullet and shove his head into Elaine Kamel’s locker. I slam her locker door on him again and again. I don’t stop until his head is severed from his neck…
…and my daydream comes to an abrupt halt when Stuey Nyberg says, “Himmelman, it’s your turn to meet the lockers, you fucking kike.” Without a word of warning, he clouts me with a stinging jab right to my nose. It’s the first time I’ve ever been hit in the face, and while it’s agonizing, the blow is also somehow euphoric. I’m supercharged with adrenaline; I feel as if I’m on fire. But of course, I don’t hit Stuey back. God, no. I simply stand there glowering at the three of them, blood dripping from my large Jewish nose. And for the first time in my life, I feel downright heroic. I look around me and I see that, for now at least, our bitterest enemies have stopped hip-checking what feels like the entire Jewish nation.
Six months later it’s summer vacation, and we Himmelmans fly from Minneapolis to New York and connect with a nonstop to Tel Aviv. In less than two days, I’m on a towel on the beach in Netanya looking out at the cerulean blue of the Mediterranean.
As I lie on the hot sand, Mirage fighter jets with blue Jewish stars emblazoned under their wings suddenly streak so low across the water that I can smell jet fuel. As they scream overhead, the whole beach seems to shake. With a strange sense of clannish pride, I laugh and stare up at the planes as they accelerate and finally rocket out of range.
My father died in 1984, after suffering from Stage IV lymphoma for five years. I was 25 years old. A year later, I was living in the Twin Cities working on music with my band when I received a call from a woman named Ruth Grosh. She asked if I’d be willing to write some songs for a therapeutic teddy bear she’d dreamed up called Spinoza Bear. Ruth, a bona fide subversive by nature and New Age before anyone had even come up with the term, named her ursine brainchild after Baruch Spinoza, the heretical 17th-century Jewish philosopher. Spinoza was seen as harmful to, and at odds with, the views of the Jewish establishment of Amsterdam at the time. Eventually, both he and his writings were placed under a religious ban called a “cherem” by the Dutch Jewish community where he lived and worked. Aside from the fact that he was reviled for his modernist views, no one had much bad to say about him personally, except that “he was fond of watching spiders chase flies.”
The songs were to play off a battery-operated tape deck that fit into a zippered pouch beneath the soft brown fur of the bear’s stomach. A red heart-shaped knob on the bear’s chest served as the on-off switch. By today’s standards, the technology would seem crude, but at the time, with just a modicum of suspension of disbelief, it was possible to feel that the voice of the bear along with the music was issuing directly from its cheery muzzle. As to whom to hire to be the voice of Spinoza Bear, it was decided after some deliberation that not only would I write and sing the songs, I should also be the kind, concerned voice of the bear itself.
Each of the dozen or so cassette tapes that were eventually recorded had themes of self-empowerment, a kind of you-can-make-it-if-you-try bent. After just two years, the bear became a huge success—not as some plebeian, retail teddy, but as something greater. Spinoza Bear soon found his way into hospitals, health clinics, and centers for healing of all kinds. By holding the bear and listening closely to his stories and songs of wellness and inner light, rape victims, grief-stricken parents, bone-lonely pensioners, autistic kids, as well as children on cancer wards all across America found it possible to relieve some of their pain and fear.
Aside from the good works, the bear provided me with twenty grand in seed money that our band, Sussman Lawrence, used to set sail for New York City in 1985.
We were five new-wave rockers in an Oldsmobile Regal Vista Cruiser wagon, and two roadies in a spanking-new Dodge cube van. The van, we were overjoyed to discover, had been hastily christened from bumper to bumper with graffiti sometime during our 45-minute debut set at CBGBs, the legendary East Village rock-and-roll club, only days after we arrived on the East Coast.
Given the high cost of living in New York City, New Jersey seemed the next best thing. As it turned out, there were very few homeowners interested in renting a house to a band. I hatched a plan, which involved my calling on a middle-aged real-estate agent named Carol we’d found advertising in a Bergen County newspaper. When I finally got her on the line, I explained to her that we were medical students enrolled that fall at nearby Rutgers University and in need of a quiet place to live and study.
The following morning, as the rest of the guys waited outside in the Oldsmobile, my cousin Jeff, our band’s gifted keyboard player, and I showed up at Carol’s office in suits and ties we’d purchased at a local thrift shop, carrying responsible-looking briefcases. I had boned up on some medical terms as well, orthopedic surgical techniques mostly, in case she needed proof that we were actually who we were claiming to be. But there had been no need. We had the cash and seemed honest enough—“honest enough” to let her know that a few of us were also part-time musicians and that there might be some music playing, quietly of course, from time to time, just to ease the strain of our intense studies.
Two days later, Jeff and I woke up early, signed the lease papers, and pulled our now multihued, invective-laden cube van into the driveway of 133 Busteed Drive in Midland Park, New Jersey.
Trying for as much discretion as possible, lest the neighbors notice anything out of the ordinary, we backed the van up to the garage, lugged the gear up a short flight of stairs and into a large, unfurnished living room. Once upstairs, we began unloading beer-stained amplifiers, at least a dozen guitar cases, a drum set packed tightly into three large metal flight cases, assorted keyboards, and an entire public-address system and lighting rig. Aside from some bad scrapes in the hardwood floor and a gaping hole or two in the walls on our way in, the load-in was accomplished with speed and efficiency. We were up and practicing by late afternoon, our new-wave rock blaring fast and loud into the New Jersey autumn night.
A month after settling in, Ruth Grosh reached me at dinnertime by long distance, in the squalor of our band-house collective. After some catching up, she gently let me know that some psychic friends had explained to her that I had just a few months left on the planet. “What!” I said. “They told you I was gonna die?” Ruth was practiced at this kind of thing, it seemed, although her nonchalance about my imminent demise didn’t make me feel any less concerned. “They asked me to find out if you’d like to come in for a free consultation,” she said. I was due to fly back to Minneapolis later that week anyway, and I figured I might as well find out what all this planet-leaving nonsense was about.
Back home, on the morning of my appointment with the psychics, I found my mother, who was normally quite composed, flitting around the kitchen and singing quietly to herself. She had agreed to a lunch date that afternoon with the contrabass player from the Minnesota Symphony, her first since my dad had died almost two years before.
“Does this blouse look good on me?” she asked. “Be honest.”
“Yeah, it looks great,” I said.
I was uncomfortable in the extreme watching my mother dart around the house like a schoolgirl primping for a date with some dude who wasn’t my dad. True, it’d been two years since he’d died, and given all that she’d been through, it wasn’t like she didn’t deserve to live a little. After all, I thought, it was just lunch. But the more I saw of this weird, giddy side of her, the less I liked it. A car honked. It was Ruth.
She and I rode wordlessly as Japanese New Age wooden flutes intoned from her car stereo. We arrived after twenty minutes at the northern suburb of Brooklyn Center, and Ruth parked her car near a long row of newly built town houses. A man and a woman in their mid-forties greeted us at the front door, both smiling in a scary, off-putting way. They appeared to be a kind of husband-and-wife psychic tag team, and they rushed headlong into the consultation by asking if I’d like to give them some names of people I knew.
“We’ll be able to tell you all about them,” the woman said and smiled again. I thought it was just some cheesy method of showing off.
“The first names are enough,” said the man.
“Okay, let’s go with Jeff,” I said.
My cousin Jeff is a musical genius, a pianist of remarkable facility, who’s had to contend with neuromuscular tics most of his life. The two psychics were seated facing each other in cheap leather armchairs. In an instant, they were both precisely mimicking my cousin’s facial tics. I recognized each of them from the names Jeff and I had given them. When Jeff’s thumbs bent downward spasmodically, we called it “Southerner.” When his palms flexed upward in a sort of hand-waving motion, we called it “Reckless Greeter.” In another, with his eyebrows pinched together, lips compressed, and eyes blinking, Jeff looked like someone who was very curious about his environment. We called that one “Curious Man.” His most frequent tic was also his most unsettling. We called that one “Round the World.” It involved his eyeballs rolling uncontrollably in their sockets. Suddenly, to my astonishment, the corners of both of the psychics’ mouths had formed narrow half smiles. Their eyebrows began squeezing together; their eyes were blinking—open-shut-open-shut—perfectly mimicking Jeff’s Curious Man.
“The music, he can’t stop the music,” the woman shouted in excitement. Her husband, whose hands then began a remarkable imitation of Reckless Greeter added, “Yes, good God, the music! Can’t you feel it just pouring out of him?”
I was thinking this had to be some kind of brilliant trick, albeit a devilish one. It was astonishing, yes, but I wasn’t yet convinced that they were real. Next, I said the name “Beverly,” my mother’s, and they both giggled. It’s disconcerting to see adults giggle at any time, but when a pair of middle-aged psychics giggle at the mention of your bereaved mother’s name, it’s triply so.
“She’s doing something she feels guilty about,” the woman offered.
“Yes,” said the man. “Something she’s afraid of doing, but it seems to us that she’s also very excited.”
Almost in unison, the psychics said, “She’s acting like a little schoolgirl today!”
How in hell could they have known what I’d just experienced myself for the first time in my life that very morning? If these two freaks had wanted my undivided attention, they sure as hell had it now.
The room fell silent. I didn’t dare speak. They had officially scared the living daylights out of me with their last trick. Soon, they broached the subject I’d come all this way to talk about.
“Is it your wish to leave the planet?” the woman asked, more casually than I would have imagined possible for someone questioning a fellow human being about whether he wanted to live or die.
I paused and breathed deeply for a minute or so. It was a question I stopped and thought about longer than a mentally stable person might have.
“No,” I finally told them, “I have no intention of leaving anytime soon.”
This seemed to relieve them. The man said, “The reason we’ve been so concerned about you is that we believe music is more important to you than you may be aware. It represents your very essence, and by working as single-mindedly as you have to get a record deal, with the kind of music you’ve been making with your band, you’ve been cheapening and compromising your integrity. You’ve been, in a sense, unfaithful to your muse. That’s what’s causing this spiritual disconnect and, should it continue, my wife and I both feel like it will shorten your stay here.”
His wife took over: “What you need to do is uncover a deeper, more honest expression in your music, something closer to the bone. We know you love the blues and reggae. We think it’ll be helpful to start playing music you love, rather than music you think will sell.”
By this time, tears were spilling down my cheeks. “There’s this song,” I began telling them, “that I wrote for my dad over two years ago on Father’s Day, that almost no one has heard. It’s something that was written with the sole intention of connecting with him before he died. It’s on a cassette tape, just sitting there on a shelf in my closet.”
“Why not put that song out as your next single?” the man said.
I was suddenly speechless. Why had I never thought of this? It was such a simple yet profound idea. I flew back to New Jersey, determined to release not just the one song, but an entire album dedicated to my father.
The guys picked me up in the Oldsmobile at Newark Airport the next day. We were standing around the luggage carousel waiting for my bags when I told them I was going to record a solo record, a tribute to my father, whom they all loved and respected.
My bandmates understood this was something I needed to do. They also knew it wasn’t just talk. A solo album, whatever the reasons behind it, also signaled the possibility that the ethos of the band might well be coming to an end. Nevertheless, they played their hearts out on the record and, by doing so, tacitly gave me their blessings and their assurances that whatever happened with it would be for the best.
The recording featured the song I’d written for my dad, and it eventually became my debut album, This Father’s Day, for Island Records.
Its release also became a powerful catalyst for me personally. It took me from where I had been, locked up in pain and confusion, to some other, more hopeful place. Even before my meeting with the psychics, I thought I’d gotten beyond most of the hurt, that it was simply time to grit my teeth and persevere. It had been two years, after all. But I was mistaken. The process of mending broken hearts is never as pat as that. As much as I needed to forget, to emerge clear-eyed from the jumble and rawness of my father’s death, I knew I’d have to face my worst fears again and again. But I felt ready. I also knew, in a way I hadn’t before, that I really didn’t want to die.
While my father was suffering in the last five years of his life, I found myself in a different state of mind from that of my friends and bandmates, who were, for the most part, blithely moving through their young lives. I’m not saying pain made me wise; it’s just that it can, for those willing to accept its hard lessons, provide a bit of perspective, shine some light on what’s sacred and what’s less so.
During those years I was working very hard to become famous, whatever that might have meant. I felt that I needed to reach some level of achievement before my dad died. I suppose I was conducting a search for miracles. It’s no wonder. For my family and for me at least, miracles seemed to have been in very short supply back then.
It’s miracles, after all, that compel us forward, that encourage us to move with some degree of willingness into the next day. But, despite what we might believe, it’s hardly ever the big ones that truly move us. The sea can split, we can win the lottery, we can even become rock stars, and still, those phenomenal circumstances are never what matter most. In the end, the only miracle worth wishing for is the ability to be made aware of the smallest splendors, the most inconsequential truths, and the overlooked rhythms that connect us to the people and things we love.
I felt a kind of heat rising up around me in those days, a sense that what had long been static was now stuttering back into motion. There was a pleasant strangeness to the feeling, but like many things that at first strike us as unusual, it wasn’t wholly unfamiliar, either. I’d felt that same unnamable sensation, lying awake in my bed in the dark as a young child, focusing on individual moonlit snowflakes as they fell outside my window. I felt it again in Jerusalem, at nine years old, when I first touched the sunbaked stones of the Western Wall. I felt it the first time I’d snorkeled in the Red Sea and became drunk from sheer beauty. I felt it the frigid November morning we buried my father. I felt it on the evening I finally met my wife, and again, the moment when each of my children was born.
The circumstances were wildly varying, but in each instance there was a sense of being taken from one place to another, of inertia finally giving way to movement. It was as if my mundane life had cracked open and I saw, arrayed in front of me, some image of the unseen hand that forms and directs the universe.
My first experiences in Crown Heights, Brooklyn, at age 27 were catalytic. A rabbi named Simon Jacobson had posed a single question and it, too, set me into motion: “Why is walking on the surface of the Earth any less miraculous than flying above it?” he’d asked.
The idea that the world is a wondrous, mysterious place—even as we are destined to walk on the mundane surface of it, even if we cannot truly fly—is both a liberating and comforting notion. Being attuned to wonder is my preferred condition. Perhaps it’s natural for each of us. But why, then, are so many moments not imbued with this sense of the miraculous? Why is there such a divide between barely sensing and deeply feeling?
What I did know in the autumn of 1987, with a certainty I hadn’t known before—perhaps couldn’t have known—was that I needed to get married. I had awakened to the idea that there was nothing I was doing with my life, not my music, not my friendships, not my finally getting that almighty record deal, more important than finding the right woman with whom to create a family and live out my days. I also knew that to do this, I would need to create a powerful forcing frame for myself, not one that would constrict or limit me, but one that would allow me to channel my outsized ego and my creative proclivities toward more productive ends than I’d ever dreamed possible.
Eventually, I made a sort of pact with myself, a silent, personal agreement. It came down to this simple declaration: The next time I sleep with a woman, it will be with my wife. This meant that I had to extricate myself from my longtime girlfriend. Though I was, and still am, extremely fond of her, I could never envision her as a lifetime partner or the mother of my children. In addition, our arrangement was somewhat nebulous, and so this new, self-imposed structure also meant that I’d have to cut off any contact with the other women with whom I was having casual sex. I had to make a fundamental cultural and emotional shift. I would need to wean myself away from years of assumptions about the very nature of what a modern relationship meant. I would have to forge a new way of looking at women, at my role as a man, and at the world at large.
It became clear to me that the freedom I had always longed for could be obtained only through the somewhat paradoxical means of setting limits, delaying gratification, and cutting away many experiences that an all-pervasive consumerist culture had been (and continues to be) hell-bent on selling. If you’ll allow me, I’ll explain this further by way of metaphor.
Music is among the most transcendent of all art forms, both for the performer and listener. Since it has no form or substance, it can easily serve as a model for the boundlessness of spirituality. But as anyone who has mastered a musical instrument knows, musical ideas are expressed almost exclusively by means of structure and restriction, words very few of us would correlate with freedom.
At first glance, this seems like a paradox. How could something as liberating and intangible as music be based on restriction? Not only is music based on restriction, I’d go so far as to say that, aside from the existence of raw sound—elemental white noise, if you will—the only other thing that allows music to take place, the only thing that differentiates it from this pure noise, is what sounds the musician chooses to leave behind. In this sense, music comes about not by choosing notes but by the elimination of notes. Take a look at the idea in this somewhat inverse manner: Only by rejecting all other sonic choices are we left with the ones we truly desire. To make music, we don’t add, we subtract.
Something as commonplace as the key signature of a piece of music also reflects this idea. Unless you were trying to achieve a harsh atonal effect, you wouldn’t want to be playing in the key of B-flat minor while your key signature called for A major. The ensuing “music” would sound like a chaotic racket to most people. The time signatures of compositions, along with their tempos, which require that a particular note last only so long and that it be played at a particular speed, also function on this same principle—creation by negation. Ignoring the time signature, or playing at any speed without regard for the overall tempo, is another good way to produce only noise.
It is only through adherence to the limiting factors of time and tempo that music can take shape. In that same sense, if it weren’t for the constraint of playing only certain keys on a piano, and thereby negating all other choices, you would hear only noise. Anyone who has heard his or her toddler pounding away on a piano knows exactly what this sounds like.
Most, if not all, musical instruments also work on this principle of restriction. The trumpet, for example, is based upon compression and restriction. If the air a player blows into the trumpet’s mouthpiece weren’t compressed and regulated by the embouchure, the only sound you’d be able to hear would be a soft wind-like noise passing through the horn.
As I became more and more immersed in the wisdom of Jewish thought and practice, the idea of freedom-in-structure became clearer and ever more personally relevant. If it was true for music, I wondered, how much more true must it be for all of life itself? And given that human sexuality (whether or not the participants engaged in a sexual act are conscious of it) concerns the creation of life, it occurred to me that causing dissonance in that most meaningful—dare I say mystical—arena of life was something I definitely needed to avoid.
I knew I had to place a set of restrictions on myself in order to make music out of my life, as opposed to just raw sound. Although this conception of the universe felt new to me, new in the sense that it was radically different from the one I’d been acting on for so many years, it wasn’t unfamiliar. Without my knowing it, I had undergone an awakening. I became alert to a perspective I recalled vaguely, even from my earliest childhood. It was as if I could see something important forming (though what it was, was still unclear) out of a barely examined and often fleeting sliver of thought. All at once, the world around me seemed to feel very much as it did when I was a child. I could remember clearly, lying feverish in bed, waiting for sleep, with every last thing in the world unknown and unexplained.
It was frightening as an adult to feel these thoughts growing stronger and more pervasive, but it also felt safe in ways—as though there’d been a kind of revelation, one that seemed to say: “Peter, son of David, there is a purpose to everything you’ve experienced in the recent past and everything you see before you now. From this moment on, there are things you must do and ways you must act.”
The mantra to live without restrictions, which had guided me for most of my life, seemed at that point to be leading me only to chaos. I believed I could, and must, do better for myself. My most fervent wish was no longer to become a rock star; it was to create my own family, one that could become a replacement for the one I’d been missing, the one that had changed so drastically when my father died.
So, in a tour bus rolling across the American continent, I did the three most practical things I could think of: I stuck to my private pact, I dreamed, and I prayed several times a day to an unseen Deity for strength and for love.
This part of the story really begins a few months after my dad’s funeral, when I found myself in a cramped apartment in South Minneapolis auditioning some songs I’d written for a local performer named Doug Maynard. I sang him a few things and he nodded quietly. Doug wasn’t a big talker. Finally he chose one. “Man, I think I could do this justice,” he said. It was called “My First Mistake.”
You taste like pepper frosting on a granite cake.
Baby fallin’ in love with you was my first mistake…
Less than a year later, Doug was found dead in his living room, stone-drunk, having drowned in his own vomit at the age of forty. Before this happened, however, he had introduced me to his manager, who had introduced me to a New York City music lawyer, who had introduced me to a record producer named Kenny Vance.
Kenny had worked with a lot of famous people and he wasn’t particularly shy about mentioning just whom. “I used to date Diane Keaton,” he told me. “I know Woody Allen—been in a couple of his films. I was the music director for Saturday Night Live.” Then he said, “Tonight I’m gonna take you to my main connection, a religious Jew in Brooklyn.”
Before long, Kenny and I were crossing the Brooklyn Bridge. We arrived at an apartment in Crown Heights where Kenny’s friend, Simon Jacobson, greeted us. I liked Simon right off the bat. His eyes reflected some essential paradox, some awareness that being alive is both a source of great humor and great sadness. His wife, Shaindy, introduced herself with a gracious smile and placed glass bowls of almonds and chocolate-covered coffee beans on a yacht-sized table before excusing herself to tend to her young children. The thing I didn’t understand at first was how a big hirsute guy like Simon, in an oversize yarmulke, with a massive beard and in a white polyester button-up, was able to land such a good-looking wife. I soon learned that around these parts, it wasn’t the guy who could throw a football the farthest who got the girl. Simon had another thing going for him.
His job, at the time, was to memorize every word of the Lubavitcher Rebbe’s Shabbos dissertations and record them on Saturday night for publication later in the week. To understand the scope of the job, it’s necessary to know that when the Rebbe spoke, it was often for four or more hours straight, without breaks, without notes, and in a manner of cyclical and increasing complexity. To make things even more challenging, the Rebbe wasn’t freestyling. Everything he taught was derived from a compendium of source materials that ranged into the tens of thousands of books. And the talks could not be recorded as they were delivered, because it was the Sabbath and no electricity could be used.
When I once mentioned to Simon how awed I was at his ability to memorize this much information, he looked at me and said: “The memorization is the least of it. It’s the task of compiling it with the proper source notes that’s the real challenge. Every day I correspond with the Rebbe, and he writes me back with perfect editor’s notes. Once I wrote and said I didn’t understand a particular passage and couldn’t find the source for it. The Rebbe had a sharp sense of humor. He sent me back a markup with a big red circle, not just on the sentence I was having an issue with, but around the whole page, with the words, ‘What do you understand?’”
It was getting late. Kenny had left me there and driven back to the city. As Simon spoke to me, I kept looking up at the oil paintings of shtetl life and the Rebbe hanging on the walls. I was prodded more by fatigue than bravado when I finally asked, “What’s the deal with those pictures of the Rebbe? They seem sort of cultish to me.”
“I like the pictures,” he said. “To me, the Rebbe is like a very inspiring grandfather, and I get a lot out of reflecting on the things he says and the way he lives his life. There are people for whom there is no sense of self, people called Tzadikim, who have no need for personal gain. A Tzadik lives only to serve others, and they can do anything they wish.”
“Really?” I asked with just a hint of comic disdain. “Can they fly?”
“Understand, I’ve never seen anyone fly,” Simon answered. “But for a Tzadik, the act of flying is no greater miracle than the act of walking.”
This idea stunned me. Not because it was new. The things that move us most never are. They are things we already know, beliefs that are buried away inside us. Of course, when you stop and think about it, there’s absolutely no difference between the weights of the two miracles, walking and flight. It’s just that we non-Tzadikim get so tired of the one that happens all the time.
At that moment, at that table in Brooklyn, I started thinking about the little-known rhythm-and-blues singer Doug Maynard. I was remembering the sound of his voice and simultaneously considering the infinite number, the impossible number, of tiny coincidences—the tendrils, if you will, that in their unfathomable complexity, had guided me to that particular apartment on that particular night. The thought was so vivid, it was as if I could hear Doug singing again. Singing most soulfully, most truthfully about the joy, and the sweat, and the pain of this world. It wasn’t long after that I met the Lubavitcher Rebbe for the first time. He handed me a bottle of vodka and a blessing for success, and I started becoming more Jewishly observant right away: keeping Shabbos in my tiny apartment in Hell’s Kitchen, keeping kosher, and putting on tefillin. I married Maria two years later. We’ve been married for nearly 30 years.
About a year ago my cousin Jeff asked me what it had been like to meet the Rebbe. This is exactly how I answered him.
“You know when you’ve done something you think is horrible (whatever the hell it may be) and you start going down—deeper and deeper into the rabbit hole of regret? When you’re in so deep that you start to feel like the biggest loser ever born, like nothing is possible, that nothing good is ever gonna come your way, and that you can’t even face yourself in the mirror?”
“Sure,” Jeff said. “I’ve been there.”
“Well,” I said, “meeting the Rebbe was the exact opposite of what I just described.”