When two-and-a-half years ago Ronald Reagan was elected to the Presidency, almost everyone expected that there would be a marked change in the direction of American foreign policy. Nor was there much disagreement over the nature of this anticipated change. How could there have been? Of the leading political figures of the age, Ronald Reagan was perhaps the most sharply defined. He stood without ambiguity for the view that the conflict between the United States and the Soviet Union was the central issue of our time; that it could be defined as a struggle between good and evil; that in this struggle the United States had been falling behind while an expansionist Soviet Union was forging ahead; and that unless we made every effort to restore and assert our power, the future would belong to the forces of totalitarian Communism.
Obviously, by itself this view did not yield a blueprint for day-to-day action in international affairs. But just as obviously it suggested motion in a certain direction: a significant increase in defense spending so as to restore the deteriorating military balance, and a new determination to resist the expansion of Soviet imperial control and influence. No one who voted for Reagan could have had any doubt that this was what he would aim for, and it was therefore reasonable to suppose that the decisive majority by which he was elected signified the crystallization of a new consensus in American public opinion on the seriousness of the Soviet threat and the need to take action against it.
It is very important to recognize, however, that Reagan did not create this new consensus. Actually, it would be more accurate to say that it created him; or, to be still more precise, that its prior existence made his election possible. As evidence of this proposition, we can point to the dramatic rise in alarm over the Soviet threat after the invasion of Afghanistan in 1979. We can also point to the growth of support for increases in defense spending charted throughout the 1970’s by all the public-opinion polls. And we can, finally, point to a palpable intensification of nationalist sentiment in the country, beginning with the surprising outburst of patriotism that accompanied the bicentennial celebrations of 1976 and culminating in the pro-American demonstrations provoked by the humiliating seizure of the hostages in Iran three years later.
But even more striking than any of this was the radical alteration in both the tone and substance of the Carter administration’s foreign policy in its fourth and—as it would turn out—final year in office. Jimmy Carter, who throughout his campaign for the Presidency in 1976 had promised to cut defense spending by at least $5 billion and also never to lie to the American people, could now be discovered boasting that he had broken both of these promises by raising defense spending in his first three years as President. Carter, who had begun by congratulating the nation on having overcome its “inordinate fear of Communism,” and who had spoken of the obsolescence of military power as a factor in international conflict, now not only grew alarmed over the prospect of a Soviet takeover of the Persian Gulf, but enunciated a new presidential doctrine committing the United States to the use of force in order to prevent it. Carter, who had begun by stigmatizing the American effort to save South Vietnam from Communism as a symptom of “intellectual and moral poverty,” and who had cooperated in administering the coup de grâce to the Somoza government in Nicaragua, now cut off American aid to the Communist-dominated Sandinista regime which had replaced Somoza, and in addition sent money and military advisers to El Salvador to help prevent a Communist-dominated guerrilla force from taking power there.
Without denying that these highly dramatic reversals represented a conscientious effort by a sitting President of the United States to discharge his constitutional responsibilities as the guardian of the national security, I would nevertheless maintain that Carter the President would not have done such things without the permission (or even, perhaps, the urging) of Carter the politician. For the politician in Carter could see all too clearly that a shift in the climate of opinion was robbing his policies as President of the popularity they had briefly seemed to enjoy and thereby jeopardizing his chances for reelection to a second term.
In the end, Carter lost to a much more plausible and, to all appearances, much more reliable exponent of the policies to which Carter himself had so recently become a convert. The new consensus on the Soviet threat which had already waxed strong enough to force a change of direction on Jimmy Carter was by now too strong to settle for him when in Ronald Reagan it could get the real thing.
Yet no sooner had it swept Reagan into office than questions began to be raised about the precise meaning and limits of the new consensus. Those who opposed Reagan, and some who supported him, were quick to deny that his election had provided him with a clear “mandate” in foreign policy. No one went so far as to deny that Carter had been badly hurt by the national humiliation over Iran and his inability to do anything about it, but many denied that the Iranian episode was much more than a freakish accident. Reagan had won, they said, mainly because of economic factors, or because so many different groups had come to dislike Carter for a great variety of reasons forming no coherent political pattern. In any case, if Reagan should make a serious attempt to put his “simplistic” view of the world into practice, he would soon find himself frustrated by “reality” (by which his opponents meant their own view of the world).
At first not much consolation could be derived by Reagan’s opponents from this prediction. In his early months in office, he seemed bent on doing almost exactly what the critics were so sure he would be unable to do. Within days of moving into the White House, he spoke of Communism as a bizarre historical phenomenon destined to disappear in the foreseeable future, and he gave every indication of wishing to hurry the process along. In his economic program, the only area of government spending to be increased rather than cut was defense. The emphasis on arms control, which had been so marked in the past three administrations, was to be muted in favor of an arms build-up aimed at restoring the strategic balance between the United States and the Soviet Union. Through his Secretary of State, Alexander M. Haig, Jr., he served unambiguous notice that he regarded the guerrilla movement in El Salvador as an effort by the Soviet Union, acting through Cuba and Nicaragua, to extend its imperial reach in Central America, and he expressed his determination to prevent this.
Not surprisingly, there were cries of alarm, especially over the language in which these intentions were described. But anyone looking more closely than it suited the critics to do could see that they had less to worry about than they imagined. First of all, the Reagan administration implicitly agreed with its opponents in interpreting the election not so much as a mandate for changing the foreign policy of the nation as for reforming the economy. Most of the energy during Reagan’s early months in office went into his economic program, while foreign policy was treated almost as a distraction. Thus the heavy emphasis placed on holding the line in Central America, and especially for the moment in El Salvador, was soon softened, evidently because the White House feared that the controversy Haig had provoked both in the Congress and in the media would undercut support for the President’s economic program.
Nor was this the only sign that wherever the interests of Reagan’s economic program conflicted with the interests of his foreign policy, the former would be favored over the latter. For example, he revoked the grain embargo instituted by Carter against the Soviets in response to the invasion of Afghanistan, even though he was supposedly against doing anything to help or strengthen them. No matter that this move was rationalized with the argument that embargoes were ineffective and that we were doing more harm to ourselves than to the Soviets. The fact was that on this issue, Reagan showed that his was an administration which—in George Will’s devastating characterization—“loved commerce more than it loathed Communism,” a characterization that would later be richly confirmed by the decision to go on subsidizing the Polish economy even after the Soviet-ordered institution of martial law by the Quisling Jaruzelski regime.
To be sure, there were two major exceptions to this subordination of foreign policy to economic considerations. One was the President’s brave and stubborn determination to hold out for a significant increase in defense spending even when the pressure to back down became nearly intolerable. Yet the very erosion of the support Reagan had originally enjoyed on this issue could be blamed in some measure on his own decision to give priority to the economy over foreign policy. For if reducing government spending was our most important order of business, there was no way that the defense budget could be spared from the ax, and there would always be enough evidence of “waste” in the Pentagon to reassure deficit-minded Republicans that in opposing the President on this issue they were not endangering the national security. (Most Democrats were already reassured.) Here indeed was a “reality” to frustrate Reagan’s ideas, but in this case it was a Coleridgean reality that he himself was at least half helping to create.
The other great exception to the favoring of commerce over anti-Communism was Reagan’s staunch opposition to the construction of a pipeline that would carry natural gas from the Soviet Union to Western Europe. Here too, however, his ideas were undermined by a reality he himself had helped to create. As apologists for the pipeline kept saying, by selling American grain to the Soviets, Reagan had not exactly put himself in the best position to demand that the Europeans refuse to sell equipment for building the pipeline. American opponents of the pipeline countered by arguing that there were great differences between the two deals. Grain sales, they said, were a straight form of trade, whereas the subsidized pipeline deal was a form of aid; grain sales cost the Soviets hard currency, whereas the Soviets would ultimately earn hard currency through the pipeline; grain sales gave the Soviets no political leverage over the United States, whereas becoming the supplier of energy to Western Europe would enable the Soviets to threaten a cutoff as a way of exerting pressure in some future crisis.
Yet whatever the merits of these arguments, in the eyes of Reagan’s European critics and their American allies, the United States was at the very best being inconsistent and at the worst hypocritical. Either Reagan wanted to declare “economic warfare” on the Soviet Union or he did not. But if he did, he could not ask the Europeans to shoulder the burden while decreeing a special exemption for the American farmer.
The net result of this assignment to economic policy of a higher priority than foreign policy was the creation of a vacuum into which the opposition to the 1980 consensus on the Soviet threat was able to move. Discredited by Iran and Afghanistan and demoralized by Reagan’s landslide victory, the opposition (which included Republicans as well as Democrats) was now handed a chance to regroup much sooner than it had expected. Even its severest critics would have to grant that it went on to make the most of this happy windfall.
The opposition to the 1980 consensus on the Soviet threat was, and is, heavily influenced by two closely related though distinguishable elements: pacifism and isolationism. I am well aware that most members of the opposition would indignantly deny that their ideas can be identified either with pacifism or with isolationism. But as I hope to show, my use of these terms is fully warranted by the historical pedigree of what the opposition says and the logical consequences of the policies it advocates.
Of the two major elements whose influence has shaped the opposition to the 1980 consensus, pacifism is at once the more elusive and the more pervasive. It is elusive because there are in the United States only a minuscule number of people frankly and openly committed to pacifism in the strict sense of the term: the belief that war is the greatest of all evils and that nothing, literally nothing, is worth defending by force of arms or can justify resorting to war. Indeed, even among the few self-declared pacifists in America, there are many who make an exception for “wars of national liberation” and are even willing to defend terrorism.
But if pacifism in the strict sense can scarcely be said to exist in America, pacifism in a looser form has become more influential than it has ever been before except perhaps in England in the period between the two world wars. What made pacifism so fashionable then was the carnage of World War I—a war that no one seemed able to explain or justify and yet that had decimated an entire generation of young men who went blindly to the slaughter mouthing “mindless” and “meaningless” patriotic slogans.
This pacifist tide was fed, however, not only by memories of World War I but also by apocalyptic visions of what a second world war would be like. It was widely believed in the 30’s that there was no defense against aerial bombardment, and that the next war would therefore spell the end of the world, or at least of “civilization as we know it.” Along with being evil, then, war had become senseless and could no longer be seen in Clausewitzian terms as a continuation of policy by other means.
The same combination of disillusioned memory and apocalyptic anticipation is at work in the spread of pacifism in America today. The memory in our case is of course the memory of Vietnam whose effect on American attitudes toward war in general has been strikingly similar to the effect of World War I on the British in the 20’s and 30’s. In 1933, the notorious resolution “that this house will in no circumstances fight for its king and country” carried the day in a debate at Oxford; fifty years later, in 1983 (just after, ironically, the same resolution had been debated again at Oxford and this time defeated), a joint session of the U.S. Congress cheered when Ronald Reagan interrupted his appeal for increased aid to El Salvador with the words: “Now, before I go any further, let me say to those who invoke the memory of Vietnam: there is no thought of sending American combat troops to Central America.” The Senators and Congressmen cheered not merely because they approved of Reagan’s declaration in the case of Central America but because the “thought of sending American combat troops” anywhere, or for any purpose, has been rendered almost taboo by “the memory of Vietnam.” Even those, both in and out of Congress, who insist that there are “places where they would favor American action,” as Meg Greenfield of the Washington Post puts it, “never can seem to think of one this side of San Diego.” In America today, slogans like “No More Vietnams,” “Hell No, We Won’t Go,” and “Nothing is ever settled by force” have become the functional equivalent of the resolution never to fight for king and country.
As in the 30’s, moreover, the pacifist attitudes growing out of the country’s most recent experience of war have been reinforced by apocalyptic visions of the future. What the idea of aerial bombardment did for British pacifism in the 30’s, the idea of nuclear missiles has done for American (and of course West European) pacifism in the 80’s. But nuclear weapons have been a much greater blessing to pacifism than aerial bombardment ever was. Whereas not everyone in the 30’s agreed that there was no defense against aerial bombardment, virtually everyone in the United States today believes that there is no defense against nuclear missiles. More and more Americans have come to doubt, furthermore, that a limited nuclear war is possible. It is now almost universally assumed that any use of nuclear weapons anywhere would inevitably escalate into all-out nuclear war and the certain destruction of the entire world. This means that for the first time ever, the basic pacifist premise—that war is a greater evil than any objective for which it might be fought—has acquired plausibility in the eyes of many people otherwise not inclined to pacifism either by temperament or by philosophy. Hence the emergence of what has been called nuclear pacifism.
Nuclear pacifism expresses itself in a variety of positions. At its most forthright and logical, it calls for unilateral disarmament on the plainly sensible ground that if nuclear weapons can never and must never be used, there is no point in possessing them at all. Some unilateralists think that if the West gave up its nuclear arsenal, the Soviets would follow suit. Others admit that the Soviet Union might take advantage of such a move to compel a Western surrender. But whether they are optimistic or pessimistic about the Soviet response, all unilateralists by definition agree that the West should immediately begin getting rid of its own nuclear weapons without waiting for the Soviets to respond in kind.
Although unilateralism has become a serious political force in Western Europe, it has thus far made very little headway in the United States. Perhaps the closest any reputable American group has come to endorsing unilateralism is the pastoral letter of the Roman Catholic bishops (“The Challenge of Peace: God’s Promise and Our Response”). To be sure, the bishops explicitly say that they “do not advocate a policy of unilateral disarmament.” They are willing to accord “a strictly conditioned moral acceptance” to the temporary or interim possession of nuclear weapons by the West as a deterrent, provided that deterrence is “used as a step on the way toward progressive disarmament.” Yet they declare their “profound skepticism about the moral acceptability of any use of nuclear weapons.” But if it is immoral to use nuclear weapons under any circumstances (even in retaliation for a nuclear attack), they might just as well be renounced unilaterally for all the good they do even as a deterrent or a bargaining chip.
The West German bishops, whose minds have been concentrated wonderfully by the overwhelming superiority of the Soviet conventional forces poised against their country and deterred only by the NATO promise that an invasion would if necessary be met with a nuclear response, have been sensitive to the unilateralist implications contained in the casuistical formulations of their American brethren. For their part, the German bishops have come out in support of NATO’s policy on this point. So too have the French bishops.
Here, then, we have the representatives of a constituency which not long ago was among the most hawkish in America, and perhaps the most resolutely anti-Communist, throwing their moral and political weight on the side of a position verging on unilateral disarmament, and doing so in the full awareness (as the pastoral letter suggests) that this might well result in a Soviet-dominated world. What more vivid measure could there be of the great boost that nuclear weapons have given to pacifism in America?
Historically, pacifist thought, while in itself always enjoying only a limited appeal and often operating on the margins of political debate, has nevertheless exerted a great influence on the mainstream—not under its own doctrinal flag but in the bowdlerized form of illusions about and pressures for disarmament. In the period between the two world wars, these pacifist-inspired illusions and pressures gave us the Kellogg-Briand Pact of 1928 renouncing war. They also found more concrete expression in the Washington naval armaments treaty of 1922 (limiting the number of American, British, Japanese, French, and Italian warships) and the London naval agreement of 1930 (which set limits on the size of submarines and other warships).
The best that can be said for these efforts is that if their purpose was to prevent or lessen the risk of war by reducing armaments, they obviously failed. But the worst that can be said for them—that if they had any effect at all, it was to increase rather than decrease the chances of war—is closer to the truth. Thus the naval agreement of 1922, recently cited by the historian Gaddis Smith as a successful example of a “freeze,” is seen by Barbara Tuchman (who is at least as dovish as Smith in her attitude toward nuclear weapons) to have “fueled the rising Japanese militarism that led eventually to Pearl Harbor.”
Mrs. Tuchman’s judgment is shared by most historians, as is the view taken by Eugene V. Rostow, the former director of the Arms Control and Disarmament Agency, both of the Washington “freeze” of 1922 and of the limitations later negotiated in London in 1930. “The post-World War I arms-limitation agreements . . . helped to bring on World War II, by reinforcing the blind and willful optimism of the West, thus inhibiting the possibility of military preparedness and diplomatic actions through which Britain and France could easily have deterred the war.”
Some who accept this assessment (including Mrs. Tuchman) think that the invention of nuclear weapons has changed everything by making the prevention of war a more overriding imperative than it was in the pre-nuclear age. But as Mrs. Tuchman herself recognizes, the record thus far simply does not bear out this idea. On the contrary, negotiations between the United States and the Soviet Union over nuclear weapons show almost exactly the same characteristics as the arms-control agreements of the pre-nuclear past.
First of all, negotiations over nuclear weapons have not led to real reductions in the quantity or quality of those weapons, and such limitations as they have succeeded in establishing have not notably lessened the risk of war. Take as an example even the proudest single achievement of arms control in the nuclear age—the Test Ban Treaty of 1963. Far from eliminating or even cutting down on the testing of nuclear weapons (and therefore their further development), the treaty has been followed by an increase in the number of such tests. The only effect the treaty has had on testing has been to drive it underground. That may, as Mrs. Tuchman drily notes, be a gain for the environment, but it is not a gain for disarmament.
Secondly, arms-control agreements in the nuclear age, like the disarmament agreements of the 20’s and 30’s, have resulted in cutbacks by the democratic side and increases by the totalitarian side. Under SALT I, the Soviet Union took full advantage of what was legally permitted and forged ahead to increase the quantity of its nuclear weapons while also improving their quality. This is exactly how the Japanese and later the Germans acted in the 1930’s. The United States (following the precedent set by itself and the other Western democracies in the 1930’s after the naval agreements) either stood still or cut back in the years after SALT I was ratified. The one significant advance we did make, the placing of more than one warhead on a single missile (MIRV), is now regarded by almost all arms-control enthusiasts as “destabilizing.” But in view of the fact that this innovation was developed in order to conform to the provisions of SALT I (which limited the number of missiles rather than the number of warheads), it demonstrates that the process of arms control has not even been capable of achieving one of its minimal objectives, which is (in the words of the Scowcroft Report) to “help channel modernization into stabilizing rather than destabilizing paths.”1
There is nothing arbitrary or accidental about this record of failure. It stems directly from the pacifist illusion that wars are caused by arms and can therefore be prevented by reducing or eliminating arms. But wars are not caused by arms. Salvador de Madariaga, who chaired the League of Nations Disarmament Commission, came to believe that disarmament was a “mirage” because it tackled the problem of war “upside down and at the wrong end. . . . Nations don’t distrust each other because they are armed; they are armed because they distrust each other. And therefore to want disarmament before a minimum of common agreement on fundamentals is as absurd as to want people to go undressed in winter.”
This simple and unanswerable observation explains why on the one hand there are no arms on the border between the United States and Canada and why on the other hand it is indeed “absurd” to expect that anything much can be done about the arms on the border between East and West Europe.
But there is a further point to be made. The “common agreement on fundamentals” that de Madariaga rightly sees as a necessary precondition of disarmament can never be reached with a nation whose ambitions are to overturn the existing international system and to replace it with a new system in which it will enjoy hegemony. Because Nazi Germany was such a nation, it was foolish of Chamberlain and other Western leaders to imagine that war with Hitler could be avoided by negotiated concessions (or “appeasement,” to use the then respectable term). And because the Soviet Union is also such a nation, it is equally foolish to imagine that a “common agreement on fundamentals” can be arrived at between Moscow and the West.
To say that the Soviet Union’s aim is to create a new international system in which it would enjoy hegemony is not to suggest (as the vulgar caricature has it) that there is a “timetable” or a “blueprint” for world conquest guiding every action the Kremlin takes. It is, however, to recognize that the strategy of the Soviet Union is to move toward a greater and greater expansion of its power and influence, at a pace and by tactical means that combine maximum prudence with maximum opportunism.
In other words, wherever a chance presents itself and the risks are not too great, the Soviets will take advantage of it. If force must be used, as in Afghanistan, it will be used, but the clear preference of the Soviet leadership is either to employ surrogates to do the fighting, or better still, to win through intimidation rather than through war.
Since their military arsenal is designed to serve this expansionist strategy, the Soviets will never voluntarily surrender an advantage in the balance of military power. Nor will they ever enter into (or honor) any agreement that prevents them from achieving military superiority. To accept anything less than superiority—even equality or parity—would be tantamount to accepting the present international system. Indeed, because their ideological or political attractiveness has diminished in recent years, and because they suffer from a great disadvantage to the West in the economic area as well, their reliance on military power has increased and will ineluctably grow in the future. For how else can they compensate for their other weaknesses in the overall “correlation of forces”?
The United States, by contrast, leads an alliance whose strategic objective is to maintain “stability,” and our military arsenal is designed to serve this defensive purpose. Far from pursuing superiority, the United States voluntarily gave it up, allowing the Soviets to achieve parity on the theory that they had surrendered their originally revolutionary aims and had now become a “status-quo power,” content with the present international arrangements. The Soviets themselves made nonsense of this theory by continuing their military buildup even after they had caught up and reached parity. In addition they sent Cuban surrogates to Africa, and then their own troops into Afghanistan (while also providing military support to Communist guerrillas in Central America). So much, then, for the idea that they had become a “status-quo power,” and so much therefore for the chances of arriving at de Madariaga’s “common agreement on fundamentals.”
The brutal truth is that with one side eager (for economic or other reasons) to cut back on its armaments and the other side eager to consolidate and enhance its advantages, disarmament negotiations offer only a fraudulent hope. In the 30’s, the Germans and the Japanese built up their armed forces (with or without cheating) because they wanted to do so, while the democracies—pushed by internal political and economic pressures to disarm—did not even fulfill their legal quotas under the various disarmament agreements. A similar pattern has developed in our own day under cover of the SALT process, during which we have either stood still or moved back while the Soviets have built and built and built, not only expanding and refining their nuclear arsenal but enlarging and improving every category of their conventional force as well. Yet so pervasive has the influence of the pacifist illusion become in the West that, even in the face of all this, hope continues to be invested in disarmament and the opposition to the 1980 consensus on the Soviet threat clamors for new and ever more radical measures.
It is in the guise of these new measures that the second major influence behind the opposition to the 1980 consensus—isolationism—has been able to stage a sensational comeback in American political culture. Like pacifism (to which it has no necessary logical connection but with which it can comfortably ally itself, and always has), isolationism claims very few open adherents in the United States. For like pacifism too, isolationism was so discredited by World War II that those who have continued to believe in it, or those who have rediscovered it in recent years, rarely invoke its name in talking about their position. As if this did not cause enough trouble for frank and honest discussion, the name of isolationism (or sometimes neo-isolationism) has occasionally been claimed by writers like Robert W. Tucker and William Pfaff who for better or worse are not truly entitled to it. (Almost the only political commentator with any visibility today who both claims the title and is truly entitled to it is Earl Ravenal.)
However isolationism may be defined in the abstract, historically it has mainly meant a policy of American disengagement from the affairs, and especially the wars, of Europe. This is why the two latest manifestations of the anti-nuclear movement in the United States—the proposal that we commit ourselves to “no first use” of nuclear weapons, and the proposal that we commit ourselves to a “freeze” on the building, testing, and deployment of all such weapons—can legitimately be described as forms of isolationism.
It is true that most proponents of these measures deny that their intention is to disengage the United States from Europe, or that this would be the effect. Thus McGeorge Bundy, George F. Kennan, Robert S. McNamara, and Gerard Smith (who have now become collectively known in certain European circles as the “American gang of four”) take great pains in their famous Foreign Affairs article endorsing no-first-use to insist that they come not to destroy but to strengthen the American commitment to the defense of Western Europe. It is, they write, the “disarray that currently besets the nuclear policy and practices of the Alliance,” and specifically the divisive debate over the proposed deployment of the new intermediate-range missiles in Western Europe, that led them to back the idea of no-first-use. A no-first-use policy would, they believe, restore credibility to NATO’s deterrent and is therefore a good idea on military grounds. But in their view, “The political coherence of the Alliance, especially in times of stress, is at least as important as the military strength required to maintain credible deterrence. Indeed the political requirement has, if anything, an even higher priority.”
How does no-first-use measure up to that overriding political requirement? Perfectly, the American gang of four tells us: “. . . the value of a no-first-use policy . . . is first of all for the internal health of the Western Alliance itself.” And so far as West Germany in particular is concerned, “A policy of no-first-use would not and should not imply an abandonment” of the American guarantee but “only its redefinition.”
This complacent judgment by the American gang of four is not shared by an equally distinguished West German gang of four whose minds have been as wonderfully concentrated by the prospect of Soviet domination as have the minds of their clerical compatriots. According to the Germans, not only would renunciation of first use fail to contribute to the “internal health of the Western Alliance itself,” but it would have the opposite effect of increasing insecurity and fear. Nor do the Germans agree that no-first-use would mean nothing more than a “redefinition” of “the American protective guarantee” to Western Europe. As they see it, this “redefinition” would define the “present commitments of the United States” right out of existence.
In short, far from being the best means “for keeping the Alliance united and effective,” the Germans assert that “the proposed no-first-use policy would destroy the confidence of Europeans and especially of Germans in the European-American Alliance as a community of risk, and would endanger the strategic unity of the Alliance and the security of Western Europe.”
The Germans are right. NATO has relied on the threat of a nuclear response to deter not a nuclear attack on Western Europe but an invasion by conventional forces. The reason for this reliance on a nuclear response is that the conventional forces of NATO have never been large enough to repel a conventional Soviet invasion. Not being adequate to repel in actual combat, they are also inadequate to deter such an invasion. Therefore to renounce first use means renouncing deterrence of a conventional war; it is also to counsel surrender in the face of an inevitable defeat by decisively superior forces. (On this point the Germans do not diplomatically mince words: “The advice of the authors to renounce the use of nuclear weapons even in the face of pending conventional defeat of Western Europe is tantamount to suggesting that ‘rather Red than dead’ would be the only remaining option for those Europeans then still alive.”)
The only way around this trap is to create a Western conventional capability that would be a match for the conventional Soviet forces arrayed against Western Europe. With the rise of anti-nuclear sentiment in the last year or two, this solution has become more and more popular. Except for the American bishops, everyone, it seems, is now in favor of a conventional military build-up. Even critics of the “military-industrial complex” who have complained without let-up about the “bloated” military budget can nowadays be found paying their rhetorical respects to the need for larger and better conventional forces.
But the fact is that nuclear weapons are much cheaper than conventional forces; they give “more bang for a buck,” in the phrase used during the Eisenhower years to justify a greater reliance on them in our overall military posture. How many of those both in the United States and Europe agitating against nuclear and for conventional weapons would be willing to spend the enormous sums that would be needed to build the requisite tanks, artillery, and munitions? And what about the manpower? What about the draft that would have to be instituted in the United States and extended in Western Europe, at enormous financial cost (not to mention political unrest)?
There is reason, then, to doubt the sincerity of many of the pious genuflections before the newly fashionable idol of a conventional balance of power. But it is not the sincerity of the American gang of four that comes into question when they too pay their obeisances to the conventional defense of Europe; it is, rather, their intellectual and political seriousness. “It seems clear,” they write, “that the nations of the Alliance together can provide whatever forces are needed, and within realistic budgetary constraints.” Since no evidence is adduced to support this astonishing claim, one wonders why “it seems clear.” It is not, at any rate, clear to everyone. According to one extremely optimistic assessment—a report by the European Security Study entitled “Strengthening Conventional Deterrence in Europe”—NATO conventional forces could be adequately upgraded through new technologies over a period of ten years at a cost of only an additional 1 percent above “the present NATO commitments [of an annual real growth of 3 percent in defense spending] if such commitments are sustained and extended beyond 1986.” Yet even the optimistic authors of this report “recognize that political pressures generated by the current economic situation in the NATO countries make it difficult to achieve even the present NATO commitments.”
As for the United States in particular, former Secretary of State Alexander Haig (who was once commander-in-chief of NATO) estimates that an adequate conventional defense would mean “tripling the armed forces, and putting the economy on a wartime footing.” Possibly this estimate is too pessimistic. Even so, half of the entire military appropriation requested by the Reagan administration for this year will go to the conventional defense of Europe. Is it “clear” to anyone that more could be made available?
The only way around this trap is to envisage the additional burden being shouldered by the Europeans themselves. And that is precisely the objective of several proponents of no-first-use, like Irving Kristol and Herman Kahn, who are generally hawkish in their ideas about defense and whose espousal of no-first-use therefore comes as a surprise. But Kristol thinks that the dependence of Europe on the United States has sapped Europeans of the will to defend themselves. Therefore an American withdrawal in the form of a policy of no-first-use (coupled with the removal of American troops who would no longer be needed as a “tripwire”) might shock the Europeans into doing whatever would be necessary to insure their own defense. Kahn, who differs from Kristol on the issue of withdrawing troops, agrees with him that no-first-use would have a salutary effect on the Europeans.
Both Kristol and Kahn admit, however, that the Europeans might well be shocked by this policy not into building an adequate defensive capability of their own but rather into collapsing before the intimidating military might of the Soviet Union. Kahn’s vision of this possibility extends only to a neutralized Germany, but he thinks “we can live with that.” Kristol foresees worse: appeasement leading to the Finlandization of Western Europe as a whole. But if this were to come about, it would in Kristol’s stern opinion prove that the Europeans were “simply unworthy” of the liberties they enjoy (and, he adds, the same harsh judgment of “political decadence” would be passed by future historians on the United States as well if we in our turn were to refuse “the burden of large, expensive, conventional military establishments, so that we can meet our responsibilities without always and immediately raising the specter of nuclear disaster”).
In any case, Kristol does not doubt that an American policy of no-first-use would put Western Europe at the mercy of the Soviet Union unless it were accompanied by a massive build-up of conventional military force. (Kahn adds the requirement of a credible strategy for fighting a limited nuclear war if the Soviets should use nuclear weapons first.) But there is little likelihood that a conventional build-up will be undertaken either by the Europeans or by the United States. If there is to be a barrier to Soviet domination of the West, it will have to continue taking the form of nuclear weapons. Kristol may be right in saying that this Western dependency on nuclear weapons should never have been allowed to develop. But it is hard to imagine democratic societies placing themselves in peacetime on the kind of permanent war footing that an adequate conventional defense would have required. It is harder still to imagine a future reversal of this situation with expensive welfare states now in place everywhere in the democratic world. To remove nuclear weapons from the picture, then, is for all practical purposes to give the Soviets a decisive edge.
If a policy of no-first-use would do this in Europe, so too would a freeze (since it would prevent deployment of the intermediate-range Pershing 2’s and cruise missiles needed to balance the Soviet SS-20’s). That much is obvious. What is perhaps less obvious is that a freeze would give the Soviet Union a decisive edge not only over the Europeans but over the United States as well.
Proponents of the freeze all deny that the Soviet Union has achieved superiority over the United States in nuclear weaponry. Although most, if not indeed all, of them think that superiority is in any case a meaningless concept as applied to nuclear weapons, they still make a great and indignant point of insisting that a “rough parity” in strategic forces now exists between the two superpowers.
The Reagan administration does not agree. Its position is that the Soviets have an edge because their missiles are now sufficiently powerful and sufficiently accurate to take out our land-based ICBM’s in a first strike, thus depriving us of the means to do anything other than attack their civilian population, after which they would still have enough left over to retaliate in kind against our cities. In our aging Minutemen we have no matching capability, and until that force of land-based ICBM’s is modernized by the deployment of MX or some substitute like the smaller single-warhead “Midgetman,” the Soviets will continue to enjoy an edge. It follows that the “window of vulnerability” is still open. A freeze would prevent us from closing it and hence would lock us into a position of strategic inferiority.
Yet even if Reagan’s critics were right in claiming that the window of vulnerability is a myth and that the nuclear balance is about equal, a freeze (even a mutual and verifiable freeze) would still lock the United States into a position of strategic inferiority. The reason, simply, is that with a freeze, Soviet superiority in conventional arms would become and remain the decisive factor in the overall balance of military power.
Unlike no-first-use, which would leave Western Europe open to a Soviet invasion (though this in itself would probably suffice to bring about a gradual political capitulation without any troops and tanks actually moving across the borders), the freeze would not expose the United States to any such threat. But—again, unless there were a massive conventional build-up, which, again, is unlikely—a freeze would signify the acquiescence of the United States in a balance of military power clearly favorable to the Soviet Union. This, in turn, would necessitate a very severe contraction of American commitments around the world. For our own defense, we would rely on “minimum deterrence”—that is, a presumably (though at best only temporarily) invulnerable force of submarines armed with nuclear weapons capable of devastating the Soviet Union in retaliation for an attack on the United States itself. The rest of the world we would leave to deal as best it could with the unchecked might of the Soviet Union. Soon enough, however, alone in a sea of Finlandized and Vichyized regimes, we too would find what John F. Kennedy called the “red tide” lapping at our political shores and inexorably eroding our independence and our liberty.
The isolationism that is implicit in the freeze movement, then, goes even farther than the isolationism hiding behind no-first-use. But even the freeze does not go so far as the variety of isolationism that has surfaced in the debate over Central America. If, historically, isolationism has meant American disengagement from Europe, it has also meant the determination to keep the Americas free of foreign influence. The Monroe Doctrine, indeed, was promulgated as the corollary to an isolationist foreign policy. Yet there is now a school of thought in Congress and the media which denies that the United States has the right to fight against the spread of Soviet influence even in the Americas.
Of course, it can be argued that the Monroe Doctrine has already been abrogated by the transformation of Cuba into a Soviet satellite, and that it is a little late to invoke it now in connection with El Salvador and Nicaragua. But the radical new isolationism which has appeared among us on this issue does not rest content even with the de-facto repeal of the Monroe Doctrine. In what is certainly one of the most bizarre pieces of legislation in the history of American foreign policy, the Congress of the United States has in effect demanded that we not only forget about the Monroe Doctrine but that we observe the Brezhnev Doctrine in its place. Under the Brezhnev Doctrine, once a country has become “socialist” (i.e., Communist) it must remain “socialist”; all “socialist” revolutions are to be considered irreversible. Congress evidently agrees. By enacting the Boland Amendment, which forbids the U.S. government to assist in overthrowing the “socialist” Sandinista regime, Congress has virtually written the Brezhnev Doctrine into American law.
But we are not yet done with the incredible perversity of the new isolationists on this issue. Not satisfied with turning the United States into the virtual enforcer of the Brezhnev Doctrine where Nicaragua is concerned, they are also doing their best to help the guerrillas in El Salvador get into power, despite the fact that these guerrillas are openly connected to the Soviet Union through Nicaragua and Cuba. In Congress and in the media, the new isolationists work to obstruct the giving of aid; they devote all their energies to attacking the elected government of El Salvador for its abuses of human rights; they ridicule the administration’s judgment that these abuses are declining; and they loudly and persistently demand that the guerrillas be given a share of power.
Adding intellectual insult to political injury, they claim to be doing all this because they wish to prevent a Communist victory in El Salvador, and they wax righteous with anyone who suggests otherwise. Thus one Congressman who has participated in these various efforts has attacked UN Ambassador Jeane Kirkpatrick for observing that there are those in Congress “who would actually like to see the Marxist forces take power in El Salvador.” “It is,” declared the Congressman, “slander and McCarthyite nonsense to say that members of Congress want to see Marxism triumph, in El Salvador or anywhere else.” In a similar vein, Senator Christopher Dodd of Connecticut, replying for the Democrats to the President’s appeal for increased aid to El Salvador, began his assault on Reagan’s speech by affirming his opposition to “the establishment of Marxist states in Central America.”
These loud protestations are all very well, but if we ask what political views like those of Senator Dodd logically imply, we have to conclude that Ambassador Kirkpatrick’s charge, far from being slanderous or McCarthyite, verges on the self-evident. For what outcome other than a Marxist victory in El Salvador can be expected from a policy that restricts military aid to the government while simultaneously hampering efforts to interdict the flow of arms to the guerrillas; that puts continual pressure on the government to institute wide-ranging reforms in the midst of a guerrilla war; and that insists that the government enter into some form of coalition with the Communist-dominated guerrilla forces? It is hard to think of a better recipe for a Marxist victory in El Salvador than this combination of policies.
During the Vietnam war those who advocated accommodation with the Vietcong were able to persuade themselves that the National Liberation Front was indigenous to South Vietnam rather than an instrument of the Communist regime in the north; that although it included Communists, it was not dominated by them; and that it was fighting against the oppressions and repressions of the Diem and Thieu regimes. We now know from Hanoi itself that all these claims were false and that those in the United States who believed them were deceived. Similarly with Castro’s rebellion against the Batista regime in Cuba. Though we now know from Castro’s own mouth that he was a Communist from the beginning, when at first he claimed to be a Jeffersonian Democrat almost everyone in the United States believed him.
Things are very different today. As Ambassador Kirkpatrick points out in an article in the Washington Post, “what distinguishes the current debate about military and economic aid for Central America from similar disputes about China, Cuba, Vietnam, and Nicaragua is that we have fewer illusions and more information.” Hardly anyone claims any longer that the regime in Nicaragua is a coalition of different political groups whose objective is to create a pluralistic democracy there. The Sandinistas “are done with dissembling,” and have by their candor “denied their international supporters the comforts of ambiguity.” They openly proclaim, as one Sandinista leader puts it, that “We guide ourselves by the scientific doctrines of the Revolution, by Marxism-Leninism.” They make no effort to hide their close association with Cuba, which has sent thousands of teachers, managers, and military advisers to help them move more smoothly toward a fully totalitarian society and to enlarge and strengthen an army which is already the most powerful in the region.
Nor are the Sandinistas connected to the Soviet Union only indirectly, through Cuba. Recently, the Soviets began building a new port on the strategically important Pacific coast of Nicaragua, with the ostensible purpose of servicing Soviet fishing boats. More recently still, a member of the Nicaraguan junta said that his government would consider installing Soviet nuclear missiles in Nicaragua if requested by Moscow to do so. No wonder one French observer thinks “we are headed for a slow-motion replay of the Cuban missile crisis.”
In El Salvador, too, “the comforts of ambiguity” have largely disappeared. The elections of March 1982 in El Salvador, with their huge turnout despite threats of guerrilla reprisal, have made it hard to go on maintaining that the guerrillas enjoy great popular support at home, and the documentary evidence has made it more and more difficult to deny that they are (in Ambassador Kirkpatrick’s words) “directed from command-control centers in Nicaragua, armed with Soviet-bloc arms delivered through Cuba and Nicaragua, bent on establishing in El Salvador the kind of one-party dictatorship linked to the Soviet Union that already exists in Nicaragua.”
Beyond having every reason to know who the guerrillas are, those who advocate a “political solution” in the form of “power sharing” in El Salvador also have every reason to know what invariably happens to such arrangements. Nicaragua is only the most recent example of how a coalition in which Communists are included soon ceases to be a coalition and becomes a one-party regime.
Given all this, to say that the new isolationists would like to see a Marxist regime take over in El Salvador may be the only alternative to the truly slanderous charge that they are so stupid and so ignorant of history that they cannot understand the clear implication of what they say and do.
But why would anyone in Congress or anywhere else wish to see a Marxist regime take power in El Salvador? In the vast majority of instances, the answer obviously cannot be that they are Marxists themselves or that they are sympathetic to Communism. But nowadays it is not necessary to be either a Marxist or a Communist sympathizer in order to believe that Communism is the wave of the future, at least in the “Third World,” and that to range oneself against it is to be on “the losing side.” Thus Senator Dodd: “American dollars alone cannot buy military victory . . . in Central America. If we continue down that road, if we continue to ally ourselves with repression, we will not only deny our own most basic values, we will also find ourselves once again on the losing side.”
One would never guess from these words that 68 percent of the dollars we have sent to El Salvador have gone to economic rather than military aid; that what we have allied ourselves with in El Salvador is a democratically elected government; that it is trying with some success both to carry social reform forward and to cut down on the murders and other horrors that always and everywhere accompany guerrilla war; that if the guerrillas came to power they would be far more repressive than the present government in El Salvador. Despite all this, Senator Dodd declares that we are standing against “the tide of history” instead of moving with it.
Outside the halls of Congress, among columnists, editorialists, and academics, the idea that a Communist victory is the inevitable wave of the future comes out even more clearly. How, asks Anthony Lewis of the New York Times, can a government as bad as the one in El Salvador “win a war, whatever aid it gets,” against guerrilla forces “powerfully motivated by a desire to change a society long marked by brutality and exploitation?”
Again, one would never guess that the government in El Salvador has demonstrated in a free election that it enjoys vast popular support and that the grievances of the people have not led them to support the alternative represented by the guerrillas. Nor, when it comes to Nicaragua, does Lewis assume the invincibility of the powerfully motivated guerrillas fighting against a brutal and oppressive regime there. But of course, being against the Sandinistas, they must be unregenerate Somocistas (even though old leaders of the fight against Somoza are prominent among them); and being anti-Communists, they cannot be regarded as the inevitable victors in a struggle against a Communist regime.
But the most honest of all the statements yet published on these issues is by Seweryn Bialer, who directs the Research Institute on International Change at Columbia University. After declaring that “it is simply unrealistic to expect that American support for the Salvadoran government can prevent the insurrectionist forces from making significant advances—and perhaps even winning the war—in the next two years,” Bialer goes on to conclude flatly that it is also “unrealistic for the United States to hope to defeat Communist—or potentially Communist—regimes in the region.” Bialer knows this “from talks with representatives of the Salvadoran guerrillas, Sandinista leaders, and Cuban officials,” who have assured him that they will win. He also knows from the same sources that the Nicaraguan guerrillas cannot be expected to “defeat the Sandinistas or prevent their evolution toward Communism.” But of course he really knows it from the assumption he makes that the Salvadoran guerrillas are (to revert to Senator Dodd’s telling image) moving with the tide of history while the Nicaraguan guerrillas are moving against it.
Besides believing that Communism is the wave of the future, the new isolationists evidently believe that Communist regimes are on the whole better for the people who live under them than the “corrupt” and “repressive” governments they replace. On this point, politicians are unable to speak with the same degree of candor as a columnist like Anthony Lewis or an organization like the American Friends Service Committee. Where El Salvador is concerned, although Lewis is under “no illusion that the guerrilla forces and their leaders are all noble democrats, believers in government under law,” he nevertheless tells us that what they are fighting against is “brutality and exploitation.” Now, only yesterday Lewis was railing against the brutality and exploitation of the Thieu regime in South Vietnam only to discover (if indeed he yet has) that it was a paradise compared with what the “powerfully motivated” Vietnamese Communists had in store for the people of South Vietnam. But this time he is sure it will be different. Though the Sandinistas “do indeed have human-rights violations on their record,” Lewis says, “what has happened in Nicaragua in the last few years is pretty tame stuff compared to what has happened—and is still happening—in El Salvador.” After all, only a hundred civilians have been killed in Nicaragua during the past few years as compared with more than 30,000 in El Salvador.
The Anthony Lewis who throws these figures around is the same Anthony Lewis who in writing first about the Christmas 1972 bombing of Hanoi and then about the Israeli invasion of Lebanon uncritically accepted false casualty statistics to discredit the United States in the former case and Israel in the latter. Here, at it once again, he fails to consider that the guerrillas must have been responsible for at least some portion of the 30,000 civilians killed in El Salvador. Nor does he notice that during the period in question the war in El Salvador was still raging while one phase of the war in Nicaragua was over and the next not yet really begun. Nor does it occur to him that the Sandinistas are now in the process of consolidating their power and extending their control with the ultimate objective of turning the country into a totalitarian society on the model of Cuba. Nor does he recognize that Castroism, like every other example of Communist rule the world has ever known, has brought nothing but political repression, economic misery, and cultural starvation. Nor does he take into account the fact that the young men of Cuba have been turned into the cannon fodder of Soviet imperialism in Africa. None of these things disturbs Lewis’s belief that Nicaragua will be different.
Indeed, he and many others already detect signs that it is. Conditions, Lewis assures us, are better there than in El Salvador, and according to the national coordinator of the Human Rights Program of the American Friends Service Committee: “In many aspects of Nicaraguan life—nutrition, education, health care, and land reform—there have been tremendous improvements.” Having sung this old familiar song whose strains have echoed in countless reports from Stalin’s Russia, Mao’s China, Ho’s North Vietnam, and Castro’s Cuba, this Quaker guardian of human rights acknowledges that there have been a few violations in Nicaragua. For these, however, he mainly blames not the government but “attempts to destabilize the government.” Needless to say, he offers no such apology for the human-rights violations in El Salvador. There, he no doubt feels, the people will go on suffering until the inevitable arrival of the same blessings that the Sandinistas are now bringing to the people of Nicaragua and which would be more abundant still if not for “attempts to destabilize the government.”
But even if a Communist victory were both inevitable and morally desirable as compared with the alternative in El Salvador, would it not still be a blow to the interests of the United States?
Not necessarily, says Seweryn Bialer. Admittedly the Sandinistas, like Castro before them and (although here Bialer is less forthright) the guerrillas in El Salvador after them, are bent on creating Communist states. Admittedly nothing the United States could have done, “neither ‘carrots’ nor ‘sticks,’ . . . could have prevented the Cuban evolution into a Communist state,” and the same is true of Nicaragua and (presumably) El Salvador. However, Bialer believes, “a less bellicose policy toward Cuba might well have prevented it from becoming a satellite of the Soviet Union.” Where Cuba is concerned, it is now too late: we have “probably missed the opportunity to separate what is authentically Cuban in the Cuban revolution from the influence of the Soviet Union in Havana.” But elsewhere in Central America it is not yet too late: we still have a chance, through “a shrewder, more deft United States policy [to] prevent El Salvador and Nicaragua from moving into the Soviet orbit.”
What we should do, according to this analysis, is coopt the Communist revolution in Central America. For “the only plausible way to prevent Soviet influence in the United States’ own backyard” is to accept and even promote the spread of Communism in the United States’ own backyard. Instead of making Central America safe for such brutal dictatorships as the one in Guatemala, which is how Bialer characterizes our present policy, we should—to put it more nakedly than Bialer himself does—be working to make the region safe for national Communism.
Never mind that there is no evidence for Bialer’s assertion that Castro once was, or that the Sandinistas or the guerrillas in El Salvador now are, “interested not in Soviet goals but rather in . . . independence, social reform, and economic development.” Never mind that Castro himself has given the lie to the idea that he was driven into the arms of the Soviet Union by a “bellicose” American policy (which in any case was not at all bellicose in the immediate aftermath of Castro’s victory and only became so as he moved through his own revolutionary ardor into the Soviet camp). Never mind the simple fact that the United States not only helped in the end to topple the Somoza regime in Nicaragua after many years of supporting it, but initially welcomed the new regime in Nicaragua, sending it more economic aid in its first eighteen months in power than it had given to Somoza in the preceding twenty years. Never mind that as in the earlier case of Castro, these friendly relations with the Sandinistas turned sour as it became clear even to a sympathetic Carter administration that they were both failing to keep their democratic promises at home and also actively working with Soviet and Cuban help to promote a “revolution without frontiers” in El Salvador and elsewhere in the region. All these inconvenient truths to the contrary notwithstanding, Bialer and others can still assure us that all the Sandinista government cares about is “its own independence, social reform, and economic development.”
Both Lewis and Bialer (among many other commentators) freely concede that the United States could prevent a Communist victory in El Salvador (and could reverse the Communist revolution in Nicaragua) if it sent its own troops in to do the job. But the not-so-hidden term in their analysis is that the United States will not and cannot intervene militarily in Central America. Bialer: “It is difficult if not impossible to imagine that Congress and the American public would agree to such a course.” Lewis: “Public feeling against any dispatch of U.S. combat forces to El Salvador is so great that it is hard to see how any President could send them.”
It is here that we arrive finally at the juncture where pacifism and isolationism—the two great shapers of the opposition to the 1980 consensus—meet and merge into a single mighty wave of appeasement.
Of course, the term appeasement itself retains its pejorative ring—so much so that in what may well be the prize polemical trick of the age, one opponent of Reagan has tried to discredit him by pointing out that Neville Chamberlain, the great apostle of appeasement, was also anti-Soviet. But appeasement by any other name smells as rank, and the stench of it now pervades the American political atmosphere. It would indeed be astonishing if this were not the case, since appeasement (as the word itself reveals) is the natural offspring of pacifism and the policy most compatible with isolationism.
Those like Bialer who call for the appeasement of Communism in Central America tell us that this is “the only way to fight Soviet influence” in the region. But the spirit of appeasement does not always disguise itself as a clever tactic for opposing Soviet expansionism. More often it appears in the shape of a rush to apologize for or explain away or even justify every aggressive move the Soviet Union makes. Thus many of the same people who think that the United States has no right or is ill-advised to intervene in Central America against the spread of Communist regimes there are quick to defend (while of course piously deploring) the Soviet intervention in Afghanistan or the suppression of Solidarity in Poland on the ground that keeping friendly regimes in countries so close to its own borders is a legitimate security interest of the Soviet Union.
Similarly, many of the same people who oppose the proposed deployment in Europe of the new intermediate-range missiles are willing, nay eager, to justify the Soviet deployment of such missiles in Europe and to translate the sophistries of Moscow’s case into terms that sound very reasonable to American ears. When, for example, Irving Kristol asked why the Soviets decided to deploy the SS-20’s in Europe and arrived at the surely correct answer that they did so for the purposes of political intimidation, he was immediately countered by Raymond L. Garthoff of the Brookings Institution who came up with “a perfectly understandable Soviet military rationale for modernization, without resort to speculation on intentions for a first strike or political pressure.”
The same impulse to deny or even cover up evidence of Soviet malevolence—and again by people, especially in the media, who leap at and magnify even the faintest indication of American wrongdoing—can be sniffed out in several other areas as well. Even before the attempted assassination of Pope John Paul II, there was a widespread refusal to credit the abundant evidence that the Soviets had been deeply involved in international terrorism. And then, even after the attempt, journalists who had never hesitated to convict the United States of outlandish charges merely on the basis of rumor, willfully blinded themselves for many months to the increasingly obvious conclusion that the Soviets were the guilty party. A comparable degree of skepticism has been manifested, and also for an extraordinarily long time, toward the evidence that the Soviets, in violation of the Biological Weapons Convention of 1972, had been using the poisonous chemicals known as “yellow rain” in Laos, Cambodia, and Afghanistan.
While many of the skeptics have finally been forced to come around on the assassination attempt and on “yellow rain,” no such readiness to give up exonerating the Soviets has yet materialized on two other issues. One is Soviet involvement in Central America, for which an impossible standard of proof is demanded (again in contrast to how the United States is treated). The other is the issue of Soviet cheating on SALT. When President Reagan said recently that “There have been increasingly serious grounds for questioning [Soviet] compliance with the arms-control agreements that have already been signed,” he was instantly denounced for what a New York Times editorial called “loose talk about Soviet cheating.” Other commentators charged Reagan with hypocrisy: since he himself opposed ratification of SALT II, by what right did he accuse the Soviets of violating it?
In any event, said Tom Wicker of the New York Times, even though Reagan had promised “to refrain from actions which undercut SALT II so long as the Soviet Union shows equal restraint,” he himself had made proposals that “numerous experts” considered violations of the treaty. To which one of these experts, a former arms-control official in the Carter administration named William E. Jackson, Jr., added that it would not be surprising if Moscow had “long since concluded that the unratified [SALT II] treaty is a dead letter.”
To sum up the Wicker-Jackson position: there is no conclusive proof that the Soviets actually violated SALT II, and even if there were, they would be justified in doing so by the way “the Reagan administration has trashed the very idea” of preserving SALT II and has “repeatedly denigrated the arms-control achievements of Presidents Nixon, Ford, and Carter.”
In the past apologies for Soviet behavior usually arose out of love or admiration or sympathy. But that is not what we are dealing with here. The new species of apologetics comes not from Communists or fellow-travelers but from people who are so driven by the fear of Soviet power and so mesmerized by pacifist illusions that they will go to any lengths to persuade themselves and others that safety can be found in negotiations with the Soviet Union.
Sometimes this pretense is maintained by dismissing or denying realities like the size and scope of the Soviet military build-up and the aggressive political strategy that has accompanied it in violation of the promise implicit in the Basic Principles of Détente of 1972; or by dismissing or denying the evidence that the Soviets have certainly violated the 1972 treaty prohibiting the use of chemical weapons, and have almost certainly cheated on SALT I. Yet even when denial is made impossible by an avalanche of incontrovertible evidence, the very acknowledgment of these previously suppressed realities is usually accompanied by intensified affirmations of the need to pursue and reach agreements.
The best recent illustration of how fear begets pacifist illusions which then beget appeasement is a column entitled “Sarajevo and St. Peter’s” by Flora Lewis of the New York Times. Miss Lewis here begins by quoting a British historian who had warned against pursuing the facts of the assassination attempt on the Pope because “the echo of a bullet at Sarajevo set off World War I.” Miss Lewis disagrees. The facts, she says, “should not, and probably cannot, be stifled. History and Western dignity demand the truth.” What then is the “warning” sounded by the horrible realization that “the line of responsibility leads directly to Moscow’s KGB and to the man who was then its chief and is now the Soviet leader, Yuri Andropov”? Does this mean that agreements with such a man and such a nation are worthless? Not in the least, Miss Lewis tells us: “It means getting on with arms negotiations, engaging determinedly in a search for peace with an adversary too dangerous to defy or discount. The issue isn’t mutual trust, it is everybody’s survival in a world where dirty tricks are all too possible, and so is total disaster. The appropriate lesson of Sarajevo now is to face facts, and therefore plan for peace.”
One can scarcely imagine a more vivid expression of the spirit of appeasement which has been bred by the resurgence of pacifism and isolationism in the past two years.
If the opposition to the 1980 consensus on the Soviet threat and the need to take action against it is shaped by these elements (traveling, to repeat, under different names), what are its prospects for the future?
In trying to answer this question, the beginning of wisdom is to recognize that despite appearances to the contrary, we are not dealing here with a struggle that divides neatly along party lines. On the two main issues we have been examining—defense and Central America—the Democratic Jimmy Carter and the Republican Ronald Reagan have been surprisingly close. As I have already pointed out, it was Carter who cut off American aid to Nicaragua and sent money and military advisers to El Salvador; and it was also Carter who endorsed the MX, agreed to deploy the Pershing 2’s and cruise missiles in Europe, and who withdrew SALT II because the votes for ratification could not be mustered in the (Democratically-controlled) Senate. Conversely, there are many Republicans in the House and Senate who are against or are lukewarm toward these same policies even when espoused by a President of their own party, and it is an open secret that many Democrats disagreed with Senator Dodd’s attack on Reagan’s speech about Central America. There are, then, Republicans in the opposition to the 1980 consensus and there are Democrats who remain part of that consensus.
Nor does the debate divide neatly along a liberal-conservative or Left-Right axis. The liberal New York Times opposes the freeze, while a conservative like former Secretary of the Treasury William E. Simon calls for cuts in the defense budget. A conservative like Irving Kristol supports withdrawal of American troops from Europe, while liberals like Morton Kondracke of the New Republic and Richard Holbrooke (formerly of the Carter State Department and Foreign Policy magazine) oppose withdrawal of American support from El Salvador.
There is hope in these crisscrossings and incoherent combinations. For if it is true that the opposition to the 1980 consensus on the Soviet threat was given an ideal chance to regroup and mobilize by Reagan’s decision to pay more attention to the economy than to foreign policy, then the recent change in the balance of presidential attention might serve to restore the consensus to some approximation of its former bipartisan strength and confidence. The series of speeches Reagan has made in the past few months defending his policies on defense and Central America has already had an effect. The MX has survived a major congressional challenge, and Congress has also accepted a larger increase in defense spending than the opposition had not so long ago bargained for. On Central America, too, the opposition in Congress has been forced to back down. It has not (yet) succeeded in cutting off all aid to the Nicaraguan guerrillas or in forcing the government of El Salvador to submit to the demands of the guerrillas there.
On the other hand, Reagan has been forced to back down as well. To get the MX and other elements of his rearmament program, he has had to enter into an arms-control process which he once gave every indication of understanding to be a fraud and a trap; and he has also had to move more slowly and cautiously in Central America than he presumably would have wished.
Reagan has been forced to act in these ways largely because the consensus that elected him has been frightened by the relentless pounding and the demagogic appeals of the opposition. Even so, the American people have not changed their minds about the seriousness of the Soviet threat. We know this from the fact that in all the polls large majorities say that they are very worried about it. But the influence of the opposition shows in the equally large majorities who place their hopes in arms-control negotiations and who are especially reluctant to send American troops to Central America. As a politician, Ronald Reagan, confronted by this twin reluctance, has been compelled to bend.
But those who still hold with the 1980 consensus on the Soviet threat, and who are not politicians, have no compelling need to bend. They are free to speak plainly, and they have a great responsibility to do so. They have a great responsibility to go on saying that the Soviet threat can only be successfully met by a policy of strength and resolve which will inevitably entail larger defense budgets and a continued reliance on nuclear weapons; that the hopes vested in arms control are delusory and dangerous, and serve mainly as a respectable cover for isolationism and appeasement; that we can deter a war with the Soviet Union only if we are prepared and willing if necessary to fight; that if the United States cannot prevent a Communist victory in El Salvador, it will stand revealed as a spent and impotent force; and that the United States must therefore do whatever may be required, up to and including the dispatch of American troops, to stop and then to reverse the totalitarian drift in Central America.
In short, they have a great responsibility to go on demonstrating that pacifism and isolationism in any guise and under any name can only give us a world fashioned in the image of the Soviet Union. I for one do not believe that the American people will cooperate knowingly in the emergence of such a world. And that is why I think the spirit of appeasement now hovering so heavily over the land can still be blown away by a renewed, persistent, and unembarrassed appeal to the realism, the sense of honor, and the patriotism that erupted after Iran and Afghanistan and then swept Ronald Reagan into office only two-and-a-half years ago.
1 This statement occurs in a brief summary of the non-pacifist case, such as it is, for arms-control negotiations with the Soviet Union. What little there is to be said in favor of arms control from a non-pacifist perspective is also well put at greater length in “The Realities of Arms Control” by the Harvard Nuclear Study Group (Atlantic, June 1983).
2 The four are Karl Kaiser, who directs the leading German research institute on foreign affairs; Georg Leber, a labor leader and a former Social Democratic Defense Minister; Alois Mertes, the parliamentary foreign-policy spokesman of the Christian Democrats; and Franz-Josef Schulze, a retired general who has served in various high positions in NATO. Like their American opponents in this debate, then, the Germans are a bipartisan group with much professional experience in foreign and defense policy.
3 It is widely asserted that the Scowcroft Commission appointed by Reagan to advise on the deployment and basing of the MX has exposed the “window of vulnerability” as a myth. But the Scowcroft Report does no such thing. It clearly acknowledges that our land-based ICBM's need to be modernized. It also acknowledges that they are vulnerable. Recognizing, however, that the only way to solve the problem of vulnerability in the short term—namely, the Carter scheme of multiple protective shelters (MPS)—had to be rejected because of “local political opposition,” the Scowcroft Commission takes such comfort as it can find in the idea that the other legs of the strategic triad will temporarily compensate for this vulnerability until it is eventually cured by the substitution of smaller single-warhead missiles for the MX.
Appeasement By Any Other Name
Of all the surprises of the Trump era, none is more notable than the pronounced shift toward Israel. Such a shift was not predictable from Donald Trump’s conduct on the campaign trail; as he sought the Republican nomination, Trump distinguished himself by his refusal to express unqualified support for Israel and his airy conviction that his business experience gave him unique insight into how to strike “a real-estate deal” to resolve the Israeli–Palestinian conflict. In addition, his isolationist talk alarmed Israel’s friends in the United States and elsewhere if for no other reason than that isolationism, anti-Zionism, and anti-Semitism often go hand in hand in hand.
But shift he did. In the 14 months since his inauguration, the new president has announced that the United States accepts Jerusalem as Israel’s capital and has declared his intention to build a new U.S. Embassy in Jerusalem, first mandated by U.S. law in 1995. He has installed one of his Orthodox Jewish lawyers as the U.S. ambassador and another as his key envoy on Israeli–Palestinian issues. America’s ambassador to the United Nations has not only spoken out on Israel’s behalf forcefully and repeatedly; Nikki Haley has also led the way in cutting the U.S. stipend to the refugee relief agency that is an effective front for the Palestinian terror state in Gaza. And, as Meir Y. Soloveichik and Michael Medved both detail elsewhere in this issue, his vice president traveled to Israel in January and delivered the most pro-Zionist speech any major American politician has ever given.
Part of this shift can also be seen in what Trump has not done. He has not signaled, in interviews or in policy formulations, that the United States views Israeli actions in and around Gaza and the West Bank as injurious to a future peace. And his administration has not complained about Israeli actions taken in self-defense in Lebanon and Syria but has, instead, supported Israel’s right to defend itself.
This marks a breathtaking contrast with the tone and spirit of the relationship between the two countries during the previous administration. The eight Obama years were characterized by what can only be called a gut hostility rooted in the president’s own ideological distaste for the Jewish state.
The intensity of that hostility ebbed and flowed depending on circumstances, but from early 2009, it kept the relationship between the United States and Israel in a condition of low-grade fever throughout Barack Obama’s tenure—never comfortable, never easy, always a bit off-kilter, always with a bit of a headache that never went away, and always in danger of spiking into a dangerous pyrexia. That fever spike happened no fewer than five times during the Obama presidency. Although these spikes were usually portrayed as the consequences of the personal friction between Obama and Israeli Prime Minister Benjamin Netanyahu, that friction was itself the result of the ideas about the Middle East and the world in general Obama had brought with him to the White House. In this case, the political became the personal, not the other way around.
Given the general leftish direction of his foreign-policy views from college onward, it would have been a miracle had Obama felt kindly disposed toward the Jewish state’s own understanding of its tactical and strategic condition. And Netanyahu spoke out openly and forcefully to kindly disposed Americans—from evangelical Christians to congressional Republicans—about the threats to his country from nearby terrorism and rockets, and a developing nuclear Iran 900 miles away. His candor proved a perpetual irritant to a president whose opening desire was to see “daylight” (as he said in February 2009) between the two countries. Obama caused one final fever spike as he left office by refusing to veto a hostile United Nations resolution. This appeared churlish but was, in fact, Obama allowing himself the full rein of his true and long-standing convictions on his way out the door.
The things Trump both has and has not done should not seem startling. They constitute the baseline of what we ought to expect one ally would say and not say about the behavior of another ally. But as Obama’s disgraceful conduct demonstrated, Israel is not just another ally and never has been. It is a unique experiment in statehood—a Western country on Mideast soil, born from an anti-colonialist movement that is now viewed by many former colonial powers as an unjust colonial power, created by an international organization that is now largely organized as a means of expressing rage against it.
Historically, American leaders have had to reckon with these unique realities—and the fact that the hostile nations surrounding Israel and hungering for its destruction happen to sit atop the lifeblood of the industrial economy. The so-called realists who claim to view the world and the pursuit of America’s interests through cold and unsentimental eyes have experienced Israel mostly as a burden.
Through many twists and turns over the seven decades of Israel’s existence, they have felt that America’s support for Israel is mostly the result of short-sighted domestic political concerns for which they have little patience—the wishes of Jewish voters, or the religious concerns of evangelical voters, or post-Holocaust sympathy that has required (though they would never say it aloud) an unnatural suspension of our pursuit of the American national interest.
Israel created problems with oil countries, and with the United Nations, and with those who see the claims for the necessity of a Jewish state as a form of special pleading. As a result, the realists have spent the past seven decades whispering in the ears of America’s leaders that they have the right to expect Israel to do things we would not expect of another ally and to demand it behave in ways we would not demand of any other friendly country.
The realists and others have spent nearly 50 years propounding a unified-field theory of Middle East turmoil according to which many if not all of the region’s problems are the result of Israel’s existence. Were it not for Israel, there would not have been regional wars in 1956, 1967, 1973, and 1982—no matter who might have borne the greatest degree of responsibility for them. There would have been other conflicts, but not this one. There would have been no world-recession-inducing oil embargo in 1973 because there would have been no response to the Yom Kippur War. Were it not for Israel, for example, there would be no Israeli–Palestinian problem; there would have been some other version of the problem, but not this one.
Unhappiness about the condition of the Palestinians in a world with Israel was held to be the cause of existential unhappiness on the Arab street and therefore of instability in friendly authoritarian regimes throughout the Middle East. Meanwhile, Israel’s own pursuit of what it and its voting populace took to be their national interests was usually treated with disdain at the very least and outright fury at moments of crisis.
It was therefore axiomatic that the solution to many if not most of the region’s problems ran right though the center of Jerusalem. It would take a complex process, a peace process, that would lead to a deal—a deal no one who believed in this magical process could actually describe honestly and forthrightly or give a sense as to what its final contours would be. If you could create a peace process leading to a deal, though, that deal itself would work like a bone-marrow transplant—through a mysterious process spreading new immunities to instability in the Middle East that would heal the causes of conflict and bring about a new era.
Again, this was the view of the realists. With Israel’s 70th anniversary coming hard upon us, the question one needs to ask is this: What if the realists were nothing but fantasists? What if their approach to the Middle East from the time of Israel’s founding was based in wildly unrealistic ideas and emotions? Central to their gullibility was the wild and irrational idea that peace was or ever could be the result of a process. No, peace is a condition of soul, an exhaustion from the impact of conflict, born of a desire to end hostilities. Only after this state is achieved can there be a workable process, because both parties would already have crossed the Rubicon dividing them and would only then need to work out the details of coexistence.
There was no peace to be had. The Arab states didn’t want it. The Palestinians didn’t want it. The Israelis did and do, but not at the expense of their existence. The Arabs demanded concessions, and the Israelis have made many over the years, but they could not concede the security of the millions of Israel’s citizens who had made this miracle of a country an enduring reality. The realists fetishized “process” because it seemed the only way to compel change from the outside. And so Israel has borne the brunt of the anger that follows whenever a fantasist is forced to confront a reality he would rather close his eyes to.
That is why I think what Trump and his people have done over the past 14 months represents a new and genuine realism. They are dealing with Israel and its relationships in the region as they are, not as they would wish them to be. They are seeing how the government of Egypt under Abdel Fattah el-Sisi is making common cause with Israel against the Hamas entity in Gaza and against ISIS forces in the Sinai. They are witness to the effort at radical reformation in Saudi Arabia under Muhammad bin-Salman—and how that seems to be going hand in hand with an astonishing new concord between Israel and the Desert Kingdom over the common threat from Iran. This is a harmonizing of interests that would have seemed positively science-fictional in living memory.
Mostly, what they are seeing is that an ally is an ally. Israel’s intelligence agencies are providing the kind of information America cannot get on its own about Syria and Iran and the threat from ISIS. Israel is a technological powerhouse whose innovations are already helping to revolutionize American military know-how. Israel’s army is the strongest in the world apart from the regional superpowers—and the only one outside Western Europe and the United States firmly locked in alliance with the West. Things are changing radically in the Middle East, and as the 21st century progresses it is possible that Israel will play a constructive and influential role outside its borders in helping to maintain and strengthen a Pax Americana.
Donald Trump is a flighty man. All of this could change. But for now, the replacement of the false realism of the past with a new realism for the 21st century seems like a revolutionary development that needs to be taken very, very seriously.
Of the making of Washington movies, there is no end. Kohelet said this in Ecclesiastes, I think. Or maybe it was Gene Shalit on the Today Show. It’s a truism in any case. Steven Spielberg’s latest entry in the genre, The Post, is for many Washingtonians the most powerful example in the long line. When the movie opened here in late December, there were reports of audiences cheering lustily and even dissolving in tears at the movie’s end, as if they were watching a speech by President Obama. The local paper ran news articles about it, along with numberless feature stories, interviews, op-eds, fact-checks, reviews, and reviews of reviews.
Which is excusable, I guess, since the movie is about the Washington Post. But then The Post is supposed to be about so many things. It’s about the First Amendment, depicting the agonies of the Post’s editor Ben Bradlee, and its owner, Katharine Graham, as they defy the Nixon administration to publish the top-secret Pentagon Papers. It’s about feminism and the personal evolution of Mrs. Graham from an insecure Georgetown socialite to Master of the Boardroom. It’s the story of the lonely courage of the leaker/whistleblower/traitor (your call) Daniel Ellsberg. It is also, so I read in the Post, a warning about the imperial designs of President Trump to smother a free press. And it’s been understood as a straightforward tale of political history, though the liberties Spielberg takes with his based-on-a-true-story are so extreme as to render it useless as a guide to what happened in the summer of 1971.
Running beneath it all is the motive that animates so many Washington movies: an impatience with the stuttering, halting processes of self-government. The wellspring from which the Washington movie flows is Frank Capra’s Mr. Smith Goes to Washington. The plot is familiar to everyone. Mr. Smith, a small-town bumpkin played by Jimmy Stewart—talk about stuttering and halting!—is appointed by sinister political bosses to a vacant Senate seat, on the assumption that he will be easily manipulated, like a movie audience. Instead, Smith stumbles upon an illicit land deal and exposes the Senate as a den of thieves. His filibustering floor speech rouses a populist outpouring from an army of alarmingly cute children. By the end of the movie, Mr. Smith has restored the nation to its democratic ideals.
Capra intended his movie to be a hymn to those ideals, and for nearly 80 years that’s what audiences have taken it to be. It is no such thing. Mr. Smith seethes with contempt for the raw materials of democracy: debate, quid pro quo deal-making, back-scratching compromise—all the tedious, unsightly mechanics that turn democratic ideals into functioning self-government. In Capra’s telling, democracy can be rescued only by anti-democratic means. An appointed charismatic savior (he’s not even elected!) uses a filibuster (favorite parliamentary trick of bullies and autocrats) to release the volatile pressure of a disenfranchised mob (the great fear of every democratic theorist since Aristotle). From Mr. Smith to Legally Blonde 2, the point of the Washington movie is clear: Left to its own devices, without an outside agent to penetrate it and cleanse it of its sins, self-government sinks into corruption and despotism.
Steven Spielberg is the closest thing we have to Capra’s successor. Like all his movies, The Post has many charms: a running visual joke about Bradlee’s daughter making a killing with her lemonade stand threads in and out of the heavier moments like a rope light. On the other hand, his painstaking obsession with period detail often fails: A hippie demonstration against the Vietnam War looks as if it’s been staged by the cast of Hair. The set-piece speeches are insufferable, an icky glue of sanctimony and sentimentality. What we call the Pentagon Papers was a classified history of the lies, misjudgments, and incompetence of four presidents, from Harry Truman to Lyndon Johnson, ending in 1968. Sometimes the speechifying is directed at the malfeasance of these men, as when Bradlee bellows: “The way they lied—those days have to be over!”
Weirdly, though, the full force of the movie’s indignation is aimed at Richard Nixon. Historians might point out that Nixon wasn’t even president during the period covered by the Pentagon Papers. Intelligence officials told the president that the release of the papers would pose an unprecedented threat to national security. He ordered the Justice Department to sue to prevent the New York Times and the Post from publishing the top-secret material. In the movie’s account, this ill-judged if understandable response is equivalent to the official, strategic lies that accompanied tens of thousands of American soldiers to their deaths.
A particularly rich moment comes when Robert McNamara warns Mrs. Graham about Nixon’s capacity for evil. As Kennedy and Johnson’s defense secretary, McNamara was an early version of Saturday Night Live’s Tommy Flanagan, Pathological Liar: The Viet Cong are on the run! Yeah, sure, that’s the ticket! As much as anyone, McNamara, with his stupidity and dishonesty, guaranteed the tragedy of Vietnam. And yet here he is, issuing a clarion call to Mrs. Graham. “Nixon will muster the full power of the presidency, and if there’s a way to destroy you, by God, he’ll find it!” Later Bradlee compares Nixon to his predecessors: “He’s doing the same thing!”
Um, no. From his inauguration in 1969 onward, Nixon’s every move in Vietnam was intended to extricate the U.S. from the quicksand previous presidents had led us (and him) into. In this case, if in no other, Nixon was the good guy. He had nothing to lose, personally, from the publication of the Pentagon Papers, and maybe a lot to gain. After all, they demonstrated the villainy of his predecessors, not his own. (That came later.)
Yet the movie can’t entertain the possibility that Nixon could act on anything but the basest motives. He is a sinister presence. We see him through the Oval Office window, always alone, with his back turned, stabbing the air with a pudgy finger and cursing the Washington Post to subordinates over the phone. It’s actually Nixon’s voice in the movie, taken from the infamous tapes. Unfortunately, the actor’s movements don’t synchronize with the words; in such a somber thriller, the effect is inadvertently comic. It reminded me of watching the back of George Steinbrenner’s head in Seinfeld while Larry David spoke the Yankee owner’s dialogue. And Nixon was no Steinbrenner.
The most plausible explanation is that Nixon, in trying to stop publication of the Pentagon Papers, was doing what he said he was doing: his job. American voters had elected him to protect national security and, not incidentally, the prerogative of the president and the federal government to determine how best to protect it, including determining whether sensitive information should be kept secret. If he didn’t do his job the way voters wanted him to, they could get rid of him next time. You know, like in a democracy.
Ben Bradlee, Katharine Graham, and Steven Spielberg, not to mention those teary audiences, have no patience with such niceties. As it happens, in the end, the Pentagon Papers were a bust. The sickening detail they disclosed deepened but did not broaden the historical record, and by all accounts their impact on national security was negligible. Those facts don’t alter the creepiness of The Post’s premise—that the antagonists of an elected regime are allowed to go outside the law when it suits their view of the national interest. Charismatic saviors (and few people were more charismatic than Ben Bradlee) can save democracy from itself, but only by ignoring the requirements of democracy. Spielberg continues the tradition of the Washington movie. The Post is Capraesque—in the only true sense of the word.
Is Harvard assaulting the rights of students to free association in the name of a diversity standard it doesn’t live up to itself?
Harvard College is home to six all-male “final clubs.” Their members have access to houses in which they eat, socialize, and form bonds with their fellows. These clubs are as historic as they are renowned; most were formed in the 19th century and have counted Kennedys, Roosevelts, and an endless procession of politicians, writers, and businessmen among their members. From their origins, these exclusive institutions have been an object of fascination. When doors are closed, and only a small, elite group selected from an already hyper-elite campus has been invited inside, jealousy, curiosity, and frustration are sure to prevail.
The final clubs are financially independent from Harvard and have been entirely unaffiliated with the university since the 1980s, when the administration and the clubs clashed over the latter’s refusal to admit women. But that conflict, which had cooled over time, has recently resurfaced in a new and heightened manner.
In March 2016 Rakesh Khurana, the dean of Harvard College, set an April 15 deadline for the final clubs, at which time they were to inform the administration whether they would change course and become co-ed. Two forces drove Khurana’s action. The first was a report by Harvard’s Task Force on Sexual Assault Prevention released days earlier, after years of research. The report indicated that students who were involved with the final clubs were significantly more likely to have experienced some form of assault than those who were not. The second impetus was the administration’s position that the final clubs—and the ways in which they screened members—were in direct conflict with the ethos of the university.
The deadline passed without response from the clubs. On May 6, 2016, Dean Khurana wrote a letter to Harvard President Drew Faust. He proposed that, beginning with incoming freshmen who would matriculate in the fall of 2017, students who became members of what he termed “unrecognized single-gender social organizations” should be ineligible for leadership positions in Harvard organizations—meaning they could not serve as publication editors, captains of sports teams, leaders of theatrical troupes, and the like. And they would also be ineligible for letters of recommendation from the dean, necessary for many prestigious postgraduate opportunities such as the Rhodes and Marshall scholarships.
Khurana’s letter, and the sanctions proposed within, quickly became a cause célèbre. Harry R. Lewis, a professor of computer science and himself a former dean of the college, wrote Khurana a letter expressing his concern that “by asserting, for the first time, such broad authority over Harvard students’ off-campus associations, the good you may achieve will in the long run be eclipsed by the bad: a College culture of fear and anxiety about nonconformity.” Lewis went on to note:
The reliance on your judgement of what count[s] as Harvard’s values, and using that judgment to decide which students will receive institutional support, is a frightening prospect….The discretion exercised by the dean and his representatives will chill the activism of students in causes that might also be considered noncompliant with Harvard standards—for example, advocacy for a religion that does not allow women to be full participants, or a political party that opposes affirmative action. Such groups are excluded from your mandate, but only as a matter of your discretion. Why wouldn’t activism for such organizations color the support the College would offer their members, on the basis that such students are showing that their true colors are not pure Crimson?
Lewis also referenced the faculty’s responsibilities and noted that there was no precedent in Harvard’s Handbook for Students for the sanctions, thus suggesting that Khurana’s proposals might be outside the administration’s jurisdiction.
In September 2016, Khurana detailed the responsibilities of the “Single-Gender Social Organizations Implementation Committee.” The committee was tasked with
consulting broadly with the College community to address the following questions: 1) What leadership roles and endorsements are affected by the policy; 2) How organizations can transition to fulfill the expectations of inclusive membership practices; and 3) How the College should handle transgressions of the policy.
In addition to the committee’s work, the faculty went through several rounds of motions and debate, discussing myriad permutations of the sanctions, as well as the validity of the sanctions themselves.
In December 2017, the discussions came to a halt. Harvard’s administration flatly announced it would impose sanctions on students who joined those “unrecognized single-gender social organizations,” or USGSOs. This ostensibly final decision has provoked renewed outrage from students, faculty, and alumni, who have grounded their varied objections in ethical, philosophical, and legal concerns.
Until the 1960s shattered the American elite consensus on such matters, the collegiate experience was vastly different for students. Universities used to view their role as being in loco parentis—serving in place of the parents from whom their charges had recently separated. Today, on Harvard’s enchanting campus, teenagers and twentysomethings tend to rule the roost. Students have tremendous flexibility in building their course schedules, and rare is the lecture professor who takes attendance. Undergraduates come and go as they please, to and from wherever they please, with whomever they please, from the darkest hours of the night to the earliest hours of the morning.
But from the time America’s colleges came into being in the 17th and 18th centuries until just a few decades ago, these institutions imposed rules and regulations, curtailed freedoms, and designed a microcosmic world in which young adults would—in theory—learn how to navigate the reality that awaited them after graduation. They were eased into the world in a setting that constricted their choices and where the powers that be very consciously, and intentionally, refrained from treating them like adults. This was most evident in the controls placed on contact between the sexes.
A 1989 Harvard Crimson article by Katherine E. Bliss detailed the so-called parietal rules of the 1960s. It noted that “in 1964, the primary goal of College administrators was maintaining ‘an open door and one foot on the floor’ policy for students entertaining guests of the opposite sex in their rooms.” At that time, the student body and the administration were in conflict over the right to do as they pleased in their own dorms: “Students in 1964 were concerned with lengthening the number of hours they were allowed to spend with members of the opposite sex in the privacy of their own rooms.” If this sounds quaint, consider Bliss’s next point. “Few,” she observed, “could appreciate the fact that only a decade earlier, men and women were not allowed to enter the dormitories of the opposite sex at all.”
The original parietal rules meant that the women of Radcliffe, Harvard’s sister college, could have been in the Harvard Houses only between the hours of 4 and 7 p.m. Robert Watson, a Harvard dean, explained at the time: “We have to watch the mores of our students. I do not want to see Harvard play a leading role in relaxing the moral code of college youth.” Indeed, he went on to say that “the college must follow the customs of the time and the community.…We cannot have rules more liberal than a standard generally accepted by the American public.”
Is there a single standard generally accepted by the American public today? For most of the country—with exceptions in deeply religious Jewish, Christian, and Islamic communities—ours is not an age that concerns itself with the amount of time that men and women spend together in solitude. But that doesn’t mean our era isn’t concerned with the moral development of our youth. On the contrary, leaders of America’s elite institutions today are as preoccupied with strengthening the souls of their charges as were the men who designed the parietal codes all those years ago. Only their aim is not sexual purity anymore, but rather social diversity. It is the heart and soul of the moral vision of our times, and administrators today are no less determined to see that students hew to that standard. But in their effort to serve in loco parentis in this fashion, educators are leaping across ethical—and possibly, legal—lines.
The fraternity-like final clubs have always been difficult to get into, much like Harvard itself. And for many years, the all-male final clubs were certainly characterized by discrimination. In a 1965 piece for the Crimson, Herbert H. Denton Jr., then an undergraduate, noted that while “the tacit ban on Jews has been relaxed in most clubs,” the “ban on Negroes is still in effect.” The same cannot be said today; while several of the final clubs are trying to retain their character by remaining single-gender organizations, they do not screen would-be members on the basis of race or religion.
Nonetheless, the administration has determined that they espouse values and ideas contrary to the Harvard spirit and must consequently be treated as an anachronistic wrong to be extirpated. In a statement issued in December, President Faust (along with William F. Lee, senior fellow of the Harvard Corporation) declared that
the final clubs in particular are a product of another era, a time when Harvard’s student body was all male, culturally homogenous, and overwhelmingly white and affluent. Our student body today is significantly different. We self-consciously seek to admit a class that is diverse on many dimensions, including on gender, race, and socioeconomic status.
The clubs have strict rules about speaking with the press, and every member I spoke with—both former and current students—did so on the condition of anonymity. Many brought up the topic of diversity, noting that in their experience, the members of their clubs were diverse in both ethnic and socioeconomic respects. Members of multiple clubs told me about policies under which an inability to pay club dues has no bearing on whether or not a student will be accepted. Indeed, one went so far as to note that the financial-aid offer is blatantly highlighted during the initiation process, so that those lower on the socioeconomic ladder are not even temporarily burdened by the misconception that their financial status might affect their membership.
The final clubs, like Harvard itself, may indeed be a product of another era. But just as Harvard has evolved, the final clubs have changed. Faust, Lee, and all of the actors in the anti-final-clubs camp ignore this. They also espouse a position that is as illogical as it is incoherent: Faust and Lee claim both that “students may decide to join a USGSO and remain in good standing” and that “decisions often have consequences, as they do here in terms of students’ eligibility for decanal1 endorsements and leadership positions supported by institutional resources.”
Most parents would not believe that their sons and daughters were in “good standing” if they came home from campus for winter break and told them they would be unable to be editor of the newspaper, captain of the debate team, or eligible for a Rhodes or Marshall scholarship. Yet Faust and Lee insist that “the policy does not discipline or punish the students.” It merely “recognizes that students who serve as leaders of our community should exemplify the characteristics of non-discrimination and inclusivity that are so important to our campus.” It’s hard to believe that Faust and Lee might honestly think that excluding students from leadership roles or prestigious postgrad opportunities would be construed as anything other than a punishment.
So why the insistence to the contrary? If the final clubs are, in the administration’s eyes, archaic, narrow-minded, discriminatory organizations, why not come out with an honest statement that calls for disciplining the students who dare to participate in these institutions? Lewis, the former dean, has explained this by making reference to what Faust and Lee do not mention—namely, Harvard’s Statutes—the internal bylaws governing the institution. Lewis cites part of the 12th statute, which lays out that “the several faculties have authority…to inflict at their discretion, all proper means of discipline.” He notes that “by declaring that ineligibility for honors and distinctions are ‘not discipline,’ what President Faust and Mr. Lee are saying is that the Statutes are not implicated, the matter is not one for the Faculty to decide, and no Faculty vote is needed to carry out the policy.” Indeed, Lewis notes that “it is important that the…policy not be discipline, because if it were discipline, and disciplinary action were taken against a student without a Faculty vote authorizing that policy, that student could challenge the action as not properly authorized.”
There is something else the Faust-Lee statement does not reference—and tellingly. In the beginning of the Harvard administration’s war on final clubs, concerns over sexual assault seemed to form the core of the issue. The Task Force on Sexual Assault Prevention reported that 47 percent of female college seniors who were in some way involved in final clubs—either because they attend events at the male clubs, or because they themselves are members of female clubs—said they had experienced “nonconsensual sexual contact since entering college.” Since “31 percent of female Harvard seniors reported nonconsensual sexual contact since entering college,” the report said, the data proved that “a Harvard College woman is half again more likely to experience sexual assault if she is involved with a Club than the average female Harvard College senior.” But Harvard’s sexual assault survey also found that 75 percent of “incidents of nonconsensual complete and attempted penetration reported by Harvard College females” happened in…Harvard dorms.
The report is sloppy and lumps together things that are not alike. For example, the Porcellian—Harvard’s oldest final club—does not allow any nonmembers through its doors. Charles Storey, who was then the Porcellian’s graduate president, provided a statement to the Crimson in which, among other things, he claimed that the club was “being used as a scapegoat for the sexual assault problem at Harvard despite its policies to help avoid the potential for sexual assault.” The Porcellian, he said, was “mystified as to why the current administration feels that forcing our club to accept female members would reduce the incidence of sexual assault on campus.” Indeed, Storey said, “forcing single gender organizations to accept members of the opposite sex could potentially increase, not decrease the potential for sexual misconduct.”
A day later, Storey apologized for his statement. A few days after that, he resigned as the Porcellian’s graduate president. His reasoning was admittedly inelegant, as it could be interpreted to suggest that club members would be unable to restrain themselves from committing sexual assault should women enter their domain. But Storey was not incorrect in pointing out that, by definition, women could not be subjected to unwanted touching in the Porcellian clubhouse if they were not allowed inside. For a club like the Porcellian, then, where instances of male-on-female sexual assault within the house are currently nonexistent, going co-ed would inherently guarantee that the opportunity for assault would expand. And that is why it is noteworthy (Storey’s humiliation notwithstanding) that the Faust-Lee declaration eliminated the attack on the final clubs for their ostensibly heightened role in unwanted sexual conduct. And why the entirety of the case against them now rests on their failure to hew to the administration’s convictions on gender egalitarianism.
The role that final clubs play in Harvard social life has been a contentious topic for decades. The perception has long been that socially, the members of Harvard’s male final clubs have too much power. On a campus with limited space for social gathering, the final-club mansions are often the source of the college’s most sought-after nightlife. Arguments have been made consistently over time that the exclusionary practices of the clubs—they typically accept only 10 to 25 new members a year—make for unpleasant and unfair campus social dynamics. But again, this conversation is happening at Harvard, an institution that prides itself on its prestige and exclusivity, and which accepted a mere 5.2 percent of its applicants to the class of 2021.
Lewis, the former dean, is not exactly a natural ally for the clubs. He told me that he was “pretty tough with them” during his tenure, and that he was “instrumental in trying to get some of the bad behavior of some of the final clubs under control.” The issues that arose during his time as dean seem to have mostly been related to parties that grew too loud or students who became too drunk. But confronting specific problems as they arise is an approach entirely different from issuing an all-encompassing sanction on free association. At Harvard, specifically, the implications of such a policy could have long-term ramifications. “As an educational institution that, for better or worse, graduates more than its fair share of the leadership of the country, in both industry and technology, and government and law,” Lewis said, “we should not be teaching students that the way you control social problems is by creating bans and penalties against joining organizations.” His “bigger worry,” he said, is that “students will come to think it’s a reasonable thing to do.”
Beyond all these considerations lies an additional layer of complication: legality. Even as a private institution, Harvard’s autonomy may not be as absolute as it seems to believe. I spoke by phone with Harvey Silverglate, a lawyer who is currently representing the Fly, one of the clubs. He told me that “Harvard is misinformed if it has been told by its lawyers or by the office of the general counsel that it can do what it is trying to do, that is to say, punish a private off-campus club, punish Harvard students for joining a legal off-campus club, that is not on Harvard property, and over which the university has no control.” If Harvard goes forward with its plan, Silverglate noted, it will have “overstepped its legal powers.” He spoke extensively about the specific challenges that Harvard would face under Massachusetts state law, explaining that there are free-speech provisions in the Massachusetts constitution that are more protective of speech than the First Amendment to the U.S. Constitution. In fact, Silverglate noted, the state’s supreme court has ruled in several instances that Massachusetts’s declaration of rights “limits the power of private institutions over the people it governs.”
In its desire to avoid a lawsuit, the Harvard administration—or the team of lawyers that doubtlessly advised it—carefully crafted a rule that would apply equally to men and women. Had the sanctions applied solely to male-only clubs, the university would likely have been faced with a federal lawsuit or investigation into gender discrimination. Yet despite the male final clubs being the primary target, the sanctions seem so far to have done the most harm to Harvard’s fraternities, sororities, and female final clubs.
One female student I spoke with is a member of one of the originally all-female final clubs that has recently gone co-ed rather than face the sanctions. She explained that within the club, there is a “feeling of resentment.” The USGSOs were all given the choice to either go co-ed or face the sanctions. “The girls clubs,” she told me, “have accepted it because they don’t have a lot of money.” While the male clubs have old and powerful alumni—and the money that comes with them—the female clubs are young and, by comparison, poor. “The boys can all sue,” she said, but “the girls clubs don’t have that privilege.” Having men in the club has certainly changed things for her. She explained: “It’s definitely different—I loved having an all-female space, and there was lots of merit to that socially and even in terms of networking.… I had this strong female network, and that was kind of eroded by going co-ed.”
Sorority members are facing similar challenges, but unlike the male and female final clubs that do not answer to a national body, they are unable to adapt as they see fit. Sororities and fraternities are unable to go co-ed without violating the rules of their national charters; the sanctions policy therefore affects their organizations most.
I spoke by phone with Evan Ribot, a Harvard alumnus from the class of 2014 who was president of the fraternity AEPI while on campus. Stressing that he could speak only for himself, and not on behalf of AEPI or the AEPI alumni network, he told me there was a “tenuous relationship between the administration and the fraternities” when he was on campus. “There was a sense that we operated in a gray zone because the university knew we existed,” he told me. “So we weren’t underground, but we also were not a recognized group.” As a result of the sanctions, AEPI at Harvard has dissolved itself and become a new organization, the gender-neutral “Aleph.” The organization is no longer affiliated with AEPI national.
“It’s a shame,” he said, “because some of my best friends were looking to join AEPI not because they wanted to be in an exclusionary single-sex organization but because they were looking for a place to fit in on a challenging campus.” The same is true for women. Ribot noted: “The sororities were an avenue for women to find their own spaces—not because they were looking to exclude men but because there is an inherent value to a group of women hanging out, just like there can be an inherent value to have men hanging out.… It’s not rooted in exclusion.”
In some circumstances, it appears, Faust agrees. She herself attended Bryn Mawr—a women’s college—and serves as a special representative on the board of trustees of her alma mater. “It is impossible to figure out how Faust can reconcile helping to provide that singular experience to women while at the same time denying any portion of that experience to the women she is responsible for at Harvard,” said Richard Porteus, graduate president of the Fly Club. He graduated from Harvard in 1978 and was elected a member of the Fly Club in 1976. He spoke of the diversity of his club class and reflected that while “there were some people whose names also appeared on Harvard buildings,” he “didn’t come from wealth” and was not only elected to the club but became an officer. Porteus explained that “one’s socioeconomic standing did not matter.” All that mattered, he said, was “the potential for forming life-long friendships.”
The debate over Harvard’s final clubs would have taken place in an entirely different framework if we were still living in a time when university administrators saw their role as fill-in parents—and if that role were viewed as a comfort by the parents themselves. But today’s universities are, for better or worse, largely a free-for-all. The curtailing of certain freedoms thus becomes all the more apparent, and all the more disturbing, when measured against the backdrop of a prevailing “you do you” attitude. The core of the administration’s position seems to be reinforced by an overwhelming need to groom a student body that shares all the same beliefs and values—those that echo the principles that the administration itself espouses. If it deems single-sex social groups discriminatory, then there is no room for those students who see them not as beacons of gender exclusivity but as opportunities for friendship and support. In an educational institution, the only kind of diversity that should matter is diversity of thought. That’s a lesson the Harvard administration desperately needs to learn.
Harvard’s own questionable record on diversity is currently under harsh scrutiny—and not because of the behavior of clubs that have a tenuous connection to the university’s educational mission. Research has demonstrated that to gain entry into an institution like Harvard, Asian-American applicants must score an average of 140 points higher on their SATs than white applicants, 270 points higher than Hispanic applicants, and an astonishing 450 points higher than African-American applicants. The Justice Department has taken note and is investigating the matter. In December, the New York Times reported that the university has agreed to give the DOJ access to applicant and student records. That Harvard’s administration has become consumed with the goal of bringing an end to institutions that fail to meet a 21st-century standard for diversity is not without its savage ironies.
1 Meaning something a dean does.
Review of 'In the Enemy’s House' by Howard Blum
Nearly a decade would pass before the FBI and NSA began to release the actual Venona transcripts in 1995. In the years since, a number of books (including several co-authored by me) have analyzed the Venona revelations, while others have mined Communist International files and the KGB archives. Virtually all the major mysteries about Soviet espionage in the United States have been resolved by these once-secret documents. In addition to confirming the guilt of the Rosenbergs, Alger Hiss, Harry Dexter White, and virtually every other person accused of spying in the 1940s by the ex-spies Whittaker Chambers and Elizabeth Bentley, these books have exposed several important and previously unknown agents such as Theodore Hall, Russell McNutt, and I.F. Stone. Indeed, the only accused spy who turns out to have been innocent (although he was a secret Communist almost up until the day he took charge of developing an atomic bomb) was J. Robert Oppenheimer.
A handful of espionage deniers, centered around the Nation magazine, continue to argue, against all evidence and logic, that Alger Hiss is still innocent. The Rosenberg children continue to distort their mother’s role in espionage. And some hard-core McCarthyites still demonize Oppenheimer. But in truth, the bloody battle over who spied is over.
Lamphere’s book emphasized his collaboration with the Army cryptographer Meredith Gardner in the hard work of unraveling the spy rings using the Venona cables. Employing those 1986 recollections as a template, the Vanity Fair contributor Howard Blum has now given us In the Enemy’s House, an overly dramatized but largely accurate account of the friendship between the outgoing, hard-driving, atypical G-man Lamphere and the shy, scholarly, soft-spoken Gardner as they worked together to find and prosecute those Americans who had betrayed their nation.
Blum intersperses the American hunt for spies with the recollections of Julius Rosenberg’s KGB controller, Alexander Feklisov, who ran Rosenberg in 1944 and 1945 and supervised Fuchs in Great Britain from 1947 to 1949. Feklisov watched with mounting dread as the KGB’s atomic spy networks were exposed, both because of Venona and the KGB’s own blunders—most notably because the Russians used Harry Gold, Fuchs’s contact, to pick up espionage material from David Greenglass, who was Julius Rosenberg’s brother-in-law and part of his spy ring.
Blum also uses information from many of the scholarly accounts that have already appeared, although not always carefully. His only new sources are interviews with members of the Lamphere and Gardner families and access to their personal notebooks. But while he provides a list of his sources for each chapter, Blum does not use footnotes. Many of the personal and emotional reactions to the investigation that he attributes to people, and especially to Lamphere, presumably come from these sources, but it is never clear whether they are based on contemporaneous written notes or on third-party recollections of events more than 50 years in the past.
Such objections are not mere academic carping. While Blum successfully turns this oft-told story into an interesting and suspenseful narrative, his approach comes at a cost. For example: He is eager to transform Lamphere from a diligent and resourceful FBI investigator who often chafed at the bureaucracy and petty rules that governed the agency into a full-blown rebel who almost singlehandedly forced the FBI to take up the problem of Soviet espionage. To do so, Blum suggests that until the FBI received an anonymous letter in Russian in August 1943 alleging widespread spying and naming KGB operatives, the Bureau regarded the investigation of potential Soviet spies as useless because allies did not spy on each other.
This is wrong. In fact, the FBI had already mounted two large-scale investigations—one of Comintern activities in the United States undertaken in 1940 and the other of attempted espionage directed at atomic-bomb research at the Radiation Laboratory in Berkeley, which began in early 1943. Both had unearthed information on atomic espionage. These included discomfiting details about Robert Oppenheimer’s Communist connections; efforts by Steve Nelson, a CPUSA leader in the Bay Area in contact with known Soviet spies, to obtain atomic information; and contacts between a Soviet spy and Clarence Hiskey, a chemist on the Manhattan Project.
At one point, Blum renders one of Hiskey’s contacts, Zalmond Franklin, as Franklin Zelman and mischaracterizes him as “a KGB spook working under student cover.” In fact, Franklin was a veteran of the Abraham Lincoln Brigade working as a KGB courier. In any event, the FBI neutralized this threat by transferring Hiskey from Chicago to a military base near the Arctic Circle, thereby scaring his scientific contacts (whom he had introduced to a Soviet agent) into cooperating with the Bureau.
There are other occasions where Blum demonstrates an uncertain grasp of the history of Soviet intelligence. He misstates Elizabeth Bentley’s motives for defecting; angry at being pushed aside by the Soviets, she feared she was under FBI surveillance. And he claims that only three witnesses testified against the Rosenbergs (Ethel’s brother and sister-in-law and Harry Gold), which leaves off others (Bentley, Max Elitcher, and the photographer who had taken passport photos for the family just prior to their arrests).
Blum’s account of the way the KGB encoded and enciphered its messages is oversimplified. The mistake that made it possible for American counterintelligence to break into the Soviet messages was the Soviet intelligence services’ reuse of some one-time pads. Not all of the pads were used twice, and only when a pad had been used twice could the FBI strip the random numbers from the message sent by Western Union. That process allowed Gardner to attempt to break the underlying code. The vast majority of the Soviet cables remained unbreakable, and many could be only partially decrypted. And most of the decrypted cables had nothing to do with atomic espionage but concerned the stealing of diplomatic, political, industrial, and other military secrets.
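For readers curious why reusing pad material was so catastrophic, the weakness can be sketched in a few lines of code. This is an illustrative toy, not the actual Venona procedure (Soviet traffic layered codebook groups under additive numeric pads, not the byte-wise XOR used here), but the principle is the same: a truly one-time pad is unbreakable, yet the moment one pad encrypts two messages, combining the two ciphertexts cancels the pad entirely and leaks the combination of the two plaintexts, which cryptanalysts like Gardner could then attack with language statistics.

```python
import os

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Hypothetical messages for illustration only.
msg1 = b"MEETING AT NOON"
msg2 = b"REPORT TO MOSCOW"
pad = os.urandom(max(len(msg1), len(msg2)))  # random one-time key material

ct1 = otp_encrypt(msg1, pad)
ct2 = otp_encrypt(msg2, pad)  # fatal mistake: the same pad used twice

# XORing the two ciphertexts cancels the pad, leaving msg1 XOR msg2 --
# no key required. The "random numbers" have been stripped away.
leaked = bytes(a ^ b for a, b in zip(ct1, ct2))
assert leaked == bytes(a ^ b for a, b in zip(msg1, msg2))
```

Used once, the pad makes each ciphertext statistically indistinguishable from random noise; used twice, it hands the analyst a pad-free residue of the two underlying texts, which is why even partial Soviet reuse opened the door to the decryptions described above.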
Partly to heighten suspense, Blum misrepresents or distorts the timelines on matters involving Klaus Fuchs and the Rosenberg ring. He harps on Lamphere’s frustration about not being able to use the decrypts in court, but the FBI had concluded it was highly unlikely that they could be legally introduced into evidence without exposing valuable cryptological techniques, a conflict Lamphere surely understood. That very problem helps explain the FBI’s inability to prosecute Theodore Hall, the youngest physicist at Los Alamos, who had been exposed as a Soviet spy. Blum mistakenly suggests that the FBI agent in Chicago who investigated Hall was unaware of Venona. But that agent did know; the problem was that when the FBI began its investigation in the spring of 1950, Hall had temporarily ceased spying. He was eventually brought in for questioning, but neither he nor his one-time courier and friend, Saville Sax, broke and confessed. Lacking independent evidence, the FBI was stymied.
The most significant flaw of In the Enemy’s House is its assertion that Ethel Rosenberg’s conviction and execution were monumental acts of injustice that disillusioned both Lamphere and Gardner, soured their sense of accomplishment, and left them consumed by guilt. It is true that Lamphere had opposed Ethel’s execution and had drafted a memo that J. Edgar Hoover sent to the judge urging she be spared as the mother of two young sons. Gardner had translated one Venona message that indicated Ethel knew of her husband’s espionage but because of her delicate health “did not work,” which Gardner interpreted to mean she was not part of the spy ring. But, as Lamphere pointed out in his own book, her brother David Greenglass had testified to her involvement in his recruitment. And KGB messages available following the collapse of the Soviet Union now make clear that Ethel had played a key role in persuading her sister-in-law, Ruth Greenglass, to urge her husband to spy.
In The FBI-KGB War, Lamphere never evinced deep moral qualms about their fate. He expressed a more complex set of emotions. “I knew the Rosenbergs were guilty,” he writes, “but that did not lessen my sense of grim responsibility at their deaths.” And he calls claims that the case was a mockery of freedom and justice both “abominable and untruthful.” Blum insists that Gardner was “stunned” by their deaths and quotes him as saying somewhere: “I never wanted to get anyone in trouble” (which would suggest a monumental naiveté if true).
Blum’s claim that Lamphere and Gardner had condemned themselves “to another sort of death sentence” for their roles is a wild exaggeration. So, too, is his charge that Lamphere believed that in the Rosenberg case the United States “might prove to be as ruthless and vindictive as its enemies.”
Finally, Blum links Lamphere’s decision to leave the FBI for a high-level position in the Veterans Administration to a sense of lingering guilt. But in his own book, Lamphere attributes the move to the frustration he felt once he realized he would be stuck as a Soviet espionage supervisor for years to come. Blum links Gardner’s brief posting to Great Britain to work with its code-breaking agency to an effort to escape his guilt, but he never mentions that Gardner returned to work at the National Security Agency for many years.
Retired intelligence agents friendly with both men have no recollection of their expressing regret about their role in the Rosenberg case. It is possible that they made some such comment to a family member or jotted down something in a notebook, but without very specific and sourced comments, the idea that they ever regretted their work exposing Soviet spies is nonsense that mars Blum’s otherwise entertaining account.
What we got instead was a combination of celebrity puffery and partisan cheap shots at the Trump administration. The politics of North and South Korea, and the equally complex and intricate relations between these two countries and China, Japan, Russia, and the United States, were reduced to just another amateur sport. Ignorant and supercilious reporters transposed the clichés of the electoral horse race, complete with winners, losers, buzz, and sick burns, to nuclear brinkmanship. Major news organizations could not have done Kim’s job any better for her.
A representative example was written by no fewer than seven CNN reporters and researchers, who concluded, “Kim Jong Un’s sister is stealing the show at the Winter Olympics.” The lead of this news article—I repeat, news article—was the following: “If ‘diplomatic dance’ were an event at the Winter Olympics, Kim Jong Un’s younger sister would be favored to win gold.” Gag me.
Then the authors let loose this howler: “Seen by some as her brother’s answer to American first daughter Ivanka Trump, Kim, 30, is not only a powerful member of Kim Jong Un’s kitchen cabinet but also a foil to the perception of North Korea as antiquated and militaristic.” Kim’s “Kitchen Cabinet”—why, he’s just like Andrew Jackson. And how could anyone have the “perception” that North Korea is “antiquated” and “militaristic”? Sure, they might threaten the world with nuclear annihilation. But have you seen Donald Trump’s latest tweet?
New York Times reporters are either smarter or more efficient than their peers at CNN, because it took only two of them to write “Kim Jong-Un’s sister turns on the charm, taking Pence’s spotlight.” Motoko Rich and Choe Sang-Hun described Kim’s “sphinx-like smile” and “no-nonsense hairstyle and dress, her low-key makeup, and the sprinkle of freckles on her cheeks.” They contrasted the “old message” of Vice President Pence, who has no freckles, with Kim’s “messages of reconciliation.” They cited one Mintaro Oba, a “former diplomat at the State Department specializing in the Koreas, who now works as a speechwriter in Washington.” What they did not mention is that Oba worked at Barack Obama’s State Department and writes speeches for a Democratic firm. Not that he has an axe to grind or anything.
The typical Kim puff piece began with her charm, grace, poise, statesmanship, and desire for unity and peace. Then, 10 paragraphs later, the journalist would mention that oh, by the way, North Korea is a totalitarian hellscape that Kim’s family has been plundering for over half a century. For instance, describing the South Korean reaction to Kim, Anna Fifield of the Washington Post wrote,
They marveled at her barely-there makeup and her lack of bling. They commented on her plain black outfits and simple purse. They noted the flower-shaped clip that kept her hair back in a no-nonsense style. Here she was, a political princess, but the North Korean “first sister” had none of the hallmarks of power and wealth that Koreans south of the divide have come to expect.
A political princess! It’s like Enchanted, except with gulags and famine.
Deep in Fifield’s article, however, we come across this sentence: “Certainly, Kim, who is under U.S. sanctions for human rights abuses related to her role in censoring information, was treated like royalty during her visit.” Just thinking out loud here, but maybe human-rights abuses and censorship deserve more than a glancing reference in a subordinate clause. Fifield went on to say that “Vice President Pence, who was also in South Korea for the opening of the Winter Olympics but studiously avoided Kim, had worried in advance that North Korea would ‘hijack’ the Olympic Games with its ‘propaganda.’” Now where could he have gotten that idea?
The fascination with Kim revealed both the superficiality and condescension of much of our press. Fifield’s colleague, national correspondent Philip Bump, tweeted out (and later deleted) a photo of Kim sitting behind Pence at the opening ceremonies with the comment, “Kim Jong Un’s sister with deadly side-eye at Pence,” as if he were being snarky about an episode of Real Housewives.
When Kim departed the Olympics, Christine Kim of Reuters wrote an article headlined, “Head held high, Kim’s sister returns to North Korea.” Here’s how it began:
A prim, young woman with a high forehead and hair half-swept back quietly gazes at the throngs of people pushing for a glimpse of her, a faint smile on her lips and eyelids low as four bodyguards jostle around her.
The Reuters piece ends this way: “Her big smiles and relaxed manner left a largely positive impression on the South Korean public. But her sometimes aloof expression and high-tilted chin also spoke of someone who sees herself ‘of royalty’ and ‘above anyone else,’ leadership experts and some critics said.” Thank goodness for the experts.
Kim Jong Un could not have anticipated more glowing coverage for his sister, for the robot-like cheerleaders he sent alongside her, or for his transparent attempt to drive a wedge between South Korea and its democratic allies. “North Korea has emerged as the early favorite to grab one of the Winter Olympics’ most important medals: the diplomatic gold,” wrote Soyoung Kim and James Pearson of Reuters, who called Pence “one of the loneliest figures at the opening event.” Quoting on background “a senior diplomatic source close to North Korea,” Will Ripley of CNN wrote an article headlined, “Pence’s Olympic trip a ‘missed opportunity’ for North Korea diplomacy.” But who was Ripley’s source? Dennis Rodman?
What most disturbed me was the difference in coverage of Kim Yo Jong and Fred Warmbier, whose son Otto died last year after being tortured and held captive in North Korea. Fred Warmbier accompanied Pence to the Olympics as a reminder of the North’s inhumanity and menace. Journalists ignored, dismissed, and even criticized this grieving man. Among many examples of thoughtlessness and callousness was a Politico tweet that read: “Fred Warmbier criticizes North Korean Olympic spirit.” He must have missed Kim’s freckles.
Washington Post columnist Christine Emba asked: “Is Otto Warmbier a symbol, or a prop?” You see, Emba wrote, “Otto’s father may want his son to be a symbol. But the nature of his escort risks turning him into a prop.” Why? Well, because “symbols stand for something” while “props are used by someone.” And “the Trump administration, which hosted Warmbier, is made up of shameless instrumentalizers who have made clear that they stand for very little.” So there you go. We should be skeptical of Fred Warmbier because Trump.
Emba’s not all wrong. There were a lot of props and tools at the Olympics. You could find them in the press box.
I was nine when I made my first trip to Israel in June of 1968, almost exactly a year after the Six-Day War. My parents had been in Italy the autumn before, and while vacationing in Rome they learned that there were inexpensive flights leaving twice a week for Tel Aviv. The whole of Israel was giddy at the time, its insecurities lifted for the moment by the stunning success of the Six-Day War, which had increased the total size of the young, besieged nation by more than two-thirds.
My mother finally found a use for the crumpled phone numbers of distant Israeli relatives she’d been carrying in her purse for the past several months, relatives on both her father’s and her mother’s side, Romanians all. Osnat, my mother’s second cousin once removed, had had the misfortune of remaining in Europe while the Nazis were on the move. She spoke of having spent five days hiding from the Germans in the liquid filth of an outhouse and breathing through a tube when they came near.
Meeting scores of warm and loving relatives and being feted by them as “our dear American Mishpacha” was partly why my parents were both so taken with Israel—that and the Israeli people themselves, the Sabras, so proud and brash, and the ancient beauty of the land. With some talk of perhaps making Aliyah, or at least exploring the idea of our moving to Israel, my parents, my siblings, my first cousins, and my Grandma Rose and her younger brother, Uncle Sol, gathered up a month’s worth of warm-weather clothing and flew en masse to Tel Aviv. We were greeted at Lod Airport by a crush of relations, all of them clamoring to hug and kiss us. And then as the sun descended into the Mediterranean and night fell over the coastal plain, they drove us all north in a rag-tag caravan of tiny old Fiats, Renaults, and Peugeots to the beach town of Netanya, where we stayed for the entire summer in a tiny flat just behind the home Osnat shared with her husband, Shlomo.
Days later, I’m with my father and my brother Paul at the Wailing Wall. It’s weird to think that only a week ago I was at home watching Gilligan’s Island and looking for my dad’s Japanese Playboys in the bottom drawer of his bedroom closet during the commercials. Now, I’m in Jerusalem, in the glaring sun beneath this gigantic wall of stone. When I’m sure no one’s looking, I put both hands on the wall, and then I touch my forehead to it. The stones are colder than you’d think they’d be in all this heat.
For reasons I don’t understand, I start to cry. I’d be embarrassed if my brother or my dad saw me like this, so I pretend that I’m praying. I wonder, though, am I just crying because you’re supposed to cry here? If the rabbis from the Talmud Torah had shown me pictures of some random bridge in Saint Paul from the time I was in nursery school, would I have cried at that, too?
When I look up at the wall again, I see some birds’ nests and a million pieces of paper with people’s prayers in them, all stuffed into the cracks between the stones. Everyone who comes here wants God’s attention. I’ll bet He loves all the notes. They probably make Him feel like someone gives a shit about the cool stuff He does.

I had been born a Jew in Minneapolis. Growing up Jewish there wasn’t a good or a bad thing any more than growing up with snow was good or bad. It just was. Because we Jews were so few, being one made us all feel different. It wasn’t a difference we’d asked for or earned, either. It, too, just was. Becoming somewhat Jew-centric was natural for us. We were fond of staying close to one another, close to our causes and to our history; it was just a natural reaction to being the “other.”
It’s 1970 and I’m in junior high, on my way to English, when I see Nelson Gomez, Stuey Nyberg, and Craig Walner. They’re hip-checking kids into the tall metal lockers that line the hall. They are the three kings of the Westwood Junior High’s dirtball dynasty, young hoodlums who regularly and without fear skip school, smoke filter-less Marlboros, and shout “Fuck you, faggot” to students and staff members alike, save perhaps for Mr. H, the anti-Semitic shop teacher with whom they have forged an abiding friendship.
To the left and right of me, hapless students fly, body-slammed with alarming speed into the lockers by the three of them. It doesn’t escape my notice that these unfortunates have not been chosen randomly. There goes Brian Resnick. Next it’s Shelly Abramovitz and then Alvin Fishbein. As I round the corner, Stuey Nyberg grabs my second cousin, Elaine Kamel, by the shoulders and slams her face-first into her own locker. She and they were selected for no other reason than their Jewishness.
I grab Stuey by his neck with both hands and I claw at him until my fingernails pierce his pale skin and blood spurts from his jugular. Now I take the clear plastic aquarium algae scraper that I made in Mr. H’s shop class this very morning and use it to gouge out one of Nelson Gomez’s eyeballs, making sure he can see it in the palm of my hand with his remaining eye. Craig Walner tries to run, but I catch him by his mullet and shove his head into Elaine Kamel’s locker. I slam her locker door on him again and again. I don’t stop until his head is severed from his neck…
…and my daydream comes to an abrupt halt when Stuey Nyberg says, “Himmelman, it’s your turn to meet the lockers, you fucking kike.” Without a word of warning, he clouts me with a stinging jab right to my nose. It’s the first time I’ve ever been hit in the face, and while it’s agonizing, the blow is also somehow euphoric. I’m supercharged with adrenaline, I feel as if I’m on fire. But of course, I don’t hit Stuey back. God, no. I simply stand there glowering at the three of them, blood dripping from my large Jewish nose. And for the first time in my life, I feel downright heroic. I look around me and I see that, for now at least, our bitterest enemies have stopped hip-checking what feels like the entire Jewish nation.
Six months later it’s summer vacation, and we Himmelmans fly from Minneapolis to New York and connect with a nonstop to Tel Aviv. In less than two days, I’m on a towel on the beach in Netanya looking out at the cerulean blue of the Mediterranean.
As I lie on the hot sand, Mirage fighter jets with blue Jewish stars emblazoned under their wings suddenly streak so low across the water that I can smell jet fuel. As they scream overhead, the whole beach seems to shake. With a strange sense of clannish pride, I laugh and stare up at the planes as they accelerate and finally rocket out of range.
My father died in 1984, after suffering from Stage IV lymphoma for five years. I was 25 years old. A year later, I was living in the Twin Cities, working on music with my band, when I received a call from a woman named Ruth Grosh. She asked if I’d be willing to write some songs for a therapeutic teddy bear she’d dreamed up called Spinoza Bear. Ruth, a bona fide subversive by nature and New Age before anyone had even come up with the term, named her ursine brainchild after Baruch Spinoza, the heretical 17th-century Jewish philosopher whose ideas were seen as harmful to, and at odds with, the views of the Jewish establishment of Amsterdam at the time. Eventually, both he and his writings were placed under a religious ban called a “cherem” by the Dutch Jewish community where he lived and worked. Aside from the fact that he was reviled for his modernist views, no one had much bad to say about him personally, except that “he was fond of watching spiders chase flies.”
The songs were to play on a battery-operated tape deck that fit into a zippered pouch beneath the soft brown fur of the bear’s stomach. A red heart-shaped knob on the bear’s chest served as the on-off switch. By today’s standards, the technology would seem crude, but at the time, with just a modicum of suspension of disbelief, it was possible to feel that the voice of the bear, along with the music, was issuing directly from its cheery muzzle. As to whom to hire as the voice of Spinoza Bear, it was decided after some deliberation that not only would I write and sing the songs, but I would also be the kind, concerned voice of the bear itself.
Each of the dozen or so cassette tapes that were eventually recorded had themes of self-empowerment, a kind of you-can-make-it-if-you-try bent. After just two years, the bear became a huge success—not as some plebeian, retail teddy, but as something greater. Spinoza Bear soon found his way into hospitals, health clinics, and centers for healing of all kinds. By holding the bear and listening closely to his stories and songs of wellness and inner light, rape victims, grief-stricken parents, bone-lonely pensioners, autistic kids, as well as children on cancer wards all across America found it possible to relieve some of their pain and fear.
Aside from the good works, the bear provided me with twenty grand in seed money that our band, Sussman Lawrence, used to set sail for New York City in 1985.
We were five new-wave rockers in an Oldsmobile Regal Vista Cruiser wagon, and two roadies in a spanking-new Dodge cube van. The van, we were overjoyed to discover, had been hastily christened from bumper to bumper with graffiti sometime during our 45-minute debut set at CBGBs, the legendary East Village rock-and-roll club, only days after we arrived on the East Coast.
Given the high cost of living in New York City, New Jersey seemed the next best thing. As it turned out, there were very few homeowners interested in renting a house to a band. I hatched a plan, which involved my calling on a middle-aged real-estate agent named Carol we’d found advertising in a Bergen County newspaper. When I finally got her on the line, I explained to her that we were medical students enrolled that fall at nearby Rutgers University and in need of a quiet place to live and study.
The following morning, as the rest of the guys waited outside in the Oldsmobile, my cousin Jeff, our band’s gifted keyboard player, and I showed up at Carol’s office in suits and ties we’d purchased at a local thrift shop, carrying responsible-looking briefcases. I had boned up on some medical terms as well, orthopedic surgical techniques mostly, in case she needed proof that we were actually who we were claiming to be. But there had been no need. We had the cash and seemed honest enough—“honest enough” to let her know that a few of us were also part-time musicians and that there might be some music playing, quietly of course, from time to time, just to ease the strain of our intense studies.
Two days later, Jeff and I woke up early, signed the lease papers, and pulled our now multihued, invective-laden cube van into the driveway of 133 Busteed Drive in Midland Park, New Jersey.
Trying for as much discretion as possible, lest the neighbors notice anything out of the ordinary, we backed the van up to the garage, lugged the gear up a short flight of stairs and into a large, unfurnished living room. Once upstairs, we began unloading beer-stained amplifiers, at least a dozen guitar cases, a drum set packed tightly into three large metal flight cases, assorted keyboards, and an entire public-address system and lighting rig. Aside from some bad scrapes in the hardwood floor and a gaping hole or two in the walls on our way in, the load-in was accomplished with speed and efficiency. We were up and practicing by late afternoon, our new-wave rock blaring fast and loud into the New Jersey autumn night.
A month after settling in, Ruth Grosh reached me at dinnertime by long distance, in the squalor of our band-house collective. After some catching up, she gently let me know that some psychic friends had explained to her that I had just a few months left on the planet. “What?” I said. “They told you I was gonna die?” Ruth was practiced at this kind of thing, it seemed, although her nonchalance about my imminent demise didn’t make me feel any less concerned. “They asked me to find out if you’d like to come in for a free consultation,” she said. I was due to fly back to Minneapolis later that week anyway, and I figured I might as well find out what all this planet-leaving nonsense was about.
Back home, on the morning of my appointment with the psychics, I found my mother, who was normally quite composed, flitting around the kitchen and singing quietly to herself. She had agreed to a lunch date that afternoon with a contrabass player from the Minnesota symphony, her first since my dad had died almost two years before.
“Does this blouse look good on me?” she asked. “Be honest.”
“Yeah, it looks great,” I said.
I was uncomfortable in the extreme watching my mother dart around the house like a schoolgirl primping for a date with some dude who wasn’t my dad. True, it’d been two years since he’d died, and given all that she’d been through, it wasn’t like she didn’t deserve to live a little. After all, I thought, it was just lunch. But the more I saw of this weird, giddy side of her, the less I liked it. A car honked. It was Ruth.
She and I rode wordlessly as Japanese New Age wooden flutes intoned from her car stereo. We arrived after twenty minutes at the northern suburb of Brooklyn Center, and Ruth parked her car near a long row of newly built town houses. A man and a woman in their mid-forties greeted us at the front door, both smiling in a scary, off-putting way. They appeared to be a kind of husband-and-wife psychic tag team, and they rushed headlong into the consultation by asking if I’d like to give them some names of people I knew.
“We’ll be able to tell you all about them,” the woman said and smiled again. I thought it was just some cheesy method of showing off.
“The first names are enough,” said the man.
“Okay, let’s go with Jeff,” I said.
My cousin Jeff is a musical genius, a pianist of remarkable facility, who’s had to contend with neuromuscular tics most of his life. The two psychics were seated facing each other in cheap leather armchairs. In an instant, they were both precisely mimicking my cousin’s facial tics. I recognized each of them from the names Jeff and I had given them. When Jeff’s thumbs bent downward spasmodically, we called it “Southerner.” When his palms flexed upward in a sort of hand-waving motion, we called it “Reckless Greeter.” In another, with his eyebrows pinched together, lips compressed, and eyes blinking, Jeff looked like someone who was very curious about his environment. We called that one “Curious Man.” His most frequent tic was also his most unsettling. We called that one “Round the World.” It involved his eyeballs rolling uncontrollably in their sockets. Suddenly, to my astonishment, the corners of both of the psychics’ mouths had formed narrow half smiles. Their eyebrows began squeezing together; their eyes were blinking—open-shut-open-shut—perfectly mimicking Jeff’s Curious Man.
“The music, he can’t stop the music,” the woman shouted in excitement. Her husband, whose hands then began a remarkable imitation of Reckless Greeter added, “Yes, good God, the music! Can’t you feel it just pouring out of him?”
I was thinking this had to be some kind of brilliant trick, albeit a devilish one. It was astonishing, yes, but I wasn’t yet convinced that they were real. Next, I said the name “Beverly,” my mother’s, and they both giggled. It’s disconcerting to see adults giggle at any time, but when a pair of middle-aged psychics giggle at the mention of your bereaved mother’s name, it’s triply so.
“She’s doing something she feels guilty about,” the woman offered.
“Yes,” said the man. “Something she’s afraid of doing, but it seems to us that she’s also very excited.”
Almost in unison, the psychics said, “She’s acting like a little schoolgirl today!”
How in hell could they have known what I’d just experienced myself for the first time in my life that very morning? If these two freaks had wanted my undivided attention, they sure as hell had it now.
The room fell silent. I didn’t dare speak. They had officially scared the living daylights out of me with their last trick. Soon, they broached the subject I’d come all this way to talk about.
“Is it your wish to leave the planet?” the woman asked, more casually than I would have imagined possible for someone questioning a fellow human being about whether he wanted to live or die.
I paused and breathed deeply for a minute or so. It was a question I stopped and thought about longer than a mentally stable person might have.
“No,” I finally told them, “I have no intention of leaving anytime soon.”
This seemed to relieve them. The man said, “The reason we’ve been so concerned about you is that we believe music is more important to you than you may be aware. It represents your very essence, and by working as single-mindedly as you have to get a record deal, with the kind of music you’ve been making with your band, you’ve been cheapening and compromising your integrity. You’ve been, in a sense, unfaithful to your muse. That’s what’s causing this spiritual disconnect and, should it continue, my wife and I both feel like it will shorten your stay here.”
His wife took over: “What you need to do is uncover a deeper, more honest expression in your music, something closer to the bone. We know you love the blues and reggae. We think it’ll be helpful to start playing music you love, rather than music you think will sell.”
By this time, tears were spilling down my cheeks. “There’s this song,” I began telling them, “that I wrote for my dad over two years ago on Father’s Day, that almost no one has heard. It’s something that was written with the sole intention of connecting with him before he died. It’s on a cassette tape, just sitting there on a shelf in my closet.”
“Why not put that song out as your next single?” the man said.
I was suddenly speechless. Why had I never thought of this? It was such a simple yet profound idea. I flew back to New Jersey, determined to release not just the one song, but an entire album dedicated to my father.
The guys picked me up in the Oldsmobile at Newark Airport the next day. We were standing around the luggage carousel waiting for my bags when I told them I was going to record a solo record, a tribute to my father, whom they all loved and respected.
My bandmates understood this was something I needed to do. They also knew it wasn’t just talk. A solo album, produced for whatever reasons, also signaled the possibility that the ethos of the band might well be coming to an end. Nevertheless, they played their hearts out on the record and, by doing so, tacitly gave me their blessings and their assurances that whatever happened with it would be for the best.
The recording featured the song I’d written for my dad, and it eventually became my debut album, This Father’s Day, for Island Records.
Its release also became a powerful catalyst for me personally. It took me from where I had been, locked up in pain and confusion, to some other, more hopeful place. Even before my meeting with the psychics, I thought I’d gotten beyond most of the hurt, that it was simply time to grit my teeth and persevere. It had been two years, after all. But I was mistaken. The process of mending broken hearts is never as pat as that. As much as I needed to forget, to emerge clear-eyed from the jumble and rawness of my father’s death, I knew I’d have to face my worst fears again and again. But I felt ready. I also knew, in a way I hadn’t before, that I really didn’t want to die.
While my father was suffering in the last five years of his life, I found myself in a different state of mind from that of my friends and bandmates, who were, for the most part, blithely moving through their young lives. I’m not saying pain made me wise; it’s just that it can, for those willing to accept its hard lessons, provide a bit of perspective, shine some light on what’s sacred and what’s less so.
During those years I was working very hard to become famous, whatever that might have meant. I felt that I needed to reach some level of achievement before my dad died. I suppose I was conducting a search for miracles. It’s no wonder. For my family and for me at least, miracles seemed to have been in very short supply back then.
It’s miracles, after all, that compel us forward, that encourage us to move with some degree of willingness into the next day. But, despite what we might believe, it’s hardly ever the big ones that truly move us. The sea can split, we can win the lottery, we can even become rock stars, and still, those phenomenal circumstances are never what matter most. In the end, the only miracle worth wishing for is the ability to be made aware of the smallest splendors, the most inconsequential truths, and the overlooked rhythms that connect us to the people and things we love.
I felt a kind of heat rising up around me in those days, a sense that what had long been static was now stuttering back into motion. There was a pleasant strangeness to the feeling, but like many things that at first strike us as unusual, it wasn’t wholly unfamiliar, either. I’d felt that same unnamable sensation, lying awake in my bed in the dark as a young child, focusing on individual moonlit snowflakes as they fell outside my window. I felt it again in Jerusalem, at nine years old, when I first touched the sunbaked stones of the Western Wall. I felt it the first time I’d snorkeled in the Red Sea and became drunk from sheer beauty. I felt it the frigid November morning we buried my father. I felt it on the evening I finally met my wife, and again, the moment when each of my children was born.
The circumstances were wildly varying, but in each instance there was a sense of being taken from one place to another, of inertia finally giving way to movement. It was as if my mundane life had cracked open and I saw, arrayed in front of me, some image of the unseen hand that forms and directs the universe.

My first experiences in Crown Heights, Brooklyn, at age 27 were catalytic. A rabbi named Simon Jacobson had posed a single question and it, too, set me into motion: “Why is walking on the surface of the Earth any less miraculous than flying above it?” he’d asked.
The idea that the world is a wondrous, mysterious place—even as we are destined to walk on the mundane surface of it, even if we cannot truly fly—is both a liberating and comforting notion. Being attuned to wonder is my preferred condition. Perhaps it’s natural for each of us. But why, then, are so many moments not imbued with this sense of the miraculous? Why is there such a divide between barely sensing and deeply feeling?
What I did know in the autumn of 1987, with a certainty I hadn’t known before—perhaps couldn’t have known—was that I needed to get married. I had awakened to the idea that there was nothing I was doing with my life, not my music, not my friendships, not my finally getting that almighty record deal, more important than finding the right woman with whom to create a family and live out my days. I also knew that to do this, I would need to create a powerful forcing frame for myself, not one that would constrict or limit me, but one that would allow me to channel my outsized ego and my creative proclivities toward more productive ends than I’d ever dreamed possible.
Eventually, I made a sort of pact with myself, a silent, personal agreement. It came down to this simple declaration: The next time I sleep with a woman, it will be with my wife. This meant that I had to extricate myself from my longtime girlfriend. Though I was, and still am, extremely fond of her, I could never envision her as a lifetime partner or the mother of my children. In addition, our arrangement was somewhat nebulous, and so this new, self-imposed structure also meant that I’d have to cut off any contact with the other women with whom I was having casual sex. I had to make a fundamental cultural and emotional shift. I would need to wean myself away from years of assumptions about the very nature of what a modern relationship meant. I would have to forge a new way of looking at women, at my role as a man, and at the world at large.
It became clear to me that the freedom I had always longed for could be obtained only through the somewhat paradoxical means of setting limits, delaying gratification, and cutting away many experiences that an all-pervasive consumerist culture had been (and continues to be) hell-bent on selling. If you’ll allow me, I’ll explain this further by way of metaphor.
Music is among the most transcendent of all art forms, for both the performer and the listener. Since it has no form or substance, it can easily serve as a model for the boundlessness of spirituality. But as anyone who has mastered a musical instrument knows, musical ideas are expressed almost exclusively by means of structure and restriction, words very few of us would associate with freedom.
At first glance, this seems like a paradox. How could something as liberating and intangible as music be based on restriction? Not only is music based on restriction, I’d go so far as to say that, aside from the existence of raw sound—elemental white noise, if you will—the only other thing that allows music to take place, the only thing that differentiates it from this pure noise, is what sounds the musician chooses to leave behind. In this sense, music comes about not by choosing notes but by the elimination of notes. Take a look at the idea in this somewhat inverse manner: Only by rejecting all other sonic choices are we left with the ones we truly desire. To make music, we don’t add, we subtract.
Something as commonplace as the key signature of a particular piece of music also reflects this idea. Unless you were trying to achieve a harsh atonal musical effect, you wouldn’t want to be playing in the key of B-flat minor while your key signature called for you to be playing in A major. The ensuing “music” would sound like a chaotic racket to most people. The time signatures of compositions, along with their tempos, which require that a particular note last only so long and that it be played at a particular speed, also operate on this same principle—creation by negation. Ignoring the time signature, or playing at any speed without regard for the overall tempo, is another good way to produce only noise.
It is only through adherence to the limiting factors of time and tempo that music can take shape. In that same sense, if it weren’t for the constraint of playing only certain keys on a piano, and thereby negating all other choices, you would hear only noise. Anyone who has heard his or her toddler pounding away on a piano knows exactly what this sounds like.
Most, if not all, musical instruments also work on this principle of restriction. The trumpet, for example, is based upon compression and restriction. If the air a player blows into the trumpet’s mouthpiece weren’t compressed and regulated by the embouchure, the only sound you’d be able to hear would be a soft wind-like noise passing through the horn.
As I became more and more immersed in the wisdom of Jewish thought and practice, the idea of freedom-in-structure became clearer and ever more personally relevant. If it was true for music, I wondered, how much more true must it be for all of life itself? And given that human sexuality (whether or not the participants engaged in a sexual act are conscious of it) concerns the creation of life, it occurred to me that causing dissonance in that most meaningful—dare I say mystical—arena of life was something I definitely needed to avoid.
I knew I had to place a set of restrictions on myself in order to make music out of my life, as opposed to just raw sound. Although this conception of the universe felt new to me, new in the sense that it was radically different from the one I’d been acting on for so many years, it wasn’t unfamiliar. Without my knowing it, I had undergone an awakening. I became alert to a perspective I recalled vaguely, even from my earliest childhood. It was as if I could see something important forming (though what it was, was still unclear) out of a barely examined and often fleeting sliver of thought. All at once, the world around me seemed to feel very much as it did when I was a child. I could remember clearly, lying feverish in bed, waiting for sleep, with every last thing in the world unknown and unexplained.
It was frightening as an adult to feel these thoughts growing stronger and more pervasive, but it also felt safe in ways—as though there’d been a kind of revelation, one that seemed to say: “Peter, son of David, there is a purpose to everything you’ve experienced in the recent past and everything you see before you now. From this moment on, there are things you must do and ways you must act.”
The mantra to live without restrictions, which had guided me for most of my life, seemed at that point to be leading me only to chaos. I believed I could, and must, do better for myself. My most fervent wish was no longer to become a rock star; it was to create my own family, one that could become a replacement for the one I’d been missing, the one that had changed so drastically when my father died.
So, in a tour bus rolling across the American continent, I did the three most practical things I could think of: I stuck to my private pact, I dreamed, and I prayed several times a day to an unseen Deity for strength and for love.
This part of the story really begins a few months after my dad’s funeral, when I found myself in a cramped apartment in South Minneapolis auditioning some songs I’d written for a local performer named Doug Maynard. I sang him a few things and he nodded quietly. Doug wasn’t a big talker. Finally he chose one. “Man, I think I could do this justice,” he said. It was called “My First Mistake.”
You taste like pepper frosting on a granite cake.
Baby fallin’ in love with you was my first mistake…
Less than a year later, Doug was found dead in his living room, stone-drunk and drowned in his own vomit at the age of forty. Before this happened, however, he had introduced me to his manager, who had introduced me to a New York City music lawyer, who had introduced me to a record producer named Kenny Vance.
Kenny had worked with a lot of famous people and he wasn’t particularly shy about mentioning just whom. “I used to date Diane Keaton,” he told me. “I know Woody Allen—been in a couple of his films. I was the music director for Saturday Night Live.” Then he said, “Tonight I’m gonna take you to my main connection, a religious Jew in Brooklyn.”
Before long, Kenny and I were crossing the Brooklyn Bridge. We arrived at an apartment in Crown Heights where Kenny’s friend, Simon Jacobson, greeted us. I liked Simon right off the bat. His eyes reflected some essential paradox, some awareness that being alive is both a source of great humor and great sadness. His wife, Shaindy, introduced herself with a gracious smile and placed glass bowls of almonds and chocolate-covered coffee beans on a yacht-sized table before excusing herself to tend to her young children. The thing I didn’t understand at first was how a big hirsute guy like Simon, in an oversize yarmulke, with a massive beard and in a white polyester button-up, was able to land such a good-looking wife. I soon learned that around these parts, it wasn’t the guy who could throw a football the farthest who got the girl. Simon had another thing going for him.
His job, at the time, was to memorize every word of the Lubavitcher Rebbe’s Shabbos dissertations and record them on Saturday night for publication later in the week. To understand the scope of the job, it’s necessary to know that when the Rebbe spoke, it was often for four or more hours straight, without breaks, without notes, and in a manner of cyclical and increasing complexity. To make things even more challenging, the Rebbe wasn’t freestyling. Everything he taught was derived from a compendium of source materials that ranged into the tens of thousands of books. And the talks could not be recorded as he gave them, because it was the Sabbath and no electricity could be used.
When I once mentioned to Simon how awed I was at his ability to memorize this much information, he looked at me and said: “The memorization is the least of it. It’s the task of compiling it with the proper source notes that’s the real challenge. Every day I correspond with the Rebbe, and he writes me back with perfect editor’s notes. Once I wrote and said I didn’t understand a particular passage and couldn’t find the source for it. The Rebbe had a sharp sense of humor. He sent me back a markup with a big red circle, not just on the sentence I was having an issue with, but around the whole page, with the words, ‘What do you understand?’”
It was getting late. Kenny had left me there and driven back to the city. As Simon spoke to me, I kept looking up at the oil paintings of shtetl life and the Rebbe hanging on the walls. I was prodded more by fatigue than bravado when I finally asked, “What’s the deal with those pictures of the Rebbe? They seem sort of cultish to me.”
“I like the pictures,” he said. “To me, the Rebbe is like a very inspiring grandfather, and I get a lot out of reflecting on the things he says and the way he lives his life. There are people for whom there is no sense of self, people called Tzadikim, who have no need for personal gain. A Tzadik lives only to serve others, and they can do anything they wish.”
“Really?” I asked with just a hint of comic disdain. “Can they fly?”
“Understand, I’ve never seen anyone fly,” Simon answered. “But for a Tzadik, the act of flying is no greater miracle than the act of walking.”
This idea stunned me. Not because it was new. The things that move us most never are. They are things we already know, beliefs that are buried away inside us. Of course, when you stop and think about it, there’s absolutely no difference between the weights of the two miracles, walking and flight. It’s just that we non-Tzadikim get so tired of the one that happens all the time.
At that moment, at that table in Brooklyn, I started thinking about the little-known rhythm-and-blues singer Doug Maynard. I was remembering the sound of his voice and simultaneously considering the infinite number, the impossible number, of tiny coincidences—the tendrils, if you will, that in their unfathomable complexity, had guided me to that particular apartment on that particular night. The thought was so vivid, it was as if I could hear Doug singing again. Singing most soulfully, most truthfully about the joy, and the sweat, and the pain of this world. It wasn’t long after that I met the Lubavitcher Rebbe for the first time. He handed me a bottle of vodka and a blessing for success, and I started becoming more Jewishly observant right away: keeping Shabbos in my tiny apartment in Hell’s Kitchen, keeping kosher, and putting on tefillin. I married Maria two years later. We’ve been married for nearly 30 years.
About a year ago my cousin Jeff asked me what it had been like to meet the Rebbe. This is exactly how I answered him.
“You know when you’ve done something you think is horrible (whatever the hell it may be) and you start going down—deeper and deeper into the rabbit hole of regret? When you’re in so deep that you start to feel like the biggest loser ever born, like nothing is possible, that nothing good is ever gonna come your way, and that you can’t even face yourself in the mirror?”
“Sure,” Jeff said. “I’ve been there.”
“Well,” I said, “meeting the Rebbe was the exact opposite of what I just described.”