Not since World War II has there been such an era of ill feeling between Western Europe and the United States as there has been in the past year. No sooner had the then Presidential Assistant Henry A. Kissinger issued his call in April 1973 for a new Atlantic Charter to reinvigorate “shared ideals and common purposes with our friends” than it appeared that our friends had begun to change places with our enemies. Six months later, in October, Dr. Kissinger, now Secretary of State, was overheard saying, no doubt in pique, that he didn't care what happened to NATO because he was so disgusted with it. Two months after that, in December, he privately described the behavior of the Europeans during the Arab-Israeli conflict as “craven,” “contemptible,” “pernicious,” and “jackal-like.” In March of this year, he unguardedly expressed his disgruntlement by publicly complaining that getting our friends to realize our “common interests” was a bigger problem than “regulating competition” with our enemies.1* Others have observed this extraordinary topsy-turvydom of friends and enemies. According to so experienced and respected a student of international affairs as George F. Kennan, the United States “now has relations with the Soviet Union fully as cordial as those with most of the European NATO members”—which is another way of saying that we are no more cordial with the latter than with the former.2
This is a peculiarly disturbing state of affairs for a country whose leaders, including Dr. Kissinger, have never tired of protesting that NATO, the Atlantic alliance, or Western Europe, is the very “cornerstone” of U.S. foreign policy. A profound change has obviously taken place, going far beyond the problems and provocations that have presented themselves in the past year. If it continues much longer, the 1970's are unlikely to produce the “structure for peace” pursued by President Nixon; they are more likely to resemble the 1930's in the breakdown of a fragile international order.
“Concepts” and “conceptual” have long been among Dr. Kissinger's favorite terms. Almost ten years ago, he drew up an indictment of U.S. policy which, in part, sounds strangely familiar today: “In recent years this promise [of a partnership between a united Europe and the United States] has been flawed by increasingly sharp disputes among the allies. The absence of agreement on major policies is striking. On the Continent, the fear of a bilateral United States-Soviet arrangement is pervasive.” And what was the basic trouble? It was, he wrote, the existence of an open challenge “not just to the technical implementation of American plans but to the validity of American conceptions.” In June 1968, after a long silence, Dr. Kissinger finally pronounced on what had gone wrong in Vietnam. He traced many of our difficulties in Vietnam to “conceptual failures” and decided that “almost all of our concepts, the military ones as well as some of the traditional ones, have really failed.” A few months later, he flatly asserted that “the basic problem [in Vietnam] has been conceptual.” More recently, in connection with his visit to Moscow in March 1974, Dr. Kissinger let it be known that he was looking for a “conceptual breakthrough.”3
It is all the more surprising, then, that concepts may well be the most vulnerable aspect of Dr. Kissinger's tenure as Presidential Assistant and Secretary of State. He has lived such a charmed life since he went to Washington in 1969 that far more attention has been paid to his personal tactics than to his conceptual strategy, although he used to warn against the danger of “being mired by the prudent, the tactical, or the expedient” and was wont to inveigh against the lamentable disposition of American leaders to act, during periods of détente, “as if a settlement could be reached by good personal relations with their Communist counterparts.”4 Indeed, one of the most intriguing aspects of the Kissinger phenomenon is the curious difference between Professor Kissinger and Presidential Assistant-Secretary of State Kissinger.
Professor Kissinger had devoted himself almost wholly to Europe and to European-American relations. In book after book and article after article he had taught that the fate of the United States was bound to be decided in or with Europe. In this view he had not differed from official American policy, at least until the Johnson administration, which had departed from it in deeds if not in words. Yet Presidential Assistant Kissinger went to work for a President who, not long before taking office, had downgraded Europe as secondary in future American policy. This position had been taken by Mr. Nixon in a well-known article in Foreign Affairs of October 1967, which has been so distorted in the retelling that one suspects more people have referred to it than have actually read it. The article has been recalled mainly for its alleged foreshadowing of President Nixon's later China policy because in it he had urged that “our long-range aim is to pull China back into the family of nations” and “into the world community” on condition that it give up the role of being “the epicenter of world revolution.” These phrases were subsequently taken to mean that Mr. Nixon had served notice of his intention to establish friendly relations with the People's Republic of China. But there was both more and less to it than that.
The leading Nixonian concept at that time had to do with a question of fundamental importance: What area of the world would be most dangerous to the United States in the final third of the 20th century? Mr. Nixon's answer in this very article was: “Asia, not Europe or Latin America.” He saw the United States as “the greatest Pacific power,” propelled westward, as “partners,” to be sure, not as “conquerors.” Asia was where “the greatest explosive potential is lodged.” The rhythm of history, as he put it, had dictated that “the focus of both crisis and change is shifting” from Europe to Asia. Europe having been rebuilt and the Soviets “contained,” he urged that we should reserve our main energies for Asia “to reach out westward to the East, and to fashion the sinews of a Pacific community.”
It was in this context that Mr. Nixon discussed what to do about China. There was no indication in this article that he thought of using China as a counterweight against Soviet Russia. Rather, he saw China as the “clear, present, and repeatedly and insistently expressed” threat in Asia. In order to meet that threat more effectively, he advocated coming “urgently to grips with the reality of China” by pulling China “back into the family of nations” and into “the world community.” At this stage in his rethinking of U.S. foreign policy, Mr. Nixon seemed oblivious to the contradiction inherent in his desire to concentrate American energies in Asia, which could only threaten or disturb China, and his inclination to make some sort of friendly overture to China, which could come to fruition only if America showed signs of leaving Asia alone. Nixon's formula was still so vague that it did not attract much notice at the time. When Dr. Kissinger later said that it “really foreshadowed the Peking initiative”5 he was hardly justified by anything in the 1967 article itself, since it contained little more than the germ of the idea of establishing some sort of new relationship with Communist China.
Mr. Nixon has had it both ways on the Vietnam war. During the period of the main U.S. build-up in 1965-67, he protected President Johnson's Republican flank and, if anything, out-Johnsoned Johnson in his pro-war fervor. Mr. Nixon would have had U.S. troops in Vietnam as early as 1954 when the French were facing defeat at Dien Bien Phu. On the eve of the massive U.S. intervention in 1965, he went all out for President Johnson's policy. He was one of those who thought the war was being fought primarily between the United States and Communist China. An extreme exponent of the “domino theory,” he foresaw Chinese Communist aggression as far as Australia in four or five years if South Vietnam fell. No one in the Johnson administration was more convinced than Mr. Nixon that the only way to end the war was “by winning it in South Vietnam.” Nevertheless, by the end of 1967, in his Foreign Affairs article, Mr. Nixon drew back somewhat from his previous bellicosity. He now recognized that “the role of the United States as world policeman is likely to be limited in the future,” that there was no more “room for heavy-handed American pressures,” and that “the central pattern of the future in U.S.-Asian relations must be American support for Asian initiatives.” Asia had by far the highest priority in his scheme of things. After he was elected, President Nixon repeated and repeated and repeated that he was not the one who had sent more than 500,000 American boys to fight in Vietnam. That was true; it also was true that he was the one who had done everything a Republican leader could do to make it possible for a Democratic President to send them.6
It is more difficult to know what Professor Kissinger thought about Asia and the Vietnam war. His 1965 book, The Troubled Partnership, was devoted to the European-American relationship, but in connection with it he made some interesting references to Southeast Asia. He was mainly concerned with getting across the thought—which figured prominently in his “new Atlantic Charter” speech eight years later—that our European allies had “ceased to think of themselves as world powers.” For this reason, he warned that the United States could not expect “meaningful support for such United States policies as the defense of Southeast Asia.” At the same time, however, he ventured the opinion that “over the next decades the United States is likely to find itself increasingly engaged in the Far East, in Southeast Asia, and in Latin America,” in none of which our European allies were likely to see any vital interest of their own.7 The book was written before President Johnson decided on large-scale U.S. intervention in Vietnam, but judging from Dr. Kissinger's expectation the decision could not have come as a surprise. In 1965-66, Dr. Kissinger made two trips to South Vietnam at the invitation of Ambassador Henry Cabot Lodge—who later recommended him as Presidential Assistant to President Nixon. In 1967, he acted as the American go-between in conjunction with two Frenchmen in an abortive “peace feeler.” Nevertheless, he published nothing about the war during these three years. His biographer, a close friend for many years, tries to explain this puzzling silence on the ground that Dr. Kissinger “had nothing to say” because he “did not know enough about the issues.”8 Since he knew more and was privy to more than all but a relatively few in the highest official circles, there would have been little protest against the war if his modesty or reticence had been more contagious.
In the spring and summer of 1968, Dr. Kissinger finally found his voice on the entire range of American foreign-policy questions, including the Vietnam war—the voice of former Governor Nelson Rockefeller, who was running for President and who, we are assured, spoke what Dr. Kissinger wrote.9 Fortunately, however, we do not have to depend wholly on such indirect evidence. In June of that year at a conference in Chicago sponsored by the Adlai Stevenson Institute of International Affairs, Dr. Kissinger spoke for the first time in his own name on the Vietnam war and its lessons. His main contribution, as previously noted, was the concept of our “conceptual failures.” Ironically, in view of the later insistence on his alleged infatuation with “balance of power,” he then lamented that the epidemic of rebellions in the world “cannot be encompassed at all by traditional theories of balances of power.” His comments did little more than to communicate his disenchantment with the war, with U.S. policy, and even with “the American philosophy of international relations.”10
By chance, Dr. Kissinger's last published article dealt with “The Vietnam Negotiations.” Written before President Nixon asked him to serve, it appeared in January 1969, just as he was going into the White House. His criticism of U.S. policy was again “conceptual,” though it clearly showed that he had a great deal of inside knowledge about what had gone on militarily and politically in Vietnam. For an inveterate conceptualizer, however, he seemed to give inordinate importance to what he called the “choreography” of negotiations—the way they were carried on. A fascination with tactics, maneuvers, and symbols appeared to preoccupy him as much as grand concepts and large historical movements. After pages on the proper tactics to pursue vis-à-vis North Vietnam, he bethought himself to caution: “The over-concern with tactics suppresses a feeling for nuance and for intangibles.” This advice seemed to make the statesman or diplomat a conjurer of the ineffable and the impalpable. One would have expected that an over-concern with tactics would suppress a sense of essence and substance. Concepts and nuances, the philosopher and the fixer, were already struggling for the mind and soul of Henry Kissinger.
The Vietnam war produced another more tangible and immediate conflict. On the one hand, Dr. Kissinger had clearly come to the reluctant conclusion that the war was a bad job and needed to be ended as soon as possible. On the other hand, he held on to the conviction that it had to be ended “honorably” at all costs.11 The key concept here was “honorable”—although he never made clear how a dishonorable war could be ended honorably. The closest he came to condemning or repudiating the war was a comment that it was open to “the charge of bad judgment.” He hastened to add that such a “charge” could not be removed by “a demonstration of incompetence” in ending the war. Another concept mobilized in favor of ending the war “honorably” or not at all—for it came down to this in the end—was a variant of the “domino theory.” What was now at stake in Vietnam, Dr. Kissinger argued, was “confidence in American promises,” on which the stability of much of the world—the Middle East, Europe, Latin America, even Japan—allegedly depended. He apparently could not envisage anything worse than “unilateral withdrawal,” though the settlement which he eventually arrived at was admittedly a bilateral withdrawal only on paper. We have heard ever since that the North Vietnamese have never withdrawn. Judging by President Nixon's own criteria—“honor, a peace fair to all, and a peace that will last,” and “with no misunderstandings which could lead to a breakdown of the settlement and a resumption of the war”12—this “peace” has already failed the test, at least so far as the Vietnamese themselves are concerned. Long before the end of the Asia-first century once envisaged by Mr. Nixon, it is safe to say, it won't make a particle of difference how the United States got out of Vietnam; all that will matter is that the United States was forced to pull out and that the war between the Vietnamese went on.
There was also at this stage a curious historical disparity in the thinking of Mr. Nixon and Dr. Kissinger. The former went into office with the concept that American destiny would for the rest of the century move westward to Asia; his view of the Vietnam war and relations with China was governed by this grand, if highly dubious, proposition. Dr. Kissinger did not go anywhere as far afield in his outlook; he seemed to worry most about the more immediate consequences, mainly outside Asia, of how the Vietnam war would come to an end. Though their reasons were somewhat different, both were prepared to fight as long to end the war without victory as President Johnson had been prepared to fight for victory. Yet they were luckier than he was. Enough people seemed to think that the war had gone on for so long that it might never end; anyone who could bring it to a close at all, no matter how long it took and at what frightful cost, was entitled to eternal gratitude.
What concerns us here is not so much the Vietnam settlement as the decision which led to four more years of war until the settlement was reached. That decision was made by President Nixon, with Dr. Kissinger's concurrence, early in 1969, in the first weeks of the administration, and it may well be the most critical one that they made. Once they decided on a course which put American interests all over the world at the mercy of such an intangible nuance as “honor” in Vietnam, their freedom of action and even their field of vision were hopelessly restricted elsewhere. That this is exactly what happened as a result of the Vietnam war is not open to question. We have it on the authority of President Nixon himself: “Now, in terms of our world situation, the tendency is, and this has been the case for the last five to six years, for us to obscure our vision almost totally of the world because of Vietnam.”13 In January 1973, he remarked that “it just happens as we complete the long and difficult war in Vietnam, we must now turn to the problems of Europe.”14 The price of the Vietnam war was paid not only in Vietnam; it was paid all over the world. It is with the other price that we must now reckon.
Another major related decision was made at the outset of the Nixon administration.
Apart from the Vietnam war, the great question before U.S. policy was whether the problem of our friends was more important and deserved priority over the problem of our foes. Was it better or safer to let Western Europe go its own way and further weaken its ties with the United States, or to concentrate on working out a détente with Soviet Russia and a rapprochement with the People's Republic of China? The decision went in favor of the latter course. Western Europe was put on the waiting list, after China, Russia, and Vietnam. 1971 was the year of China; 1972 was the year of Russia; the Vietnamese agreement came at the end of 1972; and Europe was scheduled for 1973. The stage was thus carefully prepared for a crisis in European-American relations.
This order of priorities was based on several considerations: China and Russia could do far more than Europe to help end the Vietnam war. Russia was North Vietnam's main supplier, and China was supposed to be the chief inspiration of North Vietnam's hardliners. If outside pressure was to be brought on North Vietnam, it clearly had to be brought through them. Thus the Vietnam war was itself partially responsible for the choice of priorities.
Other factors undoubtedly entered into the calculation. By 1969, it was apparent that the Russians were heading for nuclear parity with the United States. For some reason, U.S. officialdom has been prone to underestimate Soviet military intentions and capabilities. It was surprised by the rapidity with which the Soviets achieved the A-bomb, the H-bomb, advanced jet engines, long-range turbo-prop bombers, airborne intercept radars, and large-scale fissile material production.15 In the mid-1960's, the Americans did not expect the Soviets to be willing to pay the exorbitant price necessary to achieve numerical equality in missiles. In 1966, the United States decided to build no more nuclear weapons and to limit itself to improvement of existing weapons.16 By late 1964 or early 1965 at the latest, however, the Soviets set out not only to equal but to surpass the United States, at least numerically, in intercontinental missiles.17 The publicly announced Soviet military budget rose from 12.8 billion rubles in 1965 to 13.4 in 1966, to 14.5 in 1967, 16.7 in 1968, 17.7 in 1969, and 17.9 in 1970—an increase of 40 per cent. The real resources devoted to the Soviet military, including secret and hidden allocations, amounted of course to much more.18 The point here is not whether the world has changed for better or worse because the Soviet Union decided to catch up with the United States in nuclear power. The Soviet decision and achievement faced the incoming Nixon administration with the unpleasant choice of accepting Soviet-American strategic parity or of engaging in another, probably futile arms race. From a strictly military point of view, Europe was out of this contest, and the Soviet Union preempted the fullest attention.
If China and Russia were the pressure-points on North Vietnam, China was obviously the pressure-point on Russia. By 1969, it was commonly believed that the Russians were more worried about the Chinese menace than about the American threat. Despite the protestations of President Nixon and his officials that nothing was further from their minds than the idea of trying to play off China against Russia, no one else could be prevented from seeing a Chinese-American rapprochement in this light. Again, Europe as well as Japan was out of the contest; they watched from afar, with pleasure, consternation, or indifference, the Soviet-American strategic-arms-limitation talks and the Chinese-American tête-à-tête as both proceeded in 1971.
In his incredible interview with the Italian journalist, Oriana Fallaci, Dr. Kissinger volunteered the information that, from 1969 on, he had wanted to achieve three things: peace in Vietnam, rapprochement with China, and a new relationship with the Soviet Union.19 The “troubled partnership” between the United States and Europe was significantly missing from the list.
But there was a consolation prize for Europe. The catchword was “partnership.” In one of his first Presidential speeches in April 1969, President Nixon pledged the United States to “deep and genuine consultation with its allies” whom he referred to as “partners.”20 In his report to Congress of February 1970, a key document of administration policy, Mr. Nixon said that the nations of Western Europe and North America made up “our partnership”; he also called for both a “genuine partnership” and “a new and mature partnership.” To make the Europeans acutely conscious of what a good deal they were getting from him, Mr. Nixon took to telling them how badly treated they had been by his predecessors. Anyone who wanted to demonstrate that the United States had run NATO and the Atlantic alliance as if they were plantations with a master and slaves had only to cite the President of the United States, who now confessed: “For too long in the past, the United States had led without listening, talked to our allies instead of with them, and informed them of new departures instead of deciding with them.” All this was going to change as the United States moved “from dominance to partnership.”21 A year later, in February 1971, Mr. Nixon announced that the move had successfully been made: “In Western Europe, we have shifted from predominance to partnership with our allies.”22
By 1971 the main concepts of the Nixon-Kissinger foreign policy seemed to have crystallized—rapprochement with China, détente with Russia, partnership with Western Europe. Perhaps because he would not leave well enough alone or because his adviser on national-security affairs was an inveterate conceptualizer, Mr. Nixon could not resist the temptation to bring forth his own grand conceptual structure. In the summer of 1971, he took a group of news-media executives meeting in Kansas City, Missouri, into his confidence and gave them a glimpse of the world in the next five to fifteen years. He characterized the United States, Western Europe, Soviet Russia, mainland China, and Japan as the “five great economic superpowers.” The emphasis was clearly on “great” rather than on “economic.” These five, he said, “will determine the economic future and, because economic power will be the key to other kinds of power, the future of the world in other ways in the last third of this century.”23 Six months later, in January 1972, Mr. Nixon elaborated on this theme. He fitted the new era of the big five into the traditional theory of “balance of power,” which he extolled as having been the only basis for an extended period of peace in the history of the world. “It is when one nation becomes infinitely more powerful in relation to its potential competitor that the danger of war arises,” he explained. “So I believe in a world in which the United States is powerful. I think it will be a safer world and a better world if we have a strong, healthy United States, Europe, Soviet Union, China, Japan, each balancing the other, not playing one against the other, an even balance.”24
Until he became President, Mr. Nixon had belonged to the school of thought which believed in making the United States incomparably stronger than its enemies. If that policy had been good before, however, the decline of the United States in relative strength was apparently even better. And was it really true that the danger of war arises if one nation becomes infinitely more powerful than others? One had imagined that the danger increased as the gap closed. In any event, this Presidential analysis of the world in 1972 seemed to make the “balance of power” official doctrine, with each of the five so strong in relation to the others that they could play independent roles and constitute “an even balance.”
Other officials, especially in the State Department, made exegetical discourses on the new dispensation. One of their favorite commentaries concerned the displacement of bipolarity by multipolarity. The President had at least qualified the five superpowers as “economic,” though he had implied that all other kinds of power flowed from economic power. This distinction somehow dropped out of the exegeses. One official simply spoke of “multiple power centers” emerging in the final quarter of this century.25 Another expounded on the “new power centers”—Western Europe, Japan, and China—which “by definition relates to the decline in the bipolar structure of the world.”26 A third went into more detail: “First, we are entering a world in which the bipolar pattern dominant in the last quarter century has given way to multiple centers of power and influence. Power, defined in political and economic as well as in military terms, no longer is the near-exclusive province of ourselves and the Soviets. The dominant relationships in this decade will not be between two centers of influence, but among five. Western Europe, Japan, and China have moved to the front of the world stage.”27
How much Dr. Kissinger had to do with all this higher theorizing was not at first clear. Some seemed to think that if President Nixon had an idea, let alone an entire theory, it must have come from Dr. Kissinger. The mere mention of balance of power set off an epidemic of amateurish historical analogies between Kissinger and Metternich, about whose diplomacy he had written his doctoral thesis fifteen years earlier. A minor annoyance connected with the Kissinger phenomenon has been the excruciatingly bad books that it has inspired. One of them claimed to know that he was already plotting a “conceptual blueprint” for U.S. policy in his doctoral thesis.28 Dr. Kissinger himself was evidently not amused. He saw fit to tell Oriana Fallaci that “There can be nothing in common between me and Metternich” and that it was “childish” to associate the two of them.29 He did not say who benefited the more from this offer of disengagement.
The licensed experts pounced joyfully on President Nixon's pastiche of ideas about balance of power and the five superpowers. As one heartlessly put it: “Of the so-called major powers, one (Europe) does not yet exist, one (Japan) has not found a role, and one (China) happens to be neither a superpower yet nor a very likely practitioner of the balance of power should it become one.”30 For a while, the foreign-affairs journals were filled with the higher criticism on such urgent matters as the balance of power in the 19th as compared with the 20th century.
By the end of 1972, the theory had been put out of its misery. President Nixon himself distinguished between the Soviet Union, which “is a superpower,” and China, which “has the potential in the future.”31 A potential superpower was obviously not yet ready for the role which he had assigned to it. Early in 1973, Dr. Kissinger pleaded innocent. He made known that he had been on his way to China when Mr. Nixon had given birth in Kansas City in 1971 to the five-superpowers-balance-of-power theory and that he had not read the speech or known what was to be in it before it was delivered. He assured his listeners that “what this administration has attempted to do is not so much to play a complicated 19th-century game of balance of power” as to do something else evidently less complicated, which was “to try to eliminate those hostilities that were vestiges of a particular perception at the end of the war.”32 Once the word went out that the line had changed, Deputy Secretary of State Kenneth Rush carefully explained why the original theory could not have been right: “For one thing, the principal participants have different capabilities. Bipolarity still persists in the strategic relationship between the United States and the Soviet Union. Europe is still in the process of developing the voice and organization to fully reflect its international economic position. Japan is still exploring the meaning of its phenomenal economic growth in terms of its international role. China's international position primarily reflects her potential, her great size, and her potential military strength.”33
Such was the short, unhappy life of the most ambitious concept put forward by the Nixon administration. It was far less important as an intellectual exercise than as a sign of the times. The year of the Kansas City speech, 1971, was also the low point in the fortunes of the dollar that brought about the collapse of the international monetary system. The overall U.S. balance-of-payments deficit had reached the astronomical figure of $9.8 billion, the seemingly inviolate U.S. trade surplus had disappeared, and the convertibility of the dollar into gold, a basic principle of the post-World War II monetary order, was abandoned. The same year seemed to be the high point of European economic and financial strength. It was the turn of Europeans to tell the United States to “put its economic house in order.” When former Secretary of the Treasury John B. Connally tried to bully the Europeans, he told them it was now their duty to be philanthropic and their task to correct the imbalances that had developed. If they could have done what he wanted them to do, the implication was inescapable that Europe was more than able to hold its own against the United States. The trade and monetary shocks of 1971 undoubtedly contributed to the elevation of Europe and Japan to the august status of “economic superpowers.”
It is also fair to add that the idea of putting Europe on more or less the same plane as the United States was not altogether original. The thought was already in the air, particularly in neo-isolationist circles. One such confident pronouncement in 1970 went: “The Europeans are our best friends in the world; they are also our equals.”34 One year later, the “best friends” quarreled fiercely because the Europeans seemed to be more than equal, and three years later, they quarreled even more ferociously because the Europeans realized that they were much less than equal.
Precisely in this period détente became an active issue in policy.
If anyone should have been prepared for the pitfalls of détente, it was Dr. Kissinger. For about a dozen years before he went to Washington to serve President Nixon, he had been a stern and unsparing critic of anything that smacked to him of “illusions” about détente.35
In the first book, published in 1957, with which he attracted widespread attention, Professor Kissinger expressed a certain distaste for, or anxiety about, “peaceful coexistence,” the term then in vogue. He twice found it necessary to instruct the reader that “peaceful coexistence” meant for Soviet leaders nothing more than “the most effective offensive tactic” and “the best means to subvert the existing structure by means other than all-out war.” It was good Leninist doctrine, he patiently explained, that the Soviets, so long as the relationship of forces was not in their favor, should keep “provocation below the level which might produce a final showdown.”36
Four years later, in 1961, Professor Kissinger was worried most about the Western tendency to see a Soviet turn from belligerency to détente as evidence of far more than a change of tactics. “But,” he cautioned, “one of the principal Communist justifications for a détente can hardly prove very reassuring to the free world; peace is advocated not for its own sake but because the West is said to have grown so weak that it will go to perdition without a last convulsive upheaval.” As for the Western attitude, he observed disapprovingly that “all the instincts of a status quo power tempt it to gear its policy to the expectation of a fundamental change of heart of its opponent” and to the imminence of “a basic change in Communist society and aims.” Americans, he thought, were especially susceptible to the belief that all problems were soluble and that “there must be some way to achieve peace if only the correct method is utilized.” In this work he was especially censorious of President Eisenhower's “ambulatory” personal diplomacy which inspired him to lay down the general rule that “whenever the Communist leaders have pressed for a relaxation of tensions they have tied the success of it to personalities.”37
After four more years, in 1965, Professor Kissinger had some more pungent things to say about the American tendency to think of détente in terms of personal relations. It was “futile,” he repeatedly stressed, to engage in “personal diplomacy” with the Soviets “even at the highest level,” not least because their leaders were committed to a belief in the predominance of “objective” factors. Whenever Soviet leaders “have had to make a choice between Western goodwill and a territorial or political gain,” he maintained, they “have unhesitatingly chosen the latter.” If the Soviets seem to make “concessions,” they make them “to reality, not to individuals.” He noted that there had been five Soviet periods of “relaxation” since 1917, all of which had come to an end for the same reason—“when an opportunity for expanding Communism presented itself.”38
As late as 1968, the year before he went to Washington, Professor Kissinger was still of much the same mind about past détentes. “During periods of détente,” he observed sharply, “each [Western] ally makes its own approach to Eastern Europe or the USSR without attempting to further a coherent Western enterprise.” He summed up the entire process in a way that is still instructive: “Each [détente] was hailed in the West as ushering in a new era of reconciliation and as signifying the long-awaited final change in Soviet purposes. Each ended abruptly with a new period of intransigence, which was generally ascribed to a victory of Soviet hardliners rather than to the dynamics of the system. There were undoubtedly many reasons for this. But the tendency of many in the West to be content with changes of Soviet tone and to confuse atmosphere with substance surely did not help matters.”39
Judging from his books and articles for over a decade, Professor Kissinger should have been repelled by a Soviet-American détente that was accompanied by unprecedented tension between Western Europe and the United States. His entire oeuvre was distinguished by an acute distrust of détente and a moving belief in the need for Western Europe and the United States to be linked by the closest possible ties, going far beyond even the existing Atlantic alliance. In The Necessity for Choice of 1961, he appealed for “structural changes” within the Western alliance to make it a federalized “North Atlantic community” or a “confederation of states.” Otherwise, as he ominously foresaw: “Without a truly common position Western rivalries will either paralyze negotiations or enable the Soviet Union to use them to demoralize the West.”40 In The Troubled Partnership of 1965, he went further prophetically and summoned the West to form an “Atlantic Commonwealth” of all the peoples bordering the North Atlantic. The ability of the West to move from the nation-state to a larger community would, he avowed, “largely determine whether the West can remain relevant to the rest of the world.”41 If one had read what Professor Kissinger had written before going to Washington, one could not have imagined he would put the years of détente first and the year of Europe last.
In fact, he was not the only one in the Nixon administration who had had premonitions of what was going to happen in the name of détente. In 1969, the then Under Secretary of State Elliot L. Richardson had given this assurance:
We shall not bargain away our security for vague improvements in the “international atmosphere.” Progress in East-West relations can only come out of hard bargaining on real issues. A détente that exists only in “atmosphere” without being related to substantive improvements in the relationship between the powers is worse than no improvement at all. It tempts us to lower our readiness, while providing no really concrete basis for a reduction in tensions.42
In 1970, Robert Ellsworth, the U.S. representative to the NATO Council, came even closer to one of the real issues that later arose to bedevil the Soviet-American détente. He recognized that the Soviets' and the Warsaw Pact's “hunger for access to the science and technology of the West” was a key element in their diplomacy and in their push for “expansion of trade, economic, scientific and technical relations” between East and West. Others have since come to the same conclusion.43 Ellsworth went on to explain that the principal difficulty confronting the Soviets was their inability to pay. It was still possible for an American official to be brutally candid about what the proposed deal entailed:
They [the Soviets] would be able to pay if they could balance their imports by increasing exports of raw materials, and oil and gas, but they are unable to achieve this balance. Thus, they must ask for credits—credits which would have to be guaranteed, or possibly even subsidized, by governments. In essence such an agreement is not trade, but aid. Decisions about extending such aid, as well as decisions about transferring advanced technology from West to East, are not simply economic or technical decisions. They involve the highest political considerations [emphasis added].
Finally, Ellsworth told the tragic story of the Duke of Urbino who had committed a “classic blunder” four hundred years ago:
He possessed by far the most advanced artillery of the 16th century, which he foolishly loaned to Cesare Borgia for the alleged purpose of a Borgia attack upon Naples. Instead, Borgia promptly turned the artillery upon Urbino as he had planned all along. That was the end of Urbino.44
Who would have guessed that so many American capitalists would become 20th-century Dukes of Urbino?
These premonitions, forebodings, and reflections on détente were not produced in a vacuum. There had been, as Professor Kissinger noted, several periods of détente between the Soviet Union and the West since the Bolshevik revolution, as well as three main Soviet détentes with individual Western countries since 1965. In fact, the Soviet Union has had a waiting-list for détentes, with the United States third in line.
The first to join the elect was France. Its détente had its roots in Gaullist doctrine and went as far back as the end of World War II.
De Gaulle made his first bid to the Soviet Union as early as December 1944, before the end of the war, when France was barely free of German troops. During his first visit to Moscow, he tried to convert Stalin to a three-stage system of alliances: the first Franco-Soviet, the second Anglo-Soviet and Anglo-French, and the third a catch-all in which for the first time the United States was generously included within the forthcoming United Nations, for which de Gaulle otherwise had little use. Since de Gaulle held on tenaciously to his long-term plans, even if he was flexible in his tactics, this scheme cannot be dismissed lightly. It was, in effect, his ultimate view of how to restore France to the position she had held before both world wars, when alliances with Russia had been the cornerstone of her foreign policy. Indeed, de Gaulle explicitly invoked the Franco-Russian alliances of 1892 and 1935 and told Stalin that he wanted another for the same reasons that had inspired them. De Gaulle's final aim, then, required more than a mere détente; it demanded an actual Franco-Soviet alliance based on a mutual recognition of each other's interests. Even in 1944, de Gaulle tried to tell Stalin how far to go—or rather not to go—in Poland.45
Toward the end of his life, de Gaulle looked back at this period and revealed more fully and clearly what he had had in mind. He had intended, he explained, to cooperate and contract alliances with both East and West, and to form with neighboring states a bloc that would become one of the three world powers, capable of acting as “the arbiter between the Soviet and Anglo-Saxon camps.”46 As time passed, de Gaulle and his successors found it more tactful and expedient to put their program negatively—the breaking down of the two “hegemonies” of the Soviet Union and the United States. But there was a positive side to this design—the building up of a third hegemony, that of France in Europe, and through Europe, in the world. De Gaulle's ends, if not his means, were remarkably consistent over the years. In one of his last writings, he still called on France to play “an international role of the first rank” and to take on “world responsibility,” not restricting herself merely to Europe. In Europe as well as in the world, he insisted, it was incumbent on France to be free to act by herself.47
De Gaulle's first overture to Stalin was, of course, premature. Stalin had no intention of letting de Gaulle get in his way in Eastern Europe, particularly in Poland. At the Yalta conference less than two months later, Stalin did not even wish to give France an occupation zone in Germany, and only Churchill's fight for it made him relent. When de Gaulle left office the first time in 1946, his Russian policy was in shambles.
De Gaulle's second effort was more successful. The latest Franco-Soviet détente, according to Maurice Couve de Murville, de Gaulle's Foreign Minister, began to take concrete shape in the spring of 1965 with a visit to Paris by the Soviet Foreign Minister Andrei Gromyko. When Couve de Murville returned the visit in the fall of 1965, Soviet Premier Alexei Kosygin told him how worried the Soviets were about the United States, already deeply engaged in Vietnam. The United States and Russia were, in Kosygin's view, actually at war, if only because Russia was supplying arms to the other side.48 Whatever the merits of the Vietnam war may be, the fact remains that Gaullist France chose to inaugurate its détente with a Soviet Russia which considered itself to be, in effect, at war with the United States.
In February 1966, three months later, de Gaulle publicly announced France's intention to leave NATO, a step which had been on the way the year before and was consummated a month later. In June of that year, de Gaulle “consecrated” the Franco-Soviet détente with a triumphal visit to Soviet Russia, during which Party Secretary-General Leonid I. Brezhnev for the first time broached the Soviet proposal of a European security conference excluding the United States.49
These moves were highly orchestrated. The French military abandonment of NATO was implicitly an integral part—and price—of the Franco-Soviet détente. In fact, France was not strong enough to offer very much more to the Soviet Union in return for Soviet favor. Ostensibly, the Gaullist position was based on opposition to all blocs, East and West. But France could do very little about breaking up the Eastern bloc; she could, however, do much about breaking up the Western bloc. The Franco-Soviet détente was made between very unequal powers; as a result, its terms were most unequal. If it had been arranged on an equal basis, the Soviets should have withdrawn from the Warsaw Pact as France withdrew from NATO. But of course such a Soviet contribution to détente was unthinkable, and no one in his right mind thought of demanding it. At the very moment France was mortally weakening NATO, the Soviets were massively strengthening the Warsaw Pact. De Gaulle could not negotiate from strength; his negotiating card, in fact if not in name, was the weakening of the system to which France had belonged—a service for which the Soviets were willing to pay a modest price.
For de Gaulle, détente was only the hors d'oeuvre; the main course, still to come, was a Franco-Soviet entente reminiscent of the pre-war alliances. De Gaulle's formula was “détente, entente, and cooperation.” In the end, he disdained the Atlantic alliance as no more than “the military and political subordination of Western Europe to the United States.”50 Russia was at least partly in his Europe “from the Atlantic to the Urals”; the United States was definitely out of it.
This was the Gaullist vision. De Gaulle knew, of course, that the road was bound to be hard and long, necessitating many detours, maneuvers, and ruses. At various times and in different circumstances, Gaullism seemed to be pro- and anti-German, pro- and anti-American, pro- and anti-Russian, pro- and anti-European. It was ready to use almost any means for its own ends, thereby leading others to think that they could use it for their ends. Yet, in the last analysis, there is a Gaullist hard core that comes as a shock to France's European partners and American well-wishers, no matter how much they have been forewarned. The French behavior after October 1973 in the face of the energy crisis would have given them less of a jolt if the Gaullist heritage had been kept in mind and taken more seriously. It is part of that heritage that France should be the rogue elephant of the West, taking advantage of every opportunity to advance its own interests, feeling more immediately threatened by the American embrace than by the Russian hug, savoring situations which force the other European nations to choose between France and the United States.
In a world in which power so often decides the issue, there was no reason why de Gaulle should not have wanted as much power as possible for France. What is more questionable is the admiration that so many non-French Westerners have had for Gaullism without unflinchingly facing up to what they were admiring. President Nixon was a notorious admirer, and French Gaullists have seen his unprecedented aggrandizement of the American Presidency as the sincerest form of flattery. Dr. Kissinger's case a decade ago was more typical of Western intellectuals. His esteem for de Gaulle was not uncritical, but he invariably found more to blame in American policies than in the French. However naughty the French might be, he tended to scold the Americans for provoking or encouraging them. He expected history to demonstrate “that de Gaulle's conceptions—as distinct from his style—were greater than those of most of his critics.” He estimated de Gaulle's conceptions to be “greater than his strength,” while America's power was “greater than its conceptions.” For most of his career, de Gaulle had been an “illusionist,” Dr. Kissinger conceded.51 But, he might have added, Frenchmen were not the only ones bemused by Gaullist illusions.
Thus the Gaullist détente with the Soviet Union in 1965-66 had its own distinctive raison d'être. It cannot be understood merely as an effort on both sides “to relax tensions.” Each side was trying to use the other for particular, far-reaching ends. Whatever tensions de Gaulle may have relaxed with the Soviet Union, he enormously increased other tensions with the United States. He was far less interested in relaxing tensions than in utilizing them for his own larger purposes. For de Gaulle as for the Soviet leaders détente was not an end in itself; it was a means to get whatever they wanted to get with or without détente.
The Federal Republic of Germany was the second country admitted to membership in the Soviet Union's exclusive “Détente Club.” Since the position and problems of West Germany were vastly different from those of France, the meaning and consequences of its détente were equally different.
The German détente also did not come cheaply. For almost two decades, German policy had rested on three interlocking premises. In essence, they were: German reunification, European Union, and the Atlantic alliance. The first was fundamental; everything else flowed from it. In the Adenauer era, which lasted until 1963, it was assumed that a united Germany had to be incorporated or submerged in a larger European Union in order to contain Germany's potentially unruly nationalism and thus make German reunification acceptable to Germany's neighbors. But since a European Union would not be strong enough in the foreseeable future to force or to induce the Soviet Union to disgorge East Germany, it was considered necessary to link Western Europe most closely to the United States through the Atlantic alliance to get the desired result. If West Germany was not yet able to do much about achieving reunification, the minimum demanded by this policy was to do nothing against it. By implication, Germany could not recognize the territorial status quo, such as the Oder-Neisse line, or even formally renounce the pre-war Munich agreement, until a final disposition of Germany's status had been made. Until then, everything had to be conditional and provisional, even the status of West Germany itself.
Ironically, in the early years of the Adenauer regime, the German Social-Democrats took an even harder line than the Christian-Democrats on the quest for German reunification; the Social-Democrats bitterly criticized their rivals for not being inflexible enough and exigent enough on the issue. How legitimate and necessary the Adenauerian Weltanschauung once seemed to be can also be seen in the past writings of Dr. Kissinger. “Any West German government must advocate reunification, however moderate it may be in the means it chooses to pursue this objective and however patient it may be in bringing it about,” he wrote in the heyday of the Adenauer regime. He admonished: “The Federal Republic would suffer a perhaps irreparable blow if its allies accepted its present frontiers as final—even to the extent of not pressing for unification.” He cautioned: “If the Federal Republic is persuaded that it cannot achieve reunification through ties with the West, it is likely to seek its aims through separate dealings with the East.”52
These words were written fifteen years ago. In that period, Dr. Kissinger was a full-fledged Adenauerian and a part-time Gaullist. His views are worth recalling not because of what they may tell about him today but because they so faithfully reflected the Adenauerian credo that they help to recall how seriously it was taken and how much was involved in giving it up. As long as Adenauer's basic policies prevailed, a German-Soviet détente was out of the question. Or, to put it another way, Adenauerism made German reunification a pre-condition of détente rather than détente a prerequisite of reunification. The Soviets would, of course, have none of this. They were bent on keeping Germany divided; on gaining recognition for the German Democratic Republic as an independent East German Communist state; on separating West Berlin from West Germany; on obtaining formal recognition of the status quo, especially the Oder-Neisse line; on frustrating an effective European Union; and, almost more than anything else, on divorcing the United States from Europe and breaking up the Atlantic alliance. Adenauer was not only allergic to a German-Soviet détente; any suggestion of a Soviet-American détente made him excessively nervous. Toward the end of his reign, he became so despondent over what the United States was doing for him that he turned for support to Charles de Gaulle—who was heading toward a Franco-Soviet détente. The Adenauer-de Gaulle honeymoon was short-lived because the French were far from dissatisfied with German disunity, were not at all satisfied with Adenauer's “supra-national” view of European unity, and were utterly contemptuous of his attachment to the Atlantic alliance, which they interpreted as little more than slavish dependence on the United States.
All this is hardly ancient history. It occurred only a decade ago, and the issues that caused so much trouble then are no less troublesome now. Couve de Murville recalls with relish and scorn in his memoirs how the West German government was “flabbergasted and terrified” at the prospect of following French “dynamism, audacity, and independence,” whenever the United States disapproved of French actions. Not, he adds for good measure, that the French ever expected anything else of the Germans.53
The Franco-Soviet détente of 1965, accompanied by France's withdrawal from NATO, stunned the Germans. From this point on, West German policy was irretrievably shaken from its moorings. The Franco-German rapprochement, the Atlantic alliance, the United States with its increasing entanglement in Vietnam—all seemed to have betrayed the hopes that had been placed in them. During the so-called Grand Coalition of 1966-69, headed by the Christian-Democratic Chancellor Kurt Kiesinger and Social-Democratic Foreign Minister Willy Brandt, more and more was heard of Ostpolitik and détente. But, as always, they came with price tags. This was how the tag read to an experienced observer in 1967: “The price held out to the Germans of détente with the Soviet Union is the continued division of Germany and detachment from the United States.”54 It was still possible then to contemplate the price of détente—at least, someone else's détente—with blunt candor and realistic concreteness. Just how much the Soviets would get was not clear until the Grand Coalition fell apart and Brandt took over as Chancellor, with the Free Democratic party as junior partner, in the fall of 1969.
Brandt's Ostpolitik went into high gear almost immediately; it resulted in a Soviet-German treaty in August and a Polish-German treaty in November 1970. In effect, both documents formally recognized what all West German governments had previously shied away from—the status quo, including the Oder-Neisse line and the border between East and West Germany. To take this step, Brandt's regime had to give West German policy a degree of independence or autonomy that it had never previously had. The German-Soviet détente was before anything else an act of German statesmanship based on a particular interpretation of German “national interest,” whatever its effects might be on the so-called European Community and the Atlantic alliance. De Gaulle did not ask the Germans whether to make his détente with the Soviets in 1965, and Brandt did not ask the French—or the Americans—whether to make his détente in 1970.
This form of Ostpolitik could not fail to impinge on Germany's Westpolitik. Adenauer's Westpolitik had been his Ostpolitik—that is, he had staked all on the power and determination of the West to force concessions from the East. Brandt's Ostpolitik was not in the same sense his Westpolitik, but the former now set limits on the latter. The delicate balancing act between East and West was plainly implied by Brandt himself in a notable address from Moscow to his compatriots in August 1970: “Our national interest does not permit us to stand between the East and West. Our country needs cooperation and harmonization with the West and understanding with the East.” Virtually paraphrasing de Gaulle, Brandt went on: “Russia is inextricably woven into the history of Europe, not only as an adversary and danger but also as a partner—historical, political, cultural, and economic.” He defended the Soviet-German treaty on the ground that “nothing is lost that had not been gambled away long ago. We have the courage to open a new page in history.”55
This rationale was not altogether disingenuous. It was true that what had been lost had been lost long ago; it also was true that Germany was now giving up all claim to regaining what had been lost. If nothing had really changed, it was hard to see why a new page in history had been opened. Something had surely been changed by the treaty or it would have been inconsequential; the only question was whether it had changed more on the West German side than on the Soviet-East German side.
Compared to the West German détente, the French détente had been comparatively simple. The French could take a most cavalier attitude toward European Union and the Atlantic alliance; the West Germans could not. The latter had to juggle several balls in the air: détente with the East, European Union, the Atlantic alliance, and, above all, relations with East Germany. After the Soviet-German treaty was signed, Chancellor Brandt made known that its effectiveness depended upon a satisfactory settlement of the ever-disturbing fate of Berlin, which, in turn, hinged on an agreement between East and West Germany. This step required an even more far-reaching historic decision on the part of West Germany—whether to give up German unification for the indefinite future, something that Professor Kissinger and others had not long ago regarded as virtually unthinkable.56 The formula which finally enabled West Germany to give up the substance, while saving the shadow, of reunification was “two German states in one nation.” The “two German states” satisfied the inexorable demand of East Germany; the “one nation” held out the consolation for West Germany that both Germanies had something deeper than statehood in common. In any case, the precondition for an East-West German settlement was satisfied by the Allied agreement on Berlin in September 1971, and the full East-West German settlement took the form of a Basic Treaty signed on December 22, 1972. In essence, East Germany got full and unconditional recognition as a sovereign state, and West Germany got freer travel and communication arrangements between the two Germanies.
We do not have to decide here whether Chancellor Brandt's Ostpolitik has been good or bad, right or wrong. It would be difficult in any case to make any definitive judgment of Brandt's policy; the deal between East and West was too unequal. What West Germany contributed to the détente were fundamental concessions on great historic issues. It may not be possible to tell for another generation or more what the full price of the formal recognition of German partition is going to be. In holding that the Federal Republic could not safely abandon reunification or accept the present frontiers as final, Dr. Kissinger and others may still prove to be farsighted. In return, all that East Germany promised to give was relatively limited and ephemeral. The German Communists have sought to protect themselves from closer relations with the West by adopting a policy of Abgrenzung (separation). Thus far the fruits of détente in intra-German relations have proven to be most disappointing to West Germany. Some gains have resulted from the Basic Treaty of December 1972, but they have been far more restricted than the West Germans had hoped. Observers have also noted that détente has encouraged a growing mood of “inwardness” in West Germany, accompanied by a growing estrangement from foreign affairs.57 If the German détente needed a symbol, it was provided by the resignation of Chancellor Brandt because his East German confreres had planted a spy in his midst as a token of mutual trust.
As in the German case, détentes may help to stabilize one area and destabilize another. While public-opinion polls showed that 80 to 90 per cent of West Germans favored reconciliation with the Soviet Union, they also revealed a disquieting trend away from the Western alliance system. From 1969 to 1971, the percentage in favor of German neutrality rose from 39 to 50. In the same period, the percentage in favor of a firm military alliance with the United States dropped from 48 to 39.58 In the fall of 1970, a poll presented twenty different political objectives; consolidation of the Western alliance ranked fifteenth, well toward the bottom. In 1972, a majority for the first time favored a neutrally oriented German foreign policy.59
One détente may also work at cross purposes with another. The Franco-Soviet détente of 1965 acutely disturbed the Germans, and the German-Soviet détente of 1969-70 intensely disconcerted the French. The German policy of de Gaulle's successor, Georges Pompidou, was essentially one of taking out an insurance policy against Germany. In 1969, Pompidou remarked: “Germany and its economic weight disturbs me.” He used this argument to induce the Italians to get closer to the French.60 In 1971, when Pompidou permitted Great Britain to enter the Common Market, his purpose was not merely to enlarge the European economic community; it was primarily to use Britain against Germany. As soon as France withdrew its veto, former Prime Minister Edward Heath began to talk with an Anglo-Gaullist accent, as Dr. Kissinger had long ago predicted.61 Most recently, the French anxiety about Germany was discussed by the arch-Gaullist, Michel Debré, former Premier, Foreign Minister, and Defense Minister, with Marc Ullmann, editor-in-chief of the Paris weekly, L'Express. Debré explained, according to Ullmann, that France's nuclear deterrent was “intended to enable France to adopt a position of Swedish-style armed neutrality in the event of West Germany being tempted to participate in the creation of a Finland-style Mittel-Europa.” Indeed, the latest French cauchemar has been the risk that the American-Soviet détente would open the way to the neutralization of Central Europe, including Germany. Pompidou, according to M. Ullmann, was “obsessed, even more than his predecessor was, with the fear that Germany will one day allow herself to be carried away by the wind from the East” and was “convinced that Germany is bound to seek her reunification in one way or another,” despite the terms of the German-Soviet détente. 
It was all right for French nationalism to set the pace of détente with the Soviet Union, but now “French policy appears to be once again dominated by the fear that Germany may start to play a purely nationalist game.” For these reasons, M. Ullmann reported, every conversation with former President Pompidou about foreign policy, no matter how it began, always ended up by “invoking the ‘German problem.’”62
It might be thought that two détentes would be better than one, but that is not necessarily the case. The French détente was a Gaullist expression of French national interest, and the German détente was a Brandtian expression of German national interest. The two détentes did not mesh because the national interests did not mesh. The Germans interpreted the Franco-Soviet détente as essentially inimical to German interests, and the French interpreted the German-Soviet détente as essentially inimical to French interests. It should come as no surprise, then, that the Soviet-American détente was not universally greeted with joy and applause in Europe.
The roots of the current Soviet-American détente go back at least a decade. It originated with John F. Kennedy, not Richard M. Nixon.
In a notable speech at the American University in June 1963, President Kennedy put out a feeler for “relaxation of tensions” based on a common “abhorrence of war.” We are told by Arthur M. Schlesinger, Jr., that one of Kennedy's motives was to get a Soviet-American front against Communist China which the President considered to be “the long-time danger to the peace.” Both the French and of course the Chinese were then interested in preventing a Soviet-American détente.63
The mini-détente of 1963 was made up of the same kind of deals that turned up in the more ambitious détente of 1972. The main consequences of Kennedy's initiative were the Test Ban Treaty of July 1963 on the military side and the first sale of U.S. grain to Soviet Russia on the commercial side. France and China refused to sign the limited Test Ban Treaty. The grain deal of $250 million of surplus wheat was a bagatelle compared with the gargantuan 1972 deal, but it set a precedent and helped the Soviets to get over a serious agricultural shortage. In Kennedy's entourage, détente was very much in the air, if only in an early phase. “The breathing spell had become a pause, the pause was becoming a détente and no one could foresee what further changes lay ahead,” Theodore Sorensen writes of the period.64 Arthur Schlesinger is somewhat more restrained, cautioning that the accomplishment of these and other measures would have stopped short of “a true détente” which would have required a closing of the “philosophical gap” between the two societies.65 Whatever it was and however far it went, the Kennedy-Khrushchev détente was cut short by the assassination of the President in November 1963.
Among those most disturbed by this first experiment in détente was Richard Nixon.
The wheat deal particularly perturbed Mr. Nixon. The United States, he said, would be “harming the cause of freedom if it sold wheat to the Soviet Union.” He wanted to know: “Why should we pull them out of their trouble and make Communism look better?” He suggested selling the wheat to the Soviet satellites “as a business deal, provided that the government involved gives some degree of freedom, more degree of freedom [sic] to the people in these countries”—exactly what he thought should not be done nine years later. Mr. Nixon did not like anything about Kennedy's tentative détente because as he put it: “The bear is always most dangerous when he stands with his arms open in friendship.”66
Yet the rationale for Kennedy's détente was not very different from that adopted by President Nixon for his détente. By 1963, U.S. authorities believed that the United States and the Soviet Union were about evenly matched in ballistic missiles; the Soviet Union had apparently gained an advantage in very-large-yield nuclear weapons whereas the United States held a lead in medium- and low-yield weapons.67 Kennedy in 1963 as much as Nixon in 1972 professed to be mainly concerned with stopping or slowing down the nuclear arms race. The times were different but not so different that détente might not have been defended in 1963 on the same grounds that it was defended a decade later. Though Dr. Kissinger does not seem to have commented directly on the Kennedy test-ban treaty or wheat deal, he could not have been overly impressed by them if we may judge from his overall distrust of détente at this time. He certainly did not have a good word to say for them.
In October 1966, even President Johnson made a stab at what he called “reconciliation with the East,” but he never got far enough to talk of a more general détente.68 By the end of 1966, however, the Johnson administration was able to push ahead with the nuclear Nonproliferation Treaty, which was ratified in 1970, and to reach initial agreement on holding strategic-arms-limitation talks (SALT), which were also consummated by the Nixon administration. President Johnson would have liked nothing better than to have taken credit for a Soviet-American détente but the times were not propitious. The Vietnam war on the American side and the invasion of Czechoslovakia in August 1968 on the Soviet side were too much to overcome.
The Nixon détente took about two years to set in motion. The final phase seems to have started with the Allied agreement on Berlin in September 1971, which apparently convinced the Nixon administration that a summit meeting in Moscow was feasible.69 On October 12, President Nixon was confident enough of the outcome to make a public announcement of the meeting to be held the following May. In November 1971, then Secretary of Commerce Maurice Stans handed the Soviet Minister of Foreign Trade, Nikolai S. Patolichev, a letter of understanding listing conditions for increased trade relations, after which eleven months of negotiations followed.70 These three actions indicate that the turning point came in the last four months of 1971.
The pièce de résistance of the summit meeting in Moscow in May 1972 was the antiballistic-missile treaty. The technical details need not detain us; what is important for us here is the principle of quantitative parity embodied in the agreement. The quantitative aspect made the agreement possible because both sides had reached a point of diminishing returns which made mere increase in numbers exorbitantly wasteful. Since the agreement was reached, it has become unmistakable that the nuclear arms race has turned qualitative with emphasis on accuracy and “pay-load.” As a result, both sides are spending more than ever and accusing each other of evading the spirit of the May 1972 treaty by developing new and more sophisticated weaponry. The parity aspect of SALT I is, therefore, inherently temporary and unstable, even if it should be possible to determine, which is doubtful, what parity means in this context. Since the SALT I agreement, whatever its virtues and drawbacks may be, has a five-year time limit, it must be reinforced by a more far-reaching, permanent limitation of strategic arms which is the task of SALT II, so far deadlocked. If SALT II fails, Dr. Kissinger has admitted that “a spiraling of the arms race is inevitable” and that the Soviet Union “could wind up with both more warheads and more destructive warheads than we will possess” by the end of the present decade.71 Without a successful SALT II, the United States is apt to rue SALT I. The final returns, then, are far from in.
John Newhouse, whose study of SALT I, Cold Dawn, has been called “outstanding” and “distinguished” by none other than Dr. Kissinger,72 concluded that “SALT is an obscure, certainly an elusive enterprise,” at the heart of which lies “politics.”73 If that was implicitly true of the SALT I agreement, it was explicitly true of another major document that came out of the May 1972 summit in Moscow—the “Basic Principles of Relations Between the United States of America and the Union of Soviet Socialist Republics.” In this document there was nothing but politics—the politics of détente.
According to Dr. Kissinger, the idea of setting forth these “basic principles” was initially a Soviet proposal, which the United States “shelved” for some time in order to assure itself that the statement could be “really meaningful.” The idea that there should be an expression of general principles was discussed for “some months.” The idea that the principles should come out at the end of the summit meeting was “a joint one.”74 We may assume, therefore, that this document was intended to be “really meaningful” and that failure to live up to it would be regarded as equally “meaningful.”
It is fortunate that this charter of détente was issued. For if we want to know what détente is or implies, we have it here. It is no longer a vague, amorphous “relaxation of tension.” It is a concrete, specific code of behavior. Since the Soviet Union proposed the idea in the first place, and several months were spent working it out to the satisfaction of both sides, the Soviet Union as well as the United States can hardly object if they are judged on the basis of the code.
Of the twelve “basic principles,” a few were soon put to the test. Both sides committed themselves, among other things, to the following:
- Prevent the development of situations capable of causing a dangerous exacerbation of their relations.
- Do their utmost to avoid military confrontations.
- Recognize that efforts to obtain unilateral advantage at the expense of the other, directly or indirectly, are inconsistent with these objectives.
- Have a special responsibility . . . to do everything in their power so that conflicts or situations will not arise which would serve to increase international tensions.
- Make no claim for themselves and would not recognize the claims of anyone else to any special rights or advantages in world affairs. They recognize the sovereign equality of all states.75
There was much more, of course, but these five points will do. The last, which appeared as the eleventh principle, was interpreted by Dr. Kissinger as specifically renouncing “any claim to special spheres of influence.”76 In addition, according to a high U.S. official, these self-denying ordinances were specifically applicable to the Middle East and were understood to mean that it “should not be an area over which there should be confrontation between us.”77
From these basic principles we know what détente was supposed to mean operationally. Cynics might suppose that no one in his right mind could have taken these vows of international virtue seriously, least of all the statesmen and diplomats who put their names to them. Was it really edifying to sign a piece of paper which fostered the illusion that the Soviet Union was renouncing its sphere of influence in Eastern Europe? The fact remains that the May 1972 charter of détente was taken quite seriously, at least on the U.S. and Israeli side. It entered into their calculations on the chances of another Arab-Israeli conflict and significantly tipped the balance in favor of an optimistic assessment of the pre-war situation. If the May 1972 summit meeting was the euphoric expression of détente, the Arab attack on Israel in October 1973 was the acid test of its genuineness.
For the fact is that if the “basic principles” of détente had been respected, the Egyptian-Syrian attack should not have taken place. It was clearly dependent on massive, extravagant Soviet support; it could not have failed to cause a dangerous exacerbation of U.S.-USSR relations; it had to have as its objective a unilateral advantage for the Soviet Union at the expense of the United States; and it clearly increased international tensions. After all, the United States and the Soviet Union had been, as Secretary Kissinger himself put it, “essentially allied to one of the contenders in the area,”78 making a Soviet-American crisis an inevitable result of an Arab-Israeli conflict. If the Soviet Union had made any effort to live up to the May 1972 agreement, it should have done its utmost to avoid the Arab-Israeli military confrontation, rather than making it possible and even urging other Arab nations to get into it. Precisely because the Americans and the Israelis believed these to be the Middle Eastern implications of détente, they laid themselves open to being taken by surprise by the Arab attack. One of the determining elements in the intelligence estimate was the answer to the question: Would the Soviet Union consider it more important not to disturb its détente with the United States than to help the Arab states to attack Israel? If détente was believed to come first, the intelligence estimate was inevitably weighted in favor of discounting the possibility of an Arab attack or even of assuming that any confrontation would be initiated by Israel. If the Israelis, as has been revealed, had sufficient information but interpreted it wrongly, the misleading character of détente was partially responsible for the incorrect evaluation.
The result of this and other illusions was some of the most serious miscalculations in recent U.S. history. When the Soviet planes began to evacuate Soviet families from Egypt and Syria on October 4, two days before the attack, some U.S. intelligence officials interpreted the flights as indicative of an Arab-Soviet break, such as the one that had occurred in July 1972, just after the Moscow summit meeting. On the morning of October 6, the day the war broke out, the highest-level U.S. intelligence report, written the previous day, took the view that hostilities were not imminent and even suggested a crisis in Arab-Soviet relations. After news of the war was received in Washington, high-level U.S. policymakers and intelligence experts at first believed that the Israelis had attacked the Arabs. Not since the Bay of Pigs had there been such a consummate politico-intelligence fiasco.
Détentes may be maximal or minimal or anything in between. The “basic principles” of May 1972 represented détente at its maximum. They proved to be an unmitigated snare and delusion. The official American response was curious. President Nixon had put his name to the “basic principles” and had recommended them to Congress as “a solid framework for the future development of better American-Soviet relations.”79 Not since Franklin D. Roosevelt has an American President had more cause to regret a public expression of confidence in the good faith of the Soviet leadership. Yet so great was the political investment in détente that both President Nixon and Dr. Kissinger publicly reacted to the Soviet role in the conflict as if the “solid framework” had never existed. An official conspiracy of silence protected the once-acclaimed “basic principles” from public scrutiny.
The new party line fell back on the minimal version of détente. In effect, it reduced the concept of détente to little more than the avoidance of nuclear war between the superpowers. Whereas the original “basic principles” of détente were specific and concrete, Secretary Kissinger now described détente as “inherently ambiguous” and “somewhat ambivalent.”80 The best and almost the only thing he could say in favor of détente was that it limited “the risks of nuclear conflict.”81 Senator J. William Fulbright expounded: “Détente, in its essence, is an agreement not to let these differences [between the two superpowers] explode into nuclear war.”82 A distinguished academic exponent of the new line blamed liberals for “reacting to the collapse of their too-high expectations for friendly relations with a liberalized Soviet regime.” He did not say whether he classified President Nixon and Secretary Kissinger among those disenchanted liberals. Détente, we were told, is a process with one, two, three stages and beyond, lasting decades; we are now in stage one, or limited détente, the main business of which is “to reduce the danger of nuclear war.”83 Presumably, the “basic principles” of May 1972 had been a much later stage, and we have been going backward ever since—in order to go forward.
This view of détente distinguishes it from hot war, but it comes perilously close to obliterating the distinction between détente and cold war. The cold war was also considered preferable to hot war in that the conflicts and competition between the so-called superpowers were held within bounds short of actual nuclear warfare. The cold war was in any case never a very satisfactory term; John Lukacs was right to observe that “cold peace” would have been a much better metaphor.84 Both cold war and détente are accordion-like terms; they can be pushed and pulled in and out so that they may mean almost anything. During the cold war, the United States and the Soviet Union could collaborate in 1956 as if they were partners against Britain, France, and Israel; during the détente, the United States and the Soviet Union could threaten each other in 1973 with preliminary mobilizations or precautionary nuclear alerts. If détente means little more than, as Secretary Kissinger put it, that “confrontations are kept within bounds that do not threaten civilized life,”85 it is not doing much more than the cold war did. It is small comfort to learn that all other confrontations, short of threatening civilized life, are still compatible with détente. It is time to stop using cold war as a scare term and détente as a sedative term; in their relationship to nuclear war, they are not all that different.
A witty French journalist may have said the last word on these terms. Paraphrasing Clausewitz, he remarked that “détente is the cold war pursued by other means—and sometimes by the same.”86
It is easy, as most of us have found to our sorrow, to be bewitched by the day-to-day flow of events. As Secretary Kissinger said at his confirmation hearings, the great challenge before the United States is “to distinguish the fundamental from the ephemeral” and for someone like himself in public life, “to leave something behind that would be valid and permanent.”87 With this one can hardly disagree.
What has been fundamental and permanent in this period of “détente”? Everyone seems agreed that a great transition has been going on, but no one is quite sure what it is or where it is going. No doubt we are too close to events to see them in a long enough perspective. Yet, for better or worse, we must try as best we can to take stock and look ahead. We have hardly begun to face the implications and consequences of the Arab-Israeli war of October 1973. But it is not too soon to raise a few questions about some of the deeper premises which have gone into U.S. policy in the past few years. The effort is worth making if only to bring together some strands of the problems that I have been pursuing.
Nuclear and Other Wars: Long before détente became a household word, it was evident that nuclear weapons were a rare, special breed, in a different category from the kind of weapons on which the accretion of power had traditionally been based. French strategists have long believed that no country would ever use nuclear weapons because they were self-destructive. These strategists have had the advantage that no one wishes to prove them wrong. A policy which is primarily aimed at preventing nuclear war is still going to leave us with the risk of all the wars that mankind used to have before nuclear war was invented. Experience has shown that the United States and the Soviet Union are quite capable of going up to the brink of nuclear war without going over. They did more or less just that during the missile crisis of October 1962 and again during the Arab-Israeli conflict of October 1973. Secretary Kissinger has given détente the special function of preventing a general nuclear war from arising out of “the rivalries of client states.”88 This was, indeed, the rationale of the “basic principles” of May 1972, but their fate is not reassuring. The United States in Vietnam and the Soviet Union in the Middle East have shown that they can take vast risks on behalf of client states without setting off a nuclear war. The possibility of nuclear war is always there, of course, but something else may be more probable. A policy which faces the possible but not the probable leaves something to be desired.
Secretary Kissinger himself suggested where to look for the trouble. “But assuming the present balance holds,” he stated at his confirmation hearings, “and granting the strategic significance of what we had both agreed upon, the increasing difficulty of conceiving a rational objective for general nuclear war makes it, therefore, less risky to engage in local adventures.”89 A month later, the Soviet Union engaged in just such a “local adventure.” To be sure, it set off a Soviet-American contretemps, the nature of which is not yet entirely clear. But neither side was anxious to push it to a showdown, and the Soviet Union was not penalized for having taken the risk. What operated was not the détente; it was exactly the same thing that had operated during the cold war, namely, the inhibition of the two superpowers against hot nuclear war. The Arab-Israeli conflict of 1973 hardly disproved Secretary Kissinger's rule that the increasing unlikelihood of nuclear war makes local adventure less risky—and, one might add, more likely.
Now a new element has injected itself into this equation. The Americans toward the end of the Vietnam war and the Soviets especially during the latest Arab-Israeli conflict introduced what have been called “precision-guided non-nuclear munitions” or “smart weapons.” Among them are the new Soviet hand-held antitank guns and surface-to-air missiles, such as the SAM-6 and SAM-7, which enabled the Egyptian ground troops to surprise and at first take a heavy toll of Israeli tanks and planes. The new technology, it is claimed, permits a hitherto unattainable degree of control and precision which makes possible the use of “non-nuclear weapons in many circumstances where a desperate hope had formerly been pinned to using small nuclear weapons.” Bigger and bigger weapons having reached a destructive force beyond rational utilization, it would seem that the only way to gain an advantage was to reverse the trend and develop more discriminating and more accurate smaller weapons against the tank-and-fighter-bomber team that had dominated the battlefield since World War II. Professor Albert Wohlstetter, an acute and well-informed authority in this field, who has made the most penetrating analysis of these developments, has persuasively argued that they have significantly raised the threshold of nuclear war and have substantially increased the likelihood of conventional or non-nuclear warfare.90
If so, détente needs some reconsideration from this point of view. The post-October 1973 doctrine of détente has almost exclusively correlated it with the prevention of nuclear war. If conventional war has become less risky as nuclear war has become, in Dr. Kissinger's words, “less and less plausible and a less and less rational method,”91 and if conventional warfare is making a technological comeback so that it becomes a more plausible and more rational exercise of power, this shift in the credibility of nuclear versus non-nuclear war should be reflected in the function of détente. Primarily it must concern itself with precisely the kind of war which it failed to hold back—and which, as I have tried to show, it may even have encouraged—in October 1973. Too much or one-sided emphasis on preventing nuclear war may be the easy way out; the more difficult and more pressing problem may well be the prevention of conventional or non-nuclear wars.
Marginal Advantages: Dr. Kissinger has also put forward another concept in connection with the “nuclear era” that may be open to question. According to him, this era had changed the balance of power in such a way that neither the Soviet Union nor the United States had anything to fear from each other in the competition for “marginal advantages.” This theory was another reason why the Arab-Israeli conflict of October 1973 should not have taken place, theoretically.
In June 1972, soon after the summit meeting in Moscow which was the source of so many of these comforting concepts, Dr. Kissinger maintained that “to the extent that balance of power means constant jockeying for marginal advantages over an opponent, it no longer applies.” He explained at some length:
The reason is that the determination of national power has changed fundamentally in the nuclear age. Throughout history, the primary concern of most national leaders has been to accumulate geopolitical and military power. It would have seemed inconceivable even a generation ago that such power once gained could not be translated directly into advantage over one's opponent. But now both we and the Soviet Union have begun to find that each increment of power does not necessarily represent an increment of usable political strength.92
Almost a year later, this consoling notion was written into the President's foreign-policy report to the Congress of May 3, 1973. It contended that, although a certain balance of power was still inherent in any international system, the balance was no longer “the overriding concept,” because continual maneuvering for marginal advantages in the nuclear era had become “both unrealistic and dangerous.” It went on:
It is unrealistic because both sides possess such enormous power, small additional increments cannot be translated into tangible advantages or even usable political strength. And it is dangerous because attempts to seek tactical gains might lead to confrontations which could be catastrophic.93
Five months later, the Arab-Israeli conflict broke out. Evidently the Soviet leaders had not been apt students of Dr. Kissinger's lessons. What did they hope to achieve? No more “geopolitical and military power”? No “increment of power” translatable into “an increment of usable political strength”? No “tactical gains”? Dangerous this continual maneuvering may well be, but “unrealistic”?
This Kissingerian theory was an extrapolation of the “basic principles” of May 1972. He made it seem as if he and the Soviet leaders had seen eye to eye on the practical implications of the principles. Yet whatever the Soviet leaders may have professed to believe, their actions belied their words. They were not deterred by détente in the nuclear era from seeking “marginal advantages” or “increments of power” or “tangible advantages” or “tactical gains.”
The true test of a concept is not how persuasive it may appear in the abstract but how close it comes to defining and explaining reality. Dr. Kissinger's theorem on the obsolescence of marginal advantages cannot begin to cope with the reality of the Arab-Israeli war or the competition that has obviously not ceased elsewhere. After that war, Dr. Kissinger somewhat spoiled the beautiful simplicity of his theorem by conceding that the Soviet-American relationship was made up “both of confidence and of competition, coexisting in a somewhat ambivalent manner.”94 If competition is part of the game, what is competition about if not for “marginal advantages,” “increments of power,” “tangible advantages,” and “tactical gains”? In fact, if the theorem is valid, we hardly need a brilliant Secretary of State and a huge foreign-affairs bureaucracy and budget any longer; the nuclear era would by itself virtually insure a cessation of these petty annoyances and permit only the final, apocalyptic conflict. Unnoticed, the theory went all the way back in its implications to John Foster Dulles, who had never been one of Dr. Kissinger's favorite statesmen.
China and the “self-regulating mechanism”: The role of China in the Soviet-American détente might also be profitably rethought. The once-popular “triangular theory” put China more or less on a par with the United States and the Soviet Union in order to account for the way they were reshuffling their relationships—China with the United States presumably against the Soviet Union, the Soviet Union with the United States presumably against China, and the United States with both of them protesting that it was not against either.
The Soviets were certainly not above using the United States against China. We have been told on good authority that the Soviets on at least three occasions beginning in 1970 tried unsuccessfully to get an agreement with the United States to act jointly in the event of some vague “provocative action” on the part of a third nuclear power, which could only mean China.95 Though the Chinese have never been so crass, they would certainly take help from anywhere and anyone if they found themselves in real trouble with the Soviet Union. The Chinese-American rapprochement, such as it is, derived in large part from the Chinese assessment that the Soviet Union was a greater threat than the United States.
Dr. Kissinger, it seems, had also chased the will-o'-the-wisp of a “self-regulating mechanism”—through China. In November 1972, he confided to the distinguished journalist, Theodore H. White, that “what the world needed was a self-regulating mechanism” and that the key to such a mechanism was China.96 A “self-regulating mechanism” would imply that the United States, the Soviet Union, and China were so evenly matched that one would not dare to take on a second without the third.
The Soviet interest in a Soviet-American détente has often been attributed to the supposition that the Soviet leaders consider China a greater threat than the United States. This is another questionable proposition. It may have been true in the late 1960's but the time has passed for it to be accepted uncritically.
One reason why the fear—as distinct from the hostility—of the Soviet Union toward China has diminished is the Soviet military build-up in the Soviet-Chinese border area. According to the best available information, the Soviet Union had 15 divisions in this area in 1968; it has 45, including about 8 tank divisions, in 1974.97 This enormous increase in only six years was not accomplished at the expense of the Soviet armed forces in the West; it was achieved simply by adding more divisions to an already immense military machine.
The Chinese have given every evidence of knowing that they have more to fear from the Soviets than the latter have to fear from them. De Gaulle liked to believe that the Soviet Union needed the West, and the West had nothing to fear from it, because China had replaced the West as the main Soviet concern.98 Whether or not de Gaulle was right in his time, the Soviets have had different ideas. They were faced, in essence, with the classical problem of the two-front war; de Gaulle assumed that they had to fall back on the classical solution of concentration; he gave the Soviets the option of concentrating their force in the East or West but not both. In effect, this constraint would put the Soviets at a disadvantage, at least to the extent that they could not afford to take risks in the West if they were tied up in the East. This kind of thinking has had a lulling effect on Western policy; it has also helped in making détente seem much safer than it has been.
Instead, the Soviet leaders chose to build up their armed forces on all fronts in order to give themselves the maximum freedom of action. The triangular theory was never very persuasive for the same reason that the pentagonal theory failed to be convincing—the balance was nowhere as “even” as President Nixon had supposed. The Chinese-American rapprochement may well be—and I think it is—a good thing in its own right, but it is not an insurance policy against the Soviet Union and it is least of all a “self-regulating mechanism.”
The troubled partnership? This was the title, without the question mark, of Professor Kissinger's last book on European-American relations, published in 1965. It indicates how far back the trouble goes. I have put a question mark after the title to cast doubt not on the trouble but on the “partnership.”
The term itself was first popularized by President John F. Kennedy. When he spoke of Europe, he used such phrases as “partners in aid, trade, defense, diplomacy, and monetary affairs,” “a partner with whom we could deal on a basis of full equality,” “a full and equal partner,” and “partners for peace.”99
This rhetoric provoked some of de Gaulle's most wrathful discourses. If Kennedy was right about the equal European-American partnership, de Gaulle could not be right to declare against French and European dependence on the United States and against the lurking threat of an Anglo-Saxon-Soviet condominium. When the continental Europeans were excluded from the Test Ban Treaty negotiations of 1963, his anguish and anger exploded publicly. His separate détente with the Soviet Union two years later was partly a reply to that treaty and all that it implied to him.
One of those who substantially agreed with de Gaulle on this issue was Dr. Henry Kissinger. In addition to the Test Ban negotiations, he was disturbed by the attempt of the Kennedy administration in 1962-63 to deal directly with the Soviet Union on the status of Berlin. In the Gaullist vein, he protested: “The mere fact of bilateral negotiations raised the specter of a U.S.-Soviet accommodation at the expense of our allies.” When Mr. Kennedy spoke of partnership on the basis of full equality, Dr. Kissinger instructed the President sternly: “Real partnership is possible only between equals.”100
Much of The Troubled Partnership two years later was an extended commentary on these themes. It explored at length all the flaws in the concept of partnership from the European point of view. During those years Dr. Kissinger was Europe's most consistent and persuasive academic protagonist in the United States and a hard, relentless critic of U.S. policy; he almost never liked what any President or Secretary of State said or did. If “consultation” was the issue, he countered that it “is far from a panacea”; it was least effective when it was most needed. If the Europeans were recognized as equal in fact, they would want to be more independent than partnership implied; if the United States insisted on retaining its dominant position, the political will of Europe would eventually be broken. About the most cheerful thing he could say was that we might get through “the transition from tutelage to equality” if we mustered enough “wisdom and delicacy,” neither of which had been our strong points. In fact, his analysis was filled with such depressing contradictions that he finally took refuge, as we have seen, in the visionary call for an “Atlantic Commonwealth,” far beyond the so-called Atlantic alliance.101
In his 1968 essay, only a year before he went to Washington as Presidential Assistant, Professor Kissinger still argued, more compellingly than ever, that European-American partnership was not feasible in the existing circumstances. He accused the Americans of invoking “leadership” and “partnership” only to support “the existing pattern” of inequality. He repeated his previous belief that Europe was no longer capable of playing a “global role.” He regarded even more extensive consultation, always offered as a cure-all, as nothing more than a “palliative.” Instead of partnership, he advised the United States to settle for “political multipolarity,” by which he seemed to mean that differences in interest and policy should be accepted with understanding and tolerance. He criticized advocates of détente as being more concerned with atmospherics than with substance. He warned against mistaking a “benign Soviet tone” for the achievement of peace. In short, he was still the same old Kissinger, only more so.102
Then came the new Kissinger. In President Nixon's foreign-policy report to the Congress of February 1970, unmistakably written in Dr. Kissinger's familiar cadences, the first principle of American policy with respect to Europe was given as—“partnership.” The term itself was used again and again throughout the report, even in headings: “Peace Through Partnership—The Nixon Doctrine” and “A New and Mature Partnership.”103 In the next foreign-policy report of February 1971, headings read: “Towards New Forms of Partnership” and “The Evolution of Partnership.” A careful reader would have noted that we were merely in “the necessary transition to an equal partnership” which was “still in progress.”104 Evidently there were partnerships and equal partnerships, a distinction that had not been contemplated when Dr. Kissinger had implied that there were only real and unreal partnerships (“real partnership is possible only between equals”).
Had so much really changed between 1968 and 1970? The answer is that something had changed but not what the ritual use of the term “partnership” suggested. Instead of a change from non-partnership to partnership, Europe was increasingly neglected and shunted aside in favor of the deals with China and Russia. The pentagonal theory was conceived by the President, with or without Dr. Kissinger's assistance, to put Europe on more or less the same plane as the United States, Soviet Union, China, and Japan, “each balancing the other.” This arrangement was hardly how a European-American partnership should have worked. The discrepancy was never explained.
In his “Year of Europe” speech in April 1973, Dr. Kissinger took over the term in his own name. He referred to “Atlantic partners,” to “the principles of partnership,” and to Japan as “a principal partner in our common enterprise.” He also distinguished between the United States which had “global interests and responsibilities” and our European allies which had only “regional interests.”105 This distinction caused much resentment in Europe, where it was apparently not known or forgotten that he had been saying much the same thing for a decade. Nevertheless, Secretary Kissinger continued to make use of the term “partnership” and “our Atlantic partnership” in later speeches, even when he was trying to explain why the putative partners had been behaving so unpartnerly and why they should change their ways.106
If the use of the term were merely a verbal quibble, Dr. Kissinger would not have gone to so much trouble analyzing what was wrong with it when he was still a professor. In fact, the contradiction inherent in his “Year of Europe” speech takes us close to the heart of the matter.
The long-term trouble was the problematic relationship between the United States and Europe. As Dr. Kissinger had pointed out as early as 1963, real partnership required equality. Without equality, a so-called partnership could only have a leader and a follower, the dominating and the dominated. He was perfectly right to expose the self-serving shallowness of the Kennedy catchword. In the catchword was concealed a program, one that de Gaulle understood and, therefore, rejected.
How, then, could someone who had seen through this verbiage write it into President Nixon's foreign-policy reports to Congress without holding his nose and, worse still, bandy it about in his own speeches? It is tempting to ascribe this intellectual transmogrification to some venial political sin. The case, however, may be more serious. In April 1973, as we have seen, Dr. Kissinger was capable in one and the same important speech of combining a reference to “partnership” with a reference to an inequality of power and interest which, by his own say-so, made any respectable partnership impossible. Six months later, he told the Senate Foreign Relations Committee: “For the first time since World War II, all great nations have become full participants in the international system.”107 All? Full? Was he referring merely to the United States, the Soviet Union, and possibly China? Or to our great European partners, too? And if not, how could they be our partners?
One strongly suspects a profound confusion of thought. It is not an ordinary confusion; it arises out of the confusing circumstances in which the United States finds itself. In some situations, the U.S. policymaker is still able to think how strong the United States is, compared to the lesser breeds; in other situations, the same person is forced to think how helpless the United States is to enforce its will, even on its friends, let alone its enemies. This duality has produced a kind of official schizophrenia which expresses itself in action and language. Dr. Kissinger has not been immune from the disease.
Americans are not the only ones. The French took the greatest umbrage when Dr. Kissinger consigned our European allies to the lower order of “regional interests.” Of all the European powers, France still aspires most to play a world role. But what happened in October 1973 when the French were faced with the Arab-Israeli war and the Arab oil embargo? The French Foreign Minister Michel Jobert whimpered: “We count for little [Nous pesons peu]. We will try to count for more.”108 And what of West Germany which not so long ago was considered the “leading European power” and “the leading spokesman for Western Europe”?109 The German Foreign Minister Walter Scheel has recently unburdened himself: “The Federal Republic is aware of the limits of her influence. She cannot overcome the existing differences between France and America on her own.”110 The British did not have to apologize for anything; they have known their place since November 1956.
The great gamble: To conclude, I wish to return to the beginning—the crucial effect of détente on our relations with our allies and antagonists. Nothing else is more important for deciding the fate of the United States and the world in the foreseeable future.
The decision on which of these relationships to foster came very early in the Nixon administration. One might not have expected it to go the way it did. In 1968, Dr. Kissinger noted with alarm that NATO was “in disarray,” that the emergence of an economically resurgent but politically disunited Europe was inevitably bringing in “a difficult transitional period,” and that “Atlantic relations, for all their seeming normalcy, thus face a profound crisis.”111 A year later, the first moves were made which, consciously or not, postponed facing the disarray of NATO, the difficult transitional period, and the crisis in Atlantic relations for at least four years. And when, finally, they came in for renewed attention, it was so late that the effort failed embarrassingly and merely called public attention to how intractable the difficulties had become.
Two decisions by the Nixon administration may prove to be of far greater long-range historical importance than anything else. I have already referred to the first—the willingness to take four years to end direct U.S. military intervention in the Vietnam war. The second was partially related but more far-reaching—the attempt to solve our problems through our antagonists, without, or even at the expense of, our friends. Conceivably, we might have tried to bolster both fronts simultaneously, but this effort was never seriously made. This one-sidedness made it overly important that the détente with the Soviet Union should come off as a colossal, spectacular success. Even Dr. Kissinger lost his head long enough to hail the SALT I agreement as “without precedent in all relevant modern history.” Since SALT I was in a sense the first agreement of its kind, that may not have been saying as much as Dr. Kissinger sought to convey.
The essence of the problem was once stated by Dr. Kissinger with remarkable clairvoyance. The situation at that time was not strictly comparable with the present one, but it was uncomfortably close. Words that seemed to be dealing with the past can now be read as prophecy:
If the West is to act purposefully in this situation, it must develop a common policy and a specific program. The temptation for bilateral approaches is great. Each national leader, depending on his temperament, has visions of appearing as the arbiter of a final settlement or of adding Communist pressures to his own as a bargaining device within the [Atlantic] Alliance. This sets up a vicious circle. Since leaders generally do not reach eminence without a touch of vanity and since some stake their prestige on their ability to woo their Soviet counterparts, they tend to present their contacts with the Soviets as a considerable accomplishment. But the real issues have gone unresolved because they are genuinely difficult; hence they are usually avoided during summit diplomacy in favor of showy but essentially peripheral gestures. The vaguer the East-West discourse, the greater will be the confusion in the West. Moreover, each leader faces two different audiences: toward his own people he will be tempted to leave the impression that he has made a unique contribution to peace; toward his allies he will be forced to insist that he will make no settlement in which they do not participate. Excessive claims are coupled with reassurances to uneasy allies which are in turn tempted to pursue bilateral diplomacy.
Where would it end? Here was how Dr. Kissinger saw it nine years ago:
Such a course is suicidal for the West. It will stimulate distrust within the Alliance. The traditional Western balance-of-power diplomacy will reappear, manipulated by the Kremlin. Any Soviet incentive to be responsible will vanish. The Soviet leaders will be able to overcome their difficulties with the assistance of the West and without settling any of the outstanding issues. Since in the Kremlin—as in the West—there must be many who consider the status quo preferable to change, the result is likely to be diplomatic paralysis obscured by abstract declarations about peace and friendship.112
While many of these sentiments seem to be as fresh as ever, the parallels are, of course, not exact. Nevertheless, the real issues have certainly not been resolved, the Soviet incentive to be responsible in the Middle East vanished some time between May 1972 and October 1973, and none of the outstanding issues has been finally settled. There is something uncanny about the repetition of the suicide theme at the end of 1972 by the U.S. Ambassador to the European Community, J. Robert Schaetzel, just after his resignation: “What has been happening to U.S.-E.C. relations is a kind of common death wish.”113
No doubt we are still far from suicide or death. But we are no nearer safety and health if such grave warnings could have been issued by Dr. Kissinger nine years ago and by Ambassador Schaetzel less than two years ago. The central fact of the past five years is that détente with the East has beguiled us while deterioration in the West has beset us. It will not help at this late date to quarrel over which has been more to blame, Europe or the United States; there is more than enough blame for all. It is wasteful of energies for the United States to be exasperated with Europe or Europe to be exasperated with the United States; the accumulation of exasperation is part of the problem. Little is gained by adding up the resources of the European Community and finding that they exceed those of the Soviet Union or that their gross national product comes to about two-thirds that of the United States. Europe is like an optical illusion; it looks formidable only when it is viewed in the abstract as a whole; it shrinks and shrivels as soon as it is examined country by country in the light of each one's political and social reality. Only last month, Secretary of Defense James R. Schlesinger tried to inject some realism and clarity into our understanding of the position of the European states. He asserted that “contrary to the view that they are robust states with the strength to defend Europe by themselves, they are relatively weak states” and that “the most critical region in the world continues to be Western Europe.”114
The policy of détente, whatever we may think of it, would not be so equivocal if turning toward the East had not been accompanied by turning away from the West. While lip-service was being paid to European-American “partnership,” to the Atlantic alliance as the “cornerstone” of American foreign policy, and to concern about the resurgence of American isolationism, the concept of partnership became more and more of a mockery, the cornerstone was relegated to a corner, and ardent support for the policy of détente as it has worked out in practice has come from some of our most eminent neo-isolationists. There is a natural affinity between resurgent isolationism and illusory détentism; if we can persuade ourselves that we can solve our problems directly with our erstwhile enemies, why do we need to bother with allies? The great gamble inherent in this kind of détente is that we are going to be in worse trouble than ever unless détente pays off in continuous, long-lasting Soviet good-will and good behavior. For over four years, détente was pursued so single-mindedly and to the exclusion of so many other interests that it became a go-for-broke operation. The best criticism of such a policy may be found in Dr. Kissinger's past writings, which is why I have cited them so often.
Kto kogo? Who-whom? It was Lenin's favorite formulation of the crucial political question. It may be more freely translated as: “Who does what to whom?”115 It is not a bad way of thinking about détente.
1 New York Times, October 31, 1973 and March 12, 1974.
2 George F. Kennan, “Europe's Problems, Europe's Choices,” Foreign Policy, Spring 1974, p. 8.
3 Henry A. Kissinger, The Troubled Partnership (McGraw-Hill, 1965), p. 4; Richard M. Pfeffer, ed., No More Vietnams? (Harper & Row, 1968), p. 11; Foreign Affairs, January 1969, p. 101; Press conference, March 21, 1974.
4 Foreign Affairs, January 1963, p. 285; The Troubled Partnership, pp. 57, 251.
5 February 1, 1973 (in Department of State Bulletin, April 2, 1973, p. 394).
6 Speech before American Society of Newspaper Editors, April 16, 1954; speech of March 15, 1965 (Congressional Record, House of Representatives, September 2, 1965, pp. 21928-30); “Asia After Vietnam,” Foreign Affairs, October 1967, pp. 111-25.
7 The Troubled Partnership, pp. 9, 232.
8 Stephen R. Graubard, Kissinger: Portrait of a Mind (Norton, 1973), pp. 225-26.
9 Ibid., p. 243.
10 No More Vietnams?, pp. 11-13.
11 “The Vietnam Negotiations,” Foreign Affairs, January 1969, pp. 233-34.
12 November 2, 1972 (in Department of State Bulletin, November 20, 1972, p. 605). One wonders whether Mr. Nixon had in mind the kind of peace that General W. C. Westmoreland, the former U.S. commander in Vietnam and Army Chief of Staff, recently described: “A full year after the cease-fire, which many thought would bring peace to Vietnam, the country is still ravaged by war, with the prospect of continued bloodshed ahead. The ceasefire did bring about an end to United States military action, cause our 588 prisoners to be released, and set the stage for a truce in Laos. But little else has been accomplished. During the last year, there have been more than 10,000 hostile contacts and over 13,000 armed attacks resulting in the deaths of more than 33,000 Communists and 6,000 South Vietnamese military men. Also there have been thousands of civilians killed, injured, or abducted in the South” (New York Times, April 18, 1974).
13 July 6, 1971 (in Department of State Bulletin, July 26, 1971, p. 93).
14 January 31, 1973 (ibid., February 19, 1973, p. 195).
15 Albert Wohlstetter, statement before the Senate Armed Services Committee, April 23, 1969 (Congressional Record, Senate, May 1, 1969, p. 10957 note). The persistent underestimation of Soviet military capabilities is dealt with at length in Albert Wohlstetter, “Is There a Strategic Arms Race?,” Foreign Policy, Summer 1974.
16 Deputy Secretary of State Kenneth Rush, Department of State Bulletin, April 23, 1973, p. 479.
17 Walter Slocombe, The Political Implications of Strategic Parity, Adelphi Papers, International Institute for Strategic Studies, No. 77, May 1971, p. 5.
18 Thomas W. Wolfe, Soviet Power and Europe 1945-1970 (Johns Hopkins Press, 1970), p. 429.
19 Oriana Fallaci, “Kissinger,” the New Republic, December 16, 1972, p. 20.
20 NATO Ministerial Council Meeting, April 10, 1969.
21 U.S. Foreign Policy for the 1970's: A New Strategy for Peace, A Report to the Congress by Richard Nixon, President of the United States, February 18, 1970, pp. 27-31.
22 February 25, 1971 (in Department of State Bulletin, March 15, 1971, p. 307).
23 July 6, 1971 (ibid., July 26, 1971, p. 96).
24 Time, January 3, 1972, p. 15.
25 Deputy Under Secretary for Economic Affairs Nathaniel Samuels, April 14, 1972 (in Department of State Bulletin, May 1, 1972, p. 633).
26 Deputy Secretary of State John N. Irwin II, October 18, 1972 (ibid., November 20, 1972, p. 612).
27 Counselor of the Department Richard F. Pedersen, September 7, 1972 (ibid., October 2, 1972, p. 371).
28 David Landau, Kissinger, The Uses of Power (Houghton Mifflin, 1972), p. 26.
29 Op. cit., p. 21.
30 Stanley Hoffmann, “Will the Balance Balance at Home?,” Foreign Policy, Summer 1972, p. 80.
31 November 5, 1972 (in Department of State Bulletin, December 4, 1972, p. 654).
32 February 1, 1973 (ibid., April 2, 1973, p. 395).
33 March 21, 1973 (ibid., April 9, 1973, p. 419).
34 David Calleo, The Atlantic Fantasy: The U.S., NATO, and Europe (Johns Hopkins Press, 1970), p. ix.
35 The Necessity for Choice (1961), p. 204; The Troubled Partnership (1965), p. 217.
36 Nuclear Weapons and Foreign Policy (Harper, 1957), pp. 142-43, 350.
37 The Necessity for Choice, pp. 178-81, 194-95.
38 The Troubled Partnership, pp. 192, 197-98.
39 “Central Issues of American Foreign Policy,” in Agenda for the Nation (The Brookings Institution, 1968), pp. 599, 608-9.
40 The Necessity for Choice, pp. 172-73.
41 The Troubled Partnership, pp. 248-49.
42 September 5, 1969 (in Department of State Bulletin, September 22, 1969, p. 259).
43 “The condition of the Soviet economy is clearly the primary determinant of present Soviet foreign policy” (Marshall D. Shulman, Foreign Affairs, October 1973, p. 43). “The first and most decisive reason for this change in [the direction of a more moderate and more flexible] foreign policy was the stagnation in the Soviet economy” (Wolfgang Leonhard, ibid., p. 66).
44 October 6, 1970 (in Department of State Bulletin, November 23, 1970, pp. 642-43).
45 Charles de Gaulle, Mémoires de guerre (Plon, 1959), Vol. III, pp. 62-70.
46 Ibid., pp. 179-80.
47 Charles de Gaulle, Mémoires d'espoir (Plon, 1970), Vol. I, pp. 175-76.
48 Maurice Couve de Murville, Une politique étrangère 1958-1969 (Plon, 1971), pp. 194, 206-12. Kosygin also made some most revealing remarks about China and the United States more than six years before President Nixon's pilgrimage to Peking: “China was also disquieting [to Kosygin], but perhaps the major preoccupation from this angle was [for the Russians] to know what the game of the United States would be in the future. In fact, the most alarming [redoutable] unknown factor was the possible Chinese-American connection [conjonction]” (p. 212).
49 Ibid., pp. 78-79, 218-21.
50 Mémoires d'espoir, Vol. I, p. 177.
51 The Troubled Partnership, pp. 44, 63.
52 “The Search for Stability,” Foreign Affairs, July 1959, pp. 539-42. In all of his extant writings, Dr. Kissinger never changed his position on these questions; see Foreign Affairs, January 1963, pp. 263, 269, 271, and The Troubled Partnership (1965), pp. 216-18.
53 Couve de Murville, op. cit., p. 273.
54 Marshall D. Shulman, “‘Europe’ versus ‘Détente,’” Foreign Affairs, April 1967, p. 396.
55 Bundeskanzler Brandt Reden und Interviews (Hamburg: Hoffmann & Campe, 1971), pp. 203-4.
56 Among the others were such notable authorities as Professor Zbigniew Brzezinski, Alternative to Partition (McGraw-Hill, 1965, pp. 137-40) and Professor Hans J. Morgenthau, A New Foreign Policy for the United States (Praeger, 1969, pp. 170, 177-81).
57 W. E. Paterson, “Foreign Policy and Stability in West Germany,” International Affairs (London), July 1973, pp. 426-27.
58 Josef Korbel, Détente in Europe: Real or Imaginary? (Princeton University Press, 1972), pp. 204, 242.
59 Werner Kaltefleiter, Orbis, Spring 1973, pp. 91-92.
60 Le Point (Paris), December 10, 1973, p. 56.
61 “The great-power status which Great Britain has so tenaciously sought to sustain throughout the postwar period can now be achieved only through the closest association with the Continent. But to do this effectively Great Britain may have to adopt views similar to France's, ameliorating them with its own subtle style” (“Strains on the Alliance,” Foreign Affairs, January 1963, p. 283).
62 Marc Ullmann, “Security Aspects in French Foreign Policy,” Survival, International Institute for Strategic Studies, London, November-December 1973, pp. 262-67.
63 Arthur M. Schlesinger, Jr., A Thousand Days (Houghton Mifflin, 1965), p. 904. Theodore C. Sorensen quotes Kennedy as saying, “We are not wedded to a policy of hostility to Red China. I would hope that . . . the normalization of relations . . . between China and the West . . . would be brought about,” which would suggest that Kennedy anticipated Nixon in at least the projection of a policy of rapprochement with China. But Sorensen also says that Kennedy regarded the “isolation of the Chinese” as a major gain of the Test Ban Treaty with the Soviet Union in 1963 (Theodore C. Sorensen, Kennedy, Harper & Row, 1965, pp. 665, 736).
64 Sorensen, p. 745.
65 Schlesinger, p. 921.
66 New York Times, October 9, 1963, p. 19, and October 25, 1963, p. 18. As far back as 1956, when Adlai Stevenson had called for a test ban, Nixon had denounced it as “catastrophic nonsense” and had given as one reason that the Russians “haven't kept many agreements as we well know” (New York Times, October 4, 1956, p. 22, and October 5, 1956, p. 16). He was at least consistent—until he had to deal with the Russians himself.
67 William F. Kaufmann, The McNamara Strategy (Harper & Row, 1964), pp. 152-58.
68 Speech of October 7, 1966.
69 President Nixon at least three times named the “understandings on Berlin” as the turning point which had led to the May 1972 summit meeting in Moscow (Department of State Bulletin, January 24, 1972, p. 81, and June 12, 1972, p. 803, and interview in Time, January 3, 1972, p. 14).
70 Assistant Secretary of State for Economic and Business Affairs, William C. Armstrong, ibid., December 25, 1972, p. 721.
71 Nomination of Henry A. Kissinger: Hearings Before the Committee on Foreign Relations, U.S. Senate, September 1973, Part I, p. 122.
72 Ibid., p. 111.
73 John Newhouse, Cold Dawn (Holt, Rinehart & Winston, 1973), p. 272.
74 Department of State Bulletin, June 26, 1972, pp. 886, 894.
75 The full text of the “Basic Principles” may be found in the Department of State Bulletin, June 26, 1972, pp. 898-99.
76 Ibid., p. 896.
77 Assistant Secretary for Near Eastern and South Asian Affairs Joseph J. Sisco, ibid., April 23, 1973, p. 485.
78 Press conference of March 21, 1974.
79 Address to joint session of Congress, June 1, 1972.
80 Press conference of March 21, 1974.
81 Interview in Peking, November 12, 1973 (New York Times, November 13, 1973).
82 Congressional Record, Senate, November 9, 1973, p. S-20136.
83 Marshall D. Shulman, New York Times, March 10, 1974.
84 John Lukacs, A New History of the Cold War (Anchor Books, 1966), p. 273.
85 Press conference of October 25, 1973.
86 André Fontaine, Le Monde, October 30, 1973.
87 Nomination of Henry A. Kissinger, op. cit., pp. 10, 118.
88 Press conference of November 21, 1973.
89 Nomination of Henry A. Kissinger, op. cit., p. 101.
90 This paragraph is based on Albert Wohlstetter, “Threats and Promises of Peace: Europe and America in the New Era,” Orbis, Winter 1974, pp. 1107-44. The entire article should be compulsory reading for anyone interested in this subject.
91 Nomination of Henry A. Kissinger, op. cit., p. 43.
92 June 15, 1972 (in Department of State Bulletin, July 10, 1972, p. 40).
93 U.S. Foreign Policy for the 1970's: Shaping a Durable Peace, p. 232.
94 Press conference of November 21, 1973.
95 Marshall D. Shulman, Foreign Affairs, October 1973, p. 45. More details of the 1970 approach are given by John Newhouse, Cold Dawn, pp. 188-89.
96 Theodore H. White, The Making of the Presidency 1972 (Bantam edition, 1973), p. xviii.
97 Alastair Buchan, Power and Equilibrium in the 1970s (Praeger, 1973), p. 18 [for 1968]; The Military Balance 1973-1974, International Institute for Strategic Studies, 1974, p. 6 [for 1974].
98 In a press conference on November 10, 1959, de Gaulle said that Soviet Russia needed détente in the West in order “to reckon with the yellow multitude which is China” and which threatened to expand at the expense of Russia, “a white nation which has conquered part of China” (Discours et Messages, Plon, 1970, Vol. III, p. 130). A decade later, he wrote retrospectively of his belief in 1958 that the Russians would be attracted to détente with the West because of “the eternal alternation which dominates their history and which today makes them turn their worries toward Asia rather than toward Europe on account of the ambitions of China and provided that the West does not threaten them” (Mémoires d'espoir, Vol. I, p. 213). On July 29, 1963, de Gaulle remarked sardonically that the Sino-Soviet conflict could “add a note of sincerity to the poems [couplets] which the USSR devotes to peaceful coexistence” (Discours et Messages, Vol. IV, pp. 122-23).
99 John F. Kennedy, The Burden and the Glory (Harper & Row, 1964), pp. 16, 106, 111, 114.
100 “Strains on the Alliance,” Foreign Affairs, January 1963, pp. 267, 284.
101 The Troubled Partnership, op. cit., pp. 7-8, 227, 229, 234, 248.
102 “Central Issues of American Foreign Policy,” Agenda for the Nation, pp. 596-99, 607, 609.
103 A New Strategy for Peace, pp. 5, 8, 29.
104 Building for Peace, pp. 11, 25-26.
105 Department of State Bulletin, May 14, 1973, pp. 594, 598.
106 Speech in London to the Pilgrims, December 12, 1973.
107 Nomination of Henry A. Kissinger, op. cit., p. 8.
108 In the National Assembly, October 17, 1973 (Le Monde, October 19, 1973, p. 10).
109 Lawrence L. Whetten, Germany's Ostpolitik (Oxford, 1971), pp. 208, 212.
110 New York Times, March 29, 1974, p. 3.
111 Agenda for the Nation, op. cit., pp. 594-96.
112 The Troubled Partnership, op. cit., pp. 205-6.
113 Fortune, November 1972, p. 148.
114 U.S. News & World Report, May 13, 1974, p. 44.
115 According to the official custodian of Leninism in the United States, Gus Hall, General-Secretary of the Communist Party, U.S.A., détente means, among other good things, “retreat” by and “struggle” against the United States (Political Affairs, March 1974, pp. 7, 9).
Must-Reads from Magazine
t can be said that the Book of Samuel launched the American Revolution. Though antagonistic to traditional faith, Thomas Paine understood that it was not Montesquieu, or Locke, who was inscribed on the hearts of his fellow Americans. Paine’s pamphlet Common Sense is a biblical argument against British monarchy, drawing largely on the text of Samuel.
Today, of course, universal biblical literacy no longer exists in America, and sophisticated arguments from Scripture are all too rare. It is therefore all the more distressing when public intellectuals, academics, or religious leaders engage in clumsy acts of exegesis and political argumentation by comparing characters in the Book of Samuel to modern political leaders. The most common victim of this tendency has been the central character in the Book of Samuel: King David.
Most recently, this tendency was made manifest in the writings of Dennis Prager. In a recent defense of his own praise of President Trump, Prager wrote that “as a religious Jew, I learned from the Bible that God himself chose morally compromised individuals to accomplish some greater good. Think of King David, who had a man killed in order to cover up the adultery he committed with the man’s wife.” Prager similarly argued that those who refuse to vote for a politician whose positions are correct but whose personal life is immoral “must think God was pretty flawed in voting for King David.”
Prager’s invocation of King David was presaged on the left two decades ago. The records of the Clinton Presidential Library reveal that at the height of the Lewinsky scandal, an email from Dartmouth professor Susannah Heschel made its way into the inbox of an administration policy adviser with a similar comparison: “From the perspective of Jewish history, we have to ask how Jews can condemn President Clinton’s behavior as immoral, when we exalt King David? King David had Batsheva’s husband, Uriah, murdered. While David was condemned and punished, he was never thrown off the throne of Israel. On the contrary, he is exalted in our Jewish memory as the unifier of Israel.”
One can make the case for supporting politicians who have significant moral flaws. Indeed, America’s political system is founded on an awareness of the profound tendency to sinfulness not only of its citizens but also of its statesmen. “If men were angels, no government would be necessary,” James Madison informs us in the Federalist. At the same time, anyone who compares King David to the flawed leaders of our own age reveals a profound misunderstanding of the essential nature of David’s greatness. David was not chosen by God despite his moral failings; rather, David’s failings are the lens that reveal his true greatness. It is in the wake of his sins that David emerges as the paradigmatic penitent, whose quest for atonement is utterly unlike that of any other character in the Bible, and perhaps in the history of the world.
While the precise nature of David’s sins is debated in the Talmud, there is no question that they are profound. Yet it is in comparing David to other faltering figures—in the Bible or today—that the comparison falls flat. This point is stressed by the very Jewish tradition in whose name Prager claimed to speak.
It is the rabbis who note that David’s predecessor, Saul, lost the kingship when he failed to fulfill God’s command to destroy the egregiously evil nation of Amalek, whereas David commits more severe sins and yet remains king. The answer, the rabbis suggest, lies not in the sin itself but in the response. Saul, when confronted by the prophet Samuel, offers obfuscations and defensiveness. David, meanwhile, is similarly confronted by the prophet Nathan: “Thou hast killed Uriah the Hittite with the sword, and hast taken his wife to be thy wife, and hast slain him with the sword of the children of Ammon.” David’s immediate response is clear and complete contrition: “I have sinned against the Lord.” David’s penitence, Jewish tradition suggests, sets him apart from Saul. Soon after, David gave voice to what was in his heart at the moment, and gave the world one of the most stirring of the Psalms:
Have mercy upon me, O God, according to thy lovingkindness: according unto the multitude of thy tender mercies blot out my transgressions.
Wash me thoroughly from mine iniquity, and cleanse me from my sin. For I acknowledge my transgressions: and my sin is ever before me.
. . . Deliver me from bloodguiltiness, O God, thou God of my salvation: and my tongue shall sing aloud of thy righteousness.
O Lord, open thou my lips; and my mouth shall shew forth thy praise.
For thou desirest not sacrifice; else would I give it: thou delightest not in burnt offering.
The sacrifices of God are a broken spirit: a broken and a contrite heart, O God, thou wilt not despise.
The tendency to link David to our current age stems from the fact that we know more about David than about any other biblical figure. The author Thomas Cahill has noted that in a certain literary sense, David is the only biblical figure who is like us at all. Prior to the humanist autobiographies of the Renaissance, he notes, “we can count only a few isolated instances of this use of ‘I’ to mean the interior self. But David’s psalms are full of I’s.” In David’s Psalms, Cahill writes, we “find a unique early roadmap to the inner spirit—previously mute—of ancient humanity.”
At the same time, a study of the Book of Samuel and of the Psalms reveals how utterly incomparable David is to anyone alive today. Haym Soloveitchik has noted that even the most observant of Jews today fail to feel the constant intimacy with God that the simplest Jew of the premodern age might have felt: “while there are always those whose spirituality is one apart from that of their time, nevertheless I think it safe to say that the perception of God as a daily, natural force is no longer present to a significant degree in any sector of modern Jewry, even the most religious.” Yet for David, such intimacy with the divine was central to his existence, and the Book of Samuel and the Psalms are an eternal testament to this fact. This is why simple comparisons between David and ourselves, tempting as they are, must be resisted. David Wolpe, in his book about David, attempts to make the case that King David’s life speaks to us today: “So versatile and enduring is David in our culture that rare is the week that passes without some public allusion to his life…We need to understand David better because we use his life to comprehend our own.”
The truth may be the opposite. We need to understand David better because we can use his life to comprehend what we are missing, and how utterly our lives differ from his own. For even the most religious among us have lost the profound faith and intimacy with God that David had. It is therefore incorrect to assume that because of David’s flaws it would have been, as Amos Oz has written, “fitting for him to reign in Tel Aviv.” The modern State of Israel has been blessed with brilliant leaders, but to which of its modern warriors or statesmen should David be compared? To Ben-Gurion, who stripped any explicit invocation of the Divine from Israel’s Declaration of Independence? To Moshe Dayan, who oversaw the reconquest of Jerusalem, and then immediately handed back the Temple Mount, the locus of King David’s dreams and desires, to the administration of the enemies of Israel? David’s complex humanity inspires comparison to modern figures, but his faith, contrition, and repentance—which lie at the heart of his story and success—defy any such comparison.
And so, to those who seek comparisons to modern leaders from the Bible, the best rule may be: Leave King David out of it.
Three attacks in Britain highlight the West’s inability to see the threat clearly
This lack of seriousness manifests itself in several ways. It’s perhaps most obvious in the failure to reform Britain’s chaotic immigration and dysfunctional asylum systems. But it’s also abundantly clear from the grotesque underfunding and under-resourcing of domestic intelligence. In MI5, Britain has an internal security service that is simply too small to do its job effectively, even if it were not handicapped by an institutional culture that can seem willfully blind to the ideological roots of the current terrorism problem.
In 2009, Jonathan Evans, then head of MI5, confessed at a parliamentary hearing into the London bus and subway attacks of 2005 that his organization had sufficient resources only to “hit the crocodiles close to the boat.” It was an extraordinary metaphor to use, not least because of the impression of relative impotence that it conveyed. MI5 had by then doubled in size since 2001, but it still boasted a staff of only 3,500. Today it’s said to employ between 4,000 and 5,000, an astonishingly, even laughably, small number given a UK population of 65 million and the scale of the security challenges Britain now faces. (To be fair, the major British police forces all have intelligence units devoted to terrorism, and the UK government’s overall counterterrorism strategy involves a great many people, including social workers and schoolteachers.)
You can also see that unseriousness at work in the abject failure to coerce Britain’s often remarkably sedentary police officers out of their cars and stations and back onto the streets. Most of Britain’s big-city police forces have adopted a reactive model of policing (consciously rejecting both the New York Compstat model and British “bobby on the beat” traditions) that cripples intelligence-gathering and frustrates good community relations.
If that weren’t bad enough, Britain’s judiciary is led by jurists who came of age in the 1960s, and who have been inclined since 2001 to treat terrorism as an ordinary criminal problem being exploited by malign officials and politicians to make assaults on individual rights and to take part in “illegal” foreign wars. It has long been almost impossible to extradite ISIS or al-Qaeda–linked Islamists from the UK. This is partly because today’s English judges believe that few if any foreign countries—apart from perhaps Sweden and Norway—are likely to give terrorist suspects a fair trial, or able to guarantee that such suspects will be spared torture and abuse.
We have a progressive metropolitan media elite whose primary, reflexive response to every terrorist attack, even before the blood on the pavement is dry, is to express worry about an imminent violent anti-Muslim “backlash” on the part of a presumptively bigoted and ignorant indigenous working class. Never mind that no such “backlash” has yet occurred, not even when the young off-duty soldier Lee Rigby was hacked to death in broad daylight on a South London street in 2013.
Another sign of this lack of seriousness is the choice by successive British governments to deal with the problem of internal terrorism with marketing and “branding.” You can see this in the catchy consultant-created acronyms and pseudo-strategies that are deployed in place of considered thought and action. After every atrocity, the prime minister calls a meeting of the COBRA unit—an acronym that merely stands for Cabinet Office Briefing Room A but sounds like a secret organization of government superheroes. The government’s counterterrorism strategy is called CONTEST, which has four “work streams”: “Prevent,” “Pursue,” “Protect,” and “Prepare.”
Perhaps the ultimate sign of unseriousness is the fact that police, politicians, and government officials have all displayed more fear of being seen as “Islamophobic” than of any carnage that actual terror attacks might cause. Few are aware that this short-term, cowardly, and trivial tendency may ultimately foment genuine, dangerous popular Islamophobia, especially if attacks continue.
Recently, three murderous Islamist terror attacks in the UK took place in less than a month. The first and third were relatively primitive improvised attacks using vehicles and/or knives. The second was a suicide bombing that probably required relatively sophisticated planning, technological know-how, and the assistance of a terrorist infrastructure. As they were the first such attacks in the UK, the vehicle and knife killings came as a particular shock to the British press, public, and political class, despite the fact that non-explosive and non-firearm terror attacks have become common in Europe and are almost routine in Israel.
The success of all three plots indicates troubling problems in British law-enforcement practice and culture, quite apart from the failings of the state agencies in charge of intelligence, border control, and the prevention of radicalization. At the time of writing, the British media have been full of encomia to police courage and skill, not least because it took “only” eight minutes for an armed Metropolitan Police team to respond to and confront the bloody mayhem being wrought by the three Islamist terrorists (who had ploughed their rented van into people on London Bridge before jumping out to attack passersby with knives). But the difficult truth is that all three attacks would have been much harder to pull off in Manhattan, not just because all NYPD cops are armed, but also because there are always police officers visibly on patrol at the New York equivalents of London’s Borough Market on a Saturday night. By contrast, London’s Metropolitan Police is a largely vehicle-borne, reactive force; rather than use a physical presence to deter crime and terrorism, it chooses to monitor closed-circuit street cameras and social-media postings.
Since the attacks in London and Manchester, we have learned that several of the perpetrators were “known” to the police and security agencies that are tasked with monitoring potential terror threats. That these individuals were nevertheless able to carry out their atrocities is evidence that the monitoring regime is insufficient.
It also seems clear that there were failures on the part of those institutions that come under the leadership of the Home Office and are supposed to be in charge of the UK’s border, migration, and asylum systems. Journalists and think tanks like Policy Exchange and Migration Watch have for years pointed out that these systems are “unfit for purpose,” but successive governments have done little to take responsible control of Britain’s borders. When she was home secretary, Prime Minister Theresa May did little more than jazz up the name, logo, and uniforms of what is now called the “Border Force,” and she notably failed to put in place long-promised passport checks for people flying out of the country. This dereliction means that it is impossible for the British authorities to know who has overstayed a visa or whether individuals who have been denied asylum have actually left the country.
It seems astonishing that Youssef Zaghba, one of the three London Bridge attackers, was allowed back into the country. The Moroccan-born Italian citizen (his mother is Italian) had been arrested by Italian police in Bologna, apparently on his way to Syria via Istanbul to join ISIS. When questioned by the Italians about the ISIS decapitation videos on his mobile phone, he declared that he was “going to be a terrorist.” The Italians lacked sufficient evidence to charge him with a crime but put him under 24-hour surveillance, and when he traveled to London, they passed on information about him to MI5. Nevertheless, he was not stopped or questioned on arrival and had not become one of the 3,000 official terrorism “subjects of interest” for MI5 or the police when he carried out his attack. One reason Zaghba was not questioned on arrival may have been that he used one of the new self-service passport machines installed in UK airports in place of human staff after May’s cuts to the border force. Apparently, the machines are not yet linked to any government watch lists, thanks to the general chaos and ineptitude of the Home Office’s efforts to use information technology.
The presence in the country of Zaghba’s accomplice Rachid Redouane is also an indictment of the incompetence and disorganization of the UK’s border and migration authorities. He had been refused asylum in 2009, but as is so often the case, Britain’s Home Office never got around to removing him. Three years later, he married a British woman and was therefore able to stay in the UK.
But it is the failure of the authorities to monitor ringleader Khuram Butt that is the most baffling. He was a known and open associate of Anjem Choudary, Britain’s most notorious terrorist supporter, ideologue, and recruiter (Choudary was finally imprisoned in 2016 after 15 years of campaigning on behalf of al-Qaeda and ISIS). Butt even appeared in a 2016 TV documentary about ISIS supporters called The Jihadist Next Door. In the same year, he assaulted a moderate imam at a public festival, after calling him a “murtad,” or apostate. The imam reported the incident to the police—who took six months to track Butt down and then let him off with a caution. It is not clear if Butt was one of the 3,000 “subjects of interest” or the additional 20,000 former subjects of interest who continue to be the subject of limited monitoring. If he was not, it raises the question of what a person has to do to get the British security services to take him seriously as a terrorist threat; if he was in fact on the list of “subjects of interest,” one has to wonder whether being so designated is any barrier at all to carrying out terrorist atrocities. It’s worth remembering, as few do here in the UK, that terrorists who carried out previous attacks were also known to the police and security services and nevertheless enjoyed sufficient liberty to go at it again.
But the most important reason for the British state’s ineffectiveness in monitoring terror threats, which May addressed immediately after the London Bridge attack, is a deeply rooted institutional refusal to deal with or accept the key role played by Islamist ideology. For more than 15 years, the security services and police have chosen to take note only of people and bodies that explicitly espouse terrorist violence or have contacts with known terrorist groups. The fact that a person, school, imam, or mosque endorses the establishment of a caliphate, the stoning of adulterers, or the murder of apostates has not been considered a reason to monitor them.
This seems to be why Salman Abedi, the Manchester Arena suicide bomber, was not being watched by the authorities as a terror risk, even though he had punched a girl in the face for wearing a short skirt while at university, had attended the Muslim Brotherhood-controlled Didsbury Mosque, was the son of a Libyan man whose militia is banned in the UK, had himself fought against the Qaddafi regime in Libya, had adopted the Islamist clothing style (trousers worn above the ankle, beard but no moustache), was part of a druggy gang subculture that often feeds individuals into Islamist terrorism, and had been banned from a mosque after confronting an imam who had criticized ISIS.
It was telling that the day after the Manchester Arena suicide-bomb attack, you could hear a security official informing the audience of the BBC’s flagship morning-radio news show that it’s almost impossible to predict and stop such attacks because the perpetrators “don’t care who they kill.” They just want to kill as many people as possible, he said.
Surely, anyone with even a basic familiarity with Islamist terror attacks over the last 15 or so years and a nodding acquaintance with Islamist ideology could see that the terrorist hadn’t just chosen the Ariana Grande concert in Manchester Arena because a lot of random people would be crowded into a conveniently small area. Since the Bali bombings of 2002, nightclubs, discotheques, and pop concerts attended by shameless unveiled women and girls have been routinely targeted by fundamentalist terrorists, including in Britain. Among the worrying things about the opinion offered on the radio show was that it suggests that even in the wake of the horrific Bataclan attack in Paris during a November 2015 concert, British authorities may not have been keeping an appropriately protective eye on music venues and other places where our young people hang out in their decadent Western way. Such dereliction would make perfect sense given the resistance on the part of the British security establishment to examining, confronting, or extrapolating from Islamist ideology.
The same phenomenon may explain why authorities did not follow up on community complaints about Abedi. All too often when people living in Britain’s many and diverse Muslim communities want to report suspicious behavior, they have to do so through offices and organizations set up and paid for by the authorities as part of the overall “Prevent” strategy. Although criticized by the left as “Islamophobic” and inherently stigmatizing, Prevent has often brought the government into cooperative relationships with organizations even further to the Islamic right than the Muslim Brotherhood. This means that if you are a relatively secular Libyan émigré who wants to report an Abedi and you go to your local police station, you are likely to find yourself speaking to a bearded Islamist.
From its outset in 2003, the Prevent strategy was flawed. Its practitioners, in their zeal to find and fund key allies in “the Muslim community” (as if there were just one), routinely made alliances with self-appointed community leaders who represented the most extreme and intolerant tendencies in British Islam. Both the Home Office and MI5 seemed to believe that only radical Muslims were “authentic” and would therefore be able to influence young potential terrorists. Moderate, modern, liberal Muslims who are arguably more representative of British Islam as a whole (not to mention sundry Shiites, Sufis, Ahmadis, and Ismailis) have too often found it hard to get a hearing.
Sunni organizations that openly supported suicide-bomb attacks in Israel and India and that justified attacks on British troops in Iraq and Afghanistan nevertheless received government subsidies as part of Prevent. The hope was that in return, they would alert the authorities if they knew of individuals planning attacks in the UK itself.
It was a gamble reminiscent of British colonial practice in India’s northwest frontier and elsewhere. Not only were there financial inducements in return for grudging cooperation; the British state offered other, symbolically powerful concessions. These included turning a blind eye to certain crimes and antisocial practices such as female genital mutilation (there have been no successful prosecutions relating to the practice, though thousands of cases are reported every year), forced marriage, child marriage, polygamy, the mass removal of girls from school soon after they reach puberty, and the epidemic of racially and religiously motivated “grooming” rapes in cities like Rotherham. (At the same time, foreign jihadists—including men wanted for crimes in Algeria and France—were allowed to remain in the UK as long as their plots did not include British targets.)
This approach, simultaneously cynical and naive, was never as successful as its proponents hoped. Again and again, Muslim chaplains approved to work in prisons and other institutions have turned out to be Islamist extremists whose words have inspired inmates to join terrorist organizations.
Much to his credit, former Prime Minister David Cameron fought hard to change this approach, even though it meant difficult confrontations with his home secretary (Theresa May), as well as police and the intelligence agencies. However, Cameron’s efforts had little effect on the permanent personnel carrying out the Prevent strategy, and cooperation with Islamist but currently nonviolent organizations remains the default setting within the institutions on which the United Kingdom depends for security.
The failure to understand the role of ideology is one of imagination as well as education. Very few of those who make government policy or write about home-grown terrorism seem able to escape the limitations of what used to be called “bourgeois” experience. They assume that anyone willing to become an Islamist terrorist must perforce be materially deprived, or traumatized by the experience of prejudice, or provoked to murderous fury by oppression abroad. They have no sense of the emotional and psychic benefits of joining a secret terror outfit: the excitement and glamor of becoming a kind of Islamic James Bond, bravely defying the forces of an entire modern state. They don’t get how satisfying or empowering the vengeful misogyny of ISIS-style fundamentalism might seem for geeky, frustrated young men. Nor can they appreciate the appeal to the adolescent mind of apocalyptic fantasies of power and sacrifice (mainstream British society does not have much room for warrior dreams, given that its tone is set by liberal pacifists). Finally, they have no sense of why the discipline and self-discipline of fundamentalist Islam might appeal so strongly to incarcerated lumpen youth who have never experienced boundaries or real belonging. Their understanding is an understanding only of themselves, not of the people who want to kill them.
Review of ‘White Working Class’ by Joan C. Williams
Williams is a prominent feminist legal scholar with degrees from Yale, MIT, and Harvard. Unbending Gender, her best-known book, is the sort of tract you’d expect to find at an intersectionality conference or a Portlandia bookstore. This is why her insightful, empathic book comes as such a surprise.
Books and essays on the topic have accumulated into a highly visible genre since Donald Trump came on the American political scene; J.D. Vance’s Hillbilly Elegy planted itself at the top of bestseller lists almost a year ago and still isn’t budging. As with Vance, Williams’s interest in the topic is personal. She fell “madly in love with” and eventually married a Harvard Law School graduate who had grown up in an Italian neighborhood in pre-gentrification Brooklyn. Williams, on the other hand, is a “silver-spoon girl.” Her father’s family was moneyed, and her maternal grandfather was a prominent Reform rabbi.
The author’s affection for her “class-migrant” spouse and respect for his family’s hardships—“My father-in-law grew up on blood soup,” she announces in her opening sentence—adds considerable warmth to what is at bottom a political pamphlet. Williams believes that elite condescension and “cluelessness” played a big role in Trump’s unexpected and dreaded victory. Enlightening her fellow elites is essential to the task of returning Trump voters to the progressive fold where, she is sure, they rightfully belong.
Liberals were not always so dense about the working class, Williams observes. WPA murals and movies like On the Waterfront showed genuine fellow feeling for the proletariat. In the 1970s, however, the liberal mood changed. Educated boomers shifted their attention to “issues of peace, equal rights, and environmentalism.” Instead of feeling the pain of Arthur Miller and John Steinbeck characters, they began sneering at the less enlightened. These days, she notes, elite sympathies are limited to the poor, people of color (POC), and the LGBTQ population. Despite clear evidence of suffering—stagnant wages, disappearing manufacturing jobs, declining health and well-being—the working class gets only fly-over snobbery at best and, more often, outright loathing.
Williams divides her chapters into a series of explainers to questions she has heard from her clueless friends and colleagues: “Why Does the Working Class Resent the Poor?” “Why Does the Working Class Resent Professionals but Admire the Rich?” “Why Doesn’t the Working Class Just Move to Where the Jobs Are?” “Is the Working Class Just Racist?” She weaves her answers into a compelling picture of a way of life and worldview foreign to her targeted readers. Working-class Americans have had to struggle for whatever stability and comfort they have, she explains. Clocking in for midnight shifts year after year, enduring capricious bosses, plant closures, and layoffs, they’re reliant on tag-team parenting and stressed-out relatives for child care. The campus go-to word “privileged” seems exactly wrong.
Proud of their own self-sufficiency and success, however modest, they don’t begrudge the self-made rich. It’s snooty professionals and the dysfunctional poor who get their goat. From their vantage point, subsidizing the day care for a welfare mother when they themselves struggle to manage care on their own dime mocks both their hard work and their beliefs. And since, unlike most professors, they shop in the same stores as the dependent poor, they’ve seen that some of them game the system. Of course that stings.
White Working Class is especially good at evoking the alternate economic and mental universe experienced by Professional and Managerial Elites, or “PMEs.” PMEs see their non-judgment of the poor, especially those who are “POC,” as a mark of their mature understanding that we live in an unjust, racist system whose victims require compassion regardless of whether they have committed any crime. At any rate, their passions lie elsewhere. They define themselves through their jobs and professional achievements, hence their obsession with glass ceilings.
Williams tells the story of her husband’s faux pas at a high-school reunion. Forgetting his roots for a moment, the Ivy League–educated lawyer asked one of his Brooklyn classmates a question that is the go-to opener in elite social settings: “What do you do?” Angered by what must have seemed like deliberate humiliation by this prodigal son, the man hissed: “I sell toilets.”
Instead of stability and backyard barbecues with family and long-time neighbors and maybe the occasional Olive Garden celebration, PMEs are enamored of novelty: new foods, new restaurants, new friends, new experiences. The working class chooses to spend its leisure in comfortable familiarity; for the elite, social life is a lot like networking. Members of the professional class may view themselves as sophisticated or cosmopolitan, but, Williams shows, to the blue-collar worker their glad-handing is closer to phony social climbing and their abstract, knowledge-economy jobs more like self-important pencil-pushing.
White Working Class has a number of proposals for creating the progressive future Williams would like to see. She wants to get rid of college-for-all dogma and improve training for middle-skill jobs. She envisions a working-class coalition of all races and ethnicities bolstered by civics education with a “distinctly celebratory view of American institutions.” In a saner political environment, some of this would make sense; indeed, she echoes some of Marco Rubio’s 2016 campaign themes. It’s little wonder White Working Class has already gotten the stink eye from liberal reviewers for its purported sympathies for racists.
Alas, impressive as Williams’s insights are, they do not always allow her to transcend her own class loyalties. Unsurprisingly, her own PME biases mostly come to light in her chapters on race and gender. She reduces immigration concerns to “fear of brown people,” even as she notes elsewhere that a quarter of Latinos also favor a wall at the southern border. This contrasts startlingly with her succinct observation that “if you don’t want to drive working-class whites to be attracted to the likes of Limbaugh, stop insulting them.” In one particularly obtuse moment, she asserts: “Because I study social inequality, I know that even Malia and Sasha Obama will be disadvantaged by race, advantaged as they are by class.” She relies on dubious gender theories to explain why the majority of white women voted for Trump rather than for his unfairly maligned opponent. That Hillary Clinton epitomized every elite quality Williams has just spent more than a hundred pages explicating escapes her notice. Williams’s own reflexive retreat into identity politics is itself emblematic of our toxic divisions, but it does not invalidate the power of this astute book.
When music could not transcend evil
The story of European classical music under the Third Reich is one of the most squalid chapters in the annals of Western culture, a chronicle of collective complaisance that all but beggars belief. Without exception, all of the well-known musicians who left Germany and Austria in protest when Hitler came to power in 1933 were either Jewish or, like the violinist Adolf Busch, Rudolf Serkin’s father-in-law, had close family ties to Jews. Moreover, most of the small number of non-Jewish musicians who emigrated later on, such as Paul Hindemith and Lotte Lehmann, are now known to have done so not out of principle but because they were unable to make satisfactory accommodations with the Nazis. Everyone else—including Karl Böhm, Wilhelm Furtwängler, Walter Gieseking, Herbert von Karajan, and Richard Strauss—stayed behind and served the Reich.
The Berlin and Vienna Philharmonics, then as now Europe’s two greatest orchestras, were just as willing to do business with Hitler and his henchmen, firing their Jewish members and ceasing to perform the music of Jewish composers. Even after the war, the Vienna Philharmonic was notorious for being the most anti-Semitic orchestra in Europe, and it was well known in the music business (though never publicly discussed) that Helmut Wobisch, the orchestra’s principal trumpeter and its executive director from 1953 to 1968, had been both a member of the SS and a Gestapo spy.
The management of the Berlin Philharmonic made no attempt to cover up the orchestra’s close relationship with the Third Reich, no doubt because the Nazi ties of Karajan, who was its music director from 1956 until shortly before his death in 1989, were a matter of public record. Yet it was not until 2007 that a full-length study of its wartime activities, Misha Aster’s The Reich’s Orchestra: The Berlin Philharmonic 1933–1945, was finally published. As for the Vienna Philharmonic, its managers long sought to quash all discussion of the orchestra’s Nazi past, steadfastly refusing to open its institutional archives to scholars until 2008, when Fritz Trümpi, an Austrian scholar, was given access to its records. Five years later, the Viennese, belatedly following the precedent of the Berlin Philharmonic, added a lengthy section to their website called “The Vienna Philharmonic Under National Socialism (1938–1945),” in which the damning findings of Trümpi and two other independent scholars were made available to the public.
Now Trümpi has published The Political Orchestra: The Vienna and Berlin Philharmonics During the Third Reich, in which he tells how they came to terms with Nazism, supplying pre- and postwar historical context for their transgressions.1 Written in a stiff mixture of academic jargon and translatorese, The Political Orchestra is ungratifying to read. Even so, the tale that it tells is both compelling and disturbing, especially to anyone who clings to the belief that high art is ennobling to the spirit.
Unlike the Vienna Philharmonic, which has always doubled as the pit orchestra for the Vienna State Opera, the Berlin Philharmonic started life in 1882 as a fully independent, self-governing entity. Initially unsubsidized by the state, it kept itself afloat by playing a grueling schedule of performances, including “popular” non-subscription concerts for which modest ticket prices were levied. In addition, the orchestra made records and toured internationally at a time when neither was common.
These activities made it possible for the Berlin Philharmonic to develop into an internationally renowned ensemble whose fabled collective virtuosity was widely seen as a symbol of German musical distinction. Furtwängler, the orchestra’s principal conductor, declared in 1932 that the German music in which it specialized was “one of the very few things that actually contribute to elevating [German] prestige.” Hence, he explained, the need for state subsidy, which he saw as “a matter of [national] prestige, that is, to some extent a requirement of national prudence.” By then, though, the orchestra was already heavily subsidized by the city of Berlin, thus paving the way for its takeover by the Nazis.
The Vienna Philharmonic, by contrast, had always been subsidized. Founded in 1842 when the orchestra of what was then the Vienna Court Opera decided to give symphonic concerts on its own, it performed the Austro-German classics for an elite cadre of longtime subscribers. By restricting membership to local players and their pupils, the orchestra cultivated what Furtwängler, who spent as much time conducting in Vienna as in Berlin, described as a “homogeneous and distinct tone quality.” At once dark and sweet, it was as instantly identifiable—and as characteristically Viennese—as the strong, spicy bouquet of a Gewürztraminer wine.
Unlike the Berlin Philharmonic, which played for whoever would pay the tab and programmed new music as a matter of policy, the Vienna Philharmonic chose not to diversify either its haute-bourgeois audience or its conservative repertoire. Instead, it played Beethoven, Brahms, Haydn, Mozart, and Schubert (and, later, Bruckner and Richard Strauss) in Vienna for the Viennese. Starting in the ’20s, the orchestra’s recordings consolidated its reputation as one of the world’s foremost instrumental ensembles, but its internal culture remained proudly insular.
What the two orchestras had in common was a nationalistic ethos, a belief in the superiority of Austro-German musical culture that approached triumphalism. One of the darkest manifestations of this ethos was their shared reluctance to hire Jews. The Berlin Philharmonic employed only four Jewish players in 1933, while the Vienna Philharmonic contained only 11 Jews at the time of the Anschluss, none of whom was hired after 1920. To be sure, such popular Jewish conductors as Otto Klemperer and Bruno Walter continued to work in Vienna for as long as they could. Two months before the Anschluss, Walter led and recorded a performance of the Ninth Symphony of Gustav Mahler, his musical mentor and fellow Jew, who from 1897 to 1907 had been the director of the Vienna Court Opera and one of the Philharmonic’s most admired conductors. But many members of both orchestras were open supporters of fascism, and not a few were anti-Semites who ardently backed Hitler. By 1942, 62 of the 123 active members of the Vienna Philharmonic were Nazi party members.
The admiration that Austro-German classical musicians had for Hitler is not entirely surprising since he was a well-informed music lover who declared in 1938 that “Germany has become the guardian of European culture and civilization.” He made the support of German art, very much including music, a key part of his political program. Accordingly, the Berlin Philharmonic was placed under the direct supervision of Joseph Goebbels, who ensured the cooperation of its members by repeatedly raising their salaries, exempting them from military service, and guaranteeing their old-age pensions. But there had never been any serious question of protest, any more than there would be among the members of the Vienna Philharmonic when the Nazis gobbled up Austria. Save for the Jews and one or two non-Jewish players who were fired for reasons of internal politics, the musicians went along unhesitatingly with Hitler’s desires.
With what did they go along? Above all, they agreed to the scrubbing of Jewish music from their programs and the dismissal of their Jewish colleagues. Some Jewish players managed to escape with their lives, but seven of the Vienna Philharmonic’s 11 Jews were either murdered by the Nazis or died as a direct result of official persecution. In addition, both orchestras performed regularly at official government functions and made tours and other public appearances for propaganda purposes, and both were treated as gems in the diadem of Nazi culture.
As for Furtwängler, the most prominent of the Austro-German orchestral conductors who served the Reich, his relationship to Nazism continues to be debated to this day. He had initially resisted the firing of the Berlin Philharmonic’s Jewish members and protected them for as long as he could. But he was also a committed (if woolly-minded) nationalist who believed that German music had “a different meaning for us Germans than for other nations” and notoriously declared in an open letter to Goebbels that “we all welcome with great joy and gratitude . . . the restoration of our national honor.” Thereafter he cooperated with the Nazis, by all accounts uncomfortably but—it must be said—willingly. A monster of egotism, he saw himself as the greatest living exponent of German music and believed it to be his duty to stay behind and serve a cause higher than what he took to be mere party politics. “Human beings are free wherever Wagner and Beethoven are played, and if they are not free at first, they are freed while listening to these works,” he naively assured a horrified Arturo Toscanini in 1937. “Music transports them to regions where the Gestapo can do them no harm.”
Once the war was over, the U.S. occupation forces decided to enlist the Berlin Philharmonic in the service of a democratic, anti-Soviet Germany. Furtwängler and Herbert von Karajan, who succeeded him as principal conductor, were officially “de-Nazified” and their orchestra allowed to function largely undisturbed, though six Nazi Party members were fired. The Vienna Philharmonic received similarly privileged treatment.
Needless to say, there was more to this decision than Cold War politics. No one questioned the unique artistic stature of either orchestra. Moreover, the Vienna Philharmonic, precisely because of its insularity, was now seen as a living museum piece, a priceless repository of 19th-century musical tradition. Still, many musicians and listeners, Jews above all, looked askance at both orchestras for years to come, believing them to be tainted by Nazism.
Indeed they were, so much so that they treated many of their surviving Jewish ex-members in a way that can only be described as vicious. In the most blatant individual case, the violinist Szymon Goldberg, who had served as the Berlin Philharmonic’s concertmaster under Furtwängler, was not allowed to reassume his post in 1945 and was subsequently denied a pension. As for the Vienna Philharmonic, the fact that it made Helmut Wobisch, a trumpeter who had been a member of the Nazi party and the SS, its executive director says everything about its deep-seated unwillingness to face up to its collective sins.
Be that as it may, scarcely any prominent musicians chose to boycott either orchestra. Leonard Bernstein went so far as to affect a flippant attitude toward the morally equivocal conduct of the Austro-German artists whom he encountered in Europe after the war. Upon meeting Herbert von Karajan in 1954, he actually told his wife Felicia that he had become “real good friends with von Karajan, whom you would (and will) adore. My first Nazi.”
At the same time, though, Bernstein understood what he was choosing to overlook. When he conducted the Vienna Philharmonic for the first time in 1966, he wrote to his parents:
I am enjoying Vienna enormously—as much as a Jew can. There are so many sad memories here; one deals with so many ex-Nazis (and maybe still Nazis); and you never know if the public that is screaming bravo for you might contain someone who 25 years ago might have shot me dead. But it’s better to forgive, and if possible, forget. The city is so beautiful, and so full of tradition. Everyone here lives for music, especially opera, and I seem to be the new hero.
Did Bernstein sell his soul for the opportunity to work with so justly renowned an orchestra—and did he get his price by insisting that its members perform the symphonies of Mahler, with which he was by then closely identified? It is a fair question, one that does not lend itself to easy answers.
Even more revealing is the case of Bruno Walter, who never forgave Furtwängler for staying behind in Germany, informing him in an angry letter that “your art was used as a conspicuously effective means of propaganda for the regime of the Devil.” Yet Walter’s righteous anger did not stop him from conducting in Vienna after the war. Born in Berlin, he had come to identify with the Philharmonic so closely that it was impossible for him to seriously consider quitting its podium permanently. “Spiritually, I was a Viennese,” he wrote in Theme and Variations, his 1946 autobiography. In 1952, he made a second recording with the Vienna Philharmonic of Mahler’s Das Lied von der Erde, whose premiere he had conducted in 1911 and which he had recorded in Vienna 15 years earlier. One wonders what Walter, who had converted to Christianity but had been driven out of both his native lands for the crime of being Jewish, made of the text of the last movement: “My friend, / On this earth, fortune has not been kind to me! / Where do I go?”
As for the two great orchestras of the Third Reich, both have finally acknowledged their guilt and been forgiven, at least by those who know little of their past. It would occur to no one to decline on principle to perform with either group today. Such a gesture would surely be condemned as morally ostentatious, an exercise in what we now call virtue-signaling. Yet it is impossible to forget what Samuel Lipman wrote in 1993 in Commentary apropos the wartime conduct of Furtwängler: “The ultimate triumph of totalitarianism, I suppose it can be said, is that under its sway only a martyred death can be truly moral.” For the only martyrs of the Berlin and Vienna Philharmonics were their Jews. The orchestras themselves live on, tainted and beloved.
He knows what to reveal and what to conceal, understands the importance of keeping the semblance of distance between oneself and the story of the day, and comprehends the ins and outs of anonymous sourcing. Within days of his being fired by President Trump on May 9, for example, little green men and women, known only as his “associates,” began appearing in the pages of the New York Times and Washington Post to dispute key points of the president’s account of his dismissal and to promote Comey’s theory of the case.
“In a Private Dinner, Trump Demanded Loyalty,” the New York Times reported on May 11. “Comey Demurred.” The story was a straightforward narrative of events from Comey’s perspective, capped with an obligatory denial from the White House. The next day, the Washington Post reported, “Comey associates dispute Trump’s account of conversations.” The Post did not identify Comey’s associates, other than saying that they were “people who have worked with him.”
Maybe they were the same associates who had gabbed to the Times. Or maybe they were different ones. Who can tell? Regardless, the story these particular associates gave to the Post was readable and gripping. Comey, the Post reported, “was wary of private meetings and discussions with the president and did not offer the assurance, as Trump has claimed, that Trump was not under investigation as part of the probe into Russian interference in last year’s election.”
On May 16, Michael S. Schmidt of the Times published his scoop, “Comey Memo Says Trump Asked Him to End Flynn Investigation.” Schmidt didn’t see the memo for himself. Parts of it were read to him by—you guessed it—“one of Mr. Comey’s associates.” The following day, Robert Mueller was appointed special counsel to oversee the Russia investigation. On May 18, the Times, citing “two people briefed” on a call between Comey and the president, reported, “Comey, Unsettled by Trump, Is Said to Have Wanted Him Kept at a Distance.” And by the end of that week, Comey had agreed to testify before the Senate Intelligence Committee.
As his testimony approached, Comey’s people became more aggressive in their criticisms of the president. “Trump Should Be Scared, Comey Friend Says,” read the headline of a CNN interview with Brookings Institution fellow Benjamin Wittes. This “Comey friend” said he was “very shocked” when he learned that President Trump had asked Comey for loyalty. “I have no doubt that he regarded the group of people around the president as dishonorable,” Wittes said.
Comey, Wittes added, was so uncomfortable at the White House reception in January honoring law enforcement—the one where Comey lumbered across the room and Trump whispered something in his ear—that, as CNN paraphrased it, he “stood in a position so that his blue blazer would blend in with the room’s blue drapes in an effort for Trump to not notice him.” The integrity, the courage—can you feel it?
On June 6, the day before Comey’s prepared testimony was released, more “associates” told ABC that the director would “not corroborate Trump’s claim that on three separate occasions Comey told the president he was not under investigation.” And a “source with knowledge of Comey’s testimony” told CNN the same thing. In addition, ABC reported that, according to “a source familiar with Comey’s thinking,” the former director would say that Trump’s actions stopped short of obstruction of justice.
Maybe those sources weren’t as “familiar with Comey’s thinking” as they thought or hoped? To maximize the press coverage he already dominated, Comey had authorized the Senate Intelligence Committee to release his testimony ahead of his personal interview. That testimony told a different story than what had been reported by CNN and ABC (and by the Post on May 12). Comey had in fact told Trump the president was not under investigation—on January 6, January 27, and March 30. Moreover, the word “obstruction” did not appear at all in his written text. The senators asked Comey if he felt Trump obstructed justice. He declined to answer either way.
My guess is that Comey’s associates lacked Comey’s scalpel-like, almost Jesuitical ability to make distinctions, and therefore misunderstood what he was telling them to say to the press. Because it’s obvious Comey was the one behind the stories of Trump’s dishonesty and bad behavior. He admitted as much in front of the cameras in a remarkable exchange with Senator Susan Collins of Maine.
Comey said that, after Trump tweeted on May 12 that he’d better hope there aren’t “tapes” of their conversations, “I asked a friend of mine to share the content of the memo with a reporter. Didn’t do it myself, for a variety of reasons. But I asked him to, because I thought that might prompt the appointment of a special counsel. And so I asked a close friend of mine to do it.”
Collins asked whether that friend had been Wittes, known to cable news junkies as Comey’s bestie. Comey said no. The source for the New York Times article was “a good friend of mine who’s a professor at Columbia Law School,” Daniel Richman.
Every time I watch or read that exchange, I am amazed. Here is the former director of the FBI just flat-out admitting that, for months, he wrote down every interaction he had with the president of the United States because he wanted a written record in case the president ever fired or lied about him. And when the president did fire and lie about him, that director set in motion a series of public disclosures with the intent of not only embarrassing the president, but also forcing the appointment of a special counsel who might end up investigating the president for who knows what. And none of this would have happened if the president had not fired Comey or tweeted about him. He told the Senate that if Trump hadn’t dismissed him, he most likely would still be on the job.
Rarely, in my view, are high officials so transparent in describing how Washington works. Comey revealed to the world that he was keeping a file on his boss, that he used go-betweens to get his story into the press, that “investigative journalism” is often just powerful people handing documents to reporters to further their careers or agendas or even to get revenge. And as long as you maintain some distance from the fallout, and stick to the absolute letter of the law, you will come out on top, so long as you have a small army of nightingales singing to reporters on your behalf.
“It’s the end of the Comey era,” A.B. Stoddard said on Special Report with Bret Baier the other day. On the contrary: I have a feeling that, as the Russia investigation proceeds, we will be hearing much more from Comey. And from his “associates.” And his “friends.” And persons “familiar with his thinking.”
In April, COMMENTARY asked a wide variety of writers, thinkers, and broadcasters to respond to this question: Is free speech under threat in the United States? We received twenty-seven responses. We publish them here in alphabetical order.
Floyd Abrams
Free expression threatened? By Donald Trump? I guess you could say so.
When a president engages in daily denigration of the press, when he characterizes it as the enemy of the people, when he repeatedly says that the libel laws should be “loosened” so he can personally commence more litigation, when he says that journalists shouldn’t be allowed to use confidential sources, it is difficult even to suggest that he has not threatened free speech. And when he says to the head of the FBI (as former FBI director James Comey has said that he did) that Comey should consider “putting reporters in jail for publishing classified information,” it is difficult not to take those threats seriously.
The harder question, though, is this: How real are the threats? Or, as Michael Gerson put it in the Washington Post: Will Trump “go beyond mere Twitter abuse and move against institutions that limit his power?” Some of the president’s threats against the institution of the press, wittingly or not, have been simply preposterous. Surely someone has told him by now that neither he nor Congress can “loosen” libel laws; while each state has its own libel law, there is no federal libel law and thus nothing for him to loosen. What he obviously takes issue with is the impact that the Supreme Court’s 1964 First Amendment opinion in New York Times v. Sullivan has had on state libel laws. The case determined that public officials who sue for libel may not prevail unless they demonstrate that the statements made about them were false and were made with actual knowledge or suspicion of that falsity. So his objection to the rules governing libel law is to nothing less than the application of the First Amendment itself.
In other areas, however, the Trump administration has far more power to imperil free speech. We live under an Espionage Act, adopted a century ago, which is both broad in its language and uncommonly vague in its meaning. As such, it remains a half-open door through which an administration that is hostile to free speech might walk. Such an administration could initiate criminal proceedings against journalists who write about defense- or intelligence-related topics on the basis that classified information was leaked to them by present or former government employees. No such action has ever been commenced against a journalist. Press lawyers and civil-liberties advocates have strong arguments that the law may not be read so broadly and still be consistent with the First Amendment. But the scope of the Espionage Act and the impact of the First Amendment upon its interpretation remain unknown.
A related area in which the attitude of an administration toward the press may affect the latter’s ability to function as a check on government relates to the ability of journalists to protect the identity of their confidential sources. The Obama administration prosecuted more Espionage Act cases against sources of information to journalists than all prior administrations combined. After a good deal of deserved press criticism, it agreed to expand the internal guidelines of the Department of Justice designed to limit the circumstances under which such source revelation is demanded. But the guidelines are none too protective and are, after all, simply guidelines. A new administration is free to change or limit them or, in fact, abandon them altogether. In this area, as in so many others, it is too early to judge the ultimate treatment of free expression by the Trump administration. But the threats are real, and there is good reason to be wary.
Floyd Abrams is the author of The Soul of the First Amendment (Yale University Press, 2017).
Ayaan Hirsi Ali
Freedom of speech is being threatened in the United States by a nascent culture of hostility to different points of view. As political divisions in America have deepened, a conformist mentality of “right thinking” has spread across the country. Increasingly, American universities, where no intellectual doctrine ought to escape critical scrutiny, are some of the most restrictive domains when it comes to asking open-ended questions on subjects such as Islam.
Legally, speech in the United States is protected to a degree unmatched in almost any industrialized country. The U.S. has avoided unpredictable Canadian-style restrictions on speech, for example. I remain optimistic that as long as we have the First Amendment in the U.S., any attempt at formal legal censorship will be vigorously challenged.
Culturally, however, matters are very different in America. The regressive left is the forerunner threatening free speech on any issue that is important to progressives. The current pressure coming from those who call themselves “social-justice warriors” is unlikely to lead to successful legislation to curb the First Amendment. Instead, censorship is spreading in the cultural realm, particularly at institutions of higher learning.
The way activists of the regressive left achieve silence or censorship is by creating a taboo, and one of the most pernicious taboos in operation today is the word “Islamophobia.” Islamists are similarly motivated to rule any critical scrutiny of Islamic doctrine out of order. There is now a university center (funded by Saudi money) in the U.S. dedicated to monitoring and denouncing incidences of “Islamophobia.”
The term “Islamophobia” is used against critics of political Islam, but also against progressive reformers within Islam. The term implies an irrational fear that is tainted by hatred, and it has had a chilling effect on free speech. In fact, “Islamophobia” is a poorly defined term. Islam is not a race, and it is very often perfectly rational to fear some expressions of Islam. No set of ideas should be beyond critical scrutiny.
To push back in this cultural realm—in our universities, in public discourse—those favoring free speech should focus more on the message of dawa, the set of ideas that the Islamists want to promote. If the aims of dawa are sufficiently exposed, ordinary Americans and Muslim Americans will reject it. The Islamist message is a message of divisiveness, misogyny, and hatred. It’s anachronistic and wants people to live by tribal norms dating from the seventh century. The best antidote to Islamic extremism is the revelation of what its primary objective is: a society governed by Sharia. This is the opposite of censorship: It is documenting reality. What is life like in Saudi Arabia, Iran, the Northern Nigerian States? What is the true nature of Sharia law?
Islamists want to hide the true meaning of Sharia, Jihad, and the implications for women, gays, religious minorities, and infidels under the veil of “Islamophobia.” Islamists use “Islamophobia” to obfuscate their vision and imply that any scrutiny of political Islam is hatred and bigotry. The antidote to this is more exposure and more speech.
As pressure on freedom of speech increases from the regressive left, we must reject the notions that only Muslims can speak about Islam, and that any critical examination of Islamic doctrines is inherently “racist.”
Instead of contorting Western intellectual traditions so as not to offend our Muslim fellow citizens, we need to defend the Muslim dissidents who are risking their lives to promote the human rights we take for granted: equality for women, tolerance of all religions and orientations, our hard-won freedoms of speech and thought.
It is by nurturing and protecting such speech that progressive reforms can emerge within Islam. By accepting the increasingly narrow confines of acceptable discourse on issues such as Islam, we do dissidents and progressive reformers within Islam a grave disservice. For truly progressive reforms within Islam to be possible, full freedom of speech will be required.
Ayaan Hirsi Ali is a research fellow at the Hoover Institution, Stanford University, and the founder of the AHA Foundation.
Lee C. Bollinger
I know it is too much to expect that political discourse mimic the measured, self-questioning, rational, footnoting standards of the academy, but there is a difference between robust political debate and political debate infected with fear or panic. The latter introduces a state of mind that is visceral and irrational. In the realm of fear, we move beyond the reach of reason and a sense of proportionality. When we fear, we lose the capacity to listen and can become insensitive and mean.
Our Constitution is well aware of this fact about the human mind and of its negative political consequences. In the First Amendment jurisprudence established over the past century, we find many expressions of the problematic state of mind that is produced by fear. Among the most famous and potent is that of Justice Brandeis in Whitney v. California in 1927, one of the many cases involving aggravated fears of subversive threats from abroad. “It is the function of (free) speech,” he said, “to free men from the bondage of irrational fears.” “Men feared witches,” Brandeis continued, “and burned women.”
Today, our “witches” are terrorists, and Brandeis’s metaphorical “women” include the refugees (mostly children) and displaced persons, immigrants, and foreigners whose lives have been thrown into suspension and doubt by policies of exclusion.
The same fears of the foreign that take hold of a population inevitably infect our internal interactions and institutions, yielding suppression of unpopular and dissenting voices, victimization of vulnerable groups, attacks on the media, and the rise of demagoguery, with its disdain for facts, reason, expertise, and tolerance.
All of this places a very special obligation on those of us within universities. Not only must we make the case in every venue for the values that form the core of who we are and what we do, but we must also live up to our own principles of free inquiry and fearless engagement with all ideas. This is why recent incidents on a handful of college campuses disrupting and effectively censoring speakers are so alarming. Such acts not only betray a basic principle but also inflame a rising prejudice against the academic community, and they feed efforts to delegitimize our work at the very moment when it’s most needed.
I do not for a second support the view that this generation has an unhealthy aversion to engaging differences of opinion. That is a modern trope of polarization, as is the portrayal of universities as hypocritical about academic freedom and political correctness. But now, in this environment especially, universities must be at the forefront of defending the rights of all students and faculty to listen to controversial voices, to engage disagreeable viewpoints, and to make every effort to demonstrate our commitment to the sort of fearless and spirited debate that we are simultaneously asking of the larger society. Anyone with a voice can shout over a speaker; but being able to listen to and then effectively rebut those with whom we disagree—particularly those who themselves peddle intolerance—is one of the greatest skills our education can bestow. And it is something our democracy desperately needs more of. That is why, I say to you now, if speakers who are being denied access to other campuses come here, I will personally volunteer to introduce them, and listen to them, however much I may disagree with them. But I will also never hesitate to make clear why I disagree with them.
Lee C. Bollinger is the 19th president of Columbia University and the author of Uninhibited, Robust, and Wide-Open: A Free Press for a New Century. This piece has been excerpted from President Bollinger’s May 17 commencement address.
Richard A. Epstein
Today, the greatest threat to the constitutional protection of freedom of speech comes from campus rabble-rousers who invoke this very protection. In their book, the speech of people like Charles Murray and Heather Mac Donald constitutes a form of violence, bordering on genocide, that receives no First Amendment protection. Enlightened protestors are both bound and entitled to shout them down, by force or other disruptive actions, if their universities are so foolish as to extend them an invitation to speak. Any indignant minority may take the law into its own hands to eradicate the intellectual cancer before it spreads on their own campus.
By such tortured logic, a new generation of vigilantes distorts the First Amendment doctrine: Speech becomes violence, and violence becomes heroic acts of self-defense. The standard First Amendment interpretation emphatically rejects that view. Of course, the First Amendment doesn’t let you say what you want when and wherever you want to. Your freedom of speech is subject to the same limitations as your freedom of action. So you have no constitutional license to assault other people, to lie to them, or to form cartels to bilk them in the marketplace. But folks such as Murray, Mac Donald, and even Yiannopoulos do not come close to crossing into that forbidden territory. They are not using, for example, “fighting words,” rightly limited to words or actions calculated to provoke immediate aggression against a known target. Fighting words are worlds apart from speech that provokes a negative reaction in those who find your speech offensive solely because of the content of its message.
This distinction is central to the First Amendment. Fighting words have to be blocked by well-tailored criminal and civil sanctions lest some people gain license to intimidate others from speaking or peaceably assembling. The remedy for mere offense is to speak one’s mind in response. But it never gives anyone the right to block the speech of others, lest everyone be able to unilaterally increase his sphere of action by getting really angry about the beliefs of others. No one has the right to silence others by working himself into a fit of rage.
Obviously, it is intolerable to let mutual animosity generate factional warfare, whereby everyone can use force to silence rivals. To avoid this war of all against all, each side claims that only its actions are privileged. These selective claims quickly degenerate into a form of viewpoint discrimination, which undermines one of the central protections that traditional First Amendment law erects: a wall against each and every group out to destroy the level playing field on which robust political debate rests. Every group should be at risk for having its message fall flat. The new campus radicals want to upend that understanding by shutting down their adversaries if their universities do not. Their aggression must be met, if necessary, by counterforce. Silence in the face of aggression is not an acceptable alternative.
Richard A. Epstein is the Laurence A. Tisch Professor of Law at the New York University School of Law.
David French

We’re living in the midst of a troubling paradox. At the exact same time that First Amendment jurisprudence has arguably never been stronger and more protective of free expression, millions of Americans feel they simply can’t speak freely. Indeed, talk to Americans living and working in the deep-blue confines of the academy, Hollywood, and the tech sector, and you’ll get a sense of palpable fear. They’ll explain that they can’t say what they think and keep their jobs, their friends, and sometimes even their families.
The government isn’t cracking down or censoring; instead, Americans are using free speech to destroy free speech. For example, a social-media shaming campaign is an act of free speech. So is an economic boycott. So is turning one’s back on a public speaker. So is a private corporation firing a dissenting employee for purely political reasons. Each of these actions is largely protected from government interference, and each one represents an expression of the speaker’s ideas and values.
The problem, however, is obvious. The goal of each of these kinds of actions isn’t to persuade; it’s to intimidate. The goal isn’t to foster dialogue but to coerce conformity. The result is a marketplace of ideas that has been emptied of all but the approved ideological vendors—at least in those communities that are dominated by online thugs and corporate bullies. Indeed, this mindset has become so prevalent that in places such as Portland, Berkeley, Middlebury, and elsewhere, the bullies and thugs have crossed the line from protected—albeit abusive—speech into outright shout-downs and mob violence.
But there’s something else going on, something that’s insidious in its own way. While politically correct shaming still has great power in deep-blue America, its effect in the rest of the country is to trigger a furious backlash, one characterized less by a desire for dialogue and discourse than by its own rage and scorn. So we’re moving toward two Americas—one that ruthlessly (and occasionally illegally) suppresses dissenting speech and the other that is dangerously close to believing that the opposite of political correctness isn’t a fearless expression of truth but rather the fearless expression of ideas best calculated to enrage your opponents.
The result is a partisan feedback loop where right-wing rage spurs left-wing censorship, which spurs even more right-wing rage. For one side, a true free-speech culture is a threat to feelings, sensitivities, and social justice. The other side waves high the banner of “free speech” to sometimes elevate the worst voices to the highest platforms—not so much to protect the First Amendment as to infuriate the hated “snowflakes” and trigger the most hysterical overreactions.
The culturally sustainable argument for free speech is something else entirely. It reminds the cultural left of its own debt to free speech while reminding the political right that a movement allegedly centered around constitutional values can’t abandon the concept of ordered liberty. The culture of free speech thrives when all sides remember their moral responsibilities—to both protect the right of dissent and to engage in ideological combat with a measure of grace and humility.
David French is a senior writer at National Review.
Pamela Geller

The real question isn’t whether free speech is under threat in the United States, but rather, whether it’s irretrievably lost. Can we get it back? Not without war, I suspect, as is evidenced by the violence at colleges whenever there’s the shamefully rare event of a conservative speaker on campus.
Free speech is the soul of our nation and the foundation of all our other freedoms. If we can’t speak out against injustice and evil, those forces will prevail. Freedom of speech is the foundation of a free society. Without it, a tyrant can wreak havoc unopposed, while his opponents are silenced.
With that principle in mind, I organized a free-speech event in Garland, Texas. The world had recently been rocked by the murder of the Charlie Hebdo cartoonists. My version of “Je Suis Charlie” was an event here in America to show that we can still speak freely and draw whatever we like in the Land of the Free. Yet even after jihadists attacked our event, I was blamed—by Donald Trump among others—for provoking Muslims. And if I tried to hold a similar event now, no arena in the country would allow me to do so—not just because of the security risk, but because of the moral cowardice of all intellectual appeasers.
Under what law is it wrong to depict Muhammad? Under Islamic law. But I am not a Muslim, and I don’t live under Sharia. America isn’t under Islamic law, yet for standing for free speech, I’ve been:
- Prevented from running our advertisements in every major city in this country. We have won free-speech lawsuits all over the country, which officials circumvent by prohibiting all political ads (while making exceptions for ads from Muslim advocacy groups);
- Shunned by the right, shut out of the Conservative Political Action Conference;
- Shunned by Jewish groups at the behest of terror-linked groups such as the Council on American-Islamic Relations;
- Blacklisted from speaking at universities;
- Prevented from publishing books, for security reasons and because publishers fear shaming from the left;
- Banned from Britain.
A Seattle court accused me of trying to shut down free speech after we merely tried to run an FBI poster on global terrorism, because authorities had banned all political ads in other cities to avoid running ours. Seattle blamed us for that, which was like blaming a woman for being raped because she was wearing a short skirt.
This kind of vilification and shunning is key to the left’s plan to shut down all dissent from its agenda—it makes legislation restricting speech unnecessary.
The same refusal to allow our point of view to be heard has manifested itself elsewhere. The foundation of my work is individual rights and equality for all before the law. These are the foundational principles of our constitutional republic. That is now considered controversial. Truth is the new hate speech. Truth is going to be criminalized.
The First Amendment doesn’t only protect ideas that are sanctioned by the cultural and political elites. If “hate speech” laws are enacted, who would decide what’s permissible and what’s forbidden? The government? The gunmen in Garland?
There has been an inversion of the founding premise of this nation. No longer is it the subordination of might to right, but right to might. History is repeatedly deformed with the bloody consequences of this transition.
Pamela Geller is the editor in chief of the Geller Report and president of the American Freedom Defense Initiative.
Jonah Goldberg

Of course free speech is under threat in America. Frankly, it’s always under threat in America because it’s always under threat everywhere. Ronald Reagan was right when he said in 1961, “Freedom is never more than one generation away from extinction. We didn’t pass it on to our children in the bloodstream. It must be fought for, protected, and handed on for them to do the same.”
This is more than political boilerplate. Reagan identified the source of the threat: human nature. God may have endowed us with a right to liberty, but he didn’t give us all a taste for it. As with most finer things, we must work to acquire a taste for it. That is what civilization—or at least our civilization—is supposed to do: cultivate attachments to certain ideals. “Cultivate” shares the same Latin root as “culture,” cultus, and properly understood they mean the same thing: to grow, nurture, and sustain through labor.
In the past, threats to free speech have taken many forms—nationalist passion, Comstockery (both good and bad), political suppression, etc.—but the threat to free speech today is different. It is less top-down and more bottom-up. We are cultivating a generation of young people to reject free speech as an important value.
One could mark the beginning of the self-esteem movement with Nathaniel Branden’s 1969 paper, “The Psychology of Self-Esteem,” which claimed that “feelings of self-esteem were the key to success in life.” This understandable idea ran amok in our schools and in our culture. When I was a kid, Saturday-morning cartoons were punctuated with public-service announcements telling kids: “The most important person in the whole wide world is you, and you hardly even know you!”
The self-esteem craze was just part of the cocktail of educational fads. Other ingredients included multiculturalism, the anti-bullying crusade, and, of course, that broad phenomenon known as “political correctness.” Combined, they’ve produced a generation that rejects the old adage “sticks and stones can break my bones but words can never harm me” in favor of the notion that “words hurt.” What we call political correctness has been on college campuses for decades. But it lacked a critical mass of young people who were sufficiently receptive to it to make it a fully successful ideology. The campus commissars welcomed the new “snowflakes” with open arms; truly, these are the ones we’ve been waiting for.
“Words hurt” is a fashionable concept in psychology today. (See Psychology Today: “Why Words Can Hurt at Least as Much as Sticks and Stones.”) But it’s actually a much older idea than the “sticks and stones” aphorism. For most of human history, it was a crime to say insulting or “injurious” things about aristocrats, rulers, the Church, etc. That tendency didn’t evaporate with the Divine Right of Kings. Jonathan Haidt has written at book length about our natural capacity to create zones of sanctity, immune from reason.
And that is the threat free speech faces today. Those who inveigh against “hate speech” are in reality fighting “heresy speech”—ideas that do “violence” to sacred notions of self-esteem, racial or gender equality, climate change, and so on. Put whatever label you want on it, contemporary “social justice” progressivism acts as a religion, and it has no patience for blasphemy.
When Napoleon’s forces converted churches into stables, the clergy did not object on the grounds that regulations regarding the proper care and feeding of animals had been violated. They complained of sacrilege and blasphemy. When Charles Murray or Christina Hoff Sommers visits college campuses, the protesters are behaving like the zealous acolytes of St. Jerome. Appeals to the First Amendment have as much power over the “antifa” fanatics as appeals to Odin did to champions of the New Faith.
That is the real threat to free speech today.
Jonah Goldberg is a senior editor at National Review and a fellow at the American Enterprise Institute.
KC Johnson

In early May, the Washington Post urged universities to make clear that “racist signs, symbols, and speech are off-limits.” Given the extraordinarily broad definition of what constitutes “racist” speech at most institutions of higher education, this demand would single out most right-of-center (and, in some cases, even centrist and liberal) discourse on issues of race or ethnicity. The editorial provided the highest-profile example of how hostility to free speech, once confined to the ideological fringe on campus, has migrated to the liberal mainstream.
The last few years have seen periodic college protests—featuring claims that significant amounts of political speech constitute “violence,” thereby justifying censorship—followed by even more troubling attempts to appease the protesters. After the mob scene that greeted Charles Murray upon his visit to Middlebury College, for instance, the student government criticized any punishment for the protesters, and several student leaders wanted to require that future speakers conform to the college’s “community standard” on issues of race, gender, and ethnicity. In the last few months, similar attempts to stifle the free exchange of ideas in the name of promoting diversity occurred at Wesleyan, Claremont McKenna, and Duke. Offering an extreme interpretation of this point of view, one CUNY professor recently dismissed dialogue as “inherently conservative,” since it reinforced the “relations of power that presently exist.”
It’s easy, of course, to dismiss campus hostility to free speech as affecting only a small segment of American public life—albeit one that trains the next generation of judges, legislators, and voters. But, as Jonathan Chait observed in 2015, denying “the legitimacy of political pluralism on issues of race and gender” has broad appeal on the left. It is only most apparent on campus because “the academy is one of the few bastions of American life where the political left can muster the strength to impose its political hegemony upon others.” During his time in office, Barack Obama generally urged fellow liberals to support open intellectual debate. But the current campus environment previews the position of free speech in a post-Obama Democratic Party, increasingly oriented around identity politics.
Waning support on one end of the ideological spectrum for this bedrock American principle should provide a political opening for the other side. The Trump administration, however, seems poorly suited to make the case. Throughout his public career, Trump has rarely supported free speech, even in the abstract, and has periodically embraced legal changes to facilitate libel lawsuits. Moreover, the right-wing populism that motivates Trump’s base has a long tradition of ideological hostility to civil liberties of all types. Even in campus contexts, conservatives have defended free speech inconsistently, as seen in recent calls that CUNY disinvite anti-Zionist fanatic Linda Sarsour as a commencement speaker.
In a sharply polarized political environment, awash in dubiously sourced information, free speech is all the more important. Yet this same environment has seen both sides, most blatantly elements of the left on campuses, demand restrictions on their ideological foes’ free speech in the name of promoting a greater good.
KC Johnson is a professor of history at Brooklyn College and the CUNY Graduate Center.
Laura Kipnis

I find myself with a strange-bedfellows problem lately. Here I am, a left-wing feminist professor invited onto the pages of Commentary—though I’d be thrilled if it were still 1959—while fielding speaking requests from right-wing think tanks and libertarians who oppose child-labor laws.
Somehow I’ve ended up in the middle of the free-speech-on-campus debate. My initial crime was publishing a somewhat contentious essay about campus sexual paranoia that put me on the receiving end of Title IX complaints. Apparently I’d created a “hostile environment” at my university. I was investigated (for 72 days). Then I wrote up what I’d learned about these campus inquisitions in a second essay. Then I wrote about it all some more, in a book exposing the kangaroo-court elements of the Title IX process—and the extra-legal gag orders imposed on everyone caught in its widening snare.
I can’t really comment on whether more charges have been filed against me over the book. I’ll just say that writing about being a Title IX respondent could easily become a life’s work. I learned, shortly after writing this piece, that I and my publisher were being sued for defamation, among other things.
Is free speech under threat on American campuses? Yes. We know all about student activists who wish to shut down talks by people with opposing views. I got smeared with a bit of that myself, after a speaking invitation at Wellesley—some students made a video protesting my visit before I arrived. The talk went fine, though a group of concerned faculty circulated an open letter afterward also protesting the invitation: My views on sexual politics were too heretical, and might have offended students.
I didn’t take any of this too seriously, even as right-wing pundits crowed, with Wellesley as their latest outrage bait. It was another opportunity to mock student activists, and the fact that I was myself a feminist rather than a Charles Murray or a Milo Yiannopoulos, made them positively gleeful.
I do find myself wondering where all my new free-speech pals were when another left-wing professor, Steven Salaita, was fired (or if you prefer euphemism, “his job offer was withdrawn”) from the University of Illinois after he tweeted criticism of Israel’s Gaza policy. Sure the tweets were hyperbolic, but hyperbole and strong opinions are protected speech, too.
I guess free speech is easy to celebrate until it actually challenges something. Funny, I haven’t seen Milo around lately—so beloved by my new friends when he was bashing minorities and transgender kids. Then he mistakenly said something authentic (who knew he was capable of it!), reminiscing about an experience a lot of gay men have shared: teenage sex with older men. He tried walking it back—no, no, he’d been a victim, not a participant—but his fan base was shrieking about pedophilia and fleeing in droves. Gee, they were all so against “political correctness” a few minutes before.
It’s easy to be a free-speech fan when your feathers aren’t being ruffled. No doubt what makes me palatable to the anti-PC crowd is having thus far failed to ruffle them enough. I’m just going to have to work harder.
Laura Kipnis’s latest book is Unwanted Advances: Sexual Paranoia Comes to Campus.
Eugene Kontorovich

The free and open exchange of views—especially politically conservative or traditionally religious ones—is being challenged. This is taking place not just at college campuses but throughout our public spaces and cultural institutions. James Watson was fired from the lab he had led since 1968 and could not speak at New York University because of petty, censorious students who would not know DNA from LSD. Our nation’s founders and heroes are being “disappeared” from public commemoration, like Trotsky from a photograph of Soviet rulers.
These attacks on “free speech” are not the result of government action. They are not what the First Amendment protects against. The current methods—professional and social shaming, exclusion, and employment termination—are more inchoate, and their effects are multiplied by self-censorship. A young conservative legal scholar might find himself thinking: “If the late Justice Antonin Scalia can posthumously be deemed a ‘bigot’ by many academics, what chance have I?”
Ironically, artists and intellectuals have long prided themselves on being the first defenders of free speech. Today, it is the institutions of both popular and high culture that are the censors. Is there one poet in the country who would speak out for Ann Coulter?
The inhibition of speech at universities is part of a broader social phenomenon of making longstanding, traditional views and practices sinful overnight. Conservatives have not put up much resistance to this. To paraphrase Martin Niemöller’s famous dictum: “First they came for Robert E. Lee, and I said nothing, because Robert E. Lee meant nothing to me.”
The situation with respect to Israel and expressions of support for it deserves separate discussion. Even as university administrators give political power to favored ideologies by letting them create “safe spaces” (safe from opposing views), Jews find themselves and their state at the receiving end of claims of apartheid—modern-day blood libels. It is not surprising if Jewish students react by demanding that they get a safe space of their own. It is even less surprising if their parents, paying $65,000 a year, want their children to have a nicer time of it. One hears Jewish groups frequently express concern about Jewish students feeling increasingly isolated and uncomfortable on campus.
But demanding selective protection from the new ideological commissars is unlikely to bring the desired results. First, this new ideology, even if it can be harnessed momentarily to give respite to harassed Jews on campus, is ultimately illiberal and will be controlled by “progressive” forces. Second, it is not so terrible for Jews in the Diaspora to feel a bit uncomfortable. It has been the common condition of Jews throughout the millennia. The social awkwardness that Jews at liberal arts schools might feel in being associated with Israel is of course one of the primary justifications for the Jewish State. Facing the snowflakes incapable of hearing a dissonant view—but who nonetheless, in the grip of intersectional ecstasy, revile Jewish self-determination—Jewish students should toughen up.
Eugene Kontorovich teaches constitutional law at Northwestern University and heads the international law department of the Kohelet Policy Forum in Jerusalem.
Nicholas Lemann

There’s an old Tom Wolfe essay in which he describes being on a panel discussion at Princeton in 1965 and provoking the other panelists by announcing that America, rather than being in crisis, is in the middle of a “happiness explosion.” He was arguing that the mass effects of 20 years of post–World War II prosperity made for a larger phenomenon than the Vietnam War, the racial crisis, and the other primary concerns of intellectuals at the time.
In the same spirit, I’d say that we are in the middle of a free-speech explosion, because of 20-plus years of the Internet and 10-plus years of social media. If one understands speech as disseminated individual opinion, then surely we live in the free-speech-est society in the history of the world. Anybody with access to the unimpeded World Wide Web can say anything to a global audience, and anybody can hear anything, too. All threats to free speech should be understood in the context of this overwhelming reality.
It is a comforting fantasy that a genuine free-speech regime will empower mainly “good,” but previously repressed, speech. Conversely, repressive regimes that are candid enough to explain their anti-free-speech policies usually say that they’re not against free speech, just “bad” speech. We have to accept that more free speech probably means, in the aggregate, more bad speech, and also a weakening of the power, authority, and economic support for information professionals such as journalists. Welcome to the United States in 2017.
I am lucky enough to live and work on the campus of a university, Columbia, that has been blessedly free of successful attempts to repress free speech. Just in the last few weeks, Charles Murray and Dinesh D’Souza have spoken here without incident. But, yes, the evidently growing popularity of the idea that “hate speech” shouldn’t be permitted on campuses is a problem, especially, it seems, at small private liberal-arts colleges. We should all do our part, and I do, by frequently and publicly endorsing free-speech principles. Opposing the BDS movement falls squarely into that category.
It’s not just on campuses that free-speech vigilance is needed, though. The number-one threat to free speech, to my mind, is that the wide-open Web has been replaced by privately owned platforms such as Facebook and Google as the way most people experience the public life of the Internet. These companies are committed to banning “hate speech,” and they are eager to operate freely in countries, like China, that don’t permit free political speech. That makes for a far more consequential constrained environment than any campus’s speech code.
Also, Donald Trump regularly engages in presidentially unprecedented rhetoric demonizing people who disagree with him. He seems to think this is all in good fun, but, as we have already seen at his rallies, not everybody hears it that way. The place where Trumpism will endanger free speech isn’t in the center—the White House press room—but at the periphery, for example in the way that local police handle bumptious protesters and the journalists covering them. This is already happening around the country. If Trump were as disciplined and knowledgeable as Vladimir Putin or Recep Tayyip Erdogan, which so far he seems not to be, then free speech could be in even more serious danger from government, which in most places is its usual main enemy.
Nicholas Lemann is a professor at Columbia Journalism School and a staff writer for the New Yorker.
Michael J. Lewis

Free speech is a right but it is also a habit, and where the habit shrivels so will the right. If free speech today is in headlong retreat—everywhere threatened by regulation, organized harassment, and even violence—it is in part because our political culture allowed the practice of persuasive oratory to atrophy. The process began in 1973, an unforeseen side effect of Roe v. Wade. Legislators were delighted to learn that by relegating this divisive matter of public policy to the Supreme Court and adopting a merely symbolic position, they could sit all the more safely in their safe seats.
Since then, one crucial question of public policy after another has been punted out of the realm of politics and into that of the judiciary. Issues that might have been debated with all the rhetorical agility of a Lincoln and a Douglas, and then subjected to a process of negotiation, compromise, and voting, have instead been settled by decree: e.g., Chevron, Kelo, Obergefell. The consequences for speech have been pernicious. Since the time of Pericles, deliberative democracy has been predicated on the art of persuasion, which demands the forceful clarity of thought and expression without which no one has ever been persuaded. But a legislature that delegates its authority to judges and regulators will awaken to discover its oratorical culture has been stunted. When politicians, rather than seeking to convince and win over, prefer to project a studied and pleasant vagueness, debate withers into tedious defensive performance. It has been decades since any presidential debate has seen any sustained give and take over a matter of policy. If there is any suspense at all, it is only the possibility that a fatigued or peeved candidate might blurt out that tactless shard of truth known as a gaffe.
A generation accustomed to hearing platitudes smoothly dispensed from behind a teleprompter will find the speech of a fearless extemporaneous speaker to be startling, even disquieting; unfamiliar ideas always are. Unhappily, they have been taught to interpret that disquiet as an injury done to them, rather than as a premise offered to them to consider. All this would not have happened—certainly not to this extent—had not our deliberative democracy decided a generation ago that it preferred the security of incumbency to the risks of unshackled debate. The compulsory contraction of free speech on college campuses is but the logical extension of the voluntary contraction of free speech in our political culture.
Michael J. Lewis’s new book is City of Refuge: Separatists and Utopian Town Planning (Princeton University Press).
Heather Mac Donald

The answer to the symposium question depends on how powerful the transmission belt is between academia and the rest of the country. On college campuses, violence and brute force are silencing speakers who challenge left-wing campus orthodoxies. These totalitarian outbreaks have been met with listless denunciations by college presidents, followed by . . . virtually nothing. As of mid-May, the only discipline imposed for 2017’s mass attacks on free speech at UC Berkeley, Middlebury, and Claremont McKenna College was a letter of reprimand inserted—sometimes only temporarily—into the files of several dozen Middlebury students, accompanied by a brief period of probation. Previous outbreaks of narcissistic incivility, such as the screaming-girl fit at Yale and the assaults on attendees of Yale’s Buckley program, were discreetly ignored by college administrators.
Meanwhile, the professoriate unapologetically defends censorship and violence. After the February 1 riot in Berkeley to prevent Milo Yiannopoulos from speaking, Déborah Blocker, associate professor of French at UC Berkeley, praised the rioters. They were “very well-organized and very efficient,” Blocker reported admiringly to her fellow professors. “They attacked property but they attacked it very sparingly, destroying just enough University property to obtain the cancellation order for the MY event and making sure no one in the crowd got hurt” (emphasis in original). (In fact, perceived Milo and Donald Trump supporters were sucker-punched and maced; businesses downtown were torched and vandalized.) New York University’s vice provost for faculty, arts, humanities, and diversity, Ulrich Baer, displayed Orwellian logic by claiming in a New York Times op-ed that shutting down speech “should be understood as an attempt to ensure the conditions of free speech for a greater group of people.”
Will non-academic institutions take up this zeal for outright censorship? Other ideological products of the left-wing academy have been fully absorbed and operationalized. Racial victimology, which drives much of the campus censorship, is now standard in government and business. Corporate diversity trainers counsel that bias is responsible for any lack of proportional racial representation in the corporate ranks. Racial disparities in school discipline and incarceration are universally attributed to racism rather than to behavior. Public figures have lost jobs for violating politically correct taboos.
Yet Americans possess an instinctive commitment to the First Amendment. Federal judges, hardly an extension of the Federalist Society, have overwhelmingly struck down campus speech codes. It is hard to imagine that they would be any more tolerant of the hate-speech legislation so prevalent in Europe. So the question becomes: At what point does the pressure to conform to the elite worldview curtail freedom of thought and expression, even without explicit bans on speech?
Social stigma against conservative viewpoints is not the same as actual censorship. But the line can blur. The Obama administration used regulatory power to impose a behavioral conformity on public and private entities. School administrators may have technically still possessed the right to dissent from novel theories of gender, but they had to behave as if they were fully on board with the transgender revolution when it came to allowing boys to use girls’ bathrooms and locker rooms.
Had Hillary Clinton been elected president, the federal bureaucracy would have mimicked campus diversocrats with even greater zeal. That threat, at least, has been avoided. Heresies against left-wing dogma may still enter the public arena, if only by the back door. The mainstream media have lurched even further left in the Trump era, but the conservative media, however mocked and marginalized, are expanding (though Twitter and Facebook’s censorship of conservative speakers could be a harbinger of more official silencing).
Outside the academy, free speech is still legally protected, but its exercise requires ever greater determination.
Heather Mac Donald is a fellow at the Manhattan Institute and the author of The War on Cops.
John McWhorter

There is a certain mendacity, as Brick put it in Cat on a Hot Tin Roof, in our discussion of free speech on college campuses. Namely, none of us genuinely wish that absolutely all issues be aired in the name of education and open-mindedness. To insist so is to pretend that civilized humanity makes nothing we could call advancement in philosophical consensus.
I doubt we need “free speech” on issues such as whether slavery and genocide are okay, whether it has been a mistake to view women as men’s equals, or whether to revive the antique idea that whites are a master race while other peoples represent a lower rung on the Darwinian scale. With all due reverence for John Stuart Mill’s advocacy for the regular airing of even noxious views in order to reinforce clarity on why they were rejected, we are also human beings with limited time. A commitment to the Enlightenment justifiably will decree that certain views are, indeed, no longer in need of discussion.
However, our modern social-justice warriors are claiming that this no-fly zone of discussion is vaster than any conception of logic or morality justifies. We are being told that questions regarding the modern proposals about cultural appropriation, about whether even passing infelicitous statements constitute racism in the way that formalized segregation and racist disparagement did, or about whether social disparities can be due to cultural legacies rather than structural impediments, are as indisputably egregious, backwards, and abusive as the benighted views of the increasingly distant past.
That is, the new idea is not only that discrimination and inequality still exist, but that to even question the left’s utopian expectation on such matters justifies the same furious, sloganistic and even physically violent resistance that was once levelled against those designated heretics by a Christian hegemony.
Of course the protesters in question do not recognize themselves in a portrait as opponents of something called heresy. They suppose that Galileo’s opponents were clearly wrong but that they, today, are actually correct in a way that no intellectual or moral argument could coherently deny.
As such, we have students allowed to decree college campuses “racist” when they are the least racist spaces on the planet—because they are, predictably given the imperfection of humans, not perfectly free of passingly unsavory interactions. Thinkers invited from the right rather than the left to talk for a portion of an hour, have dinner with a few people, and fly home are treated as if they were reanimated Hitlers. The student of color who hears a few white students venturing polite questions about the leftist orthodoxy is supported in fashioning these questions as “racist” rhetoric.
The people on college campuses who openly and aggressively spout this new version of Christian (or even Islamist) crusading—ironically justifying it as a barricade against “fascist” muzzling of freedom when the term applies ominously well to the regime they are fostering—are a minority. However, the spinning sawmill blade of their rhetoric has succeeded in rendering opposition as risky as espousing pedophilia, such that only those natively open to violent criticism dare speak out. The latter group is small. The campus consensus thereby becomes, if only at moralistic gunpoint à la the ISIS victim video, a strangled hard-leftism.
Hence freedom of speech is indeed threatened on today’s college campuses. I have lost count of how many of my students, despite being liberal Democrats (many of whom sobbed at Hillary Clinton’s loss last November), have told me that they are afraid to express their opinions about issues that matter, despite the fact that their opinions are ones that any liberal or even leftist person circa 1960 would have considered perfectly acceptable.
Something has shifted of late, and not in a direction we can legitimately consider forwards.
John McWhorter teaches linguistics, philosophy, and music history at Columbia University and is the author of The Language Hoax, Words on the Move, and Talking Back, Talking Black.
Kate Bachelder Odell
It’s 2021, and Harvard Square has devolved into riots: Some 120 people are injured in protests, and the carnage includes fire-consumed cop cars and smashed-in windows. The police discharge canisters of tear gas and, after apprehending dozens of protesters, enforce a 1:45 A.M. curfew. Anyone roaming the streets after hours is subject to arrest. About 2,000 National Guardsmen are prepared to intervene. Such violence and disorder are also roiling Berkeley and other elite and educated areas.
Oh, that’s 1970. The details are from the Harvard Crimson’s account of “anti-war” riots that spring. The episode is instructive in considering whether free speech is under threat in the United States. Almost daily, there’s a new YouTube installment of students melting down over viewpoints of speakers invited to one campus or another. Even amid speech threats from government—for example, the IRS’s targeting of political opponents—nothing has captured the public’s attention like the end of free expression at America’s institutions of higher learning.
Yet disruption, confusion, and even violence are not new campus phenomena. And it’s hard to imagine that young adults who deployed brute force in the 1960s and ’70s were deeply committed to the open and peaceful exchange of ideas.
There may also be reason for optimism. The rough and tumble on campus in the 1960s and ’70s produced a more even-tempered ’80s and ’90s, and colleges are probably heading for another course correction. In covering the ruckuses at Yale, Missouri, and elsewhere, I’ve talked to professors and students who are figuring out how to respond to the illiberalism, even if the reaction is delayed. The University of Chicago put out a set of free-speech principles last year, and other schools such as Princeton and Purdue have endorsed them.
The NARPs—Non-Athletic Regular People, as they are sometimes known on campus—still outnumber the social-justice warriors, who appear to be overplaying their hand. Case in point is the University of Missouri, which experienced a precipitous drop in enrollment after instructor Melissa Click and her ilk stoked racial tensions last spring. The college has closed dorms and trimmed budgets. Which brings us to another silver lining: The economic model of higher education (exorbitant tuition to pay ever more administrators) may blow up traditional college before the fascists can.
Note also that the anti-speech movement is run by rich kids. A Brookings Institution analysis from earlier this year discovered that “the average enrollee at a college where students have attempted to restrict free speech comes from a family with an annual income $32,000 higher than that of the average student in America.” Few rank higher in average income than those at Middlebury College, where students evicted scholar Charles Murray in a particularly ugly scene. (The report notes that Murray was received respectfully at Saint Louis University, “where the median income of students’ families is half Middlebury’s.”) The impulses of over-adulated 20-year-olds may soon be tempered by the tyranny of having to show up for work on a daily basis.
None of this is to suggest that free speech is enjoying some renaissance either on campus or in America. But perhaps as the late Wall Street Journal editorial-page editor Robert Bartley put it in his valedictory address: “Things could be worse. Indeed, they have been worse.”
Kate Bachelder Odell is an editorial writer for the Wall Street Journal.
Jonathan Rauch
Is free speech under threat? The one-syllable answer is “yes.” The three-syllable answer is: “Yes, of course.” Free speech is always under threat, because it is not only the single most successful social idea in all of human history, it is also the single most counterintuitive. “You mean to say that speech that is offensive, untruthful, malicious, seditious, antisocial, blasphemous, heretical, misguided, or all of the above deserves government protection?” That seemingly bizarre proposition is defensible only on the grounds that the marketplace of ideas turns out to be the most powerful engine of knowledge, prosperity, liberty, social peace, and moral advancement that our species has had the good fortune to discover.
Every new generation of free-speech advocates will need to get up every morning and re-explain the case for free speech and open inquiry—today, tomorrow, and forever. That is our lot in life, and we just need to be cheerful about it. At discouraging moments, it is helpful to remember that the country has made great strides toward free speech since 1798, when the Adams administration arrested and jailed its political critics; and since the 1920s, when the U.S. government banned and burned James Joyce’s great novel Ulysses; and since 1954, when the government banned ONE, a pioneering gay journal. (The cover article was a critique of the government’s indecency censors, who censored it.) None of those things could happen today.
I suppose, then, the interesting question is: What kind of threat is free speech under today? In the present age, direct censorship by government bodies is rare. Instead, two more subtle challenges hold sway, especially, although not only, on college campuses. The first is a version of what I called, in my book Kindly Inquisitors, the humanitarian challenge: the idea that speech that is hateful or hurtful (in someone’s estimation) causes pain and thus violates others’ rights, much as physical violence does. The other is a version of what I called the egalitarian challenge: the idea that speech that denigrates minorities (again, in someone’s estimation) perpetuates social inequality and oppression and thus also is a rights violation. Both arguments call upon administrators and other bureaucrats to defend human rights by regulating speech rights.
Both doctrines are flawed to the core. Censorship harms minorities by enforcing conformity and entrenching majority power, and it no more ameliorates hatred and injustice than smashing thermometers ameliorates global warming. If unwelcome words are the equivalent of bludgeons or bullets, then the free exchange of criticism—science, in other words—is a crime. I could go on, but suffice it to say that the current challenges are new variations on ancient themes—and they will be followed, in decades and centuries to come, by many, many other variations. Memo to free-speech advocates: Our work is never done, but the really amazing thing, given the proposition we are tasked to defend, is how well we are doing.
Jonathan Rauch is a senior fellow at the Brookings Institution and the author of Kindly Inquisitors: The New Attacks on Free Thought.
Nicholas Quinn Rosenkranz
Speech is under threat on American campuses as never before. Censorship in various forms is on the rise. And this year, the threat to free speech on campus took an even darker turn, toward actual violence. The prospect of Milo Yiannopoulos speaking at Berkeley provoked riots that caused more than $100,000 worth of property damage on the campus. The prospect of Charles Murray speaking at Middlebury led to a riot that put a liberal professor in the hospital with a concussion. Ann Coulter’s speech at Berkeley was cancelled after the university determined that none of the appropriate venues could be protected from “known security threats” on the date in question.
The free-speech crisis on campus is caused, at least in part, by a more insidious campus pathology: the almost complete lack of intellectual diversity on elite university faculties. At Yale, for example, the number of registered Republicans in the economics department is zero; in the psychology department, there is one. Overall, there are 4,410 faculty members at Yale, and the total number of those who donated to a Republican candidate during the 2016 primaries was three.
So when today’s students purport to feel “unsafe” at the mere prospect of a conservative speaker on campus, it may be easy to mock them as “delicate snowflakes,” but in one sense, their reaction is understandable: If students are shocked at the prospect of a Republican behind a university podium, perhaps it is because many of them have never before laid eyes on one.
To see the connection between free speech and intellectual diversity, consider the recent commencement speech of Harvard President Drew Gilpin Faust:
Universities must be places open to the kind of debate that can change ideas. . . . Silencing ideas or basking in intellectual orthodoxy independent of facts and evidence impedes our access to new and better ideas, and it inhibits a full and considered rejection of bad ones. . . . We must work to ensure that universities do not become bubbles isolated from the concerns and discourse of the society that surrounds them. Universities must model a commitment to the notion that truth cannot simply be claimed, but must be established—established through reasoned argument, assessment, and even sometimes uncomfortable challenges that provide the foundation for truth.
Faust is exactly right. But, alas, her commencement audience might be forgiven a certain skepticism. After all, the number of registered Republicans in several departments at Harvard—e.g., history and psychology—is exactly zero. In those departments, the professors themselves may be “basking in intellectual orthodoxy” without ever facing “uncomfortable challenges.” This may help explain why some students will do everything in their power to keep conservative speakers off campus: They notice that faculty hiring committees seem to do exactly the same thing.
In short, it is a promising sign that true liberal academics like Faust have started speaking eloquently about the crucial importance of civil, reasoned disagreement. But they will be more convincing on this point when they hire a few colleagues with whom they actually disagree.
Nicholas Quinn Rosenkranz is a professor of law at Georgetown. He serves on the executive committee of Heterodox Academy, which he co-founded, on the board of directors of the Federalist Society, and on the board of directors of the Foundation for Individual Rights in Education (FIRE).
Ben Shapiro
In February, I spoke at California State University in Los Angeles. Before my arrival, professors informed students that a white supremacist would be descending on the school to preach hate; threats of violence soon prompted the administration to cancel the event. I vowed to show up anyway. One hour before the event, the administration backed down and promised to guarantee that the event could go forward, but police officers were told not to stop the 300 students, faculty, and outside protesters who blocked and assaulted those who attempted to attend the lecture. We ended up trapped in the auditorium, with the authorities telling students not to leave for fear of physical violence. I was rushed from campus under armed police guard.
Is free speech under assault?
Of course it is.
On campus, free speech is under assault thanks to a perverse ideology of intersectionality that claims victim identity is of primary value and that views are merely a secondary concern. As a corollary, if your views offend someone who outranks you on the intersectional hierarchy, your views are treated as violence—threats to identity itself. On campus, statements that offend an individual’s identity have been treated as “microaggressions”—actual aggressions against another, ostensibly worthy of violence. Words, students have been told, may not break bones, but they will prompt sticks and stones, and rightly so.
Thus, protesters around the country—leftists who see verbiage as violence—have, in turn, used violence in response to ideas they hate. Leftist local authorities then use the threat of violence as an excuse to discriminate ideologically against conservatives. This means public intellectuals like Charles Murray being run off campus and his leftist professorial cohort viciously assaulted; it means Ann Coulter being targeted for violence at Berkeley; it means universities preemptively banning me and Ayaan Hirsi Ali and Condoleezza Rice and even Jason Riley.
The campus attacks on free speech are merely the most extreme iteration of an ideology that spans from left to right: the notion that your right to free speech ends where my feelings begin. Even Democrats who say that Ann Coulter should be allowed to speak at Berkeley say that nobody should be allowed to contribute to a super PAC (unless you’re a union member, naturally).
Meanwhile, on the right, the president’s attacks on the press have convinced many Republicans that restrictions on the press wouldn’t be altogether bad. A Vanity Fair/60 Minutes poll in late April found that 36 percent of Americans thought freedom of the press “does more harm than good.” Undoubtedly, some of that is due to the media’s obvious bias. CNN’s Jeff Zucker has targeted the Trump administration for supposedly quashing journalism, but he was silent when the Obama administration’s Department of Justice cracked down on reporters from the Associated Press and Fox News, and when hacks like Deputy National Security Adviser Ben Rhodes openly sold lies regarding Iran. But for some on the right, the response to press falsities hasn’t been to call for truth, but to instead echo Trumpian falsehoods in the hopes of damaging the media. Free speech is only important when people seek the truth. Leftists traded truth for tribalism long ago; in response, many on the right seem willing to do the same. Until we return to a common standard under which facts matter, free speech will continue to rest on tenuous grounds.
Ben Shapiro is the editor in chief of The Daily Wire and the host of The Ben Shapiro Show.
Judith Shulevitz
It’s tempting to blame college and university administrators for the decline of free speech in America, and for years I did just that. If the guardians of higher education won’t inculcate the habits of mind required for serious thinking, I thought, who will? The unfettered but civil exchange of ideas is the basic operation of education, just as addition is the basic operation of arithmetic. And universities have to teach both the unfettered part and the civil part, because arguing in a respectful manner isn’t something anyone does instinctively.
So why change my mind now? Schools still cling to speech codes, and there still aren’t enough deans like the one at the University of Chicago who declared his school a safe-space-free zone. My alma mater just handed out prizes for “enhancing race and/or ethnic relations” to two students caught on video harassing the dean of their residential college, one screaming at him that he’d created “a space for violence to happen,” the other placing his face inches away from the dean’s and demanding, “Look at me.” All this because they deemed a thoughtful if ill-timed letter about Halloween costumes written by the dean’s wife to be an act of racist aggression. Yale should discipline students who behave like that, even if they’re right on the merits (I don’t think they were, but that’s not the point). They certainly don’t deserve awards. I can’t believe I had to write that sentence.
But in abdicating their responsibilities, the universities have enabled something even worse than an attack on free speech. They’ve unleashed an assault on themselves. There’s plenty of free speech around; we know that because so much bad speech—low-minded nonsense—tests our constitutional tolerance daily, and that’s holding up pretty well. (As Nicholas Lemann observes elsewhere in this symposium, Facebook and Google represent bigger threats to free speech than students and administrators.) What’s endangered is good speech.
Universities have set themselves up to be used. Provocateurs exploit the atmosphere on campus to goad overwrought students, then gleefully trash the most important bastion of our crumbling civil society. Higher education and everything it stands for—logical argument, the scientific method, epistemological rigor—start to look illegitimate. Voters perceive tenure and research and higher education itself as hopelessly partisan and unworthy of taxpayers’ money.
The press is a secondary victim of this process of delegitimization. If serious inquiry can be waved off as ideology, then facts won’t be facts and reporting can’t be trusted. All journalism will be equal to all other journalism, and all journalists will be reduced to pests you can slam to the ground with near impunity. Politicians will be able to say anything and do just about anything and there will be no countervailing authority to challenge them. I’m pretty sure that that way lies Putinism and Erdoganism. And when we get to that point, I’m going to start worrying about free speech again.
Judith Shulevitz is a critic in New York.
Harvey Silverglate
Free speech is, and has always been, threatened. The title of Nat Hentoff’s 1993 book Free Speech for Me – but Not for Thee is no less true today than at any time, even as the Supreme Court has accorded free speech a more absolute degree of protection than in any previous era.
Since the 1980s, the high court has decided most major free-speech cases in favor of speech, with most of the major decisions being unanimous or nearly so.
Women’s-rights advocates were turned back by the high court in 1986 when they sought to ban the sale of printed materials that, because they were deemed pornographic by some, were alleged to promote violence against women. Censorship in the name of gender-based protection thus failed to gain traction.
Despite the demands of civil-rights activists, the Supreme Court in 1992 declared cross-burning to be a protected form of expression in R.A.V. v. City of St. Paul, a decision later refined to strengthen a narrow exception for when cross-burning occurs primarily as a physical threat rather than merely an expression of hatred.
Other attempts at First Amendment circumvention have been met with equally decisive rebuff. When the Reverend Jerry Falwell sued Hustler magazine publisher Larry Flynt for defamation growing out of a parody depicting Falwell’s first sexual encounter as a drunken tryst with his mother in an outhouse, a unanimous Supreme Court lectured on the history of parody as a constitutionally protected, even if cruel, form of social and political criticism.
When the South Boston Allied War Veterans, sponsor of Boston’s Saint Patrick’s Day parade, sought to exclude a gay veterans’ group from marching under its own banner, the high court unanimously held that as a private entity, even though marching in public streets, the Veterans could exclude any group marching under a banner conflicting with the parade’s socially conservative message, notwithstanding public-accommodations laws. The gay group could have its own parade but could not rain on that of the conservatives.
Despite such legal clarity, today’s most potent attacks on speech are coming, ironically, from liberal-arts colleges. Ubiquitous “speech codes” limit speech that might insult, embarrass, or “harass,” in particular, members of “historically disadvantaged” groups. “Safe spaces” and “trigger warnings” protect purportedly vulnerable students from hearing words and ideas they might find upsetting. Student demonstrators and threats of violence have forced the cancellation of controversial speakers, left and right.
It remains unclear how much campus censorship results from politically correct faculty, control-obsessed student-life administrators, or students socialized and indoctrinated into intolerance. My experience suggests that the bureaucrats are primarily, although not entirely, to blame. When sued, colleges either lose or settle, pay a modest amount, and then return to their censorious ways.
This trend threatens the heart and soul of liberal education. Eventually it could infect the entire society as these students graduate and assume influential positions. Whether a resulting flood of censorship ultimately overcomes legal protections and weakens democracy remains to be seen.
Harvey Silverglate, a Boston-based lawyer and writer, is the co-author of The Shadow University: The Betrayal of Liberty on America’s Campuses (Free Press, 1998). He co-founded the Foundation for Individual Rights in Education in 1999 and is on FIRE’s board of directors. He spent some three decades on the board of the ACLU of Massachusetts, two of those years as chairman. Silverglate taught at Harvard Law School for a semester during a sabbatical he took in the mid-1980s.
Christina Hoff Sommers
When Heather Mac Donald’s “blue lives matter” talk was shut down by a mob at Claremont McKenna College, the president of neighboring Pomona College sent out an email defending free speech. Twenty-five students shot back a response: “Heather Mac Donald is a fascist, a white supremacist . . . classist, and ignorant of interlocking systems of domination that produce the lethal conditions under which oppressed peoples are forced to live.”
Some blame the new campus intolerance on hypersensitive, over-trophied millennials. But the students who signed that letter don’t appear to be fragile. Nor do those who recently shut down lectures at Berkeley, Middlebury, DePaul, and Cal State LA. What they are is impassioned. And their passion is driven by a theory known as intersectionality.
Intersectionality is the source of the new preoccupation with microaggressions, cultural appropriation, and privilege-checking. It’s the reason more than 200 colleges and universities have set up Bias Response Teams. Students who overhear potentially “otherizing” comments or jokes are encouraged to make anonymous reports to their campus BRTs. A growing number of professors and administrators have built their careers around intersectionality. What is it exactly?
Intersectionality is a neo-Marxist doctrine that views racism, sexism, ableism, heterosexism, and all forms of “oppression” as interconnected and mutually reinforcing. Together these “isms” form a complex arrangement of advantages and burdens. A white woman is disadvantaged by her gender but advantaged by her race. A Latino is burdened by his ethnicity but privileged by his gender. According to intersectionality, American society is a “matrix of domination,” with affluent white males in control. Not only do they enjoy most of the advantages, they also determine what counts as “truth” and “knowledge.”
But marginalized identities are not without resources. According to one of intersectionality’s leading theorists, Patricia Hill Collins (a former president of the American Sociological Association), disadvantaged groups have access to deeper, more liberating truths. To find their voice, and to enlighten others to the true nature of reality, they require a safe space—free of microaggressive put-downs and imperious cultural appropriations. Here they may speak openly about their “lived experience.” Lived experience, according to intersectional theory, is a better guide to the truth than self-serving Western and masculine styles of thinking. So don’t try to refute intersectionality with logic or evidence: That only proves that you are part of the problem it seeks to overcome.
How could comfortably ensconced college students be open to a convoluted theory that describes their world as a matrix of misery? Don’t they flinch when they hear intersectional scholars like bell hooks refer to the U.S. as an “imperialist, white-supremacist, capitalist patriarchy”? Most take it in stride because such views are now commonplace in high-school history and social studies texts. And the idea that knowledge comes from lived experience rather than painstaking study and argument is catnip to many undergrads.
Silencing speech and forbidding debate is not an unfortunate by-product of intersectionality—it is a primary goal. How else do you dismantle a lethal system of oppression? As the protesting students at Claremont McKenna explained in their letter: “Free speech . . . has given those who seek to perpetuate systems of domination a platform to project their bigotry.” To the student activists, thinkers like Heather Mac Donald and Charles Murray are agents of the dominant narrative, and their speech is “a form of violence.”
It is hard to know how our institutions of higher learning will find their way back to academic freedom, open inquiry, and mutual understanding. But as long as intersectional theory goes unchallenged, campus fanaticism will intensify.
Christina Hoff Sommers is a resident scholar at the American Enterprise Institute. She is the author of several books, including Who Stole Feminism? and The War Against Boys. She also hosts The Factual Feminist, a video blog. @Chsommers
John Stossel
Yes, some college students do insane things. Some called police when they saw “Trump 2016” chalked on sidewalks. The vandals at Berkeley and the thugs who assaulted Charles Murray are disgusting. But they are a minority. And these days people fight back.
Someone usually videotapes the craziness. Yale’s “Halloween costume incident” drove away two sensible instructors, but videos mocking Yale’s snowflakes, like “Silence U,” make such abuse less likely. Groups like Young America’s Foundation (YAF) publicize censorship, and the Foundation for Individual Rights in Education (FIRE) sues schools that restrict speech.
Consciousness has been raised. On campus, the worst is over. Free speech has always been fragile. I once took cameras to Seton Hall law school right after a professor gave a lecture on free speech. Students seemed to get the concept. Sean, now a lawyer, said, “Protect freedom for thought we hate; otherwise you never have a society where ideas clash, and we come up with the best idea.” So I asked, “Should there be any limits?” Students listed “fighting words,” “shouting fire in a theater,” malicious libel, etc.—reasonable court-approved exceptions. But then they went further. Several wanted bans on “hate” speech. “No value comes out of hate speech,” said Javier. “It inevitably leads to violence.”
“No it doesn’t,” I argued. “Also, doesn’t hate speech bring ideas into the open, so you can better argue about them, bringing you to the truth?”
“No,” replied Floyd. “With hate speech, more speech is just violence.”
So I pulled out a big copy of the First Amendment and wrote, “exception: hate speech.”
Two students wanted a ban on flag desecration “to respect those who died to protect it.”
One wanted bans on blasphemy:
“Look at the gravity of the harm versus the value in blasphemy—the harm outweighs the value.”
Several wanted a ban on political speech by corporations because of “the potential for large corporations to improperly influence politicians.”
Finally, Jillian, also now a lawyer, wanted hunting videos banned.
“It encourages harm down the road.”
I asked her, incredulously, “You’re comfortable locking up people who make a hunting film?”
“Oh, yeah,” she said. “It’s unnecessary cruelty to feeling and sentient beings.”
So, I picked up my copy of the Bill of Rights again. After “no law . . . abridging freedom of speech,” I added: “Except hate speech, flag burning, blasphemy, corporate political speech, depictions of hunting . . . ”
That embarrassed them. “We may have gone too far,” said Sean. Others agreed. One said, “Cross out the exceptions.” Free speech survived, but it was a close call. Respect for unpleasant speech will always be thin. Then-Senator Hillary Clinton wanted violent video games banned. John McCain and Russ Feingold tried to ban political speech. Donald Trump wants new libel laws, and if you burn a flag, he tweeted, consequences might be “loss of citizenship or a year in jail!” Courts or popular opinion killed those bad ideas.
Free speech will survive, assuming those of us who appreciate it use it to fight those who would smother it.
John Stossel is a FOX News/FOX Business Network Contributor.
Warren Treadgold
Even citizens of dictatorships are free to praise the regime and to talk about the weather. The only speech likely to be threatened anywhere is the sort that offends an important and intolerant group. What is new in America today is a leftist ideology that threatens speech precisely because it offends certain important and intolerant groups: feminists and supposedly oppressed minorities.
So far this new ideology is clearly dominant only in colleges and universities, where it has become so strong that most controversies concern outside speakers invited by students, not faculty speakers or speakers invited by administrators. Most academic administrators and professors are either leftists or have learned not to oppose leftism; otherwise they would probably never have been hired. Administrators treat even violent leftist protestors with respect and are ready to prevent conservative and moderate outsiders from speaking rather than provoke protests. Most professors who defend conservative or moderate speakers argue that the speakers’ views are indeed noxious but say that students should be exposed to them to learn how to refute them. This is very different from encouraging a free exchange of ideas.
Although the new ideology began on campuses in the ’60s, it gained authority outside them largely by means of several majority decisions of the Supreme Court, from Roe (1973) to Obergefell (2015). The Supreme Court decisions that endanger free speech are based on a presumed consensus of enlightened opinion that certain rights favored by activists have the same legitimacy as rights explicitly guaranteed by the Constitution—or even more legitimacy, because the rights favored by activists are assumed to be so fundamental that they need no grounding in specific constitutional language. The Court majorities found restricting abortion rights or homosexual marriage, as large numbers of Americans wish to do, to be constitutionally equivalent to restricting black voting rights or interracial marriage. Any denial of such equivalence therefore opposes fundamental constitutional rights and can be considered hate speech, advocating psychological and possibly physical harm to groups like women seeking abortions or homosexuals seeking approval. Such speech may still be constitutionally protected, but acting upon it is not.
This ideology of forbidding allegedly offensive speech has spread to most of the Democratic Party and the progressive movement. Rather than seeing themselves as taking one side in a free debate, progressives increasingly argue (for example) that opposing abortion is offensive to women and supporting the police is offensive to blacks. Some politicians object so strongly to such speech that despite their interest in winning votes, they attack voters who disagree with them as racists or sexists. Expressing views that allegedly discriminate against women, blacks, homosexuals, and various other minorities can now be grounds for a lawsuit.
Speech that supposedly offends women or minorities has already cost some people their careers, their businesses, and their opportunities to deliver or hear speeches. Such intimidation is the intended result of an ideology that threatens free speech.
Warren Treadgold is a professor of history at Saint Louis University.
Matt Welch
Like a sullen zoo elephant rocking back and forth from leg to leg, there is an oversized paradox we’d prefer not to see standing smack in the sightlines of most of our policy debates. Day by day, even minute by minute, America simultaneously gets less free in the laboratory but more free in the field. Individuals are constantly expanding the limits and applications of their own autonomy, even as government transcends prior restraints on how far it can reach into our intimate business.
So it is that the Internal Revenue Service can charge foreign banks with collecting taxes on U.S. citizens (therefore causing global financial institutions to shun many of the estimated 6 million-plus Americans who live abroad), even while block-chain virtuosos make illegal transactions wholly undetectable to authorities. It has never been easier for Americans to travel abroad, and it’s never been harder to enter the U.S. without showing passports, fingerprints, retinal scans, and even social-media passwords.
What’s true for banking and tourism is doubly true for free speech. Social media has given everyone not just a platform but a megaphone (as unreadable as our Facebook timelines have all become since last November). At the same time, the federal government during this unhappy 21st century has continuously ratcheted up prosecutorial pressure against leakers, whistleblowers, investigative reporters, and technology companies.
A hopeful bulwark against government encroachment unique to the free-speech field is the Supreme Court’s very strong First Amendment jurisprudence in the past decade or two. Donald Trump, like Hillary Clinton before him, may prattle on about locking up flag-burners, but Antonin Scalia and the rest of SCOTUS protected such expression back in 1990. Barack Obama and John McCain (and Hillary Clinton—she’s as bad as any recent national politician on free speech) may lament the Citizens United decision, but it’s now firmly legal to broadcast unfriendly documentaries about politicians without fear of punishment, no matter the electoral calendar.
But in this very strength lies what might be the First Amendment’s most worrying vulnerability. Barry Friedman, in his 2009 book The Will of the People, made the persuasive argument that the Supreme Court typically ratifies, post facto, where public opinion has already shifted. Today’s culture of free speech could be tomorrow’s legal framework. If so, we’re in trouble.
For evidence of free-speech slippage, just read around you. When both major-party presidential nominees react to terrorist attacks by calling to shut down corners of the Internet, and when their respective supporters are actually debating the propriety of sucker punching protesters they disagree with, it’s hard to escape the conclusion that our increasingly shrill partisan sorting is turning the very foundation of post-1800 global prosperity into just another club to be swung in our national street fight.
In the eternal cat-and-mouse game between private initiative and government control, the former is always advantaged by the latter’s fundamental incompetence. But what if the public willingly hands government the power to muzzle? It may take a counter-cultural reformation to protect this most noble of American experiments.
Matt Welch is the editor at large of Reason.
Adam J. White
Free speech is indeed under threat on our university campuses, but the threat did not begin there and it will not end there. Rather, the campus free-speech crisis is a particularly visible symptom of a much more fundamental crisis in American culture.
The problem is not that some students, teachers, and administrators reject traditional American values and institutions, or even that they are willing to menace or censor others who defend those values and institutions. Such critics have always existed, and they can be expected to use the tools and weapons at their disposal. The problem is that our country seems to produce too few students, teachers, and administrators who are willing or able to respond to them.
American families produce children who arrive on campus unprepared for, or uninterested in, defending our values and institutions. For our students who are focused primarily on their career prospects (if on anything at all), “[c]ollege is just one step on the continual stairway of advancement,” as David Brooks observed 16 years ago. “They’re not trying to buck the system; they’re trying to climb it, and they are streamlined for ascent. Hence they are not a disputatious group.”
Meanwhile, parents bear incomprehensible financial burdens to get their kids through college, without a clear sense of precisely what their kids will get out of these institutions in terms of character formation or civic virtue. With so much money at stake, few can afford for their kids to pursue more than career prospects.
Those problems are not created on campus, but they are exacerbated there, as too few college professors and administrators see their institutions as cultivators of American culture and republicanism. Confronted with activists’ rage, they offer no competing vision of higher education—let alone a compelling one.
Ironically, we might borrow a solution from the Left. Where progressives would leverage state power in service of their health-care agenda, we could do the same for education. State legislatures and governors, recognizing the present crisis, should begin to reform and renegotiate the fundamental nature of state universities. By making state universities more affordable, more productive, and more reflective of mainstream American values, they will attract students—and create incentives for competing private universities to follow suit.
Let’s hope they do it soon, for what’s at stake is much more than just free speech on campus, or even free speech writ large. In our time, as in Tocqueville’s, “the instruction of the people powerfully contributes to the support of a democratic republic,” especially “where instruction which awakens the understanding is not separated from moral education which amends the heart.” We need our colleges to cultivate—not cut down—civic virtue and our capacity for self-government. “Republican government presupposes the existence of these qualities in a higher degree than any other form,” Madison wrote in Federalist 55. If “there is not sufficient virtue among men for self-government,” then “nothing less than the chains of despotism” can restrain us “from destroying and devouring one another.”
Adam J. White is a research fellow at the Hoover Institution.
Cathy Young
A writer gets expelled from the World Science Fiction Convention for criticizing the sci-fi community’s preoccupation with racial and gender “inclusivity” while moderating a panel. An assault on free speech, or an exercise of free association? How about when students demand the disinvitation of a speaker—or disrupt the speech? When a critic of feminism gets banned from a social-media platform for unspecified “abuse”?
Such questions are at the heart of many recent free-speech controversies. There is no censorship by government, but how concerned should we be when private actors effectively suppress unpopular speech? Even in the freest society, some speech will—and should—be considered odious and banished to unsavory fringes. No one weeps for ostracized Holocaust deniers or pedophilia apologists.
But shunned speech needs to remain a narrow exception—or acceptable speech will inexorably shrink. As current Federal Communications Commission chairman Ajit Pai cautioned last year, First Amendment protections will be hollowed out unless undergirded by cultural values that support a free marketplace of ideas.
Sometimes, attacks on speech come from the right. In 2003, an Iraq War critic, reporter Chris Hedges, was silenced at Rockford College in Illinois by hecklers who unplugged the microphone and rushed the stage; some conservative pundits defended this as robust protest. Yet the current climate on the left—in universities, on social media, in “progressive” journalism, in intellectual circles—is particularly hostile to free expression. The identity-politics left, fixated on subtle oppressions embedded in everyday attitudes and language, sees speech-policing as the solution.
Is hostility to free-speech values on the rise? New York magazine columnist Jesse Singal argues that support for restrictions on public speech offensive to minorities has remained steady, and fairly high, since the 1970s. Perhaps. But the range of what qualifies as offensive—and which groups are to be shielded—has expanded dramatically. In our time, a leading liberal magazine, the New Republic, can defend calls to destroy a painting of lynching victim Emmett Till because the artist is white and guilty of “cultural appropriation,” and a feminist academic journal can be bullied into apologizing for an article on transgender issues that dares to mention “male genitalia.”
There is also a distinct trend of “bad” speech being squelched by coercion, not just disapproval. That includes the incidents at Middlebury College in Vermont and at Claremont McKenna in California, where mobs not only prevented conservative speakers—Charles Murray and Heather Mac Donald—from addressing audiences but physically threatened them as well. It also includes the use of civil-rights legislation to enforce goodthink in the workplace: Businesses may face stiff fines if they don’t force employees to call a “non-binary” co-worker by the singular “they,” even when talking among themselves.
These trends make a mockery of liberalism and enable the kind of backlash we have seen with Donald Trump’s election. But the backlash can bring its own brand of authoritarianism. It’s time to start rebuilding the culture of free speech across political divisions—a project that demands, above all, genuine openness and intellectual consistency. Otherwise it will remain, as the late, great Nat Hentoff put it, a call for “free speech for me, but not for thee.”
Cathy Young is a contributing editor at Reason.
Robert J. Zimmer
Free speech is not a natural feature of human society. Many people are comfortable with free expression for views they agree with but would withhold this privilege from those whose views they deem offensive. People justify such restrictions by various means: the appeal to moral certainty, political agendas, the demand for change, the resistance to change, the retention of power, the defiance of authority, or, more recently, the desire not to feel uncomfortable. Moral certainty about one’s views or a willingness to indulge one’s emotions makes it easy to assert that others are doing true damage or creating unacceptable offense simply by presenting a fundamentally different perspective.
The resulting challenges to free expression may come in the form of laws, threats, pressure (whether societal, group, or organizational), or self-censorship in the face of a prevailing consensus. Specific forms of challenge may be more or less pronounced as circumstances vary. But the widespread temptation to consider the silencing of “objectionable” viewpoints as acceptable implies that the challenge to free expression is always present.
The United States today is no exception. We benefit from the First Amendment, which asserts that the government shall make no law abridging the freedom of speech. However, fostering a society supporting free expression involves matters far beyond the law. The ongoing and increasing demonization of one group by another creates a political and social environment conducive to suppressing speech. Even violent acts opposing speech can become acceptable or encouraged. Such behavior is evident at both political rallies and university events. Our greatest current threat to free expression is the emergence of a national culture that accepts the legitimacy of suppression of speech deemed objectionable by a segment of the population.
University and college campuses present a particularly vivid instance of this cultural shift. There have been many well-publicized episodes of speakers being disinvited or prevented from speaking because of their views. However, the problem is much deeper, as there is significant self-censorship on many campuses. Both faculty and students sometimes find themselves silenced by social and institutional pressures to conform to “acceptable” views. Ironically, the very mission of universities and colleges to provide a powerful and deeply enriching education for their students demands that they embrace and protect free expression and open discourse. Failing to do so significantly diminishes the quality of the education they provide.
My own institution, the University of Chicago, through the words and actions of its faculty and leaders since its founding, has asserted the importance of free expression and its essential role in embracing intellectual challenge. We continue to do so today as articulated by the Chicago Principles, which strongly affirm that “the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.” It is only in such an environment that universities can fulfill their own highest aspirations and provide leadership by demonstrating the value of free speech within society more broadly. A number of universities have joined us in reinforcing these values. But it remains to be seen whether the faculty and leaders of many institutions will truly stand up for these values, and in doing so provide a model for society as a whole.
Robert J. Zimmer is the president of the University of Chicago.