When two-and-a-half years ago Ronald Reagan was elected to the Presidency, almost everyone expected that there would be a marked change in the direction of American foreign policy. Nor was there much disagreement over the nature of this anticipated change. How could there have been? Of the leading political figures of the age, Ronald Reagan was perhaps the most sharply defined. He stood without ambiguity for the view that the conflict between the United States and the Soviet Union was the central issue of our time; that it could be defined as a struggle between good and evil; that in this struggle the United States had been falling behind while an expansionist Soviet Union was forging ahead; and that unless we made every effort to restore and assert our power, the future would belong to the forces of totalitarian Communism.
Obviously, by itself this view did not yield a blueprint for day-to-day action in international affairs. But just as obviously it suggested motion in a certain direction: a significant increase in defense spending so as to restore the deteriorating military balance, and a new determination to resist the expansion of Soviet imperial control and influence. No one who voted for Reagan could have had any doubt that this was what he would aim for, and it was therefore reasonable to suppose that the decisive majority by which he was elected signified the crystallization of a new consensus in American public opinion on the seriousness of the Soviet threat and the need to take action against it.
It is very important to recognize, however, that Reagan did not create this new consensus. Actually, it would be more accurate to say that it created him; or, to be still more precise, that its prior existence made his election possible. As evidence of this proposition, we can point to the dramatic rise in alarm over the Soviet threat after the invasion of Afghanistan in 1979. We can also point to the growth of support for increases in defense spending charted throughout the 1970’s by all the public-opinion polls. And we can, finally, point to a palpable intensification of nationalist sentiment in the country, beginning with the surprising outburst of patriotism that accompanied the bicentennial celebrations of 1976 and culminating in the pro-American demonstrations provoked by the humiliating seizure of the hostages in Iran three years later.
But even more striking than any of this was the radical alteration in both the tone and substance of the Carter administration’s foreign policy in its fourth and—as it would turn out—final year in office. Jimmy Carter, who throughout his campaign for the Presidency in 1976 had promised to cut defense spending by at least $5 billion and also never to lie to the American people, could now be discovered boasting that he had broken both of these promises by raising defense spending in his first three years as President. Carter, who had begun by congratulating the nation on having overcome its “inordinate fear of Communism,” and who had spoken of the obsolescence of military power as a factor in international conflict, now not only grew alarmed over the prospect of a Soviet takeover of the Persian Gulf, but enunciated a new presidential doctrine committing the United States to the use of force in order to prevent it. Carter, who had begun by stigmatizing the American effort to save South Vietnam from Communism as a symptom of “intellectual and moral poverty,” and who had cooperated in administering the coup de grâce to the Somoza government in Nicaragua, now cut off American aid to the Communist-dominated Sandinista regime which had replaced Somoza, and in addition sent money and military advisers to El Salvador to help prevent a Communist-dominated guerrilla force from taking power there.
Without denying that these highly dramatic reversals represented a conscientious effort by a sitting President of the United States to discharge his constitutional responsibilities as the guardian of the national security, I would nevertheless maintain that Carter the President would not have done such things without the permission (or even, perhaps, the urging) of Carter the politician. For the politician in Carter could see all too clearly that a shift in the climate of opinion was robbing his policies as President of the popularity they had briefly seemed to enjoy and thereby jeopardizing his chances for reelection to a second term.
In the end, Carter lost to a much more plausible and, to all appearances, much more reliable exponent of the policies to which Carter himself had so recently become a convert. The new consensus on the Soviet threat, which had already waxed strong enough to force a change of direction on Jimmy Carter, was by now too strong to settle for him when in Ronald Reagan it could get the real thing.
Yet no sooner had it swept Reagan into office than questions began to be raised about the precise meaning and limits of the new consensus. Those who opposed Reagan, and some who supported him, were quick to deny that his election had provided him with a clear “mandate” in foreign policy. No one went so far as to deny that Carter had been badly hurt by the national humiliation over Iran and his inability to do anything about it, but many denied that the Iranian episode was much more than a freakish accident. Reagan had won, they said, mainly because of economic factors, or because so many different groups had come to dislike Carter for a great variety of reasons forming no coherent political pattern. In any case, if Reagan should make a serious attempt to put his “simplistic” view of the world into practice, he would soon find himself frustrated by “reality” (by which his opponents meant their own view of the world).
At first not much consolation could be derived by Reagan’s opponents from this prediction. In his early months in office, he seemed bent on doing almost exactly what the critics were so sure he would be unable to do. Within days of moving into the White House, he spoke of Communism as a bizarre historical phenomenon destined to disappear in the foreseeable future, and he gave every indication of wishing to hurry the process along. In his economic program, the only area of government spending to be increased rather than cut was defense. The emphasis on arms control, which had been so marked in the past three administrations, was to be muted in favor of an arms build-up aimed at restoring the strategic balance between the United States and the Soviet Union. Through his Secretary of State, Alexander M. Haig, Jr., he served unambiguous notice that he regarded the guerrilla movement in El Salvador as an effort by the Soviet Union, acting through Cuba and Nicaragua, to extend its imperial reach in Central America, and he expressed his determination to prevent this.
Not surprisingly, there were cries of alarm, especially over the language in which these intentions were described. But anyone looking more closely than it suited the critics to do could see that they had less to worry about than they imagined. First of all, the Reagan administration implicitly agreed with its opponents in interpreting the election not so much as a mandate for changing the foreign policy of the nation as for reforming the economy. Most of the energy during Reagan’s early months in office went into his economic program, while foreign policy was treated almost as a distraction. Thus the heavy emphasis placed on holding the line in Central America, and especially for the moment in El Salvador, was soon softened, evidently because the White House feared that the controversy Haig had provoked both in the Congress and in the media would undercut support for the President’s economic program.
Nor was this the only sign that wherever the interests of Reagan’s economic program conflicted with the interests of his foreign policy, the former would be favored over the latter. For example, he revoked the grain embargo instituted by Carter against the Soviets in response to the invasion of Afghanistan, even though he was supposedly against doing anything to help or strengthen them. No matter that this move was rationalized with the argument that embargoes were ineffective and that we were doing more harm to ourselves than to the Soviets. The fact was that on this issue, Reagan showed that his was an administration which—in George Will’s devastating characterization—“loved commerce more than it loathed Communism,” a characterization that would later be richly confirmed by the decision to go on subsidizing the Polish economy even after the Soviet-ordered institution of martial law by the Quisling Jaruzelski regime.
To be sure, there were two major exceptions to this subordination of foreign policy to economic considerations. One was the President’s brave and stubborn determination to hold out for a significant increase in defense spending even when the pressure to back down became nearly intolerable. Yet the very erosion of the support Reagan had originally enjoyed on this issue could be blamed in some measure on his own decision to give priority to the economy over foreign policy. For if reducing government spending was our most important order of business, there was no way that the defense budget could be spared from the ax, and there would always be enough evidence of “waste” in the Pentagon to reassure deficit-minded Republicans that in opposing the President on this issue they were not endangering the national security. (Most Democrats were already reassured.) Here indeed was a “reality” to frustrate Reagan’s ideas, but in this case it was a Coleridgean reality that he himself was at least half helping to create.
The other great exception to the favoring of commerce over anti-Communism was Reagan’s staunch opposition to the construction of a pipeline that would carry natural gas from the Soviet Union to Western Europe. Here too, however, his ideas were undermined by a reality he himself had helped to create. As apologists for the pipeline kept saying, by selling American grain to the Soviets, Reagan had not exactly put himself in the best position to demand that the Europeans refuse to sell equipment for building the pipeline. American opponents of the pipeline countered by arguing that there were great differences between the two deals. Grain sales, they said, were a straight form of trade, whereas the subsidized pipeline deal was a form of aid; grain sales cost the Soviets hard currency, whereas the Soviets would ultimately earn hard currency through the pipeline; grain sales gave the Soviets no political leverage over the United States, whereas becoming the supplier of energy to Western Europe would enable the Soviets to threaten a cutoff as a way of exerting pressure in some future crisis.
Yet whatever the merits of these arguments, in the eyes of Reagan’s European critics and their American allies, the United States was at the very best being inconsistent and at the worst hypocritical. Either Reagan wanted to declare “economic warfare” on the Soviet Union or he did not. But if he did, he could not ask the Europeans to shoulder the burden while decreeing a special exemption for the American farmer.
The net result of this assignment to economic policy of a higher priority than foreign policy was the creation of a vacuum into which the opposition to the 1980 consensus on the Soviet threat was able to move. Discredited by Iran and Afghanistan and demoralized by Reagan’s landslide victory, the opposition (which included Republicans as well as Democrats) was now handed a chance to regroup much sooner than it had expected. Even its severest critics would have to grant that it went on to make the most of this happy windfall.
The opposition to the 1980 consensus on the Soviet threat was, and is, heavily influenced by two closely related though distinguishable elements: pacifism and isolationism. I am well aware that most members of the opposition would indignantly deny that their ideas can be identified either with pacifism or with isolationism. But as I hope to show, my use of these terms is fully warranted by the historical pedigree of what the opposition says and the logical consequences of the policies it advocates.
Of the two major elements whose influence has shaped the opposition to the 1980 consensus, pacifism is at once the more elusive and the more pervasive. It is elusive because there are in the United States only a minuscule number of people frankly and openly committed to pacifism in the strict sense of the term: the belief that war is the greatest of all evils and that nothing, literally nothing, is worth defending by force of arms or can justify resorting to war. Indeed, even among the few self-declared pacifists in America, there are many who make an exception for “wars of national liberation” and are even willing to defend terrorism.
But if pacifism in the strict sense can scarcely be said to exist in America, pacifism in a looser form has become more influential than it has ever been before except perhaps in England in the period between the two world wars. What made pacifism so fashionable then was the carnage of World War I—a war that no one seemed able to explain or justify and yet that had decimated an entire generation of young men who went blindly to the slaughter mouthing “mindless” and “meaningless” patriotic slogans.
This pacifist tide was fed, however, not only by memories of World War I but also by apocalyptic visions of what a second world war would be like. It was widely believed in the 30’s that there was no defense against aerial bombardment, and that the next war would therefore spell the end of the world, or at least of “civilization as we know it.” Along with being evil, then, war had become senseless and could no longer be seen in Clausewitzian terms as a continuation of policy by other means.
The same combination of disillusioned memory and apocalyptic anticipation is at work in the spread of pacifism in America today. The memory in our case is of course the memory of Vietnam, whose effect on American attitudes toward war in general has been strikingly similar to the effect of World War I on the British in the 20’s and 30’s. In 1933, the notorious resolution “that this house will in no circumstances fight for its king and country” carried the day in a debate at Oxford; fifty years later, in 1983 (just after, ironically, the same resolution had been debated again at Oxford and this time defeated), a joint session of the U.S. Congress cheered when Ronald Reagan interrupted his appeal for increased aid to El Salvador with the words: “Now, before I go any further, let me say to those who invoke the memory of Vietnam: there is no thought of sending American combat troops to Central America.” The Senators and Congressmen cheered not merely because they approved of Reagan’s declaration in the case of Central America but because the “thought of sending American combat troops” anywhere, or for any purpose, has been rendered almost taboo by “the memory of Vietnam.” Even those, both in and out of Congress, who insist that there are “places where they would favor American action,” as Meg Greenfield of the Washington Post puts it, “never can seem to think of one this side of San Diego.” In America today, slogans like “No More Vietnams,” “Hell No, We Won’t Go,” and “Nothing is ever settled by force” have become the functional equivalent of the resolution never to fight for king and country.
As in the 30’s, moreover, the pacifist attitudes growing out of the country’s most recent experience of war have been reinforced by apocalyptic visions of the future. What the idea of aerial bombardment did for British pacifism in the 30’s, the idea of nuclear missiles has done for American (and of course West European) pacifism in the 80’s. But nuclear weapons have been a much greater blessing to pacifism than aerial bombardment ever was. Whereas not everyone in the 30’s agreed that there was no defense against aerial bombardment, virtually everyone in the United States today believes that there is no defense against nuclear missiles. More and more Americans have come to doubt, furthermore, that a limited nuclear war is possible. It is now almost universally assumed that any use of nuclear weapons anywhere would inevitably escalate into all-out nuclear war and the certain destruction of the entire world. This means that for the first time ever, the basic pacifist premise—that war is a greater evil than any objective for which it might be fought—has acquired plausibility in the eyes of many people otherwise not inclined to pacifism either by temperament or by philosophy. Hence the emergence of what has been called nuclear pacifism.
Nuclear pacifism expresses itself in a variety of positions. At its most forthright and logical, it calls for unilateral disarmament on the plainly sensible ground that if nuclear weapons can never and must never be used, there is no point in possessing them at all. Some unilateralists think that if the West gave up its nuclear arsenal, the Soviets would follow suit. Others admit that the Soviet Union might take advantage of such a move to compel a Western surrender. But whether they are optimistic or pessimistic about the Soviet response, all unilateralists by definition agree that the West should immediately begin getting rid of its own nuclear weapons without waiting for the Soviets to respond in kind.
Although unilateralism has become a serious political force in Western Europe, it has thus far made very little headway in the United States. Perhaps the closest any reputable American group has come to endorsing unilateralism is the pastoral letter of the Roman Catholic bishops (“The Challenge of Peace: God’s Promise and Our Response”). To be sure, the bishops explicitly say that they “do not advocate a policy of unilateral disarmament.” They are willing to accord “a strictly conditioned moral acceptance” to the temporary or interim possession of nuclear weapons by the West as a deterrent, provided that deterrence is “used as a step on the way toward progressive disarmament.” Yet they declare their “profound skepticism about the moral acceptability of any use of nuclear weapons.” But if it is immoral to use nuclear weapons under any circumstances (even in retaliation for a nuclear attack), they might just as well be renounced unilaterally for all the good they do even as a deterrent or a bargaining chip.
The West German bishops, whose minds have been concentrated wonderfully by the overwhelming superiority of the Soviet conventional forces poised against their country and deterred only by the NATO promise that an invasion would if necessary be met with a nuclear response, have been sensitive to the unilateralist implications contained in the casuistical formulations of their American brethren. For their part, the German bishops have come out in support of NATO’s policy on this point. So too have the French bishops.
Here, then, we have the representatives of a constituency which not long ago was among the most hawkish in America, and perhaps the most resolutely anti-Communist, throwing their moral and political weight on the side of a position verging on unilateral disarmament, and doing so in the full awareness (as the pastoral letter suggests) that this might well result in a Soviet-dominated world. What more vivid measure could there be of the great boost that nuclear weapons have given to pacifism in America?
Historically, pacifist thought, while in itself always enjoying only a limited appeal and often operating on the margins of political debate, has nevertheless exerted a great influence on the mainstream—not under its own doctrinal flag but in the bowdlerized form of illusions about and pressures for disarmament. In the period between the two world wars, these pacifist-inspired illusions and pressures gave us the Kellogg-Briand Pact of 1928 renouncing war. They also found more concrete expression in the Washington naval armaments treaty of 1922 (limiting the number of American, British, Japanese, French, and Italian warships) and the London naval agreement of 1930 (which set limits on the size of submarines and other warships).
The best that can be said for these efforts is that if their purpose was to prevent or lessen the risk of war by reducing armaments, they obviously failed. But the worst that can be said for them—that if they had any effect at all, it was to increase rather than decrease the chances of war—is closer to the truth. Thus the naval agreement of 1922, recently cited by the historian Gaddis Smith as a successful example of a “freeze,” is seen by Barbara Tuchman (who is at least as dovish as Smith in her attitude toward nuclear weapons) to have “fueled the rising Japanese militarism that led eventually to Pearl Harbor.”
Mrs. Tuchman’s judgment is shared by most historians, as is the view taken by Eugene V. Rostow, the former director of the Arms Control and Disarmament Agency, of both the Washington “freeze” of 1922 and the limitations later negotiated in London in 1930: “The post-World War I arms-limitation agreements . . . helped to bring on World War II, by reinforcing the blind and willful optimism of the West, thus inhibiting the possibility of military preparedness and diplomatic actions through which Britain and France could easily have deterred the war.”
Some who accept this assessment (including Mrs. Tuchman) think that the invention of nuclear weapons has changed everything by making the prevention of war a far more compelling imperative than it was in the pre-nuclear age. But as Mrs. Tuchman herself recognizes, the record thus far simply does not bear out this idea. On the contrary, negotiations between the United States and the Soviet Union over nuclear weapons show almost exactly the same characteristics as the arms-control agreements of the pre-nuclear past.
First of all, negotiations over nuclear weapons have not led to real reductions in the quantity or quality of those weapons, and such limitations as they have succeeded in establishing have not notably lessened the risk of war. Take as an example even the proudest single achievement of arms control in the nuclear age—the Test Ban Treaty of 1963. Far from eliminating or even cutting down on the testing of nuclear weapons (and therefore of their further development), the treaty has been followed by an increase in the number of such tests. The only effect the treaty has had on testing has been to drive it underground. That may, as Mrs. Tuchman drily notes, be a gain for the environment, but it is not a gain for disarmament.
Secondly, arms-control agreements in the nuclear age, like the disarmament agreements of the 20’s and 30’s, have resulted in cutbacks by the democratic side and increases by the totalitarian side. Under SALT I, the Soviet Union took full advantage of what was legally permitted and forged ahead to increase the quantity of its nuclear weapons while also improving their quality. This is exactly how the Japanese and later the Germans acted in the 1930’s. The United States (following the precedent set by itself and the other Western democracies in the 1930’s after the naval agreements) either stood still or cut back in the years after SALT I was ratified. The one significant advance we did make, the placing of more than one warhead on a single missile (MIRV), is now regarded by almost all arms-control enthusiasts as “destabilizing.” But in view of the fact that this innovation was developed in order to conform to the provisions of SALT I (which limited the number of missiles rather than the number of warheads), it demonstrates that the process of arms control has not even been capable of achieving one of its minimal objectives, which is (in the words of the Scowcroft Report) to “help channel modernization into stabilizing rather than destabilizing paths.”1
There is nothing arbitrary or accidental about this record of failure. It stems directly from the pacifist illusion that wars are caused by arms and can therefore be prevented by reducing or eliminating arms. But wars are not caused by arms. Salvador de Madariaga, who chaired the League of Nations Disarmament Commission, came to believe that disarmament was a “mirage” because it tackled the problem of war “upside down and at the wrong end. . . . Nations don’t distrust each other because they are armed; they are armed because they distrust each other. And therefore to want disarmament before a minimum of common agreement on fundamentals is as absurd as to want people to go undressed in winter.”
This simple and unanswerable observation explains why on the one hand there are no arms on the border between the United States and Canada and why on the other hand it is indeed “absurd” to expect that anything much can be done about the arms on the border between East and West Europe.
But there is a further point to be made. The “common agreement on fundamentals” that de Madariaga rightly sees as a necessary precondition of disarmament can never be reached with a nation whose ambitions are to overturn the existing international system and to replace it with a new system in which it will enjoy hegemony. Because Nazi Germany was such a nation, it was foolish of Chamberlain and other Western leaders to imagine that war with Hitler could be avoided by negotiated concessions (or “appeasement,” to use the then respectable term). And because the Soviet Union is also such a nation, it is equally foolish to imagine that a “common agreement on fundamentals” can be arrived at between Moscow and the West.
To say that the Soviet Union’s aim is to create a new international system in which it would enjoy hegemony is not to suggest (as the vulgar caricature has it) that there is a “timetable” or a “blueprint” for world conquest guiding every action the Kremlin takes. It is, however, to recognize that the strategy of the Soviet Union is to move toward a greater and greater expansion of its power and influence, at a pace and by tactical means that combine maximum prudence with maximum opportunism.
In other words, wherever a chance presents itself and the risks are not too great, the Soviets will take advantage of it. If force must be used, as in Afghanistan, it will be used, but the clear preference of the Soviet leadership is either to employ surrogates to do the fighting, or better still, to win through intimidation rather than through war.
Since their military arsenal is designed to serve this expansionist strategy, the Soviets will never voluntarily surrender an advantage in the balance of military power. Nor will they ever enter into (or honor) any agreement that prevents them from achieving military superiority. To accept anything less than superiority—even equality or parity—would be tantamount to accepting the present international system. Indeed, because their ideological or political attractiveness has diminished in recent years, and because they suffer from a great disadvantage relative to the West in the economic area as well, their reliance on military power has increased and will ineluctably grow in the future. For how else can they compensate for their other weaknesses in the overall “correlation of forces”?
The United States, by contrast, leads an alliance whose strategic objective is to maintain “stability,” and our military arsenal is designed to serve this defensive purpose. Far from pursuing superiority, the United States voluntarily gave it up, allowing the Soviets to achieve parity on the theory that they had surrendered their originally revolutionary aims and had now become a “status-quo power,” content with the present international arrangements. The Soviets themselves made nonsense of this theory by continuing their military buildup even after they had caught up and reached parity. In addition they sent Cuban surrogates to Africa, and then their own troops into Afghanistan (while also providing military support to Communist guerrillas in Central America). So much, then, for the idea that they had become a “status-quo power,” and so much therefore for the chances of arriving at de Madariaga’s “common agreement on fundamentals.”
The brutal truth is that with one side eager (for economic or other reasons) to cut back on its armaments and the other side eager to consolidate and enhance its advantages, disarmament negotiations offer only a fraudulent hope. In the 30’s, the Germans and the Japanese built up their armed forces (with or without cheating) because they wanted to do so, while the democracies—pushed by internal political and economic pressures to disarm—did not even fulfill their legal quotas under the various disarmament agreements. A similar pattern has developed in our own day under cover of the SALT process, during which we have either stood still or moved back while the Soviets have built and built and built, not only expanding and refining their nuclear arsenal but enlarging and improving every category of their conventional force as well. Yet so pervasive has the influence of the pacifist illusion become in the West that, even in the face of all this, hope continues to be invested in disarmament and the opposition to the 1980 consensus on the Soviet threat clamors for new and ever more radical measures.
It is in the guise of these new measures that the second major influence behind the opposition to the 1980 consensus—isolationism—has been able to stage a sensational comeback in American political culture. Like pacifism (to which it has no necessary logical connection but with which it can, and always has, comfortably allied itself), isolationism claims very few open adherents in the United States. For like pacifism too, isolationism was so discredited by World War II that those who have continued to believe in it, or those who have rediscovered it in recent years, rarely invoke its name in talking about their position. As if this did not cause enough trouble for frank and honest discussion, the name of isolationism (or sometimes neo-isolationism) has occasionally been claimed by writers like Robert W. Tucker and William Pfaff who for better or worse are not truly entitled to it. (Almost the only political commentator with any visibility today who both claims the title and is truly entitled to it is Earl Ravenal.)
However isolationism may be defined in the abstract, historically it has mainly meant a policy of American disengagement from the affairs, and especially the wars, of Europe. This is why the two latest manifestations of the anti-nuclear movement in the United States—the proposal that we commit ourselves to “no first use” of nuclear weapons, and the proposal that we commit ourselves to a “freeze” on the building, testing, and deployment of all such weapons—can legitimately be described as forms of isolationism.
It is true that most proponents of these measures deny that their intention is to disengage the United States from Europe, or that this would be the effect. Thus McGeorge Bundy, George F. Kennan, Robert S. McNamara, and Gerard Smith (who have now become collectively known in certain European circles as the “American gang of four”) take great pains in their famous Foreign Affairs article endorsing no-first-use to insist that they come not to destroy but to strengthen the American commitment to the defense of Western Europe. It is, they write, the “disarray that currently besets the nuclear policy and practices of the Alliance,” and specifically the divisive debate over the proposed deployment of the new intermediate-range missiles in Western Europe, that led them to back the idea of no-first-use. A no-first-use policy would, they believe, restore credibility to NATO’s deterrent and is therefore a good idea on military grounds. But in their view, “The political coherence of the Alliance, especially in times of stress, is at least as important as the military strength required to maintain credible deterrence. Indeed the political requirement has, if anything, an even higher priority.”
How does no-first-use measure up to that overriding political requirement? Perfectly, the American gang of four tells us: “. . . the value of a no-first-use policy . . . is first of all for the internal health of the Western Alliance itself.” And so far as West Germany in particular is concerned, “A policy of no-first-use would not and should not imply an abandonment” of the American guarantee but “only its redefinition.”
This complacent judgment by the American gang of four is not shared by an equally distinguished West German gang of four whose minds have been as wonderfully concentrated by the prospect of Soviet domination as have the minds of their clerical compatriots.2 According to the Germans, not only would renunciation of first use fail to contribute to the “internal health of the Western Alliance itself,” but it would have the opposite effect of increasing insecurity and fear. Nor do the Germans agree that no-first-use would mean nothing more than a “redefinition” of “the American protective guarantee” to Western Europe. As they see it, this “redefinition” would define the “present commitments of the United States” right out of existence.
In short, far from being the best means “for keeping the Alliance united and effective,” the Germans assert that “the proposed no-first-use policy would destroy the confidence of Europeans and especially of Germans in the European-American Alliance as a community of risk, and would endanger the strategic unity of the Alliance and the security of Western Europe.”
The Germans are right. NATO has relied on the threat of a nuclear response to deter not a nuclear attack on Western Europe but an invasion by conventional forces. The reason for this reliance on a nuclear response is that the conventional forces of NATO have never been large enough to repel a conventional Soviet invasion. Not being adequate to repel such an invasion in actual combat, they are also inadequate to deter it. Therefore to renounce first use is to renounce deterrence of a conventional war; it is also to counsel surrender in the face of an inevitable defeat by decisively superior forces. (On this point the Germans do not diplomatically mince words: “The advice of the authors to renounce the use of nuclear weapons even in the face of pending conventional defeat of Western Europe is tantamount to suggesting that ‘rather Red than dead’ would be the only remaining option for those Europeans then still alive.”)
The only way around this trap is to create a Western conventional capability that would be a match for the conventional Soviet forces arrayed against Western Europe. With the rise of anti-nuclear sentiment in the last year or two, this solution has become more and more popular. Except for the American bishops, everyone, it seems, is now in favor of a conventional military build-up. Even critics of the “military-industrial complex” who have complained without let-up about the “bloated” military budget can nowadays be found paying their rhetorical respects to the need for larger and better conventional forces.
But the fact is that nuclear weapons are much cheaper than conventional forces; they give “more bang for a buck,” in the phrase used during the Eisenhower years to justify a greater reliance on them in our overall military posture. How many of those both in the United States and Europe agitating against nuclear and for conventional weapons would be willing to spend the enormous sums that would be needed to build the requisite number of tanks and artillery and munitions? And what about the manpower? What about the draft that would have to be instituted in the United States and extended in Western Europe also at enormous financial cost (not to mention political unrest)?
There is reason, then, to doubt the sincerity of many of the pious genuflections before the newly fashionable idol of a conventional balance of power. But it is not the sincerity of the American gang of four that comes into question when they too pay their obeisances to the conventional defense of Europe; it is, rather, their intellectual and political seriousness. “It seems clear,” they write, “that the nations of the Alliance together can provide whatever forces are needed, and within realistic budgetary constraints.” Since no evidence is adduced to support this astonishing claim, one wonders why “it seems clear.” It is not, at any rate, clear to everyone. According to one extremely optimistic assessment—a report by the European Security Study entitled “Strengthening Conventional Deterrence in Europe”—NATO conventional forces could be adequately upgraded through new technologies over a period of ten years at a cost of only an additional 1 percent above “the present NATO commitments [of an annual real growth of 3 percent in defense spending] if such commitments are sustained and extended beyond 1986.” Yet even the optimistic authors of this report “recognize that political pressures generated by the current economic situation in the NATO countries make it difficult to achieve even the present NATO commitments.”
As for the United States in particular, former Secretary of State Alexander Haig (who once served as NATO’s Supreme Allied Commander in Europe) estimates that an adequate conventional defense would mean “tripling the armed forces, and putting the economy on a wartime footing.” Possibly this estimate is too pessimistic. Even so, half of the entire military appropriation requested by the Reagan administration for this year will go to the conventional defense of Europe. Is it “clear” to anyone that more could be made available?
The only remaining way around this trap is to envisage the additional burden being shouldered by the Europeans themselves. And that is precisely the objective of several proponents of no-first-use like Irving Kristol and Herman Kahn, who are generally hawkish in their ideas about defense and whose espousal of no-first-use therefore comes as a surprise. But Kristol thinks that the dependence of Europe on the United States has sapped Europeans of the will to defend themselves. Therefore an American withdrawal in the form of a policy of no-first-use (coupled with the removal of American troops who would no longer be needed as a “tripwire”) might shock the Europeans into doing whatever would be necessary to insure their own defense. Kahn, who differs from Kristol on the issue of withdrawing troops, agrees with him that no-first-use would have a salutary effect on the Europeans.
Both Kristol and Kahn admit, however, that the Europeans might well be shocked by this policy not into building an adequate defensive capability of their own but rather into collapsing before the intimidating military might of the Soviet Union. Kahn’s vision of this possibility extends only to a neutralized Germany, but he thinks “we can live with that.” Kristol foresees worse: appeasement leading to the Finlandization of Western Europe as a whole. But if this were to come about, it would in Kristol’s stern opinion prove that the Europeans were “simply unworthy” of the liberties they enjoy (and, he adds, the same harsh judgment of “political decadence” would be passed by future historians on the United States as well if we in our turn were to refuse “the burden of large, expensive, conventional military establishments, so that we can meet our responsibilities without always and immediately raising the specter of nuclear disaster”).
In any case, Kristol does not doubt that an American policy of no-first-use would put Western Europe at the mercy of the Soviet Union unless it were accompanied by a massive build-up of conventional military force. (Kahn adds the requirement of a credible strategy for fighting a limited nuclear war if the Soviets should use nuclear weapons first.) But there is little likelihood that a conventional build-up will be undertaken either by the Europeans or by the United States. If there is to be a barrier to Soviet domination of the West, it will have to continue taking the form of nuclear weapons. Kristol may be right in saying that this Western dependency on nuclear weapons should never have been allowed to develop. But it is hard to imagine democratic societies placing themselves in peacetime on the kind of permanent war footing that an adequate conventional defense would have required. It is harder still to imagine a future reversal of this situation with expensive welfare states now in place everywhere in the democratic world. To remove nuclear weapons from the picture, then, is for all practical purposes to give the Soviets a decisive edge.
If a policy of no-first-use would do this in Europe, so too would a freeze (since it would prevent deployment of the intermediate-range Pershing 2’s and cruise missiles needed to balance the Soviet SS-20’s). That much is obvious. What is perhaps less obvious is that a freeze would give the Soviet Union a decisive edge not only over the Europeans but over the United States as well.
Proponents of the freeze all deny that the Soviet Union has achieved superiority over the United States in nuclear weaponry. Although most, if not indeed all, of them think that superiority is in any case a meaningless concept as applied to nuclear weapons, they still make a great and indignant point of insisting that a “rough parity” in strategic forces now exists between the two superpowers.
The Reagan administration does not agree. Its position is that the Soviets have an edge because their missiles are now sufficiently powerful and sufficiently accurate to take out our land-based ICBM’s in a first strike, thus depriving us of the means to do anything other than attack their civilian population, after which they would still have enough left over to retaliate in kind against our cities. In our aging Minutemen we have no matching capability, and until that force of land-based ICBM’s is modernized by the deployment of MX or some substitute like the smaller single-warhead “Midgetman,” the Soviets will continue to enjoy an edge. It follows that the “window of vulnerability” is still open. A freeze would prevent us from closing it and hence would lock us into a position of strategic inferiority.3
Yet even if Reagan’s critics were right in claiming that the window of vulnerability is a myth and that the nuclear balance is about equal, a freeze (even a mutual and verifiable freeze) would still lock the United States into a position of strategic inferiority. The reason, simply, is that with a freeze, Soviet superiority in conventional arms would become and remain the decisive factor in the overall balance of military power.
Unlike no-first-use, which would leave Western Europe open to a Soviet invasion (though this in itself would probably suffice to bring about a gradual political capitulation without any troops and tanks actually moving across the borders), the freeze would not expose the United States to any such threat. But—again, unless there were a massive conventional build-up, which, again, is unlikely—a freeze would signify the acquiescence of the United States in a balance of military power clearly favorable to the Soviet Union. This, in turn, would necessitate a very severe contraction of American commitments around the world. For our own defense, we would rely on “minimum deterrence”—that is, a presumably (though at best only temporarily) invulnerable force of submarines armed with nuclear weapons capable of devastating the Soviet Union in retaliation for an attack on the United States itself. The rest of the world we would leave to deal as best it could with the unchecked might of the Soviet Union. Soon enough, however, alone in a sea of Finlandized and Vichyized regimes, we too would find what John F. Kennedy called the “red tide” lapping at our political shores and inexorably eroding our independence and our liberty.
The isolationism that is implicit in the freeze movement, then, goes even farther than the isolationism hiding behind no-first-use. But even the freeze does not go so far as the variety of isolationism that has surfaced in the debate over Central America. If, historically, isolationism has meant American disengagement from Europe, it has also meant the determination to keep the Americas free of foreign influence. The Monroe Doctrine, indeed, was promulgated as the corollary to an isolationist foreign policy. Yet there is now a school of thought in Congress and the media which denies that the United States has the right to fight against the spread of Soviet influence even in the Americas.
Of course, it can be argued that the Monroe Doctrine has already been abrogated by the transformation of Cuba into a Soviet satellite, and that it is a little late to invoke it now in connection with El Salvador and Nicaragua. But the radical new isolationism which has appeared among us on this issue does not rest content even with the de-facto repeal of the Monroe Doctrine. In what is certainly one of the most bizarre pieces of legislation in the history of American foreign policy, the Congress of the United States has in effect demanded that we not only forget about the Monroe Doctrine but that we observe the Brezhnev Doctrine in its place. Under the Brezhnev Doctrine, once a country has become “socialist” (i.e., Communist) it must remain “socialist”; all “socialist” revolutions are to be considered irreversible. Congress evidently agrees. By enacting the Boland Amendment, which forbids the U.S. government to assist in overthrowing the “socialist” Sandinista regime, Congress has virtually written the Brezhnev Doctrine into American law.
But we are not yet done with the incredible perversity of the new isolationists on this issue. Not satisfied with turning the United States into the virtual enforcer of the Brezhnev Doctrine where Nicaragua is concerned, they are also doing their best to help the guerrillas in El Salvador get into power, despite the fact that these guerrillas are openly connected to the Soviet Union through Nicaragua and Cuba. In Congress and in the media, the new isolationists work to obstruct the giving of aid; they devote all their energies to attacking the elected government of El Salvador for its abuses of human rights; they ridicule the administration’s judgment that these abuses are declining; and they loudly and persistently demand that the guerrillas be given a share of power.
Adding intellectual insult to political injury, they claim to be doing all this because they wish to prevent a Communist victory in El Salvador, and they wax righteous with anyone who suggests otherwise. Thus one Congressman who has participated in these various efforts has attacked UN Ambassador Jeane Kirkpatrick for observing that there are those in Congress “who would actually like to see the Marxist forces take power in El Salvador.” “It is,” declared the Congressman, “slander and McCarthyite nonsense to say that members of Congress want to see Marxism triumph, in El Salvador or anywhere else.” In a similar vein, Senator Christopher Dodd of Connecticut, replying for the Democrats to the President’s appeal for increased aid to El Salvador, began his assault on Reagan’s speech by affirming his opposition to “the establishment of Marxist states in Central America.”
These loud protestations are all very well, but if we ask what political views like those of Senator Dodd logically imply, we have to conclude that Ambassador Kirkpatrick’s charge, far from being slanderous or McCarthyite, verges on the self-evident. For what outcome other than a Marxist victory in El Salvador can be expected from a policy that restricts military aid to the government while simultaneously hampering efforts to interdict the flow of arms to the guerrillas; that puts continual pressure on the government to institute wide-ranging reforms in the midst of a guerrilla war; and that insists that the government enter into some form of coalition with the Communist-dominated guerrilla forces? It is hard to think of a better recipe for a Marxist victory in El Salvador than this combination of policies.
During the Vietnam war those who advocated accommodation with the Vietcong were able to persuade themselves that the National Liberation Front was indigenous to South Vietnam rather than an instrument of the Communist regime in the north; that although it included Communists, it was not dominated by them; and that it was fighting against the oppressions and repressions of the Diem and Thieu regimes. We now know from Hanoi itself that all these claims were false and that those in the United States who believed them were deceived. Similarly with Castro’s rebellion against the Batista regime in Cuba. Though we now know from Castro’s own mouth that he was a Communist from the beginning, when at first he claimed to be a Jeffersonian Democrat almost everyone in the United States believed him.
Things are very different today. As Ambassador Kirkpatrick points out in an article in the Washington Post, “what distinguishes the current debate about military and economic aid for Central America from similar disputes about China, Cuba, Vietnam, and Nicaragua is that we have fewer illusions and more information.” Hardly anyone claims any longer that the regime in Nicaragua is a coalition of different political groups whose objective is to create a pluralistic democracy there. The Sandinistas “are done with dissembling,” and have by their candor “denied their international supporters the comforts of ambiguity.” They openly proclaim, as one Sandinista leader puts it, that “We guide ourselves by the scientific doctrines of the Revolution, by Marxism-Leninism.” They make no effort to hide their close association with Cuba, which has sent thousands of teachers, managers, and military advisers to help them move more smoothly toward a fully totalitarian society and to enlarge and strengthen an army which is already the most powerful in the region.
Nor are the Sandinistas connected to the Soviet Union only indirectly, through Cuba. Recently, the Soviets began building a new port on the strategically important Pacific coast of Nicaragua, with the ostensible purpose of servicing Soviet fishing boats. More recently still, a member of the Nicaraguan junta said that his government would consider installing Soviet nuclear missiles in Nicaragua if requested by Moscow to do so. No wonder one French observer thinks “we are headed for a slow-motion replay of the Cuban missile crisis.”
In El Salvador, too, “the comforts of ambiguity” have largely disappeared. The elections of March 1982 in El Salvador, with their huge turnout despite threats of guerrilla reprisal, have made it hard to go on maintaining that the guerrillas enjoy great popular support at home, and the documentary evidence has made it more and more difficult to deny that they are (in Ambassador Kirkpatrick’s words) “directed from command-control centers in Nicaragua, armed with Soviet-bloc arms delivered through Cuba and Nicaragua, bent on establishing in El Salvador the kind of one-party dictatorship linked to the Soviet Union that already exists in Nicaragua.”
Beyond having every reason to know who the guerrillas are, those who advocate a “political solution” in the form of “power sharing” in El Salvador also have every reason to know what invariably happens to such arrangements. Nicaragua is only the most recent example of how a coalition in which Communists are included soon ceases to be a coalition and becomes a one-party regime.
Given all this, to say that the new isolationists would like to see a Marxist regime take over in El Salvador may be the only alternative to the truly slanderous charge that they are so stupid and so ignorant of history that they cannot understand the clear implication of what they say and do.
But why would anyone in Congress or anywhere else wish to see a Marxist regime take power in El Salvador? In the vast majority of instances, the answer obviously cannot be that they are Marxists themselves or that they are sympathetic to Communism. But nowadays it is not necessary to be either a Marxist or a Communist sympathizer in order to believe that Communism is the wave of the future, at least in the “Third World,” and that to range oneself against it is to be on “the losing side.” Thus Senator Dodd: “American dollars alone cannot buy military victory . . . in Central America. If we continue down that road, if we continue to ally ourselves with repression, we will not only deny our own most basic values, we will also find ourselves once again on the losing side.”
One would never guess from these words that 68 percent of the dollars we have sent to El Salvador have gone to economic rather than military aid; that what we have allied ourselves with in El Salvador is a democratically elected government; that it is trying with some success both to carry social reform forward and to cut down on the murders and other horrors that always and everywhere accompany guerrilla war; that if the guerrillas came to power they would be far more repressive than the present government in El Salvador. Despite all this, Senator Dodd declares that we are standing against “the tide of history” instead of moving with it.
Outside the halls of Congress, among columnists, editorialists, and academics, the idea that a Communist victory is the inevitable wave of the future comes out even more clearly. How, asks Anthony Lewis of the New York Times, can a government as bad as the one in El Salvador “win a war, whatever aid it gets,” against guerrilla forces “powerfully motivated by a desire to change a society long marked by brutality and exploitation?”
Again, one would never guess that the government in El Salvador has demonstrated in a free election that it enjoys vast popular support and that the grievances of the people have not led them to support the alternative represented by the guerrillas. Nor, when it comes to Nicaragua, does Lewis assume the invincibility of the powerfully motivated guerrillas fighting against a brutal and oppressive regime there. But of course, being against the Sandinistas, they must be unregenerate Somocistas (even though old leaders of the fight against Somoza are prominent among them); and being anti-Communists, they cannot be regarded as the inevitable victors in a struggle against a Communist regime.
But the most honest of all the statements yet published on these issues is by Seweryn Bialer, who directs the Research Institute on International Change at Columbia University. After declaring that “it is simply unrealistic to expect that American support for the Salvadoran government can prevent the insurrectionist forces from making significant advances—and perhaps even winning the war—in the next two years,” Bialer goes on to conclude flatly that it is also “unrealistic for the United States to hope to defeat Communist—or potentially Communist—regimes in the region.” Bialer knows this “from talks with representatives of the Salvadoran guerrillas, Sandinista leaders, and Cuban officials,” who have assured him that they will win. He also knows from the same sources that the Nicaraguan guerrillas cannot be expected to “defeat the Sandinistas or prevent their evolution toward Communism.” But of course he really knows it from the assumption he makes that the Salvadoran guerrillas are (to revert to Senator Dodd’s telling image) moving with the tide of history while the Nicaraguan guerrillas are moving against it.
Besides believing that Communism is the wave of the future, the new isolationists evidently believe that Communist regimes are on the whole better for the people who live under them than the “corrupt” and “repressive” governments they replace. On this point, politicians are unable to speak with the same degree of candor as a columnist like Anthony Lewis or an organization like the American Friends Service Committee. Where El Salvador is concerned, although Lewis is under “no illusion that the guerrilla forces and their leaders are all noble democrats, believers in government under law,” he nevertheless tells us that what they are fighting against is “brutality and exploitation.” Now, only yesterday Lewis was railing against the brutality and exploitation of the Thieu regime in South Vietnam only to discover (if indeed he yet has) that it was a paradise compared with what the “powerfully motivated” Vietnamese Communists had in store for the people of South Vietnam. But this time he is sure it will be different. Though the Sandinistas “do indeed have human-rights violations on their record,” Lewis says, “what has happened in Nicaragua in the last few years is pretty tame stuff compared to what has happened—and is still happening—in El Salvador.” After all, only a hundred civilians have been killed in Nicaragua during the past few years as compared with more than 30,000 in El Salvador.
The Anthony Lewis who throws these figures around is the same Anthony Lewis who, in writing first about the Christmas 1972 bombing of Hanoi and then about the Israeli invasion of Lebanon, uncritically accepted false casualty statistics to discredit the United States in the former case and Israel in the latter. Here, at it once again, he fails to consider that the guerrillas must have been responsible for at least some portion of the 30,000 civilians killed in El Salvador. Nor does he notice that during the period in question the war in El Salvador was still raging while one phase of the war in Nicaragua was over and the next not yet really begun. Nor does it occur to him that the Sandinistas are now in the process of consolidating their power and extending their control with the ultimate objective of turning the country into a totalitarian society on the model of Cuba. Nor does he recognize that Castroism, like every other example of Communist rule the world has ever known, has brought nothing but political repression, economic misery, and cultural starvation. Nor does he take into account the fact that the young men of Cuba have been turned into the cannon fodder of Soviet imperialism in Africa. None of these things disturbs Lewis’s belief that Nicaragua will be different.
Indeed, he and many others already detect signs that it is. Conditions, Lewis assures us, are better there than in El Salvador, and according to the national coordinator of the Human Rights Program of the American Friends Service Committee: “In many aspects of Nicaraguan life—nutrition, education, health care, and land reform—there have been tremendous improvements.” Having sung this old familiar song whose strains have echoed in countless reports from Stalin’s Russia, Mao’s China, Ho’s North Vietnam, and Castro’s Cuba, this Quaker guardian of human rights acknowledges that there have been a few violations in Nicaragua. For these, however, he mainly blames not the government but “attempts to destabilize the government.” Needless to say, he offers no such apology for the human-rights violations in El Salvador. There, he no doubt feels, the people will go on suffering until the inevitable arrival of the same blessings that the Sandinistas are now bringing to the people of Nicaragua and which would be more abundant still if not for “attempts to destabilize the government.”
But even if a Communist victory were both inevitable and morally desirable as compared with the alternative in El Salvador, would it not still be a blow to the interests of the United States?
Not necessarily, says Seweryn Bialer. Admittedly the Sandinistas, like Castro before them and (although here Bialer is less forthright) the guerrillas in El Salvador after them, are bent on creating Communist states. Admittedly nothing the United States could have done, “neither ‘carrots’ nor ‘sticks,’ . . . could have prevented the Cuban evolution into a Communist state,” and the same is true of Nicaragua and (presumably) El Salvador. However, Bialer believes, “a less bellicose policy toward Cuba might well have prevented it from becoming a satellite of the Soviet Union.” Where Cuba is concerned, it is now too late: we have “probably missed the opportunity to separate what is authentically Cuban in the Cuban revolution from the influence of the Soviet Union in Havana.” But elsewhere in Central America it is not yet too late: we still have a chance, through “a shrewder, more deft United States policy [to] prevent El Salvador and Nicaragua from moving into the Soviet orbit.”
What we should do, according to this analysis, is coopt the Communist revolution in Central America. For “the only plausible way to prevent Soviet influence in the United States’ own backyard” is to accept and even promote the spread of Communism in the United States’ own backyard. Instead of making Central America safe for such brutal dictatorships as the one in Guatemala, which is how Bialer characterizes our present policy, we should—to put it more nakedly than Bialer himself does—be working to make the region safe for national Communism.
Never mind that there is no evidence for Bialer’s assertion that Castro once was, or that the Sandinistas or the guerrillas in El Salvador now are, “interested not in Soviet goals but rather in . . . independence, social reform, and economic development.” Never mind that Castro himself has given the lie to the idea that he was driven into the arms of the Soviet Union by a “bellicose” American policy (which in any case was not at all bellicose in the immediate aftermath of Castro’s victory and only became so as he moved through his own revolutionary ardor into the Soviet camp). Never mind the simple fact that the United States not only helped in the end to topple the Somoza regime in Nicaragua after many years of supporting it, but initially welcomed the new regime in Nicaragua, sending it more economic aid in its first eighteen months in power than it had given to Somoza in the preceding twenty years. Never mind that as in the earlier case of Castro, these friendly relations with the Sandinistas turned sour as it became clear even to a sympathetic Carter administration that they were both failing to keep their democratic promises at home and also actively working with Soviet and Cuban help to promote a “revolution without frontiers” in El Salvador and elsewhere in the region. All these inconvenient truths to the contrary notwithstanding, Bialer and others can still assure us that all the Sandinista government cares about is “its own independence, social reform, and economic development.”
Both Lewis and Bialer (among many other commentators) freely concede that the United States could prevent a Communist victory in El Salvador (and could reverse the Communist revolution in Nicaragua) if it sent its own troops in to do the job. But the not-so-hidden term in their analysis is that the United States will not and cannot intervene militarily in Central America. Bialer: “It is difficult if not impossible to imagine that Congress and the American public would agree to such a course.” Lewis: “Public feeling against any dispatch of U.S. combat forces to El Salvador is so great that it is hard to see how any President could send them.”
It is here that we arrive finally at the juncture where pacifism and isolationism—the two great shapers of the opposition to the 1980 consensus—meet and merge into a single mighty wave of appeasement.
Of course, the term appeasement itself retains its pejorative ring—so much so that in what may well be the prize polemical trick of the age, one opponent of Reagan has tried to discredit him by pointing out that Neville Chamberlain, the great apostle of appeasement, was also anti-Soviet. But appeasement by any other name smells as rank, and the stench of it now pervades the American political atmosphere. It would indeed be astonishing if this were not the case, since appeasement (as the word itself reveals) is the natural offspring of pacifism and the policy most compatible with isolationism.
Those like Bialer who call for the appeasement of Communism in Central America tell us that this is “the only way to fight Soviet influence” in the region. But the spirit of appeasement does not always disguise itself as a clever tactic for opposing Soviet expansionism. More often it appears in the shape of a rush to apologize for or explain away or even justify every aggressive move the Soviet Union makes. Thus many of the same people who think that the United States has no right or is ill-advised to intervene in Central America against the spread of Communist regimes there are quick to defend (while of course piously deploring) the Soviet intervention in Afghanistan or the suppression of Solidarity in Poland on the ground that keeping friendly regimes in countries so close to its own borders is a legitimate security interest of the Soviet Union.
Similarly, many of the same people who oppose the proposed deployment in Europe of the new intermediate-range missiles are willing, nay eager, to justify the Soviet deployment of such missiles in Europe and to translate the sophistries of Moscow’s case into terms that sound very reasonable to American ears. When, for example, Irving Kristol asked why the Soviets decided to deploy the SS-20’s in Europe and arrived at the surely correct answer that they did so for the purposes of political intimidation, he was immediately countered by Raymond L. Garthoff of the Brookings Institution, who came up with “a perfectly understandable Soviet military rationale for modernization, without resort to speculation on intentions for a first strike or political pressure.”
The same impulse to deny or even cover up evidence of Soviet malevolence—and again by people, especially in the media, who leap at and magnify even the faintest indication of American wrongdoing—can be sniffed out in several other areas as well. Even before the attempted assassination of Pope John Paul II, there was a widespread refusal to credit the abundant evidence that the Soviets had been deeply involved in international terrorism. And then, even after the attempt, journalists who had never hesitated to convict the United States of outlandish charges merely on the basis of rumor willfully blinded themselves for many months to the increasingly obvious conclusion that the Soviets were the guilty party. A comparable degree of skepticism has been manifested, and also for an extraordinarily long time, toward the evidence that the Soviets, in violation of the Biological Weapons Convention of 1972, had been using the poisonous chemicals known as “yellow rain” in Laos, Cambodia, and Afghanistan.
While many of the skeptics have finally been forced to come around on the assassination attempt and on “yellow rain,” no such readiness to give up exonerating the Soviets has yet materialized on two other issues. One is Soviet involvement in Central America, for which an impossible standard of proof is demanded (again in contrast to how the United States is treated). The other is the issue of Soviet cheating on SALT. When President Reagan said recently that “There have been increasingly serious grounds for questioning [Soviet] compliance with the arms-control agreements that have already been signed,” he was instantly denounced for what a New York Times editorial called “loose talk about Soviet cheating.” Other commentators charged Reagan with hypocrisy: since he himself opposed ratification of SALT II, by what right did he accuse the Soviets of violating it?
In any event, said Tom Wicker of the New York Times, even though Reagan had promised “to refrain from actions which undercut SALT II so long as the Soviet Union shows equal restraint,” he himself had made proposals that “numerous experts” considered violations of the treaty. To which one of these experts, a former arms-control official in the Carter administration named William E. Jackson, Jr., added that it would not be surprising if Moscow had “long since concluded that the unratified [SALT II] treaty is a dead letter.”
To sum up the Wicker-Jackson position: there is no conclusive proof that the Soviets actually violated SALT II, and even if there were, they would be justified in doing so by the way “the Reagan administration has trashed the very idea” of preserving SALT II and has “repeatedly denigrated the arms-control achievements of Presidents Nixon, Ford, and Carter.”
In the past apologies for Soviet behavior usually arose out of love or admiration or sympathy. But that is not what we are dealing with here. The new species of apologetics comes not from Communists or fellow-travelers but from people who are so driven by the fear of Soviet power and so mesmerized by pacifist illusions that they will go to any lengths to persuade themselves and others that safety can be found in negotiations with the Soviet Union.
Sometimes this pretense is maintained by dismissing or denying realities like the size and scope of the Soviet military build-up and the aggressive political strategy that has accompanied it in violation of the promise implicit in the Basic Principles of Détente of 1972; or by dismissing or denying the evidence that the Soviets have certainly violated the Biological Weapons Convention of 1972, and have almost certainly cheated on SALT I. Yet even when denial is made impossible by an avalanche of incontrovertible evidence, the very acknowledgment of these previously suppressed realities is usually accompanied by intensified affirmations of the need to pursue and reach agreements.
The best recent illustration of how fear begets pacifist illusions which then beget appeasement is a column entitled “Sarajevo and St. Peter’s” by Flora Lewis of the New York Times. Miss Lewis here begins by quoting a British historian who had warned against pursuing the facts of the assassination attempt on the Pope because “the echo of a bullet at Sarajevo set off World War I.” Miss Lewis disagrees. The facts, she says, “should not, and probably cannot, be stifled. History and Western dignity demand the truth.” What then is the “warning” sounded by the horrible realization that “the line of responsibility leads directly to Moscow’s KGB and to the man who was then its chief and is now the Soviet leader, Yuri Andropov”? Does this mean that agreements with such a man and such a nation are worthless? Not in the least, Miss Lewis tells us: “It means getting on with arms negotiations, engaging determinedly in a search for peace with an adversary too dangerous to defy or discount. The issue isn’t mutual trust, it is everybody’s survival in a world where dirty tricks are all too possible, and so is total disaster. The appropriate lesson of Sarajevo now is to face facts, and therefore plan for peace.”
One can scarcely imagine a more vivid expression of the spirit of appeasement which has been bred by the resurgence of pacifism and isolationism in the past two years.
If the opposition to the 1980 consensus on the Soviet threat and the need to take action against it is shaped by these elements (traveling, to repeat, under different names), what are its prospects for the future?
In trying to answer this question, the beginning of wisdom is to recognize that despite appearances to the contrary, we are not dealing here with a struggle that divides neatly along party lines. On the two main issues we have been examining—defense and Central America—the Democratic Jimmy Carter and the Republican Ronald Reagan have been surprisingly close. As I have already pointed out, it was Carter who cut off American aid to Nicaragua and sent money and military advisers to El Salvador; and it was also Carter who endorsed the MX, agreed to deploy the Pershing 2’s and cruise missiles in Europe, and who withdrew SALT II because the votes for ratification could not be mustered in the (Democratically-controlled) Senate. Conversely, there are many Republicans in the House and Senate who are against or are lukewarm toward these same policies even when espoused by a President of their own party, and it is an open secret that many Democrats disagreed with Senator Dodd’s attack on Reagan’s speech about Central America. There are, then, Republicans in the opposition to the 1980 consensus and there are Democrats who remain part of that consensus.
Nor does the debate divide neatly along a liberal-conservative or Left-Right axis. The liberal New York Times opposes the freeze, while a conservative like former Secretary of the Treasury William E. Simon calls for cuts in the defense budget. A conservative like Irving Kristol supports withdrawal of American troops from Europe, while liberals like Morton Kondracke of the New Republic and Richard Holbrooke (formerly of the Carter State Department and Foreign Policy magazine) oppose withdrawal of American support from El Salvador.
There is hope in these crisscrossings and incoherent combinations. For if it is true that the opposition to the 1980 consensus on the Soviet threat was given an ideal chance to regroup and mobilize by Reagan’s decision to pay more attention to the economy than to foreign policy, then the recent change in the balance of presidential attention might serve to restore the consensus to some approximation of its former bipartisan strength and confidence. The series of speeches Reagan has made in the past few months defending his policies on defense and Central America has already had an effect. The MX has survived a major congressional challenge, and Congress has also accepted a larger increase in defense spending than the opposition had not so long ago bargained for. On Central America, too, the opposition in Congress has been forced to back down. It has not (yet) succeeded in cutting off all aid to the Nicaraguan guerrillas or in forcing the government of El Salvador to submit to the demands of the guerrillas there.
On the other hand, Reagan has been forced to back down as well. To get the MX and other elements of his rearmament program, he has had to enter into an arms-control process which he once gave every indication of understanding to be a fraud and a trap; and he has also had to move more slowly and cautiously in Central America than he presumably would have wished.
Reagan has been forced to act in these ways largely because the consensus that elected him has been frightened by the relentless pounding and the demagogic appeals of the opposition. Even so, the American people have not changed their minds about the seriousness of the Soviet threat. We know this from the fact that in all the polls large majorities say that they are very worried about it. But the influence of the opposition shows in the equally large majorities who place their hopes in arms-control negotiations and who are especially reluctant to send American troops to Central America. As a politician, Ronald Reagan, confronted by this twin reluctance, has been compelled to bend.
But those who still hold with the 1980 consensus on the Soviet threat, and who are not politicians, have no compelling need to bend. They are free to speak plainly, and they have a great responsibility to do so. They have a great responsibility to go on saying that the Soviet threat can only be successfully met by a policy of strength and resolve which will inevitably entail larger defense budgets and a continued reliance on nuclear weapons; that the hopes vested in arms control are delusory and dangerous, and serve mainly as a respectable cover for isolationism and appeasement; that we can deter a war with the Soviet Union only if we are prepared and willing if necessary to fight; that if the United States cannot prevent a Communist victory in El Salvador, it will stand revealed as a spent and impotent force; and that the United States must therefore do whatever may be required, up to and including the dispatch of American troops, to stop and then to reverse the totalitarian drift in Central America.
In short, they have a great responsibility to go on demonstrating that pacifism and isolationism in any guise and under any name can only give us a world fashioned in the image of the Soviet Union. I for one do not believe that the American people will cooperate knowingly in the emergence of such a world. And that is why I think the spirit of appeasement now hovering so heavily over the land can still be blown away by a renewed, persistent, and unembarrassed appeal to the realism, the sense of honor, and the patriotism that erupted after Iran and Afghanistan and then swept Ronald Reagan into office only two-and-a-half years ago.
1 This statement occurs in a brief summary of the non-pacifist case, such as it is, for arms-control negotiations with the Soviet Union. What little there is to be said in favor of arms control from a non-pacifist perspective is also well put at greater length in “The Realities of Arms Control” by the Harvard Nuclear Study Group (Atlantic, June 1983).
2 The four are Karl Kaiser, who directs the leading German research institute on foreign affairs; Georg Leber, a labor leader and a former Social Democratic Defense Minister; Alois Mertes, the parliamentary foreign-policy spokesman of the Christian Democrats; and Franz-Josef Schulze, a retired general who has served in various high positions in NATO. Like their American opponents in this debate, then, the Germans are a bipartisan group with much professional experience in foreign and defense policy.
3 It is widely asserted that the Scowcroft Commission appointed by Reagan to advise on the deployment and basing of the MX has exposed the “window of vulnerability” as a myth. But the Scowcroft Report does no such thing. It clearly acknowledges that our land-based ICBM's need to be modernized. It also acknowledges that they are vulnerable. Recognizing, however, that the only way to solve the problem of vulnerability in the short term—namely, the Carter scheme of movable multiple shelters (MPS)—had to be rejected because of “local political opposition,” the Scowcroft Commission takes such comfort as it can find in the idea that the other legs of the strategic triad will temporarily compensate for this vulnerability until it is eventually cured by the substitution of smaller single-warhead missiles for the MX.
Can it be reversed?
Writing in these pages last year (“Illiberalism: The Worldwide Crisis,” July/August 2016), I described this surge of intemperate politics as a global phenomenon, a crisis of illiberalism stretching from France to the Philippines and from South Africa to Greece. Donald Trump and Bernie Sanders, I argued, were articulating American versions of this growing challenge to liberalism. By “liberalism,” I was referring not to the left or center-left but to the philosophy of individual rights, free enterprise, checks and balances, and cultural pluralism that forms the common ground of politics across the West.
Less a systematic ideology than a posture or sensibility, the new illiberalism nevertheless has certain core planks. Chief among these are a conspiratorial account of world events; hostility to free trade and finance capital; opposition to immigration that goes beyond reasonable restrictions and bleeds into virulent nativism; impatience with norms and procedural niceties; a tendency toward populist leader-worship; and skepticism toward international treaties and institutions, such as NATO, that provide the scaffolding for the U.S.-led postwar order.
The new illiberals, I pointed out, all tend to admire established authoritarians to varying degrees. Trump, along with France’s Marine Le Pen and many others, looks to Vladimir Putin. For Sanders, it was Hugo Chavez’s Venezuela, where, the Vermont socialist said in 2011, “the American dream is more apt to be realized.” Even so, I argued, the crisis of illiberalism traces mainly to discontents internal to liberal democracies.
Trump’s election and his first eight months in office have confirmed the thrust of my predictions, if not all of the policy details. On the policy front, the new president has proved too undisciplined, his efforts too wild and haphazard, to reorient the U.S. government away from postwar liberal order.
The courts blunted the “Muslim ban.” The Trump administration has reaffirmed Washington’s commitment to defend treaty partners in Europe and East Asia. Trumpian grumbling about allies not paying their fair share—a fair point in Europe’s case, by the way—has amounted to just that. The president did pull the U.S. out of the Trans-Pacific Partnership, but even the ultra-establishmentarian Hillary Clinton went from supporting to opposing the pact once she figured out which way the Democratic winds were blowing. The North American Free Trade Agreement, which came into being nearly a quarter-century ago, does look shaky at the moment, but there is no reason to think that it won’t survive in some modified form.
Yet on the cultural front, the crisis of illiberalism continues to rage. If anything, it has intensified, as attested by the events surrounding the protest over a Robert E. Lee statue in Charlottesville, Virginia. The president refused to condemn unequivocally white nationalists who marched with swastikas and chanted “Jews will not replace us.” Trump even suggested there were “very fine people” among them, thus winking at the so-called alt-right as he had during the campaign. In the days that followed, much of the left rallied behind so-called antifa (“anti-fascist”) militants who make no secret of their allegiance to violent totalitarian ideologies at the other end of the political spectrum.
Disorder is the new American normal, then. Questions that appeared to have been settled—about the connection between economic and political liberty, the perils of conspiracism and romantic politics, America’s unique role on the world stage, and so on—are unsettled once more. Serious people wonder out loud whether liberal democracy is worth maintaining at all, with many of them concluding that it is not. The return of ideas that for good reason were buried in the last century threatens the decent political order that has made the U.S. an exceptionally free and prosperous civilization.

For many leftists, America’s commitment to liberty and equality before the law has always masked despotism and exploitation. This view long predated Trump’s rise, and if they didn’t subscribe to it themselves, too often mainstream Democrats and progressives treated its proponents—the likes of Noam Chomsky and Howard Zinn—as beloved and respectable, if slightly eccentric, relatives.
This cynical vision of the free society (as a conspiracy against the dispossessed) was a mainstay of Cold War–era debates about the relative merits of Western democracy and Communism. Soviet apologists insisted that Communist states couldn’t be expected to uphold “merely” formal rights when they had set out to shape a whole new kind of man. That required “breaking a few eggs,” in the words of the Stalinist interrogators in Arthur Koestler’s Darkness at Noon. Anyway, what good were free speech and due process to the coal miner, when under capitalism the whole social structure was rigged against him?
That line worked for a time, until the scale of Soviet tyranny became impossible for anyone but its most abject apologists to justify. It became obvious that “bourgeois justice,” however imperfect, was infinitely preferable to the Marxist alternative. With the Communist experiment discredited, and Western workers uninterested in staging world revolution, the illiberal left began shifting instead to questions of identity. In race-gender-sexuality theory and the identitarian “subaltern,” it found potent substitutes for dialectical materialism and the proletariat. We are still living with the consequences of this shift.
Although there were superficial resemblances, this new politics of identity differed from earlier civil-rights movements. Those earlier movements had sought a place at the American table for hitherto entirely or somewhat excluded groups: blacks, women, gays, the disabled, and so on. In doing so, they didn’t seek to overturn or radically reorganize the table. Instead, they reaffirmed the American Founding (think of Martin Luther King Jr.’s constant references to the Declaration of Independence). And these movements succeeded, owing to America’s tremendous capacity for absorbing social change.
Yet for the new identitarians, as for the Marxists before them, liberal-democratic order was systematically rigged against the downtrodden—now redefined along lines of race, gender, and sexuality, with social class quietly swept under the rug. America’s strides toward racial progress, not least the election and re-election of an African-American president, were dismissed. The U.S. still deserved condemnation because it fell short of perfect inclusion, limitless autonomy, and complete equality—conditions that no free society can achieve given the root fact of human nature. The accidentals had changed from the Marxist days, in other words, but the essentials remained the same.
In one sense, though, the identitarians went further. The old Marxists still claimed to stand on objectively accessible truth. Not so their successors. Following intellectual lodestars such as the gender theorist Judith Butler, the identity left came to reject objective truth—and with it, biological sex differences, aesthetic standards in art, the possibility of universal moral precepts, and much else of the kind. All of these things, the left identitarians said, were products of repressive institutions, hierarchies, and power.
Today’s “social-justice warriors” are heirs to this sordid intellectual legacy. They claim to seek justice. But, unmoored from any moral foundations, SJW justice operates like mob justice and revolutionary terror, usually carried out online. SJWs claim to protect individual autonomy, but the obsession with group identity and power dynamics means that SJW autonomy claims must destroy the autonomy of others. Self-righteousness married to total relativism is a terrifying thing.
It isn’t enough to have legalized same-sex marriage in the U.S. via judicial fiat; the evangelical baker must be forced to bake cakes for gay weddings. It isn’t enough to have won legal protection and social acceptance for the transgendered; the Orthodox rabbi must use preferred trans pronouns on pain of criminal prosecution. Likewise, since there is no objective truth to be gained from the open exchange of ideas, any speech that causes subjective discomfort among members of marginalized groups must be suppressed, if necessary through physical violence. Campus censorship that began with speech codes and mobs that prevented conservative and pro-Israel figures from speaking has now evolved into a general right to beat anyone designated as a “fascist,” on- or off-campus.
For the illiberal left, the election of Donald Trump was indisputable proof that behind America’s liberal pieties lurks, forever, the beast of bigotry. Trump, in this view, wasn’t just an unqualified vulgarian who nevertheless won the decisive backing of voters dissatisfied with the alternative or alienated from mainstream politics. Rather, a vote for Trump constituted a declaration of war against women, immigrants, and other victims of American “structures of oppression.” There would be no attempt to persuade Trump supporters; war would be answered by war.
This isn’t liberalism. Since it can sometimes appear as an extension of traditional civil-rights activism, however, identity leftism has glommed itself onto liberalism. It is frequently impossible to tell where traditional autonomy- and equality-seeking liberalism ends and repressive identity leftism begins. Whether based on faulty thinking or out of a sense of weakness before an angry and energetic movement, liberals have too often embraced the identity left as their own. They haven’t noticed how the identitarians seek to undermine, not rectify, liberal order.
Some on the left, notably Columbia University’s Mark Lilla, are sounding the alarm and calling on Democrats to stress the common good over tribalism. Yet these are a few voices in the wilderness. Identitarians of various stripes still lord it over the broad left, where it is fashionable to believe that the U.S. project is predatory and oppressive by design. If there is a viable left alternative to identity on the horizon, it is the one offered by Sanders and his “Bernie Bros”—which is to say, a reversion to the socialism and class struggle of the previous century.
Americans, it seems, will have to wait a while for reason and responsibility to return to the left.
Then there is the illiberal fever gripping American conservatives. Liberal democracy has always had its critics on the right, particularly in Continental Europe, where statist, authoritarian, and blood-and-soil accounts of conservatism predominate. Mainstream Anglo-American conservatism took a different course. It has championed individual rights, free enterprise, and pluralism while insisting that liberty depends on public virtue and moral order, and that sometimes the claims of liberty and autonomy must give way to those of tradition, state authority, and the common good.
The whole beauty of American order lies in keeping in tension these rival forces that are nevertheless fundamentally at peace. The Founders didn’t adopt wholesale Enlightenment liberalism; rather, they tempered its precepts about universal rights with the teachings of biblical religion as well as Roman political theory. The Constitution drew from all three wellsprings. The product was a whole, and it is a pointless and ahistorical exercise to elevate any one source above the others.
American conservatism and liberalism, then, are in fact branches of each other, the one (conservatism) invoking tradition and virtue to defend and, when necessary, discipline the regime of liberty; the other (liberalism) guaranteeing the open space in which churches, volunteer organizations, philanthropic activity, and other sources of tradition and civic virtue flourish, in freedom, rather than through state establishment or patronage.
One result has been long-term political stability, a blessing that Americans take for granted. Another has been the transformation of liberalism into the lingua franca of all politics, not just at home but across a world that, since 1945, has increasingly reflected U.S. preferences. The great French classical liberal Raymond Aron noted in 1955 that the “essentials of liberalism—the respect for individual liberty and moderate government—are no longer the property of a single party: they have become the property of all.” As Aron archly pointed out, even liberalism’s enemies tend to frame their objections using the rights-based talk associated with liberalism.
Under Trump, however, some in the party of the right have abdicated their responsibility to liberal democracy as a whole. They have reduced themselves to the lowest sophistry in defense of the president’s inanities and daily assaults on presidential norms. Beginning when Trump clinched the GOP nomination last year, a great deal of conservative “thinking” has amounted to: You did X to us, now enjoy it as we dish it back to you and then some. Entire websites and some of the biggest stars in right-wing punditry are singularly devoted to making this rather base point. If Trump is undermining this or that aspect of liberal order that was once cherished by conservatives, so be it; that 63 million Americans supported him and that the president “drives the left crazy”—these are good enough reasons to go along.
Some of this is partisan jousting that occurs with every administration. But when it comes to Trump’s most egregious statements and conduct—such as his repeated assertions that the U.S. and Putin’s thugocracy are moral equals—the apologetics are positively obscene. Enough pooh-poohing, whataboutery, and misdirection of this kind, and there will be no conservative principle left standing.
More perniciously, as once-defeated illiberal philosophies have returned with a vengeance to the left, so have their reactionary analogues to the right. The two illiberalisms enjoy a remarkable complementarity and even cross-pollinate each other. This has developed to the point where it is sometimes hard to distinguish Tucker Carlson from Chomsky, Laura Ingraham from Julian Assange, the Claremont Review from New Left Review, and so on.
Two slanders against liberalism in particular seem to be gathering strength on the thinking right. The first is the tendency to frame elements of liberal democracy, especially free trade, as a conspiracy hatched by capitalists, the managerial class, and others with soft hands against American workers. One needn’t renounce liberal democracy as a whole to believe this, though believers often go the whole hog. The second idea is that liberalism itself was another form of totalitarianism all along and, therefore, that no amount of conservative course correction can set right what is wrong with the system.
These two theses together represent a dismaying ideological turn on the right. The first—the account of global capitalism as an imposition of power over the powerless—has gained currency in the pages of American Affairs, the new journal of Trumpian thought, where class struggle is a constant theme. Other conservatives, who were always skeptical of free enterprise and U.S.-led world order, such as the Weekly Standard’s Christopher Caldwell, are also publishing similar ideas to a wider reception than perhaps greeted them in the past.
In a March 2017 essay in the Claremont Review of Books, for example, Caldwell flatly described globalization as a “con game.” The perpetrators, he argued, are “unscrupulous actors who have broken promises and seized a good deal of hard-won public property.” These included administrations of both parties that pursued trade liberalization over decades, people who live in cities and therefore benefit from the knowledge-based economy, American firms, and really anyone who has ever thought to capitalize on global supply chains to boost competitiveness—globalists, in a word.
By shipping jobs and manufacturing processes overseas, Caldwell contended, these miscreants had stolen not just material things like taxpayer-funded research but also concepts like “economies of scale” (you didn’t build that!). Thus, globalization in the West differed “in degree but not in kind from the contemporaneous Eastern Bloc looting of state assets.”
That comparison with predatory post-Communist privatization is a sure sign of ideological overheating. It is somewhat like saying that a consumer bank’s lending to home buyers differs in degree but not in kind from a loan shark’s racket in a housing project. Well, yes, in the sense that the underlying activity—moneylending, the purchase of assets—is the same in both cases. But the context makes all the difference: The globalization that began after World War II and accelerated in the ’90s took place within a rules-based system, which duly elected or appointed policymakers in Western democracies designed in good faith and for a whole host of legitimate strategic and economic reasons.
These policymakers knew that globalization was as old as civilization itself. It would take place anyway, and the only question was whether it would be rules-based and efficient or the kind of globalization that would be driven by great-power rivalry and therefore prone to protectionist trade wars. And they were right. What today’s anti-trade types won’t admit is that defeating the Trans-Pacific Partnership and a proposed U.S.-European trade pact known as TTIP won’t end globalization as such; instead, it will cede the game to other powers that are less concerned about rules and fair play.
The postwar globalizers may have gone too far (or not far enough!). They certainly didn’t give sufficient thought to the losers in the system, or how to deal with the de-industrialization that would follow when information became supremely mobile and wages in the West remained too high relative to skills and productivity gains in the developing world. They muddled and compromised their way through these questions, as all policymakers in the real world do.
The point is that these leaders—the likes of FDR, Churchill, JFK, Ronald Reagan, Margaret Thatcher, and, yes, Bill Clinton—acted neither with malice aforethought nor anti-democratically. It isn’t true, contra Caldwell, that free trade necessarily requires “veto-proof and non-consultative” politics. The U.S., Britain, and other members of what used to be called the Free World have respected popular sovereignty (as understood at the time) for as long as they have been trading nations. Put another way, you were far more likely to enjoy political freedom if you were a citizen of one of these states than of countries that opposed economic liberalism in the 20th century. That remains true today. These distinctions matter.
Caldwell and like-minded writers of the right, who tend to dwell on liberal democracies’ crimes, are prepared to tolerate far worse if it is committed in the name of defeating “globalism.” Hence the speech on Putin that Caldwell delivered this spring at a Hillsdale College gathering in Phoenix. Promising not to “talk about what to think about Putin,” he proceeded to praise the Russian strongman as the “preeminent statesman of our time” (alongside Turkish strongman Recep Tayyip Erdogan). Putin, Caldwell said, “has become a symbol of national self-determination.”
Then Caldwell made a remark that illuminates the link between the illiberalisms of yesterday and today. Putin is to “populist conservatives,” he declared, what Castro once was to progressives. “You didn’t have to be a Communist to appreciate the way Castro, whatever his excesses, was carving out a space of autonomy for his country.”
Whatever his excesses, indeed.
The other big idea is that today’s liberal crises aren’t a bug but a core feature of liberalism. This line of thinking is particularly prevalent among some Catholic traditionalists and other orthodox Christians (both small- and capital-“o”). The common denominator, it seems to me, is having grown up as a serious believer at a time when many liberals—to their shame—have declared war on faith generally and social conservatism in particular.
The argument essentially is this:
We (social conservatives, traditionalists) saw the threat from liberalism coming. With its claims about abstract rights and universal reason, classical liberalism had always posed a danger to the Church and to people of God. We remembered what those fired up by the new ideas did to our nuns and altars in France. Still we made peace with American liberal order, because we were told that the Founders had “built on low but solid ground,” to borrow Leo Strauss’s famous formulation, or that they had “built better than they knew,” as American Catholic hierarchs in the 19th century put it.
Maybe these promises held good for a couple of centuries, the argument continues, but they no longer do. Witness the second sexual revolution under way today. The revolutionaries are plainly telling us that we must either conform our beliefs to Herod’s ways or be driven from the democratic public square. Can it still be said that the Founding rested on solid ground? Did the Founders really build better than they knew? Or is what is passing now precisely what they intended, the rotten fruit of the Enlightenment universalism that they planted in the Constitution? We don’t love Trump (or Putin, Hungary’s Viktor Orbán, etc.), but perhaps he can counter the pincer movement of sexual and economic liberalism, and restore a measure of solidarity and commitment to the Western project.
The most pessimistic of these illiberal critics go so far as to argue that liberalism isn’t all that different from Communism, that both are totalitarian children of the Enlightenment. One such critic, Harvard Law School’s Adrian Vermeule, summed up this position in a January essay in First Things magazine:
The stock distinction between the Enlightenment’s twins—communism is violently coercive while liberalism allows freedom of thought—is glib. Illiberal citizens, trapped [under liberalism] without exit papers, suffer a narrowing sphere of permitted action and speech, shrinking prospects, and increasing pressure from regulators, employers, and acquaintances, and even from friends and family. Liberal society celebrates toleration, diversity, and free inquiry, but in practice it features a spreading social, cultural, and ideological conformism.1
I share Vermeule’s despair and that of many other conservative-Christian friends, because there have been genuinely alarming encroachments against conscience, religious freedom, and the dignity of life in Western liberal democracies in recent years. Even so, despair is an unhelpful companion to sober political thought, and the case for plunging into political illiberalism is weak, even on social-conservative grounds.
Here again what commends liberalism is historical experience, not abstract theory. Simply put, in the real-world experience of the 20th century, the Church, tradition, and religious minorities fared far better under liberal-democratic regimes than they did under illiberal alternatives. Are coercion and conformity targeting people of faith under liberalism? To be sure. But these don’t take the form of the gulag or the concentration camp or the soccer stadium–cum-killing field. Catholic political practice knows well how to draw such moral distinctions between regimes: Pope John Paul II befriended Reagan. If liberal democracy and Communism were indeed “twins” whose distinctions are “glib,” why did he do so?
And as Pascal Bruckner wrote in his essay “The Tyranny of Guilt,” if liberal democracy does trap or jail you (politically speaking), it also invariably slips the key under your cell door. The Swedish midwives driven out of the profession over their pro-life views can take their story to the media. The Down syndrome advocacy outfit whose anti-eugenic advertising was censored in France can sue in national and then international courts. The Little Sisters of the Poor can appeal to the Supreme Court for a conscience exemption to Obamacare’s contraceptives mandate. And so on.
Conversely, once you go illiberal, you don’t just rid yourself of the NGOs and doctrinaire bureaucrats bent on forcing priests to perform gay marriages; you also lose the legal guarantees that protect the Church, however imperfectly, against capricious rulers and popular majorities. And if public opinion in the West is turning increasingly secular, indeed anti-Christian, as social conservatives complain and surveys seem to confirm, is it really a good idea to militate in favor of a more illiberal order rather than defend tooth and nail liberal principles of freedom of conscience? For tomorrow, the state might fall into Elizabeth Warren’s hands.
Nor, finally, is political liberalism alone to blame for the Church’s retreating on various fronts. There have been plenty of wounds inflicted by churchmen and laypeople, who believed that they could best serve the faith by conforming its liturgy, moral teaching, and public presence to liberal order. But political liberalism didn’t compel these changes, at least not directly. In the space opened up by liberalism, and amid the kaleidoscopic lifestyles that left millions of people feeling empty and confused, it was perfectly possible to propose tradition as an alternative. It is still possible to do so.

None of this is to excuse the failures of liberals. Liberals and mainstream conservatives must go back to the drawing board, to figure out why it is that thoughtful people have come to conclude that their system is incompatible with democracy, nationalism, and religious faith. Traditionalists and others who see Russia’s mafia state as a defender of Christian civilization and national sovereignty have been duped, but liberals bear some blame for driving large numbers of people in the West to that conclusion.
This is a generational challenge for the liberal project. So be it. Liberal societies like America’s by nature invite such questioning. But before we abandon the 200-and-some-year-old liberal adventure, it is worth examining the ways in which today’s left-wing and right-wing critiques of it mirror bad ideas that were overcome in the previous century. The ideological ferment of the moment, after all, doesn’t relieve the illiberals of the responsibility to reckon with the lessons of the past.
1 Vermeule was reviewing The Demon in Democracy, a 2015 book by the Polish political theorist and parliamentarian Ryszard Legutko that makes the same case. Fred Siegel’s review of the English edition appeared in our June 2016 issue.
How the courts are intervening to block some of the most unjust punishments of our time
Barrett’s decision marked the 59th judicial setback for a college or university since 2013 in a due-process lawsuit brought by a student accused of sexual assault. (In four additional cases, the school settled a lawsuit before any judicial decision occurred.) This body of law serves as a towering rebuke to the Obama administration’s reinterpretation of Title IX, the 1972 law barring sex discrimination in schools that receive federal funding.
Beginning in 2011, the Education Department’s Office for Civil Rights (OCR) issued a series of “guidance” documents pressuring colleges and universities to change how they adjudicated sexual-assault cases in ways that increased the likelihood of guilty findings. Amid pressure from student and faculty activists, virtually all elite colleges and universities have gone far beyond federal mandates and have even further weakened the rights of students accused of sexual assault.
Like all extreme victims’-rights approaches, the new policies had the greatest impact on the wrongly accused. A 2016 study from UCLA public-policy professor John Villasenor used just one of the changes—schools employing the lowest standard of proof, a preponderance of the evidence—to predict that as often as 33 percent of the time, campus Title IX tribunals would return guilty findings in cases involving innocent students. Villasenor’s study could not measure the impact of other Obama-era policy demands—such as allowing accusers to appeal not-guilty findings, discouraging cross-examination of accusers, and urging schools to adjudicate claims even when a criminal inquiry found no wrongdoing.
In a September 7 address at George Mason University, Education Secretary Betsy DeVos stated that “no student should be forced to sue their way to due process.” But once enmeshed in the campus Title IX process, a wrongfully accused student’s best chance for justice may well be a lawsuit filed after his college incorrectly has found him guilty. (According to data from United Educators, a higher-education insurance firm, 99 percent of students accused of campus sexual assault are male.) The Foundation for Individual Rights in Education has identified more than 180 such lawsuits filed since the 2011 policy changes. That figure, obviously, excludes students with equally strong claims whose families cannot afford to go to court. These students face life-altering consequences. As Judge T.S. Ellis III noted in a 2016 decision, it is “so clear as to be almost a truism” that a student will lose future educational and employment opportunities if his college wrongly brands him a rapist.
“It is not the role of the federal courts to set aside decisions of school administrators which the court may view as lacking in wisdom or compassion.” So wrote the Supreme Court in a 1975 case, Wood v. Strickland. While the Supreme Court has made clear that colleges must provide accused students with some rights, especially when dealing with nonacademic disciplinary questions, courts generally have not been eager to intervene in such matters.
This is what makes the developments of the last four years all the more remarkable. The process began in May 2013, in a ruling against St. Joseph’s University, and has lately accelerated (15 rulings in 2016 and 21 thus far in 2017). Of the 40 setbacks for colleges in federal court, 14 came from judges nominated by Barack Obama, 11 from Clinton nominees, and nine from selections of George W. Bush. Brown University has been on the losing side of three decisions; Duke, Cornell, and Penn State, two each.
Court decisions since the expansion of Title IX activism have not all gone in one direction. In 36 of the due-process lawsuits, courts have permitted the university to maintain its guilty finding. (In four other cases, the university settled despite prevailing at a preliminary stage.) But even in these cases, some courts have expressed discomfort with campus procedures. One federal judge was “greatly troubled” that Georgia Tech veered “very far from an ideal representation of due process” when its investigator “did not pursue any line of investigation that may have cast doubt on [the accuser’s] account of the incident.” Another went out of his way to say that he considered it plausible that a former Case Western Reserve University student was actually “innocent of the charges levied against him.” And one state appellate judge opened oral argument by bluntly informing the University of California’s lawyer, “When I . . . finished reading all the briefs in this case, my comment was, ‘Where’s the kangaroo?’”
Judges have, obviously, raised more questions in cases where the college has found itself on the losing side. Those lawsuits have featured three common areas of concern: bias in the investigation, resulting in a college decision based on incomplete evidence; procedures that prevented the accused student from challenging his accuser’s credibility, chiefly through cross-examination; and schools utilizing a process that seemed designed to produce a predetermined result, in response to real or perceived pressure from the federal government.

Colleges and universities have proven remarkably willing to act on incomplete information when adjudicating sexual-assault cases. In December 2013, for example, Amherst College expelled a student for sexual assault despite text messages (which the college investigator failed to discover) indicating that the accuser had consented to sexual contact. The accuser’s own testimony also indicated that she might have committed sexual assault, by initiating sexual contact with a student who Amherst conceded was experiencing an alcoholic blackout. When the accused student sued Amherst, the college said its failure to uncover the text messages had been irrelevant because its investigator had only sought texts that portrayed the incident as nonconsensual. In February, Judge Mark Mastroianni allowed the accused student’s lawsuit to proceed, commenting that the texts could raise “additional questions about the credibility of the version of events [the accuser] gave during the disciplinary proceeding.” The two sides settled in late July.
Amherst was hardly alone in its eagerness to avoid evidence that might undermine the accuser’s version of events; the same happened at Penn State, St. Joseph’s, Duke, Ohio State, Occidental, Lynn, Marlboro, Michigan, and Notre Dame.
Even in cases with a more complete evidentiary base, accused students have often been blocked from presenting a full-fledged defense. As part of its reinterpretation of Title IX, the Obama administration sought to shield campus accusers from cross-examination. OCR’s 2011 guidance “strongly” discouraged direct cross-examination of accusers by the accused student—a critical restriction, since most university procedures require the accused student, rather than his lawyer, to defend himself in the hearing. OCR’s 2014 guidance suggested that this type of cross-examination in and of itself could create a hostile environment. The Obama administration even spoke favorably about the growing trend among schools to abolish hearings altogether and allow a single official to serve as investigator, prosecutor, judge, and jury in sexual-assault cases.
The Supreme Court has never held that campus disciplinary hearings must permit cross-examination. Nonetheless, the recent attack on the practice has left schools struggling to explain why they would not want to utilize what the Court has described as the “greatest legal engine ever invented for the discovery of truth.” In June 2016, the University of Cincinnati found a student guilty of sexual assault after a hearing at which neither his accuser nor the university’s Title IX investigator appeared. In an unintentionally comical line, the hearing chair noted the absent witnesses before asking the accused student if he had “any questions of the Title IX report.” The student, befuddled, replied, “Well, since she’s not here, I can’t really ask anything of the report.” (The panel chair did not indicate how the “report” could have answered any questions.) Cincinnati found the student guilty anyway.1
Limitations on full cross-examination also played a role in judicial setbacks for Middlebury, George Mason, James Madison, Ohio State, Occidental, Penn State, Brandeis, Amherst, Notre Dame, and Skidmore.
Finally, since 2011, more than 300 students have filed Title IX complaints with the Office for Civil Rights, alleging mishandling of their sexual-assault allegation by their college. OCR’s leadership seemed to welcome the complaints, which allowed Obama officials to inspect not only the individual case but all sexual-assault claims at the school in question over a three-year period. Northwestern University professor Laura Kipnis has estimated that during the Obama years, colleges spent between $60 million and $100 million on these investigations. If OCR finds a Title IX violation, that might lead to a loss of federal funding. This has led Harvard Law professors Jeannie Suk Gersen, Janet Halley, Elizabeth Bartholet, and Nancy Gertner to observe in a white paper submitted to OCR that universities have “strong incentives to ensure the school stays in OCR’s good graces.”
One of the earliest lawsuits after the Obama administration’s policy shift, involving former Xavier University basketball player Dez Wells, demonstrated how an OCR investigation can affect the fairness of a university inquiry. The accuser’s complaint had been referred both to Xavier’s Title IX office and the Cincinnati police. The police concluded that the allegation was meritless; Hamilton County Prosecuting Attorney Joseph Deters later said he considered charging the accuser with filing a false police report.
Deters asked Xavier to delay its proceedings until his office completed its investigation. School officials refused. Instead, three weeks after the initial allegation, the university expelled Wells. He sued and speculated that Xavier’s haste came not from a quest for justice but instead from a desire to avoid difficulties in finalizing an agreement with OCR to resolve an unrelated complaint filed by two female Xavier students. (In recent years, OCR has entered into dozens of similar resolution agreements, which bind universities to policy changes in exchange for removing the threat of losing federal funds.) In a July 2014 ruling, Judge Arthur Spiegel observed that Xavier’s disciplinary tribunal, however “well-equipped to adjudicate questions of cheating, may have been in over its head with relation to an alleged false accusation of sexual assault.” Soon thereafter, the two sides settled; Wells transferred to the University of Maryland.
Ohio State, Occidental, Cornell, Middlebury, Appalachian State, USC, and Columbia have all found themselves on the losing side of court decisions arising from cases that originated at a time when OCR was investigating or threatening to investigate the school. (In the Ohio State case, one university staffer testified that she didn’t know whether she had an obligation to correct a false statement by an accuser to a disciplinary panel.) Pressure from OCR can be indirect, as well. The Obama administration interpreted federal law as requiring all universities to have at least one Title IX coordinator; larger universities now employ dozens of Title IX personnel who, as the Harvard Law professors explained, “have reason to fear for their jobs if they hold a student not responsible or if they assign a rehabilitative or restorative rather than a harshly punitive sanction.”
Amid the wave of judicial setbacks for universities, two decisions in particular stand out. Easily the most powerful opinion in a campus due-process case came in March 2016 from Judge F. Dennis Saylor. While the stereotypical campus sexual-assault allegation results from an alcohol-filled, one-night encounter between a male and a female student, a case at Brandeis University involved a long-term monogamous relationship between two male students. A bad breakup led to the accusing student’s filing the following complaint, against which his former boyfriend was expected to provide a defense: “Starting in the month of September, 2011, the Alleged violator of Policy had numerous inappropriate, nonconsensual sexual interactions with me. These interactions continued to occur until around May 2013.”
To adjudicate, Brandeis hired a former OCR staffer, who interviewed the two students and a few of their friends. Since the university did not hold a hearing, the investigator decided guilt or innocence on her own. She treated each incident as if the two men were strangers to each other, which allowed her to determine that sexual “violence” had occurred in the relationship. The accused student, she found, sometimes looked at his boyfriend in the nude without permission and sometimes awakened his boyfriend with kisses when the boyfriend wanted to stay asleep. The university’s procedures prevented the student from seeing the investigator’s report, with its absurdly broad definition of sexual misconduct, in preparing his appeal. “In the context of American legal culture,” Boston Globe columnist Dante Ramos later argued, denying this type of information “is crazy.” “Standard rules of evidence and other protections for the accused keep things like false accusations or mistakes by authorities from hurting innocent people.” When the university appeal was denied, the student sued.
At an October 2015 hearing to consider the university’s motion to dismiss, Saylor seemed flabbergasted at the unfairness of the school’s approach. “I don’t understand,” he observed, “how a university, much less one named after Louis Brandeis, could possibly think that that was a fair procedure to not allow the accused to see the accusation.” Brandeis’s lawyer cited pressure to conform to OCR guidance, but the judge deemed the university’s procedures “closer to Salem 1692 than Boston, 2015.”
The following March, Saylor issued an 89-page opinion that has been cited in virtually every lawsuit subsequently filed by an accused student. “Whether someone is a ‘victim’ is a conclusion to be reached at the end of a fair process, not an assumption to be made at the beginning,” Saylor wrote. “If a college student is to be marked for life as a sexual predator, it is reasonable to require that he be provided a fair opportunity to defend himself and an impartial arbiter to make that decision.” Saylor concluded that Brandeis forced the accused student “to defend himself in what was essentially an inquisitorial proceeding that plausibly failed to provide him with a fair and reasonable opportunity to be informed of the charges and to present an adequate defense.”
The student, vindicated by the ruling’s sweeping nature, then withdrew his lawsuit. He currently is pursuing a Title IX complaint against Brandeis with OCR.
Four months later, a three-judge panel of the Second Circuit Court of Appeals produced an opinion that had neither Saylor’s rhetorical flourish nor his grasp of the basic unfairness of the campus Title IX process. But by creating a more relaxed standard for accused students to make federal Title IX claims, the Second Circuit’s decision in Doe v. Columbia carried considerable weight.
Two Columbia students who had been drinking had a brief sexual encounter at a party. More than four months later, the accuser claimed she was too intoxicated to have consented. Her allegation came in an atmosphere of campus outrage about the university’s allegedly insufficient toughness on sexual assault. In this setting, the accused student found Columbia’s Title IX investigator uninterested in hearing his side of the story. He cited witnesses who would corroborate his belief that the accuser wasn’t intoxicated; the investigator declined to speak with them. The student was found guilty, although for reasons differing from the initial claim; the Columbia panel ruled that he had “directed unreasonable pressure for sexual activity toward the [accuser] over a period of weeks,” leaving her unable to consent on the night in question. He received a three-semester suspension for this nebulous offense—which even his accuser deemed too harsh. He sued, and the case was assigned to Judge Jesse Furman.
Furman’s opinion provided a ringing victory for Columbia and the Obama-backed policies it used. As Title IX litigator Patricia Hamill later observed, Furman’s “almost impossible standard” required accused students to have inside information about the institution’s handling of other sexual-assault claims—information they could plausibly obtain only through the legal process known as discovery, which happens at a later stage of litigation—in order to survive a university’s initial motion to dismiss. Furman suggested that, to prevail, an accused student would need to show that his school treated a female student accused of sexual assault more favorably, or at least provide details about how cases against other accused students showed a pattern of bias. But federal privacy law keeps campus disciplinary hearings private, leaving most accused students with little opportunity to uncover the information before their case is dismissed.
At the same time, the opinion excused virtually any degree of unfairness by the institution. Furman reasoned that taking “allegations of rape on campus seriously and . . . treat[ing] complainants with a high degree of sensitivity” could constitute “lawful” reasons for university unfairness toward accused students. Samantha Harris of the Foundation for Individual Rights in Education detected the decision’s “immediate and nationwide impact” in several rulings against accused students. It also played the same role in university briefs that Saylor’s Brandeis opinion did in filings by accused students.
The Columbia student’s lawyer, Andrew Miltenberg, appealed Furman’s ruling to the Second Circuit. The stakes were high, since a ruling affirming the lower court’s reasoning would have all but foreclosed Title IX lawsuits by accused students in New York, Connecticut, and Vermont. But a panel of three judges, all nominated by Democratic presidents, overturned Furman’s decision. In the opinion’s crucial passage, Judge Pierre Leval held that a university “is not excused from liability for discrimination because the discriminatory motivation does not result from a discriminatory heart, but rather from a desire to avoid practical disadvantages that might result from unbiased action. A covered university that adopts, even temporarily, a policy of bias favoring one sex over the other in a disciplinary dispute, doing so in order to avoid liability or bad publicity, has practiced sex discrimination, notwithstanding that the motive for the discrimination did not come from ingrained or permanent bias against that particular sex.” Before the Columbia decision, courts almost always had rebuffed Title IX pleadings from accused students. More recently, judges have allowed Title IX claims to proceed against Amherst, Cornell, California–Santa Barbara, Drake, and Rollins.
After the Second Circuit’s decision, Columbia settled with the accused student, sparing its Title IX decision-makers from having to testify at a trial. James Madison was one of the few universities to take a different course, with disastrous results. A lawsuit from an accused student survived a motion to dismiss, but the university refused to settle, allowing the student’s lawyer to depose the three school employees who had decided his client’s fate. One unintentionally revealed that he had misapplied the university’s own definition of consent. Another cited the accuser’s slurred words on a voicemail as proof of her extreme intoxication on the night of the alleged assault. It was left to the accused student’s lawyer, at a deposition months after the decision had been made, to note that the voicemail in question actually was received on a different night. In December 2016, Judge Elizabeth Dillon, an Obama nominee, granted summary judgment to the accused student, concluding that “significant anomalies in the appeal process” violated his due-process rights under the Constitution.
Universities were on the losing side of 36 due-process rulings while Obama appointee Catherine Lhamon presided over the Office for Civil Rights between 2013 and 2016; no record exists of her publicly acknowledging any of them. In June 2017, however, Lhamon suddenly rejoiced that “yet another federal court” had found that students disciplined for sexual misconduct “were not denied due process.” That Fifth Circuit decision, involving two former students at the University of Houston, was an odd case for her to celebrate. The majority cabined its findings to the “unique facts” of the case—that the accused students likely would have been found guilty even under the fairest possible process. And the dissent, from Judge Edith Jones, denounced the procedures championed by Lhamon and other Obama officials as “heavily weighted in favor of finding guilt,” predicting “worse to come if appellate courts do not step in to protect students’ procedural due process right where allegations of quasi-criminal sexual misconduct arise.”
At this stage, Lhamon, who now chairs the U.S. Commission on Civil Rights, cannot be taken seriously when it comes to questions of campus due process. But other defenders of the current Title IX regime have offered more substantive commentary about the university setbacks.
Legal scholar Michelle Anderson was one of the few to even discuss the due-process decisions. “Colleges and universities do not always adjudicate allegations of sexual assault well,” she noted in a 2016 law review article defending the Obama-era policies. Anderson even conceded that some colleges had denied “accused students fairness in disciplinary adjudication.” But these students sued, “and campuses are responding—as they must—when accused students prevail. So campuses face powerful legal incentives on both sides to address campus sexual assault, and to do so fairly and impartially.”
This may be true, but Anderson does not explain why wrongly accused students should bear the financial and emotional burden of inducing their colleges to implement fair procedures. More important, scant evidence exists that colleges have responded to the court victories of wrongly accused students by creating fairer procedures. Some have even made it more difficult for wrongly accused students to sue. After losing a lawsuit in December 2014, Brown eliminated the right of students accused of sexual assault to have “every opportunity” to present evidence. That same year, an accused student showed how Swarthmore had deviated from its own procedures in his case. The college quickly settled the lawsuit—and then added a clause to its procedures immunizing it from similar claims in the future. Swarthmore currently informs accused students that “rules of evidence ordinarily found in legal proceedings shall not be applied, nor shall any deviations from any of these prescribed procedures alone invalidate a decision.”
Many lawsuits are still working their way through the judicial system; three cases are pending at federal appellate courts. In the two that address substantive matters, oral arguments seemed to reveal skepticism of the universities’ positions. On July 26, a three-judge panel of the First Circuit considered a case at Boston College, where the accused student plausibly argued that someone else had committed the sexual assault (which occurred on a poorly lit dance floor). Judges Bruce Selya and William Kayatta seemed troubled that a Boston College dean had improperly intruded on the hearing board’s deliberations. At the Sixth Circuit a few days later, Judges Richard Griffin and Amul Thapar both expressed concerns about the University of Cincinnati’s downplaying the importance of cross-examination in campus-sex adjudications. Judge Eric Clay was quieter, but he wondered about the tension between the university’s Title IX and truth-seeking obligations.
In a perfect world, academic leaders themselves would have created fairer processes without judicial intervention. But in the current campus environment, such an approach is impossible. So, at least for the short term, the courts remain the best, albeit imperfect, option for students wrongly accused of sexual assault. Meanwhile, every year, young men entrust themselves and their family’s money to institutions of higher learning that are indifferent to their rights and unconcerned with the injustices to which these students might be subjected.
1 After a district court placed that finding on hold, the university appealed to the Sixth Circuit.
Review of 'Terror in France' By Gilles Kepel
Kepel is particularly knowledgeable about the history and process of radicalization that takes place in his nation’s heavily Muslim banlieues (the depressed housing projects ringing Paris and other major cities), and Terror in France is informed by decades of fieldwork in these volatile locales. What we have been witnessing for more than a decade, Kepel argues, is the “third wave” of global jihadism, which is not so much a top-down, doctrinally inspired campaign (as were the 9/11 attacks, directed from afar by the oracular figure of Osama bin Laden) as a bottom-up insurgency with an “enclave-based ethnic-racial logic of violence” to it. Kepel traces the phenomenon back to 2005, a convulsive year that saw the second-generation descendants of France’s postcolonial Muslim immigrants confront a changing socio-political landscape.
That was the year of the greatest riots in modern French history, involving mostly young Muslim men. It was also the year that Abu Musab al-Suri, the Syrian-born Islamist then serving as al-Qaeda’s operations chief in Europe, published The Global Islamic Resistance Call. This 1,600-page manifesto combined pious imprecations against the West with do-it-yourself ingenuity, an Anarchist’s Cookbook for the Islamist set. In Kepel’s words, the manifesto preached a “jihadism of proximity,” the brand of civil war later adopted by the Islamic State. It called for ceaseless, mass-casualty attacks in Western cities—attacks which increase suspicion and regulation of Muslims and, in turn, drive those Muslims into the arms of violent extremists.
The third-generation jihad has been assisted by two phenomena: social-networking sites that easily and widely disseminate Islamist propaganda (thus increasing the rate of self-radicalization) and the so-called Arab Spring, which led to state collapse in Syria and Libya, providing “an exceptional site for military training and propaganda only a few hours’ flight from Europe, and at a very low cost.”
Kepel’s book is not just a study of the ideology and tactics of Islamists but a sociopolitical overview of how this disturbing phenomenon fits within a country on the brink. For example, Kepel finds that jihadism is emerging in conjunction with developments such as the “end of industrial society.” A downturn in work has led to an ominous situation in which a “right-wing ethnic nationalism” preying on the economically anxious has risen alongside Islamism as “parallel conduits for expressing grievances.” Filling a space left by the French Communist Party (which once brought the ethnic French working class and Arab immigrants together), these two extremes leer at each other from opposite sides of a societal chasm, signaling the potentially cataclysmic future that awaits France if both mass unemployment and Islamist terror continue undiminished.
The French economy has also had a more direct inciting effect on jihadism. Overregulated labor markets make it difficult for young Muslims to get jobs, thus exacerbating the conditions of social deprivation and exclusion that make individuals susceptible to radicalization. The inability to tackle chronic unemployment has led to widespread Muslim disillusionment with the left (a disillusionment aggravated by another, often glossed over, factor: widespread Muslim opposition to the Socialist Party’s championing of same-sex marriage). Essentially, one left-wing constituency (unions) has made the unemployment of another constituency (Muslim youth) the mechanism for maintaining its privileges.
Kepel does not, however, cite deprivation as the sole or even main contributing factor to Islamist radicalization. One Parisian banlieue that has sent more than 80 residents to fight in Syria, he notes, has “attractive new apartment buildings” built by the state and features a mosque “constructed with the backing of the Socialist mayor.” It is also the birthplace of well-known French movie stars of Arab descent, and thus hardly a place where ambition goes to die. “The Islamophobia mantra and the victim mentality it reinforces makes it possible to rationalize a total rejection of France and a commitment to jihad by making a connection between unemployment, discrimination, and French republican values,” Kepel writes. Indeed, Kepel is refreshingly derisive of the term “Islamophobia” throughout the book, excoriating Islamists and their fellow travelers for “substituting it for anti-Semitism as the West’s cardinal sin.” These are meaningful words coming from Kepel, a deeply learned scholar of Islam who harbors great respect for the faith and its adherents.
Kepel also weaves the saga of jihadism into the ongoing “kulturkampf within the French left.” Arguments about Islamist terrorism demonstrate a “divorce between a secular progressive tradition” and the children of the Muslim immigrants this tradition fought to defend. The most ironically perverse manifestation of this divorce was ISIS’s kidnapping of Didier François, co-founder of the civil-rights organization SOS Racisme. Kepel recognizes the origins of this divorce in the “red-green” alliance formed decades ago between Islamists and elements of the French intellectual left, such as Michel Foucault, a cheerleader of the Iranian revolution.
Though he offers a rigorous history and analysis of the jihadist problem, Kepel is generally at a loss for solutions. He decries a complacent French elite, with its disregard for genuine expertise (evidenced by the decline in institutional academic support for Islamicists and Arabists) and the narrow, relatively impenetrable way in which it perpetuates itself, chiefly through a single school (the École nationale d’administration) that practically every French politician must attend. Despite France’s admirable republican values, this insularity has made the process of assimilation rather difficult. But other than wishing that the public education system become more effective and inclusive at instilling republican values, Kepel provides little in the way of suggestions as to how France might emerge from this mess. That a scholar of such erudition and humanity can do little but throw up his hands and issue a sigh of despair cannot bode well. The third-generation jihad owes as much to the political breakdown in France as it does to the meltdown in the Middle East. Defeating this two-headed beast requires a new and comprehensive playbook: the West’s answer to The Global Islamic Resistance Call. That book has yet to be written.
President Trump, in case you haven’t noticed, has a tendency to exaggerate. Nothing is “just right” or “meh” for him. Buildings, crowds, election results, and military campaigns are always outsized, gargantuan, larger, and more significant than you might otherwise assume. “People want to believe that something is the biggest and the greatest and the most spectacular,” he wrote 30 years ago in The Art of the Deal. “I call it truthful hyperbole. It’s an innocent form of exaggeration—and a very effective form of promotion.”
So effective, in fact, that the press has picked up the habit. Reporters and editors agree with the president that nothing he does is ordinary. After covering Trump for more than two years, they still can’t accept him as a run-of-the-mill politician. And while there are aspects of Donald Trump and his presidency that are, to say the least, unusual, the media seem unable to distinguish between the abnormal and significant—firing the FBI director in the midst of an investigation into one’s presidential campaign, for example—and the commonplace.
Consider the fiscal deal President Trump struck with Democratic leaders in early September.
On September 6, the president held an Oval Office meeting with Vice President Pence, Treasury Secretary Mnuchin, and congressional leaders of both parties. He had to find a way to (a) raise the debt ceiling, (b) fund the federal government, and (c) spend money on hurricane relief. The problem is that a bloc of House Republicans won’t vote for (a) unless the increase is accompanied by significant budget cuts, which interferes with (b) and (c). To raise the debt ceiling, then, requires Democratic votes. And the debt ceiling must be raised. “There is zero chance—no chance—we will not raise the debt ceiling,” Senate Majority Leader Mitch McConnell said in August.
The meeting went like this. First, House Speaker Paul Ryan asked for an 18-month increase in the debt ceiling so Republicans wouldn’t have to vote again on the matter until after the midterm elections. Democrats refused. The bargaining continued until Ryan asked for a six-month increase. The Democrats remained stubborn. So Trump, always willing to kick a can down the road, interrupted Mnuchin to offer a three-month increase, a continuing resolution that would keep the government open through December, and about $8 billion in hurricane money. The Democrats said yes.
That, anyway, is what happened. But the media are not satisfied to report what happened. They want—they need—to tell you what it means. And what does it mean? Well, they aren’t really sure. But it’s something big. It’s something spectacular. For example:
1. “Trump Bypasses Republicans to Strike Deal on Debt Limit and Harvey Aid” was the headline of a story for the New York Times by Peter Baker, Thomas Kaplan, and Michael D. Shear. “The deal to keep the government open and paying its debts until Dec. 15 represented an extraordinary public turn for the president, who has for much of his term set himself up on the right flank of the Republican Party,” their article began. Fair enough. But look at how they import speculation and opinion into the following sentence: “But it remained unclear whether Mr. Trump’s collaboration with Democrats foreshadowed a more sustained shift in strategy by a president who has presented himself as a master dealmaker or amounted to just a one-time instinctual reaction of a mercurial leader momentarily eager to poke his estranged allies.”
2. “The decision was one of the most fascinating and mysterious moves he’s made with Congress during eight months in office,” reported Jeff Zeleny, Dana Bash, Deirdre Walsh, and Jeremy Diamond for CNN. Thanks for sharing!
3. “Trump budget deal gives GOP full-blown Stockholm Syndrome,” read the headline of Tina Nguyen’s piece for Vanity Fair. “Donald Trump’s unexpected capitulation to new best buds ‘Chuck and Nancy’ has thrown the Grand Old Party into a frenzy as Republicans search for explanations—and scapegoats.”
4. “For Conservatives, Trump’s Deal with Democrats Is Nightmare Come True,” read the headline for a New York Times article by Jeremy W. Peters and Maggie Haberman. “It is the scenario that President Trump’s most conservative followers considered their worst nightmare, and on Wednesday it seemed to come true: The deal-making political novice, whose ideology and loyalty were always fungible, cut a deal with Democrats.”
5. “Trump sides with Democrats on fiscal issues, throwing Republican plans into chaos,” read the Washington Post headline the day after the deal was announced. “The president’s surprise stance upended sensitive negotiations over the debt ceiling and other crucial policy issues this fall and further imperiled his already tenuous relationships with Senate Majority Leader Mitch McConnell and House Speaker Paul Ryan.” Yes, the negotiations were upended. Then they made a deal.
6. “Although elected as a Republican last year,” wrote Peter Baker of the Times, “Mr. Trump has shown in the nearly eight months in office that he is, in many ways, the first independent to hold the presidency since the advent of the two-party system around the time of the Civil War.” The title of Baker’s news analysis: “Bound to No Party, Trump Upends 150 Years of Two-Party Rule.” One hundred and fifty years? Why not 200?
The journalistic rule of thumb used to be that an article describing a political, social, or cultural trend requires at least three examples. Not while covering Trump. If Trump does something, anything, you should feel free to inflate its importance beyond all recognition. And stuff your “reporting” with all sorts of dramatic adjectives and frightening nouns: fascinating, mysterious, unexpected, extraordinary, nightmare, chaos, frenzy, and scapegoats. It’s like a Vince Flynn thriller come to life.
The case for the significance of the budget deal would be stronger if there were a consensus about whom it helped. There isn’t one. At first the press assumed Democrats had won. “Republicans left the Oval Office Wednesday stunned,” reported Rachael Bade, Burgess Everett, and Josh Dawsey of Politico. Another trio of Politico reporters wrote, “In the aftermath, Republicans seethed privately and distanced themselves publicly from the deal.” Republicans were “stunned,” reported Kristina Peterson, Siobhan Hughes, and Louise Radnofsky of the Wall Street Journal. “Meet the swamp: Donald Trump punts September agenda to December after meeting with Congress,” read the headline of Charlie Spiering’s Breitbart story.
By the following week, though, these very outlets had decided the GOP was looking pretty good. “Trump’s deal with Democrats bolsters Ryan—for now,” read the Politico headline on September 11. “McConnell: No New Debt Ceiling Vote until ‘Well into 2018,’” reported the Washington Post. “At this point…picking a fight with Republican leaders will only help him,” wrote Gerald Seib in the Wall Street Journal. “Trump has long warned that he would work with Democrats, if necessary, to fulfill his campaign promises. And Wednesday’s deal is a sign that he intends to follow through on that threat,” wrote Breitbart’s Joel Pollak.
The sensationalism, the conflicting interpretations, the visceral language are dizzying. We have so many reporters chasing the same story that each feels compelled to gussy up a quotidian budget negotiation until it resembles the Ribbentrop–Molotov Pact, and none feels it necessary to apply to their own reporting the scrutiny and incredulity they apply to Trump. The truth is that no one knows what this agreement portends. Nor is it the job of a reporter to divine the meaning of current events like an augur of Rome. Sometimes a cigar is just a cigar. And a deal is just a deal.
Remembering something wonderful
Not surprisingly, many well-established performers were left in the lurch by the rise of the new media. Moreover, some vaudevillians who, like Fred Allen, had successfully reinvented themselves for radio were unable to make the transition to TV. But a handful of exceptionally talented performers managed to move from vaudeville to radio to TV, and none did it with more success than Jack Benny, whose feigned stinginess, scratchy violin playing, slightly effeminate demeanor, and preternaturally exact comic timing made him one of the world’s most beloved performers. After establishing himself in vaudeville, he became the star of a comedy series, The Jack Benny Program, that aired continuously, first on radio and then TV, from 1932 until 1965. Save for Bob Hope, no other comedian of his time was so popular.
With the demise of nighttime network radio as an entertainment medium, the 931 weekly episodes of The Jack Benny Program became the province of comedy obsessives—and because Benny’s TV series was filmed in black-and-white, it is no longer shown in syndication with any regularity. And while he also made Hollywood films, some of which were box-office hits, only one, Ernst Lubitsch’s To Be or Not to Be (1942), is today seen on TV other than sporadically.
Nevertheless, connoisseurs of comedy still regard Benny, who died in 1974, as a giant, and numerous books, memoirs, and articles have been published about his life and art. Most recently, Kathryn H. Fuller-Seeley, a professor at the University of Texas at Austin, has brought out Jack Benny and the Golden Age of Radio Comedy, the first book-length primary-source academic study of The Jack Benny Program and its star.1 Fuller-Seeley’s genuine appreciation for Benny’s work redeems her anachronistic insistence on viewing it through the fashionable prism of gender- and race-based theory, and her book, though sober-sided to the point of occasional starchiness, is often quite illuminating.
Most important of all, off-the-air recordings of 749 episodes of the radio version of The Jack Benny Program survive in whole or part and can easily be downloaded from the Web. As a result, it is possible for people not yet born when Benny was alive to hear for themselves why he is still remembered with admiration and affection—and why one specific aspect of his performing persona continues to fascinate close observers of the American scene.
Born Benjamin Kubelsky in Chicago in 1894, Benny was the son of Eastern European émigrés (his father was from Poland, his mother from Lithuania). He started studying violin at six and had enough talent to pursue a career in music, but his interests lay elsewhere, and by the time he was a teenager, he was working in vaudeville as a comedian who played the violin as part of his act. Over time he developed into a “monologist,” the period term for what we now call a stand-up comedian, and he began appearing in films in 1929 and on network radio three years after that.
Radio comedy, like silent film, is now an obsolete art form, but the program formats that it fostered in the ’20s and ’30s all survived into the era of TV, and some of them flourish to this day. One, episodic situation comedy, was developed in large part by Jack Benny and his collaborators. Benny and Harry Conn, his first full-time writer, turned his weekly series, which started out as a variety show, into a weekly half-hour playlet featuring a regular cast of characters augmented by guest stars. Such playlets, relying as they did on a setting that was repeated from week to week, were easier to write than the free-standing sketches favored by Allen, Hope, and other ex-vaudevillians, and by the late ’30s, the sitcom had become a staple of radio comedy.
The process, as documented by Fuller-Seeley, was a gradual one. The Jack Benny Program never broke entirely with the variety format, continuing to feature both guest stars (some of whom, like Ronald Colman, ultimately became semi-regular members of the show’s rotating ensemble of players) and songs sung by Dennis Day, a tenor who joined the cast in 1939. Nor was it the first radio situation comedy: Amos ’n’ Andy, launched in 1928, was a soap-opera-style daily serial that also featured regular characters. Nevertheless, it was Benny who perfected the form, and his own character would become the prototype for countless later sitcom stars.
The show’s pivotal innovation was to turn Benny and the other cast members into fictionalized versions of themselves—they were the stars of a radio show called “The Jack Benny Program.” Sadye Marks, Benny’s wife, played Mary Livingstone, his sharp-tongued secretary, with three other characters added as the self-reflexive concept took shape. Don Wilson, the stout, genial announcer, came on board in 1934. He was followed in 1936 by Phil Harris, Benny’s roguish bandleader, and, in 1939, by Day, Harris’s simple-minded vocalist. To this team was added a completely fictional character, Rochester Van Jones, Benny’s raspy-voiced, outrageously impertinent black valet, played by Eddie Anderson, who joined the cast in 1938.
As these five talented performers coalesced into a tight-knit ensemble, the jokey, vaudeville-style sketch comedy of the early episodes metamorphosed into sitcom-style scripts that portrayed their offstage lives, as well as the making of the show itself. Scarcely any conventional jokes were told, nor did Benny’s writers employ the topical and political references in which Allen and Hope specialized. Instead, the show’s humor arose almost entirely from the close interplay of character and situation.
Benny was not solely responsible for the creation of this format, which was forged by Conn and perfected by his successors. Instead, he doubled as the star and producer—or, to use the modern term, show runner—closely supervising the writing of the scripts and directing the performances of the other cast members. In addition, he and Conn turned the character of Jack Benny from a sophisticated vaudeville monologist into the hapless butt of the show’s humor, a vain, sexually inept skinflint whose character flaws were ceaselessly twitted by his colleagues, who in turn were given most of the biggest laugh lines.
This latter innovation was a direct reflection of Benny’s real-life personality. Legendary for his voluble appreciation of other comedians, he was content to respond to the wisecracking of his fellow cast members with exquisitely well-timed interjections like “Well!” and “Now, cut that out,” knowing that the comic spotlight would remain focused on the man of whom they were making fun and secure in the knowledge that his own comic personality was strong enough to let them shine without eclipsing him in the process.
And with each passing season, the fictional personalities of Benny and his colleagues became ever more firmly implanted in the minds of their listeners, thus allowing the writers to get laughs merely by alluding to their now-familiar traits. At the same time, Benny and his writers never stooped to coasting on their familiarity. Even the funniest of the “cheap jokes” that were their stock-in-trade were invariably embedded in carefully honed dramatic situations that heightened their effectiveness.
A celebrated case in point is the best-remembered laugh line in the history of The Jack Benny Program, heard in a 1948 episode in which a burglar holds Benny up on the street. “Your money or your life,” the burglar says—to which Jack replies, after a very long pause, “I’m thinking it over!” What makes this line so funny is, of course, our awareness of Benny’s stinginess, reinforced by a decade and a half of constant yet subtly varied repetition. What is not so well remembered is that the line is heard toward the end of an episode that aired shortly after Ronald Colman won an Oscar for his performance in A Double Life. Inspired by this real-life event, the writers concocted an elaborately plotted script in which Benny talks Colman (who played his next-door neighbor on the show) into letting him borrow the Oscar to show to Rochester. It is on his way home from this errand that Benny is held up, and the burglar not only robs him of his money but also steals the statuette, a situation that was resolved to equally explosive comic effect in the course of two subsequent episodes.
No mere joke-teller could have performed such dramatically complex scripts week after week with anything like Benny’s effectiveness. The secret of The Jack Benny Program was that its star, fully aware that he was not “being himself” but playing a part, did so with an actor’s skill. This was what led Ernst Lubitsch to cast him in To Be or Not to Be, in which he plays a mediocre Shakespearean tragedian, a character broadly related to but still quite different from the one who appeared on his own radio show. As Lubitsch explained to Benny, who was skeptical about his ability to carry off the part:
A clown—he is a performer what is doing funny things. A comedian—he is a performer what is saying funny things. But you, Jack, you are an actor, you are an actor playing the part of a comedian and this you are doing very well.
To Be or Not to Be also stands out from the rest of Benny’s work because he plays an identifiably Jewish character. The Jack Benny character that he played on radio and TV, by contrast, was never referred to or explicitly portrayed as Jewish. To be sure, most listeners were in no doubt of his Jewishness, and not merely because Benny made no attempt in real life to conceal his ethnicity, of which he was by all accounts proud. The Jack Benny Program was written by Jews, and the ego-puncturing insults with which their scripts were packed, as well as the schlemiel-like aspect of Benny’s “fall guy” character, were quintessentially Jewish in style.
As Benny explained in a 1948 interview cited by Fuller-Seeley:
The humor of my program is this: I’m a big shot, see? I’m fast-talking. I’m a smart guy. I’m boasting about how marvelous I am. I’m a marvelous lover. I’m a marvelous fiddle player. Then, five minutes after I start shooting off my mouth, my cast makes a shmo out of me.
Even so, his avoidance of specific Jewish identification on the air is noteworthy precisely because his character was a miser. At a time when overt anti-Semitism was still common in America, it is remarkable that Benny’s comic persona was based in large part on an anti-Semitic stereotype—yet one that seems not to have inspired any anti-Semitic attacks on Benny himself. When, in 1945, his writers came up with the idea of an “I Can’t Stand Jack Benny Because . . . ” write-in campaign, they received 270,000 entries. Only three made mention of his Jewishness.
As for the winning entry, submitted by a California lawyer, it says much about what insulated Benny from such attacks: “He fills the air with boasts and brags / And obsolete, obnoxious gags / The way he plays his violin / Is music’s most obnoxious sin / His cowardice alone, indeed, / Is matched by his obnoxious greed / And all the things that he portrays / Show up MY OWN obnoxious ways.” It is clear that Benny’s foibles were seen by his listeners not as particular but universal, just as there was no harshness in the razzing of his fellow cast members, who very clearly loved the Benny character in spite of his myriad flaws. So, too, did the American people. Several years after his TV series was cancelled, a corporation that was considering using him as a spokesman commissioned a national poll to find out how popular he was. It learned that only 3 percent of the respondents disliked him.
Therein lay Benny’s triumph: He won total acceptance from the American public and did so by embodying a Jewish stereotype from which the sting of prejudice had been leached. Far from being a self-hating whipping boy for anti-Semites, he turned himself into WASP America’s Jewish uncle, preposterous yet lovable.
When the bottom fell out of network radio, Benny negotiated the move to TV without a hitch, debuting on the small screen in 1950 and bringing the radio version of The Jack Benny Program to a close five years later, making it one of the very last radio comedy series to shut up shop. Even after his weekly TV series was finally canceled by CBS in 1965, he continued to star in well-received one-shot specials on NBC.
But Benny’s TV appearances, for all their charm, were never quite equal in quality to his radio work, which is why he clung to the radio version of The Jack Benny Program until network radio itself went under: Better than anyone else, he knew how good the show had been. For the rest of his life, he lived off the accumulated comic capital built up by 21 years of weekly radio broadcasts.
Now, at long last, he belongs to the ages, and The Jack Benny Program is a museum piece. Yet it remains hugely influential, albeit at one or more removes from the original. From The Dick Van Dyke Show and The Danny Thomas Show to Seinfeld, Everybody Loves Raymond, and The Larry Sanders Show, every ensemble-cast sitcom whose central character is a fictionalized version of its star is based on Benny’s example. And now that the ubiquity of the Web has made the radio version of his series readily accessible for the first time, anyone willing to make the modest effort necessary to seek it out is in a position to discover that The Jack Benny Program, six decades after it left the air, is still as wonderfully, benignly funny as it ever was, a monument to the talent of the man who, more than anyone else, made it so.
Review of 'The Transferred Life of George Eliot' By Philip Davis
Not that there’s any danger these theoretically protesting students would have read George Eliot’s works—not even the short one, Silas Marner (1861), which in an earlier day was assigned to high schoolers. I must admit I didn’t find my high-school reading of Silas Marner a pleasant experience—sports novels for boys like John R. Tunis’s The Kid from Tomkinsville were inadequate preparation. I must confess, too, that when I was in graduate school, determined to study 17th-century English verse, my reaction to the suggestion that I should also read Middlemarch (1871–72) was “What?! An 800-page novel by the guy who wrote Silas Marner?” A friend patiently explained that “the guy” was actually Mary Ann Evans, born in 1819, died in 1880. Partly because she was living in sin with the literary jack-of-all-trades George Henry Lewes (legally and irrevocably bound to his estranged wife), she adopted “George Eliot” as a protective pseudonym when, in her 1857 debut, she published Scenes of Clerical Life.
I did, many times over and with awe and delight, go on to read Middlemarch and the seven other novels, often in order to teach them to college students. Students have become less and less receptive over the years. Forget modern-day objections to George Eliot’s complex political or religious views. Adam Bede (1859) and The Mill on the Floss (1860) were too hefty, and the triple-decked Middlemarch and Daniel Deronda, even if I set aside three weeks for them, rarely got finished.
The middle 20th century was perhaps a more propitious time for appreciating George Eliot, Henry James, and other 19th-century English and American novelists. Influential teachers like F.R. Leavis at Cambridge and Lionel Trilling at Columbia were then working hard to persuade students that the study of literature, not just poetry and drama but also fiction, matters both to their personal lives—the development of their sensibility or character—and to their wider society. The “moral imagination” that created Middlemarch enriches our minds by dramatizing the complications—the frequent blurring of good and evil—in our lives. Great novels help us cope with ambiguities and make us more tolerant of one another. Many of Leavis’s and Trilling’s students became teachers themselves, and for several decades the feeling of cultural urgency was sustained. In the 1970s, though, between the leftist emphasis on literature as “politics by other means” and the deconstructionist denial of the possibility of any knowledge, literary or otherwise, independent of political power, the high seriousness of Leavis and Trilling began to fade.
The study of George Eliot and her life has gone through many stages. Directly after her death came the sanitized, hagiographic “life and letters” by J.W. Cross, the much younger man she married after Lewes’s death. Gladstone called it “a Reticence in three volumes.” The three volumes helped spark, if they didn’t cause, the long reaction against the Victorian sages generally that culminated in the dismissively satirical work of the Bloomsbury biographer and critic Lytton Strachey in his immensely influential Eminent Victorians (1918). Strachey’s mistreatment of his forebears was, with regard to George Eliot at least, tempered almost immediately by Virginia Woolf. It was Woolf who in 1919 provocatively called Middlemarch “one of the few English novels written for grown-up people.” Eventually, the critical tide against George Eliot was decisively reversed in the ’40s by Joan Bennett and Leavis, who made the inarguable case for her genuine and lasting achievement. That period of correction culminated in the 1960s with Gordon S. Haight’s biography and with interpretive studies by Barbara Hardy and W.J. Harvey. Books on George Eliot over the last four decades have largely been written by specialists for specialists—on her manuscripts or working notes, and on her affiliations with the scientists, social historians, and competing novelists of her day.
The same is true, only more so, of the books written, with George Eliot as the ostensible subject, to promote deconstructionist or feminist agendas. Biographies have done a better job appealing to the common reader, not least because the woman’s own story is inherently compelling. The question right now is whether a book combining biographical and interpretive insight—one “pitched,” as publishers like to say, not just at experts but at the common reader—is past praying for.
Philip Davis, a Victorian scholar and an editor at Oxford University Press, hopes not. His The Transferred Life of George Eliot—transferred, that is, from her own experience into her letters, journals, essays, and novels, and beyond them into us—deserves serious attention. Davis is conscious that George Eliot called biographies of writers “a disease of English literature,” both overeager to discover scandals and too inclined to substitute day-to-day travels, relationships, dealings with publishers and so on, for critical attention to the books those writers wrote. Davis therefore devotes himself to George Eliot’s writing. Alas, he presumes rather too much knowledge on the reader’s part of the day-to-day as charted in Haight’s marvelous life. (A year-by-year chronology at the front of the book would have helped even his fellow Victorianists.)
As for George Eliot’s writing, Davis is determined to refute “what has been more or less said . . . in the schools of theory for the last 40 years—that 19th-century realism is conservatively bland and unimaginative, bourgeois and parochial, not truly art at all.” His argument for the richness, breadth, and art of George Eliot’s realism—her factual and sympathetic depiction of poor and middling people, without omitting a candid representation of the rich—is most convincing. What looms largest, though, is the realist, the woman herself—the Mary Ann Evans who, from the letters to the novels, became first Marian Evans the translator and essayist and then later “her own greatest character”: George Eliot the novelist. Davis insists that “the meaning of that person”—not merely the voice of her omniscient narrators but the omnipresent imagination that created the whole show—“has not yet exhausted its influence nor the larger future life she should have had, and may still have, in the world.”
The transference of George Eliot’s experience into her fiction is unquestionable: In The Mill on the Floss, for example, Mary Ann is Maggie, and her brother Isaac is Tom Tulliver. Davis knows that a better word might be transmutation, as George Eliot had, in Henry James’s words, “a mind possessed,” for “the creations which brought her renown were of the incalculable kind, shaped themselves in mystery, in some intellectual back-shop or secret crucible, and were as little as possible implied in the aspect of her life.” No data-accumulating biographer, even the most exhaustive, can account for that “incalculable . . . mystery.”
Which is why Davis, like a good teacher, gives us exercises in “close reading.” He pauses to consider how a George Eliot sentence balances or turns on an easy-to-skip-over word or phrase—the balance or turn often representing a moment when the novelist looks at what’s on the underside of the cards.
George Eliot’s style is subtle because her theme is subtle. Take D.H. Lawrence’s favorite heroine, the adolescent Maggie Tulliver. The external event in The Mill on the Floss may be the girl’s impulsive cutting off of her unruly hair to spite her nagging aunts, or the young woman’s drifting down the river with a superficially attractive but truly impossible boyfriend. But the real “action” is Maggie’s internal self-blame and self-assertion. No Victorian novelist was better than George Eliot at tracing the psychological development of, say, a husband and wife who realize they married each other for shallow reasons, are unhappy, and now must deal with the ordinary necessities of balancing the domestic budget—Lydgate and Rosamond in Middlemarch—or, in the same novel, the religiously inclined Dorothea’s mistaken marriage to the old scholar Casaubon. That mistake precipitates not merely disenchantment and an unconscious longing for love with someone else, but (very finely) a quest for a religious explanation of and guide through her quandary.
It’s the religio-philosophical side of George Eliot about which Davis is strongest—and weakest. Her central theological idea, if one may simplify, was that the God of the Bible didn’t exist “out there” but was a projection of the imagination of the people who wrote it. Jesus wasn’t, in Davis’s characterization of her view, “the impervious divine, but [a man who] shed tears and suffered,” and died feeling forsaken. “This deep acceptance of so-called weakness was what most moved Marian Evans in her Christian inheritance. It was what God was for.” That is, the character of Jesus, and the dramatic play between him and his Father, expressed the human emotions we and George Eliot are all too familiar with. The story helps reconcile us to what is, finally, inescapable suffering.
George Eliot came to this demythologized understanding not only of Judaism and Christianity but of all religions through her contact first with a group of intellectuals who lived near Coventry, then with two Germans she translated: David Friedrich Strauss, whose 1,500-page Life of Jesus Critically Examined (1835–36) was for her a slog, and Ludwig Feuerbach, whose Essence of Christianity (1841) was for her a joy. Also, in the search for the universal morality that Strauss and Feuerbach believed Judaism and Christianity expressed mythically, there was Spinoza’s utterly non-mythical Ethics (1677). It was seminal for her—offering, as Davis says, “the intellectual origin for freethinking criticism of the Bible and for the replacement of religious superstition and dogmatic theology by pure philosophic reason.” She translated it into English, though her version did not appear until 1981.
I wish Davis had left it there, but he takes it too far. He devotes more than 40 pages—a tenth of the whole book—to her three translations, taking them as a mother lode of ideational gold whose tailings glitter throughout her fiction. These 40 pages are followed by 21 devoted to Herbert Spencer, the Victorian hawker of theories-of-everything (his 10-volume System of Synthetic Philosophy addresses biology, psychology, sociology, and ethics). She threw herself at the feet of this intellectual huckster, and though he rebuffed her painfully amorous entreaties, she never ceased revering him. Alas, Spencer was a stick—the kind of philosopher who was incapable of emotion. And she was his intellectual superior in every way. The chapter is largely unnecessary.
The book comes back to life when Davis turns to George Henry Lewes, the man who gave Mary Ann Evans the confidence to become George Eliot—perhaps the greatest act of loving mentorship in all of literature. Like many prominent Victorians, Lewes dabbled in all the arts and sciences, publishing highly readable accounts of them for a general audience. His range was as wide as Spencer’s, but his personality and writing had an irrepressible verve that Spencer could only have envied. Lewes was a sort of Stephen Jay Gould yoked to Daniel Boorstin, popularizing other people’s findings and concepts, and coming up with a few of his own. He regarded his Sea-Side Studies (1860) as “the book . . . which was to me the most unalloyed delight,” not least because Marian, whom he called Polly, had helped gather the data. She told a friend, “There is so much happiness condensed in it! Such scrambles over rocks, and peeping into clear pool [sic], and strolls along the pure sands, and fresh air mingling with fresh thoughts.” In his remarkably intelligent 1864 biography of Goethe, Lewes remarks that the poet “knew little of the companionship of two souls striving in emulous spirit of loving rivalry to become better, to become wiser, teaching each other to soar.” Such a companionship Lewes and George Eliot had in spades, and some of Davis’s best passages describe it.
Regrettably, Davis also offers many passages well below the standard of his best—needlessly repeating an already established point or obfuscating the obvious. Still, The Transferred Life of George Eliot is the most formidably instructive, and certainly the most complete, life-and-works treatment of George Eliot we have.