The United States is once again locked in a struggle with a deadly global enemy. Herewith a critical, comprehensive guide…
A Note to the Reader
This past spring, when it seemed that everything that could go wrong in Iraq was going wrong, a plague of amnesia began sweeping through the country. Caught up in the particulars with which we were being assaulted 24 hours a day, we seemed to have lost sight of the context in which such details could be measured and understood and related to one another. Small things became large, large things became invisible, and hysteria filled the air.
Since then, of course, and especially after the handover of authority on June 30 to an interim Iraqi government, matters have become more complicated. But the relentless pressure of events, and the continuing onslaught both of details and of their often tendentious or partisan interpretation, have hardly let up at all. It is for this reason that, in what follows, I have tried to step back from the daily barrage and to piece together the story of what this nation has been fighting to accomplish since September 11, 2001.
In doing this, I have drawn freely from my own past writings on the subject, and especially from three articles that appeared in these pages two or more years ago.1 In some instances, I have woven sections of these articles into a new setting; other passages I have adapted and updated.
Telling the story properly has required more than a straight narrative leading from 9/11 to the time of writing. For one thing, I have had to interrupt the narrative repeatedly in order to confront and clear away the many misconceptions, distortions, and outright falsifications that have been perpetrated. In addition, I have had to broaden the perspective so as to make it possible to see why the great struggle into which the United States was plunged by 9/11 can only be understood if we think of it as World War IV.
My hope is that telling the story from this perspective and in these ways will demonstrate that the road we have taken since 9/11 is the only safe course for us to follow. As we proceed along this course, questions will inevitably arise as to whether this or that move was necessary or right; and such questions will breed hesitations and even demands that we withdraw from the field. Some of this happened even in World War II, perhaps the most popular war the United States has ever fought, and much more of it in World War III (that is, the cold war); and now it is happening again, notably with respect to Iraq.
But as I will attempt to show, we are only in the very early stages of what promises to be a very long war, and Iraq is only the second front to have been opened in that war: the second scene, so to speak, of the first act of a five-act play. In World War II and then in World War III, we persisted in spite of impatience, discouragement, and opposition for as long as it took to win, and this is exactly what we have been called upon to do today in World War IV.
For today, no less than in those titanic conflicts, we are up against a truly malignant force in radical Islamism and in the states breeding, sheltering, or financing its terrorist armory. This new enemy has already attacked us on our own soil—a feat neither Nazi Germany nor Soviet Russia ever managed to pull off—and openly announces his intention to hit us again, only this time with weapons of infinitely greater and deadlier power than those used on 9/11. His objective is not merely to murder as many of us as possible and to conquer our land. Like the Nazis and Communists before him, he is dedicated to the destruction of everything good for which America stands. It is this, then, that (to paraphrase George W. Bush and a long string of his predecessors, Republican and Democratic alike) we in our turn, no less than the “greatest generation” of the 1940's and its spiritual progeny of the 1950's and after, have a responsibility to uphold and are privileged to defend.
Out of the Blue
The attack came, both literally and metaphorically, like a bolt out of the blue. Literally, in that the hijacked planes that crashed into the twin towers of the World Trade Center on the morning of September 11, 2001 had been flying in a cloudless sky so blue that it seemed unreal. I happened to be on jury duty that day, in a courthouse only a half-mile from what would soon be known as Ground Zero. Some time after the planes reached their targets, we all poured into the street—just as the second tower collapsed. And this sight, as if it were not hard enough to believe in itself, was made all the more incredible by the perfection of the sky stretching so beautifully over it. I felt as though I had been deposited into a scene in one of those disaster movies being filmed (as they used to say) in glorious Technicolor.
But the attack came out of the blue in a metaphorical sense as well. About a year later, in November 2002, a commission would be set up to investigate how and why such a huge event could have taken us by surprise and whether it might have been prevented. Because the commission's public hearings were not held until the middle of this year's exceptionally poisonous presidential election campaign, they quickly degenerated into an attempt by the Democrats on the panel to demonstrate that the administration of George W. Bush had been given adequate warnings but had failed to act on them.
Reinforcing this attempt was the testimony of Richard A. Clarke, who had been in charge of the counterterrorist operation in the National Security Council under Bill Clinton and then under Bush before resigning in the aftermath of 9/11. What Clarke for all practical purposes did—both at the hearings and in his hot-off-the-press book, Against All Enemies—was to blame Bush, who had been in office for a mere eight months when the attack occurred, while exonerating Clinton, who had spent eight long years doing little of any significance in response to the series of terrorist assaults on American targets in various parts of the world that were launched on his watch.
The point I wish to stress is not that Clarke was exaggerating or lying.2 It is that the attack on 9/11 did indeed come out of the blue in the sense that no one ever took such a possibility seriously enough to figure out what to do about it. Even Clarke, who did stake a dubious claim to prescience, had to admit under questioning by one of the 9/11 commissioners that if all his recommendations had been acted upon, the attack still could not have been prevented. And in its final report, released on July 22 of this year, the commission, while digging up no fewer than ten episodes that with hindsight could be seen as missed “operational opportunities,” thought that these opportunities could not have been acted on effectively enough to frustrate the attack. Indeed not—not, that is, in the real America as it existed at the time: an America in which hobbling constraints had been placed on both the CIA and the FBI; in which a “wall of separation” had been erected to obstruct communication or cooperation between law-enforcement and national-security agents; and in which politicians and the general public alike were still unable and/or unwilling to believe that terrorism might actually represent a genuine threat.
Slightly contradicting itself, the commission said that “the 9/11 attacks were a shock, but they should not have come as a surprise.” Maybe so; and yet there was no one, either in government or out, to whom they did not come as a surprise, either in general or in the particular form they took. The commission also spoke of a “failure of imagination.” Maybe so again; and yet the word “failure” seems inappropriate, implying as it does that success was possible. Surely a failure so widespread deserves to be considered inevitable.
To the New York Times, however, the failure was not at all inevitable. In a front-page editorial disguised as a “report,” the Times credited the commission's final report with finding that “an attack described as unimaginable had in fact been imagined, repeatedly.” But not a shred of the documentary evidence cited by the Times for this categorical statement actually predicted that al Qaeda would hijack commercial airliners and crash them into buildings in New York and Washington. Moreover, all of the evidence, such as it was, came from the 1990's. Nevertheless, the Times “report” contrived to convey the impression that in the fall of 2000 the Bush administration—then not yet in office—had received fair warning of an imminent attack. To bolster this impression, the Times went on to quote from a briefing given to Bush a month before 9/11. But the document in question was vague about details, and in any case was only one of many intelligence briefings with no special claim to credibility over conflicting assessments.
Thus the Bush administration, which had just been excoriated in hearings held by the Senate Intelligence Committee for having invaded Iraq on the basis of faulty intelligence, was now excoriated by some of the 9/11 commissioners for not having acted on the basis of even sketchier intelligence to head off 9/11 itself. This contradiction elicited a mordant comment from Charles Hill, a former government official who had been a regular “consumer” of intelligence:
Intelligence collection and analysis is a very imperfect business. Refusal to face this reality has produced the almost laughable contradiction of the Senate Intelligence Committee criticizing the Bush administration for acting on third-rate intelligence, even as the 9/11 commission criticizes it for not acting on third-rate intelligence.3
However, the point I most wish to stress is that there was something unwholesome, not to say unholy, about the recriminations on this issue that befouled the commission's public hearings and some of the interim reports by the staff. It therefore came, so to speak, both as a shock and as a surprise that this same unholy spirit was almost entirely exorcised from the final report. In the end the commission agreed that no American President and no American policy could be held responsible in any degree for the aggression against the United States unleashed on 9/11.
Amen to that. For the plain truth is that the sole and entire responsibility rests with al Qaeda, along with the regimes that provided it with protection and support. Furthermore, to the extent that American passivity and inaction opened the door to 9/11, neither Democrats nor Republicans, and neither liberals nor conservatives, are in a position to derive any partisan or ideological advantage. The reason, quite simply, is that much the same methods for dealing with terrorism were employed by the administrations of both parties, stretching as far back as Richard Nixon in 1970 and proceeding through Gerald Ford, Jimmy Carter, Ronald Reagan (yes, Ronald Reagan), George H.W. Bush, Bill Clinton, and right up to the pre-9/11 George W. Bush.
A “Paper Tiger”
The record speaks dismally for itself. From 1970 to 1975, during the administrations of Nixon and Ford, several American diplomats were murdered in Sudan and Lebanon while others were kidnapped. The perpetrators were all agents of one or another faction of the Palestine Liberation Organization (PLO). In Israel, too, many American citizens were killed by the PLO, though, except for the rockets fired at our embassy and other American facilities in Beirut by the Popular Front for the Liberation of Palestine (PFLP), these attacks were not directly aimed at the United States. In any case, there were no American military reprisals.
Our diplomats, then, were for some years already being murdered with impunity by Muslim terrorists when, in 1979, with Carter now in the White House, Iranian students—with either the advance or subsequent blessing of the country's clerical ruler, Ayatollah Khomeini—broke into the American embassy in Tehran and seized 52 Americans as hostages. For a full five months, Carter dithered. At last, steeling himself, he authorized a military rescue operation which had to be aborted after a series of mishaps that would have fit well into a Marx Brothers movie like Duck Soup if they had not been more humiliating than comic. After 444 days, and just hours after Reagan's inauguration in January 1981, the hostages were finally released by the Iranians, evidently because they feared that the hawkish new President might actually launch a military strike against them.
Yet if they could have foreseen what was coming under Reagan, they would not have been so fearful. In April 1983, Hizbullah—an Islamic terrorist organization nourished by Iran and Syria—sent a suicide bomber to explode his truck in front of the American embassy in Beirut, Lebanon. Sixty-three employees, among them the Middle East CIA director, were killed and another 120 wounded. But Reagan sat still.
Six months later, in October 1983, another Hizbullah suicide bomber blew up an American barracks in the Beirut airport, killing 241 U.S. Marines in their sleep and wounding another 81. This time Reagan signed off on plans for a retaliatory blow, but he then allowed his Secretary of Defense, Caspar Weinberger, to cancel it (because it might damage our relations with the Arab world, of which Weinberger was always tenderly solicitous). Shortly thereafter, the President pulled the Marines out of Lebanon.
Having cut and run in Lebanon in October, Reagan again remained passive in December, when the American embassy in Kuwait was bombed. Nor did he hit back when, hard upon the withdrawal of the American Marines from Beirut, the CIA station chief there, William Buckley, was kidnapped by Hizbullah and then murdered. Buckley was the fourth American to be kidnapped in Beirut, and many more suffered the same fate between 1982 and 1992 (though not all died or were killed in captivity).
These kidnappings were apparently what led Reagan, who had sworn that he would never negotiate with terrorists, to make an unacknowledged deal with Iran, involving the trading of arms for hostages. But whereas the Iranians were paid off handsomely in the coin of nearly 1,500 antitank missiles (some of them sent at our request through Israel), all we got in exchange were three American hostages—not to mention the disruptive and damaging Iran-contra scandal.
In September 1984, six months after the kidnapping of Buckley, the U.S. embassy annex near Beirut was hit by yet another truck bomb (also traced to Hizbullah). Again Reagan sat still. Or rather, after giving the green light to covert proxy retaliations by Lebanese intelligence agents, he put a stop to them when one such operation, directed against the cleric thought to be the head of Hizbullah, failed to get its main target while unintentionally killing 80 other people.
It took only another two months for Hizbullah to strike once more. In December 1984, a Kuwaiti airliner was hijacked and two American passengers employed by the U.S. Agency for International Development were murdered. The Iranians, who had stormed the plane after it landed in Tehran, promised to try the hijackers themselves, but instead allowed them to leave the country. At this point, all the Reagan administration could come up with was the offer of a $250,000 reward for information that might lead to the arrest of the hijackers. There were no takers.
The following June, Hizbullah operatives hijacked still another airliner, an American one (TWA flight 847), and then forced it to fly to Beirut, where it was held for more than two weeks. During those weeks, a U.S. Navy diver aboard the plane was shot, and his body was ignominiously hurled onto the tarmac. In exchange for the release of the other passengers, the hijackers were rewarded with the freeing of hundreds of terrorists held by Israel. Both the United States and Israel denied that they were violating their own policy of never bargaining with terrorists, but as with the arms-for-hostages deal, and with equally good reason, no one believed them, and it was almost universally assumed that Israel had acted under pressure from Washington. Later, four of the hijackers were caught but only one wound up being tried and jailed (by Germany, not the United States).
The sickening beat went on. In October 1985, the Achille Lauro, an Italian cruise ship, was hijacked by a group under the leadership of the PLO's Abu Abbas, working with the support of Libya. One of the hijackers threw an elderly wheelchair-bound American passenger, Leon Klinghoffer, overboard. When the hijackers attempted to escape in a plane, the United States sent Navy fighters to intercept it and force it down. Klinghoffer's murderer was eventually apprehended and sent to prison in Italy, but the Italian authorities let Abu Abbas himself go. Washington—evidently having exhausted its repertoire of military reprisals—now confined itself to protesting the release of Abu Abbas. To no avail.
Libya's involvement in the Achille Lauro hijacking was, though, the last free pass that country's dictator, Muammar Qaddafi, was destined to get from the United States under Reagan. In December 1985, five Americans were among the 20 people killed when the Rome and Vienna airports were bombed, and then in April 1986 another bomb exploded in a discotheque in West Berlin that was a hangout for American servicemen. U.S. intelligence tied Libya to both of these bombings, and the eventual outcome was an American air attack in which one of the residences of Qaddafi was hit.
In retaliation, the Palestinian terrorist Abu Nidal executed three U.S. citizens who worked at the American University in Beirut. But Qaddafi himself—no doubt surprised and shaken by the American reprisal—went into a brief period of retirement as a sponsor of terrorism. So far as we know, it took nearly three years (until December 1988) before he could pull himself together to the point of undertaking another operation: the bombing of Pan Am flight 103 over Lockerbie, Scotland, in which a total of 270 people lost their lives. Of the two Libyan intelligence agents who were tried for planting the bomb, one was convicted (though not until the year 2001) and the other acquitted. Qaddafi himself suffered no further punishment from American warplanes.
In January 1989, Reagan was succeeded by the elder George Bush, who, in handling the fallout from the destruction of Pan Am 103, was content to adopt the approach to terrorism taken by all his predecessors. During the elder Bush's four-year period in the White House, there were several attacks on Americans in Turkey by Islamic terrorist organizations, and there were others in Egypt, Saudi Arabia, and Lebanon. None of these was as bloody as previous incidents, and none provoked any military response from the United States.
In January 1993, Bill Clinton became President. Over the span of his two terms in office, American citizens continued to be injured or killed in Israel and other countries by terrorists who were not aiming specifically at the United States. But several spectacular terrorist operations occurred on Clinton's watch of which the U.S. was most emphatically the target.
The first, on February 26, 1993, only 38 days after his inauguration, was the explosion of a truck bomb in the parking garage of the World Trade Center in New York. As compared with what would happen on September 11, 2001, this was a minor incident in which “only” six people were killed and over 1,000 injured. The six Muslim terrorists responsible were caught, tried, convicted, and sent to prison for long terms.
But in following the by-now traditional pattern of treating such attacks as common crimes, or the work of rogue groups acting on their own, the Clinton administration willfully turned a deaf ear to outside experts like Steven Emerson and even the director of the CIA, R. James Woolsey, who strongly suspected that behind the individual culprits was a terrorist Islamic network with (at that time) its headquarters in Sudan. This network, then scarcely known to the general public, was called al Qaeda, and its leader was a former Saudi national who had fought on our side against the Soviets in Afghanistan but had since turned against us as fiercely as he had been against the Russians. His name was Osama bin Laden.
The next major episode was not long in trailing the bombing of the World Trade Center. In April 1993, less than two months after that attack, former President Bush visited Kuwait, where an attempt was made to assassinate him by—as our own investigators were able to determine—Iraqi intelligence agents. The Clinton administration spent two more months seeking approval from the UN and the “international community” to retaliate for this egregious assault on the United States. In the end, a few cruise missiles were fired into the Iraqi capital of Baghdad, where they fell harmlessly onto empty buildings in the middle of the night.
In the years immediately ahead, there were many Islamic terrorist operations (in Turkey, Pakistan, Saudi Arabia, Lebanon, Yemen, and Israel) that were not specifically aimed at the United States but in which Americans were nevertheless murdered or kidnapped. In March 1995, however, a van belonging to the U.S. consulate in Karachi, Pakistan, was hit by gunfire, killing two American diplomats and injuring a third. In November of the same year, five Americans died when a car bomb exploded in Riyadh, Saudi Arabia, near a building in which a U.S. military advisory group lived.
All this was trumped in June 1996 when another building in which American military personnel lived—the Khobar Towers in Dhahran, Saudi Arabia—was blasted by a truck bomb. Nineteen of our airmen were killed, and 240 other Americans on the premises were wounded.
In 1993, Clinton had been so intent on treating the World Trade Center bombing as a common crime that for some time afterward he refused even to meet with his own CIA director. Perhaps he anticipated that he would be told things by Woolsey—about terrorist networks and the states sponsoring them—that he did not wish to hear, because he had no intention of embarking on the military action that such knowledge might force upon him. Now, in the wake of the bombing of the Khobar Towers, Clinton again handed the matter over to the police; but the man in charge, his FBI director, Louis Freeh, who had intimations of an Iranian connection, could no more get through to him than Woolsey had before him. There were a few arrests, and the action then moved into the courts.
In June 1998, grenades were unsuccessfully hurled at the U.S. embassy in Beirut. A little later, our embassies in the capitals of Kenya (Nairobi) and Tanzania (Dar es Salaam) were not so lucky. On a single day—August 7, 1998—car bombs went off in both places, leaving more than 200 people dead, of whom twelve were Americans. Credit for this coordinated operation was claimed by al Qaeda. In what, whether fairly or not, was widely interpreted, especially abroad, as a move to distract attention from his legal troubles over the Monica Lewinsky affair, Clinton fired cruise missiles at an al Qaeda training camp in Afghanistan, where bin Laden was supposed to be at that moment, and at a building in Sudan, where al Qaeda also had a base. But bin Laden escaped harm, while it remained uncertain whether the targeted factory in Sudan was actually manufacturing chemical weapons or was just a normal pharmaceutical plant.
This fiasco—so we have learned from former members of his administration—discouraged any further such action by Clinton against bin Laden, though we have also learned from various sources that he did authorize a number of covert counterterrorist operations and diplomatic initiatives leading to arrests in foreign countries. But according to Dick Morris, who was then Clinton's political adviser:
The weekly strategy meetings at the White House throughout 1995 and 1996 featured an escalating drumbeat of advice to President Clinton to take decisive steps to crack down on terrorism. The polls gave these ideas a green light. But Clinton hesitated and failed to act, always finding a reason why some other concern was more important.
In the period after Morris left, more began going on behind the scenes, but most of it remained in the realm of talk or planning that went nowhere. In contrast to the flattering picture of Clinton that Richard Clarke would subsequently draw, Woolsey (who after a brief tenure resigned from the CIA out of sheer frustration) would offer a devastating retrospective summary of the President's overall approach:
Do something to show you're concerned. Launch a few missiles in the desert, bop them on the head, arrest a few people. But just keep kicking the ball down field.
Bin Laden, picking up that ball on October 12, 2000, when the destroyer USS Cole had docked for refueling in Yemen, dispatched a team of suicide bombers. The bombers did not succeed in sinking the ship, but they inflicted severe damage upon it, while managing to kill seventeen American sailors and wounding another 39.
Clarke, along with a few intelligence analysts, had no doubt that the culprit was al Qaeda. But the heads neither of the CIA nor of the FBI thought the case was conclusive. Hence the United States did not so much as lift a military finger against bin Laden or the Taliban regime in Afghanistan, where he was now ensconced and being protected. As for Clinton, so obsessively was he then wrapped up in a futile attempt to broker a deal between the Israelis and the Palestinians that all he could see in this attack on an American warship was an effort “to deter us from our mission of promoting peace and security in the Middle East.” The terrorists, he resoundingly vowed, would “fail utterly” in this objective.
Never mind that not the slightest indication existed that bin Laden was in the least concerned over Clinton's negotiations with the Israelis and the Palestinians at Camp David, or even that the Palestinian issue was of primary importance to him as compared with other grievances. In any event, it was Clinton who failed, not bin Laden. The Palestinians under Yasir Arafat, spurning an unprecedentedly generous offer that had been made by the Israeli prime minister Ehud Barak with Clinton's enthusiastic endorsement, unleashed a new round of terrorism. And bin Laden would soon succeed all too well in his actual intention of striking another brazen blow at the United States.
The sheer audacity of what bin Laden went on to do on September 11 was unquestionably a product of his contempt for American power. Our persistent refusal for so long to use that power against him and his terrorist brethren—or to do so effectively whenever we tried—reinforced his conviction that we were a nation on the way down, destined to be defeated by the resurgence of the same Islamic militancy that had once conquered and converted large parts of the world by the sword.
As bin Laden saw it, thousands or even millions of his followers and sympathizers all over the Muslim world were willing, and even eager, to die a martyr's death in the jihad, the holy war, against the “Great Satan,” as the Ayatollah Khomeini had called us. But, in bin Laden's view, we in the West, and especially in America, were all so afraid to die that we lacked the will even to stand up for ourselves and defend our degenerate way of life.
Bin Laden was never reticent or coy in laying out this assessment of the United States. In an interview on CNN in 1997, he declared that “the myth of the superpower was destroyed not only in my mind but also in the minds of all Muslims” when the Soviet Union was defeated in Afghanistan. That the Muslim fighters in Afghanistan would almost certainly have failed if not for the arms supplied to them by the United States did not seem to enter into the lesson he drew from the Soviet defeat. In fact, in an interview a year earlier he had belittled the United States as compared with the Soviet Union. “The Russian soldier is more courageous and patient than the U.S. soldier,” he said then. Hence, “Our battle with the United States is easy compared with the battles in which we engaged in Afghanistan.”
Becoming still more explicit, bin Laden wrote off the Americans as cowards. Had Reagan not taken to his heels in Lebanon after the bombing of the Marine barracks in 1983? And had not Clinton done the same a decade later when only a few American Rangers were killed in Somalia, where they had been sent to participate in a “peacekeeping” mission? Bin Laden did not boast of this as one of his victories, but a State Department dossier charged that al Qaeda had trained the terrorists who ambushed the American servicemen. (The ugly story of what happened to us in Somalia was told in the film version of Mark Bowden's Black Hawk Down, which reportedly became Saddam Hussein's favorite movie.)
Bin Laden summed it all up in a third interview he gave in 1998:
After leaving Afghanistan the Muslim fighters headed for Somalia and prepared for a long battle thinking that the Americans were like the Russians. The youth were surprised at the low morale of the American soldiers and realized, more than before, that the American soldier was a paper tiger and after a few blows ran in defeat.
Bin Laden was not the first enemy of a democratic regime to have been emboldened by such impressions. In the 1930's, Adolf Hitler was convinced by the failure of the British to arm themselves against the threat he posed, as well as by the policy of appeasement they adopted toward him, that they were decadent and would never fight no matter how many countries he invaded.
Similarly with Joseph Stalin in the immediate aftermath of World War II. Encouraged by the rapid demobilization of the United States, which to him meant that we were unprepared and unwilling to resist him with military force, Stalin broke the pledges he had made at Yalta to hold free elections in the countries of Eastern Europe he had occupied at the end of the war. Instead, he consolidated his hold over those countries, and made menacing gestures toward Greece and Turkey.
After Stalin's death, his successors repeatedly played the same game whenever they sensed a weakening of the American resolve to hold them back. Sometimes this took the form of maneuvers aimed at establishing a balance of military power in their favor. Sometimes it took the form of using local Communist parties or other proxies as their instrument. But thanks to the decline of American power following our withdrawal from Vietnam—a decline reflected in the spread during the late 1970's of isolationist and pacifist sentiment, which was in turn reflected in severely reduced military spending—Leonid Brezhnev felt safe in sending his own troops into Afghanistan in 1979.
It was the same decline of American power, so uncannily personified by Jimmy Carter, that, less than two months before the Soviet invasion of Afghanistan, had emboldened the Ayatollah Khomeini to seize and hold American hostages. To be sure, there were those who denied that this daring action had anything to do with Khomeini's belief that the United States under Carter had become impotent. But this denial was impossible to sustain in the face of the contrast between the attack on our embassy in Tehran and the protection the Khomeini regime extended to the Soviet embassy there when a group of protesters tried to storm it after the invasion of Afghanistan. The radical Muslim fundamentalists ruling Iran hated Communism and the Soviet Union at least as much as they hated us—especially now that the Soviets had invaded a Muslim country. Therefore the difference in Khomeini's treatment of the two embassies could not be explained by ideological or political factors. What could and did explain it was his fear of Soviet retaliation as against his expectation that the United States, having lost its nerve, would go to any lengths to avoid the use of force.
And so it was with Saddam Hussein. In 1990, with the first George Bush sitting in the White House, Saddam Hussein invaded Kuwait in what was widely, and accurately, seen as a first step in a bid to seize control of the oil fields of the Middle East. The elder Bush, fortified by the determination of Margaret Thatcher, who was then prime minister of Britain, declared that the invasion would not stand, and he put together a coalition that sent a great military force into the region. This alone might well have frightened Saddam Hussein into pulling out of Kuwait if not for the wave of hysteria in the United States about the tens of thousands of “body bags” that it was predicted would be flown home if we actually went to war with Iraq. Not unreasonably, Saddam concluded that, if he held firm, it was we who would blink and back down.
The fact that Saddam miscalculated, and that in the end we made good on our threat, did not overly impress Osama bin Laden. After all—dreading the casualties we would suffer if we went into Baghdad after liberating Kuwait and defeating the Iraqi army on the battlefield—we had allowed Saddam to remain in power. To bin Laden, this could only have looked like further evidence of the weakness we had shown in the ineffectual policy toward terrorism adopted by a long string of American Presidents. No wonder he was persuaded that he could strike us massively on our own soil and get away with it.
Yet just as Saddam had miscalculated in 1990-91, and would again in 2002, bin Laden misread how the Americans would react to being hit where, literally, they lived. In all likelihood he expected a collapse into despair and demoralization; what he elicited instead was an outpouring of rage and an upsurge of patriotic sentiment such as younger Americans had never witnessed except in the movies, and had most assuredly never experienced in their own hearts and souls, or, for those who enlisted in the military, on their own flesh.
In that sense, bin Laden did for this country what the Ayatollah Khomeini had done before him. In seizing the American hostages in 1979, and escaping retaliation, Khomeini inflicted a great humiliation on the United States. But at the same time, he also exposed the foolishness of Jimmy Carter's view of the world. The foolishness did not lie in Carter's recognition that American power—military, economic, political, and moral—had been on a steep decline at least since Vietnam. This was all too true. What was foolish was the conclusion Carter drew from it. Rather than proposing policies aimed at halting and then reversing the decline, he took the position that the cause was the play of historical forces we could do nothing to stop or even slow down. As he saw it, instead of complaining or flailing about in a vain and dangerous effort to recapture our lost place in the sun, we needed first to acknowledge, accept, and adjust to this inexorable historical development, and then to act upon it with “mature restraint.”
In one fell swoop, the Ayatollah Khomeini made nonsense of Carter's delusionary philosophy in the eyes of very large numbers of Americans, including many who had previously entertained it. Correlatively, new heart was given to those who, rejecting the idea that American decline was inevitable, had argued that the cause was bad policies and that the decline could be turned around by returning to the better policies that had made us so powerful in the first place.
The entire episode thereby became one of the forces behind an already burgeoning determination to rebuild American power that culminated in the election of Ronald Reagan, who had campaigned on the promise to do just that. For all the shortcomings of his own handling of terrorism, Reagan did in fact keep his promise to rebuild American power. And it was this that set the stage for victory in the multifaceted cold war we had been waging since 1947, when the United States under President Harry Truman (aroused by Stalin's miscalculation) decided to resist any further advance of the Soviet empire.
Few, if any, of Truman's contemporaries would have dreamed that this product of a Kansas City political machine, who as a reputedly run-of-the-mill U.S. Senator had spent most of his time on taxes and railroads, would rise so resolutely and so brilliantly to the threat represented by Soviet imperialism. Just so, 54 years later in 2001, another politician with a small reputation and little previous interest in foreign affairs would be confronted with a challenge perhaps even greater than the one faced by Truman; and he too astonished his own contemporaries by the way he rose to it.
Enter the Bush Doctrine
In “The Sources of Soviet Conduct” (1947), the theoretical defense he constructed of the strategy Truman adopted for fighting the war ahead, George F. Kennan (then the director of the State Department's policy planning staff, and writing under the pseudonym “X”) described that strategy as
a long-term, patient but firm and vigilant containment of Russian expansive tendencies . . . by the adroit and vigilant application of counterforce at a series of constantly shifting geographical and political points.
In other words (though Kennan himself did not use those words), we were faced with the prospect of nothing less than another world war; and (though in later years, against the plain sense of the words that he himself did use, he tried to claim that the “counterforce” he had in mind was not military) it would not be an entirely “cold” one, either. Before it was over, more than 100,000 Americans would die on the far-off battlefields of Korea and Vietnam, and the blood of many others allied with us in the political and ideological struggle against the Soviet Union would be spilled on those same battlefields, and in many other places as well.
For these reasons, I agree with one of our leading contemporary students of military strategy, Eliot A. Cohen, who thinks that what is generally called the “cold war” (a term, incidentally, coined by Soviet propagandists) should be given a new name. “The cold war,” Cohen writes, was actually “World War III, which reminds us that not all global conflicts entail the movement of multimillion-man armies, or conventional front lines on a map.” I also agree that the nature of the conflict in which we are now engaged can only be fully appreciated if we look upon it as World War IV. To justify giving it this name—rather than, say, the “war on terrorism”—Cohen lists “some key features” that it shares with World War III:
that it is, in fact, global; that it will involve a mixture of violent and nonviolent efforts; that it will require mobilization of skill, expertise, and resources, if not of vast numbers of soldiers; that it may go on for a long time; and that it has ideological roots.
There is one more feature that World War IV shares with World War III and that Cohen does not mention: both were declared through the enunciation of a presidential doctrine.
The Truman Doctrine of 1947 was born with the announcement that “it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.” Beginning with a special program of aid to Greece and Turkey, which were then threatened by Communist takeovers, the strategy was broadened within a few months by the launching of a much larger and more significant program of economic aid that came to be called the Marshall Plan. The purpose of the Marshall Plan was to hasten the reconstruction of the war-torn economies of Western Europe: not only because this was a good thing in itself, and not only because it would serve American interests, but also because it could help eliminate the grievances on which Communism fed. But then came a Communist coup in Czechoslovakia. Following as it had upon the installation by the Soviet Union of puppet regimes in the occupied countries of Eastern Europe, the Czech coup demonstrated that economic measures would not be enough by themselves to ward off a comparable danger posed to Italy and France by huge local Communist parties entirely subservient to Moscow. Out of this realization—and out of a parallel worry about an actual Soviet invasion of Western Europe—there emerged the North Atlantic Treaty Organization (NATO).
Containment, then, was a three-sided strategy made up of economic, political, and military components. All three would be deployed in a shifting relative balance over the four decades it took to win World War III.4
If the Truman Doctrine unfolded gradually, revealing its entire meaning only in stages, the Bush Doctrine was pretty fully enunciated in a single speech, delivered to a joint session of Congress on September 20, 2001. It was then clarified and elaborated in three subsequent statements: Bush's first State of the Union address on January 29, 2002; his speech to the graduating class of the U.S. Military Academy at West Point on June 1, 2002; and the remarks on the Middle East he delivered three weeks later, on June 24. This difference aside, his contemporaries were at least as startled as Truman's had been, both by the substance of the new doctrine and by the transformation it bespoke in its author. For here was George W. Bush, who in foreign affairs had been a more or less passive disciple of his father, talking for all the world like a fiery follower of Ronald Reagan.
In sharp contrast to Reagan, generally considered a dangerous ideologue, the first President Bush—who had been Reagan's Vice President and had then succeeded him in the White House—was often accused of being deficient in what he himself inelegantly dismissed as “the vision thing.” The charge was fair in that the elder Bush had no guiding sense of what role the United States might play in reshaping the post-cold-war world. A strong adherent of the “realist” perspective on world affairs, he believed that the maintenance of stability was the proper purpose of American foreign policy, and the only wise and prudent course to follow. Therefore, when Saddam Hussein upset the balance of power in the Middle East by invading Kuwait in 1990, the elder Bush went to war not to create a new configuration in the region but to restore the status quo ante. And it was precisely out of the same overriding concern for stability that, having achieved this objective by driving Saddam out of Kuwait, Bush then allowed him to remain in power.
As for the second President Bush, before 9/11 he was, to all appearances, as deficient in the “vision thing” as his father before him. If he entertained any doubts about the soundness of the “realist” approach, he showed no sign of it. Nothing he said or did gave any indication that he might be dissatisfied with the idea that his main job in foreign affairs was to keep things on an even keel. Nor was there any visible indication that he might be drawn to Ronald Reagan's more “idealistic” ambition to change the world, especially with the “Wilsonian” aim of making it “safe for democracy” by encouraging the spread to as many other countries as possible of the liberties we Americans enjoyed.
Which is why Bush's address of September 20, 2001 came as so great a surprise. Delivered only nine days after the attacks on the World Trade Center and the Pentagon, and officially declaring that the United States was now at war, the September 20 speech put this nation, and all others, on notice that whether or not George W. Bush had been a strictly conventional realist in the mold of his father, he was now politically born again as a passionate democratic idealist of the Reaganite stamp.
It was also this speech that marked the emergence of the Bush Doctrine, and that pointed just as clearly to World War IV as the Truman Doctrine had to World War III. Bush did not explicitly give the name World War IV to the struggle ahead, but he did characterize it as a direct successor to the two world wars that had immediately preceded it. Thus, of the “global terrorist network” that had attacked us on our own soil, he said:
We have seen their kind before. They're the heirs of all the murderous ideologies of the 20th century. By sacrificing human life to serve their radical visions, by abandoning every value except the will to power, they follow in the path of fascism, Nazism, and totalitarianism. And they will follow that path all the way to where it ends in history's unmarked grave of discarded lies.
As this passage, coming toward the beginning of the speech, linked the Bush Doctrine to the Truman Doctrine and to the great struggle led by Franklin D. Roosevelt before it, the wind-up section demonstrated that if the second President Bush had previously lacked “the vision thing,” his eyes were blazing with it now. “Great harm has been done to us,” he intoned toward the end. “We have suffered great loss. And in our grief and anger we have found our mission and our moment.” Then he went on to spell out the substance of that mission and that moment:
The advance of human freedom, the great achievement of our time and the great hope of every time, now depends on us. Our nation, this generation, will lift the dark threat of violence from our people and our future. We will rally the world to this cause by our efforts, by our courage. We will not tire, we will not falter, and we will not fail.
Finally, in his peroration, drawing on some of the same language he had been applying to the nation as a whole, Bush shifted into the first person, pledging his own commitment to the great mission we were all charged with accomplishing:
I will not forget the wound to our country and those who inflicted it. I will not yield, I will not rest, I will not relent in waging this struggle for freedom and security for the American people. The course of this conflict is not known, yet its outcome is certain. Freedom and fear, justice and cruelty, have always been at war, and we know that God is not neutral between them.
Not even Ronald Reagan, the “Great Communicator” himself, had ever been so eloquent in expressing the “idealistic” impetus behind his conception of the American role in the world.5
This was not the last time Bush would sound these themes. Nearly three years later, at a moment when things seemed to be going badly in the war, it was with the same ideas he had originally put forward on September 20, 2001 that he sought to reassure the nation. The occasion would be a commencement address at the Air Force Academy on June 2, 2004, where he would repeatedly place the “war against terrorism” in direct succession to World War II and World War III. He would also be unusually undiplomatic in making no bones about his rejection of realism:
For decades, free nations tolerated oppression in the Middle East for the sake of stability. In practice, this approach brought little stability and much oppression, so I have changed this policy.
And again, even less diplomatically:
Some who call themselves realists question whether the spread of democracy in the Middle East should be any concern of ours. But the realists in this case have lost contact with a fundamental reality: America has always been less secure when freedom is in retreat; America is always more secure when freedom is on the march.
To top it all off, he would go out of his way to assert that his own policy, which he properly justified in the first place as a better way to protect American interests than the alternative favored by the realists, also bore the stamp of the Reaganite version of Wilsonian idealism:
This conflict will take many turns, with setbacks on the course to victory. Through it all, our confidence comes from one unshakable belief: We believe in Ronald Reagan's words that “the future belongs to the free.”
The first pillar of the Bush Doctrine, then, was built on a repudiation of moral relativism and an entirely unapologetic assertion of the need for and the possibility of moral judgment in the realm of world affairs. And just to make sure that the point he had first made on September 20, 2001 had hit home, Bush returned to it even more outspokenly and in greater detail in the State of the Union address of January 29, 2002.
Bush had won enthusiastic plaudits from many for the “moral clarity” of his September 20 speech, but he had also provoked even greater dismay and disgust among “advanced” thinkers and “sophisticated” commentators and diplomats both at home and abroad. Now he intensified and exacerbated their outrage by becoming more specific. Having spoken in September only in general terms about the enemy in World War IV, Bush proceeded in his second major wartime pronouncement to single out three such nations—Iraq, Iran, and North Korea—which he described as forming an “axis of evil.”
Here again he was following in the footsteps of Ronald Reagan, who had denounced the Soviet Union, our principal enemy in World War III, as an “evil empire,” and who had been answered with a veritably hysterical outcry from chancelleries and campuses and editorial pages all over the world. Evil? What place did a word like that have in the lexicon of international affairs, assuming it would ever occur to an enlightened person to exhume it from the grave of obsolete concepts in any connection whatsoever? But in the eyes of the “experts,” Reagan was not an enlightened person. Instead, he was a “cowboy,” a B-movie actor, who had by some freak of democratic perversity landed in the White House. In denouncing the Soviet empire, he was accused either of signaling an intention to trigger a nuclear war or of being too stupid to understand that his wildly provocative rhetoric might do so inadvertently.
The reaction to Bush was perhaps less hysterical and more scornful than the outcry against Reagan, since this time there was no carrying-on about a nuclear war. But the air was just as thick with the old sneers and jeers. Who but an ignoramus and a simpleton—or a fanatical religious fundamentalist, of the very type on whom Bush was declaring war—would resort to archaic moral absolutes like “good” and “evil”? On the one hand, it was egregiously simple-minded to brand a whole nation as evil, and on the other, only a fool could bring himself to believe, as Bush (once more like Reagan) had evidently done in complete and ingenuous sincerity, that the United States, of all countries, represented the good. Surely only a know-nothing illiterate could be oblivious of the innumerable crimes committed by America both at home and abroad—crimes that the country's own leading intellectuals had so richly documented in the by-now standard academic view of its history.
Here is how Gore Vidal, one of those intellectuals, stated the case:
I mean, to watch Bush doing his little war dance in Congress . . . about “evildoers” and this “axis of evil” . . . I thought, he doesn't even know what the word axis means. Somebody just gave it to him. . . . This is about as mindless a statement as you could make. Then he comes up with about a dozen other countries that have “evil” people in them, who might commit “terrorist acts.” What is a terrorist act? Whatever he thinks is a terrorist act. And we are going to go after them. Because we are good and they are evil. And we're “gonna git 'em.”
This was rougher and cruder than the language issuing from editorial pages and think tanks and foreign ministries and even most other intellectuals, but it was no different from what nearly all of them thought and how many of them talked in private.6
As soon became clear, however, Bush was not deterred. In subsequent statements he continued to uphold the first pillar of his new doctrine and to affirm the universality of the moral purposes animating this new war:
Some worry that it is somehow undiplomatic or impolite to speak the language of right and wrong. I disagree. Different circumstances require different methods, but not different moralities. Moral truth is the same in every culture, in every time, and in every place. . . . We are in a conflict between good and evil, and America will call evil by its name.
Then, in a fascinating leap into the great theoretical debate of the post-cold-war era (though without identifying the main participants), Bush came down squarely on the side of Francis Fukuyama's much-misunderstood view of “the end of history,” according to which the demise of Communism had eliminated the only serious competitor to our own political system7:
The 20th century ended with a single surviving model of human progress, based on non-negotiable demands of human dignity, the rule of law, limits on the power of the state, respect for women and private property and free speech and equal justice and religious tolerance.
Having endorsed Fukuyama, Bush now brushed off the political scientist Samuel Huntington, whose rival theory postulated a “clash of civilizations” arising from the supposedly incompatible values prevailing in different parts of the world:
When it comes to the common rights and needs of men and women, there is no clash of civilizations. The requirements of freedom apply fully to Africa and Latin America and the entire Islamic world. The peoples of the Islamic nations want and deserve the same freedoms and opportunities as people in every nation. And their governments should listen to their hopes.
The Second Pillar
If the first of the four pillars on which the Bush Doctrine stood was a new moral attitude, the second was an equally dramatic shift in the conception of terrorism as it had come to be defined in standard academic and intellectual discourse.
Under this new understanding—confirmed over and over again by the fact that most of the terrorists about whom we were learning came from prosperous families—terrorism was no longer considered a product of economic factors. The “swamps” in which this murderous plague bred were swamps not of poverty and hunger but of political oppression. It was only by “draining” them, through a strategy of “regime change,” that we would be making ourselves safe from the threat of terrorism and simultaneously giving the peoples of “the entire Islamic world” the freedoms “they want and deserve.”
In the new understanding, furthermore, terrorists, with rare exceptions, were not individual psychotics acting on their own but agents of organizations that depended on the sponsorship of various governments. Our aim, therefore, could not be merely to capture or kill Osama bin Laden and wipe out the al Qaeda terrorists under his direct leadership. Bush vowed that we would also uproot and destroy the entire network of interconnected terrorist organizations and cells “with global reach” that existed in as many as 50 or 60 countries. No longer would we treat the members of these groups as criminals to be arrested by the police, read their Miranda rights, and brought to trial. From now on, they were to be regarded as the irregular troops of a military alliance at war with the United States, and indeed the civilized world as a whole.
Not that this analysis of terrorism had exactly been a secret. The State Department itself had a list of seven state sponsors of terrorism (all but two of which, Cuba and North Korea, were predominantly Muslim), and it regularly issued reports on terrorist incidents throughout the world. But aside from such things as the lobbing of a cruise missile or two, diplomatic and/or economic sanctions that were inconsistently and even perfunctorily enforced, and a number of covert operations, the law-enforcement approach still prevailed.
September 11 changed much—if not yet all—of that; still in use were atavistic phrases like “bringing the terrorists to justice.” But no one could any longer dream that the American answer to what had been done to us in New York and Washington would begin with an FBI investigation and end with a series of ordinary criminal trials. War had been declared on the United States, and to war we were going to go.
But against whom? Since it was certain that Osama bin Laden had masterminded September 11, and since he and the top leadership of al Qaeda were holed up in Afghanistan, the first target, and thus the first testing ground of this second pillar of the Bush Doctrine, chose itself.
Before resorting to military force, however, Bush issued an ultimatum to the extreme Islamic radicals of the Taliban who were then ruling Afghanistan. The ultimatum demanded that they turn Osama bin Laden and his people over to us and that they shut down all terrorist training camps there. By rejecting this ultimatum, the Taliban not only asked for an invasion but, under the Bush Doctrine, also asked to be overthrown. And so, on October 7, 2001, the United States—joined by Great Britain and about a dozen other countries—launched a military campaign against both al Qaeda and the regime that was providing it with “aid and safe haven.”
As compared with what would come later, there was relatively little opposition either at home or abroad to the opening of this first front of World War IV. The reason was that the Afghan campaign could easily be justified as a retaliatory strike against the terrorists who had attacked us. And while there was a good deal of murmuring about the dangers of pursuing a policy of “regime change,” there was very little sympathy in practice (outside the Muslim world, that is) for the Taliban.
Whatever opposition was mounted to the battle of Afghanistan mainly took the form of skepticism over the chances of winning it. True, such skepticism was in some quarters a mask for outright opposition to American military power in general. But once the Afghan campaign got under way, the main focus shifted to everything that seemed to be going awry on the battlefield.
For example, only a couple of weeks into the campaign, when there were missteps involving the use of the Afghan fighters of the Northern Alliance, observers like R.W. Apple of the New York Times immediately rushed to conjure up the ghost of Vietnam. This restless spirit, having been called forth from the vasty deep, henceforth refused to be exorcised, and would go on to elbow its way into every detail of the debates over all the early battles of World War IV. On this occasion, its message was that we were falling victim to the illusion that we could rely on an incompetent local force to do the fighting on the ground while we supplied advice and air support. This strategy would inevitably fail, and would suck us into the same “quagmire” into which we had been dragged in Vietnam. After all, as Apple and others argued, the Soviet Union had suffered its own “Vietnam” in Afghanistan—and unlike us, it had not been hampered by the logistical problems of projecting power over a great distance. How could we expect to do better?
When, however, the B-52's and the 15,000-pound “Daisy Cutter” bombs were unleashed, they temporarily banished the ghost of Vietnam and undercut the fears of some and the hopes of others that we were heading into a quagmire. Far from being good for nothing but “pounding the rubble,” as the critics had sarcastically charged, the Daisy Cutters exerted, as even a New York Times report was forced to concede, “a terrifying psychological impact as they exploded just above ground, wiping out everything for hundreds of yards.”
But the Daisy Cutters were only the half of it. As we were all to discover, our "smart-bomb" technology had advanced far beyond the stage it had reached when first introduced in 1991. In Afghanistan in 2001, such bombs—guided by "spotters" on the ground equipped with radios, laptops, and lasers, and often riding on horseback, and also aided by unmanned surveillance drones and other systems in the air—were both incredibly precise in avoiding civilian casualties and absolutely lethal in destroying the enemy. It was this "new kind of American power," added the New York Times report, that "enabled a ragtag opposition" (i.e., the same Northern Alliance supposedly dragging us into a quagmire) to rout the "battle-hardened troops" of the Taliban regime in less than three months, and with the loss of very few American troops.
In the event, Osama bin Laden was not captured and al Qaeda was not totally destroyed. But it was certainly damaged by the campaign in Afghanistan. As for the Taliban regime, it was overthrown and replaced by a government that would no longer give aid and comfort to terrorists. Moreover, while Afghanistan under the new government may not have been exactly democratic, it was infinitely less oppressive than its totalitarian predecessor. And thanks to the clearing of political ground that had been covered over by the radical Islamic extremism of the Taliban, the seeds of free institutions were being sown and given a fighting chance to sprout and grow.
The campaign in Afghanistan demonstrated in the most unmistakable terms what followed from the new understanding of terrorism that formed the second pillar of the Bush Doctrine: countries that gave safe haven to terrorists and refused to clean them out were asking the United States to do it for them, and the regimes ruling these countries were also asking to be overthrown in favor of new leaders with democratic aspirations. Of course, as circumstances permitted and prudence dictated, other instruments of power, whether economic or diplomatic, would be deployed. But Afghanistan showed that the military option was open, available for use, and lethally effective.
The Third Pillar
The third pillar on which the Bush Doctrine rested was the assertion of our right to preempt. Bush had already pretty clearly indicated on September 20, 2001 that he had no intention of waiting around to be attacked again (“We will pursue nations that provide aid or safe haven to terrorism”). But in the State of the Union speech in January 2002, he became much more explicit on this point too:
We'll be deliberate, yet time is not on our side. I will not wait on events, while dangers gather. I will not stand by, as peril draws closer and closer. The United States of America will not permit the world's most dangerous regimes to threaten us with the world's most destructive weapons.
To those with ears to hear, these words should have made it abundantly clear that Bush was now proposing to go beyond the fundamentally retaliatory strike against Afghanistan and to take preemptive action. Yet at first it went largely unnoticed that this right to strike, not in retaliation for but in anticipation of an attack, was a logical extension of the general outline Bush had provided on September 20, and that it had now been stated in the plainest of words on January 29. It was not until the third in the series of major speeches elaborating the Bush Doctrine—the one delivered on June 1, 2002 at West Point to the graduating class of newly commissioned officers of the United States Army—that the message got through at last.
Perhaps the reason the preemption pillar finally became clearly visible at West Point was that, for the first time, Bush placed his new ideas in historical context:
For much of the last century, America's defense relied on the cold-war doctrines of deterrence and containment. In some cases, those strategies still apply. But new threats also require new thinking. Deterrence—the promise of massive retaliation against nations—means nothing against shadowy terrorist networks with no nation or citizens to defend.
This covered al Qaeda and similar groups. But Bush then proceeded to explain, in addition, why the old doctrines could not work with a regime like Saddam Hussein's in Iraq:
Containment is not possible when unbalanced dictators with weapons of mass destruction can deliver those weapons or missiles or secretly provide them to terrorist allies.
Refusing to flinch from the implications of this analysis, Bush repudiated the previously sacred dogmas of arms control and treaties against the proliferation of weapons of mass destruction as a means of dealing with the dangers now facing us from Iraq and other members of the axis of evil:
We cannot defend America and our friends by hoping for the best. We cannot put our faith in the word of tyrants, who solemnly sign nonproliferation treaties, and then systematically break them.
Hence, Bush inexorably continued,
If we wait for threats to fully materialize, we will have waited too long. . . . [T]he war on terror will not be won on the defensive. We must take the battle to the enemy, disrupt his plans, and confront the worst threats before they emerge. In the world we have entered, the only path to safety is the path of action. And this nation will act.
At this early stage, the Bush administration was still denying that it had reached any definite decision about Saddam Hussein; but everyone knew that, in promising to act, Bush was talking about him. The immediate purpose was to topple the Iraqi dictator before he had a chance to supply weapons of mass destruction to the terrorists. But this was by no means the only or—surprising though it would seem in retrospect—even the decisive consideration either for Bush or his supporters (or, for that matter, his opponents).8 And in any case, the long-range strategic rationale went beyond the proximate causes of the invasion. Bush's idea was to extend the enterprise of “draining the swamps” begun in Afghanistan and then to set the entire region on a course toward democratization. For if Afghanistan under the Taliban represented the religious face of Middle Eastern terrorism, Iraq under Saddam Hussein was its most powerful secular partner. It was to deal with this two-headed beast that a two-pronged strategy was designed.
Unlike the plan to go after Afghanistan, however, the idea of invading Iraq and overthrowing Saddam Hussein provoked a firestorm hardly less intense than the one that was still raging over Bush's insistence on using the words “good” and “evil.”
Even before the debate on Iraq in particular, there had been strong objection to the whole idea of preemptive action by the United States. Some maintained that such action would be a violation of international law, while others contended that it would set a dangerous precedent under which, say, Pakistan might attack India or vice versa. But once the discussion shifted from the Bush Doctrine in general to the question of Iraq, the objections became more specific.
Most of these were brought together in early August 2002 (only about two months after Bush's speech at West Point) in a piece entitled "Don't Attack Saddam." The author was Brent Scowcroft, who had been National Security Adviser to the elder President Bush. Scowcroft asserted, first, that there was
scant evidence to tie Saddam to terrorist organizations, and even less to the September 11 attacks. Indeed, Saddam's goals have little in common with the terrorists who threaten us, and there is little incentive for him to make common cause with them.
That being the case, Scowcroft continued, “An attack on Iraq at this time would seriously jeopardize, if not destroy, the global counterterrorist campaign we have undertaken,” the campaign that must remain “our preeminent security priority.”
But this was not the only “priority” that to Scowcroft was “preeminent”:
Possibly the most dire consequences [of attacking Saddam] would be the effect in the region. The shared view in the region is that Iraq is principally an obsession of the U.S. The obsession of the region, however, is the Israeli-Palestinian conflict.
Showing little regard for the American “obsession,” Scowcroft was very solicitous of the regional one:
If we were seen to be turning our backs on that bitter [Israeli-Palestinian] conflict . . . in order to go after Iraq, there would be an explosion of outrage against us. We would be seen as ignoring a key interest of the Muslim world in order to satisfy what is seen to be a narrow American interest.
This, added Scowcroft, “could well destabilize Arab regimes in the region,” than which, to a quintessential realist like him, nothing could be worse.
In coming out publicly, and in these terms, against the second President Bush's policy, Scowcroft underscored the extent to which the son had diverged from the father's perspective. In addition, by lending greater credence to the already credible rumor that the elder Bush opposed invading Iraq, Scowcroft's article belied what would soon become one of the favorite theories of the hard Left—namely, that the son had gone to war in order to avenge the attempted assassination of his father.
On the other hand, by implicitly assenting to the notion that toppling Saddam was merely "a narrow American interest," Scowcroft gave a certain measure of aid and comfort to the hard Left and its fellow travelers within the liberal community. For from these circles the cry had been going out that it was the corporations, especially Halliburton (which Vice President Dick Cheney had formerly headed) and the oil companies, that were dragging us into an unnecessary war.
So, too, with Scowcroft's emphasis on resolving "the Israeli-Palestinian conflict"—a standard euphemism for putting pressure on Israel, whose "intransigence" was taken to be the major obstacle to peace. By strongly insinuating that the Israeli prime minister Ariel Sharon was a greater threat to us than Saddam Hussein, Scowcroft provided a respectable rationale for the hostility toward Israel that had come shamelessly out of the closet within hours of the attacks of 9/11 and that had been growing more and more overt, more and more virulent, and more and more widespread ever since. To the "paleoconservative" Right, where the charge first surfaced, it was less the oil companies than Israel that was mainly dragging us into invading Iraq. Before long, the Left would add the same accusation to its own indictment, and in due course it would be imprinted more and more openly on large swaths of mainstream opinion.
A cognate count in this indictment held that the invasion of Iraq had been secretly engineered by a cabal of Jewish officials acting not in the interest of their own country but in the service of Israel, and more particularly of Ariel Sharon. At first the framers and early spreaders of this defamatory charge considered it the better part of prudence to identify the conspirators not as Jews but as "neoconservatives." It was a clever tactic, in that Jews did in fact constitute a large proportion of the repentant liberals and leftists who, having some two or three decades earlier broken ranks with the Left and moved rightward, came to be identified as neoconservatives. Everyone in the know knew this, and for those to whom it was news, the point could easily be gotten across by singling out only those neoconservatives who had Jewish-sounding names and ignoring the many other leading members of the group whose clearly non-Jewish names might confuse the picture.
This tactic had been given a trial run by Patrick J. Buchanan in opposing the first Gulf war of 1991. Buchanan had then already denounced the Johnny-come-lately neoconservatives for having hijacked and corrupted the conservative movement, but now he descended deeper into the fever swamps by insisting that there were “only two groups beating the drums . . . for war in the Middle East—the Israeli Defense Ministry and its amen corner in the United States.” Among those standing in the “amen corner” he subsequently singled out four prominent hawks with Jewish-sounding names, counterposing them to “kids with names like McAllister, Murphy, Gonzales, and Leroy Brown” who would actually do the fighting if these Jews had their way.
Ten years later, in 2001, in the writings of Buchanan and other paleoconservatives within the journalistic fraternity (notably Robert Novak, Arnaud de Borchgrave, and Paul Craig Roberts), one of the four hawks of 1991, Richard Perle, made a return appearance. But Perle was now joined in starring roles by Paul Wolfowitz and Douglas Feith, both occupying high positions in the Pentagon, and a large supporting cast of identifiably Jewish intellectuals and commentators outside the government (among them Charles Krauthammer, William Kristol, and Robert Kagan). Like their predecessors in 1991, the members of the new ensemble were portrayed as agents of their bellicose counterparts in the Israeli government. But there was also a difference: the new group had managed to infiltrate the upper reaches of the American government. Having pulled this off, they had conspired to manipulate their non-Jewish bosses—Vice President Cheney, Secretary of Defense Donald Rumsfeld, National Security Adviser Condoleezza Rice, and George W. Bush himself—into invading Iraq.
Before long, this theory was picked up and circulated by just about everyone in the whole world who was intent on discrediting the Bush Doctrine. And understandably so: for what could suit their purposes better than to “expose” the invasion of Iraq—and by extension the whole of World War IV—as a war started by Jews and being waged solely in the interest of Israel?
To protect themselves against the taint of anti-Semitism, purveyors of this theory sometimes disingenuously continued to pretend that when they said “neoconservative” they did not mean “Jew.” Yet the theory inescapably rested on all-too-familiar anti-Semitic canards—principally that Jews were never reliably loyal to the country in which they lived, and that they were always conspiring behind the scenes, often successfully, to manipulate the world for their own nefarious purposes.9
Quite apart from its pernicious moral and political implications, the theory was ridiculous in its own right. To begin with, it asked one to believe the unbelievable: that strong-minded people like Bush, Rumsfeld, Cheney, and Rice could be fooled by a bunch of cunning subordinates, whether Jewish or not, into doing anything at all against their better judgment, let alone something so momentous as waging a war, let alone a war in which they could detect no clear American interest.
In the second place, there was the evidence uncovered by the purveyors of this theory themselves. That evidence, to which they triumphantly pointed, consisted of published articles and statements in which the alleged conspirators openly and unambiguously advocated the very policies they now stood accused of having secretly foisted upon an unwary Bush administration. Nor had these allegedly secret conspirators ever concealed their belief that toppling Saddam Hussein and adopting a policy aimed at the democratization of the entire Middle East would be good not only for the United States and for the people of the region but also for Israel. (And what, an uncharacteristically puzzled Richard Perle asked a hostile interviewer, was wrong with that?)
Which brings us to the fourth pillar on which the Bush Doctrine was erected.
The Fourth Pillar
Listening to the laments of Scowcroft and many others, one would think that George W. Bush had been ignoring “the Israeli-Palestinian conflict” altogether in his misplaced “obsession” with Iraq. In fact, however, even before 9/11 it had been widely and authoritatively reported that Bush was planning to come out publicly in favor of establishing a Palestinian state as the only path to a peaceful resolution of the conflict; and in October, after a short delay caused by 9/11, he became the first American President actually to do so. Yet at some point in the evolution of his thinking over the months that followed, Bush seems to have realized that there was something bizarre about supporting the establishment of a Palestinian state that would be run by a terrorist like Yasir Arafat and his henchmen. Why should the United States acquiesce, let alone help, in adding yet another state to those harboring and sponsoring terrorism precisely at a time when we were at war to rid the world of just such regimes?
Presumably it was under the prodding of this question that Bush came up with an idea even more novel in its way than the new conception of terrorism he had developed after 9/11. This idea was broached only three weeks after his speech at West Point, on June 24, 2002, when he issued a statement adding conditions to his endorsement of a Palestinian state:
Today, Palestinian authorities are encouraging, not opposing terrorism. This is unacceptable. And the United States will not support the establishment of a Palestinian state until its leaders engage in a sustained fight against the terrorists and dismantle their infrastructure.
But engaging in such a fight, he added, required the election of “new leaders, leaders not compromised by terror,” who would embark on building “entirely new political and economic institutions based on democracy, market economics, and action against terrorism.”
It was with these words that Bush brought his “vision” (as he kept calling it) of a Palestinian state living peacefully alongside Israel into line with his overall perspective on the evil of terrorism. And having traveled that far, he went the distance by repositioning the Palestinian issue into the larger context from which Arab propaganda had ripped it. Since this move passed almost unnoticed, it is worth dwelling on why it was so important.
Even before Israel was born in 1948, the Muslim countries of the Middle East had been fighting against the establishment of a sovereign Jewish state—any Jewish state—on land they believed Allah had reserved for those faithful to his prophet Muhammad. Hence the Arab-Israeli conflict had pitted hundreds of millions of Arabs and other Muslims, in control of more than two dozen countries and vast stretches of territory, against a handful of Jews who then numbered well under three-quarters of a million and who lived on a tiny sliver of land the size of New Jersey. But then came the Six-Day war of 1967. Launched in an effort to wipe Israel off the map, it ended instead with Israel in control of the West Bank (formerly occupied by Jordan) and Gaza (which had been controlled by Egypt). This humiliating defeat, however, was eventually turned into a rhetorical and political victory by Arab propagandists, who redefined the ongoing war of the whole Muslim world against the Jewish state as, instead, a struggle merely between the Palestinians and the Israelis. Thus was Israel's image transformed from a David to a Goliath, a move that succeeded in alienating much of the old sympathy previously enjoyed by the outnumbered and besieged Jewish state.
Bush now reversed this reversal. Not only did he reconstruct a truthful framework by telling the Palestinian people that they had been treated for decades “as pawns in the Middle East conflict.” He also insisted on being open and forthright about the nations that belonged in this larger picture and about what they had been up to:
I've said in the past that nations are either with us or against us in the war on terror. To be counted on the side of peace, nations must act. Every leader actually committed to peace will end incitement to violence in official media and publicly denounce homicide bombs. Every nation actually committed to peace will stop the flow of money, equipment, and recruits to terrorist groups seeking the destruction of Israel, including Hamas, Islamic Jihad, and Hizbullah. Every nation committed to peace must block the shipment of Iranian supplies to these groups and oppose regimes that promote terror, like Iraq. And Syria must choose the right side in the war on terror by closing terrorist camps and expelling terrorist organizations.
Here, then, Bush rebuilt the context in which to understand the Middle East conflict. In the months ahead, pressured by his main European ally, the British prime minister Tony Blair, and by his own Secretary of State, Colin Powell, Bush would sometimes seem to backslide into the old way of thinking. But he would invariably recover. Nor would he ever lose sight of the “vision” by which he was guided on this issue, and through which he had simultaneously made a strong start in fitting not the Palestinian Authority alone but the entire Muslim world, “friends” no less than enemies, into his conception of the war against terrorism.
With the inconsistency thus removed and the resultant shakiness repaired by the addition of this fourth pillar to undergird it, the Bush Doctrine was now firm, coherent, and complete.
Saluting the Flag Again
Both as a theoretical construct and as a guide to policy, the new Bush Doctrine could not have been further from the "Vietnam syndrome"—that loss of self-confidence and concomitant spread of neoisolationist and pacifist sentiment throughout the American body politic, and most prominently in the elite institutions of American culture, which began during the last years of the Vietnam war. I have already pointed to a likeness between the Truman Doctrine's declaration that World War III had started and the Bush Doctrine's equally portentous declaration that 9/11 had plunged us into World War IV. But fully to measure the distance traveled by the Bush Doctrine, I want to look now at yet another presidential doctrine—the one developed by Richard Nixon in the late 1960's precisely in response to the Vietnam syndrome.
Contrary to legend, our military intervention in Vietnam under John F. Kennedy in the early 1960's had been backed by every sector of mainstream opinion, with the elite media and the professoriate leading the cheers. At the beginning, indeed, the only criticism from the mainstream concerned tactical issues. Toward the middle, however, and with Lyndon B. Johnson having succeeded Kennedy in the White House, doubts began to arise concerning the political wisdom of the intervention, and by the time Nixon had replaced Johnson, the moral character of the United States was being indicted and besmirched. Large numbers of Americans, including even many of the people who had led the intervention in the Kennedy years, were now joining the tiny minority on the Left who had denounced them at the time for stupidity and immorality, and were saying that going into Vietnam had progressed from a folly into a crime.
To this new political reality the Nixon Doctrine was a reluctant accommodation. Just as getting into Vietnam under Kennedy and Johnson had worked to undermine support for the old strategy of containment, so Nixon—along with his chief adviser in foreign affairs, Henry Kissinger—thought that our way of getting out of Vietnam could, conversely, work to create the new strategy that had become necessary.
First, American forces would be withdrawn from Vietnam gradually, while the South Vietnamese built up enough power to assume responsibility for the defense of their own country. The American role would then be limited to providing arms and equipment. The same policy, suitably modified according to local circumstances, would be applied to the rest of the world as well. In every major region, the United States would now depend on local surrogates rather than on its own military to deter or contain any Soviet-sponsored aggression, or any other potentially destabilizing occurrence. We would supply arms and other forms of assistance, but henceforth the deterring and the fighting would be left to others.
On every point, the new Bush Doctrine contrasted sharply with the old Nixon Doctrine. Instead of withdrawal and fallback, Bush proposed a highly ambitious forward strategy of intervention. Instead of relying on local surrogates, Bush proposed an active deployment of our own military power. Instead of deterrence and containment, Bush proposed preemption and “taking the fight to the enemy.” And instead of worrying about the stability of the region in question, Bush proposed to destabilize it through “regime change.”
The Nixon Doctrine had obviously harmonized with the Vietnam syndrome. What about the Bush Doctrine? Was the political and military strategy it put forward comparably in tune with the post-9/11 public mood?
Certainly this is how it seemed in the immediate aftermath of the attacks: so much so that a group of younger commentators were quick to proclaim the birth of an entirely new era in American history. What December 7, 1941 had done to the old isolationism, they announced, September 11, 2001 had done to the Vietnam syndrome. It was politically dead, and the cultural fallout of that war—all the damaging changes wrought by the 1960's and the 1970's—would now follow it into the grave.
The most obvious sign of the new era was that once again we were saluting our now ubiquitously displayed flag. This was the very flag that, not so long ago, leftist radicals had thought fit only for burning. Yet now, even on the old flag-burning Left, a few prominent personalities were painfully wrenching their unaccustomed arms into something vaguely resembling a salute.
It was a scene reminiscent of the response of some Communists to the suppression by the new Soviet regime of the sailors' revolt that erupted in Kronstadt in the early 1920's. Far more murderous horrors would pour out of the malignant recesses of Stalinist rule, but as the first in that long series of atrocities leading to disillusionment with the Soviet Union, Kronstadt became the portent of them all. In its way, 9/11 served as an inverse Kronstadt for a number of radical leftists of today. What it did was raise questions about what one of them was now honest enough to describe as their inveterately “negative faith in America the ugly.”
September 11 also brought to mind a poem by W.H. Auden written upon the outbreak of World War II and entitled “September 1, 1939.” Although it contained hostile sentiments about America, remnants of Auden's own Communist period, the opening lines seemed so evocative of September 11, 2001 that they were often quoted in the early days of this new war:
I sit in one of the dives
On Fifty-second Street
Uncertain and afraid
As the clever hopes expire
Of a low dishonest decade.
Auden's low dishonest decade was the 1930's, and its clever hopes centered on the construction of a workers' paradise in the Soviet Union. Our counterpart was the 1960's, and its less clever hopes centered not on construction, however illusory, but on destruction—the destruction of the institutions that made up the American way of life. For America was conceived in that period as the great obstacle to any improvement in the lot of the wretched of the earth, not least those within its own borders.
As a “founding father” of neoconservatism who had broken ranks with the Left precisely because I was repelled by its “negative faith in America the ugly,” I naturally welcomed this new patriotic mood with open arms. In the years since making that break, I had been growing more and more impressed with the virtues of American society. I now saw that America was a country in which more liberty and more prosperity abounded than human beings had ever enjoyed in any other country or any other time. I now recognized that these blessings were also more widely shared than even the most visionary utopians had ever imagined possible. And I now understood that this was an immense achievement, entitling the United States of America to an honored place on the roster of the greatest civilizations the world had ever known.
The new patriotic mood therefore seemed to me a sign of greater intellectual sanity and moral health, and I fervently hoped that it would last. But I could not fully share the confidence of some of my younger political friends that the change was permanent—that, as they exulted, nothing in American politics and American culture would ever be the same again. As a veteran of the political and cultural wars of the 1960's, I knew from my own scars how ephemeral such a mood might well turn out to be, and how vulnerable it was to seemingly insignificant forces.
In this connection, I was haunted by one memory in particular. It was of an evening in the year 1960, when I went to address a meeting of left-wing radicals on a subject that had then barely begun to show the whites of its eyes: the possibility of American military involvement in a faraway place called Vietnam. Accompanying me that evening was the late Marion Magid, a member of my staff at COMMENTARY, of which I had recently become the editor. As we entered the drafty old hall on Union Square in Manhattan, Marion surveyed the 50 or so people in the audience, and whispered to me: “Do you realize that every young person in this room is a tragedy to some family or other?”
The memory of this quip brought back to life some sense of how unpromising the future had then appeared to be for that bedraggled-looking assemblage. No one would have dreamed that these young people, and the generation about to descend from them politically and culturally, would within the blink of a historical eye come to be hailed as “the best informed, the most intelligent, and the most idealistic this country has ever known.” Those words, even more incredibly, would emanate from what the new movement regarded as the very belly of the beast: from, to be specific, Archibald Cox, a professor at the Harvard Law School and later Solicitor General of the United States. Similar encomia would flow unctuously from the mouths of parents, teachers, clergymen, artists, and journalists.
More incredible yet, the ideas and attitudes of the new movement, cleaned up but essentially unchanged, would within a mere ten years turn one of our two major parties upside down and inside out. In 1961, President John F. Kennedy had famously declared that we would “pay any price, bear any burden, . . . to assure the survival and the success of liberty.” By 1972, George McGovern, nominated for President by Kennedy's own party, was campaigning on the slogan, “Come Home, America.” It was a slogan that to an uncanny degree reflected the ethos of the embryonic movement I had addressed in Union Square only about a decade before.
The New “Jackal Bins”
In going over this familiar ground, I am trying to make two points. One is that the nascent radical movement of the late 1950's and early 1960's was up against an adversary, namely, the “Establishment,” that looked unassailable. Even so—and this is my second point—to the bewilderment of almost everyone, not least the radicals themselves, they blew and they blew and they blew the house down.
Here we had a major development that slipped in under the radar of virtually all the pundits and the trend-spotters. How well I remember John Roche, a political scientist then working in the Johnson White House, being quoted by the columnist Jimmy Breslin as having derisively labeled the radicals a bunch of “Upper West Side jackal bins.” As further investigation disclosed, Roche had actually said “Jacobins,” a word so unfamiliar to his interviewer that “jackal bins” was the best Breslin could do in transcribing his notes.
Much ink has been spilled, gallons of it by me, in the struggle to explain how and why a great “Establishment” representing so wide a national consensus could have been toppled so easily and so quickly by so small and marginal a group as these “jackal bins.” In the domain of foreign affairs, of course, the usual answer is Vietnam. In this view, it was by deciding to fight an unpopular war that the Establishment rendered itself vulnerable.
The trouble with this explanation, to say it again, is that at least until 1965 Vietnam was a popular war. All the major media—from the New York Times to the Washington Post, from Time to Newsweek, from CBS to ABC—supported our intervention. So did most of the professoriate. And so did the public. Even when all but one or two of the people who had either directly led us into Vietnam or had applauded our intervention commenced falling all over themselves to join the antiwar parade, public opinion continued to support the war.
But it did not matter. Public opinion had ceased to count. Indeed, as the Tet offensive of 1968 revealed, reality itself had ceased to count. As all would later come to agree and some vainly struggled to insist at the time, Tet was a crushing defeat not for us but for the North Vietnamese. But Walter Cronkite had only to declare it a defeat for us from the anchor desk of the CBS Evening News, and a defeat it became.
Admittedly, in electoral politics, where numbers are decisive, public opinion remained potent. Consequently, none of the doves contending for the presidency in 1968 or 1972 could beat Richard Nixon. Yet even Nixon felt it necessary to campaign on the claim that he had a “plan” not for winning but for getting us out of Vietnam.
All of which is to say that, on Vietnam, elite opinion trumped popular opinion. Nor were the effects restricted to foreign policy. They extended into the newly antagonistic attitude toward everything America was and represented.
It hardly needs stressing that this attitude found a home in the world of the arts, the universities, and the major media of news and entertainment, where intellectuals shaped by the 1960's, and their acolytes in the publishing houses of New York and in the studios of Hollywood, held sway. But it would be a serious mistake to suppose that the trickle-down effect of the professoriate's attitude was confined to literature, journalism, and show business.
John Maynard Keynes once said that “Practical men who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist.” Keynes was referring specifically to businessmen. But practical functionaries like bureaucrats and administrators are subject to the same rule, though they tend to be the slaves not of economists but of historians and sociologists and philosophers and novelists who are very much alive even when their ideas have, or should have, become defunct. Nor is it necessary for the “practical men” to have studied the works in question, or even ever to have heard of their authors. All they need do is read the New York Times, or switch on their television sets, or go to the movies—and, drip by drip, a more easily assimilable form of the original material is absorbed into their heads and their nervous systems.
These, in sum, were some of the factors that made me wonder whether the terrorist attacks of September 11, 2001 would turn out to mark a genuine turning point comparable to the bombing of Pearl Harbor on December 7, 1941. I was well aware that, before Pearl Harbor, several groups ranging across the political spectrum had fought against our joining the British, who had been at war with Nazi Germany since 1939. There were the isolationists, both liberal and conservative, who detected no American interest in this distant conflict; there were the right-wing radicals who thought that if we were going to go to war, it ought to be on the side of Nazi Germany against Communist Russia, not the other way around; and there were the left-wing radicals who saw the war as a struggle between two equally malign imperialistic systems in which they had no stake. Under the influence of these groups, a large majority of Americans had opposed our entry into the war right up to the moment of the Japanese attack on Pearl Harbor. But from that moment on, the opposition faded away. The antiwar groups either lost most of their members or lapsed into a morose silence, and public opinion did a 180-degree turn.
At first, September 11 did seem to resemble Pearl Harbor in its galvanizing effect, while by all indications the first battle of World War IV—the battle of Afghanistan—was supported by a perhaps even larger percentage of the public than Vietnam had been at the beginning. Nevertheless, even though the opposition in 2001 was still numerically insignificant, it was much stronger than it had been in the early days of Vietnam. The reason was that it now maintained a tight grip over the institutions that, in the later stages of that war, had been surrendered bit by bit to the anti-American Left.
There was, for openers, the literary community, which could stand in for the world of the arts in general. No sooner had the Twin Towers been toppled and the Pentagon smashed than a fierce competition began for the gold in the anti-American Olympics. Susan Sontag, one of my old ex-friends on the Left, seized an early lead in this contest with a piece in which she asserted that 9/11 was an attack “undertaken as a consequence of specific American alliances and actions.” Not content with suggesting that we had brought this aggression on ourselves, she went on to compare the backing in Congress for our “robotic President” to “the unanimously applauded, self-congratulatory bromides of a Soviet Party Congress.”
Another of my old ex-friends, Norman Mailer, surprisingly slow out of the starting gate, soon came up strong on the inside by comparing the Twin Towers to “two huge buck teeth,” and pronouncing the ruins at Ground Zero “more beautiful than the buildings were.” Still playing the enfant terrible even as he was closing in on his eightieth year, Mailer denounced us as “cultural oppressors and aesthetic oppressors” of the Third World. In what did this oppression consist? It consisted, he expatiated, in our establishing “enclaves of our food out there, like McDonald's” and in putting “our high-rise buildings” around the airports of even “the meanest, scummiest, capital[s] in the world.” For these horrendous crimes we had, on 9/11, received a measure—and only a small measure at that—of our just deserts.
Then there were the universities. A report issued shortly after 9/11 by the American Council of Trustees and Alumni (ACTA) cited about a hundred malodorous statements wafting out of campuses all over the country that resembled Sontag and Mailer in blaming the attacks not on the terrorists but on America. Among these were three especially choice specimens. From a professor at the University of New Mexico: "Anyone who can blow up the Pentagon gets my vote." From a professor at Rutgers: "[We] should be aware that the ultimate cause [of 9/11] is the fascism of U.S. foreign policy over the past many decades." And from a professor at the University of Massachusetts: "[The American flag] is a symbol of terrorism and death and fear and destruction and oppression."
When the ACTA report was issued, protesting wails of “McCarthyism” were heard throughout the land, especially from the professors cited. Like them, Susan Sontag, too, claimed that her freedom of speech was being placed in jeopardy. In this peculiar reading of the First Amendment, much favored by leftists in general, they were free to say anything they liked, but the right to free speech ended where criticism of what they had said began.
Actually, however, with rare exceptions, attempts to stifle dissent on the campus were largely directed at the many students and the few faculty members who supported the 9/11 war. All these attempts could be encapsulated into a single phenomenon: on a number of campuses, students or professors who displayed American flags or patriotic posters were forced to take them down. As for Susan Sontag's freedom of speech, hardly had the ink dried on her post-9/11 piece before she became the subject of countless fawning reports and interviews in periodicals and on television programs around the world.
Speaking of television, it was soon drowning us with material presenting Islam in glowing terms. Mainly, these programs took their cue from the President and other political leaders. Out of the best of motives, and for prudential reasons as well, elected officials were striving mightily to deny that the war against terrorism was a war against Islam. Hence they never ceased heaping praises on the beauties of that religion, about which few of them knew anything.
But it was from the universities, not from the politicians, that the substantive content of these broadcasts derived, in interviews with academics, many of them Muslims themselves, whose accounts of Islam were selectively roseate. Sometimes they were even downright untruthful, especially in sanitizing the doctrine of jihad or holy war, or in misrepresenting the extent to which leading Muslim clerics all over the world had been celebrating suicide bombers—not excluding those who had crashed into the World Trade Center and the Pentagon—as heroes and martyrs.
I do not bring this up in order to enter into a theological dispute. My purpose, rather, is to offer another case study in the continued workings of the trickle-down effect I have already described. Thus, hard on the heels of 9/11, the universities began adding innumerable courses on Islam to their curricula. On the campus, “understanding Islam” inevitably translated into apologetics for it, and most of the media dutifully followed suit. The media also adopted the stance of neutrality between the terrorists and ourselves that prevailed among the relatively moderate professoriate, as when the major television networks ordered their anchors to avoid exhibiting partisanship.
Here the great exception was the Fox News Channel. The New York Times, in an article deploring the fact that Fox was covering the war from a frankly pro-American perspective, expressed relief that no other network had so cavalierly discarded the sacred conventions dictating that journalists, in the words of the president of ABC News, must “maintain their neutrality in times of war.”
Although the vast majority of those who blamed America for having been attacked were on the Left, a few voices on the Right joined this perverted chorus. Speaking on Pat Robertson's TV program, the Reverend Jerry Falwell delivered himself of the view that God was punishing the United States for the moral decay exemplified by a variety of liberal groups among us, and Robertson concurred. Both later apologized for singling out these groups, but each continued to insist that God was withdrawing His protection from America because all of us had become great sinners. And in the amen corner that quickly formed on the secular Right, commentators like Robert Novak and Pat Buchanan added that we had called the attack down on our heads not so much by our willful disobedience to divine law as by our manipulated obedience to Israel.
Oddly enough, however, within the Arab world itself, there was much less emphasis on Israel as the root cause of the attacks than was placed on it by most, if not all, of Buchanan's fellow paleoconservatives on the Right. Even to Osama bin Laden himself, support of Israel ranked only third on a list of our “crimes” against Islam.
Not, to be sure, that Arabs everywhere—together with most non-Arab Middle Eastern Muslims like the Iranians—had given up their dream of wiping Israel off the map. To anyone who thought otherwise, Fouad Ajami of Johns Hopkins, an American who grew up as a Muslim in Lebanon, had this to say about the Arab world's “great refusal” to accept Israel under any conditions whatsoever:
The great refusal persists in that “Arab street” of ordinary men and women, among the intellectuals and the writers, and in the professional syndicates. . . . The force of this refusal can be seen in the press of the governments and of the oppositionists, among the secularists and the Islamists alike, in countries that have concluded diplomatic agreements with Israel and those that haven't.
Ajami emphasized that the great refusal remained "fiercest in Egypt," notwithstanding the peace treaty it had signed with Israel in 1979. It might have been expected, then, that the Egyptians would be eager to blame the widespread animus against the U.S. in their own country on American policy toward Israel, especially since Egypt, being second only to the Jewish state as a recipient of American aid, had a powerful incentive to explain away so ungrateful a response to the benevolent treatment it was receiving at our hands. But no. Only about two weeks before 9/11, Ab'd Al-Mun'im Murad, a columnist in Al-Akhbar, a daily newspaper sponsored by the Egyptian government, wrote:
The conflict that we call the Arab-Israeli conflict is, in truth, an Arab conflict with Western, and particularly American, colonialism. The U.S. treats [the Arabs] as it treated the slaves inside the American continent. To this end, [the U.S.] is helped by the smaller enemy, and I mean Israel.
In another piece, the same writer expanded on this unusually candid acknowledgment:
The issue no longer concerns the Israeli-Arab conflict. The real issue is the Arab-American conflict—Arabs must understand that the U.S. is not “the American friend”—and its task, past, present, and future, is [to impose] hegemony on the world, primarily on the Middle East and the Arab world.
Then, in a third piece, also published in late August, Murad gave us an inkling of the reciprocal “task” he had in mind to be performed on America:
The Statue of Liberty, in New York Harbor, must be destroyed because of . . . the idiotic American policy that goes from disgrace to disgrace in the swamp of bias and blind fanaticism. . . . The age of the American collapse has begun.
If this was the kind of thing we were getting from an Arab country that everyone regarded as “moderate,” in radical states like Iraq and Iran nothing less would suffice than identifying America as the “Great Satan.” As for the Palestinians, their contempt for America was hardly exceeded by their loathing of Israel. For example, the mufti—or chief cleric—appointed by the Palestinian Authority under Yasir Arafat had prayed that God would “destroy America,” while the editor of a leading Palestinian journal proclaimed:
History does not remember the United States, but it remembers Iraq, the cradle of civilization. . . . History remembers every piece of Arab land, because it is the bosom of human civilization. On the other hand, the [American] murderers of humanity, the creators of the barbaric culture and the bloodsuckers of nations, are doomed to death and destined to shrink to a microscopic size, like Micronesia.
The absence of even a word here about Israel showed that if the Jewish state had never come into existence, the United States would still have stood as an embodiment of everything that most of these Arabs considered evil. Indeed, the hatred of Israel was in large part a surrogate for anti-Americanism, rather than the reverse. Israel was seen as the spearhead of the American drive for domination over the Middle East. As such, the Jewish state was a translation of America into, as it were, Hebrew—the “little enemy,” the “little Satan.” To rid the region of it would thus be tantamount to cleansing an area belonging to Islam (dar al-Islam) of the blasphemous political, social, and cultural influences emanating from a barbaric and murderous force. But the force, so to speak, was with America, of which Israel was merely an instrument.
Although Buchanan and Novak were earlier and more outspoken in blaming 9/11 on American friendliness toward Israel, this idea was not confined to the Right or to the marginal precincts of paleoconservatism. On the contrary: while it popped up on the Right, it thoroughly pervaded the radical Left and much of the soft Left, and was even espoused by a number of liberal centrists like Mickey Kaus. For the moment, indeed, the blame-Israel-firsters were concentrated most heavily on the Left.
It was also on the Left, and above all in the universities, that their fraternal twins, the blame-America-firsters, were located. Yet Eric Foner, a professor of history at my own alma mater, Columbia, risibly claimed that the ACTA report was misleading since the polls proved that there was “firm support” for the war among college students. “If our aim is to indoctrinate students with unpatriotic beliefs,” Foner smirked, “we're obviously doing a very poor job of it.”
True enough. But what Foner, as a historian, must have known but neglected to mention was that even at the height of the radical fevers on the campus in the 1960's, only a minority of students sided with the antiwar radicals. Still, even though they were in the majority, the non-radical students were unable to make themselves heard above the antiwar din, and whenever they tried, they were shouted down. This is how it was, too, on the campus after 9/11. There were, here and there, brave defiers of the academic orthodoxies. But mostly, the silent majority remained silent, for fear of incurring the disapproval of their teachers, or even of being punished for the crime of “insensitivity.”
Such, then, was the assault that began to be mounted within hours of 9/11 by the guerrillas-with-tenure in the universities, along with their spiritual and political disciples scattered throughout other quarters of our culture. Could this “tiny handful of aging Rip van Winkles,” as they were breezily brushed off by one commentator, grow into a force as powerful as the “jackal bins” of yesteryear? Was the upsurge of confidence in America, and American virtue, that spontaneously materialized on 9/11 strong enough to withstand them this time around?
Some who shared my apprehensions believed that if things went well on the military front, all would be well on the home front, too. And that is how it appeared from the effect wrought by the spectacular success of the Afghanistan campaign, which disposed of the “quagmire” theory and also dampened antiwar activity on at least a number of campuses. Nevertheless, the mopping-up operation in Afghanistan created an opportunity for more subtle forms of opposition to gain traction. There were complaints that the terrorists captured in Afghanistan and then sent to a special facility in Guantanamo were not being treated as regular prisoners of war. And there were also allegations of the threat to civil liberties posed in America itself by measures like the Patriot Act, which had been designed to ward off any further terrorist attacks at home. Although these concerns were mostly based on misreadings of the Geneva Convention and of the Patriot Act itself, some people no doubt raised them in good faith. But there is also no doubt that such issues could—and did—serve as a respectable cover for wholesale opposition to the entire war.
Another respectable cover was the charge that Bush was following a policy of “unilateralism.” The alarm over this supposedly unheard-of outrage was first sounded by the chancelleries and chattering classes of Western Europe when Bush stated that, in taking the fight to the terrorists and their sponsors, we would prefer to do so with allies and with the blessing of the UN, but if necessary we would go it alone and without an imprimatur from the Security Council.
This was too much for the Europeans. Having duly offered us their condolences over 9/11, they could barely let a decent interval pass before going back into the ancient family business of showing how vastly superior in wisdom and finesse they were to the Americans, whose primitive character was once again on display in the “simplistic” ideas and crude moralizing of George W. Bush. Now they urged that our military operations end with Afghanistan, and that we leave the rest to diplomacy in deferential consultation with the great masters of that recondite art in Paris and Brussels.
Taking their cue from these masters, the New York Times, along with many other publications ranging from the Center to the hard Left—and soon to be seconded by all the Democratic candidates in the presidential primaries, except for Senator Joseph Lieberman—began hitting Bush for recklessness and overreaching. What we saw developing here was a broader coalition than the antiwar movement spawned by Vietnam had managed to put together, especially in its first few years. The antiwar movement then had been made up almost entirely of leftists and liberals, whereas this new movement was bringing together the whole of the hard Left, elements of the soft Left, and sectors of the American Right.
Treading the path previously marked out by his colleague Mickey Kaus on the issue of Israel, Michael Kinsley of the soft Left allied himself with Pat Buchanan in bringing forth yet another respectable cover. This was to indict the President for evading the Constitution by proposing to fight undeclared wars. Meanwhile, the same charge was moving into the political mainstream through Democratic Senators like Robert Byrd, Edward M. Kennedy, and Tom Daschle, though they also continued carrying on about quagmires and slippery slopes and “unilateralism.”
I for one was certain that, as the military facet of World War IV widened—with Iraq clearly being the next most likely front—opposition would not only grow but would acquire enough assurance to dispense with any respectable covers. Which was to say that it would be taken over by extremists and radicalized. About this I turned out to be correct, while those who scoffed at the “jackal bins” and the “aging Rip Van Winkles” as a politically insignificant bunch turned out to be wrong. But I never imagined that the new antiwar movement would so rapidly arrive at the stage of virulence it had taken years for its ancestors of the Vietnam era to reach.
Varieties of Anti-Americanism
A possible explanation of the great velocity achieved by the new antiwar movement was that, like the respectable critique immediately preceding it, the radical opposition was following the lead of European opinion. In this instance, encouragement and reinforcement came from the almost incredible degree of hostility to America that erupted in the wake of 9/11 all over the European continent, and most blatantly in France and Germany, and that gathered even more steam in the run-up to the battle of Iraq. If demonstrations and public-opinion polls could be believed, huge numbers of Europeans loathed the United States so deeply that they were unwilling to side with it even against one of the most tyrannical and murderous despots on earth.
That this was the feeling in the Muslim world did not come as a surprise. Unlike in Europe, where the attacks of 9/11 did elicit a passing moment of sympathy for the United States (“We Are All Americans Now,” proclaimed a headline the next day in the leading leftist daily in Paris), in the realm of Islam the news of 9/11 brought dancing in the streets and screams of jubilation. Almost to a man, Muslim clerics in their sermons assured the faithful that in striking a blow against the “Great Satan,” Osama bin Laden had acted as a jihadist, or holy warrior, in strict accordance with the will of God.
This could have been predicted from a debate on the topic “Bin Laden—The Arab Despair and American Fear” that was televised on the Arabic-language network Al-Jazeera about two months before 9/11. Using “American Fear” in the title was a bit premature, since this was a time when very few Americans were frightened by Islamic terrorism, for the simple reason that scarcely any had ever heard of bin Laden or al Qaeda. Be that as it may, at the conclusion of the program, the host said to the lone guest who had been denouncing bin Laden as a terrorist: “I am looking at the viewers' reactions for one that would support your positions—but . . . I can't find any.” He then cited “an opinion poll in a Kuwaiti paper which showed that 69 percent of Kuwaitis, Egyptians, Syrians, Lebanese, and Palestinians think bin Laden is an Arab hero and an Islamic jihad warrior.” And on the basis of the station's own poll, he also estimated that among all Arabs “from the Gulf to the Ocean,” the proportion sharing this view of bin Laden was “maybe even 99 percent.”
Surely, then, the chairman of the Syrian Arab Writers Association was speaking for hordes of his “brothers” in declaring shortly after 9/11 that
When the twin towers collapsed . . . I felt deep within me like someone delivered from the grave; I [felt] that I was being carried in the air above the corpse of the mythological symbol of arrogant American imperialist power. . . . My lungs filled with air, and I breathed in relief, as I had never breathed before.
If this was how the Arab/Muslim world largely felt about 9/11, what could have been expected from that world when the United States picked itself up off the ground—Ground Zero, to be exact—and began fighting back? What could have been expected is precisely what happened: another furious outburst of anti-Americanism. Only this time the outburst was infused not with jubilation but with the desperate hope that the United States would somehow be humiliated. This hope was soon extinguished by the quick defeat of the Taliban regime in Afghanistan, but it was immediately rekindled by the way Saddam Hussein was standing up against America. Saddam had killed hundreds of thousands of Muslims in Iran, and countless Arabs in his own country and Kuwait. Obviously, however, to his Arab and Muslim “brothers” this was completely canceled out by his defiance of the United States.
Was there, perhaps, an element of the same twisted sentiment in the willingness of millions upon millions of Europeans to lend de-facto aid and comfort to this monster? Of course, the claim was that most such people were neither pro-Saddam nor anti-American: all they wanted was to “give peace a chance.” But this claim was belied by the slogans, the body language, the speeches, and the manifestos of the “peace” party. Though hatred of America may not have been universal among opponents of American military action, it was obviously very widespread and very deep. And though other considerations (pacifist sentiment, concern about civilian casualties, contempt for George Bush, faith in the UN, etc.) were at work, these factors had no trouble coexisting harmoniously with extreme hostility to the United States.
Thus, within two months of 9/11, a survey of influential people in 23 countries was undertaken by the Pew Research Center, the Princeton Survey Research Associates, and the International Herald Tribune. Here is how a British newspaper summarized the findings:
Did America somehow ask for the terrorist outrages in New York and Washington? . . . [M]ost people of influence in the rest of the world . . . believe that, to a certain extent, the U.S. was asking for it. . . . From its closest allies, in Europe, to the Middle East, Russia, and Asia, a uniform 70 percent said people considered it good that after September 11 Americans had realized what it was to be vulnerable.
It would therefore seem that the Italian playwright Dario Fo, winner of the Nobel Prize for Literature in 1997, was more representative of European opinion than he might at first have appeared to be when he spewed out the following sentiment:
The great speculators wallow in an economy that every year kills tens of millions of people with poverty—so what is 20,000 [sic] dead in New York? Regardless of who carried out the massacre, this violence is the legitimate daughter of the culture of violence, hunger, and inhumane exploitation.
In France, a leading philosopher and social theorist, Jean Baudrillard, produced a somewhat different type of apologia for the terrorists of 9/11 and their ilk. This was so laden with postmodern jargon and so convoluted that it bordered on parody (“The collapse of the towers of the World Trade Center is unimaginable, but this does not suffice to make it a real event”). But Baudrillard's piece did at least contain a revealing confession:
That we have dreamed of this event, that everyone without exception has dreamed of it, . . . is unacceptable for the Western moral conscience, but it is still a fact. . . . Ultimately, they [al Qaeda] did it, but we willed it.
Much the same idea, in even more straightforward terms, was espoused across the Channel by Mary Beard, a teacher of classics at my other alma mater, Cambridge University, who wrote: “[H]owever tactfully you dress it up, the United States had it coming. . . . World bullies . . . will in the end pay the price.” With this the highly regarded novelist Martin Amis agreed. But Beard's old-fashioned English plainness evidently being a little too plain for him, Amis resorted to a bit of fancy continental footwork in formulating his own endorsement of the idea that America had been asking for it:
Terrorism is political communication by other means. The message of September 11 ran as follows: America, it is time you learned how implacably you are hated. . . . Various national characteristics—self-reliance, a fiercer patriotism than any in Western Europe, an assiduous geographical incuriosity—have created a deficit of empathy for the sufferings of people far away.
What on earth was going on here? After 9/11, most Americans had gradually come to recognize that we were hated by the terrorists who had attacked us and their Muslim cheerleaders not for our failings and sins but precisely for our virtues as a free and prosperous country. But why should we be hated by hordes of people living in other free and prosperous countries? In their case, presumably, it must be for our sins. And yet most of us knew for certain that, whatever sins we might have committed, they were not the ones of which the Europeans kept accusing us.
To wit: far from being a nation of overbearing bullies, we were humbly begging for the support of tiny countries we could easily have pushed around. Far from being “unilateralists,” we were busy soliciting the gratuitous permission and the dubious blessing of the Security Council before taking military action against Saddam Hussein. Far from “rushing into war,” we were spending months dancing a diplomatic gavotte in the vain hope of enlisting the help of France, Germany, and Russia. And so on, and so on, down to the last detail in the catalogue.
What, then, was going on? An answer to this puzzling question that would eventually gain perhaps the widest circulation came from Robert Kagan of the Carnegie Endowment. In a catchy formulation that soon became famous, Kagan proposed that Americans were from Mars and Europeans were from Venus. Expanding on this formulation, he wrote:
On the all-important question of power—the efficacy of power, the morality of power, the desirability of power—American and European perspectives are diverging. Europe is turning away from power, or to put it a little differently, it is moving beyond power into a self-contained world of laws and rules and transnational negotiation and cooperation. It is entering a post-historical paradise of peace and relative prosperity, the realization of Kant's “Perpetual Peace.” The United States, meanwhile, remains mired in history, exercising power in the anarchic Hobbesian world where international laws and rules are unreliable and where true security and the defense and promotion of a liberal order still depend on the possession and use of military might.
In developing his theory, Kagan got many things right and cast a salubrious light into many dark corners. But it also seemed to me that he was putting the shoes of his theory on the wrong feet. Although I fully accepted Kagan's description of the divergent attitudes toward military power, I did not agree that the Europeans were already living in the future while the United States remained “mired” in the past. In my judgment, the opposite was closer to the truth.
The “post-historical paradise” into which the Europeans were supposedly moving struck me as nothing more than the web of international institutions that had been created at the end of World War II under the leadership of the United States in the hope that they would foster peace and prosperity. These included the United Nations, the World Bank, the World Court, and others. Then after 1947, and again under the leadership of the United States, adaptations were made to the already existing institutions and new ones like NATO were added to fit the needs of World War III. With the victorious conclusion of World War III in 1989-90, the old international order became obsolete, and new arrangements tailored to a new era would have to be forged. But more than a decade elapsed before 9/11 finally made the contours of the “post-cold-war era” clear enough for these new arrangements to begin being developed.
Looked at from this angle, the Bush Doctrine revealed itself as an extremely bold effort to break out of the institutional framework and the strategy constructed to fight the last war. But it was more: it also drew up a blueprint for a new structure and a new strategy to fight a different breed of enemy in a war that was just starting and that showed signs of stretching out into the future as far as the eye could see. Facing the realities of what now confronted us, Bush had come to the conclusion that few if any of the old instrumentalities were capable of defeating this new breed of enemy, and that the strategies of the past were equally helpless before this enemy's way of waging war. To move into the future meant to substitute preemption for deterrence, and to rely on American military might rather than the “soft power” represented by the UN and the other relics of World War III. Indeed, not even the hard power of NATO—which had been restricted by design to the European continent, and whose deployment elsewhere could be, and would be, obstructed by the French—was of much use in the world of the future.
Examined from this same angle, the European justifications for resisting the Bush Doctrine—the complaints about “unilateralism,” trigger-happiness, and the rest—were unveiled as mere rationalizations. Here I went along with Kagan in tracing these rationalizations to a decline in the power of the Europeans. He put it very well:
World War II all but destroyed European nations as global powers. . . . For a half-century after World War II, however, this weakness was masked by the unique geopolitical circumstances of the cold war. Dwarfed by the two superpowers on its flanks, a weakened Europe nevertheless served as the central strategic theater of the worldwide struggle between Communism and democratic capitalism. . . . Although shorn of most traditional measures of great-power status, Europe remained the geopolitical pivot, and this, along with lingering habits of world leadership, allowed Europeans to retain international influence well beyond what their sheer military capabilities might have afforded. Europe lost this strategic centrality after the cold war ended, but it took a few more years for the lingering mirage of European global power to fade.
So far, so good. Where I parted company with Kagan's analysis was over his acquiescence in the claim that the Europeans had in fact made the leap into the post-national, or postmodern, “Kantian paradise” of the future. To me it seemed clear that it was they, and not we Americans, who were “mired” in the past. They were fighting tooth and nail against the American effort to move into the future precisely because holding onto the ideas, the strategic habits, and the international institutions of the cold war would allow them to go on exerting “international influence well beyond what their sheer military capabilities might have afforded.” It was George W. Bush—that “simplistic” moralizer and trigger-happy “cowboy,” that flouter of international law and reckless unilateralist—who had possessed the wit to see the future and had summoned up the courage to cross over into it.
But Bush was also a politician, and as such he felt it necessary to make some accommodation to the pressures coming at him both at home and from abroad. What this required was an occasional return visit to the past. On such visits, as when he would seek endorsements from the UN Security Council, he showed a polite measure of deference to those, again both at home and abroad, who insisted on reading the Bush Doctrine not as a blueprint for the future but as a reckless repudiation of the approach favored by the allegedly more sophisticated Europeans and their American counterparts. In Kagan's apt description of how the Europeans saw themselves:
Europeans insist they approach problems with greater nuance and sophistication. They try to influence others through subtlety and indirection. . . . They generally favor peaceful responses to problems, preferring negotiation, diplomacy, and persuasion to coercion. They are quicker to appeal to international law, international conventions, and international opinion to adjudicate disputes. They try to use commercial and economic ties to bind nations together. They often emphasize process over result, believing that ultimately process can become substance.
None of this was new: the Europeans had made almost exactly the same claim of superior sophistication during the Reagan years. At that time—in 1983—it had elicited a definitive comment from Owen Harries (the former head of policy planning in the Australian Department of Foreign Affairs and himself a member of the realist school):
When one is exposed to this claim of superior realism and sophistication, one's first inclination is to ask where exactly is the evidence for it. If one considers some of the salient episodes in the history of Europe in this century—the events leading up to 1914, the Versailles peace conference, Munich, the extent of the effort Europe has been prepared to make to secure its own defense since 1948, and the current attitude toward the defense of its vital interests in the Persian Gulf—one is not irresistibly led to concede European superiority.
Two decades later, Harries as a realist would have his own grave reservations about the Bush Doctrine. But I had no hesitation in adding the “sophisticated” European opposition to it as the latest episode in the long string of disastrously mistaken judgments he had enumerated back in 1983.
The astonishing success of the campaigns in Afghanistan and Iraq made a hash of the skepticism of the many pundits who had been so sure that we had too few troops or were following the wrong battle plan. Instead of getting bogged down, as they had predicted, our forces raced through these two campaigns in record time; and instead of tens of thousands of body bags being flown home, the casualties were numbered in the hundreds. As the military historian Victor Davis Hanson summarized what had transpired in Iraq:
In a span of about three weeks, the United States military overran a country the size of California. It utterly obliterated Saddam Hussein's military hardware . . . and tore apart his armies. Of the approximately 110 American deaths in the course of the hostilities, fully a fourth occurred as a result of accidents, friendly fire, or peacekeeping mishaps rather than at the hands of enemy soldiers. The extraordinarily low ratio of total American casualties per number of U.S. soldiers deployed . . . is almost unmatched in modern military history.
True, the aftermath of major military operations, especially in Iraq, turned out to be rougher than the Pentagon seems to have expected. Thanks to the guerrilla insurgency mounted by a coalition of intransigent Saddam loyalists, radical Shiite militias, and terrorists imported from Iran and Syria, American soldiers continued to be killed. Nevertheless, by any historical standard—the more than 6,500 who died on D-Day alone in World War II, to cite only one example—our total losses remained amazingly low.
But it was not military matters that aroused the equally sour skepticism of the realists. Their doubts centered, rather, on the issue of whether the Bush Doctrine was politically viable. Most of all, they questioned the idea that democratization represented the best and perhaps even the only way to defeat militant Islam and the terrorism it was using as its main weapon against us. Bush had placed his bet on a belief in the universality of the desire for freedom and the prosperity that freedom brought with it. But what if he was wrong? What if the Middle East was incapable of democratization? What if the peoples of that region did not wish to be as free and as prosperous as we were? And what if Islam as a religion was by its very nature incompatible with democracy?
These were hard questions about which reasonable men could and did differ. But those of us who backed Bush's bet had our own set of doubts about the doubts of the realists. They seemed to forget that the Middle East of today had not been created by Allah in the 7th century, and that the miserable despotisms there had not evolved through some inexorable historical process powered entirely by internal cultural forces. Instead, the states in question had all been conjured into existence less than a hundred years ago out of the ruins of the defeated Ottoman empire in World War I. Their boundaries were drawn by the victorious British and French with the stroke of an often arbitrary pen, and their hapless peoples were handed over in due course to one tyrant after another.
Mindful of this history, we backers of the Bush Doctrine wondered why it should have been taken as axiomatic that these states would and/or should last forever in their present forms, and why the political configuration of the Middle East should be eternally immune from the democratizing forces that had been sweeping the rest of the world.
And we wondered, too, whether it could really be true that Muslims were so different from most of their fellow human beings that they liked being pushed around and repressed and beaten and killed by thugs—even if the thugs wore clerical garb or went around quoting from the Quran. We wondered whether Muslims really preferred being poor and hungry and ill-housed to enjoying the comforts and conveniences that we in the West took so totally for granted that we no longer remembered to be grateful for them. And we wondered why, if all this were the case, there had been so great an outburst of relief and happiness among the people of Kabul after we drove out their Taliban oppressors.
Yes, came the response, but what about the people of Iraq? Most supporters of the invasion—myself included—had predicted that we would be greeted there with flowers and cheers; yet our troops encountered car bombs and hatred. Nevertheless, and contrary to the impression created by the media, survey after survey demonstrated that the vast majority of Iraqis did welcome us, and were happy to be liberated from the murderous tyranny under which they had lived for so long under Saddam Hussein. The hatred and the car bombs came from the same breed of jihadists who had attacked us on 9/11, and who, unlike the skeptics in our own country, were afraid that we were actually succeeding in democratizing Iraq. Indeed, this was the very warning sent by the terrorist leader Abu Musab al Zarqawi to the remnants of al Qaeda still hunkered down in the caves of Afghanistan: “Democracy is coming, and there will be no excuse thereafter [for terrorism in Iraq].”
Speaking for many of his fellow realists, Fareed Zakaria of Newsweek disagreed with al Zarqawi that democracy was coming to Iraq and contended that it was premature to try establishing it there or anywhere else in the Middle East:
We do not seek democracy in the Middle East—at least not yet. We seek first what might be called the preconditions for democracy . . . the rule of law, individual rights, private property, independent courts, the separation of church and state. . . . We should not assume that what took hundreds of years in the West can happen overnight in the Middle East.
Now, those of us who believed in the Bush Doctrine saw nothing wrong with pursuing Zakaria's agenda. But we rejected the charge—often made not only by realists like Zakaria but also by paleoconservatives like Buchanan—that our position was too “ideological” or naively “idealistic” or even “utopian.” We agreed entirely with what the President had long since contended: that the realist alternative of settling for autocratic and despotic regimes in the Middle East had neither brought the regional stability it promised nor—as 9/11 horribly demonstrated—made us safe at home. Bush had also long since given his answer to the question posed by “some who call themselves realists” as to whether “the spread of democracy in the Middle East should be any concern of ours.” It was, he affirmed in the strongest terms, a concern of ours precisely because democratization would make us more secure, and he accused the realists of having “lost contact with a fundamental reality” on this point. In this respect, I would argue, Bush was adopting a course akin to the one taken by the Marshall Plan, which had simultaneously served American interests and benefited others. Like the Marshall Plan, his new policy was a synthesis of realism and idealism: a case of doing well by doing good.
Those of us who supported the new policy also took issue with the view that democracy and capitalism could grow only in a soil that had been cultivated for centuries. We reminded the realists that in the aftermath of World War II, the United States managed within a single decade to transform both Nazi Germany and imperial Japan into capitalist democracies. And in the aftermath of the defeat of Communism in World War III, a similar process began under its own steam in Central and Eastern Europe, and even in the old heartland of the evil empire itself. Why not the Islamic world? The realist answer was that things were different there. To which our answer was that things were different everywhere, and a thousand reasons to expect the failure of any enterprise could always be conjured up to discourage making an ambitious effort.
To this, in turn, the counter frequently was that the Bush administration had wildly underestimated the special difficulties of democratizing Iraq and had correlatively misjudged the time so great a transformation would take, even assuming it to be possible at all. Yet talk about a “cakewalk” and the like mainly came from outside the administration; and in any event it had been applied to the future military campaign (which definitely did turn out to be a cakewalk), not to the ensuing reconstruction of Iraq. As to the latter, the administration kept repeating that we would stay in Iraq “for as long as it takes and not a day longer.” How long would that be? For those who opposed the Bush Doctrine, a year (or even a month?) after the end of major combat operations was already much too much; for those of us who supported it, “as long as it takes and not a day longer” still seemed, given the stakes, the only satisfactory formula.
As with democratization, so with the reform and modernization of Islam. In considering this even more difficult question, we found ourselves asking whether Islam could really go on for all eternity resisting the kind of reformation and modernization that had begun within Christianity and Judaism in the early modern period. Not that we were so naive as to imagine that Islam could be reformed overnight, or from the outside. In its heyday, Islam was able to impose itself on large parts of the world by the sword; there was no chance today of an inverse instant transformation of Islam by the force of American arms.
There was, however, a very good chance that a clearing of the ground, and a sowing of the seeds out of which new political, economic, and social conditions could grow, would gradually give rise to correlative religious pressures from within. Such pressures would take the form of an ultimately irresistible demand on theologians and clerics to find warrants in the Quran and the sharia under which it would be possible to remain a good Muslim while enjoying the blessings of decent government, and even of political and economic liberty. In this way a course might finally be set toward the reform and modernization of the Islamic religion itself.
The Democrats of 2004
What I have been trying to say is that the obstacles to a benevolent transformation of the Middle East—whether military, political, or religious—are not insuperable. In the long run they can be overcome, and there can be no question that we possess the power and the means and the resources to work toward their overcoming. But do we have the skills and the stomach to do what will be required? Can we in our present condition play even so limited and so benign an imperial role as we did in occupying Germany and Japan after World War II?
Some of our critics on the European Right sneer at us not, as the Left does, for being imperialists but for being such clumsy ones—for lacking the political dexterity to oversee the emergence of successor governments more amenable to reform and modernization than the despotisms now in place. I confess that I am prey to anxieties about our capabilities, and to others stemming from our character as a nation. And in thinking about our long record of inattention and passivity toward terrorism before 9/11, I fear a relapse into appeasement, diplomatic evasion, and ineffectual damage control.
Anxieties and fears like these were given a great boost by the attacks on the Bush Doctrine that became so poisonous in the 2004 presidential primary campaigns of the Democratic party. I have already told of my early apprehensions about the potential spread of the antiwar movement from the margins to the center, and my subsequent amazement in watching it go so far so fast. Whereas it took twelve years for the radicals I addressed in that drafty union hall in 1960 to capture the Democratic party behind George McGovern, their political and spiritual heirs of 2001 seemed to be pulling off the same trick in less than two. This time their leader of choice was the raucously antiwar Howard Dean. Though he eventually failed to win the nomination, his early successes frightened most of the relatively moderate candidates into a sharp leftward turn on Iraq, and drove out the few who supported the campaign there. As for John Kerry, in order to win the nomination, he had to disavow the vote he had cast authorizing the President to use force against Saddam Hussein.
To make matters worse, the campaign to discredit the action in Iraq moved from the hustings into the halls of Congress, where it wore the camouflage of a series of allegedly nonpartisan hearings. In these hearings, the most prominent of which was held by the Senate Intelligence Committee, high officials of the Bush administration were hectored by Democratic legislators (and even a few Republicans) in terms that often came close to sounding like the many articles and books in circulation that were accusing the President of having lied to us in going after Saddam Hussein. This was no slow process of trickle-down; this was an instantaneous inundation of the whole political landscape.
Among the lies through which Bush supposedly misled John Kerry and everyone else was that there might have been some connection between Saddam and al Qaeda. Now, even those of us who believed in such a connection were willing to admit that the evidence was not (yet) definitive; but this was a far cry from denying that there was any basis for it at all.10 So far a cry that, according to the reports that would be issued by both the Senate Intelligence Committee and the 9/11 Commission in the summer of 2004 (and contrary to how their conclusions would be interpreted in the media), al Qaeda did in fact have a cooperative, if informal, relationship with Iraqi agents working under Saddam.11
It was the same with another of the lies Bush allegedly told to justify the invasion of Iraq. In his State of the Union address of 2003, he said that “The British government has learned that Saddam Hussein recently sought significant quantities of uranium from Africa.” Then an obscure retired diplomat named Joseph C. Wilson IV, who had earlier been sent to Niger by the CIA to check out this claim, earned his 15 minutes of fame—not to mention a best-selling book—by loudly denouncing this assertion as a lie. But it would in due course be established that every one of the notorious “sixteen words” Bush had uttered was true. This was the consensus of the Senate Intelligence Committee report, two separate British investigations, and a variety of European intelligence agencies, including even the French.12 Not only that, but it turned out that Wilson's own report to the CIA had tended to confirm the suspicion that Saddam had been shopping for uranium in Africa, and not, as he went around declaring, to debunk it.13 The liar here, then, was not Bush but Wilson.
But of course the biggest lie Bush was charged with telling was that Saddam possessed weapons of mass destruction. On this issue, too, those of us who still suspected that the WMD remained hidden, or that they had been shipped to Syria, or both, were willing to admit that we might well be wrong. But how could Bush have been lying when every intelligence agency in every country in the world was convinced that Saddam maintained an arsenal of such weapons? And how could Bush have “hyped” or exaggerated the reports he was given by our own intelligence agencies when the director of the CIA himself described the case as a “slam dunk”?
To be sure, again according to the Senate Intelligence Committee report, the case, far from being a “slam dunk,” actually rested on weak or faulty evidence. Yet the committee itself “did not find any evidence that administration officials attempted to coerce, influence, or pressure analysts to change their judgments related to Iraq's weapons of mass destruction capabilities.” The CIA, that is, did not tell the President what it thought he wanted to hear. It told him what it thought it knew; and what it told him, he had every reason to believe.14
In the wake of the WMD issue, several others emerged that did even more to shake the confidence of some who had been enthusiastic supporters of the operation in Iraq. On top of the mounting number of American soldiers being killed as they were trying to bring security to Iraq, and on the heels of the horrendous episode of the murder and desecration of the bodies of four American contractors in Falluja, came the revelation that Iraqi prisoners in Abu Ghraib had been subjected to ugly mistreatment by their American captors.
Among supporters of the Bush Doctrine, these setbacks set off a great wave of defeatist gloom that was deepened by the nervous tactical shifts they produced in our military planners (such as the decision to hold back from cleaning out the terrorist militias hiding in and behind holy places in Falluja and Najaf). Even the formerly unshakable Fouad Ajami was shaken. In a piece entitled “Iraq May Survive, But the Dream is Dead,” he wrote: “Let's face it: Iraq is not going to be America's showcase in the Arab-Muslim world.”
That the antiwar party would batten on all this—and would continue ignoring the enormous progress we had made in the reconstruction of Iraqi society—was only to be expected. It was also only natural for the Democrats to take as much political advantage of the setbacks as they could. But it was not necessarily to be expected that the Democrats would seize just as eagerly as the radicals upon every piece of bad news as another weapon in the war against the war. Nor was it necessarily to be expected that mainstream Democratic politicians would go so far off the intellectual and moral rails as to compare the harassment and humiliation of the prisoners in Abu Ghraib—none of whom, so far as anyone then knew, was even maimed, let alone killed—to the horrendous torturing and murdering that had gone on in that same prison under Saddam Hussein or, even more outlandishly, to the Soviet gulag in which many millions of prisoners died.
Yet this was what Edward M. Kennedy did on the floor of the Senate, where he declared that the torture chamber of Saddam Hussein had been reopened “under new management—U.S. management,” and this was what Al Gore did when he accused Bush of “establishing an American gulag.” Joining with the politicians was the main financial backer of the Democratic party's presidential campaign, George Soros, who actually said that Abu Ghraib was even worse than the attack of 9/11. On the platform with Soros when he made this morally disgusting statement was Senator Hillary Rodham Clinton, who let it go by without a peep of protest.
Equally ignominious was the response of mainstream Democrats to the most effective demagogic exfoliation of the antiwar radicals, Michael Moore's film Fahrenheit 9/11. Shortly after 9/11—that is, long before the appearance of this movie but with many of its charges against Bush already on vivid display in Moore's public statements about Afghanistan—one liberal commentator had described him as a “well-known crank, regarded with considerable distaste even on the Left.” The same commentator (shades of how the “jackal bins” of yore were regarded) had also dismissed as “preposterous” the idea that Moore's views “represent a significant body of antiwar opinion.” Lending a measure of plausibility to this assessment was the fact that Moore elicited a few boos when, in accepting an Academy Award for Bowling for Columbine in 2003, he declared:
We live in the time where we have fictitious election results that elect a fictitious president. We live in a time where we have a man sending us to war for fictitious reasons. . . . [W]e are against this war, Mr. Bush. Shame on you, Mr. Bush, shame on you.
By 2004, however, when Fahrenheit 9/11 came out, things had changed. True, this movie—a compendium of every scurrility ever hurled at George W. Bush, and a few new ones besides, all gleefully stitched together in the best conspiratorial traditions of the “paranoid style in American politics”—did manage to embarrass even several liberal commentators. One of them described the film as a product of the “loony Left,” and feared that its extremism might discredit the “legitimate” case against Bush and the war. Yet in an amazing reversal of the normal pattern in the distribution of prudence, such fears of extremism were more pronounced among liberal pundits than among mainstream Democratic politicians.
Thus, so many leading Democrats flocked to a screening of Fahrenheit 9/11 in Washington that (as the columnist Mark Steyn quipped) the business of Congress had to be put on hold; and when the screening was over, nary a dissonant boo disturbed the harmony of the ensuing ovation. The chairman of the Democratic National Committee, Terry McAuliffe, pronounced the film “very powerful, much more powerful than I thought it would be.” Then, when asked by CNN whether he thought “the movie was essentially fair and factually based,” McAuliffe answered, “I do. . . . Clearly the movie makes it clear that George Bush is not fit to be President of this country.” Senator Tom Harkin of Iowa seconded McAuliffe and urged all Americans to see the film: “It's important for the American people to understand what has gone on before, what led us to this point, and to see it sort of in this unvarnished presentation by Michael Moore.”
Possibly some of the other important Democrats who attended the screening—including Senators Tom Daschle, Max Baucus, Barbara Boxer, and Bill Nelson; Congressmen Charles Rangel, Henry Waxman, and Jim McDermott; and elders of the party like Arthur Schlesinger, Jr. and Theodore Sorensen—disagreed with Harkin and McAuliffe. But if so, they remained remarkably quiet about it.
As for John Kerry himself, he did not take time out to see Fahrenheit 9/11, explaining that there was no need since he had “lived it.”
2004 and 1952
Returning now to the gloom that afflicted supporters of the Bush Doctrine in the spring of 2004: one of the reasons Fouad Ajami gave for it was that “our enemies have taken our measure; they have taken stock of our national discord over the war.” Emboldened by our restraint in Falluja and elsewhere within Iraq, as well as by our concomitant willingness to bring the UN back into the political picture, our enemies had begun to breathe easier—and not only in Iraq:
Once the administration talked of a “Greater Middle East” where the “deficits” of freedom, knowledge, and women's empowerment would be tackled, where our power would be used to erode the entrenched despotisms in the Arab-Muslim world.
But now, Ajami lamented, it had become clear that “we shall not chase the Syrian dictator to a spider hole, nor will we sack the Iranian theocracy.” There were even indications that, abandoning the dream of democracy altogether, we might settle for the rule of a “strong man” in Iraq.
But how accurate was the measure our enemies had taken of us? Was it possible that their gauge was being thrown off by the overheated atmosphere of a more than usually bitter presidential campaign, and by the caution George Bush felt it necessary to adopt in seeking reelection?
This seemed to me then, and it still seems to me now, the most decisive question of all. I therefore want to conclude by examining it, and I want to do so by returning to the analogy I drew earlier between the start of World War III in 1947 and the start of World War IV in 2001.
When the Truman Doctrine was enunciated in 1947, it was attacked from several different directions. On the Right, there were the isolationists who—after being sidelined by World War II—had made something of a comeback in the Republican party under the leadership of Senator Robert Taft. Their complaint was that Truman had committed the United States to endless interventions that had no clear bearing on our national interest. But there was also another faction on the Right that denounced containment not as recklessly ambitious but as too timid. This group was still small, but within the next few years it would find spokesmen in Republican political figures like Richard Nixon and John Foster Dulles and conservative intellectuals like William F. Buckley, Jr. and James Burnham.
At the other end of the political spectrum, there were the Communists and their “liberal” fellow travelers who—strengthened by our alliance with the Soviet Union in World War II—had emerged as a relatively sizable group and would soon form a new political party behind Henry Wallace. In their view, the Soviets had more cause to defend themselves against us than we had to defend ourselves against them, and it was Truman, not Stalin, who posed the greater danger to “free peoples everywhere.” But criticism also came from the political center, as represented by Walter Lippmann, the most influential and most prestigious commentator of the period. Lippmann argued that Truman had sounded “the tocsin of an ideological crusade” that was nothing less than messianic in its scope.
In the election of 1948, Truman had the seemingly impossible task of confronting all three of these challenges (and a few others as well). When, against what every poll had predicted, he succeeded in warding them off, he could reasonably claim a mandate for his foreign policy. And so it came about that, under the aegis of the Truman Doctrine, American troops were sent off in 1950 to fight in Korea. “What a nation can do or must do,” Truman would later write, “begins with the willingness and the ability of its people to shoulder the burden,” and Truman was rightly confident that the American people were willing to shoulder the burden of Korea.
Even so, enough bitter opposition remained within and around the Republican party to leave it uncertain as to whether containment was an American policy or only the policy of the Democrats. This uncertainty was exacerbated by the presidential election of 1952, when the Republicans behind Dwight D. Eisenhower ran against Truman's hand-picked successor Adlai Stevenson in a campaign featuring strident attacks on the Truman Doctrine by Eisenhower's running mate Richard Nixon and his future Secretary of State John Foster Dulles. Nixon, for example, mocked Stevenson as a graduate of the “Cowardly College of Communist Containment” run by Truman's Secretary of State Dean Acheson, while Dulles repeatedly called for ditching containment in favor of a policy of “rollback” and “liberation.” And both Nixon and Dulles strongly signaled their endorsement of General Douglas MacArthur's insistence that Truman was wrong to settle for holding the line in Korea instead of going all the way—or, as MacArthur had famously put it, “There is no substitute for victory.”
Yet when Eisenhower came into office, he hardly touched a hair on the head of the Truman Doctrine. Far from adopting a bolder and more aggressive strategy, the new President ended the Korean war on the basis of the status quo ante—in other words, precisely on the terms of containment. Even more telling was Eisenhower's refusal three years later to intervene when the Hungarians, apparently encouraged by the rhetoric of liberation still being employed in the broadcasts of Radio Free Europe, rose up in revolt against their Soviet masters. For better or worse, this finally dispelled any lingering doubt as to whether containment was the policy just of the Democratic party. With full bipartisan support behind it, the Truman Doctrine had become the official policy of the United States of America.
The analogy is obviously not perfect, but the resemblances between the political battles of 1952 and those of 2004 are striking enough to help us in thinking about what a few moments ago I called the most decisive of all the questions now facing the United States. To frame the question in slightly different terms from the ones I originally used: what will happen if the Democrats behind John Kerry defeat George W. Bush in November? Will they follow through on their violent denunciations of Bush's policy, or will they, like the Republicans of 1952 with respect to Korea, quietly forget their campaign promises of reliance on the UN and the Europeans, and continue on much the same course as Bush has followed in Iraq? And looking beyond Iraq itself, will they do unto the Bush Doctrine as the Republicans of 1952 did unto the Truman Doctrine? Will they treat Iraq as only one battle in the larger war—World War IV—into which 9/11 plunged us? Will they resolve to go on fighting that war with the strategy adumbrated by the Bush Doctrine, and for as long as it may take to win it?
From the way the Democrats have been acting and speaking, I fear that the answer is no. Nor was I reassured by the flamboyant display of hawkishness they put on at their national convention in July. Yet as a passionate supporter of the Bush Doctrine I pray that I am wrong about this. If John Kerry should become our next President, and he may, it would be a great calamity if he were to abandon the Bush Doctrine in favor of the law-enforcement approach through which we dealt so ineffectually with terrorism before 9/11, while leaving the rest to those weakest of reeds, the UN and the Europeans. No matter how he might dress up such a shift, it would—rightly—be interpreted by our enemies as a craven retreat, and dire consequences would ensue. Once again the despotisms of the Middle East would feel free to offer sanctuary and launching pads to Islamic terrorists; once again these terrorists would have the confidence to attack us—and this time on an infinitely greater scale than before.
If, however, the victorious Democrats were quietly to recognize that our salvation will come neither from the Europeans nor from the UN, and if they were to accept that the Bush Doctrine represents the only adequate response to the great threat that was literally brought home to us on 9/11, then our enemies would no longer be emboldened—certainly not to the extent they have recently been—by “our national discord over the war.”
In World War III, despite the bipartisan consensus that became apparent after 1952 (and contrary to the roseate reminiscences of how it was then), plenty of “discord” remained, and there were plenty of missteps—most notably involving Vietnam—along the way to victory. There were also moments when it looked as though we were losing, and when our enemies seemed so strong that the best we could do was in effect to sue for a negotiated peace.
Now, with World War IV barely begun, a similar dynamic is already at work. In World War III, we as a nation persisted in spite of the inevitable setbacks and mistakes and the defeatism they generated, until, in the end, we won. To us the reward of victory was the elimination of a military, political, and ideological threat. To the people living both within the Soviet Union itself and in its East European empire, it brought liberation from a totalitarian tyranny. Admittedly, liberation did not mean that everything immediately came up roses, but it would be foolish to contend that nothing changed for the better when Communism landed on the very ash heap of history that Marx had predicted would be the final resting place of capitalism.
Suppose that we hang in long enough to carry World War IV to a comparably successful conclusion. What will victory mean this time around? Well, to us it will mean the elimination of another, and in some respects greater, threat to our safety and security. But because that threat cannot be eliminated without “draining the swamps” in which it breeds, victory will also entail the liberation of another group of countries from another species of totalitarian tyranny. As we can already see from Afghanistan and Iraq, liberation will no more result in the overnight establishment of ideal conditions in the Middle East than it has done in Eastern Europe. But as we can also see from Afghanistan and Iraq, better things will immediately happen, and a genuine opportunity will be opened up for even better things to come.
The memory of how it was toward the end of World War III suggests another intriguing parallel with how it is now in the early days of World War IV. We have learned from the testimony of former officials of the Soviet Union that, unlike the elites here, who heaped scorn on Ronald Reagan's idea that a viable system of missile defense could be built, the Russians (including their best scientists) had no doubt that the United States could and would succeed in creating such a system and that this would do them in. Today the same kind of scorn is heaped by the same kind of people on George W. Bush's idea that the Middle East can be democratized, while our enemies in the region—like the Russians with respect to “Star Wars”—believe that we are actually succeeding.
One indication is the warning to this effect issued by al Zarqawi to al Qaeda, from which I have already quoted. But his letter is not the only sign that the secular despots and the Islamofascists in the Middle East are deeply worried over what the Bush Doctrine holds in store for them. There is Libya's Qaddafi, who has admitted that it was his anxiety about “being next” that induced him to give up his nuclear program. And there are the Syrians and the Iranians. Of course they keep making defiant noises and they keep trying to create as much trouble for us as possible, but with all due respect to the disappointed expectations of Fouad Ajami, I have to ask: why would they be sending jihadists and weapons into Iraq if not in a desperate last-ditch campaign to derail a process whose prospects are in their judgment only too fair and whose repercussions they fear are only too likely to send them flying?
This fear may, as Ajami says, have been tempered by our response to the troubles they themselves have been causing us. But it cannot have been altogether assuaged, since it is solidly grounded in the new geostrategic realities in their region that have been created under the aegis of the Bush Doctrine. Professor Haim Harari, a former president of the Weizmann Institute, describes these realities succinctly:
Now that Afghanistan, Iraq, and Libya are out, two-and-a-half terrorist states remain: Iran, Syria, and Lebanon, the latter being a Syrian colony. . . . As a result of the conquest of Afghanistan and Iraq, both Iran and Syria are now totally surrounded by territories unfriendly to them. Iran is encircled by Afghanistan, by the Gulf States, Iraq, and the Muslim republics of the former Soviet Union. Syria is surrounded by Turkey, Iraq, Jordan, and Israel. This is a significant strategic change and it applies strong pressure on the terrorist countries. It is not surprising that Iran is so active in trying to incite a Shiite uprising in Iraq. I do not know if the American plan was actually to encircle both Iran and Syria, but that is the resulting situation.
Finally, there is the effect the Bush Doctrine has had on the forces pushing for liberalization throughout the Middle East. When Ronald Reagan used the word “evil” in speaking of the Soviet Union, and even confidently predicted its demise, he gave new hope to democratic dissidents in and out of the gulag. Back then, very much like Ajami on Bush, some of us fell into near despair when Reagan failed to act in full accordance with his own convictions. When, for example, he responded tepidly to the great Polish crisis of 1981 that culminated in the imposition of martial law, the columnist George F. Will, one of his staunchest supporters, angrily declared that the administration headed by Reagan “loved commerce more than it loathed Communism,” and I wrote an article expressing “anguish” over his foreign policy. Yet even though (once more like Ajami today) our criticisms were mostly right in detail, we were proved wondrously wrong about the eventual outcome. It was different with the dissidents behind the Iron Curtain. They knew better than to get stuck on tactical details, and they never once lost heart.
So it has been with the Bush Doctrine. Bush has made reform and democratization the talk of the entire Middle East. Where before there was only silence, now there are countless articles and speeches and conferences, and even sermons, dedicated to the cause of political and religious liberalization and exploring ways to bring it about. Like the dissidents behind the Iron Curtain in the 1980's, the democratizers in the Middle East today evidently remain undiscouraged. Falluja and the rest notwithstanding, there has been, if anything, a steady increase in the volume and range of the reformist talk that was and continues to be inspired by the Bush Doctrine.15
I do not wish to exaggerate. Except in Iran, and perhaps also one or two other non-Arab Muslim states, the democratizers are still a relatively small group, and as yet their ranks seem to contain no one comparable in intellectual stature or moral and political influence to Sakharov or Solzhenitsyn or Sharansky. But the editor of the Middle East Review of International Affairs, Barry Rubin, who has generally been very skeptical about the chances for democratization in the region, offers a cautious assessment that seems reasonable to me:
Democracy and reform are on the Arab world's agenda. It will be a long, uphill fight to bring change to those countries, but at least a process has begun. Liberals remain few and weak; the dictatorships are strong and the Islamist threat will discourage openness or innovation. Still, at least there are more people trying to move things in the right direction.
To which I (though not Rubin) would add, thanks to George W. Bush.
Then there is Gaza, where at least some elements of the fabled Palestinian street have for the very first time exploded with denunciations not of Israel or the United States, but of Yasir Arafat's tyrannical and corrupt rule. For the first time, too, we find articles in the Arab press calling for Arafat's removal—in favor not of the Islamist alternative represented by Hamas but of a different kind of leadership.
Here, for example, is the Jordan Times:
The rapid deterioration of the domestic political order in Gaza mirrors similar dilemmas that plague most of the Arab world, revolving around the tendency of small power elites or single men to monopolize political and economic power in their hands via their direct, personal control of domestic security and police systems. Gaza is yet another warning about the failure of the modern Arab security state and the need for a better brand of statehood based on law-based citizen rights rather than gun-based regime protection and perpetual incumbency.
And here is the Arab Times of Kuwait:
Arafat should quit his position because he is the head of a corrupt authority. Arafat has destroyed Palestine. He has led it to terrorism, death, and a hopeless situation.
And there is this, from the Gulf News in Dubai:
Palestinians are saying their president for life—Arafat—is the problem along with his cronies who rule them, rob them, and impoverish them. Arabs have a responsibility here too. They can say “Israel” until they are all blue in the face, but it does not change the fact that a large part of the fault lies with the Palestinians and the Arabs.
According to a Palestinian legislator quoted by the Washington Post, “what is happening in the streets of Gaza has [nothing] to do with reform. It's a simple power struggle.” By contrast, the Iranian-born commentator Amir Taheri sees it as a new kind of “intifada aimed at bringing down yet another Arab tyranny.” Chances are that there is some truth in both of these opposing judgments, and in any event it is still too early to tell how the turmoil in Gaza will play itself out. But it is surely not too early to say that there would have been no uprising against Arafat, and much less talk about reform, if not for George W. Bush's policies combined with his courageous willingness to back those of Ariel Sharon.
In his first State of the Union address, President Bush affirmed that history had called America to action, and that it was both “our responsibility and our privilege to fight freedom's fight”—a fight he also characterized as “a unique opportunity for us to seize.” Only last May, he reminded us that “We did not seek this war on terror,” but, having been sought out by it, we responded, and now we were trying to meet the “great demands” that “history has placed on our country.”
In this language, and especially in the repeated references to history, we can hear an echo of the concluding paragraphs of George F. Kennan's “X” essay, written at the outbreak of World War III:
The issue of Soviet-American relations is in essence a test of the overall worth of the United States as a nation among nations. To avoid destruction the United States need only measure up to its own best traditions and prove itself worthy of preservation as a great nation.
Kennan then went on to his peroration:
In the light of these circumstances, the thoughtful observer of Russian-American relations will experience a certain gratitude for a Providence which, by providing the American people with this implacable challenge, has made their entire security as a nation dependent on their pulling themselves together and accepting the responsibilities of moral and political leadership that history plainly intended them to bear.
Substitute “Islamic terrorism” for “Russian-American relations,” and every other word of this magnificent statement applies to us as a nation today. In 1947, we accepted the responsibilities of moral and political leadership that history “plainly intended” us to bear, and for the next 42 years we acted on them. We may not always have acted on them wisely or well, and we often did so only after much kicking and screaming. But act on them we did. We thereby ensured our own “preservation as a great nation,” while also bringing a better life to millions upon millions of people in a major region of the world.
Now “our entire security as a nation”—including, to a greater extent than in 1947, our physical security—once more depends on whether we are ready and willing to accept and act upon the responsibilities of moral and political leadership that history has yet again so squarely placed upon our shoulders. Are we ready? Are we willing? I think we are, but the jury is still out, and will not return a final verdict until well after the election of 2004.
—August 2, 2004
1 “How to Win World War IV” (February 2002), “The Return of the Jackal Bins” (April 2002), and “In Praise of the Bush Doctrine” (September 2002). A fourth piece I used was “Israel Isn't the Issue” (Wall Street Journal, September 20, 2001).
2 He did, however, seem to have committed a sin of omission. Richard Lowry, the editor of National Review, reports that according to John Lehman, one of the Republican commissioners, “Clarke's original testimony included ‘a searing indictment of some Clinton officials and Clinton policies.’ That was the Clarke, even-handed in his criticisms of both the Bush and Clinton administrations, whom Lehman and other Republican commissioners expected to show up at the public hearings. It was a surprise ‘that he would come out against Bush that way.’ Republicans were taken aback: ‘It caught us flat-footed, but not the Democrats.’ ” In a different though related context, the commission quotes material written by Clarke while he was still in office that is inconsistent with his more recent, much-publicized denial of any relationship whatsoever between Iraq and al Qaeda.
3 Hill was referring here to the hearings of the 9/11 commission, not its final report, which did not single out the Bush administration for criticism on this score.
4 The analysis offered by Kennan in “The Sources of Soviet Conduct”—as against his own later revisionist interpretation of it—turned out to be right in almost every important detail, except for the timing. He thought it would take only fifteen years for the strategy to succeed in causing the “implosion” of the Soviet empire.
5 In expressing his determination to win the war, however, Bush was mainly reaching back to the language of Winston Churchill, who vowed as World War II was getting under way in 1940: “We shall not flag or fail. We shall go on to the end.”
6 It is worth noting that Churchill, who had been the target of many derogatory epithets in his long career but who was never regarded even by his worst enemies as “simple-minded,” had no hesitation in attaching a phrase like “monster of wickedness” to Hitler. Nor did the political philosopher Hannah Arendt, whose mind was, if anything, overcomplicated rather than too simple, have any problem in her masterpiece, The Origins of Totalitarianism, with calling both Nazism and Communism “absolute evil.”
7 Fukuyama did not return the compliment. While not exactly rejecting the Bush Doctrine, he would later criticize it and call for a “recalibration.” He would do this more in sorrow than in anger, but still in terms that were otherwise not always easy to distinguish from those of what I characterize below as the respectable opposition.
8 As John Podhoretz would later write: “Those who supported the war, in overwhelming numbers, believed there were multiple justifications for it. Those who opposed and oppose it, in equally overwhelming numbers, weren't swayed by the WMD arguments. Indeed, many of them had no difficulty opposing the war while believing that Saddam possessed vast quantities of such weapons. Take Sen. Edward Kennedy. ‘We have known for many years,’ he said in September 2002, ‘that Saddam Hussein is seeking and developing weapons of mass destruction.’ And yet only a few weeks later he was one of 23 senators who voted against authorizing the Iraq war. Take French President Jacques Chirac, who believed Saddam had WMD and still did everything in his power to block the war. So whether policymakers supported or opposed the war effort was not determined by their conviction about the presence of weapons of mass destruction.”
9 The classic expression of this fantasy was, of course, The Protocols of the Elders of Zion, a document that had been forged by the Czarist secret police in the late 19th century but that had more recently been resurrected and distributed by the millions throughout the Arab-Muslim world, and beyond. It would also form the basis of a dramatic television series produced in Egypt.
10 Stephen F. Hayes has done especially good work on this issue, both in a series of articles in the Weekly Standard and in his book The Connection: How al Qaeda's Collaboration with Saddam Hussein Has Endangered America.
11 Additional corroboration of “meetings . . . between senior Iraqi representatives and senior al Qaeda operatives” would come from a comparable British investigation conducted by Lord Butler, whose report would be released around the same time as the Senate Intelligence Committee’s.
12 From the Butler Report: “We conclude also that the statement in President Bush's State of the Union Address of 28 January 2003 that ‘The British Government has learned that Saddam Hussein recently sought significant quantities of uranium from Africa’ was well-founded.”
13 From the Senate Intelligence Committee Report: “He [the CIA reports officer] said he judged that the most important fact in the report [by Wilson] was that Nigerien officials admitted that the Iraqi delegation had traveled there in 1999, and that the Nigerien prime minister believed the Iraqis were interested in purchasing uranium, because this provided some confirmation of foreign government service reporting.”
14 Going even further than the Senate Intelligence Committee, the Butler Report concluded: “We believe that it would be a rash person who asserted at this stage that evidence of Iraqi possession of stocks of biological or chemical agents, or even of banned missiles, does not exist or will never be found.”
15 A representative sample can be found on the website of the Middle East Media Research Institute (http://www.memri.org/reform.html).
Must-Reads from Magazine
Of all the surprises of the Trump era, none is more notable than the pronounced shift toward Israel. Such a shift was not predictable from Donald Trump’s conduct on the campaign trail; as he sought the Republican nomination, Trump distinguished himself by his refusal to express unqualified support for Israel and his airy conviction that his business experience gave him unique insight into how to strike “a real-estate deal” to resolve the Israeli–Palestinian conflict. In addition, his isolationist talk alarmed Israel’s friends in the United States and elsewhere if for no other reason than that isolationism, anti-Zionism, and anti-Semitism often go hand in hand in hand.
But shift he did. In the 14 months since his inauguration, the new president has announced that the United States accepts Jerusalem as Israel’s capital and has declared his intention to build a new U.S. Embassy in Jerusalem, first mandated by U.S. law in 1995. He has installed one of his Orthodox Jewish lawyers as the U.S. ambassador and another as his key envoy on Israeli–Palestinian issues. America’s ambassador to the United Nations has not only spoken out on Israel’s behalf forcefully and repeatedly; Nikki Haley has also led the way in cutting the U.S. stipend to the refugee relief agency that is an effective front for the Palestinian terror state in Gaza. And, as Meir Y. Soloveichik and Michael Medved both detail elsewhere in this issue, his vice president traveled to Israel in January and delivered the most pro-Zionist speech any major American politician has ever given.
Part of this shift can also be seen in what Trump has not done. He has not signaled, in interviews or in policy formulations, that the United States views Israeli actions in and around Gaza and the West Bank as injurious to a future peace. And his administration has not complained about Israeli actions taken in self-defense in Lebanon and Syria but has, instead, supported Israel’s right to defend itself.
This marks a breathtaking contrast with the tone and spirit of the relationship between the two countries during the previous administration. The eight Obama years were characterized by what can only be called a gut hostility rooted in the president’s own ideological distaste for the Jewish state.
The intensity of that hostility ebbed and flowed depending on circumstances, but from early 2009, it kept the relationship between the United States and Israel in a condition of low-grade fever throughout Barack Obama’s tenure—never comfortable, never easy, always a bit off-kilter, always with a bit of a headache that never went away, and always in danger of spiking into a dangerous pyrexia. That fever spike happened no fewer than five times during the Obama presidency. Although these spikes were usually portrayed as the consequences of the personal friction between Obama and Israeli Prime Minister Benjamin Netanyahu, that friction was itself the result of the ideas about the Middle East and the world in general Obama had brought with him to the White House. In this case, the political became the personal, not the other way around.
Given the general leftish direction of his foreign-policy views from college onward, it would have been a miracle had Obama felt kindly disposed toward the Jewish state’s own understanding of its tactical and strategic condition. And Netanyahu spoke out openly and forcefully to kindly disposed Americans—from evangelical Christians to congressional Republicans—about the threats to his country from nearby terrorism and rockets, and a developing nuclear Iran 900 miles away. His candor proved a perpetual irritant to a president whose opening desire was to see “daylight” (as he said in February 2009) between the two countries. Obama caused one final fever spike as he left office by refusing to veto a hostile United Nations resolution. This appeared churlish but was, in fact, Obama allowing himself the full rein of his true and long-standing convictions on his way out the door.
The things Trump both has and has not done should not seem startling. They constitute the baseline of what we ought to expect one ally would say and not say about the behavior of another ally. But as Obama’s disgraceful conduct demonstrated, Israel is not just another ally and never has been. It is a unique experiment in statehood—a Western country on Mideast soil, born from an anti-colonialist movement that is now viewed by many former colonial powers as an unjust colonial power, created by an international organization that is now largely organized as a means of expressing rage against it.
Historically, American leaders have had to reckon with these unique realities—and the fact that the hostile nations surrounding Israel and hungering for its destruction happen to sit atop the lifeblood of the industrial economy. The so-called realists who claim to view the world and the pursuit of America’s interests through cold and unsentimental eyes have experienced Israel mostly as a burden.
Through many twists and turns over the seven decades of Israel’s existence, they have felt that America’s support for Israel is mostly the result of short-sighted domestic political concerns for which they have little patience—the wishes of Jewish voters, or the religious concerns of evangelical voters, or post-Holocaust sympathy that has required (though they would never say it aloud) an unnatural suspension of our pursuit of the American national interest.
Israel created problems with oil countries, and with the United Nations, and with those who see the claims for the necessity of a Jewish state as a form of special pleading. As a result, the realists have spent the past seven decades whispering in the ears of America’s leaders that they have the right to expect Israel to do things we would not expect of another ally and to demand it behave in ways we would not demand of any other friendly country.
The realists and others have spent nearly 50 years propounding a unified-field theory of Middle East turmoil according to which many if not all of the region’s problems are the result of Israel’s existence. Were it not for Israel, there would not have been regional wars in 1956, 1967, 1973, and 1982—no matter who might have borne the greatest degree of responsibility for them. There would have been other conflicts, but not these. There would have been no world-recession-inducing oil embargo in 1973, because there would have been no Yom Kippur War to respond to. Were it not for Israel, there would be no Israeli–Palestinian problem; there would have been some other version of the problem, but not this one.
Unhappiness about the condition of the Palestinians in a world with Israel was held to be the cause of existential unhappiness on the Arab street and therefore of instability in friendly authoritarian regimes throughout the Middle East. Meanwhile, Israel’s own pursuit of what it and its voting populace took to be their national interests was usually treated with disdain at the very least and outright fury at moments of crisis.
It was therefore axiomatic that the solution to many if not most of the region’s problems ran right through the center of Jerusalem. It would take a complex process, a peace process, that would lead to a deal—a deal no one who believed in this magical process could actually describe honestly and forthrightly or give a sense as to what its final contours would be. If you could create a peace process leading to a deal, though, that deal itself would work like a bone-marrow transplant—through a mysterious process spreading new immunities to instability in the Middle East that would heal the causes of conflict and bring about a new era.
Again, this was the view of the realists. With Israel’s 70th anniversary coming hard upon us, the question one needs to ask is this: What if the realists were nothing but fantasists? What if their approach to the Middle East from the time of Israel’s founding was based in wildly unrealistic ideas and emotions? Central to their gullibility was the wild and irrational idea that peace was or ever could be the result of a process. No, peace is a condition of soul, an exhaustion from the impact of conflict, born of a desire to end hostilities. Only after this state is achieved can there be a workable process, because both parties would already have crossed the Rubicon dividing them and would only then need to work out the details of coexistence.
There was no peace to be had. The Arab states didn’t want it. The Palestinians didn’t want it. The Israelis did and do, but not at the expense of their existence. The Arabs demanded concessions, and the Israelis have made many over the years, but they could not concede the security of the millions of Israel’s citizens who had made this miracle of a country an enduring reality. The realists fetishized “process” because it seemed the only way to compel change from the outside. And so Israel has borne the brunt of the anger that follows whenever a fantasist is forced to confront a reality he would rather close his eyes to.
That is why I think what Trump and his people have done over the past 14 months represents a new and genuine realism. They are dealing with Israel and its relationships in the region as they are, not as they would wish them to be. They are seeing how the government of Egypt under Abdel Fattah el-Sisi is making common cause with Israel against the Hamas entity in Gaza and against ISIS forces in the Sinai. They are witness to the effort at radical reformation in Saudi Arabia under Muhammad bin Salman—and how that seems to be going hand in hand with an astonishing new concord between Israel and the Desert Kingdom over the common threat from Iran. This is a harmonizing of interests that would have seemed positively science-fictional in living memory.
Mostly, what they are seeing is that an ally is an ally. Israel’s intelligence agencies are providing the kind of information America cannot get on its own about Syria and Iran and the threat from ISIS. Israel is a technological powerhouse whose innovations are already helping to revolutionize American military know-how. Israel’s army is the strongest in the world apart from the regional superpowers—and the only one outside Western Europe and the United States firmly locked in alliance with the West. Things are changing radically in the Middle East, and as the 21st century progresses it is possible that Israel will play a constructive and influential role outside its borders in helping to maintain and strengthen a Pax Americana.
Donald Trump is a flighty man. All of this could change. But for now, the replacement of the false realism of the past with a new realism for the 21st century seems like a revolutionary development that needs to be taken very, very seriously.
Of the making of Washington movies, there is no end. Kohelet said this in Ecclesiastes, I think. Or maybe it was Gene Shalit on the Today Show. It’s a truism in any case. Steven Spielberg’s latest entry in the genre, The Post, is for many Washingtonians the most powerful example in the long line. When the movie opened here in late December, there were reports of audiences cheering lustily and even dissolving in tears at the movie’s end, as if they were watching a speech by President Obama. The local paper ran news articles about it, along with numberless feature stories, interviews, op-eds, fact-checks, reviews, and reviews of reviews.
Which is excusable, I guess, since the movie is about the Washington Post. But then The Post is supposed to be about so many things. It’s about the First Amendment, depicting the agonies of the Post’s editor, Ben Bradlee, and its owner, Katharine Graham, as they defy the Nixon administration to publish the top-secret Pentagon Papers. It’s about feminism and the personal evolution of Mrs. Graham from an insecure Georgetown socialite to Master of the Boardroom. It’s the story of the lonely courage of the leaker/whistleblower/traitor (your call) Daniel Ellsberg. It is also, so I read in the Post, a warning about the imperial designs of President Trump to smother a free press. And it’s been understood as a straightforward tale of political history, though the liberties Spielberg takes with his based-on-a-true-story are so extreme as to render it useless as a guide to what happened in the summer of 1971.
Running beneath it all is the motive that animates so many Washington movies: an impatience with the stuttering, halting processes of self-government. The wellspring from which the Washington movie flows is Frank Capra’s Mr. Smith Goes to Washington. The plot is familiar to everyone. Mr. Smith, a small-town bumpkin played by Jimmy Stewart—talk about stuttering and halting!—is appointed by sinister political bosses to a vacant Senate seat, on the assumption that he will be easily manipulated, like a movie audience. Instead, Smith stumbles upon an illicit land deal and exposes the Senate as a den of thieves. His filibustering floor speech rouses a populist outpouring from an army of alarmingly cute children. By the end of the movie, Mr. Smith has restored the nation to its democratic ideals.
Capra intended his movie to be a hymn to those ideals, and for nearly 80 years that’s what audiences have taken it to be. It is no such thing. Mr. Smith seethes with contempt for the raw materials of democracy: debate, quid pro quo deal-making, back-scratching compromise—all the tedious, unsightly mechanics that turn democratic ideals into functioning self-government. In Capra’s telling, democracy can be rescued only by anti-democratic means. An appointed charismatic savior (he’s not even elected!) uses a filibuster (favorite parliamentary trick of bullies and autocrats) to release the volatile pressure of a disenfranchised mob (the great fear of every democratic theorist since Aristotle). From Mr. Smith to Legally Blonde 2, the point of the Washington movie is clear: Left to its own devices, without an outside agent to penetrate it and cleanse it of its sins, self-government sinks into corruption and despotism.
Steven Spielberg is the closest thing we have to Capra’s successor. Like all his movies, The Post has many charms: a running visual joke about Bradlee’s daughter making a killing with her lemonade stand threads in and out of the heavier moments like a rope light. On the other hand, his painstaking obsession with period detail often fails: A hippie demonstration against the Vietnam War looks as if it’s been staged by the cast of Hair. The set-piece speeches are insufferable, an icky glue of sanctimony and sentimentality. What we call the Pentagon Papers was a classified history of the lies, misjudgments, and incompetence of four presidents, from Harry Truman to Lyndon Johnson, ending in 1968. Sometimes the speechifying is directed at the malfeasance of these men, as when Bradlee bellows: “The way they lied—those days have to be over!”
Weirdly, though, the full force of the movie’s indignation is aimed at Richard Nixon. Historians might point out that Nixon wasn’t even president during the period covered by the Pentagon Papers. Intelligence officials told the president that the release of the papers would pose an unprecedented threat to national security. He ordered the Justice Department to sue to prevent the New York Times and the Post from publishing the top-secret material. In the movie’s account, this ill-judged if understandable response is equivalent to the official, strategic lies that accompanied tens of thousands of American soldiers to their deaths.
A particularly rich moment comes when Robert McNamara warns Mrs. Graham about Nixon’s capacity for evil. As Kennedy and Johnson’s defense secretary, McNamara was an early version of Saturday Night Live’s Tommy Flanagan, Pathological Liar: The Viet Cong are on the run! Yeah, sure, that’s the ticket! As much as anyone, McNamara, with his stupidity and dishonesty, guaranteed the tragedy of Vietnam. And yet here he is, issuing a clarion call to Mrs. Graham. “Nixon will muster the full power of the presidency, and if there’s a way to destroy you, by God, he’ll find it!” Later Bradlee compares Nixon to his predecessors: “He’s doing the same thing!”
Um, no. From his inauguration in 1969 onward, Nixon’s every move in Vietnam was intended to extricate the U.S. from the quicksand previous presidents had led us (and him) into. In this case, if in no other, Nixon was the good guy. He had nothing to lose, personally, from the publication of the Pentagon Papers, and maybe a lot to gain. After all, they demonstrated the villainy of his predecessors, not his own. (That came later.)
Yet the movie can’t entertain the possibility that Nixon could act on anything but the basest motives. He is a sinister presence. We see him through the Oval Office window, always alone, with his back turned, stabbing the air with a pudgy finger and cursing the Washington Post to subordinates over the phone. It’s actually Nixon’s voice in the movie, taken from the infamous tapes. Unfortunately, the actor’s movements don’t synchronize with the words; in such a somber thriller, the effect is inadvertently comic. It reminded me of watching the back of George Steinbrenner’s head in Seinfeld while Larry David spoke the Yankee owner’s dialogue. And Nixon was no Steinbrenner.
The most plausible explanation is that Nixon, in trying to stop publication of the Pentagon Papers, was doing what he said he was doing: his job. American voters had elected him to protect national security and, not incidentally, the prerogative of the president and the federal government to determine how best to protect it, including determining whether sensitive information should be kept secret. If he didn’t do his job the way voters wanted him to, they could get rid of him next time. You know, like in a democracy.
Ben Bradlee, Katharine Graham, and Steven Spielberg, not to mention those teary audiences, have no patience with such niceties. As it happens, in the end, the Pentagon Papers were a bust. The sickening detail they disclosed deepened but did not broaden the historical record, and by all accounts their impact on national security was negligible. Those facts don’t alter the creepiness of The Post’s premise—that the antagonists of an elected regime are allowed to go outside the law when it suits their view of the national interest. Charismatic saviors (and few people were more charismatic than Ben Bradlee) can save democracy from itself, but only by ignoring the requirements of democracy. Spielberg continues the tradition of the Washington movie. The Post is Capraesque—in the only true sense of the word.
Is Harvard assaulting the rights of students to free association in the name of a diversity standard it doesn’t live up to itself?
Harvard College is home to six all-male “final clubs.” Their members have access to houses in which they eat, socialize, and form bonds with their fellows. These clubs are as historic as they are renowned; most were formed in the 19th century and have counted Kennedys, Roosevelts, and an endless procession of politicians, writers, and businessmen among their members. From the time of their origination, these exclusive institutions have been an object of fascination. When doors are closed, and only a small, elite group selected from an already hyper-elite campus has been invited inside, jealousy, curiosity, and frustration are sure to prevail.
The final clubs are financially independent from Harvard and have been entirely unaffiliated with the university since the 1980s, when the administration and the clubs clashed over the latter’s refusal to admit women. But that conflict, which had cooled over time, has recently resurfaced in a new and heightened manner.
In March 2016 Rakesh Khurana, the dean of Harvard College, set an April 15 deadline for the final clubs, at which time they were to inform the administration whether they would change course and become co-ed. Two forces drove Khurana’s action. The first was a report by Harvard’s Task Force on Sexual Assault Prevention released days earlier, after years of research. The report indicated that students who were involved with the final clubs were significantly more likely to have experienced some form of assault than those who were not. The second impetus was the administration’s position that the final clubs—and the ways in which they screened members—were in direct conflict with the ethos of the university.
The deadline passed without response from the clubs. On May 6, 2016, Dean Khurana wrote a letter to Harvard President Drew Faust. He proposed that, beginning with incoming freshmen who would matriculate in the fall of 2017, students who became members of what he termed “unrecognized single-gender social organizations” should be ineligible for leadership positions in Harvard organizations—meaning they could not serve as publication editors, captains of sports teams, leaders of theatrical troupes, and the like. And they would also be ineligible for letters of recommendation from the dean, necessary for many prestigious postgraduate opportunities such as the Rhodes and Marshall scholarships.
Khurana’s letter, and the sanctions proposed within, quickly became a cause célèbre. Harry R. Lewis, a professor of computer science and himself a former dean of the college, wrote Khurana a letter expressing his concern that “by asserting, for the first time, such broad authority over Harvard students’ off-campus associations, the good you may achieve will in the long run be eclipsed by the bad: a College culture of fear and anxiety about nonconformity.” Lewis went on to note:
The reliance on your judgement of what count[s] as Harvard’s values, and using that judgment to decide which students will receive institutional support, is a frightening prospect….The discretion exercised by the dean and his representatives will chill the activism of students in causes that might also be considered noncompliant with Harvard standards—for example, advocacy for a religion that does not allow women to be full participants, or a political party that opposes affirmative action. Such groups are excluded from your mandate, but only as a matter of your discretion. Why wouldn’t activism for such organizations color the support the College would offer their members, on the basis that such students are showing that their true colors are not pure Crimson?
Lewis also referenced the faculty’s responsibilities and noted that there was no precedent in Harvard’s Handbook for Students for the sanctions, thus suggesting that Khurana’s proposals might be outside the administration’s jurisdiction.
In September 2016, Khurana detailed the responsibilities of the “Single-Gender Social Organizations Implementation Committee.” The committee was tasked with
consulting broadly with the College community to address the following questions: 1) What leadership roles and endorsements are affected by the policy; 2) How organizations can transition to fulfill the expectations of inclusive membership practices; and 3) How the College should handle transgressions of the policy.
In addition to the committee’s work, the faculty went through several rounds of motions and debate, discussing myriad permutations of the sanctions, as well as the validity of the sanctions themselves.
In December 2017, the discussions came to a halt. Harvard’s administration flatly announced it would engage in sanctions against students who joined those “unrecognized single-gender social organizations,” or USGSOs. This ostensibly final decision has provoked renewed outrage from students, faculty, and alumni, who have grounded their varied objections in ethical, philosophical, and legal concerns.
Until the 1960s shattered the American elite consensus on such matters, the collegiate experience was vastly different for students. Universities used to view their role as being in loco parentis—serving in place of the parents from whom their charges had recently separated. Today, on Harvard’s enchanting campus, teenagers and twentysomethings tend to rule the roost. Students have tremendous flexibility in building their course schedules, and rare is the lecture professor who takes attendance. Undergraduates come and go as they please, to and from wherever they please, with whomever they please, from the darkest hours of the night to the earliest hours of the morning.
But from the time America’s colleges came into being in the 17th and 18th centuries until just a few decades ago, these institutions imposed rules and regulations, curtailed freedoms, and designed a microcosmic world in which young adults would—in theory—learn how to navigate the reality that awaited them after graduation. They were eased into the world in a setting that constricted their choices and where the powers that be very consciously, and intentionally, refrained from treating them like adults. This was most evident in the controls placed on contact between the sexes.
A 1989 Harvard Crimson article by Katherine E. Bliss detailed the so-called parietal rules of the 1960s. It noted that “in 1964, the primary goal of College administrators was maintaining ‘an open door and one foot on the floor’ policy for students entertaining guests of the opposite sex in their rooms.” At that time, the student body and the administration were in conflict over the right to do as they pleased in their own dorms: “Students in 1964 were concerned with lengthening the number of hours they were allowed to spend with members of the opposite sex in the privacy of their own rooms.” If this sounds quaint, consider Bliss’s next point. “Few,” she observed, “could appreciate the fact that only a decade earlier, men and women were not allowed to enter the dormitories of the opposite sex at all.”
The original parietal rules meant that the women of Radcliffe, Harvard’s sister college, could be in the Harvard Houses only between the hours of 4 and 7 p.m. Robert Watson, a Harvard dean, explained at the time: “We have to watch the mores of our students. I do not want to see Harvard play a leading role in relaxing the moral code of college youth.” Indeed, he went on to say that “the college must follow the customs of the time and the community.…We cannot have rules more liberal than a standard generally accepted by the American public.”
Is there a single standard generally accepted by the American public today? For most of the country—with exceptions in deeply religious Jewish, Christian, and Islamic communities—ours is not an age that concerns itself with the amount of time that men and women spend together in solitude. But that doesn’t mean our era isn’t concerned with the moral development of our youth. On the contrary, leaders of America’s elite institutions today are as preoccupied with strengthening the souls of their charges as were the men who designed the parietal codes all those years ago. Only their aim is not sexual purity anymore, but rather social diversity. It is the heart and soul of the moral vision of our times, and administrators today are no less determined to see that students hew to that standard. But in their effort to serve in loco parentis in this fashion, educators are leaping across ethical—and possibly, legal—lines.
The fraternity-like final clubs have always been difficult to get into, much like Harvard itself. And for many years, the all-male final clubs were certainly characterized by discrimination. In a 1965 piece for the Crimson, Herbert H. Denton Jr., then an undergraduate, noted that while “the tacit ban on Jews has been relaxed in most clubs,” the “ban on Negroes is still in effect.” The same cannot be said today; while several of the final clubs are trying to retain their character by remaining single-gender organizations, they do not screen would-be members on the basis of race or religion.
Nonetheless, the administration has determined that they espouse values and ideas contrary to the Harvard spirit and must consequently be treated as an anachronistic wrong to be extirpated. In a statement issued in December, President Faust (along with William F. Lee, senior fellow of the Harvard Corporation) declared that
the final clubs in particular are a product of another era, a time when Harvard’s student body was all male, culturally homogenous, and overwhelmingly white and affluent. Our student body today is significantly different. We self-consciously seek to admit a class that is diverse on many dimensions, including on gender, race, and socioeconomic status.
The clubs have strict rules about speaking with the press, and every member I spoke with—both former and current students—did so on the condition of anonymity. Many brought up the topic of diversity, noting that in their experience, the members of their clubs were diverse in both ethnic and socioeconomic respects. Members of multiple clubs told me about policies under which an inability to pay club dues has no bearing on whether or not a student will be accepted. Indeed, one went so far as to note that the financial-aid offer is blatantly highlighted during the initiation process, so that those lower on the socioeconomic ladder are not even temporarily burdened by the misconception that their financial status might affect their membership.
The final clubs, like Harvard itself, may indeed be a product of another era. But just as Harvard has evolved, the final clubs have changed. Faust, Lee, and all of the actors in the anti-final-clubs camp ignore this. They also espouse a position that is as illogical as it is incoherent: Faust and Lee claim both that “students may decide to join a USGSO and remain in good standing” and that “decisions often have consequences, as they do here in terms of students’ eligibility for decanal1 endorsements and leadership positions supported by institutional resources.”
Most parents would not believe that their sons and daughters were in “good standing” if they came home from campus for winter break and told them they would be unable to be editor of the newspaper, captain of the debate team, or eligible for a Rhodes or Marshall scholarship. Yet Faust and Lee insist that “the policy does not discipline or punish the students.” It merely “recognizes that students who serve as leaders of our community should exemplify the characteristics of non-discrimination and inclusivity that are so important to our campus.” It’s hard to believe that Faust and Lee might honestly think that excluding students from leadership roles or prestigious postgrad opportunities would be construed as anything other than a punishment.
So why the insistence to the contrary? If the final clubs are, in the administration’s eyes, archaic, narrow-minded, discriminatory organizations, why not come out with an honest statement that calls for disciplining the students who dare to participate in these institutions? Lewis, the former dean, has explained this by making reference to what Faust and Lee do not mention—namely, Harvard’s Statutes—the internal bylaws governing the institution. Lewis cites part of the 12th statute, which lays out that “the several faculties have authority…to inflict at their discretion, all proper means of discipline.” He notes that “by declaring that ineligibility for honors and distinctions are ‘not discipline,’ what President Faust and Mr. Lee are saying is that the Statutes are not implicated, the matter is not one for the Faculty to decide, and no Faculty vote is needed to carry out the policy.” Indeed, Lewis notes that “it is important that the…policy not be discipline, because if it were discipline, and disciplinary action were taken against a student without a Faculty vote authorizing that policy, that student could challenge the action as not properly authorized.”
There is something else the Faust-Lee statement does not reference—and tellingly. In the beginning of the Harvard administration’s war on final clubs, concerns over sexual assault seemed to form the core of the issue. The Task Force on Sexual Assault Prevention reported that 47 percent of female college seniors who were in some way involved in final clubs—either because they attend events at the male clubs, or because they themselves are members of female clubs—said they had experienced “nonconsensual sexual contact since entering college.” Since “31 percent of female Harvard seniors reported nonconsensual sexual contact since entering college,” the report said, the data proved that “a Harvard College woman is half again more likely to experience sexual assault if she is involved with a Club than the average female Harvard College senior.” But Harvard’s sexual assault survey also found that 75 percent of “incidents of nonconsensual complete and attempted penetration reported by Harvard College females” happened in…Harvard dorms.
The report is sloppy and lumps together things that are not alike. For example, the Porcellian—Harvard’s oldest final club—does not allow any nonmembers through its doors. Charles Storey, who was then the Porcellian’s graduate president, provided a statement to the Crimson in which, among other things, he claimed that the club was “being used as a scapegoat for the sexual assault problem at Harvard despite its policies to help avoid the potential for sexual assault.” The Porcellian, he said, was “mystified as to why the current administration feels that forcing our club to accept female members would reduce the incidence of sexual assault on campus.” Indeed, Storey said, “forcing single gender organizations to accept members of the opposite sex could potentially increase, not decrease the potential for sexual misconduct.”
A day later, Storey apologized for his statement. A few days after that, he resigned as the Porcellian’s graduate president. His reasoning was admittedly inelegant, as it could be interpreted to suggest that club members would be unable to restrain themselves from committing sexual assault should women enter their domain. But Storey was not incorrect in pointing out that, by definition, women could not be subjected to unwanted touching in the Porcellian clubhouse if they were not allowed inside. For a club like the Porcellian, then, where instances of male-on-female sexual assault within the house are currently nonexistent, going co-ed would inherently guarantee that the opportunity for assault would expand. And that is why it is noteworthy (Storey’s humiliation notwithstanding) that the Faust-Lee declaration eliminated the attack on the final clubs for their ostensibly heightened role in unwanted sexual conduct. And why the entirety of the case against them now rests on their failure to hew to the administration’s convictions on gender egalitarianism.
The role that final clubs play in Harvard social life has been a contentious topic for decades. The perception has long been that socially, the members of Harvard’s male final clubs have too much power. On a campus with limited space for social gathering, the final-club mansions are often the source of the college’s most sought-after nightlife. Arguments have been made consistently over time that the exclusionary practices of the clubs—they typically accept only 10 to 25 new members a year—make for unpleasant and unfair campus social dynamics. But again, this conversation is happening at Harvard, an institution that prides itself on its prestige and exclusivity, and which accepted a mere 5.2 percent of its applicants to the class of 2021.
Lewis, the former dean, is not exactly a natural ally for the clubs. He told me that he was “pretty tough with them” during his tenure, and that he was “instrumental in trying to get some of the bad behavior of some of the final clubs under control.” The issues that arose during his time as dean seem to have mostly been related to parties that grew too loud or students who became too drunk. But confronting specific problems as they arise is an approach entirely different from issuing an all-encompassing sanction on free association. At Harvard, specifically, the implications of such a policy could have long-term ramifications. “As an educational institution that, for better or worse, graduates more than its fair share of the leadership of the country, in both industry and technology, and government and law,” Lewis said, “we should not be teaching students that the way you control social problems is by creating bans and penalties against joining organizations.” His “bigger worry,” he said, is that “students will come to think it’s a reasonable thing to do.”
Beyond all these considerations lies an additional layer of complication: legality. Even as a private institution, Harvard’s autonomy may not be as absolute as it seems to believe. I spoke by phone with Harvey Silverglate, a lawyer who is currently representing the Fly, one of the clubs. He told me that “Harvard is misinformed if it has been told by its lawyers or by the office of the general counsel that it can do what it is trying to do, that is to say, punish a private off-campus club, punish Harvard students for joining a legal off-campus club, that is not on Harvard property, and over which the university has no control.” If Harvard goes forward with its plan, Silverglate noted, it will have “overstepped its legal powers.” He spoke extensively about the specific challenges that Harvard would face under Massachusetts state law, explaining that there are free-speech provisions in the Massachusetts constitution that are more protective of speech than the First Amendment to the U.S. Constitution. In fact, Silverglate noted, the state’s supreme court has ruled in several instances that Massachusetts’s declaration of rights “limits the power of private institutions over the people it governs.”
In its desire to avoid a lawsuit, the Harvard administration—or the team of lawyers that doubtless advised it—carefully crafted a rule that would apply equally to men and women. Had the sanctions applied solely to male-only clubs, the university would likely have been faced with a federal lawsuit or investigation into gender discrimination. Yet although the male final clubs are the primary target of the sanctions, the sanctions seem to have done the most harm so far to Harvard’s fraternities, sororities, and female final clubs.
One female student I spoke with is a member of one of the originally all-female final clubs that has recently gone co-ed rather than face the sanctions. She explained that within the club, there is a “feeling of resentment.” The USGSOs were all given the choice to either go co-ed or face the sanctions. “The girls clubs,” she told me, “have accepted it because they don’t have a lot of money.” While the male clubs have old and powerful alumni—and the money that comes with them—the female clubs are young and, by comparison, poor. “The boys can all sue,” she said, but “the girls clubs don’t have that privilege.” Having men in the club has certainly changed things for her. She explained: “It’s definitely different—I loved having an all-female space, and there was lots of merit to that socially and even in terms of networking.… I had this strong female network, and that was kind of eroded by going co-ed.”
Sorority members are facing similar challenges, but unlike the male and female final clubs that do not answer to a national body, they are unable to adapt as they see fit. Sororities and fraternities are unable to go co-ed without violating the rules of their national charters; the sanctions policy therefore affects their organizations most.
I spoke by phone with Evan Ribot, a Harvard alumnus from the class of 2014 who was president of the fraternity AEPI while on campus. Stressing that he could speak only for himself, and not on behalf of AEPI or the AEPI alumni network, he told me there was a “tenuous relationship between the administration and the fraternities” when he was on campus. “There was a sense that we operated in a gray zone because the university knew we existed,” he told me. “So we weren’t underground, but we also were not a recognized group.” As a result of the sanctions, AEPI at Harvard has dissolved itself and become a new organization, the gender-neutral “Aleph.” The organization is no longer affiliated with AEPI national.
“It’s a shame,” he said, “because some of my best friends were looking to join AEPI not because they wanted to be in an exclusionary single-sex organization but because they were looking for a place to fit in on a challenging campus.” The same is true for women: Ribot noted, “The sororities were an avenue for women to find their own spaces—not because they were looking to exclude men but because there is an inherent value to a group of women hanging out, just like there can be an inherent value to a group of men hanging out.… It’s not rooted in exclusion.”
In some circumstances, it appears, Faust agrees. She herself attended Bryn Mawr—a women’s college—and serves as a special representative on the board of trustees of her alma mater. “It is impossible to figure out how Faust can reconcile helping to provide that singular experience to women while at the same time denying any portion of that experience to the women she is responsible for at Harvard,” said Richard Porteus, graduate president of the Fly Club. He graduated from Harvard in 1978 and was elected a member of the Fly Club in 1976. He spoke of the diversity of his club class and reflected that while “there were some people whose names also appeared on Harvard buildings,” he “didn’t come from wealth” and was not only elected to the club but became an officer. Porteus explained that “one’s socioeconomic standing did not matter.” All that mattered, he said, was “the potential for forming life-long friendships.”
The debate over Harvard’s final clubs would have taken place in an entirely different framework if we were still living in a time when university administrators saw their role as fill-in parents—and if that role were viewed as a comfort by the parents themselves. But today’s universities are, for better or worse, largely a free-for-all. The curtailing of certain freedoms thus becomes all the more apparent, and all the more disturbing, when measured against the backdrop of a prevailing “you do you” attitude. The core of the administration’s position seems to be reinforced by an overwhelming need to groom a student body that shares all the same beliefs and values—those that echo the principles that the administration itself espouses. If it deems single-sex social groups discriminatory, then there is no room for those students who see them not as beacons of gender exclusivity but as opportunities for friendship and support. In an educational institution, the only kind of diversity that should matter is diversity of thought. That’s a lesson the Harvard administration desperately needs to learn.
Harvard’s own questionable record on diversity is currently under harsh scrutiny—and not because of the behavior of clubs that have a tenuous connection to the university’s educational mission. Research has demonstrated that to gain entry into an institution like Harvard, Asian-American applicants must score an average of 140 points higher on their SATs than white applicants, 270 points higher than Hispanic applicants, and an astonishing 450 points higher than African-American applicants. The Justice Department has taken note and is investigating the matter. In December, the New York Times reported that the university has agreed to give the DOJ access to applicant and student records. That Harvard’s administration has become consumed with the goal of bringing an end to institutions that fail to meet a 21st-century standard for diversity is not without its savage ironies.
1 Meaning something a dean does.
Review of ‘In the Enemy’s House’ by Howard Blum
Nearly a decade would pass before the FBI and NSA began to release the actual Venona transcripts in 1995. In the years since, a number of books (including several co-authored by me) have analyzed the Venona revelations, while others have mined Communist International files and the KGB archives. Virtually all the major mysteries about Soviet espionage in the United States have been resolved by these once-secret documents. In addition to confirming the guilt of the Rosenbergs, Alger Hiss, Harry Dexter White, and virtually every other person accused of spying in the 1940s by the ex-spies Whittaker Chambers and Elizabeth Bentley, these books have exposed several important and previously unknown agents such as Theodore Hall, Russell McNutt, and I.F. Stone. Indeed, the only accused spy who turns out to have been innocent (although he was a secret Communist almost up until the day he took charge of developing an atomic bomb) was J. Robert Oppenheimer.
A handful of espionage deniers, centered around the Nation magazine, continue to argue, against all evidence and logic, that Alger Hiss is still innocent. The Rosenberg children continue to distort their mother’s role in espionage. And some hard-core McCarthyites still demonize Oppenheimer. But in truth, the bloody battle over who spied is over.
Lamphere’s book emphasized his collaboration with the Army cryptographer Meredith Gardner in the hard work of unraveling the spy rings using the Venona cables. Employing those 1986 recollections as a template, the Vanity Fair contributor Howard Blum has now given us In the Enemy’s House, an overly dramatized but largely accurate account of the friendship between the outgoing, hard-driving, atypical G-man Lamphere and the shy, scholarly, soft-spoken Gardner as they worked together to find and prosecute those Americans who had betrayed their nation.
Blum intersperses the American hunt for spies with the recollections of Julius Rosenberg’s KGB controller, Alexander Feklisov, who ran Rosenberg in 1944 and 1945 and supervised Fuchs in Great Britain from 1947 to 1949. Feklisov watched with mounting dread as the KGB’s atomic spy networks were exposed, both because of Venona and the KGB’s own blunders—most notably because the Russians used Harry Gold, Fuchs’s contact, to pick up espionage material from David Greenglass, who was Julius Rosenberg’s brother-in-law and part of his spy ring.
Blum also uses information from many of the scholarly accounts that have already appeared, although not always carefully. His only new sources are interviews with members of the Lamphere and Gardner families and access to their personal notebooks. But while he provides a list of his sources for each chapter, Blum does not use footnotes, so that although many of the personal and emotional reactions to the investigation he attributes to people, and especially to Lamphere, presumably come from these sources, it is never clear whether they are based on contemporaneous written notes or third-party recollections of events more than 50 years in the past.
Such objections are not mere academic carping. While Blum successfully turns this oft-told story into an interesting and suspenseful narrative, his approach comes at a cost. For example: He is eager to transform Lamphere from a diligent and resourceful FBI investigator who often chafed at the bureaucracy and petty rules that governed the agency into a full-blown rebel who almost singlehandedly forced the FBI to take up the problem of Soviet espionage. To do so, Blum suggests that until the FBI received an anonymous letter in Russian in August 1943 alleging widespread spying and naming KGB operatives, the Bureau regarded the investigation of potential Soviet spies as useless because allies did not spy on each other.
This is wrong. In fact, the FBI had already mounted two large-scale investigations—one of Comintern activities in the United States undertaken in 1940 and the other of attempted espionage directed at atomic-bomb research at the Radiation Laboratory in Berkeley, which began in early 1943. Both had unearthed information on atomic espionage. That information included discomfiting details about Robert Oppenheimer’s Communist connections; efforts by Steve Nelson, a CPUSA leader in the Bay Area in contact with known Soviet spies, to obtain atomic information; and contacts between a Soviet spy and Clarence Hiskey, a chemist on the Manhattan Project.
At one point, Blum renders one of Hiskey’s contacts, Zalmond Franklin, as Franklin Zelman and mischaracterizes him as “a KGB spook working under student cover.” In fact, Franklin was a veteran of the Abraham Lincoln Brigade working as a KGB courier. In any event, the FBI neutralized this threat by transferring Hiskey from Chicago to a military base near the Arctic Circle, thereby scaring his scientific contacts (whom he had introduced to a Soviet agent) into cooperating with the Bureau.
There are other occasions where Blum demonstrates an uncertain grasp of the history of Soviet intelligence. He misstates Elizabeth Bentley’s motives for defecting; in fact, she was angry at being pushed aside by the Soviets and feared she was under FBI surveillance. And he claims that only three witnesses testified against the Rosenbergs (Ethel’s brother and sister-in-law and Harry Gold), which leaves out others (Bentley, Max Elitcher, and the photographer who had taken passport photos for the family just prior to their arrests).
Blum’s account of the way the KGB encoded and enciphered its messages is oversimplified. The mistake that made it possible for American counterintelligence to break into the Soviet messages was the Soviet intelligence services’ reuse of some one-time pads. Not all of the pads were used twice, and only if a pad had been used twice could the random additive numbers be stripped from a message obtained from Western Union. That process allowed Gardner to attempt to break the underlying code. The vast majority of the Soviet cables remained unbreakable, and many could be only partially decrypted. Most of the decrypted cables had nothing to do with atomic espionage but concerned the theft of diplomatic, political, industrial, and military secrets.
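For readers curious about why a reused pad is so damaging, here is a minimal, purely illustrative sketch in Python. It is not the actual Venona procedure, which involved codebooks, indicator groups, and years of painstaking analysis; the digits below are invented. The point is only that subtracting two messages enciphered with the same additive pad cancels the pad entirely, leaving material a cryptanalyst can begin to attack.

```python
# Illustrative sketch only: made-up digits, not the real Venona traffic or method.

def encipher(code_groups, pad):
    """Add pad digits to code-group digits, digit by digit, modulo 10."""
    return [(c + k) % 10 for c, k in zip(code_groups, pad)]

def difference(cipher_a, cipher_b):
    """Subtracting two ciphertexts enciphered with the SAME pad cancels the pad,
    leaving only the difference of the underlying code groups."""
    return [(a - b) % 10 for a, b in zip(cipher_a, cipher_b)]

# Two hypothetical messages, already converted to numeric code groups.
msg_1 = [3, 1, 4, 1, 5, 9, 2, 6]
msg_2 = [2, 7, 1, 8, 2, 8, 1, 8]

pad = [9, 0, 4, 4, 1, 7, 3, 5]   # the pad page, mistakenly used twice

ct_1 = encipher(msg_1, pad)
ct_2 = encipher(msg_2, pad)

# The pad never appears in this result: it has been "stripped."
stripped = difference(ct_1, ct_2)
assert stripped == [(a - b) % 10 for a, b in zip(msg_1, msg_2)]
print(stripped)
```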
Partly to heighten suspense, Blum misrepresents or distorts the timelines on matters involving Klaus Fuchs and the Rosenberg ring. He harps on Lamphere’s frustration about not being able to use the decrypts in court, but the FBI had concluded it was highly unlikely that they could be legally introduced into evidence without exposing valuable cryptological techniques, a conflict Lamphere surely understood. That very problem helps explain the FBI’s inability to prosecute Theodore Hall, the youngest physicist at Los Alamos, who had been exposed as a Soviet spy. Blum mistakenly suggests that the FBI agent in Chicago who investigated Hall was unaware of Venona. But that agent did know; the problem was that when the FBI began its investigation in the spring of 1950, Hall had temporarily ceased spying. He was eventually brought in for questioning, but neither he nor his one-time courier and friend, Saville Sax, broke and confessed. Lacking independent evidence, the FBI was stymied.

The most significant flaw of In the Enemy’s House is its assertion that Ethel Rosenberg’s conviction and execution were monumental acts of injustice that disillusioned both Lamphere and Gardner, soured their sense of accomplishment, and left them consumed by guilt. It is true that Lamphere had opposed Ethel’s execution and had drafted a memo that J. Edgar Hoover sent to the judge urging she be spared as the mother of two young sons. Gardner had translated one Venona message that indicated Ethel knew of her husband’s espionage but because of her delicate health “did not work,” which Gardner interpreted to mean she was not part of the spy ring. But, as Lamphere pointed out in his own book, her brother David Greenglass had testified to her involvement in his recruitment. And KGB messages available following the collapse of the Soviet Union now make clear that Ethel had played a key role in persuading her sister-in-law, Ruth Greenglass, to urge her husband to spy.
In The FBI-KGB War, Lamphere never evinced deep moral qualms about their fate. He expressed a more complex set of emotions. “I knew the Rosenbergs were guilty,” he writes, “but that did not lessen my sense of grim responsibility at their deaths.” And he calls claims that the case was a mockery of freedom and justice both “abominable and untruthful.” Blum insists that Gardner was “stunned” by their deaths and quotes him as saying somewhere: “I never wanted to get anyone in trouble” (which would suggest a monumental naiveté if true).
Blum’s claim that Lamphere and Gardner had condemned themselves “to another sort of death sentence” for their roles is a wild exaggeration. So, too, is his charge that Lamphere believed that in the Rosenberg case the United States “might prove to be as ruthless and vindictive as its enemies.”
Finally, Blum links Lamphere’s decision to leave the FBI for a high-level position in the Veterans Administration to a sense of lingering guilt. But in his own book, Lamphere attributes the move to the frustration he felt once he realized he would be stuck as a Soviet espionage supervisor for years to come. Blum likewise portrays Gardner’s brief posting to Great Britain to work with its code-breaking agency as an effort to escape his guilt, but he never mentions that Gardner returned to work at the National Security Agency for many years.
Retired intelligence agents friendly with both men have no recollection of their expressing regret about their role in the Rosenberg case. It is possible that they made some such comment to a family member or jotted down something in a notebook, but without very specific and sourced comments, the idea that they ever regretted their work exposing Soviet spies is nonsense that mars Blum’s otherwise entertaining account.
What we got instead was a combination of celebrity puffery and partisan cheap shots at the Trump administration. The politics of North and South Korea, and the equally complex and intricate relations between these two countries and China, Japan, Russia, and the United States, were reduced to just another amateur sport. Ignorant and supercilious reporters transposed the clichés of the electoral horse race, complete with winners, losers, buzz, and sick burns, to nuclear brinkmanship. Major news organizations could not have done Kim’s job any better for her.
A representative example was written by no fewer than seven CNN reporters and researchers who concluded, “Kim Jong Un’s sister is stealing the show at the Winter Olympics.” The lead of this news article—I repeat, news article—was the following: “If ‘diplomatic dance’ were an event at the Winter Olympics, Kim Jong Un’s younger sister would be favored to win gold.” Gag me.
Then the authors let loose this howler: “Seen by some as her brother’s answer to American first daughter Ivanka Trump, Kim, 30, is not only a powerful member of Kim Jong Un’s kitchen cabinet but also a foil to the perception of North Korea as antiquated and militaristic.” Kim’s “Kitchen Cabinet”—why, he’s just like Andrew Jackson. And how could anyone have the “perception” that North Korea is “antiquated” and “militaristic”? Sure, they might threaten the world with nuclear annihilation. But have you seen Donald Trump’s latest tweet?
New York Times reporters are either smarter or more efficient than their peers at CNN, because it took only two of them to write “Kim Jong-Un’s sister turns on the charm, taking Pence’s spotlight.” Motoko Rich and Choe Sang-Hun described Kim’s “sphinx-like smile” and “no-nonsense hairstyle and dress, her low-key makeup, and the sprinkle of freckles on her cheeks.” They contrasted the “old message” of Vice President Pence, who has no freckles, with Kim’s “messages of reconciliation.” They cited one Mintaro Oba, a “former diplomat at the State Department specializing in the Koreas, who now works as a speechwriter in Washington.” What they did not mention is that Oba worked at Barack Obama’s State Department and writes speeches for a Democratic firm. Not that he has an axe to grind or anything.
The typical Kim puff piece began with her charm, grace, poise, statesmanship, and desire for unity and peace. Then, 10 paragraphs later, the journalist would mention that oh, by the way, North Korea is a totalitarian hellscape that Kim’s family has been plundering for over half a century. For instance, describing the South Korean reaction to Kim, Anna Fifield of the Washington Post wrote,
They marveled at her barely-there makeup and her lack of bling. They commented on her plain black outfits and simple purse. They noted the flower-shaped clip that kept her hair back in a no-nonsense style. Here she was, a political princess, but the North Korean “first sister” had none of the hallmarks of power and wealth that Koreans south of the divide have come to expect.
A political princess! It’s like Enchanted, except with gulags and famine.
Deep in Fifield’s article, however, we come across this sentence: “Certainly, Kim, who is under U.S. sanctions for human rights abuses related to her role in censoring information, was treated like royalty during her visit.” Just thinking out loud here, but maybe human-rights abuses and censorship deserve more than a glancing reference in a subordinate clause. Fifield went on to say that “Vice President Pence, who was also in South Korea for the opening of the Winter Olympics but studiously avoided Kim, had worried in advance that North Korea would ‘hijack’ the Olympic Games with its ‘propaganda.’” Now where could he have gotten that idea?
The fascination with Kim revealed both the superficiality and condescension of much of our press. Fifield’s colleague, national correspondent Philip Bump, tweeted out (and later deleted) a photo of Kim sitting behind Pence at the opening ceremonies with the comment, “Kim Jong Un’s sister with deadly side-eye at Pence,” as if he were being snarky about an episode of Real Housewives.
When Kim departed the Olympics, Christine Kim of Reuters wrote an article headlined, “Head held high, Kim’s sister returns to North Korea.” Here’s how it began:
A prim, young woman with a high forehead and hair half-swept back quietly gazes at the throngs of people pushing for a glimpse of her, a faint smile on her lips and eyelids low as four bodyguards jostle around her.
The Reuters piece ends this way: “Her big smiles and relaxed manner left a largely positive impression on the South Korean public. But her sometimes aloof expression and high-tilted chin also spoke of someone who sees herself ‘of royalty’ and ‘above anyone else,’ leadership experts and some critics said.” Thank goodness for the experts.
Kim Jong Un could not have anticipated more glowing coverage for his sister, for the robot-like cheerleaders he sent alongside her, or for his transparent attempt to drive a wedge between South Korea and its democratic allies. “North Korea has emerged as the early favorite to grab one of the Winter Olympics’ most important medals: the diplomatic gold,” wrote Soyoung Kim and James Pearson of Reuters, who called Pence “one of the loneliest figures at the opening event.” Quoting on background “a senior diplomatic source close to North Korea,” Will Ripley of CNN wrote an article headlined, “Pence’s Olympic trip a ‘missed opportunity’ for North Korea diplomacy.” But who was Ripley’s source? Dennis Rodman?
What most disturbed me was the difference in coverage of Kim Yo Jong and Fred Warmbier, whose son Otto died last year after being tortured and held captive in North Korea. Fred Warmbier accompanied Pence to the Olympics as a reminder of the North’s inhumanity and menace. Journalists ignored, dismissed, and even criticized this grieving man. Among many examples of thoughtlessness and callousness was a Politico tweet that read: “Fred Warmbier criticizes North Korean Olympic spirit.” He must have missed Kim’s freckles.
Washington Post columnist Christine Emba asked: “Is Otto Warmbier a symbol, or a prop?” You see, Emba wrote, “Otto’s father may want his son to be a symbol. But the nature of his escort risks turning him into a prop.” Why? Well, because “symbols stand for something” while “props are used by someone.” And “the Trump administration, which hosted Warmbier, is made up of shameless instrumentalizers who have made clear that they stand for very little.” So there you go. We should be skeptical of Fred Warmbier because Trump.
Emba’s not all wrong. There were a lot of props and tools at the Olympics. You could find them in the press box.
I was nine when I made my first trip to Israel in June of 1968, almost exactly a year after the Six-Day War. My parents had been in Italy the autumn before, and while vacationing in Rome they learned that there were inexpensive flights leaving twice a week for Tel Aviv. The whole of Israel was giddy at the time, its insecurities set aside for the moment in the glow of a stunning victory that had increased the total size of the young, besieged nation by more than two-thirds.
My mother finally found a use for the crumpled phone numbers of distant Israeli relatives she’d been carrying in her purse for the past several months, relatives on both her father’s and her mother’s side, Romanians all. Osnat, my mother’s second cousin once removed, had had the misfortune of remaining in Europe while the Nazis were on the move. She spoke of having spent five days hiding from the Germans in the liquid filth of an outhouse and breathing through a tube when they came near.
Meeting scores of warm and loving relatives and having been feted by them as “our dear American Mishpacha” was partly why my parents were both so taken with Israel—that and the Israeli people themselves, the Sabras, so proud and brash, and the ancient beauty of the land. With some talk of perhaps making Aliyah, or at least exploring the idea of our moving to Israel, my parents, my siblings, my first cousins, and my Grandma Rose and her younger brother, Uncle Sol, gathered up a month’s worth of warm-weather clothing and flew en masse to Tel Aviv. We were greeted at Lod Airport by a crush of relations, all of them clamoring to hug and kiss us. And then as the sun descended into the Mediterranean and night fell over the coastal plain, they drove us all north in a rag-tag caravan of tiny old Fiats, Renaults, and Peugeots to the beach town of Netanya, where we stayed for the entire summer in a tiny flat just behind the home Osnat shared with her husband, Shlomo.
Days later, I’m with my father and my brother Paul at the Wailing Wall. It’s weird to think that only a week ago I was at home watching Gilligan’s Island and looking for my dad’s Japanese Playboys in the bottom drawer of his bedroom closet during the commercials. Now, I’m in Jerusalem, in the glaring sun beneath this gigantic wall of stone. When I’m sure no one’s looking, I put both hands on the wall, and then I touch my forehead to it. The stones are colder than you’d think they’d be in all this heat.
For reasons I don’t understand, I start to cry. I’d be embarrassed if my brother or my dad saw me like this, so I pretend that I’m praying. I wonder, though, am I just crying because you’re supposed to cry here? If the rabbis from the Talmud Torah had shown me pictures of some random bridge in Saint Paul from the time I was in nursery school, would I have cried at that, too?
When I look up at the wall again, I see some birds’ nests and a million pieces of paper with people’s prayers in them, all stuffed into the cracks between the stones. Everyone who comes here wants God’s attention. I’ll bet He loves all the notes. They probably make Him feel like someone gives a shit about the cool stuff He does.
I had been born a Jew in Minneapolis. Growing up Jewish there wasn’t a good or a bad thing any more than growing up with snow was good or bad. It just was. Because we Jews were so few, being one made us all feel different. It wasn’t a difference we’d asked for or earned, either. It, too, just was. Becoming somewhat Jew-centric was natural for us. We were fond of staying close to one another, close to our causes and to our history; it was just a natural reaction to being the “other.”
It’s 1970 and I’m in junior high, on my way to English, when I see Nelson Gomez, Stuey Nyberg, and Craig Walner. They’re hip-checking kids into the tall metal lockers that line the hall. They are the three kings of Westwood Junior High’s dirtball dynasty, young hoodlums who regularly and without fear skip school, smoke filter-less Marlboros, and shout “Fuck you, faggot” to students and staff members alike, save perhaps for Mr. H, the anti-Semitic shop teacher with whom they have forged an abiding friendship.
To the left and right of me, hapless students fly, body-slammed with alarming speed into the lockers by the three of them. It doesn’t escape my notice that these unfortunates have not been chosen randomly. There goes Brian Resnick. Next it’s Shelly Abramovitz and then Alvin Fishbein. As I round the corner, Stuey Nyberg grabs my second cousin, Elaine Kamel, by the shoulders and slams her face-first into her own locker. She and they have been selected for no other reason than their Jewishness.
I grab Stuey by his neck with both hands and I claw at him until my fingernails pierce his pale skin and blood spurts from his jugular. Now I take the clear plastic aquarium algae scraper that I made in Mr. H’s shop class this very morning and use it to gouge out one of Nelson Gomez’s eyeballs, making sure he can see it in the palm of my hand with his remaining eye. Craig Walner tries to run, but I catch him by his mullet and shove his head into Elaine Kamel’s locker. I slam her locker door on him again and again. I don’t stop until his head is severed from his neck…
…and my daydream comes to an abrupt halt when Stuey Nyberg says, “Himmelman, it’s your turn to meet the lockers, you fucking kike.” Without a word of warning, he clouts me with a stinging jab right to my nose. It’s the first time I’ve ever been hit in the face, and while it’s agonizing, the blow is also somehow euphoric. I’m supercharged with adrenaline; I feel as if I’m on fire. But of course, I don’t hit Stuey back. God, no. I simply stand there glowering at the three of them, blood dripping from my large Jewish nose. And for the first time in my life, I feel downright heroic. I look around me and I see that, for now at least, our bitterest enemies have stopped hip-checking what feels like the entire Jewish nation.
Six months later it’s summer vacation, and we Himmelmans fly from Minneapolis to New York and connect with a nonstop to Tel Aviv. In less than two days, I’m on a towel on the beach in Netanya looking out at the cerulean blue of the Mediterranean.
As I lie on the hot sand, Mirage fighter jets with blue Jewish stars emblazoned under their wings suddenly streak so low across the water that I can smell jet fuel. As they scream overhead, the whole beach seems to shake. With a strange sense of clannish pride, I laugh and stare up at the planes as they accelerate and finally rocket out of range.
My father died in 1984, after suffering from Stage IV lymphoma for five years. I was 25 years old. A year later, I was living in the Twin Cities working on music with my band when I received a call from a woman named Ruth Grosh. She asked if I’d be willing to write some songs for a therapeutic teddy bear she’d dreamed up called Spinoza Bear. Ruth, a bona fide subversive by nature and New Age before anyone had even come up with the term, named her ursine brainchild after Baruch Spinoza, the heretical 17th-century Jewish philosopher. Spinoza’s views were seen as harmful to, and at odds with, those of the Jewish establishment of Amsterdam at the time. Eventually, both he and his writings were placed under a religious ban called a “cherem” by the Dutch Jewish community where he lived and worked. Aside from the fact that he was reviled for his modernist views, no one had much bad to say about him personally, except that “he was fond of watching spiders chase flies.”
The songs were to play from a battery-operated tape deck that fit into a zippered pouch beneath the soft brown fur of the bear’s stomach. A red heart-shaped knob on the bear’s chest served as the on-off switch. By today’s standards, the technology would seem crude, but at the time, with just a modicum of suspension of disbelief, it was possible to feel that the voice of the bear along with the music was issuing directly from its cheery muzzle. As to whom to hire to be the voice of Spinoza Bear, it was decided after some deliberation that not only would I write and sing the songs but I would also be the kind, concerned voice of the bear itself.
Each of the dozen or so cassette tapes that were eventually recorded had themes of self-empowerment, a kind of you-can-make-it-if-you-try bent. After just two years, the bear became a huge success—not as some plebeian, retail teddy, but as something greater. Spinoza Bear soon found his way into hospitals, health clinics, and centers for healing of all kinds. By holding the bear and listening closely to his stories and songs of wellness and inner light, rape victims, grief-stricken parents, bone-lonely pensioners, autistic kids, as well as children on cancer wards all across America found it possible to relieve some of their pain and fear.
Aside from the good works, the bear provided me with twenty grand in seed money that our band, Sussman Lawrence, used to set sail for New York City in 1985.
We were five new-wave rockers in an Oldsmobile Regal Vista Cruiser wagon, and two roadies in a spanking-new Dodge cube van. The van, we were overjoyed to discover, had been hastily christened from bumper to bumper with graffiti sometime during our 45-minute debut set at CBGBs, the legendary East Village rock-and-roll club, only days after arriving on the East Coast.
Given the high cost of living in New York City, New Jersey seemed the next best thing. As it turned out, there were very few homeowners interested in renting a house to a band. I hatched a plan, which involved my calling on a middle-aged real-estate agent named Carol we’d found advertising in a Bergen County newspaper. When I finally got her on the line, I explained to her that we were medical students enrolled that fall at nearby Rutgers University and in need of a quiet place to live and study.
The following morning, as the rest of the guys waited outside in the Oldsmobile, my cousin Jeff, our band’s gifted keyboard player, and I showed up at Carol’s office in suits and ties we’d purchased at a local thrift shop, carrying responsible-looking briefcases. I had boned up on some medical terms as well, orthopedic surgical techniques mostly, in case she needed proof that we were actually who we were claiming to be. But there had been no need. We had the cash and seemed honest enough—“honest enough” to let her know that a few of us were also part-time musicians and that there might be some music playing, quietly of course, from time to time, just to ease the strain of our intense studies.
Two days later, Jeff and I woke up early, signed the lease papers, and pulled our now multihued, invective-laden cube van into the driveway of 133 Busteed Drive in Midland Park, New Jersey.
Trying for as much discretion as possible, lest the neighbors notice anything out of the ordinary, we backed the van up to the garage, lugged the gear up a short flight of stairs and into a large, unfurnished living room. Once upstairs, we began unloading beer-stained amplifiers, at least a dozen guitar cases, a drum set packed tightly into three large metal flight cases, assorted keyboards, and an entire public-address system and lighting rig. Aside from some bad scrapes in the hardwood floor and a gaping hole or two in the walls on our way in, the load-in was accomplished with speed and efficiency. We were up and practicing by late afternoon, our new-wave rock blaring fast and loud into the New Jersey autumn night.
A month after settling in, Ruth Grosh reached me at dinnertime by long distance, in the squalor of our band-house collective. After some catching up, she gently let me know that some psychic friends had explained to her that I had just a few months left on the planet. “What!” I said. “They told you I was gonna die?” Ruth was practiced at this kind of thing, it seemed, although her nonchalance about my imminent demise didn’t make me feel any less concerned. “They asked me to find out if you’d like to come in for a free consultation,” she said. I was due to fly back to Minneapolis later that week anyway, and I figured I might as well find out what all this planet-leaving nonsense was about.
Back home, on the morning of my appointment with the psychics, I found my mother, who was normally quite composed, flitting around the kitchen and singing quietly to herself. She had agreed to a lunch date that afternoon with the contrabass player from the Minnesota symphony, her first since my dad had died almost two years before.
“Does this blouse look good on me?” she asked. “Be honest.”
“Yeah, it looks great,” I said.
I was uncomfortable in the extreme watching my mother dart around the house like a schoolgirl primping for a date with some dude who wasn’t my dad. True, it’d been two years since he’d died, and given all that she’d been through, it wasn’t like she didn’t deserve to live a little. After all, I thought, it was just lunch. But the more I saw of this weird, giddy side of her, the less I liked it. A car honked. It was Ruth.
She and I rode wordlessly as Japanese New Age wooden flutes intoned from her car stereo. We arrived after twenty minutes at the northern suburb of Brooklyn Center, and Ruth parked her car near a long row of newly built town houses. A man and a woman in their mid-forties greeted us at the front door, both smiling in a scary, off-putting way. They appeared to be a kind of husband-and-wife psychic tag team, and they rushed headlong into the consultation by asking if I’d like to give them some names of people I knew.
“We’ll be able to tell you all about them,” the woman said and smiled again. I thought it was just some cheesy method of showing off.
“The first names are enough,” said the man.
“Okay, let’s go with Jeff,” I said.
My cousin Jeff is a musical genius, a pianist of remarkable facility, who’s had to contend with neuromuscular tics most of his life. The two psychics were seated facing each other in cheap leather armchairs. In an instant, they were both precisely mimicking my cousin’s facial tics. I recognized each tic by the name Jeff and I had given it. When Jeff’s thumbs bent downward spasmodically, we called it “Southerner.” When his palms flexed upward in a sort of hand-waving motion, we called it “Reckless Greeter.” In another, with his eyebrows pinched together, lips compressed, and eyes blinking, Jeff looked like someone who was very curious about his environment. We called that one “Curious Man.” His most frequent tic was also his most unsettling; we called that one “Round the World,” and it involved his eyeballs rolling uncontrollably in their sockets. Suddenly, to my astonishment, the corners of both psychics’ mouths formed narrow half smiles. Their eyebrows began squeezing together; their eyes were blinking—open-shut-open-shut—perfectly mimicking Jeff’s Curious Man.
“The music, he can’t stop the music,” the woman shouted in excitement. Her husband, whose hands then began a remarkable imitation of Reckless Greeter, added, “Yes, good God, the music! Can’t you feel it just pouring out of him?”
I was thinking this had to be some kind of brilliant trick, albeit a devilish one. It was astonishing, yes, but I wasn’t yet convinced that they were real. Next, I said the name “Beverly,” my mother’s, and they both giggled. It’s disconcerting to see adults giggle at any time, but when a pair of middle-aged psychics giggle at the mention of your bereaved mother’s name, it’s triply so.
“She’s doing something she feels guilty about,” the woman offered.
“Yes,” said the man. “Something she’s afraid of doing, but it seems to us that she’s also very excited.”
Almost in unison, the psychics said, “She’s acting like a little schoolgirl today!”
How in hell could they have known what I’d just experienced myself for the first time in my life that very morning? If these two freaks had wanted my undivided attention, they sure as hell had it now.
The room fell silent. I didn’t dare speak. They had officially scared the living daylights out of me with their last trick. Soon, they broached the subject I’d come all this way to talk about.
“Is it your wish to leave the planet?” the woman asked, more casually than I would have imagined possible for someone questioning a fellow human being about whether he wanted to live or die.
I paused and breathed deeply for a minute or so. It was a question I stopped and thought about longer than a mentally stable person might have.
“No,” I finally told them, “I have no intention of leaving anytime soon.”
This seemed to relieve them. The man said, “The reason we’ve been so concerned about you is that we believe music is more important to you than you may be aware. It represents your very essence, and by working as single-mindedly as you have to get a record deal, with the kind of music you’ve been making with your band, you’ve been cheapening and compromising your integrity. You’ve been, in a sense, unfaithful to your muse. That’s what’s causing this spiritual disconnect and, should it continue, my wife and I both feel like it will shorten your stay here.”
His wife took over: “What you need to do is uncover a deeper, more honest expression in your music, something closer to the bone. We know you love the blues and reggae. We think it’ll be helpful to start playing music you love, rather than music you think will sell.”
By this time, tears were spilling down my cheeks. “There’s this song,” I began telling them, “that I wrote for my dad over two years ago on Father’s Day, that almost no one has heard. It’s something that was written with the sole intention of connecting with him before he died. It’s on a cassette tape, just sitting there on a shelf in my closet.”
“Why not put that song out as your next single?” the man said.
I was suddenly speechless. Why had I never thought of this? It was such a simple yet profound idea. I flew back to New Jersey, determined to release not just the one song, but an entire album dedicated to my father.
The guys picked me up in the Oldsmobile at Newark Airport the next day. We were standing around the luggage carousel waiting for my bags when I told them I was going to record a solo record, a tribute to my father, whom they all loved and respected.
My bandmates understood this was something I needed to do. They also knew it wasn’t just talk. A solo album, produced for whatever reasons, also signaled the possibility that the ethos of the band might well be coming to an end. Nevertheless, they played their hearts out on the record and, by doing so, tacitly gave me their blessings and their assurances that whatever happened with it would be for the best.
The recording featured the song I’d written for my dad, and it eventually became my debut album, This Father’s Day, for Island Records.
Its release also became a powerful catalyst for me personally. It took me from where I had been, locked up in pain and confusion, to some other, more hopeful place. Even before my meeting with the psychics, I thought I’d gotten beyond most of the hurt, that it was simply time to grit my teeth and persevere. It had been two years, after all. But I was mistaken. The process of mending broken hearts is never as pat as that. As much as I needed to forget, to emerge clear-eyed from the jumble and rawness of my father’s death, I knew I’d have to face my worst fears again and again. But I felt ready. I also knew, in a way I hadn’t before, that I really didn’t want to die.
While my father was suffering in the last five years of his life, I found myself in a different state of mind from that of my friends and bandmates, who were, for the most part, blithely moving through their young lives. I’m not saying pain made me wise; it’s just that it can, for those willing to accept its hard lessons, provide a bit of perspective, shine some light on what’s sacred and what’s less so.
During those years I was working very hard to become famous, whatever that might have meant. I felt that I needed to reach some level of achievement before my dad died. I suppose I was conducting a search for miracles. It’s no wonder. For my family and for me at least, miracles seemed to have been in very short supply back then.
It’s miracles, after all, that compel us forward, that encourage us to move with some degree of willingness into the next day. But, despite what we might believe, it’s hardly ever the big ones that truly move us. The sea can split, we can win the lottery, we can even become rock stars, and still, those phenomenal circumstances are never what matter most. In the end, the only miracle worth wishing for is the ability to be made aware of the smallest splendors, the most inconsequential truths, and the overlooked rhythms that connect us to the people and things we love.
I felt a kind of heat rising up around me in those days, a sense that what had long been static was now stuttering back into motion. There was a pleasant strangeness to the feeling, but like many things that at first strike us as unusual, it wasn’t wholly unfamiliar, either. I’d felt that same unnamable sensation, lying awake in my bed in the dark as a young child, focusing on individual moonlit snowflakes as they fell outside my window. I felt it again in Jerusalem, at nine years old, when I first touched the sunbaked stones of the Western Wall. I felt it the first time I’d snorkeled in the Red Sea and became drunk from sheer beauty. I felt it the frigid November morning we buried my father. I felt it on the evening I finally met my wife, and again, the moment when each of my children was born.
The circumstances were wildly varying, but in each instance there was a sense of being taken from one place to another, of inertia finally giving way to movement. It was as if my mundane life had cracked open and I saw, arrayed in front of me, some image of the unseen hand that forms and directs the universe.
My first experiences in Crown Heights, Brooklyn, at age 27 were catalytic. A rabbi named Simon Jacobson had posed a single question, and it, too, set me into motion: “Why is walking on the surface of the Earth any less miraculous than flying above it?”
The idea that the world is a wondrous, mysterious place—even as we are destined to walk on the mundane surface of it, even if we cannot truly fly—is both a liberating and comforting notion. Being attuned to wonder is my preferred condition. Perhaps it’s natural for each of us. But why, then, are so many moments not imbued with this sense of the miraculous? Why is there such a divide between barely sensing and deeply feeling?
What I did know in the autumn of 1987, with a certainty I hadn’t known before—perhaps couldn’t have known—was that I needed to get married. I had awakened to the idea that there was nothing I was doing with my life, not my music, not my friendships, not my finally getting that almighty record deal, more important than finding the right woman with whom to create a family and live out my days. I also knew that to do this, I would need to create a powerful forcing frame for myself, not one that would constrict or limit me, but one that would allow me to channel my outsized ego and my creative proclivities toward more productive ends than I’d ever dreamed possible.
Eventually, I made a sort of pact with myself, a silent, personal agreement. It came down to this simple declaration: The next time I sleep with a woman, it will be with my wife. This meant that I had to extricate myself from my longtime girlfriend. Though I was, and still am, extremely fond of her, I could never envision her as a lifetime partner or the mother of my children. In addition, our arrangement was somewhat nebulous, and so this new, self-imposed structure also meant that I’d have to cut off any contact with the other women with whom I was having casual sex. I had to make a fundamental cultural and emotional shift. I would need to wean myself away from years of assumptions about the very nature of what a modern relationship meant. I would have to forge a new way of looking at women, at my role as a man, and at the world at large.
It became clear to me that the freedom I had always longed for could be obtained only through the somewhat paradoxical means of setting limits, delaying gratification, and cutting away many experiences that an all-pervasive consumerist culture had been (and continues to be) hell-bent on selling. If you’ll allow me, I’ll explain this further by way of metaphor.
Music is among the most transcendent of all art forms, both for the performer and listener. Since it has no form or substance, it can easily serve as a model for the boundlessness of spirituality. But as anyone who has mastered a musical instrument knows, musical ideas are expressed almost exclusively by means of structure and restriction, words very few of us would correlate with freedom.
At first glance, this seems like a paradox. How could something as liberating and intangible as music be based on restriction? Not only is music based on restriction, I’d go so far as to say that, aside from the existence of raw sound—elemental white noise, if you will—the only other thing that allows music to take place, the only thing that differentiates it from this pure noise, is what sounds the musician chooses to leave behind. In this sense, music comes about not by choosing notes but by the elimination of notes. Take a look at the idea in this somewhat inverse manner: Only by rejecting all other sonic choices are we left with the ones we truly desire. To make music, we don’t add, we subtract.
Here’s how something as commonplace as the key signature of a particular piece of music also reflects this idea. Unless you were trying to achieve a harsh atonal musical effect, you wouldn’t want to be playing in the key of B-flat minor while your key signature called for you to be playing in A major. The ensuing “music” would sound like a chaotic racket to most people. The time signatures of compositions, along with their tempos, which require that a particular note last only so long and that it be played at a particular speed, also function with this same principle—creation by negation. Ignoring the time signature, or playing at any speed without regard for the overall tempo, is another good way to produce only noise.
It is only through adherence to the limiting factors of time and tempo that music can take shape. In that same sense, if it weren’t for the constraint of playing only certain keys on a piano, and thereby negating all other choices, you would hear only noise. Anyone who has heard his or her toddler pounding away on a piano knows exactly what this sounds like.
Most, if not all, musical instruments also work on this principle of restriction. The trumpet, for example, is based upon compression and restriction. If the air a player blows into the trumpet’s mouthpiece weren’t compressed and regulated by the embouchure, the only sound you’d be able to hear would be a soft wind-like noise passing through the horn.
As I became more and more immersed in the wisdom of Jewish thought and practice, the idea of freedom-in-structure became clearer and ever more personally relevant. If it was true for music, I wondered, how much more true must it be for all of life itself? And given that human sexuality (whether or not the participants engaged in a sexual act are conscious of it) concerns the creation of life, it occurred to me that causing dissonance in that most meaningful—dare I say mystical—arena of life was something I definitely needed to avoid.
I knew I had to place a set of restrictions on myself in order to make music out of my life, as opposed to just raw sound. Although this conception of the universe felt new to me, new in the sense that it was radically different from the one I’d been acting on for so many years, it wasn’t unfamiliar. Without my knowing it, I had undergone an awakening. I became alert to a perspective I recalled vaguely, even from my earliest childhood. It was as if I could see something important forming (though what it was, was still unclear) out of a barely examined and often fleeting sliver of thought. All at once, the world around me seemed to feel very much as it did when I was a child. I could remember clearly, lying feverish in bed, waiting for sleep, with every last thing in the world unknown and unexplained.
It was frightening as an adult to feel these thoughts growing stronger and more pervasive, but it also felt safe in ways—as though there’d been a kind of revelation, one that seemed to say: “Peter, son of David, there is a purpose to everything you’ve experienced in the recent past and everything you see before you now. From this moment on, there are things you must do and ways you must act.”
The mantra to live without restrictions, which had guided me for most of my life, seemed at that point to be leading me only to chaos. I believed I could, and must, do better for myself. My most fervent wish was no longer to become a rock star; it was to create my own family, one that could become a replacement for the one I’d been missing, the one that had changed so drastically when my father died.
So, in a tour bus rolling across the American continent, I did the three most practical things I could think of: I stuck to my private pact, I dreamed, and I prayed several times a day to an unseen Deity for strength and for love.
This part of the story really begins a few months after my dad’s funeral, when I found myself in a cramped apartment in South Minneapolis auditioning some songs I’d written for a local performer named Doug Maynard. I sang him a few things and he nodded quietly. Doug wasn’t a big talker. Finally he chose one. “Man, I think I could do this justice,” he said. It was called “My First Mistake.”
You taste like pepper frosting on a granite cake.
Baby fallin’ in love with you was my first mistake…
Less than a year later, Doug was found dead in his living room, stone-drunk and drowned in his own vomit at the age of forty. Before this happened, however, he had introduced me to his manager, who had introduced me to a New York City music lawyer, who had introduced me to a record producer named Kenny Vance.
Kenny had worked with a lot of famous people and he wasn’t particularly shy about mentioning just whom. “I used to date Diane Keaton,” he told me. “I know Woody Allen—been in a couple of his films. I was the music director for Saturday Night Live.” Then he said, “Tonight I’m gonna take you to my main connection, a religious Jew in Brooklyn.”
Before long, Kenny and I were crossing the Brooklyn Bridge. We arrived at an apartment in Crown Heights where Kenny’s friend, Simon Jacobson, greeted us. I liked Simon right off the bat. His eyes reflected some essential paradox, some awareness that being alive is both a source of great humor and great sadness. His wife, Shaindy, introduced herself with a gracious smile and placed glass bowls of almonds and chocolate-covered coffee beans on a yacht-sized table before excusing herself to tend to her young children. The thing I didn’t understand at first was how a big hirsute guy like Simon, in an oversize yarmulke, with a massive beard and in a white polyester button-up, was able to land such a good-looking wife. I soon learned that around these parts, it wasn’t the guy who could throw a football the farthest who got the girl. Simon had another thing going for him.
His job, at the time, was to memorize every word of the Lubavitcher Rebbe’s Shabbos dissertations and record them on Saturday night for publication later in the week. To understand the scope of the job, it’s necessary to know that when the Rebbe spoke, it was often for four or more hours straight, without breaks, without notes, and in a manner of cyclical and increasing complexity. To make things even more challenging, the Rebbe wasn’t freestyling. Everything he taught was derived from a compendium of source materials that ranged into the tens of thousands of books. And the talks could not be recorded as they were delivered, because it was the Sabbath and no electricity could be used.
When I once mentioned to Simon how awed I was at his ability to memorize this much information, he looked at me and said: “The memorization is the least of it. It’s the task of compiling it with the proper source notes that’s the real challenge. Every day I correspond with the Rebbe, and he writes me back with perfect editor’s notes. Once I wrote and said I didn’t understand a particular passage and couldn’t find the source for it. The Rebbe had a sharp sense of humor. He sent me back a markup with a big red circle, not just on the sentence I was having an issue with, but around the whole page, with the words, ‘What do you understand?’”
It was getting late. Kenny had left me there and driven back to the city. As Simon spoke to me, I kept looking up at the oil paintings of shtetl life and the Rebbe hanging on the walls. I was prodded more by fatigue than bravado when I finally asked, “What’s the deal with those pictures of the Rebbe? They seem sort of cultish to me.”
“I like the pictures,” he said. “To me, the Rebbe is like a very inspiring grandfather, and I get a lot out of reflecting on the things he says and the way he lives his life. There are people for whom there is no sense of self, people called Tzadikim, and they have no need for personal gain. A Tzadik lives only to serve others and can do anything he wishes.”
“Really?” I asked with just a hint of comic disdain. “Can they fly?”
“Understand, I’ve never seen anyone fly,” Simon answered. “But for a Tzadik, the act of flying is no greater miracle than the act of walking.”
This idea stunned me. Not because it was new. The things that move us most never are. They are things we already know, beliefs that are buried away inside us. Of course, when you stop and think about it, there’s absolutely no difference between the weights of the two miracles, walking and flight. It’s just that we non-Tzadikim get so tired of the one that happens all the time.
At that moment, at that table in Brooklyn, I started thinking about the little-known rhythm-and-blues singer Doug Maynard. I was remembering the sound of his voice and simultaneously considering the infinite number, the impossible number, of tiny coincidences—the tendrils, if you will, that in their unfathomable complexity, had guided me to that particular apartment on that particular night. The thought was so vivid, it was as if I could hear Doug singing again. Singing most soulfully, most truthfully about the joy, and the sweat, and the pain of this world. It wasn’t long after that I met the Lubavitcher Rebbe for the first time. He handed me a bottle of vodka and a blessing for success, and I started becoming more Jewishly observant right away: keeping Shabbos in my tiny apartment in Hell’s Kitchen, keeping kosher, and putting on tefillin. I married Maria two years later. We’ve been married for nearly 30 years.
About a year ago my cousin Jeff asked me what it had been like to meet the Rebbe. This is exactly how I answered him.
“You know when you’ve done something you think is horrible (whatever the hell it may be) and you start going down—deeper and deeper into the rabbit hole of regret? When you’re in so deep that you start to feel like the biggest loser ever born, like nothing is possible, that nothing good is ever gonna come your way, and that you can’t even face yourself in the mirror?”
“Sure,” Jeff said. “I’ve been there.”
“Well,” I said, “meeting the Rebbe was the exact opposite of what I just described.”