By invading Iraq, Bush forced us into an unnecessary war, the critics argue. But would Al Gore have behaved differently?
According to an April 2008 poll in U.S. News & World Report, fully 61 percent of American historians agree that George W. Bush is the worst President in our history. Some of these scholars cite the President’s position on the environment, or on taxes, or on the economy. For most, though, the chief qualification for obloquy lies in Bush’s decision to go to war in Iraq.
In this, of course, the historians are hardly alone: five years after the launching of Operation Iraqi Freedom, both the mainstream media and America’s political elites treat the Iraq war as a disaster virtually without precedent in our national experience. But while politicians and journalists are not necessarily expected to be adepts of the long view, for professional historians the long view is a defining necessity. As the English historian F.W. Maitland wrote more than a century ago, “It is very hard to remember that events that are long in the past were once in the future.” Hard it may be, but the job of historians is not only to remember it but to judge events accordingly.
In this light—that is, in light of what was actually known at the time about Saddam Hussein’s actions and intentions, and in light of what was added to our knowledge through his post-capture interrogations by the FBI—the decision to go to war takes on a very different character. The story that emerges is of a choice not only carefully weighed and deliberately arrived at but, in the circumstances, the one moral choice that any American President could make.
Had, moreover, Bush failed to act when he did, the consequences could have been truly disastrous. The next American President would surely have faced the need, in decidedly less favorable circumstances, to pick up the challenge Bush had neglected. And since Bush’s unwillingness to do the necessary thing might rightly have cost him his second term, that next President would probably have been one of the many Democrats who, until March 2003, actually saw the same threat George Bush did.
It is too often forgotten, not least by historians, that George W. Bush did not invent the idea of deposing the Iraqi tyrant. For years before he came on the scene, removing Saddam Hussein had been a priority embraced by the Democratic administration of Bill Clinton and by Clinton’s most vocal supporters in the Senate:
Saddam Hussein must not be allowed to threaten his neighbors or the world with nuclear arms, poison gas, or biological weapons. . . . Other countries possess weapons of mass destruction and ballistic missiles. With Saddam, there is one big difference: he has used them. Not once, but repeatedly. . . . I have no doubt today that, left unchecked, Saddam Hussein will use these terrible weapons again.
These were the words of President Clinton on the night of December 16, 1998 as he announced a four-day bombing campaign over Iraq. Only six weeks earlier, Clinton had signed the Iraq Liberation Act authorizing Saddam’s overthrow—an initiative supported unanimously in the Senate and by a margin of 360 to 38 in the House. “Iraqis deserve and desire freedom,” Clinton had declared. On the evening the bombs began to drop, Vice President Al Gore told CNN’s Larry King:
You allow someone like Saddam Hussein to get nuclear weapons, ballistic missiles, chemical weapons, biological weapons. How many people is he going to kill with such weapons? . . . We are not going to allow him to succeed. [emphasis added]
What these and other such statements remind us is that, by the time George Bush entered the White House in January 2001, the United States was already at war with Iraq, and in fact had been at war for a decade, ever since the first Gulf war in the early 1990’s. (This was literally the case, the end of hostilities in 1991 being merely a cease-fire and not a formal surrender followed by a peace treaty.) Not only that, but the diplomatic and military framework Bush inherited for neutralizing the Middle East’s most fearsome dictator had been approved by the United Nations. It consisted of (a) regular UN inspections to track and dispose of weapons of mass destruction (WMD’s) remaining in Saddam’s arsenal since the first Gulf war; (b) UN-monitored sanctions to prevent Saddam from acquiring the means to make more WMD’s; and (c) the creation of so-called “no-fly zones” over large sections of southern and northern Iraq to deter Saddam from sending the remnants of his air force against resisting Kurds and Shiite Muslims.
The problem, as Bill Clinton discovered at the start of his second term, was that this “containment regime” was collapsing. By this point Saddam was not just the brutal dictator who had killed as many as two million of his own people and used chemical weapons in battle against Iran (and in 1988 against Iraqis themselves). Nor was he just the regional aggressor who had to be driven out of Kuwait in 1991 by an international coalition of armed forces in Operation Desert Storm. As Clinton recognized, Saddam’s WMD programs, in combination with his ties to international terrorists, posed a direct challenge to the United States.
In a February 17, 1998 speech at the Pentagon, Clinton focused on what in his State of the Union address a few weeks earlier he had called an “unholy axis” of rogue states and predatory powers threatening the world’s security. “There is no more clear example of this threat,” he asserted, “than Saddam Hussein’s Iraq,” and he added that the danger would grow many times worse if Saddam were able to realize his thoroughly documented ambition, going back decades and at one point close to accomplishment, of acquiring an arsenal of nuclear as well as chemical and biological weapons. The United States, Clinton said, “simply cannot allow this to happen.”
But how to prevent it? An opportunity arose later the same year. In October 1998, Saddam threw out ten Americans who were part of a UN inspection team, and on the last day of the month announced that he would cease all cooperation with UNSCOM, the UN inspection body. On December 15, UNSCOM’s director, Richard Butler, reported that Iraq was engaged in systematic obstruction and deception of the internationally mandated inspection regime. Although the UN hesitated to invoke the technical term “material breach,” which would almost certainly have triggered a demand for a response with force by the world body, Clinton himself was determined to act. He had already received a letter from a formidable list of U.S. Senators, including fellow Democrats Carl Levin, Tom Daschle, and John Kerry, urging him to “respond effectively”—with air strikes if necessary—to the “threat posed by Iraq’s refusal to end its WMD programs.” After consulting with Great Britain and other allies, Clinton ordered Butler to pull out the remaining inspectors. On December 16, he launched Operation Desert Fox.
For four days, American and British planes and cruise missiles bombarded Iraqi sites in an effort to degrade Saddam’s programs. The key objective was to knock out communication-and-control networks—and in this, a Clinton official would assert, Desert Fox “exceeded expectations.” But the attacks did virtually nothing to destroy facilities suspected of housing weapons, most of which were in unknown locations. The only way to find out where they might be was by reintroducing UN inspectors, something Saddam now adamantly refused to permit.
Thus, in the end, Desert Fox proved a failure, not because of insufficient American firepower but because of Saddam’s defiance—and because of a lack of forceful follow-up. True, passage of the Iraq Liberation Act meant that the United States now had a regime-change resolution on the books and was providing a certain amount of money and aid for covert internal action against Saddam. True, too, Vice President Al Gore was a particularly strong supporter of these initiatives. But in the wake of Desert Fox, Saddam had conducted his own violent crackdown on potential opposition figures, which meant there was no hope for Iraqis to retake their country without massive outside help.
As 1999 dawned, the choices narrowed. Inspections had failed. So had air strikes and covert action. So had international trade sanctions, which imposed a new level of misery on the Iraqi people without putting any pressure on Saddam himself. The UN’s Oil-for-Food Program, created in 1996 in order to allow Iraq to sell some of its oil in exchange for food and other necessary supplies, appeared to be still another failure: Iraqis continued to starve, while Saddam seemed to grow only richer.
And so, “starting in early 1999,” as Kenneth Pollack, an official in Clinton’s National Security Council, would later recount, “the Clinton administration began to develop options to overthrow Saddam’s regime.”
A plan for an actual land invasion of Iraq had been drawn up a few years earlier under the stewardship of Colin Powell, then the chairman of the Joint Chiefs of Staff. It was updated after Desert Fox. Although (Pollack writes) “no one thought the U.S. public would support such an invasion,” this was now beginning to seem the only option.
Concurring with this judgment was Scott Ritter, an American who had served on the UN’s weapons-inspection team and had become notorious for his aggressive approach to his job. In testimony to the Senate Foreign Relations Committee in late 1998, Ritter castigated the Clinton White House for failing to confront Saddam with the threat of invasion. This hardly endeared him to the President, but it did win him two warm allies in the Senate. One was the Republican John McCain. The other was the Democrat John Kerry, who outspokenly declared that since Saddam clearly intended “to build WMD’s no matter what the cost,” America “must be prepared to use force to achieve its goals.”
But nothing would happen in 1999. At the end of the year, the UN passed Resolution 1284—an effort to get Saddam to accept a new inspection regime, called UNMOVIC, in exchange for lifting sanctions on all goods for civilian use. Yet, weak as the resolution was, it led to a split in the Security Council, with four members—including France, Russia, and China—abstaining from the vote. That split would become permanent. By 2000, life at the Security Council would turn into a constant battle of wills, with the U.S. and Great Britain in one corner and Russia, France, Germany, and China in another. Although George W. Bush would later come to be blamed for wrecking the coalition that had fought the first Gulf war, the reality is otherwise: the wreck occurred three years before he became President.
All the same, as the military historian John Keegan has pointed out, Resolution 1284 did signal the beginning of the end of Saddam Hussein. By refusing to re-admit inspectors, even under a relaxed sanctions regime, Saddam made it unmistakably clear that only a credible threat of military force would make him budge, and only the exercise of military force would ever get him out.
Unfortunately, by this time Clinton had lost whatever limited appetite for armed confrontation he might earlier have entertained. According to Pollack, the lengthy campaign to dislodge Slobodan Milosevic in Kosovo had given the White House a taste of what might go wrong in open-ended military operations, and Clinton’s advisers “were not looking to back into a war with Saddam the way they had backed into one with Milosevic.” Besides, the proposed invasion plan called for 400,000-500,000 troops and six months of laborious preparation, which would stretch to the breaking point an American military that, thanks to Clinton-era cuts, was now little more than half the size of the one that had fought Desert Storm.
In his final year in office, Clinton decided that his contribution to Middle East peace would lie not in the removal of Saddam Hussein but in a grand attempt to resolve the conflict between the Palestinians and Israel. With this, he missed his last chance to deal forcefully with the man he was publicly committed to overthrowing. Worse, by focusing his energies on a futile effort to placate Yasir Arafat, he diverted American attention not only from Saddam but from the mounting challenge represented by Osama bin Laden—not to mention the possibility that these two sinister figures might some day find common ground. As Clinton’s administration ended and George W. Bush’s began, Iraqi defectors were claiming that Saddam had set up camps in which terrorists connected with bin Laden were training to attack the United States.
Confronting the same threat faced by the Clinton administration, and the same policy predicament, the incoming Bush team arrived at the same conclusion—namely, to do nothing. Bush’s advisers, like Clinton’s, were split. In the Defense Department, some, like Paul Wolfowitz, seemed (according to Pollack) “obsessed” with getting rid of Saddam—though in point of historical fact Wolfowitz’s position was not strikingly dissimilar to Al Gore’s. For others, like Secretary of State Colin Powell, Iraq “simply did not measure up” to China or Russia or Europe on the scale of international importance.
Most, like Vice President Cheney, were in the middle. They saw plainly enough that containment was not working, and they also saw the long-term benefits of regime change. But they recognized as well that (to quote Pollack again) “toppling Saddam was going to be difficult, potentially costly, and risky.” The net result was that by the summer of 2001, despite the almost complete collapse of the sanctions regime, “it had become clear that the administration was not going to pursue a radically new approach to Iraq.”
Then came September 11. A hitherto obscure terrorist threat emanating from the Arab-Muslim world had reached out to commit mass murder against Americans on their own soil, and in so doing had changed everyone’s priorities. Hillary Clinton, the new junior Senator from New York, put it this way in an interview with Dan Rather two days after 9/11, using starkly confrontational language of the sort for which President Bush would soon be pilloried: “Every nation has to be either for us, or against us. Those who harbor terrorists, or who finance them, are going to pay a price.”
As for the administration, it had come to understand something else—namely, that its responsibility extended beyond the clear and present danger presented by nations, like Afghanistan, guilty of harboring terrorists. It had to prepare for future threats as well. In that regard, Iraq moved quickly to the head of the list.
As Douglas Feith explains in War and Decision, the recently published memoir of his days as Under Secretary for Policy in Donald Rumsfeld’s Defense Department, there were several reasons why a post-9/11 strategy had to focus on Saddam Hussein. First among them were Saddam’s ties to terrorist groups, of which the Clinton administration had been well aware and had repeatedly cited. Although no evidence existed that Saddam had been involved in al Qaeda’s attack on New York and Washington—and no Bush official ever asserted otherwise—the White House learned after the liberation of Afghanistan that Abu Musab Zarqawi, one of al Qaeda’s key operatives, had found safe haven in Iraq. There was also some evidence (cited by General Tommy Franks in his own memoir, American Soldier) that Zarqawi “had been joined there by other al-Qaeda leaders.”
In March 2002, a New Yorker article described the presence in northern Iraq of a radical Islamic group, Ansar al-Islam, whose members had been trained in al-Qaeda camps in Afghanistan but were being paid through Saddam Hussein’s intelligence service—suggesting a connection “far closer than previously thought.” From other intelligence sources it appeared that Zarqawi was in fact heading Ansar al-Islam, and that its members were training for WMD use against Western countries. Finally, in September 2002, the CIA released a report, Iraqi Support for Terrorism, asserting that “Iraq continues to be a safe haven, transit point, or operational node for groups and individuals who direct violence against the United States.”1
We now know, thanks to captured Iraqi documents, that American intelligence seriously underestimated the extent of Saddam’s ties with terrorist groups of all sorts. Throughout the 1990’s, it emerged, the Iraqi intelligence service had worked with Hamas, the Palestine Liberation Front, and Yasir Arafat’s private army (Force 17), and had given training to members of Islamic Jihad, the terrorist group that assassinated Egyptian president Anwar Sadat. Saddam also collaborated with jihadists fighting the American presence in Somalia, including some who were members of al Qaeda. It may be that al Qaeda had no formal presence in Iraq itself, but the captured documents show that it did not need such a presence. Saddam was willing to work with any terrorists who targeted the United States and its allies, and he reached out to al-Qaeda-affiliated groups (and vice-versa) whenever the occasion warranted.
Second, as Feith relates, Saddam had the WMD know-how, as well as probable stockpiles, that terrorist groups like al Qaeda might want for future operations. Just weeks before 9/11, a privately sponsored exercise had simulated a smallpox attack on the United States. The results were chilling: more than three million people infected within two months, and one million dead. “Today,” declared the official report, “we are ill-equipped to prevent the dire consequences of a biological-weapon attack”—a conclusion that would cast a shadow of apprehension over the post-9/11 Defense Department, as dark as the shadow cast by the anthrax scare that gripped the country after five people received fatal doses in the mail and by the discovery during the invasion of Afghanistan that the Taliban had been experimenting with chemical weapons.
Where would terrorists look to acquire such inexpensive but murderous weapons? As far as anyone knew, the place to start would be Saddam’s Iraq. UNSCOM had uncovered Saddam’s extensive biological-weapons (BW) program, dating back to before Desert Storm, only in 1995. Since then, Iraq claimed to have destroyed its BW stockpile—but there was no proof of this. Similar doubts surrounded Saddam’s chemical-weapons (CW) program, of which even bigger stockpiles remained unaccounted for. (In UNSCOM’s estimate, there were 1.5 undocumented tons of VX gas alone.) In addition, UNSCOM believed Saddam still possessed clandestine Scud missiles, useful as a delivery system for a chemical attack.
Third was Saddam’s declared antipathy toward the United States. In 1993 he had hatched a plot to assassinate his then-nemesis, former President Bush, during a visit by the latter to Kuwait. A “general suspicion” among Clinton-administration officials, in Pollack’s words, was that Saddam was also “working on a variety of terrorist contingencies” in the event that the United States ever tried to topple his regime. He was the only world leader who actually applauded the attacks of 9/11.
Finally and most ominously, Saddam was emerging, like a great malignant moth, from the containment regime in place since the end of the first Gulf war. By the end of the 1990’s, sanctions had become a joke, proving less a liability to Saddam than an asset in rebuilding his power. In October 2000 a supposedly “contained” Iraq had boldly renewed its military cooperation with Syria, moving divisions to the Syrian border and even deploying troops into Syria itself to put pressure on Israel. Since then, Saddam’s attacks on American and British air patrols over Iraq had grown more intense. When General Tommy Franks met with Defense Secretary Donald Rumsfeld after the liberation of Afghanistan, these attacks headed his daily list of challenges. “It would only be a matter of time,” Feith writes, “before Iraq was once again engaged in a violent clash with the United States.”
With the fall of Afghanistan, moreover, Bush’s military planners had become more rather than less nervous about the Iraqi threat. Osama bin Laden’s escape from his Tora Bora hideout raised the possibility that he might find safe haven in Baghdad. (Saddam had offered the terrorist leader sanctuary at least once before, after his 1997 expulsion from Sudan.) And as for weapons of mass destruction, on this issue the CIA and its director, George Tenet, still had no doubts, and Tenet’s dogmatic certainty on the point was backed up by the UN inspectors themselves.
Since 1998, no inspector had visited Iraq. Huge quantities of chemical WMD’s were known to have existed before Desert Storm. Quantities had been destroyed since. How much more was left? Saddam had never made the accounting demanded by the UN. In its absence, the UN’s chief weapons inspector, Hans Blix, reasonably inferred that considerable quantities must still have existed.
Today we know that this conviction—which had underlain Clinton’s air strikes in 1998 and the UN’s desperate efforts to reinsert its inspectors into Iraq, and which was shared by virtually every foreign intelligence service, from the French and Germans to the British and Japanese—was the weakest link in the case for going to war with Iraq. But who was responsible for the misimpression? Some have blamed it on the assurances of former Iraqi exiles, especially Ahmed Chalabi of the Iraqi National Congress; their motive was presumably to convince the Bush administration to depose the dictator and put them in charge. A more likely culprit seems to have been another Iraqi exile, Rafid Ahmed Alwan, code-named “Curveball,” who arrived in Germany in 1999 telling horrific tales of Saddam’s BW arsenal.
Exiles and/or charlatans may indeed have played a part in misleading the CIA and other Western intelligence services. But by far the most important deceiver was Saddam himself. For more than a decade, he had consistently acted like a guilty man, evading inspections and moving trucks from palace to palace in the dead of night. Even his own army officers, Feith writes, believed he was hiding biological and chemical weapons. And as became clear from his post-capture interrogations, this was precisely the impression he intended to convey, assuming that it would be enough in itself to deter not only an American invasion but an insurrection by Iraqi Kurds or Shiites, or even—his most consistent worry—an attack by Iran.
It never seems to have occurred to Saddam that an American President would take him seriously enough to decide that his supposed WMD stockpiles and programs had to be destroyed by any means necessary. But there was nothing unreasonable about the President’s inference—which was the inference of most American politicians as well. No one knew for sure, just as no one knew what links Saddam might have with al Qaeda and other terrorist groups. If WMD’s existed once, they might well still exist; nothing, and certainly not Saddam’s behavior, suggested otherwise.
Nor was there any way to know, at least until troops were on the ground. Thus, dealing forthrightly with the issue entailed, first, threatening Iraq with a full-scale land invasion and then, if Saddam refused to back down, launching an actual attack.
Convincing Congress that the United States enjoyed a right of “anticipatory self-defense” against Saddam was hardly a difficult task. On the contrary, in September 2002 the Senate virtually arm-twisted Bush into giving it time to pass a new and more specific resolution than the Clinton-era one authorizing regime change in Iraq. In ringing the tocsin, moreover, leading Democrats spoke at least as assertively as leading Republicans. One of them was Charles Schumer:
Hussein’s vigorous pursuit of biological, chemical, and nuclear weapons, and his present and potential future support for terrorist acts and organizations . . . make him a terrible danger to the people of the United States.
Another was Hillary Clinton:
My position is very clear. The time has come for decisive action to eliminate the threat posed by Saddam Hussein’s WMD’s.
John Edwards was still another:
Every day [Saddam] gets closer to his long-term goal of nuclear capability.
Howard Dean, then the governor of Vermont, was of a similar mind:
There’s no question that Saddam Hussein is a threat to the U.S. and our allies.
More than half of Senate Democrats, including John Kerry and Joseph Biden, joined with Republicans in authorizing the President “to defend the national security of the United States against the continuing threat posed by Iraq,” and in so doing to enforce all the relevant but ineffectual resolutions passed by the UN Security Council. In the House, 81 Democrats (out of 209 in total) concurred. Later, many would claim that they had been tricked or misled or even lied to. In fact, the vote reflected nothing more than an affirmation of the old Clinton-era position, now urgently reinforced by the experience of 9/11.2
It was, after all, California’s Nancy Pelosi who had warned the nation on December 16, 1998, during Operation Desert Fox, that Saddam’s “development of WMD technology . . . is a threat to countries in the region.” During the House debate in October 2002, Pelosi sounded the same urgent theme, summing up a threat whose imminence the Democrats had been insisting upon for years. “Yes,” reiterated the tireless Pelosi, “[Saddam] has chemical weapons. He has biological weapons. He is trying to get nuclear weapons.”
That said it all.
As the leaves turned in Washington in the fall of 2002, mainstream Democrats were on board with Bush, just as they had been on board with Clinton. The real reluctance for war came from Republican ranks—and from within the administration itself. The most serious dissenter was Secretary of State Colin Powell, together with his assistant Richard Armitage. Both men wanted to find a way to prop up the containment “box” around Saddam without having to resort to drastic military action.
Their hopes, however, were already more than three years out of date. The main feature of the containment regime had become the Oil-for-Food program, set up by the United Nations in 1996 with Clinton-administration approval. Within months, the program had become a spigot of cash for Saddam and his family and cronies. The full extent of the corruption, and the full roster of who paid in and who was paid out, may not be known for decades, if ever. But the overall picture is reasonably clear, thanks again in large part to documents seized in the 2003 invasion.
Saddam had shrewdly realized that vouchers for the sale of his oil might serve as a kind of international currency, distributed by him to favored customers who would be obliged to pay him kickbacks, all out of reach of the scrutiny of the UN. Eventually, UN administrators were brought into the conspiracy as well.3 Within a year the program had miraculously restored Saddam’s personal wealth and power, even as the Iraqi people continued to suffer. By the time of the U.S. invasion, he had skimmed at least $21 billion from the program, in addition to the billions made through smuggled oil sales to other Middle East countries, including his old enemy Iran.
The list of recipients of Oil-for-Food vouchers grew to more than 270 names, constituting a Who’s Who of slippery international politicians and diplomats—all of whom, needless to say, opposed any talk of military action against Iraq. On the Security Council, Russia, France, and China, key adversaries of U.S. policy toward Iraq going back to Clinton days, were among Saddam’s key beneficiaries. Not only was Oil-for-Food the biggest scandal in UN history, it had turned the UN’s mandate inside out. A program established to punish a rogue tyrant was systematically making him more powerful; nations that were supposed to be his custodians had become his accomplices; and the institution whose purpose was to protect international order was destroying it.
At the time, though, no one in the Bush administration knew this. That was why, in September 2002, President Bush was willing to yield to Colin Powell and British prime minister Tony Blair and ask the UN for one more resolution, this one explicitly threatening Saddam with military force if he did not finally comply with all the preceding resolutions against him.
What Powell found at the UN astonished even him. At a press conference, the French foreign minister, Dominique de Villepin, shrieked that “nothing! nothing!” justified war—making Powell so angry that, as he would later tell the reporter Bob Woodward, he could barely contain himself. “Any leverage with Saddam was linked directly to the threat of war,” Powell recalled, “and the French had just taken the threat off the table.” He could not believe the Europeans’ stupidity. Neither could the President. But it was not stupidity; it was self-interested duplicity.
The UN’s refusal to hold Saddam accountable had the unintended effect of bringing even Powell into line with the White House. In conversations with Bush, he began to use terms like “mosh pit” and “quagmire” to describe the world body. Still, the decision had been made to go back for another, tougher resolution—something that Bill Clinton in his time had conspicuously not secured—either for Desert Fox or for Kosovo.
In going to the UN, Bush willy-nilly allowed the focus to shift from the threat posed by Saddam to the United States, which would justify anticipatory action in self-defense, to Saddam’s defiance of existing UN resolutions, which conferred on the Security Council the right to approve or disapprove of action. Suddenly the salient point at issue was Saddam’s actual stockpiles, determining the nature and extent of which had been the UN’s focus for more than a decade. This led to a crucial delay of more than six months, from September 2002 until March 2003, a period Saddam duly exploited both to build an international coalition aimed at blocking Security Council action and to prepare his own defensive plans.
The case against Saddam, even by the UN’s own rules, was rock solid, and in November 2002 the Security Council did unanimously issue Resolution 1441, ordering him to disarm his WMD’s or face “serious consequences.” Everyone understood that “serious consequences” meant the use of force, including on Iraqi territory. But the Europeans, determined to thwart the U.S., declined to take it that way. No military action was envisaged, they insisted; the passage of Resolution 1441 was action enough. Large crowds mobilized across Western Europe to denounce the very thought of war.
On November 25, 2002, under the terms of 1441, UN inspectors re-entered Iraq. Their searches turned up nothing. On December 7, Iraq dumped thousands of pages of documents on UNMOVIC. Even Hans Blix recognized that this mountain of materials, some of them over a decade old, contained nothing to clear up the question of what had happened to Saddam’s stockpiles. All the same, Blix asked for time to sift through the document dump, knowing the task would consume months.
As Bob Woodward notes in Plan of Attack, his account of the run-up to the war, Bush so far had been “a study in patience.” (It is also true that General Franks was not yet ready for offensive operations, and needed time for the buildup of American forces in Kuwait that was the leverage behind the implicit threat of force.) The President held back until Blix’s interim report on January 27, 2003, which even the New York Times labeled “grim.” There was nothing in it to suggest that Iraq had accepted the principle of complying with UN resolutions or intended to take any of the steps that, in Blix’s words, “it needs to carry out to win the confidence of the world and to live in peace.”
Blix himself still held out the hope that, somehow, at some future time, Saddam would yet decide to comply. But his mission was doomed from the start. “UNMOVIC had the impossible task,” John Keegan notes, “of proving a negative, that Saddam no longer had forbidden weapons.” But the burden of proof lay legally with Saddam himself, as stated in Resolution 1441, and it was his failure to comply with that demand, and not Bush’s supposed doctrine of “preemptive war,” that triggered the U.S. invasion. What finally forced the Americans’ hand was the UN’s failure or refusal to acknowledge the very existence of the demand that it itself had made.
The UN’s moment of truth came on February 5, 2003, when Powell gave a final presentation of the case against Saddam to the Security Council, with CIA director George Tenet sitting behind him. Powell’s 76-minute exercise in destructive analysis documented what everyone knew was the case: that Saddam was in “material breach” of the UN’s own stated requirements. That being so, the UN had lost any empirical grounds for declining to take military action. The only question left was whether the Security Council had the moral courage to stand behind its own resolution.
Later, Powell’s defenders would charge that he had been tricked or deceived into making the speech—and in retrospect he said he was humiliated by the thought that he had conveyed false or misleading information. In fact, as Feith shows, the speech came at Powell’s own suggestion, and before giving it he had ruthlessly winnowed out any evidence he considered shoddy or dubious. Even so, he offered over 100 examples of Saddam’s evasion and deceit, evidence based on eyewitness accounts, radio intercepts, and satellite photos. Nor did he hesitate to bring up the al-Qaeda connection as an indicator of possible future horrors along the lines of 9/11. “Ambition and hatred are enough to bring Iraq and al Qaeda together,” Powell asserted, and only military action could ensure that they forever remained apart.
His words were wasted. Russia, France, and Germany stood fast against war “under any circumstances.” Their intransigence, reinforced by their own secret links to Saddam, doomed any final Security Council vote for action. But Powell’s speech did at least confirm the near-unanimity of the official U.S. position. As the late Washington Post columnist Mary McGrory wrote the next day, “I can only say he persuaded me, and I was as tough as France to convince.” Indeed, even before Powell’s speech, Joseph Biden, reacting to Blix’s interim report, had summed up the feeling of many Democrats in these words:
Saddam is in material breach of the latest UN resolution. . . . The legitimacy of the Security Council is at stake, as well as the integrity of the UN. [If] Saddam does not give up those WMD’s and the Security Council does not call for the use of force, I think we have little option but to act with a larger group of willing nations, if possible, and alone if we must.
The die was cast.
Operation Iraqi Freedom got under way on March 20, 2003. In October of that year, the Iraqi Survey Group (ISG) reported it was unable to find any of the WMD stockpiles that everyone believed were in Iraq. Still, what the group did find, in the words of its director David Kay, was “dozens of WMD-related program activities and significant amounts of equipment” that Saddam had concealed from Blix’s inspectors in 2002: proof, in other words, of Saddam’s clear material breach of Resolution 1441.
Of course, this was not the element of the ISG report that attracted the attention of the war’s critics. According to the New York Times, the ISG’s findings supported the view that Bush had “used dubious intelligence to justify his decision to go to war.” That was and is false.
While Kay and his ISG inspectors found no WMD’s, they did not say there had been none. To the contrary: “My view,” Kay stated, is that “Iraq indeed had WMD’s” and that smaller stocks still existed on Iraqi territory. Later he told Britain’s Daily Telegraph that he had found evidence of some WMD’s having been moved to Syria before the war. A question mark hangs over that possibility to this day.
In testifying to the Senate, moreover, Kay asserted unequivocally that “the world is far safer with the disappearance and removal of Saddam Hussein,” adding that the upper echelons of the Iraqi regime had become divided into two factions: those willing to sell to the highest bidder whatever they knew about manufacturing WMD’s and those, including Saddam himself, willing to buy someone else’s know-how at equally high prices. Saddam’s FBI interrogations would confirm Kay’s analysis. There Saddam admitted that he intended to rebuild his WMD programs once he rid himself of the international sanctions imposed after 1991. He knew that WMD’s were the key to his future power, just as they had been in the past. Had he been allowed to remain Iraq’s dictator, he would have emerged as an even greater international menace than before the Gulf war.
Those who condemn Bush’s decision to go to war, bemoan its cost in material and human terms, and deplore the damage it has allegedly done to the American image around the world should consider what would have happened if there had been no war. It is not just that millions of Iraqis would still be in the iron grip of Saddam and his police state. The fact is that, by 2002, no inspection regime and no amount of international pressure, no matter how plumped up by yet another UN resolution, would have kept him contained any longer. The Oil-for-Food corruption would have continued to grow unrestrained, finding reliable co-conspirators in Europe and the Middle East. Rising oil prices over the next half-decade would have kept Saddam awash in cash, allowing him to rebuild his military and cement his connections with powers like Syria and Russia. He had called our bluff before; but this time it was no bluff.
Given the logic of the situation, at what point could Bush have avoided war? To have taken the military option off the table before going to the UN would have undercut everything his analysts and policy advisers, including at the CIA, had been saying since 9/11—and brought howls of protests from leading Democrats in Congress. Doing so after the passage of Resolution 1441 would have made a mockery of the rationale for going to the UN in the first place, and, as Powell explicitly recognized, undermined the resolution itself.
Should we have backed off after the Blix report on January 27, 2003, even as the American troop buildup in Kuwait was in full swing? That would have devastated Bush’s reputation as a war leader after his resounding success in Afghanistan, and guaranteed that he would never be more than a one-term President (which may have been the real objective of his critics anyway).
Saddam Hussein had become a virus infecting the international body politic. The leading symptom of that infection was Oil-for-Food—emblematic of a moral anarchy let loose in the world that would prevail as long as Saddam remained in power. That anarchy had destroyed Iraq; eaten away the legitimacy of the United Nations; and almost wrecked NATO. Indeed, it is hard to see how NATO members already embittered by the diplomatic battle in the UN in 2002 could have continued to cooperate militarily in Kosovo or Afghanistan. Nor is it clear that Eastern European nations would have wanted to join a NATO led by a power, the United States, that had displayed such bare-faced unwillingness to stand up to a dangerous dictator.
“My job is to secure America,” George Bush told Bob Woodward in 2004. “I also believe that freedom is something people long for.” Had he wished, he could also have referred back to the words uttered by President Clinton six years earlier, in February 1998:
Let’s imagine the future. What if [Saddam] refuses to comply, and we fail to act, or take some ambiguous third route? . . . Well, he will conclude that the international community has lost its will. He will then conclude that he can go right on and do more to rebuild an arsenal of devastating destruction. And some day, I guarantee you, he’ll use the arsenal.
Whatever one wants to say about the conduct of the Iraq war, going to war to remove Saddam Hussein in 2003 was a necessary act. It should and could have been done earlier, had the Clinton White House, which understood the need, not wasted the opportunity through timidity and bluster. If, after 9/11, Bush had then blinked in his turn, he might indeed have found himself out of office by January 2005, and someone else would have had to tackle the job under much more disadvantageous conditions.
To judge by his unequivocal pronouncements pre-2003, and as improbable as it sounds now, that someone might well have been Al Gore, the erstwhile hawkish Vice President who had championed the Iraq Liberation Act, or indeed John Kerry, who back in 1998 told Scott Ritter that containment of Saddam was not working and that the time had come to use force. If Bush had failed to act, either one of these two men might have come to office in January 2005 publicly prepared to deal with the “gathering threat” that his predecessor had unaccountably allowed to grow larger and closer and ever more virulent.
1 This document would become central to later claims that the administration “manipulated intelligence” for political purposes. But neither the bipartisan Silberman-Robb Commission nor the Senate Intelligence Committee found a single case of such manipulation or, for that matter, of political pressure being put on intelligence analysts. What the analysts reported was sometimes wrong, but not because policymakers made it so.
2 For a full refutation of the charge that we were “misled” into war, see Norman Podhoretz, “Who Is Lying About Iraq?,” in the December 2005 COMMENTARY.
3 See Claudia Rosett, “The Oil-for-Food Scam: What Did Kofi Annan Know, and When Did He Know It?,” COMMENTARY, May 2004.
Of all the surprises of the Trump era, none is more notable than the pronounced shift toward Israel. Such a shift was not predictable from Donald Trump’s conduct on the campaign trail; as he sought the Republican nomination, Trump distinguished himself by his refusal to express unqualified support for Israel and his airy conviction that his business experience gave him unique insight into how to strike “a real-estate deal” to resolve the Israeli–Palestinian conflict. In addition, his isolationist talk alarmed Israel’s friends in the United States and elsewhere if for no other reason than that isolationism, anti-Zionism, and anti-Semitism often go hand in hand in hand.
But shift he did. In the 14 months since his inauguration, the new president has announced that the United States accepts Jerusalem as Israel’s capital and has declared his intention to build a new U.S. Embassy in Jerusalem, first mandated by U.S. law in 1995. He has installed one of his Orthodox Jewish lawyers as the U.S. ambassador and another as his key envoy on Israeli–Palestinian issues. America’s ambassador to the United Nations has not only spoken out on Israel’s behalf forcefully and repeatedly; Nikki Haley has also led the way in cutting the U.S. stipend to the refugee relief agency that is an effective front for the Palestinian terror state in Gaza. And, as Meir Y. Soloveichik and Michael Medved both detail elsewhere in this issue, his vice president traveled to Israel in January and delivered the most pro-Zionist speech any major American politician has ever given.
Part of this shift can also be seen in what Trump has not done. He has not signaled, in interviews or in policy formulations, that the United States views Israeli actions in and around Gaza and the West Bank as injurious to a future peace. And his administration has not complained about Israeli actions taken in self-defense in Lebanon and Syria but has, instead, supported Israel’s right to defend itself.
This marks a breathtaking contrast with the tone and spirit of the relationship between the two countries during the previous administration. The eight Obama years were characterized by what can only be called a gut hostility rooted in the president’s own ideological distaste for the Jewish state.
The intensity of that hostility ebbed and flowed depending on circumstances, but from early 2009, it kept the relationship between the United States and Israel in a condition of low-grade fever throughout Barack Obama’s tenure—never comfortable, never easy, always a bit off-kilter, always with a bit of a headache that never went away, and always in danger of spiking into a dangerous pyrexia. That fever spike happened no fewer than five times during the Obama presidency. Although these spikes were usually portrayed as the consequences of the personal friction between Obama and Israeli Prime Minister Benjamin Netanyahu, that friction was itself the result of the ideas about the Middle East and the world in general Obama had brought with him to the White House. In this case, the political became the personal, not the other way around.
Given the general leftish direction of his foreign-policy views from college onward, it would have been a miracle had Obama felt kindly disposed toward the Jewish state’s own understanding of its tactical and strategic condition. And Netanyahu spoke out openly and forcefully to kindly disposed Americans—from evangelical Christians to congressional Republicans—about the threats to his country from nearby terrorism and rockets, and a developing nuclear Iran 900 miles away. His candor proved a perpetual irritant to a president whose opening desire was to see “daylight” (as he said in February 2009) between the two countries. Obama caused one final fever spike as he left office by refusing to veto a hostile United Nations resolution. This appeared churlish but was, in fact, Obama allowing himself the full rein of his true and long-standing convictions on his way out the door.
The things Trump both has and has not done should not seem startling. They constitute the baseline of what we ought to expect one ally would say and not say about the behavior of another ally. But as Obama’s disgraceful conduct demonstrated, Israel is not just another ally and never has been. It is a unique experiment in statehood—a Western country on Mideast soil, born from an anti-colonialist movement that is now viewed by many former colonial powers as an unjust colonial power, created by an international organization that is now largely organized as a means of expressing rage against it.
Historically, American leaders have had to reckon with these unique realities—and the fact that the hostile nations surrounding Israel and hungering for its destruction happen to sit atop the lifeblood of the industrial economy. The so-called realists who claim to view the world and the pursuit of America’s interests through cold and unsentimental eyes have experienced Israel mostly as a burden.
Through many twists and turns over the seven decades of Israel’s existence, they have felt that America’s support for Israel is mostly the result of short-sighted domestic political concerns for which they have little patience—the wishes of Jewish voters, or the religious concerns of evangelical voters, or post-Holocaust sympathy that has required (though they would never say it aloud) an unnatural suspension of our pursuit of the American national interest.
Israel created problems with oil countries, and with the United Nations, and with those who see the claims for the necessity of a Jewish state as a form of special pleading. As a result, the realists have spent the past seven decades whispering in the ears of America’s leaders that they have the right to expect Israel to do things we would not expect of another ally and to demand it behave in ways we would not demand of any other friendly country.
The realists and others have spent nearly 50 years propounding a unified-field theory of Middle East turmoil according to which many if not all of the region’s problems are the result of Israel’s existence. Were it not for Israel, there would have been no regional wars in 1956, 1967, 1973, and 1982—no matter who might have borne the greatest degree of responsibility for them. There would have been no world-recession-inducing oil embargo in 1973, because there would have been no Yom Kippur War to provoke it. Were it not for Israel, there would be no Israeli–Palestinian problem. There would have been other conflicts in the region, but not these.
Unhappiness about the condition of the Palestinians in a world with Israel was held to be the cause of existential unhappiness on the Arab street and therefore of instability in friendly authoritarian regimes throughout the Middle East. Meanwhile, Israel’s own pursuit of what it and its voting populace took to be their national interests was usually treated with disdain at the very least and outright fury at moments of crisis.
It was therefore axiomatic that the solution to many if not most of the region’s problems ran right through the center of Jerusalem. It would take a complex process, a peace process, that would lead to a deal—a deal no one who believed in this magical process could actually describe honestly and forthrightly or give a sense as to what its final contours would be. If you could create a peace process leading to a deal, though, that deal itself would work like a bone-marrow transplant—through a mysterious process spreading new immunities to instability in the Middle East that would heal the causes of conflict and bring about a new era.
Again, this was the view of the realists. With Israel’s 70th anniversary coming hard upon us, the question one needs to ask is this: What if the realists were nothing but fantasists? What if their approach to the Middle East from the time of Israel’s founding was based in wildly unrealistic ideas and emotions? Central to their gullibility was the wild and irrational idea that peace was or ever could be the result of a process. No, peace is a condition of soul, an exhaustion from the impact of conflict, born of a desire to end hostilities. Only after this state is achieved can there be a workable process, because both parties would already have crossed the Rubicon dividing them and would only then need to work out the details of coexistence.
There was no peace to be had. The Arab states didn’t want it. The Palestinians didn’t want it. The Israelis did and do, but not at the expense of their existence. The Arabs demanded concessions, and the Israelis have made many over the years, but they could not concede the security of the millions of Israel’s citizens who had made this miracle of a country an enduring reality. The realists fetishized “process” because it seemed the only way to compel change from the outside. And so Israel has borne the brunt of the anger that follows whenever a fantasist is forced to confront a reality he would rather close his eyes to.
That is why I think what Trump and his people have done over the past 14 months represents a new and genuine realism. They are dealing with Israel and its relationships in the region as they are, not as they would wish them to be. They are seeing how the government of Egypt under Abdel Fattah el-Sisi is making common cause with Israel against the Hamas entity in Gaza and against ISIS forces in the Sinai. They are witness to the effort at radical reformation in Saudi Arabia under Muhammad bin-Salman—and how that seems to be going hand in hand with an astonishing new concord between Israel and the Desert Kingdom over the common threat from Iran. This is a harmonizing of interests that would have seemed positively science-fictional in living memory.
Mostly, what they are seeing is that an ally is an ally. Israel’s intelligence agencies are providing the kind of information America cannot get on its own about Syria and Iran and the threat from ISIS. Israel is a technological powerhouse whose innovations are already helping to revolutionize American military know-how. Israel’s army is the strongest in the world apart from the regional superpowers—and the only one outside Western Europe and the United States firmly locked in alliance with the West. Things are changing radically in the Middle East, and as the 21st century progresses it is possible that Israel will play a constructive and influential role outside its borders in helping to maintain and strengthen a Pax Americana.
Donald Trump is a flighty man. All of this could change. But for now, the replacement of the false realism of the past with a new realism for the 21st century seems like a revolutionary development that needs to be taken very, very seriously.
Of the making of Washington movies, there is no end. Kohelet said this in Ecclesiastes, I think. Or maybe it was Gene Shalit on the Today Show. It’s a truism in any case. Steven Spielberg’s latest entry in the genre, The Post, is for many Washingtonians the most powerful example in the long line. When the movie opened here in late December, there were reports of audiences cheering lustily and even dissolving in tears at the movie’s end, as if they were watching a speech by President Obama. The local paper ran news articles about it, along with numberless feature stories, interviews, op-eds, fact-checks, reviews, and reviews of reviews.
Which is excusable, I guess, since the movie is about the Washington Post. But then The Post is supposed to be about so many things. It’s about the First Amendment, depicting the agonies of the Post’s editor Ben Bradlee, and its owner, Katharine Graham, as they defy the Nixon administration to publish the top-secret Pentagon Papers. It’s about feminism and the personal evolution of Mrs. Graham from an insecure Georgetown socialite to Master of the Boardroom. It’s the story of the lonely courage of the leaker/whistleblower/traitor (your call) Daniel Ellsberg. It is also, so I read in the Post, a warning about the imperial designs of President Trump to smother a free press. And it’s been understood as a straightforward tale of political history, though the liberties Spielberg takes with his based-on-a-true-story are so extreme as to render it useless as a guide to what happened in the summer of 1971.
Running beneath it all is the motive that animates so many Washington movies: an impatience with the stuttering, halting processes of self-government. The wellspring from which the Washington movie flows is Frank Capra’s Mr. Smith Goes to Washington. The plot is familiar to everyone. Mr. Smith, a small-town bumpkin played by Jimmy Stewart—talk about stuttering and halting!—is appointed by sinister political bosses to a vacant Senate seat, on the assumption that he will be easily manipulated, like a movie audience. Instead, Smith stumbles upon an illicit land deal and exposes the Senate as a den of thieves. His filibustering floor speech rouses a populist outpouring from an army of alarmingly cute children. By the end of the movie, Mr. Smith has restored the nation to its democratic ideals.
Capra intended his movie to be a hymn to those ideals, and for nearly 80 years that’s what audiences have taken it to be. It is no such thing. Mr. Smith seethes with contempt for the raw materials of democracy: debate, quid pro quo deal-making, back-scratching compromise—all the tedious, unsightly mechanics that turn democratic ideals into functioning self-government. In Capra’s telling, democracy can be rescued only by anti-democratic means. An appointed charismatic savior (he’s not even elected!) uses a filibuster (favorite parliamentary trick of bullies and autocrats) to release the volatile pressure of a disenfranchised mob (the great fear of every democratic theorist since Aristotle). From Mr. Smith to Legally Blonde 2, the point of the Washington movie is clear: Left to its own devices, without an outside agent to penetrate it and cleanse it of its sins, self-government sinks into corruption and despotism.
Steven Spielberg is the closest thing we have to Capra’s successor. Like all his movies, The Post has many charms: a running visual joke about Bradlee’s daughter making a killing with her lemonade stand threads in and out of the heavier moments like a rope light. On the other hand, his painstaking obsession with period detail often fails: A hippie demonstration against the Vietnam War looks as if it’s been staged by the cast of Hair. The set-piece speeches are insufferable, an icky glue of sanctimony and sentimentality. What we call the Pentagon Papers was a classified history of the lies, misjudgments, and incompetence of four presidents, from Harry Truman to Lyndon Johnson, ending in 1968. Sometimes the speechifying is directed at the malfeasance of these men, as when Bradlee bellows: “The way they lied—those days have to be over!”
Weirdly, though, the full force of the movie’s indignation is aimed at Richard Nixon. Historians might point out that Nixon wasn’t even president during the period covered by the Pentagon Papers. Intelligence officials told the president that the release of the papers would pose an unprecedented threat to national security. He ordered the Justice Department to sue to prevent the New York Times and the Post from publishing the top-secret material. In the movie’s account, this ill-judged if understandable response is equivalent to the official, strategic lies that accompanied tens of thousands of American soldiers to their deaths.
A particularly rich moment comes when Robert McNamara warns Mrs. Graham about Nixon’s capacity for evil. As Kennedy and Johnson’s defense secretary, McNamara was an early version of Saturday Night Live’s Tommy Flanagan, Pathological Liar: The Viet Cong are on the run! Yeah, sure, that’s the ticket! As much as anyone, McNamara, with his stupidity and dishonesty, guaranteed the tragedy of Vietnam. And yet here he is, issuing a clarion call to Mrs. Graham. “Nixon will muster the full power of the presidency, and if there’s a way to destroy you, by God, he’ll find it!” Later Bradlee compares Nixon to his predecessors: “He’s doing the same thing!”
Um, no. From his inauguration in 1969 onward, Nixon’s every move in Vietnam was intended to extricate the U.S. from the quicksand previous presidents had led us (and him) into. In this case, if in no other, Nixon was the good guy. He had nothing to lose, personally, from the publication of the Pentagon Papers, and maybe a lot to gain. After all, they demonstrated the villainy of his predecessors, not his own. (That came later.)
Yet the movie can’t entertain the possibility that Nixon could act on anything but the basest motives. He is a sinister presence. We see him through the Oval Office window, always alone, with his back turned, stabbing the air with a pudgy finger and cursing the Washington Post to subordinates over the phone. It’s actually Nixon’s voice in the movie, taken from the infamous tapes. Unfortunately, the actor’s movements don’t synchronize with the words; in such a somber thriller, the effect is inadvertently comic. It reminded me of watching the back of George Steinbrenner’s head in Seinfeld while Larry David spoke the Yankee owner’s dialogue. And Nixon was no Steinbrenner.
The most plausible explanation is that Nixon, in trying to stop publication of the Pentagon Papers, was doing what he said he was doing: his job. American voters had elected him to protect national security and, not incidentally, the prerogative of the president and the federal government to determine how best to protect it, including determining whether sensitive information should be kept secret. If he didn’t do his job the way voters wanted him to, they could get rid of him next time. You know, like in a democracy.
Ben Bradlee, Katharine Graham, and Steven Spielberg, not to mention those teary audiences, have no patience with such niceties. As it happens, in the end, the Pentagon Papers were a bust. The sickening detail they disclosed deepened but did not broaden the historical record, and by all accounts their impact on national security was negligible. Those facts don’t alter the creepiness of The Post’s premise—that the antagonists of an elected regime are allowed to go outside the law when it suits their view of the national interest. Charismatic saviors (and few people were more charismatic than Ben Bradlee) can save democracy from itself, but only by ignoring the requirements of democracy. Spielberg continues the tradition of the Washington movie. The Post is Capraesque—in the only true sense of the word.
Is Harvard assaulting the rights of students to free association in the name of a diversity standard it doesn’t live up to itself?
Harvard College is home to six all-male “final clubs.” Their members have access to houses in which they eat, socialize, and form bonds with their fellows. These clubs are as historic as they are renowned; most were formed in the 19th century and have had Kennedys, Roosevelts, and an endless procession of politicians, writers, and businessmen as former members. From the time of their origination, these exclusive institutions have been an object of fascination. When doors are closed, and only a small, elite group selected from an already hyper-elite campus has been invited inside, jealousy, curiosity, and frustration are sure to prevail.
The final clubs are financially independent from Harvard and have been entirely unaffiliated with the university since the 1980s, when the administration and the clubs clashed over the latter’s refusal to admit women. But that conflict, which had cooled over time, has recently resurfaced in a new and heightened manner.
In March 2016 Rakesh Khurana, the dean of Harvard College, set an April 15 deadline for the final clubs, at which time they were to inform the administration whether they would change course and become co-ed. Two forces drove Khurana’s action. The first was a report by Harvard’s Task Force on Sexual Assault Prevention released days earlier, after years of research. The report indicated that students who were involved with the final clubs were significantly more likely to have experienced some form of assault than those who were not. The second impetus was the administration’s position that the final clubs—and the ways in which they screened members—were in direct conflict with the ethos of the university.
The deadline passed without response from the clubs. On May 6, 2016, Dean Khurana wrote a letter to Harvard President Drew Faust. He proposed that, beginning with incoming freshmen who would matriculate in the fall of 2017, students who became members of what he termed “unrecognized single-gender social organizations” should be ineligible for leadership positions in Harvard organizations—meaning they could not serve as publication editors, captains of sports teams, leaders of theatrical troupes, and the like. And they would also be ineligible for letters of recommendation from the dean, necessary for many prestigious postgraduate opportunities such as the Rhodes and Marshall scholarships.
Khurana’s letter, and the sanctions proposed within, quickly became a cause célèbre. Harry R. Lewis, a professor of computer science and himself a former dean of the college, wrote Khurana a letter expressing his concern that “by asserting, for the first time, such broad authority over Harvard students’ off-campus associations, the good you may achieve will in the long run be eclipsed by the bad: a College culture of fear and anxiety about nonconformity.” Lewis went on to note:
The reliance on your judgement of what count[s] as Harvard’s values, and using that judgment to decide which students will receive institutional support, is a frightening prospect….The discretion exercised by the dean and his representatives will chill the activism of students in causes that might also be considered noncompliant with Harvard standards—for example, advocacy for a religion that does not allow women to be full participants, or a political party that opposes affirmative action. Such groups are excluded from your mandate, but only as a matter of your discretion. Why wouldn’t activism for such organizations color the support the College would offer their members, on the basis that such students are showing that their true colors are not pure Crimson?
Lewis also referenced the faculty’s responsibilities and noted that there was no precedent in Harvard’s Handbook for Students for the sanctions, thus suggesting that Khurana’s proposals might be outside the administration’s jurisdiction.
In September 2016, Khurana detailed the responsibilities of the “Single-Gender Social Organizations Implementation Committee.” The committee was tasked with
consulting broadly with the College community to address the following questions: 1) What leadership roles and endorsements are affected by the policy; 2) How organizations can transition to fulfill the expectations of inclusive membership practices; and 3) How the College should handle transgressions of the policy.
In addition to the committee’s work, the faculty went through several rounds of motions and debate, discussing myriad permutations of the sanctions, as well as the validity of the sanctions themselves.
In December 2017, the discussions came to a halt. Harvard’s administration flatly announced it would impose sanctions on students who joined those “unrecognized single-gender social organizations,” or USGSOs. This ostensibly final decision has provoked renewed outrage from students, faculty, and alumni, who have grounded their varied objections in ethical, philosophical, and legal concerns.
Until the 1960s shattered the American elite consensus on such matters, the collegiate experience was vastly different for students. Universities used to view their role as being in loco parentis—serving in place of the parents from whom their charges had recently separated. Today, on Harvard’s enchanting campus, teenagers and twentysomethings tend to rule the roost. Students have tremendous flexibility in building their course schedules, and rare is the lecture professor who takes attendance. Undergraduates come and go as they please, to and from wherever they please, with whomever they please, from the darkest hours of the night to the earliest hours of the morning.
But from the time America’s colleges came into being in the 17th and 18th centuries until just a few decades ago, these institutions imposed rules and regulations, curtailed freedoms, and designed a microcosmic world in which young adults would—in theory—learn how to navigate the reality that awaited them after graduation. They were eased into the world in a setting that constricted their choices and where the powers that be very consciously, and intentionally, refrained from treating them like adults. This was most evident in the controls placed on contact between the sexes.
A 1989 Harvard Crimson article by Katherine E. Bliss detailed the so-called parietal rules of the 1960s. It noted that “in 1964, the primary goal of College administrators was maintaining ‘an open door and one foot on the floor’ policy for students entertaining guests of the opposite sex in their rooms.” At that time, the student body and the administration were in conflict over the right to do as they pleased in their own dorms: “Students in 1964 were concerned with lengthening the number of hours they were allowed to spend with members of the opposite sex in the privacy of their own rooms.” If this sounds quaint, consider Bliss’s next point. “Few,” she observed, “could appreciate the fact that only a decade earlier, men and women were not allowed to enter the dormitories of the opposite sex at all.”
The original parietal rules meant that the women of Radcliffe, Harvard’s sister college, could be in the Harvard Houses only between the hours of 4 and 7 p.m. Robert Watson, a Harvard dean, explained at the time: “We have to watch the mores of our students. I do not want to see Harvard play a leading role in relaxing the moral code of college youth.” Indeed, he went on to say that “the college must follow the customs of the time and the community.…We cannot have rules more liberal than a standard generally accepted by the American public.”
Is there a single standard generally accepted by the American public today? For most of the country—with exceptions in deeply religious Jewish, Christian, and Islamic communities—ours is not an age that concerns itself with the amount of time that men and women spend together in solitude. But that doesn’t mean our era isn’t concerned with the moral development of our youth. On the contrary, leaders of America’s elite institutions today are as preoccupied with strengthening the souls of their charges as were the men who designed the parietal codes all those years ago. Only their aim is not sexual purity anymore, but rather social diversity. It is the heart and soul of the moral vision of our times, and administrators today are no less determined to see that students hew to that standard. But in their effort to serve in loco parentis in this fashion, educators are leaping across ethical—and possibly, legal—lines.
The fraternity-like final clubs have always been difficult to get into, much like Harvard itself. And for many years, the all-male final clubs were certainly characterized by discrimination. In a 1965 piece for the Crimson, Herbert H. Denton Jr., then an undergraduate, noted that while “the tacit ban on Jews has been relaxed in most clubs,” the “ban on Negroes is still in effect.” The same cannot be said today; while several of the final clubs are trying to retain their character by remaining single-gender organizations, they do not screen would-be members on the basis of race or religion.
Nonetheless, the administration has determined that they espouse values and ideas contrary to the Harvard spirit and must consequently be treated as an anachronistic wrong to be extirpated. In a statement issued in December, President Faust (along with William F. Lee, senior fellow of the Harvard Corporation) declared that
the final clubs in particular are a product of another era, a time when Harvard’s student body was all male, culturally homogenous, and overwhelmingly white and affluent. Our student body today is significantly different. We self-consciously seek to admit a class that is diverse on many dimensions, including on gender, race, and socioeconomic status.
The clubs have strict rules about speaking with the press, and every member I spoke with—both former and current students—did so on the condition of anonymity. Many brought up the topic of diversity, noting that in their experience, the members of their clubs were diverse in both ethnic and socioeconomic respects. Members of multiple clubs told me about policies under which an inability to pay club dues has no bearing on whether or not a student will be accepted. Indeed, one went so far as to note that the financial-aid offer is blatantly highlighted during the initiation process, so that those lower on the socioeconomic ladder are not even temporarily burdened by the misconception that their financial status might affect their membership.
The final clubs, like Harvard itself, may indeed be a product of another era. But just as Harvard has evolved, the final clubs have changed. Faust, Lee, and all of the actors in the anti-final-clubs camp ignore this. They also espouse a position that is as illogical as it is incoherent: Faust and Lee claim both that “students may decide to join a USGSO and remain in good standing” and that “decisions often have consequences, as they do here in terms of students’ eligibility for decanal1 endorsements and leadership positions supported by institutional resources.”
Most parents would not believe that their sons and daughters were in “good standing” if they came home from campus for winter break and told them they would be unable to be editor of the newspaper, captain of the debate team, or eligible for a Rhodes or Marshall scholarship. Yet Faust and Lee insist that “the policy does not discipline or punish the students.” It merely “recognizes that students who serve as leaders of our community should exemplify the characteristics of non-discrimination and inclusivity that are so important to our campus.” It’s hard to believe that Faust and Lee might honestly think that excluding students from leadership roles or prestigious postgrad opportunities would be construed as anything other than a punishment.
So why the insistence to the contrary? If the final clubs are, in the administration’s eyes, archaic, narrow-minded, discriminatory organizations, why not come out with an honest statement that calls for disciplining the students who dare to participate in these institutions? Lewis, the former dean, has explained this by making reference to what Faust and Lee do not mention—namely, Harvard’s Statutes—the internal bylaws governing the institution. Lewis cites part of the 12th statute, which lays out that “the several faculties have authority…to inflict at their discretion, all proper means of discipline.” He notes that “by declaring that ineligibility for honors and distinctions are ‘not discipline,’ what President Faust and Mr. Lee are saying is that the Statutes are not implicated, the matter is not one for the Faculty to decide, and no Faculty vote is needed to carry out the policy.” Indeed, Lewis notes that “it is important that the…policy not be discipline, because if it were discipline, and disciplinary action were taken against a student without a Faculty vote authorizing that policy, that student could challenge the action as not properly authorized.”
There is something else the Faust-Lee statement does not reference—and tellingly. In the beginning of the Harvard administration’s war on final clubs, concerns over sexual assault seemed to form the core of the issue. The Task Force on Sexual Assault Prevention reported that 47 percent of female college seniors who were in some way involved in final clubs—either because they attend events at the male clubs, or because they themselves are members of female clubs—said they had experienced “nonconsensual sexual contact since entering college.” Since “31 percent of female Harvard seniors reported nonconsensual sexual contact since entering college,” the report said, the data proved that “a Harvard College woman is half again more likely to experience sexual assault if she is involved with a Club than the average female Harvard College senior.” But Harvard’s sexual assault survey also found that 75 percent of “incidents of nonconsensual complete and attempted penetration reported by Harvard College females” happened in…Harvard dorms.
The report is sloppy and lumps together things that are not alike. For example, the Porcellian—Harvard’s oldest final club—does not allow any nonmembers through its doors. Charles Storey, who was then the Porcellian’s graduate president, provided a statement to the Crimson in which, among other things, he claimed that the club was “being used as a scapegoat for the sexual assault problem at Harvard despite its policies to help avoid the potential for sexual assault.” The Porcellian, he said, was “mystified as to why the current administration feels that forcing our club to accept female members would reduce the incidence of sexual assault on campus.” Indeed, Storey said, “forcing single gender organizations to accept members of the opposite sex could potentially increase, not decrease the potential for sexual misconduct.”
A day later, Storey apologized for his statement. A few days after that, he resigned as the Porcellian’s graduate president. His reasoning was admittedly inelegant, as it could be interpreted to suggest that club members would be unable to restrain themselves from committing sexual assault should women enter their domain. But Storey was not incorrect in pointing out that, by definition, women could not be subjected to unwanted touching in the Porcellian clubhouse if they were not allowed inside. For a club like the Porcellian, then, where instances of male-on-female sexual assault within the house are currently nonexistent, going co-ed would inherently guarantee that the opportunity for assault would expand. And that is why it is noteworthy (Storey’s humiliation notwithstanding) that the Faust-Lee declaration eliminated the attack on the final clubs for their ostensibly heightened role in unwanted sexual conduct. And why the entirety of the case against them now rests on their failure to hew to the administration’s convictions on gender egalitarianism.
The role that final clubs play in Harvard social life has been a contentious topic for decades. The perception has long been that socially, the members of Harvard’s male final clubs have too much power. On a campus with limited space for social gathering, the final-club mansions are often the source of the college’s most sought-after nightlife. Arguments have been made consistently over time that the exclusionary practices of the clubs—they typically accept only 10 to 25 new members a year—make for unpleasant and unfair campus social dynamics. But again, this conversation is happening at Harvard, an institution that prides itself on its prestige and exclusivity, and which accepted a mere 5.2 percent of applicants to its class of 2021.
Lewis, the former dean, is not exactly a natural ally for the clubs. He told me that he was “pretty tough with them” during his tenure, and that he was “instrumental in trying to get some of the bad behavior of some of the final clubs under control.” The issues that arose during his time as dean seem to have mostly been related to parties that grew too loud or students who became too drunk. But confronting specific problems as they arise is an approach entirely different from issuing an all-encompassing sanction on free association. At Harvard, specifically, the implications of such a policy could have long-term ramifications. “As an educational institution that, for better or worse, graduates more than its fair share of the leadership of the country, in both industry and technology, and government and law,” Lewis said, “we should not be teaching students that the way you control social problems is by creating bans and penalties against joining organizations.” His “bigger worry,” he said, is that “students will come to think it’s a reasonable thing to do.”
Beyond all these considerations lies an additional layer of complication: legality. Even as a private institution, Harvard’s autonomy may not be as absolute as it seems to believe. I spoke by phone with Harvey Silverglate, a lawyer who is currently representing the Fly, one of the clubs. He told me that “Harvard is misinformed if it has been told by its lawyers or by the office of the general counsel that it can do what it is trying to do, that is to say, punish a private off-campus club, punish Harvard students for joining a legal off-campus club, that is not on Harvard property, and over which the university has no control.” If Harvard goes forward with its plan, Silverglate noted, it will have “overstepped its legal powers.” He spoke extensively about the specific challenges that Harvard would face under Massachusetts state law, explaining that there are free-speech provisions in the Massachusetts constitution that are more protective of speech than the First Amendment to the U.S. Constitution. In fact, Silverglate noted, the state’s supreme court has ruled in several instances that Massachusetts’s declaration of rights “limits the power of private institutions over the people it governs.”
In its desire to avoid a lawsuit, the Harvard administration—or the team of lawyers that doubtlessly advised it—carefully crafted a rule that would apply equally to men and women. Had the sanctions applied solely to male-only clubs, the university would likely have been faced with a federal lawsuit or investigation into gender discrimination. Yet although the male final clubs were the sanctions’ primary target, the policy seems so far to have done the most harm to Harvard’s fraternities, sororities, and female final clubs.
One female student I spoke with is a member of one of the originally all-female final clubs that has recently gone co-ed rather than face the sanctions. She explained that within the club, there is a “feeling of resentment.” The USGSOs were all given the choice to either go co-ed or face the sanctions. “The girls clubs,” she told me, “have accepted it because they don’t have a lot of money.” While the male clubs have old and powerful alumni—and the money that comes with them—the female clubs are young and, by comparison, poor. “The boys can all sue,” she said, but “the girls clubs don’t have that privilege.” Having men in the club has certainly changed things for her. She explained: “It’s definitely different—I loved having an all-female space, and there was lots of merit to that socially and even in terms of networking.… I had this strong female network, and that was kind of eroded by going co-ed.”
Sorority members are facing similar challenges, but unlike the male and female final clubs that do not answer to a national body, they are unable to adapt as they see fit. Sororities and fraternities are unable to go co-ed without violating the rules of their national charters; the sanctions policy therefore affects their organizations most.
I spoke by phone with Evan Ribot, a Harvard alumnus from the class of 2014 who was president of the fraternity AEPI while on campus. Stressing that he could speak only for himself, and not on behalf of AEPI or the AEPI alumni network, he told me there was a “tenuous relationship between the administration and the fraternities” when he was on campus. “There was a sense that we operated in a gray zone because the university knew we existed,” he told me. “So we weren’t underground, but we also were not a recognized group.” As a result of the sanctions, AEPI at Harvard has dissolved itself and become a new organization, the gender-neutral “Aleph.” The organization is no longer affiliated with AEPI national.
“It’s a shame,” he said, “because some of my best friends were looking to join AEPI not because they wanted to be in an exclusionary single-sex organization but because they were looking for a place to fit in on a challenging campus.” The same is true for women: Ribot noted, “The sororities were an avenue for women to find their own spaces—not because they were looking to exclude men but because there is an inherent value to a group of women hanging out, just like there can be an inherent value to have men hanging out.… It’s not rooted in exclusion.”
In some circumstances, it appears, Faust agrees. She herself attended Bryn Mawr—a women’s college—and serves as a special representative on the board of trustees of her alma mater. “It is impossible to figure out how Faust can reconcile helping to provide that singular experience to women while at the same time denying any portion of that experience to the women she is responsible for at Harvard,” said Richard Porteus, graduate president of the Fly Club. He graduated from Harvard in 1978 and was elected a member of the Fly Club in 1976. He spoke of the diversity of his club class and reflected that while “there were some people whose names also appeared on Harvard buildings,” he “didn’t come from wealth” and was not only elected to the club but became an officer. Porteus explained that “one’s socioeconomic standing did not matter.” All that mattered, he said, was “the potential for forming life-long friendships.”
The debate over Harvard’s final clubs would have taken place in an entirely different framework if we were still living in a time when university administrators saw their role as fill-in parents—and if that role were viewed as a comfort by the parents themselves. But today’s universities are, for better or worse, largely a free-for-all. The curtailing of certain freedoms thus becomes all the more apparent, and all the more disturbing, when measured against the backdrop of a prevailing “you do you” attitude. The core of the administration’s position seems to be reinforced by an overwhelming need to groom a student body that shares all the same beliefs and values—those that echo the principles that the administration itself espouses. If it deems single-sex social groups discriminatory, then there is no room for those students who see them not as beacons of gender exclusivity but as opportunities for friendship and support. In an educational institution, the only kind of diversity that should matter is diversity of thought. That’s a lesson the Harvard administration desperately needs to learn.
Harvard’s own questionable record on diversity is currently under harsh scrutiny—and not because of the behavior of clubs that have a tenuous connection to the university’s educational mission. Research has demonstrated that to gain entry into an institution like Harvard, Asian-American applicants must score an average of 140 points higher on their SATs than white applicants, 270 points higher than Hispanic applicants, and an astonishing 450 points higher than African-American applicants. The Justice Department has taken note and is investigating the matter. In December, the New York Times reported that the university has agreed to give the DOJ access to applicant and student records. That Harvard’s administration has become consumed with the goal of bringing an end to institutions that fail to meet a 21st-century standard for diversity is not without its savage ironies.
1 Meaning something a dean does.
Review of ‘In the Enemy’s House’ by Howard Blum
Nearly a decade would pass until the FBI and NSA began to release the actual Venona transcripts in 1995. In the years since, a number of books (including several co-authored by me) have analyzed the Venona revelations, while others have mined Communist International files and the KGB archives. Virtually all the major mysteries about Soviet espionage in the United States have been resolved by these once-secret documents. In addition to confirming the guilt of the Rosenbergs, Alger Hiss, Harry Dexter White, and virtually every other person accused of spying in the 1940s by the ex-spies Whittaker Chambers and Elizabeth Bentley, these books have exposed several important and previously unknown agents such as Theodore Hall, Russell McNutt, and I.F. Stone. Indeed, the only accused spy who turns out to have been innocent (although he was a secret Communist almost up until the day he took charge of developing an atomic bomb) was J. Robert Oppenheimer.
A handful of espionage deniers, centered around the Nation magazine, continue to argue, against all evidence and logic, that Alger Hiss is still innocent. The Rosenberg children continue to distort their mother’s role in espionage. And some hard-core McCarthyites still demonize Oppenheimer. But in truth, the bloody battle over who spied is over.
Lamphere’s book emphasized his collaboration with the Army cryptographer Meredith Gardner in the hard work of unraveling the spy rings using the Venona cables. Employing those 1986 recollections as a template, the Vanity Fair contributor Howard Blum has now given us In the Enemy’s House, an overly dramatized but largely accurate account of the friendship between the outgoing, hard-driving, atypical G-man Lamphere and the shy, scholarly, soft-spoken Gardner as they worked together to find and prosecute those Americans who had betrayed their nation.
Blum intersperses the American hunt for spies with the recollections of Julius Rosenberg’s KGB controller, Alexander Feklisov, who ran Rosenberg in 1944 and 1945 and supervised Fuchs in Great Britain from 1947 to 1949. Feklisov watched with mounting dread as the KGB’s atomic spy networks were exposed, both because of Venona and the KGB’s own blunders—most notably because the Russians used Harry Gold, Fuchs’s contact, to pick up espionage material from David Greenglass, who was Julius Rosenberg’s brother-in-law and part of his spy ring.
Blum also uses information from many of the scholarly accounts that have already appeared, although not always carefully. His only new source of data comes from interviews with members of the Lamphere and Gardner families and access to their personal notebooks. But while he provides a list of his sources for each chapter, Blum does not use footnotes, so that although many of the personal and emotional reactions to the investigation he attributes to people, and especially to Lamphere, presumably come from these sources, it is never clear whether they are based on contemporaneous written notes or third-party recollections of events more than 50 years in the past.
Such objections are not mere academic carping. While Blum successfully turns this oft-told story into an interesting and suspenseful narrative, his approach comes at a cost. For example: He is eager to transform Lamphere from a diligent and resourceful FBI investigator who often chafed at the bureaucracy and petty rules that governed the agency into a full-blown rebel who almost singlehandedly forced the FBI to take up the problem of Soviet espionage. To do so, Blum suggests that until the FBI received an anonymous letter in Russian in August 1943 alleging widespread spying and naming KGB operatives, the Bureau regarded the investigation of potential Soviet spies as useless because allies did not spy on each other.
This is wrong. In fact, the FBI had already mounted two large-scale investigations—one of Comintern activities in the United States undertaken in 1940 and the other of attempted espionage directed at atomic-bomb research at the Radiation Laboratory in Berkeley, which began in early 1943. Both had unearthed information on atomic espionage. These included discomfiting details about Robert Oppenheimer’s Communist connections; efforts by Steve Nelson, a CPUSA leader in the Bay Area in contact with known Soviet spies, to obtain atomic information; and contacts between a Soviet spy and Clarence Hiskey, a chemist on the Manhattan Project.
At one point, Blum renders one of Hiskey’s contacts, Zalmond Franklin, as Franklin Zelman and mischaracterizes him as “a KGB spook working under student cover.” In fact, Franklin was a veteran of the Abraham Lincoln Brigade working as a KGB courier. In any event, the FBI neutralized this threat by transferring Hiskey from Chicago to a military base near the Arctic Circle, thereby scaring his scientific contacts (whom he had introduced to a Soviet agent) into cooperating with the Bureau.
There are other occasions where Blum demonstrates an uncertain grasp of the history of Soviet intelligence. He misstates Elizabeth Bentley’s motives for defecting; angry at being pushed aside by the Soviets, she feared she was under FBI surveillance. And he claims that only three witnesses testified against the Rosenbergs (Ethel’s brother and sister-in-law and Harry Gold), which leaves off others (Bentley, Max Elitcher, and the photographer who had taken passport photos for the family just prior to their arrests).
Blum’s account of the way the KGB encoded and enciphered its messages is oversimplified. The mistake that made it possible for American counterintelligence to break into the Soviet messages was the intelligence services’ reuse of some one-time pads. Not all of the pads were used twice, and only if a pad had been used twice could the FBI strip the random numbers from the message sent by Western Union. That process allowed Gardner to attempt to break the underlying code. The vast majority of the Soviet cables remained unbreakable, and many could be only partially decrypted. And most of the decrypted cables had nothing to do with atomic espionage but concerned the stealing of diplomatic, political, industrial, and other military secrets.
Partly to heighten suspense, Blum misrepresents or distorts the timelines on matters involving Klaus Fuchs and the Rosenberg ring. He harps on Lamphere’s frustration about not being able to use the decrypts in court, but the FBI had concluded it was highly unlikely that they could be legally introduced into evidence without exposing valuable cryptological techniques, a conflict Lamphere surely understood. That very problem helps explain the FBI’s inability to prosecute Theodore Hall, the youngest physicist at Los Alamos, who had been exposed as a Soviet spy. Blum mistakenly suggests that the FBI agent in Chicago who investigated Hall was unaware of Venona. But that agent did know; the problem was that when the FBI began its investigation in the spring of 1950, Hall had temporarily ceased spying. He was eventually brought in for questioning, but neither he nor his one-time courier and friend, Saville Sax, broke and confessed. Lacking independent evidence, the FBI was stymied.
The most significant flaw of In the Enemy’s House is its assertion that Ethel Rosenberg’s conviction and execution were monumental acts of injustice that disillusioned both Lamphere and Gardner, soured their sense of accomplishment, and left them consumed by guilt. It is true that Lamphere had opposed Ethel’s execution and had drafted a memo that J. Edgar Hoover sent to the judge urging she be spared as the mother of two young sons. Gardner had translated one Venona message that indicated Ethel knew of her husband’s espionage but because of her delicate health “did not work,” which Gardner interpreted to mean she was not part of the spy ring. But, as Lamphere pointed out in his own book, her brother David Greenglass had testified to her involvement in his recruitment. And KGB messages available following the collapse of the Soviet Union now make clear that Ethel had played a key role in persuading her sister-in-law, Ruth Greenglass, to urge her husband to spy.
In The FBI-KGB War, Lamphere never evinced deep moral qualms about their fate. He expressed a more complex set of emotions. “I knew the Rosenbergs were guilty,” he writes, “but that did not lessen my sense of grim responsibility at their deaths.” And he calls claims that the case was a mockery of freedom and justice both “abominable and untruthful.” Blum insists that Gardner was “stunned” by their deaths and quotes him as saying somewhere: “I never wanted to get anyone in trouble” (which would suggest a monumental naiveté if true).
Blum’s claim that Lamphere and Gardner had condemned themselves “to another sort of death sentence” for their roles is a wild exaggeration. So, too, is his charge that Lamphere believed that in the Rosenberg case the United States “might prove to be as ruthless and vindictive as its enemies.”
Finally, Blum links Lamphere’s decision to leave the FBI for a high-level position in the Veterans Administration to a sense of lingering guilt. But in his own book, Lamphere attributes the move to the frustration he felt once he realized he would be stuck as a Soviet espionage supervisor for years to come. Blum links Gardner’s brief posting to Great Britain to work with its code-breaking agency as an effort to escape his guilt, but he never mentions that Gardner returned to work at the National Security Agency for many years.
Retired intelligence agents friendly with both men have no recollection of their expressing regret about their role in the Rosenberg case. It is possible that they may have made some such comment to a family member or jotted down something in a notebook, but without very specific and sourced comments, the idea that they ever regretted their work exposing Soviet spies is nonsense that mars Blum’s otherwise entertaining account.
What we got instead was a combination of celebrity puffery and partisan cheap shots at the Trump administration. The politics of North and South Korea, and the equally complex and intricate relations between these two countries and China, Japan, Russia, and the United States, were reduced to just another amateur sport. Ignorant and supercilious reporters transposed the clichés of the electoral horse race, complete with winners, losers, buzz, and sick burns, to nuclear brinkmanship. Major news organizations could not have done Kim’s job any better for her.
A representative example was written by no fewer than seven CNN reporters and researchers, who concluded, “Kim Jong Un’s sister is stealing the show at the Winter Olympics.” The lead of this news article—I repeat, news article—was the following: “If ‘diplomatic dance’ were an event at the Winter Olympics, Kim Jong Un’s younger sister would be favored to win gold.” Gag me.
Then the authors let loose this howler: “Seen by some as her brother’s answer to American first daughter Ivanka Trump, Kim, 30, is not only a powerful member of Kim Jong Un’s kitchen cabinet but also a foil to the perception of North Korea as antiquated and militaristic.” Kim’s “Kitchen Cabinet”—why, he’s just like Andrew Jackson. And how could anyone have the “perception” that North Korea is “antiquated” and “militaristic”? Sure, they might threaten the world with nuclear annihilation. But have you seen Donald Trump’s latest tweet?
New York Times reporters are either smarter or more efficient than their peers at CNN, because it took only two of them to write “Kim Jong-Un’s sister turns on the charm, taking Pence’s spotlight.” Motoko Rich and Choe Sang-Hun described Kim’s “sphinx-like smile” and “no-nonsense hairstyle and dress, her low-key makeup, and the sprinkle of freckles on her cheeks.” They contrasted the “old message” of Vice President Pence, who has no freckles, with Kim’s “messages of reconciliation.” They cited one Mintaro Oba, a “former diplomat at the State Department specializing in the Koreas, who now works as a speechwriter in Washington.” What they did not mention is that Oba worked at Barack Obama’s State Department and writes speeches for a Democratic firm. Not that he has an axe to grind or anything.
The typical Kim puff piece began with her charm, grace, poise, statesmanship, and desire for unity and peace. Then, 10 paragraphs later, the journalist would mention that oh, by the way, North Korea is a totalitarian hellscape that Kim’s family has been plundering for over half a century. For instance, describing the South Korean reaction to Kim, Anna Fifield of the Washington Post wrote,
They marveled at her barely-there makeup and her lack of bling. They commented on her plain black outfits and simple purse. They noted the flower-shaped clip that kept her hair back in a no-nonsense style. Here she was, a political princess, but the North Korean “first sister” had none of the hallmarks of power and wealth that Koreans south of the divide have come to expect.
A political princess! It’s like Enchanted, except with gulags and famine.
Deep in Fifield’s article, however, we come across this sentence: “Certainly, Kim, who is under U.S. sanctions for human rights abuses related to her role in censoring information, was treated like royalty during her visit.” Just thinking out loud here, but maybe human-rights abuses and censorship deserve more than a glancing reference in a subordinate clause. Fifield went on to say that “Vice President Pence, who was also in South Korea for the opening of the Winter Olympics but studiously avoided Kim, had worried in advance that North Korea would ‘hijack’ the Olympic Games with its ‘propaganda.’” Now where could he have gotten that idea?
The fascination with Kim revealed both the superficiality and condescension of much of our press. Fifield’s colleague, national correspondent Philip Bump, tweeted out (and later deleted) a photo of Kim sitting behind Pence at the opening ceremonies with the comment, “Kim Jong Un’s sister with deadly side-eye at Pence,” as if he were being snarky about an episode of Real Housewives.
When Kim departed the Olympics, Christine Kim of Reuters wrote an article headlined, “Head held high, Kim’s sister returns to North Korea.” Here’s how it began:
A prim, young woman with a high forehead and hair half-swept back quietly gazes at the throngs of people pushing for a glimpse of her, a faint smile on her lips and eyelids low as four bodyguards jostle around her.
The Reuters piece ends this way: “Her big smiles and relaxed manner left a largely positive impression on the South Korean public. But her sometimes aloof expression and high-tilted chin also spoke of someone who sees herself ‘of royalty’ and ‘above anyone else,’ leadership experts and some critics said.” Thank goodness for the experts.
Kim Jong Un could not have anticipated more glowing coverage for his sister, for the robot-like cheerleaders he sent alongside her, or for his transparent attempt to drive a wedge between South Korea and its democratic allies. “North Korea has emerged as the early favorite to grab one of the Winter Olympics’ most important medals: the diplomatic gold,” wrote Soyoung Kim and James Pearson of Reuters, who called Pence “one of the loneliest figures at the opening event.” Quoting on background “a senior diplomatic source close to North Korea,” Will Ripley of CNN wrote an article headlined, “Pence’s Olympic trip a ‘missed opportunity’ for North Korea diplomacy.” But who was Ripley’s source? Dennis Rodman?
What most disturbed me was the difference in coverage of Kim Yo Jong and Fred Warmbier, whose son Otto died last year after being tortured and held captive in North Korea. Fred Warmbier accompanied Pence to the Olympics as a reminder of the North’s inhumanity and menace. Journalists ignored, dismissed, and even criticized this grieving man. Among many examples of thoughtlessness and callousness was a Politico tweet that read: “Fred Warmbier criticizes North Korean Olympic spirit.” He must have missed Kim’s freckles.
Washington Post columnist Christine Emba asked: “Is Otto Warmbier a symbol, or a prop?” You see, Emba wrote, “Otto’s father may want his son to be a symbol. But the nature of his escort risks turning him into a prop.” Why? Well, because “symbols stand for something” while “props are used by someone.” And “the Trump administration, which hosted Warmbier, is made up of shameless instrumentalizers who have made clear that they stand for very little.” So there you go. We should be skeptical of Fred Warmbier because Trump.
Emba’s not all wrong. There were a lot of props and tools at the Olympics. You could find them in the press box.
I was nine when I made my first trip to Israel in June of 1968, almost exactly a year after the Six-Day War. My parents had been in Italy the autumn before, and while vacationing in Rome they learned that there were inexpensive flights leaving twice a week for Tel Aviv. The whole of Israel was giddy at the time, its insecurities set aside for the moment by the stunning success of the Six-Day War, which had increased the total size of the young, besieged nation by more than two-thirds.
My mother finally found a use for the crumpled phone numbers of distant Israeli relatives she’d been carrying in her purse for the past several months, relatives on both her father’s and her mother’s side, Romanians all. Osnat, my mother’s second cousin once removed, had had the misfortune of remaining in Europe while the Nazis were on the move. She spoke of having spent five days hiding from the Germans in the liquid filth of an outhouse and breathing through a tube when they came near.
Meeting scores of warm and loving relatives and being feted by them as “our dear American Mishpacha” was part of why my parents were both so taken with Israel—that and the Israeli people themselves, the Sabras, so proud and brash, and the ancient beauty of the land. With some talk of perhaps making Aliyah, or at least exploring the idea of our moving to Israel, my parents, my siblings, my first cousins, and my Grandma Rose and her younger brother, Uncle Sol, gathered up a month’s worth of warm-weather clothing and flew en masse to Tel Aviv. We were greeted at Lod Airport by a crush of relations, all of them clamoring to hug and kiss us. And then as the sun descended into the Mediterranean and night fell over the coastal plain, they drove us all north in a rag-tag caravan of tiny old Fiats, Renaults, and Peugeots to the beach town of Netanya, where we stayed for the entire summer in a tiny flat just behind the home Osnat shared with her husband, Shlomo.
Days later, I’m with my father and my brother Paul at the Wailing Wall. It’s weird to think that only a week ago I was at home watching Gilligan’s Island and looking for my dad’s Japanese Playboys in the bottom drawer of his bedroom closet during the commercials. Now, I’m in Jerusalem, in the glaring sun beneath this gigantic wall of stone. When I’m sure no one’s looking, I put both hands on the wall, and then I touch my forehead to it. The stones are colder than you’d think they’d be in all this heat.
For reasons I don’t understand, I start to cry. I’d be embarrassed if my brother or my dad saw me like this, so I pretend that I’m praying. I wonder, though, am I just crying because you’re supposed to cry here? If the rabbis from the Talmud Torah had shown me pictures of some random bridge in Saint Paul from the time I was in nursery school, would I have cried at that, too?
When I look up at the wall again, I see some birds’ nests and a million pieces of paper with people’s prayers in them, all stuffed into the cracks between the stones. Everyone who comes here wants God’s attention. I’ll bet He loves all the notes. They probably make Him feel like someone gives a shit about the cool stuff He does.
I had been born a Jew in Minneapolis. Growing up Jewish there wasn’t a good or a bad thing any more than growing up with snow was good or bad. It just was. Because we Jews were so few, being one made us all feel different. It wasn’t a difference we’d asked for or earned, either. It, too, just was. Becoming somewhat Jew-centric was natural for us; staying close to one another, close to our causes and to our history, was simply a reaction to being the “other.”
It’s 1970 and I’m in junior high, on my way to English, when I see Nelson Gomez, Stuey Nyberg, and Craig Walner. They’re hip-checking kids into the tall metal lockers that line the hall. They are the three kings of Westwood Junior High’s dirtball dynasty, young hoodlums who regularly and without fear skip school, smoke filterless Marlboros, and shout “Fuck you, faggot” at students and staff members alike, save perhaps for Mr. H, the anti-Semitic shop teacher with whom they have forged an abiding friendship.
To the left and right of me, hapless students fly, body-slammed with alarming speed into the lockers by the three of them. It doesn’t escape my notice that these unfortunates have not been chosen randomly. There goes Brian Resnick. Next it’s Shelly Abramovitz and then Alvin Fishbein. As I round the corner, Stuey Nyberg grabs my second cousin, Elaine Kamel, by the shoulders and slams her face-first into her own locker. She and they were selected for no other reason than their Jewishness.
I grab Stuey by his neck with both hands and I claw at him until my fingernails pierce his pale skin and blood spurts from his jugular. Now I take the clear plastic aquarium algae scraper that I made in Mr. H’s shop class this very morning and use it to gouge out one of Nelson Gomez’s eyeballs, making sure he can see it in the palm of my hand with his remaining eye. Craig Walner tries to run, but I catch him by his mullet and shove his head into Elaine Kamel’s locker. I slam her locker door on him again and again. I don’t stop until his head is severed from his neck…
…and my daydream comes to an abrupt halt when Stuey Nyberg says, “Himmelman, it’s your turn to meet the lockers, you fucking kike.” Without a word of warning, he clouts me with a stinging jab right to my nose. It’s the first time I’ve ever been hit in the face, and while it’s agonizing, the blow is also somehow euphoric. I’m supercharged with adrenaline, I feel as if I’m on fire. But of course, I don’t hit Stuey back. God, no. I simply stand there glowering at the three of them, blood dripping from my large Jewish nose. And for the first time in my life, I feel downright heroic. I look around me and I see that, for now at least, our bitterest enemies have stopped hip-checking what feels like the entire Jewish nation.
Six months later it’s summer vacation, and we Himmelmans fly from Minneapolis to New York and connect with a nonstop to Tel Aviv. In less than two days, I’m on a towel on the beach in Netanya looking out at the cerulean blue of the Mediterranean.
As I lie on the hot sand, Mirage fighter jets with blue Jewish stars emblazoned under their wings suddenly streak so low across the water that I can smell jet fuel. As they scream overhead, the whole beach seems to shake. With a strange sense of clannish pride, I laugh and stare up at the planes as they accelerate and finally rocket out of range.
My father died, after suffering from Stage IV lymphoma for five years, in 1984. I was 25 years old. A year later, I was living in the Twin Cities working on music with my band when I received a call from a woman named Ruth Grosh. She asked if I’d be willing to write some songs for a therapeutic teddy bear she’d dreamed up called Spinoza Bear. Ruth, a bona fide subversive by nature and New Age before anyone had even come up with the term, named her ursine brainchild after Baruch Spinoza, the heretical 17th-century Jewish philosopher whose views were seen as harmful to, and at odds with, those of the Jewish establishment of Amsterdam at the time. Eventually, both he and his writings were placed under a religious ban called a “cherem” by the Dutch Jewish community where he lived and worked. Aside from the fact that he was reviled for his modernist views, no one had much bad to say about him personally, except that “he was fond of watching spiders chase flies.”
The songs were to play off a battery-operated tape deck that fit into a zippered pouch beneath the soft brown fur of the bear’s stomach. A red heart-shaped knob on the bear’s chest served as the on-off switch. By today’s standards, the technology would seem crude, but at the time, with just a modicum of suspension of disbelief, it was possible to feel that the voice of the bear along with the music was issuing directly from its cheery muzzle. As to whom to hire to be the voice of Spinoza Bear, it was decided after some deliberation that not only would I write and sing the songs, I should also be the kind, concerned voice of the bear itself.
Each of the dozen or so cassette tapes that were eventually recorded had themes of self-empowerment, a kind of you-can-make-it-if-you-try bent. After just two years, the bear became a huge success—not as some plebeian, retail teddy, but as something greater. Spinoza Bear soon found his way into hospitals, health clinics, and centers for healing of all kinds. By holding the bear and listening closely to his stories and songs of wellness and inner light, rape victims, grief-stricken parents, bone-lonely pensioners, autistic kids, as well as children on cancer wards all across America found it possible to relieve some of their pain and fear.
Aside from the good works, the bear provided me with twenty grand in seed money that our band, Sussman Lawrence, used to set sail for New York City in 1985.
We were five new-wave rockers in an Oldsmobile Regal Vista Cruiser wagon, and two roadies in a spanking-new Dodge cube van. The van, we were overjoyed to discover, had been hastily christened from bumper to bumper with graffiti sometime during our 45-minute debut set at CBGBs, the legendary East Village rock-and-roll club, only days after we arrived on the East Coast.
Given the high cost of living in New York City, New Jersey seemed the next best thing. As it turned out, there were very few homeowners interested in renting a house to a band, so I hatched a plan. I called on a middle-aged real-estate agent named Carol, whom we’d found advertising in a Bergen County newspaper, and when I finally got her on the line, I explained that we were medical students enrolled that fall at nearby Rutgers University and in need of a quiet place to live and study.
The following morning, as the rest of the guys waited outside in the Oldsmobile, my cousin Jeff, our band’s gifted keyboard player, and I showed up at Carol’s office in suits and ties we’d purchased at a local thrift shop, carrying responsible-looking briefcases. I had boned up on some medical terms as well, orthopedic surgical techniques mostly, in case she needed proof that we were actually who we were claiming to be. But there was no need. We had the cash and seemed honest enough—“honest enough” to let her know that a few of us were also part-time musicians and that there might be some music playing, quietly of course, from time to time, just to ease the strain of our intense studies.
Two days later, Jeff and I woke up early, signed the lease papers, and pulled our now multihued, invective-laden cube van into the driveway of 133 Busteed Drive in Midland Park, New Jersey.
Trying for as much discretion as possible, lest the neighbors notice anything out of the ordinary, we backed the van up to the garage, lugged the gear up a short flight of stairs and into a large, unfurnished living room. Once upstairs, we began unloading beer-stained amplifiers, at least a dozen guitar cases, a drum set packed tightly into three large metal flight cases, assorted keyboards, and an entire public-address system and lighting rig. Aside from some bad scrapes in the hardwood floor and a gaping hole or two in the walls on our way in, the load-in was accomplished with speed and efficiency. We were up and practicing by late afternoon, our new-wave rock blaring fast and loud into the New Jersey autumn night.
A month after settling in, Ruth Grosh reached me at dinnertime by long distance, in the squalor of our band-house collective. After some catching up, she gently let me know that some psychic friends had explained to her that I had just a few months left on the planet. “What?” I said. “They told you I was gonna die?” Ruth was practiced at this kind of thing, it seemed, although her nonchalance about my imminent demise didn’t make me feel any less concerned. “They asked me to find out if you’d like to come in for a free consultation,” she said. I was due to fly back to Minneapolis later that week anyway, and I figured I might as well find out what all this planet-leaving nonsense was about.
Back home, on the morning of my appointment with the psychics, I found my mother, who was normally quite composed, flitting around the kitchen and singing quietly to herself. She had agreed to a lunch date that afternoon with the contrabass player from the Minnesota symphony, her first since my dad had died almost two years before.
“Does this blouse look good on me?” she asked. “Be honest.”
“Yeah, it looks great,” I said.
I was uncomfortable in the extreme watching my mother dart around the house like a schoolgirl primping for a date with some dude who wasn’t my dad. True, it’d been two years since he’d died, and given all that she’d been through, it wasn’t like she didn’t deserve to live a little. After all, I thought, it was just lunch. But the more I saw of this weird, giddy side of her, the less I liked it. A car honked. It was Ruth.
She and I rode wordlessly as Japanese New Age wooden flutes intoned from her car stereo. We arrived after twenty minutes at the northern suburb of Brooklyn Center, and Ruth parked her car near a long row of newly built town houses. A man and a woman in their mid-forties greeted us at the front door, both smiling in a scary, off-putting way. They appeared to be a kind of husband-and-wife psychic tag team, and they rushed headlong into the consultation by asking if I’d like to give them some names of people I knew.
“We’ll be able to tell you all about them,” the woman said and smiled again. I thought it was just some cheesy method of showing off.
“The first names are enough,” said the man.
“Okay, let’s go with Jeff,” I said.
My cousin Jeff is a musical genius, a pianist of remarkable facility, who’s had to contend with neuromuscular tics most of his life. The two psychics were seated facing each other in cheap leather armchairs. In an instant, they were both precisely mimicking my cousin’s facial tics, and I recognized each one by the name Jeff and I had given it. When Jeff’s thumbs bent downward spasmodically, we called it “Southerner.” When his palms flexed upward in a sort of hand-waving motion, we called it “Reckless Greeter.” In another, with his eyebrows pinched together, lips compressed, and eyes blinking, Jeff looked like someone who was very curious about his environment. We called that one “Curious Man.” His most frequent tic, which involved his eyeballs rolling uncontrollably in their sockets, was also his most unsettling. We called that one “Round the World.” Suddenly, to my astonishment, the corners of both of the psychics’ mouths formed narrow half smiles. Their eyebrows began squeezing together; their eyes were blinking—open-shut-open-shut—perfectly mimicking Jeff’s Curious Man.
“The music, he can’t stop the music,” the woman shouted in excitement. Her husband, whose hands then began a remarkable imitation of Reckless Greeter added, “Yes, good God, the music! Can’t you feel it just pouring out of him?”
I was thinking this had to be some kind of brilliant trick, albeit a devilish one. It was astonishing, yes, but I wasn’t yet convinced that they were real. Next, I said the name “Beverly,” my mother’s, and they both giggled. It’s disconcerting to see adults giggle at any time, but when a pair of middle-aged psychics giggle at the mention of your bereaved mother’s name, it’s triply so.
“She’s doing something she feels guilty about,” the woman offered.
“Yes,” said the man. “Something she’s afraid of doing, but it seems to us that she’s also very excited.”
Almost in unison, the psychics said, “She’s acting like a little schoolgirl today!”
How in hell could they have known what I’d just experienced myself for the first time in my life that very morning? If these two freaks had wanted my undivided attention, they sure as hell had it now.
The room fell silent. I didn’t dare speak. They had officially scared the living daylights out of me with their last trick. Soon, they broached the subject I’d come all this way to talk about.
“Is it your wish to leave the planet?” the woman asked, more casually than I would have imagined possible for someone questioning a fellow human being about whether he wanted to live or die.
I paused and breathed deeply for a minute or so. It was a question I stopped and thought about longer than a mentally stable person might have.
“No,” I finally told them, “I have no intention of leaving anytime soon.”
This seemed to relieve them. The man said, “The reason we’ve been so concerned about you is that we believe music is more important to you than you may be aware. It represents your very essence, and by working as single-mindedly as you have to get a record deal, with the kind of music you’ve been making with your band, you’ve been cheapening and compromising your integrity. You’ve been, in a sense, unfaithful to your muse. That’s what’s causing this spiritual disconnect and, should it continue, my wife and I both feel like it will shorten your stay here.”
His wife took over: “What you need to do is uncover a deeper, more honest expression in your music, something closer to the bone. We know you love the blues and reggae. We think it’ll be helpful to start playing music you love, rather than music you think will sell.”
By this time, tears were spilling down my cheeks. “There’s this song,” I began telling them, “that I wrote for my dad over two years ago on Father’s Day, that almost no one has heard. It’s something that was written with the sole intention of connecting with him before he died. It’s on a cassette tape, just sitting there on a shelf in my closet.”
“Why not put that song out as your next single?” the man said.
I was suddenly speechless. Why had I never thought of this? It was such a simple yet profound idea. I flew back to New Jersey, determined to release not just the one song, but an entire album dedicated to my father.
The guys picked me up in the Oldsmobile at Newark Airport the next day. We were standing around the luggage carousel waiting for my bags when I told them I was going to record a solo record, a tribute to my father, whom they all loved and respected.
My bandmates understood this was something I needed to do. They also knew it wasn’t just talk. A solo album, whatever the reasons behind it, also signaled that the ethos of the band might be coming to an end. Nevertheless, they played their hearts out on the record and, by doing so, tacitly gave me their blessings and their assurances that whatever happened with it would be for the best.
The recording featured the song I’d written for my dad, and it eventually became my debut album, This Father’s Day, for Island Records.
Its release also became a powerful catalyst for me personally. It took me from where I had been, locked up in pain and confusion, to some other, more hopeful place. Even before my meeting with the psychics, I thought I’d gotten beyond most of the hurt, that it was simply time to grit my teeth and persevere. It had been two years, after all. But I was mistaken. The process of mending broken hearts is never as pat as that. As much as I needed to forget, to emerge clear-eyed from the jumble and rawness of my father’s death, I knew I’d have to face my worst fears again and again. But I felt ready. I also knew, in a way I hadn’t before, that I really didn’t want to die.
While my father was suffering in the last five years of his life, I found myself in a different state of mind from that of my friends and bandmates, who were, for the most part, blithely moving through their young lives. I’m not saying pain made me wise; it’s just that it can, for those willing to accept its hard lessons, provide a bit of perspective, shine some light on what’s sacred and what’s less so.
During those years I was working very hard to become famous, whatever that might have meant. I felt that I needed to reach some level of achievement before my dad died. I suppose I was conducting a search for miracles. It’s no wonder. For my family and for me at least, miracles seemed to have been in very short supply back then.
It’s miracles, after all, that compel us forward, that encourage us to move with some degree of willingness into the next day. But, despite what we might believe, it’s hardly ever the big ones that truly move us. The sea can split, we can win the lottery, we can even become rock stars, and still, those phenomenal circumstances are never what matter most. In the end, the only miracle worth wishing for is the ability to be made aware of the smallest splendors, the most inconsequential truths, and the overlooked rhythms that connect us to the people and things we love.
I felt a kind of heat rising up around me in those days, a sense that what had long been static was now stuttering back into motion. There was a pleasant strangeness to the feeling, but like many things that at first strike us as unusual, it wasn’t wholly unfamiliar, either. I’d felt that same unnamable sensation, lying awake in my bed in the dark as a young child, focusing on individual moonlit snowflakes as they fell outside my window. I felt it again in Jerusalem, at nine years old, when I first touched the sunbaked stones of the Western Wall. I felt it the first time I’d snorkeled in the Red Sea and became drunk from sheer beauty. I felt it the frigid November morning we buried my father. I felt it on the evening I finally met my wife, and again, the moment when each of my children was born.
The circumstances were wildly varying, but in each instance there was a sense of being taken from one place to another, of inertia finally giving way to movement. It was as if my mundane life had cracked open and I saw, arrayed in front of me, some image of the unseen hand that forms and directs the universe.
My first experiences in Crown Heights, Brooklyn, at age 27 were catalytic. A rabbi named Simon Jacobson posed a single question, and it, too, set me into motion: “Why is walking on the surface of the Earth any less miraculous than flying above it?”
The idea that the world is a wondrous, mysterious place—even as we are destined to walk on the mundane surface of it, even if we cannot truly fly—is both a liberating and comforting notion. Being attuned to wonder is my preferred condition. Perhaps it’s natural for each of us. But why, then, are so many moments not imbued with this sense of the miraculous? Why is there such a divide between barely sensing and deeply feeling?
What I did know in the autumn of 1987, with a certainty I hadn’t known before—perhaps couldn’t have known—was that I needed to get married. I had awakened to the idea that there was nothing I was doing with my life, not my music, not my friendships, not my finally getting that almighty record deal, more important than finding the right woman with whom to create a family and live out my days. I also knew that to do this, I would need to create a powerful forcing frame for myself, not one that would constrict or limit me, but one that would allow me to channel my outsized ego and my creative proclivities toward more productive ends than I’d ever dreamed possible.
Eventually, I made a sort of pact with myself, a silent, personal agreement. It came down to this simple declaration: The next time I sleep with a woman, it will be with my wife. This meant that I had to extricate myself from my longtime girlfriend. Though I was, and still am, extremely fond of her, I could never envision her as a lifetime partner or the mother of my children. In addition, our arrangement was somewhat nebulous, and so this new, self-imposed structure also meant that I’d have to cut off any contact with the other women with whom I was having casual sex. I had to make a fundamental cultural and emotional shift. I would need to wean myself away from years of assumptions about the very nature of what a modern relationship meant. I would have to forge a new way of looking at women, at my role as a man, and at the world at large.
It became clear to me that the freedom I had always longed for could be obtained only through the somewhat paradoxical means of setting limits, delaying gratification, and cutting away many experiences that an all-pervasive consumerist culture had been (and continues to be) hell-bent on selling. If you’ll allow me, I’ll explain this further by way of metaphor.
Music is among the most transcendent of all art forms, both for the performer and listener. Since it has no form or substance, it can easily serve as a model for the boundlessness of spirituality. But as anyone who has mastered a musical instrument knows, musical ideas are expressed almost exclusively by means of structure and restriction, words very few of us would correlate with freedom.
At first glance, this seems like a paradox. How could something as liberating and intangible as music be based on restriction? Not only is music based on restriction, I’d go so far as to say that, aside from the existence of raw sound—elemental white noise, if you will—the only other thing that allows music to take place, the only thing that differentiates it from this pure noise, is which sounds the musician chooses to leave behind. In this sense, music comes about not by choosing notes but by eliminating them. Put the idea the other way around: only by rejecting all other sonic choices are we left with the ones we truly desire. To make music, we don’t add, we subtract.
Something as commonplace as the key signature of a piece of music reflects this idea. Unless you were trying to achieve a harsh atonal effect, you wouldn’t want to be playing in the key of B-flat minor while your key signature called for you to be playing in A major. The ensuing “music” would sound like a chaotic racket to most people. The time signatures of compositions, along with their tempos, which require that a particular note last only so long and that it be played at a particular speed, also function on this same principle—creation by negation. Ignoring the time signature, or playing at any speed without regard for the overall tempo, is another good way to produce only noise.
It is only through adherence to the limiting factors of time and tempo that music can take shape. In that same sense, if it weren’t for the constraint of playing only certain keys on a piano, and thereby negating all other choices, you would hear only noise. Anyone who has heard his or her toddler pounding away on a piano knows exactly what this sounds like.
Most, if not all, musical instruments also work on this principle of restriction. The trumpet, for example, is based upon compression and restriction. If the air a player blows into the trumpet’s mouthpiece weren’t compressed and regulated by the embouchure, the only sound you’d be able to hear would be a soft wind-like noise passing through the horn.
As I became more and more immersed in the wisdom of Jewish thought and practice, the idea of freedom-in-structure became clearer and ever more personally relevant. If it was true for music, I wondered, how much more true must it be for all of life itself? And given that human sexuality (whether or not the participants engaged in a sexual act are conscious of it) concerns the creation of life, it occurred to me that causing dissonance in that most meaningful—dare I say mystical—arena of life was something I definitely needed to avoid.
I knew I had to place a set of restrictions on myself in order to make music out of my life, as opposed to just raw sound. Although this conception of the universe felt new to me, new in the sense that it was radically different from the one I’d been acting on for so many years, it wasn’t unfamiliar. Without my knowing it, I had undergone an awakening. I became alert to a perspective I recalled vaguely, even from my earliest childhood. It was as if I could see something important forming (though what it was, was still unclear) out of a barely examined and often fleeting sliver of thought. All at once, the world around me seemed to feel very much as it did when I was a child. I could remember clearly, lying feverish in bed, waiting for sleep, with every last thing in the world unknown and unexplained.
It was frightening as an adult to feel these thoughts growing stronger and more pervasive, but it also felt safe in some ways—as though there’d been a kind of revelation, one that seemed to say: “Peter, son of David, there is a purpose to everything you’ve experienced in the recent past and everything you see before you now. From this moment on, there are things you must do and ways you must act.”
The mantra to live without restrictions, which had guided me for most of my life, seemed at that point to be leading me only to chaos. I believed I could, and must, do better for myself. My most fervent wish was no longer to become a rock star; it was to create my own family, one that could become a replacement for the one I’d been missing, the one that had changed so drastically when my father died.
So, in a tour bus rolling across the American continent, I did the three most practical things I could think of: I stuck to my private pact, I dreamed, and I prayed several times a day to an unseen Deity for strength and for love.
This part of the story really begins a few months after my dad’s funeral, when I found myself in a cramped apartment in South Minneapolis auditioning some songs I’d written for a local performer named Doug Maynard. I sang him a few things and he nodded quietly. Doug wasn’t a big talker. Finally he chose one. “Man, I think I could do this justice,” he said. It was called “My First Mistake.”
You taste like pepper frosting on a granite cake.
Baby fallin’ in love with you was my first mistake…
Less than a year later, Doug was found dead in his living room, stone-drunk and drowned in his own vomit at the age of forty. Before this happened, however, he had introduced me to his manager, who had introduced me to a New York City music lawyer, who had introduced me to a record producer named Kenny Vance.
Kenny had worked with a lot of famous people and he wasn’t particularly shy about mentioning just whom. “I used to date Diane Keaton,” he told me. “I know Woody Allen—been in a couple of his films. I was the music director for Saturday Night Live.” Then he said, “Tonight I’m gonna take you to my main connection, a religious Jew in Brooklyn.”
Before long, Kenny and I were crossing the Brooklyn Bridge. We arrived at an apartment in Crown Heights where Kenny’s friend, Simon Jacobson, greeted us. I liked Simon right off the bat. His eyes reflected some essential paradox, some awareness that being alive is both a source of great humor and great sadness. His wife, Shaindy, introduced herself with a gracious smile and placed glass bowls of almonds and chocolate-covered coffee beans on a yacht-sized table before excusing herself to tend to her young children. The thing I didn’t understand at first was how a big hirsute guy like Simon, in an oversize yarmulke, with a massive beard and in a white polyester button-up, was able to land such a good-looking wife. I soon learned that around these parts, it wasn’t the guy who could throw a football the farthest who got the girl. Simon had another thing going for him.
His job, at the time, was to memorize every word of the Lubavitcher Rebbe’s Shabbos dissertations and record them on Saturday night for publication later in the week. To understand the scope of the job, it’s necessary to know that when the Rebbe spoke, it was often for four or more hours straight, without breaks, without notes, and in a manner of cyclical and increasing complexity. To make things even more challenging, the Rebbe wasn’t freestyling. Everything he taught was derived from a compendium of source materials that ranged into the tens of thousands of books. And the talks could not be recorded as they were delivered, because it was the Sabbath and no electricity could be used.
When I once mentioned to Simon how awed I was at his ability to memorize this much information, he looked at me and said: “The memorization is the least of it. It’s the task of compiling it with the proper source notes that’s the real challenge. Every day I correspond with the Rebbe, and he writes me back with perfect editor’s notes. Once I wrote and said I didn’t understand a particular passage and couldn’t find the source for it. The Rebbe had a sharp sense of humor. He sent me back a markup with a big red circle, not just on the sentence I was having an issue with, but around the whole page, with the words, ‘What do you understand?’”
It was getting late. Kenny had left me there and driven back to the city. As Simon spoke to me, I kept looking up at the oil paintings of shtetl life and the Rebbe hanging on the walls. I was prodded more by fatigue than bravado when I finally asked, “What’s the deal with those pictures of the Rebbe? They seem sort of cultish to me.”
“I like the pictures,” he said. “To me, the Rebbe is like a very inspiring grandfather, and I get a lot out of reflecting on the things he says and the way he lives his life. There are people for whom there is no sense of self—people called Tzadikim—who have no need for personal gain. A Tzadik lives only to serve others, and they can do anything they wish.”
“Really?” I asked with just a hint of comic disdain. “Can they fly?”
“Understand, I’ve never seen anyone fly,” Simon answered. “But for a Tzadik, the act of flying is no greater miracle than the act of walking.”
This idea stunned me. Not because it was new. The things that move us most never are. They are things we already know, beliefs that are buried away inside us. Of course, when you stop and think about it, there’s absolutely no difference between the weights of the two miracles, walking and flight. It’s just that we non-Tzadikim get so tired of the one that happens all the time.
At that moment, at that table in Brooklyn, I started thinking about the little-known rhythm-and-blues singer Doug Maynard. I was remembering the sound of his voice and simultaneously considering the infinite number, the impossible number, of tiny coincidences—the tendrils, if you will—that in their unfathomable complexity had guided me to that particular apartment on that particular night. The thought was so vivid, it was as if I could hear Doug singing again. Singing most soulfully, most truthfully, about the joy, and the sweat, and the pain of this world. It wasn’t long after that I met the Lubavitcher Rebbe for the first time. He handed me a bottle of vodka and a blessing for success, and I started becoming more Jewishly observant right away: keeping Shabbos in my tiny apartment in Hell’s Kitchen, keeping kosher, and putting on tefillin. I married Maria two years later. We’ve been married for nearly thirty years.
About a year ago my cousin Jeff asked me what it had been like to meet the Rebbe. This is exactly how I answered him.
“You know when you’ve done something you think is horrible (whatever the hell it may be) and you start going down—deeper and deeper into the rabbit hole of regret? When you’re in so deep that you start to feel like the biggest loser ever born, like nothing is possible, that nothing good is ever gonna come your way, and that you can’t even face yourself in the mirror?”
“Sure,” Jeff said. “I’ve been there.”
“Well,” I said, “meeting the Rebbe was the exact opposite of what I just described.”