Operational failures abounded; but the real shortcomings ran much deeper, and may still be in place.
How did we fall victim to a second and more terrible Pearl Harbor? At first glance, this seems an unsolvable puzzle.
On the one hand, we had various kinds of warning. The bombing of the federal building in Oklahoma City in 1995 and the Tokyo nerve-gas attack that same year provided a powerful demonstration of some of the methods terrorists might employ and the destruction they could achieve. It was also abundantly clear, from a decade-long series of lethal attacks on U.S. facilities abroad, and an attempt to knock down the World Trade Center in 1993, that a group or groups of terrorists were making a determined effort to strike at the U.S. That these same terrorists might attempt once again to hit targets in the American homeland was an obvious possibility; indeed, the CIA issued a series of generalized alerts to that effect, the most recent appearing this past August.
On the other hand, a conspiracy came out into the open with ruthless efficiency on September 11 and caught us entirely by surprise. Scores of Arab terrorists, having carefully prepared for years, managed to execute a highly imaginative and precisely synchronized attack on the political, military, and financial nerve centers of the United States. Even though the terrorists did not reach all of their intended targets, they still achieved near total success: thousands of people were burned alive or buried in rubble, major financial and transportation links of the country were paralyzed or destroyed, symbols of our freedom and security were reduced to ashes, and fear of still more death and destruction was made ubiquitous across the land.
Now the attack on the United States has provoked, in response, what President George W. Bush has called “a war against terrorism,” the first major military phase of which began on October 7. But far more quietly, and partly behind the scenes, it has also provoked a war of finger-pointing. Some has been directed at the Federal Aviation Administration and other bodies in charge of airport security, where laxity unquestionably ruled the day. Some has been aimed at our immigration authorities, for opening the door to virtually all comers. And far more has been directed at the U.S. intelligence community—primarily the FBI and the CIA—for failing so utterly at a primary responsibility. Among other problems, we are told, the FBI lacked adequate authority to engage in electronic surveillance of suspected terrorists, while the CIA, thanks to a series of ill-guided “reforms,” has been chronically weak in gathering human intelligence.
But whatever genuine operational failures these postmortems reveal, our real shortcomings run much deeper. Terrorism is a problem the U.S. government has been contending with in an increasingly organized fashion since 1968, when Palestinian terrorists began hijacking aircraft and the modern era of international terrorism was born. In the intervening years, an intricate structure has been built to deal with the extraordinary number of different facets of the problem, ranging from prevention all the way to what is called “consequence management,” a euphemism for dealing with the aftermath of a major assault.
All told, some 45 separate governmental units and subunits are responsible for handling the different dimensions of the terrorist threat. This unwieldy structure has the burden of carrying out a policy that mirrors it in complexity. The formal aims of U.S. policy have been set forth in a myriad of official documents, with two “presidential decision directives” promulgated by President Clinton being especially significant codifications. These documents, which have been only partially declassified, declare that the policy of the United States is to “deter, defeat, and respond vigorously” to terrorist attacks against Americans no matter where they take place.
How precisely have these words been put into action? One recent and very impressive guide is Terrorism and U.S. Foreign Policy by Paul R. Pillar.1 Its author served throughout much of the 1990’s as the deputy director of the CIA’s Counterterrorist Center, and his book, published not long before September 11, is both authoritative and exceedingly well-informed. I draw upon it freely in what follows.
As we learn from Pillar, the U.S. has been fighting terrorism over the years in a variety of ways, ranging from the indirect to the direct, and by means of a variety of instruments, ranging from the peaceful to the violent. To begin at the indirect and peaceful end of things, let us look first at Pillar’s summary of attempts by the government to address the “root causes” of terrorism.
In the aftermath of September 11, this subject has become somewhat poisoned, with some on the Left (Susan Sontag most notoriously) citing American actions and alliances to explain—and explain away—the terror attack itself. But since, as Pillar notes, terrorism and terrorist groups “do not arise randomly, and they are not distributed evenly around the globe,” U.S. officials have necessarily been impelled to give serious attention to the conditions in which terrorism appears to flourish. Roughly, he discerns two types of such “antecedent” conditions: political repression and economic deprivation.
“Terrorism is a risky, dangerous, and very disagreeable business,” Pillar writes; “few people who have a reasonably good life will be inclined to get into” it. It follows that tamping down resentments that might lead to terrorism is an interest of the U.S., in line with our efforts to provide development assistance, promote democracy, and foster peace negotiations in troubled regions of the world like Northern Ireland and the Middle East.
Another indirect aspect of the U.S. counterterrorism program is the effort to shape the intentions of terrorists. Behind our oft-declared principle of not negotiating with hostage-takers is the idea, in Pillar’s words, “that not rewarding terrorism will give terrorists less incentive to try it again.” Our frequently reiterated determination to punish terrorists or bring them to “justice” is also presumed in at least some cases to exercise a similarly deterrent effect.
More significant than either of these aspects of policy has been the erection of physical defenses against attack. This effort began in the late 1960’s with aviation-security measures designed to foil hijackings. Over the years, it has extended into other areas, primarily through increasing protection for key federal sites like the White House and Congress and civilian infrastructure like nuclear power plants. In the wake of a string of attacks on U.S. embassies and military bases abroad in the 1990’s, the U.S. has spent billions tightening the security of overseas buildings and placing concrete barriers in the way of truck bombers.
On the more active side of things, Pillar shows that the U.S. has also tried to interfere with the ability of terrorist organizations to carry out attacks. An enormous intelligence-gathering apparatus, employing techniques from satellite reconnaissance to electronic interception of communications to the recruitment of informers and the placement of moles, helps us track the movement of terrorists, impose financial controls on their organizations, and, when appropriate, apply force against them and the states that sponsor them.
Financial controls have received heightened attention as the U.S. tries to freeze the funds sustaining Osama bin Laden’s al Qaeda terrorist network. But the approach is not new. As the State Department’s annual report on terrorism shows, the U.S. has blocked the assets of dozens of organizations, including the Abu Nidal group based in the Middle East, the Tamil Tigers in Sri Lanka, and the Aum Shinrikyo in Japan.2 Similarly, U.S. intelligence-gathering has bolstered the effort to locate terrorists abroad and arrange for their arrest, rendition, and prosecution in U.S. or foreign courts. Indeed, in any tally of time, energy, and resources devoted to counterterrorism, the objective of “bringing terrorists to justice” would undoubtedly occupy first place.
The best-known example of this approach was our response to the downing of Pan Am 103 over Lockerbie, Scotland in December 1988, in which 270 people died, including 189 Americans. A decade-long effort to bring two Libyan intelligence agents to trial culminated earlier this year in the acquittal of one and the sentencing of the other to life imprisonment (with the possibility of parole after twenty years) by a Scottish court sitting in the Netherlands. The same prosecutorial machinery has been set in motion following other major attacks against American targets, including the truck bombing of U.S. embassies in Tanzania and Kenya in 1998, which killed nearly 300 people including twelve Americans, and the suicide bombing of the USS Cole in 2000, which killed seventeen American sailors.
The legalistic approach has not entirely supplanted more forceful action. Though the subject is by definition wrapped in secrecy, the President has the authority under U.S. law to “use all necessary means, including covert action . . . to disrupt, dismantle, and destroy international infrastructure used by international terrorists, including overseas terrorist-training facilities and safe havens.” Press reports suggest that there have been at least a few covert operations launched against terrorists in recent years, including one mission that was to have been carried out by Pakistani proxy forces against Osama bin Laden in 1999 but was aborted on account of a coup d’état in Pakistan.
Finally, open military action has also been part of the U.S. portfolio, but only, prior to our current war on the Taliban and al Qaeda, on three occasions and only in retaliatory fashion. In 1986, after Libyan agents placed a bomb in a Berlin discothèque frequented by American soldiers, the U.S. struck Libya from the air, hitting military sites as well as the compound of Libya’s leader, Muammar Qaddafi. In 1993, in response to an attempted assassination plot aimed at former President George Bush, Bill Clinton lobbed 23 cruise missiles at the headquarters of the Iraqi intelligence service in Baghdad. And in 1998, following truck-bombings of U.S. embassies in Kenya and Tanzania, Clinton again fired a fusillade of cruise missiles in an attempt to kill Osama bin Laden at one of his training camps in Afghanistan. Clinton also simultaneously struck a pharmaceutical plant in the Sudan believed by the CIA to be producing the nerve agent VX.
These, then, were the main elements of U.S. counterterrorism policy up until September 11. Roaming over the territory in great detail, along the way Pillar offers his own judgments of which elements were sound, which were unsound, and what was missing.
To begin again at the beginning, Pillar believes that “cutting out roots” can indeed be useful; as a case in point, he cites the U.S. role in fostering the Oslo peace accords between Israel and the PLO. That the Oslo process seems, if anything, to have fueled terrorism, and never more dangerously so than at Oslo’s peak under Israeli prime minister Ehud Barak, tells heavily against Pillar’s judgment here. But it should be noted that in general he does not favor putting “root causes” at the center of U.S. policy, and (in keeping with his habit of qualifying almost everything he writes) he is also careful to acknowledge that terrorism does not necessarily follow from oppression, that terrorist groups have emerged “in some wealthy Muslim societies like Kuwait but not in some poor ones like Niger,” and that peace processes can “enflame a minority that opposes a settlement.” Besides, even if all root causes were somehow removed, there would “always remain,” writes Pillar, “a core of incorrigibles—and these will include the terrorists about whom the United States must worry the most.”
Pillar is similarly realistic about efforts to shape the intentions of terrorists. A policy of making no concessions, for example, can help at the margins, and perhaps has served to prevent some hostage-taking incidents. But some terrorist attacks “are conducted without any particular concession in mind; the destruction is more of an end itself.” When dealing with this extreme brand of terrorism, “there is no way to influence intentions over the long term.”
Physical defenses are also no panacea. Although in some instances they have unquestionably saved lives and complicated the work of those who would attack us—Pillar cites the bombing of the U.S. embassies in Kenya and Tanzania in 1998, where U.S. casualties were held to a minimum—the limitations of such steps are no less obvious. For one thing, the resources poured into physical defense, however enormous, cannot begin to afford protection to all the facilities requiring it. What is more, such security measures may encourage terrorists to shift from secure to more vulnerable targets. In an open society like ours, there will always be an unlimited number of paths by which determined terrorists can wreak havoc and kill people en masse. Particularly difficult to stop—as the Israeli experience teaches us—are terrorists willing to commit suicide in the course of carrying out their assault. In sum, physical defenses at some times and in some places are reasonable and necessary, but in themselves they are not “a solution.”
The collection of intelligence is no less problematic, essential though it is as a first step toward more active efforts. For starters, the major technological tools of U.S. intelligence—satellite reconnaissance and the interception of electronic communications—are better suited to monitoring the maneuvers of a conventional adversary than the activities of small terrorist groups, which do not typically possess assets observable from high altitudes or outer space and whose members can elude electronic detection by arranging to communicate in face-to-face meetings.
As for human-source intelligence, that is especially hard to come by in the realm of terrorism. Even if the Clinton administration had not issued controversial guidelines discouraging the CIA from recruiting “unsavory” characters as sources, successful penetrations of terrorist plots would in all likelihood have remained the exception rather than the rule. The obstacles, writes Pillar, arise from the structure and composition of terrorist groups:
Those who are closest to the center of decisionmaking in a group (and thus most likely to be witting of all its operations) are the ones least likely to betray it and thus most resistant to recruitment as intelligence sources. Besides this problem of motivation, any attempt to recruit such individuals also faces a problem of access—of getting to them and cultivating relationships with them. This is an even greater difficulty with most religiously oriented terrorists of today than it was with, say, leftists who moved within bourgeois circles in Europe. . . . A well-placed human source is the best possible intelligence asset for counterterrorism, but for the reasons just given, such sources will be very few—and always will be.
Compounding the problems in gathering intelligence are the difficulties in analyzing it. The challenge goes beyond the hurdle of working through reams of documents in obscure languages like Dari or Pashtu. The greater challenge is to find small grains of wheat in a mountain of chaff. The “sheer magnitude of what there is to cover” includes, in Pillar’s words,
not only the whole lineup of existing terrorist groups but also terrorists who have not yet formed a group . . . and groups that have not yet gotten into terrorism. . . . [And] what about the plethora of other extreme religious cults around the world, many of which could represent future terrorist threats? There are not the resources to cover them all, and culling the ones that are most likely to pose such a threat is an awesome analytical task.
From these inherent difficulties, Pillar is impelled to conclude that there “will never be tactical warning of most attempted terrorist attacks, or even most major attempted attacks against U.S. targets.”
Bleak as all this sounds, Pillar is no less straightforward in enumerating the limitations on active measures. Thus, sanctions and financial controls would not be likely to amount to much more than a pinprick even if they were far more stringent than what we had in place before September 11. In contrast to narcotics smuggling or money laundering, the salient characteristic of terrorism, writes Pillar, is that it is “cheap.” (The first attempt to topple the World Trade Center is estimated to have cost only $400 in total; the second, $500,000 at most.) The small sums involved would make the movement of money difficult to track even if it took place in this country, but most of it does not, and is subject only to the unwatchful eyes of governments rarely eager to cooperate with U.S. authorities. In the end, says Pillar, financial controls are primarily of “symbolic” significance.
Next, the criminal-justice approach, vaunted by several successive American administrations as a clear demonstration of our resolve. On the plus side, putting terrorists on trial does serve to reaffirm the U.S. commitment to the rule of law, and has succeeded in putting a number of dangerous terrorists behind bars. The mere prospect of being apprehended may also deter some, or at least interfere with their operational freedom. But the advantages have to be weighed against the serious disadvantages, of which Pillar enumerates several.
For one thing, the American criminal-justice machine is set in motion only when American citizens are victimized. But “the impact of international terrorism on U.S. interests cannot simply be measured in dead American bodies”; there are incidents abroad in which no Americans are killed but in which failure to intervene robs us of potentially valuable intelligence and also makes us appear callous, indifferent to the terrorism that afflicts our friends and allies but not ourselves.
For another thing, the criminal-justice approach tends to apprehend only the “working-level” operative while permitting the powers behind terrorism—heads of organizations and leaders of sponsoring regimes—to remain at large. This not only leaves the worst perpetrators unpunished but has the additional practical pitfall of giving the public, and perhaps the U.S. government itself, “a misleading sense of closure.”
Among our various tools, covert action is, in Pillar’s view, the most “effective possibility.” He conceives of it not as thriller-style raids by shadowy special forces but as a “painstaking cell-by-cell, terrorist-by-terrorist” campaign, inherently “small-scale,” with the U.S. playing a quiet, “behind-the-scenes” role (providing “encouragement, prodding, information, advice, . . . and perhaps monetary or logistical support”) while the main work is done by the countries where the terrorists are operating or hiding. Among other advantages of this method of procedure, the “U.S. hand can stay hidden, and the risk of reprisals is minimal.”
Though conceding that some groups are so violent and recalcitrant that they “should be exterminated, not engaged,” Pillar concludes that assassination of terrorist leaders, currently forbidden by a 1976 executive order, is on the whole a detrimental practice. Not only would it be perceived “as a stooping by the United States . . . to the level of the terrorists,” but “it would completely undercut the principle that terrorism is a matter of methods, not just of targets or purposes.” It would also “shake the confidence of many Americans in the relevant government institutions, resurrecting old suspicions about what the CIA and other U.S. intelligence and security services were doing.”
Finally, the open use of military force. This, in Pillar’s judgment, is another mixed bag. Retaliatory strikes of the kind we have carried out on three occasions may have some merit, but there is little evidence that they stop terrorists from striking again. Qaddafi, for example, “did not get out of the terrorism business” but continued to hit American targets, using the Japanese Red Army as a surrogate and also directly ordering his agents to place a bomb aboard Pan Am 103. The U.S. attacks against Saddam Hussein in 1993 and bin Laden in 1998 similarly seemed to exercise no deterrent effect.
Indeed, far from working to deter, such strikes in Pillar’s view can “serve some of the political and organizational purposes of terrorist leaders,” increasing publicity for their cause, bolstering their “sense of importance,” and reinforcing the message “that the United States is an evil enemy that knows only the language of force.” At best, retaliatory strikes can help sustain the spirit of a public demoralized by terrorism, and help mobilize allies by impressing on them our own seriousness. But in the last analysis, writes Pillar, such strikes “will always be primarily message-sending exercises, rather than a physically significant crippling of terrorist capabilities.” As for preemptive as opposed to retaliatory strikes, Pillar says little about them other than to note that they would lack “the justification of being a response to a terrorist attack” and for this and other reasons would be “unwise.”
Pillar’s survey, written while he was on sabbatical from the CIA, is cheerless indeed: none of our major tools promises to work very well, and the United States is bound to remain perpetually vulnerable to surprise attack. But Pillar himself does not appear especially perturbed by the state of affairs he sketches.
True, in some respects the dangers posed by terrorism have been getting worse. The casualty rate rose over the 1990’s even as the number of incidents declined, and across the same period the U.S. itself became more of a target, a development reflecting the “increased global reach of terrorists” and the “demonstration” provided by the 1993 attack on the World Trade Center and then by Timothy McVeigh that “low-tech methods could cause mass casualties even in the heart of America.” But Pillar also notes a more encouraging side of the picture: among Americans, the toll that terrorism has exacted over the past two decades—some 856 killed, including 190 from domestic terrorism—is “tiny” when measured against annual highway deaths or major U.S. wars, and does not even match the death rate from bathtub drownings or lightning strikes.
What such comparisons show, writes Pillar, is that terrorism “tends to have greater psychological impact relative to the physical harm it causes than do other lethal activities.” But the public focus on the failures of counterterrorism obscures the victories the U.S. has achieved—among them a dramatic cut in the “frequency of international terrorist incidents worldwide” since the mid-1980’s and the rapidity with which a number of major terrorist incidents, including the World Trade Center bombing of 1993, were solved and the perpetrators apprehended, tried, and imprisoned. All told, Pillar concludes, the U.S. track record against terrorism over the past decades has been “remarkable.”
As I mentioned early on, Pillar’s book was published before September 11. In the wake of September 11, of course, a number of his conclusions and observations—at one point he hails the “drastic reduction in skyjackings” over the past 25 years as a “major success story” of U.S. counterterrorism—seem not just wide of the mark but almost risible. But to pick at Pillar for not having foreseen the future is unfair. Much of what other government officials had to say about terrorism before September 11 is far less wise and clear-eyed than anything in this book.
Just this past July, for example, one of Pillar’s colleagues, a former CIA and State Department counterterrorism specialist named Larry C. Johnson, took to the op-ed page of the New York Times to dismiss the idea “that terrorism is the greatest threat to the United States,” that it is “becoming more widespread and lethal,” or that “extremist Islamic groups cause most terrorism.” Such “fantasies,” Johnson wrote, have been generated by “pundits who repeat myths” and “bureaucracies in the military and in intelligence agencies that are desperate to find an enemy to justify budget growth.”
Even after September 11 (but before the U.S. began its counterassault on the Taliban and al Qaeda in October), Philip C. Wilcox, Jr., who served in the State Department as ambassador at large for counterterrorism under Madeleine Albright, was insisting in the pages of the New York Review of Books that “armed force” is “usually an ineffective and often counterproductive weapon against terror.” What is needed instead, he wrote, is “a concerted international effort, with carefully calculated pressures and incentives—and cooperation from Pakistan, which is essential—to persuade bin Laden’s hosts to hand him over for trial”; in other words, diplomacy of the very same kind practiced to no effect by the Clinton administration. As for military action, this has several pronounced disadvantages, wrote Wilcox. For one thing, it “may alienate governments, especially in the Islamic world, whose cooperation we need.” For another, it “might kill innocents.” It could even “violate international laws, including treaties against terrorism that the U.S. had worked hard to strengthen.”
If we turn from the executive branch to another arm of government, mention must be made of Daniel Patrick Moynihan, widely regarded, before he retired last year, as the intellectual giant of the U.S. Senate. Having completed a slow journey from a 1970’s neoconservative hawk to a 1990’s neoliberal dove, the senator from New York spent much of the last decade campaigning to cut the CIA’s budget, and indeed introduced a bill to abolish the agency altogether and turn over its intelligence-gathering functions to the State Department, where the likes of Larry C. Johnson and Philip C. Wilcox, Jr. were running the show. Among the provisions of Moynihan’s bill, one in particular stands out. It concerns the entry of aliens into the U.S. and might aptly be labeled the Free Admission for Terrorists clause:
Within two years of the effective date of this Act the United States Government shall delete from any Lookout List the name of any alien and all information pertaining to such alien placed on such list because of any past, current, or expected beliefs, statements, or associations, if such beliefs, statements, or associations would be lawful within the United States.
There is nothing remotely resembling such terminal astigmatism in Pillar’s Terrorism and U.S. Foreign Policy. Not only is it consistently cogent and sharply reasoned, but at every step of the way Pillar explores the weakness of his own arguments, offering necessary caveats, pointing out areas of uncertainty, and delineating the risks and costs of the policies he himself favors. This is, in sum, a specimen of official thinking at its best. If Pillar’s analysis is flawed—and it is deeply flawed—that is because his entire framework for thinking about his subject is misconceived.
To Pillar, the metaphor of a “ ‘war’ against terrorism” is not an apt one, for from that metaphor “it is a small step to conclude”—mistakenly—“that in this war there is no substitute for victory.” Instead, counterterrorism should more properly be likened to “the effort by public-health authorities to control communicable diseases” or the effort to improve “highway safety,” where regulators “can reduce deaths and injuries somewhat” by taking action on a variety of fronts but without any false idea of “defeating” the problem. Above all, we must be prepared to compromise. Although some terrorists are “monstrous vermin to be locked up or stamped out,” there will be occasions, Pillar writes, “when the greatest contribution the United States can make to counterterrorism will be to swallow hard and . . . to shake hands that carry stains of old blood, possibly including American blood.” In short, less emphasis on “absolute solutions” and more willingness to seek “accommodation.”
Pillar’s strategy was demolished on September 11; but even before then, its deficiencies were glaring. To think of terrorism as a public-health or traffic-safety problem, with whatever qualifications, is profoundly to misread it as a political and moral phenomenon. Yes, infectious disease and defective automobile tires do cause death, but they do not do so by deliberate human agency. To lose or blur the distinction between such radically disparate things is to deprive us of the clarity necessary to combat terrorism. In particular, by ruling out preemptive strikes, we allowed terrorists to wage war against us at the time and place of their choosing, while we could never hit them, or the states harboring them, first. In other words, the catastrophe of September 11 was not a matter of inadequate airline security or porous border controls or even insufficient watchfulness by the CIA and the FBI, though all of these conditions and more undoubtedly obtained. Rather, what brought us low was a passive strategy, executed passively.
As the 1990’s wore on, the government was well aware that Osama bin Laden was targeting the United States for attack. Here, for example, is the relevant entry in the State Department’s 2000 annual report on terrorism, listing al Qaeda’s activities over the previous decade, but not yet including the attack on the USS Cole:
- Plotted to carry out terrorist operations against U.S. and Israeli tourists visiting Jordan for millennial celebrations. (Jordanian authorities thwarted the planned attacks and put 28 suspects on trial.)
- Conducted the bombings in August 1998 of the U.S. embassies in Nairobi, Kenya, and Dar es Salaam, Tanzania, that killed at least 301 persons and injured more than 5,000 others.
- Claims to have shot down U.S. helicopters and killed U.S. servicemen in Somalia in 1993 and to have conducted three bombings that targeted U.S. troops in Aden, Yemen, in December 1992.
- Linked to the following plans that were not carried out:
  - to assassinate Pope John Paul II during his visit to Manila in late 1994,
  - to bomb simultaneously the U.S. and Israeli embassies in Manila and other Asian capitals in late 1994,
  - to bomb in midair a dozen U.S. trans-Pacific flights in 1995,
  - and to kill President Clinton during a visit to the Philippines in early 1995.
For years, the U.S. government was also aware that Afghanistan and a number of other countries were harboring bin Laden and his associates. It was aware that in some of these countries, terrorist training camps were graduating more than 2,000 men a year, all schooled in the arts of subterfuge and mayhem, all steeped in hatred of the West and preeminently hatred of America. It was aware that bin Laden was attempting to acquire chemical, biological, and nuclear weapons of mass destruction and had built facilities for their production in Afghanistan. And it was aware that bin Laden was himself only a part of a larger picture that included dozens of other affiliated terrorist organizations and more than a half-dozen states giving them succor.
As the problem posed by bin Laden grew steadily more acute over the course of the decade, how did we respond? “The tendency to overreact to shocking events, and to fall into complacency in their absence,” writes Pillar, “is natural and inevitable.” Here, however, was a case where the U.S. fell into complacency in the presence of shocking events. We did, it is true, launch at least one covert operation and one overt military operation against bin Laden. As is obvious, they failed.
The covert operation never got off the ground, and for a no less obvious reason: to rely on the Pakistani intelligence service to carry it out, when Pakistan was a main prop of the Taliban, was to doom it from the start. As for the overt operation, the retaliatory strike on bin Laden launched by Bill Clinton on August 20, 1998 was designed above all to eliminate any risk to the U.S. Thus Clinton opted only to fire a salvo of cruise missiles from a great distance and did not deploy troops on the ground. Next to the politically costly possibility of suffering American casualties, successfully hitting the target was a secondary consideration—and we missed the target. After bin Laden escaped unscathed, there was no follow-up action whatsoever.
At the time, Clinton’s Secretary of State Madeleine K. Albright, making the best of a mini-operation gone awry, told reporters that the strike had actually achieved its goal: “It is very likely something would have happened had we not done this.” In the wake of September 11, asked to comment on the Clinton administration’s failure to put bin Laden out of business, Albright said only that “I think we accomplished quite a lot.” More honest has been Nancy Soderberg, a former senior aide in Clinton’s National Security Council: “In hindsight, it wasn’t enough, and anyone involved in policy would have to admit that.”3
The U.S. under Clinton and both Bushes “accomplished quite a lot” of other things, too. In gaining the passage of a UN resolution imposing yet more sanctions against the Taliban last year, the U.S., the State Department proudly announced, had secured a “major victory” against terrorism. The opening in New York of the trial of bin Laden operatives accused of bombing the U.S. embassies in Kenya and Tanzania was, according to the State Department, still another “major victory.” When Qaddafi decided to permit the extradition of his intelligence agents to stand trial in the Netherlands, this was, in the words of a ranking Clinton diplomat, “a real achievement” for U.S. policy. And when the guilty verdict was handed down this past January, the reaction of the Bush administration was the same: a “momentous decision,” in the words of a State Department spokesman.
At the same time that we were compiling paper victories against our deadliest adversaries, we also declared that we were systematically choking them off. It would be more accurate to say that we were laying bare our own throats.
Consider the financial and immigration controls in place before September 11. “[A]ny contribution to a foreign terrorist organization,” the State Department declared in 1997, “regardless of the intended purpose,” was henceforth “prohibited.” But the same document announcing this blanket proscription went on to explain that because some terrorist groups had been operating “charitable activities such as clinics or schools,” Americans could still donate to them if their “contribution is limited to medicine or religious materials.” Hamas, the Palestinian terrorist group responsible for dozens of suicide bombings within Israel, was specifically singled out as an organization that could be aided in this way, but it was not the only eligible one. Up through September 11 and for a time beyond, it remained perfectly legal to make donations of medications to Osama bin Laden’s al Qaeda network, including Cipro—a helpful tool for safely working with anthrax. And one could also contribute texts—“religious materials”—calling for holy war against the United States.
Immigration controls: in theory, members of foreign terrorist organizations were subject to various limitations on their freedom of movement, at least as far as entrance to and residence in the U.S. were concerned. Actual practice was something else. Al Qaeda was officially labeled a terrorist group only in 1999 (a date that in itself speaks volumes about bureaucratic sloth), but as we now know, followers of bin Laden had little difficulty remaining here legally even after they were ostensibly banned.
“Four to five al Qaeda groups have operated in the United States for the last several years,” the Washington Post reported on September 23, 2001. These groups, it continued, “are under intensive government surveillance. The FBI has not made any arrests because the group members entered the country legally in recent years and have not been involved in illegal activities since they arrived, the officials said.” A few days later, on September 27, a New York Times article helpfully explained the rules governing the issuance of visas to terrorists: “According to the State Department manual for consular officers, participating in the planning or execution of terrorist acts would bar a foreigner from getting a visa, but ‘mere membership’ in a recognized terrorist group would not automatically disqualify a person from entering the United States. Nor would ‘advocacy of terrorism.’ ” There was no need, it turns out, for passage of Moynihan’s Free Admission for Terrorists provision; it had effectively been the law of the land all along.
Even worse, what the Washington Post and the New York Times have reported is but the tip of an iceberg. Last year, at the behest of Congress, the National Commission on Terrorism, a body of leading experts, issued findings that were duly praised as hitting hard at our complacency. But the report itself, which begins by declaring that “American strategies and policies are basically on the right track,” repeatedly illustrates the very attitude it purportedly condemns. Thus, the commission called attention to the “thousands” of students from the countries identified by the State Department as sponsors of terrorism who have been permitted to enter the U.S. to study. But the presence of such students, the report goes on to say, “is not objectionable in itself”; not only do the “vast majority” have “no adverse impact on U.S. national security,” but they actually strengthen us by “contribut[ing] to America’s diversity.” As for the small minority who might in fact be terrorists, the report raises only a very mild alarm about the lack of any functioning system that might tell authorities what they are studying, or whether one of them has suddenly switched his major from, say, business administration to microbiology (with a specialization in anthrax spores), or whether they even remain enrolled in school.
Another startling indicator of lassitude comes from the heart of counterterrorism itself. According to the commission, the guidelines governing the recruitment of “unsavory” sources, introduced by the Clinton administration in 1995, had created a climate within the CIA that was “overly risk-averse” and that contributed “to a marked decline in agency morale unparalleled since the 1970’s.” That is bad enough; but the morale problem had sources beyond the restrictive guidelines. Again according to the commission, some CIA officers and FBI special agents were being “sued individually” by terrorist suspects for actions taken in the course of their officially sanctioned duties. Instead of representing them in such suits, the government was letting the agents fend for themselves; those who chose to stay on the job were being forced to purchase personal-liability insurance to cover their legal bills.
Did the commission call for an end to this preposterous state of affairs, whereby accused terrorists have been able to turn the tables on their pursuers and bring them to court? Not at all. It asked only that the government provide “full reimbursement of the costs of personal-liability insurance.”
One can easily go on, but the point is clear. Foreign terrorists were waging war on the United States, and the United States was determined, above all, not to wage war back. Instead, we satisfied ourselves with palliatives: compiling lists of terrorist organizations and the states that sponsored them; capturing and extraditing underlings while letting the planners roam free; imposing sanctions replete with exceptions and loopholes; even, on occasion, closing our eyes entirely to the nature of the perils confronting us. It seems very long ago indeed, but it was only last year that Madeleine Albright took to the podium of the State Department to announce that countries on the terrorism list would no longer be known as “rogue states.” Henceforth, she declared, they were to be called “states of concern.” When an ostrich sees one of its natural predators, it rushes to bury its head in the sand. Here was a great power doing the same.
For our refusal to face down adversaries who were openly bleeding us, and for our unwillingness to take even minimal risks in providing for our self-defense, we have now paid a heavy price. In numbers unknown, terrorists have infiltrated our society and struck a powerful blow. More blows are almost certain to follow. And now at last, to paraphrase Aleksandr Solzhenitsyn in a not altogether different context, the pitiless crowbar of events has pried open our eyes, and we are attempting to fight back.
But are we too late? More to the point, have we truly changed our ways, or will we soon be back on the road to “accommodation,” shaking hands with “moderate” members of the Taliban and cementing a phony alliance with the terrorist-sponsoring states of the Middle East? How many more of us will die before we steel ourselves to do what is necessary in order to secure the victory that many besides Paul Pillar assure us can never come?
1 Brookings Institution, 272 pp., $26.95.
2 By law, the State Department must annually compile a list of states that sponsor terrorism; those placed on it are subject to sanctions that bar them from purchasing various items from us—primarily military gear, trucks, aircraft, and some types of dual-use technology. As of last year, seven states appeared on the list: Iran, Iraq, Syria, Libya, North Korea, Cuba, and Sudan. Though al Qaeda has long been active on its soil, Afghanistan was not included. Instead, through a chain of bureaucratic misfirings, it was given the far milder classification of “not cooperating fully with U.S. anti-terrorism efforts.”
3 In a series of post-September 11 interviews, Albright has also been contending that the Clinton administration would have lacked popular support for any action against bin Laden more vigorous than what it undertook, and that only with the attack on the World Trade Center and the Pentagon did public opinion awaken fully to the terrorism menace. “This has been such a horrible event,” says Albright, “that it has mobilized people in a way that [the destruction of] two embassies and the Cole didn’t.”
But this line of reasoning only serves to remind us once again of how heavily the Clinton administration relied on reading opinion polls in formulating vital national-security plans. What is more, even if public opinion had been as pacific as Albright suggests, could not the American people have been aroused to the danger with proper leadership? A long series of polls cited by Pillar shows extremely high levels of support throughout the 1990’s for a forceful counterterrorism policy; in one representative survey, 79 percent of the public, and 75 percent of “opinion leaders,” agreed that “combating international terrorism should be a ‘very important’ goal of the United States.”
What these numbers indicate is that not public opinion but the President’s preoccupation with the Lewinsky affair throughout 1998—the crucial year when our embassies in Africa were blown up—foreclosed the possibility of a more militant course. As it was, and as Pillar reminds us, even the President’s limited retaliatory strike against bin Laden in that year was widely seen at home and abroad as part of a “Wag the Dog scenario” whereby the White House concocted “a phony war to divert attention from a presidential sex scandal.” Even if Clinton’s motives in firing cruise missiles at bin Laden were pure (I would give him the benefit of the doubt on that score), we are all now, three years later, beginning to see the real price of his year-long distraction from his duties and his dissipation of presidential authority.
Review of 'Stanley Kubrick' by Nathan Abrams
Except for Stanley Donen, every director I have worked with has been prone to the idea, first propounded in the 1950s by François Truffaut and his tendentious chums in Cahiers du Cinéma, that directors alone are authors, screenwriters merely contingent. In singular cases—Orson Welles, Michelangelo Antonioni, Woody Allen, Kubrick himself—the claim can be valid, though all of them had recourse, regular or occasional, to helping hands to spice their confections.
Kubrick’s variety of topics, themes, and periods testifies both to his curiosity and to his determination to “make it new.” Because his grades were not high enough (except in physics), this son of a Bronx doctor could not get into colleges crammed with returning GIs. The nearest he came to higher education was when he slipped into accessible lectures at Columbia. He told me, when discussing the possibility of a movie about Julius Caesar, that the great classicist Moses Hadas made a particularly strong impression.
While others were studying for degrees, solitary Stanley was out shooting photographs (sometimes with a hidden camera) for Look magazine. As a movie director, he often insisted on take after take. This gave him choices of the kind available on the still photographer’s contact sheets. Only Peter Sellers and Jack Nicholson had the nerve, and irreplaceable talent, to tell him, ahead of shooting, that they could not do a particular scene more than two or three times. The energy to electrify “Mein Führer, I can walk” and “Here’s Johnny!” could not recur indefinitely. For everyone else, “Can you do it again?” was the exhausting demand, and it could come close to being sadistic.
The same method could be applied to writers. Kubrick might recognize what he wanted when it was served up to him, but he could never articulate, ahead of time, even roughly what it was. Picking and choosing was very much his style. Cogitation and opportunism went together: The story goes that he attached Strauss’s Blue Danube to the opening sequence of 2001 because it happened to be playing in the sound studio when he came to dub the music. Genius puts chance to work.
Until academics intruded lofty criteria into cinema/film, the better to dignify their speciality, Alfred Hitchcock’s attitude covered most cases: When Ingrid Bergman asked for her motivation in walking to the window, Hitch replied, flatly, “Your salary.” On another occasion, told that some scene was not plausible, Hitch said, “It’s only a movie.” He did not take himself seriously until the Cahiers du Cinéma crowd elected to make him iconic. At dinner, I once asked Marcello Mastroianni why he was so willing to play losers or clowns. Marcello said, “Beh, cinema non è gran cosa” (cinema is no big deal). Orson Welles called movie-making the ultimate model-train set.
That was then; now we have “film studies.” After they moved in, academics were determined that their subject be a very big deal indeed. Comedy became no laughing matter. In his monotonous new book, the film scholar Nathan Abrams would have it that Stanley Kubrick was, in essence, a “New York Jewish intellectual.” Abrams affects to unlock what Stanley was “really” dealing with, in all his movies, never mind their apparent diversity. It is declared to be, yes, Yiddishkeit, and in particular, the Holocaust. This ground has been tilled before by Geoffrey Cocks, when he argued that the room numbers in the empty Overlook Hotel in The Shining encrypted references to the Final Solution. Abrams would have it that even Barry Lyndon is really all about the outsider seeking, and failing, to make his awkward way in (Gentile) Society. On this reading, Ryan O’Neal is seen as Hannah Arendt’s pariah in 18th-century drag. The movie’s other characters are all engaged in the enjoyment of “goyim-naches,” an expression—like menschlichkayit—he repeats ad nauseam, lest we fail to get the stretched point.
Theory is all when it comes to the apotheosis of our Jew-ridden Übermensch. So what if, in order to make a topic his own, Kubrick found it useful to translate its logic into terms familiar to him from his New York youth? In Abrams’s scheme, other mundane biographical facts count for little. No mention is made of Stanley’s displeasure when his 14-year-old daughter took a fancy to O’Neal. The latter was punished, some sources say, by having Barry’s voiceover converted from first to third person, so that Michael Hordern displaced the star as narrator. By lending dispassionate irony to the narrative, the change proved a pettish fluke of genius.
While conning Abrams’s volume, I discovered, not greatly to my chagrin, that I am the sole villain of the piece. Abrams calls me “self-serving” and “unreliable” in my accounts of my working and personal relationship with Stanley. He insinuates that I had less to do with Eyes Wide Shut than I pretend and that Stanley regretted my involvement. It is hard for him to deny (but convenient to omit) that, after trying for some 30 years to get a succession of writers to “crack” how to do Schnitzler’s Traumnovelle, Kubrick greeted my first draft with “I’m absolutely thrilled.” A source whose anonymity I respect told me that he had never seen Stanley so happy since the day he received his first royalty check (for $5 million) for 2001. No matter.
Were Abrams (the author also of a book as hostile to Commentary as this one is to me) able to put aside his waxed wrath, he might have quoted what I reported in my memoir Eyes Wide Open to support his Jewish-intellectual thesis. One day, Stanley asked me what a couple of hospital doctors, walking away with their backs to the camera, would be talking about. We were never going to hear or care what it was, but Stanley—at that early stage of development—said he wanted to know everything. I said, “Women, golf, the stock market, you know…”
“Couple of Gentiles, right?”
“That’s what you said you wanted them to be.”
“Those people, how do we ever know what they’re talking about when they’re alone together?”
“Come on, Stanley, haven’t you overheard them in trains and planes and places?”
Kubrick said, “Sure, but…they always know you’re there.”
If he was even halfway serious, Abrams’s banal thesis that, despite decades of living in England, Stanley never escaped the Old Country might have been given some ballast.
Now, as for Stanley Kubrick’s being an “intellectual.” If this implies membership in some literary or quasi-philosophical elite, there’s a Jewish joke to dispense with it. It’s the one about the man who makes a fortune, buys himself a fancy yacht, and invites his mother to come and see it. He greets her on the gangway in full nautical rig. She says, “What’s with the gold braid already?”
“Mama, you have to realize, I’m a captain now.”
She says, “By you, you’re a captain, by me, you’re a captain, but by a captain, are you a captain?”
As New York intellectuals all used to know, Karl Popper’s definition of bad science, and bad faith, involves positing a theory and then selecting only whatever data help to furnish its validity. The honest scholar makes it a matter of principle to seek out elements that might render his thesis questionable.
Abrams seeks to enroll Lolita in his obsessive Jewish-intellectual scheme by referring to Peter Arno, a New Yorker cartoonist whom Kubrick photographed in 1949. The caption attached to Kubrick’s photograph in Look asserted that Arno liked to date “fresh, unspoiled girls,” and Abrams says this “hint[s] at Humbert Humbert in Lolita.” Ah, but Lolita was published, in Paris, in 1955, six years later. And how likely is it, in any case, that Kubrick wrote the caption?
The film of Lolita is unusual for its garrulity. Abrams’s insistence on the sinister Semitic aspect of both Clare Quilty and Humbert Humbert supposedly drawing Kubrick like moth to flame is a ridiculous camouflage of the commercial opportunism that led Stanley to seek to film the most notorious novel of the day, while fudging its scandalous eroticism.
That said, in my view, The Killing, Paths of Glory, Barry Lyndon, and A Clockwork Orange were and are sans pareil. The great French poet Paul Valéry wrote of “the profundity of the surface” of a work of art. Add D.H. Lawrence’s “never trust the teller, trust the tale,” and you have two authoritative reasons for looking at or reading original works of art yourself and not relying on academic exegetes—especially when they write in the solemn, sometimes ungrammatical style of Professor Abrams, who takes time out to tell those of us at the back of his class that padre “is derived from the Latin pater.”
Abrams writes that I “claim” that I was told to exclude all overt reference to Jews in my Eyes Wide Shut screenplay, with the fatuous implication that I am lying. I am again accused of “claiming” to have given the name Ziegler to the character played by Sydney Pollack, because I once had a (quite famous) Hollywood agent called Evarts Ziegler. So I did. The principal reason for Abrams to doubt my veracity is that my having chosen the name renders irrelevant his subsequent fanciful digression on the deep, deep meanings of the name Ziegler in Jewish lore; hence he wishes to assign the naming to Kubrick. Pop goes another wished-for proof of Stanley’s deep and scholarly obsession with Yiddishkeit.
Abrams would be a more formidable enemy if he could turn a single witty phrase or even abstain from what Karl Kraus called mauscheln, the giveaway jargon of Jewish journalists straining to pass for sophisticates at home in Gentile circles. If you choose, you can apply, online, for screenwriting lessons from Nathan Abrams, who does not have a single cinematic credit to his name. It would be cheaper, and wiser, to look again, and then again, at Kubrick’s masterpieces.