Bureaucratic reshuffling at the FBI and CIA cannot fix problems whose roots lie deep in our political culture.
Intelligence-gathering is something of a square peg in the round hole of contemporary political morality. It is about unearthing that which is willfully concealed, an enterprise that necessarily calls for invading privacy and inducing betrayal—discomfiting acts in an age that exalts the individual and his liberties above community and country. It is about assuming and preparing for the worst in an era that sees “bad” as an outmoded adjective for “different,” another dash of enlivening spice in a rich social stew. Intelligence is gimlet eyes in a world of rose-colored glasses.
Now, however, that foreign pathologies long denied have visited their excesses upon us, many among the benignly tolerant have turned overnight into the equivalent of ambulance-chasers. In particular, they have confidently laid at the door of America’s intelligence apparatus the success of America’s enemies on September 11, 2001. Even as investigators in the CIA and FBI were unable to “connect the dots,” it is said, nineteen al-Qaeda hijackers cavorted for months in this country before carrying out the atrocities of that day. Nor was this catastrophe—“by definition, the worst intelligence failure in our country’s history,” in the words of the Reagan-era intelligence expert Herbert Meyer—a singular phenomenon. Less than a year earlier, a billion-dollar destroyer, the U.S.S. Cole, had been bombed and nearly sunk, causing the deaths of seventeen servicemen, because we unwittingly berthed it in the al-Qaeda-infested port of Aden, Yemen. This, after our embassies in Kenya and Tanzania were turned to rubble in August 1998 by the very same al Qaeda, which had already attacked numerous times, and which no less often had expressly declared war on the United States.
Nor is that all. Thanks to our failed intelligence services (the indictment continues), the Bush administration grossly overestimated the stockpiles and production capacity of chemical, bacteriological, radiological, and nuclear weapons of mass destruction (WMD) in Iraq. In the meantime, in North Korea, construction of nuclear weapons seems to have proceeded for years right under our noses. And Pyongyang’s mischief marked only a single strand in a web of proliferation woven by our ally Pakistan, a web that may have spread into as many as seven nations, including Iran, where the mullahs now harbor the remnants of al Qaeda’s leadership.
How did this wide wreckage in our intelligence capacities come about? One incisive answer has been given by Mark Riebling in his gripping history, Wedge: How the Secret War between the FBI and CIA Has Endangered National Security (1994, reissued in 2002 with a new epilogue). Riebling’s thesis is that the problem is longstanding, that it has a single “root cause,” and that this root cause is institutional. In his telling, a full half-century’s worth of national disasters—from Pearl Harbor through the Bay of Pigs, the Kennedy assassination, Watergate, Iran-Contra, and 9/11—can be traced directly to intelligence failures, and those failures were proximately caused by turf-battling between our two great rival agencies.
This has now become conventional wisdom, accepted on all sides. And one can see the apparent sense in it. A ramified system of multiple agencies having similar missions and chasing the same budget dollars will inevitably produce rivalry; rivalry begets pettiness, and pettiness begets failure. Such, indeed, is the reasoning behind virtually all of the proposals now under consideration by no fewer than seven assorted congressional committees, internal evaluators, and blue-ribbon panels charged with remedying the situation.
One proposed fix, supported by, among others, Senator John Edwards and James B. Steinberg, a deputy national security adviser in the Clinton administration, would create a new entity, analogous to Britain’s MI-5, to assume the FBI’s domestic-intelligence mission. Decoupling that agency’s information-gathering from its law-enforcement duties would allegedly result in a specialist agency that would more resemble, and be less likely to rumble with, its foreign-intelligence counterpart, the CIA. These hoped-for efficiencies would, it is (naively) supposed, compensate for the loss of the FBI’s critical power to leverage intelligence-gathering with the ready hammer of prosecution.
Steinberg and Senator Dianne Feinstein are also among those who would solve the pitfalls of conflicting bureaucracies by . . . adding another bureaucracy. This new National Intelligence Directorate would oversee the full spectrum of relevant entities, compelling the likes of the CIA, the FBI, the National Security Agency (NSA), the National Geospatial-Intelligence Agency, the Defense Intelligence Agency (DIA), and the State Department’s intelligence branch to play nice with each other. Presumably it would also render obsolete the Terrorist Threat Integration Center, another new entity (under CIA direction) created by President Bush a year ago to promote harmony.
But is it true that inter-agency rivalry is the problem everyone claims it is?
That rivalry exists is indisputable; likewise, that its effects can be pernicious. One of my first encounters with the CIA a decade ago occurred when I and other prosecutors preparing the conspiracy case against the organization responsible for the 1993 World Trade Center (WTC) bombing asked the agency for a much-needed briefing. The CIA was perfectly willing to come to New York for that purpose—but not if our FBI case agents were going to be in the same room.
Nevertheless, like many facts that appall at first blush, internecine warfare is at best half the story. For one thing, intelligence professionals are correct (if occasionally disingenuous) when they complain that the public has a skewed perception of their operations: while catastrophic lapses are always notorious, intelligence successes are more numerous. These, however, must typically be kept secret in order to preserve sources of information and methods of gathering it. The unfortunate result is a portrait of ceaseless “failure” that, aside from giving intelligence-gathering an undeserved bad name, also obscures other verities.
First, day-to-day cooperation among agencies, and particularly between the FBI and CIA, is actually far better than people have been led to believe. In terrorism cases, in the decade after the 1993 WTC bombing, teamwork improved by leaps and bounds. To be sure, there are occasional breakdowns, usually due to personality conflicts. But this is an unavoidable function of the human condition—which no legislation on earth can repeal—and it is just as frequently a factor in intra-agency disputes as in those between agencies. Today, agents who fail to compare notes are generally acting in violation of information-sharing protocols; it is hard to imagine additional directives improving the situation.
Second, intelligence-gathering is not monolithic. Domestic intelligence is radically different from the foreign variety, and both differ critically from the needs of the military. So polysemous an imperative requires a variety of skills to meet widely divergent situations and assumptions. As both a practical and a political matter, it is inconceivable that the task could be accomplished by a single agency, and proposals that suggest otherwise are certain only to reshuffle, rather than eradicate, natural rivalries while damaging the quality and quantity of information collection.
Third, and most misunderstood, rivalry—overall—is a virtue. In the government’s vast monopoly, it is essential. Naturally, the seamy side of competition being a perennial best-seller, the public record is replete with hair-raising anecdotes of sharp-elbowed investigators pursuing the same quarry to the benefit of criminals, enemies, and traitors. On a macro level, however, the throat-cutting is statistically insignificant. As a rule, competition impels agents to test their premises and press for better information; it results in the generation of more leads and the collection and refinement of more intelligence. In a world where the Supreme Court cannot decide a case without amicus briefs from innumerable interested observers, where Congress declines to pass legislation without the input of scores of experts, do we really want the President, in matters of national security, reduced to a single stream of intelligence-collection and analysis?
If turf-battling is not an enormous obstacle, does that mean there are no obstacles? Hardly. The real problems, though, are not bureaucratic but structural and philosophical. They have taken some 30 years to metastasize, and they would take a lot more than cosmetic surgery to reverse, even assuming the national will to do it.
As with much else in our national life, the bacillus now grown to plague America’s intelligence apparatus took root in the unrest of Vietnam and the upheaval of Watergate. The perception of national security became intertwined in those years with an increasingly unpopular war that ended badly. For a generation of activists soon to take up positions of influence in politics, academia, and the media, the antiwar movement inculcated a lasting aversion not only to the exercise of American military power but to the agencies tasked with assessing threats to our national security, not to mention the real-world grunt work of intelligence.
Watergate deepened the aversion. For one thing, the burglars included former intelligence officers. For another, President Richard Nixon enlisted the CIA to obstruct the FBI’s investigation of the break-in. For a third, his White House “enemies” operation featured spying against domestic political adversaries. Hot on the heels of these misdeeds, the CIA became enmeshed in other domestic spying scandals that were subjected to high-profile probes, first by a commission appointed by President Ford and, in 1976, by the celebrated Senate Select Committee chaired by Frank Church.
Perhaps the first consequence of this chain of events was a long-term decline in the authority of the executive branch of government. The decline stemmed from an illogic that often bedevils the aftermath of scandal: the tendency to confound the sins of a corrupt actor (in this case, Nixon) with a structural weakness in the system itself. In the mid-1970’s, the new operating premise was that, since robust presidential power was likely to be corrupted, it must therefore be scrutinized and shackled in every respect.
From this there followed a second consequence: a shift of national-security functions, prominently including intelligence-gathering, from the ambit of broad executive discretion to the area where executive action is regulated by Congress and the federal courts. Compared with the “intelligence failures” decried by journalists and politicians today, this shift engendered a continuing calamity.
In the constitutional license given to executive action, a gaping chasm exists between the realms of law enforcement and national security. In law enforcement, as former U.S. Attorney General William P. Barr explained in congressional testimony last October, government seeks to discipline an errant member of the body politic who has allegedly violated its rules. That member, who may be a citizen, an immigrant with lawful status, or even, in certain situations, an illegal alien, is vested with rights and protections under the U.S. Constitution. Courts are imposed as a bulwark against suspect executive action; presumptions exist in favor of privacy and innocence; and defendants and other subjects of investigation enjoy the assistance of counsel, whose basic job is to thwart government efforts to obtain information. The line drawn here is that it is preferable for the government to fail than for an innocent person to be wrongly convicted or otherwise deprived of his rights.
Not so the realm of national security, where government confronts a host of sovereign states and sub-national entities (particularly terrorist organizations) claiming the right to use force. Here the executive is not enforcing American law against a suspected criminal but exercising national-defense powers to protect against external threats. Foreign hostile operatives acting from without and within are not vested with rights under the American Constitution. The galvanizing national concern in this realm is to defeat the enemy and, as Barr puts it, to “preserve the very foundation of all our civil liberties.” The line drawn here is that government cannot be permitted to fail.
For these reasons, prior to the post-Vietnam, post-Watergate revolution, executive-branch authority in matters of national security had been almost plenary. The constitutional checks held by Congress were largely trifles. The power to declare war was already nearly an anachronism—during the Civil War, the Supreme Court had ruled that, regardless of whether Congress acts, Article II of the Constitution actually obliges the President to respond with all necessary force to put down attacks against the United States. Even Congress’s power of the purse lacked much practical muscle, given the inherent political risk for a legislator who dared to withhold funds the President said were vital to national security.
In line with this, the executive branch had wide latitude to gather intelligence against potential threats. True, the CIA’s charter did not permit it to conduct domestic intelligence-gathering—that task being left to the FBI—but this affected only which arms of the executive branch could spy on our enemies in which venues. It did not, at least in theory, affect the substance of the information to be gathered.
But cataclysmic changes were ahead, and their harbinger was President Jimmy Carter’s acquiescence in the 1978 Foreign Intelligence Surveillance Act (FISA). Here, for the first time, Congress and the courts undertook to regulate the gathering of national intelligence, particularly by electronic eavesdropping, against agents of hostile foreign powers. In the Nixonian afterclap, it was adjudged that the executive could not be trusted unilaterally to wield this power, which might secretly be used against political opponents.
Of course, such wiretapping was already illegal, and the Nixon experience had amply demonstrated the political price to be paid for engaging in it. No matter. Henceforth, the executive branch would not be allowed to use whatever tactics it, as the branch with the most expertise and information, determined were necessary to protect the nation. Rather, it would be compelled to go to a federal FISA court newly created for the purpose, and, as with the procedure for criminal wiretaps, it would need to establish probable cause that the target was an agent of a foreign power. Electronic surveillance would be permitted only if the judges approved.
The impact on intelligence collection was serious. Previously, it would have been laughable to suggest that foreign enemy operatives had a right to conduct their perfidies in privacy—the Fourth Amendment prohibits only “unreasonable” searches, and there is nothing unreasonable about searching or recording people who threaten national security. (The federal courts have often recognized that the Constitution is not a suicide pact.) Now, such operatives became the beneficiaries of precisely such protection. Placing so severe a roadblock in the way of a crucial investigative technique necessarily meant both that the technique would be used less frequently (thereby reducing the quantity and quality of valuable intelligence) and that investigative resources would have to be diverted from intelligence-collection to the rigors of compliance with judicial procedures (which are cumbersome).
This was only the start of the debacle. Courts and the organized defense bar soon began to ply the FISA statute with hypothetical governmental abuses. What if, they worried, a national-security wiretap yielded evidence of an ordinary crime? Not an unlikely event, given that terrorists tend to commit lots of ordinary crimes, including money laundering and identity fraud. This was no problem under FISA as written: intelligence agents could simply pass the information to agents of the criminal law, who could then use the damning conversations in court. But what if such law-enforcement agents, for their part, were to try to use FISA as a pretext to investigate crimes for which they themselves lacked probable cause to secure a regular criminal wiretap?
In one sense, the suggestion was not out of line—wiretap conversations are devastating evidence, and defense lawyers routinely strain to have them suppressed. But the notion was logically absurd. If a criminal investigator was going to act corruptly, it would be far easier for him to fabricate evidence showing probable cause for a regular wiretap (by pretending, for example, to have an anonymous source who had bought illegal drugs from the target) than to trump up a national-security angle necessitating an additional set of internal approvals. Nor was there any indication that such chicanery was actually afoot. But reality is rarely an obstacle for those who see life as an ongoing law-school seminar. Gradually, courts rewrote FISA, grafting onto it a so-called “primary purpose” test requiring the government to establish not only probable cause that it was targeting operatives of a foreign power but also that its real reason for seeking surveillance was counterintelligence, not criminal prosecution.
As one would expect, this created among many prosecutors a grave apprehension about “the appearance of impropriety”—a hidebound concept governing lawyer ethics that is perfectly nonsensical in the life-and-death context of national security. Even as militant Islam began its terrorist war against the United States with the 1993 WTC bombing and the 1994-95 “Bojinka” plot to blow a dozen American airliners out of the sky over the Pacific, the Justice Department was worrying that agents and prosecutors might be perceived to be using intelligence-gathering authority to build criminal prosecutions. Often, the result was weeks or more of delay, during which identified terrorists who happened also to be committing quotidian crimes went unmonitored while the government dithered over whether to employ FISA or the criminal wiretap law. The insanity reached its apex in 1995 with the “primary purpose” guidelines drafted by the Clinton administration: henceforth, a firewall would be placed between criminal and national-security agents, generally barring them even from communicating with one another.
The damage from the firewall and the impediments to FISA has been incalculable. It took ten years to make the racketeering case against Sami al-Arian, the professor accused of helping run the murderous Palestinian Islamic Jihad from the campus of the University of South Florida, because the wealth of information collected by intelligence agents was withheld from their criminal counterparts. And that was a pittance compared with what happened in the waning weeks before the September 11 attacks. Zacarias Moussaoui, who had paid cash for pilot training (and was reported to authorities when his bizarre behavior—including intense interest in how cabin and cockpit doors worked—could no longer be ignored), was detained by the immigration service. Worried FBI intelligence agents were desperate to search his computer, but were turned down by supervisors who decided there was insufficient evidence to go to the FISA court. His al-Qaeda membership and numerous connections to the hijackers were not uncovered until after the attacks.
And the Moussaoui travesty itself pales in comparison to the story of Khalid al-Midhar and Nawaf al-Hazmi, excruciatingly recounted in Slate by Stewart Baker, general counsel of the National Security Agency during the early Clinton administration. The pair, who had trained to pilot planes, lived in California. In August 2001, an astute FBI intelligence agent was trying to find them, and asked the criminal division for help. But FBI headquarters stepped in and insisted that the firewall not be breached: criminal agents were to stay out of the intelligence effort. A few weeks later, al-Midhar and al-Hazmi plunged Flight 77 into the Pentagon, their manifold ties to Mohammed Atta and the other hijackers kept safely under wraps.
In attempting to “connect the dots” on how branches of our government erected barricades against efficient information-sharing, one cannot avoid addressing the most basic blunder of all. In the years after World War II, the designers of the CIA conceived of it as, in one sense, an analogue to the American military. Just as the armed forces are generally precluded by law from domestic policing (which is left to the FBI and other federal, state, and local agencies), so the CIA could not conduct its operations within U.S. territory.
The CIA, then, is confined to foreign intelligence and counterintelligence activities. When leads cross into U.S. territory, the FBI takes over—mainly through its foreign-counterintelligence division, which is separate from its law-enforcement side. This division of labor, and not simple rivalry, is the salient reason for the inter-agency warfare of the last half-century.
Turf aside, however, the structure is not analogous to the legal doctrine of posse comitatus, which bars the armed forces from domestic policing. For if the United States were invaded by a foreign army, our military would respond; that would be a national-defense function, not policing. Similarly, hostile foreign operatives within the U.S.—plotting, recruiting, providing funding and material support to their principals—fit the mold of an invading foreign army far better than that of a criminal collaborator.
Yet U.S. law and tradition (strenuously supported by many of the same politicians who today bluster about the CIA’s lack of dot-connecting skills) rig intelligence as if it were Russian roulette: the agency whose raison d’être is to counter foreign threats to our national security is precluded from participating in investigations once they cross into our nation, while the agency that is expected to pick up the ball and run with it from there does so without the CIA’s depth of knowledge and expertise.
That this arrangement was ill-conceived has become increasingly patent. With the info-tech revolution, al-Qaeda operatives seamlessly share information across borders with the click of a mouse, enabling them instantly to construct a complete picture of their prey. By contrast, the forces charged with keeping us safe from them are expected to complete awkward hand-offs as persons and information roam in and out of the country. The windfall beneficiary is, ironically, the terrorist operative who happens also to be an American citizen. Such an operative is not only protected by the full panoply of constitutional rights wherever in the world he travels but is radioactive to the CIA, which is no less fearful of the perception that it is spying on Americans than the Justice Department was about the appearance of misusing FISA.
It is bad enough that, prior to 9/11, terrorists could easily survive in the lacunae of our domestic intelligence apparatus. Worse, they positively thrived on the way it operated.
Throughout the eight years of the Clinton administration, as militant Islam’s jihad against America escalated, the federal courts became the linchpin of counterterror strategy. This began understandably enough. The 1993 WTC bombing was viewed as a domestic crime. Although, years later, investigators and journalists would link the bombing to al Qaeda, and al Qaeda in turn to prior terrorist acts against the U.S., at the time not much was known about Osama bin Laden, his network, and his national support systems in Afghanistan and Sudan. No one could credibly fault President Clinton for handling the matter as a court case or for not responding militarily. As the murder and mayhem grew, however, and as it became clearer that indictments were a pusillanimous response to suicide bombers geared to obliterate American embassies and naval destroyers, Clinton stayed the self-defeating course.
As Defense Secretary Donald Rumsfeld has observed, weakness is provocative. The fecklessness of meeting terrorist attacks with court proceedings—trials that take years to prepare and months to present, and that, even when successful, neutralize only an infinitesimal percentage of the actual terrorist population—emboldened bin Laden. But just as hurtful was the government’s promotion of terrorism trials in the first place. They were a useful vehicle if the strategic object was to orchestrate an appearance of justice being done. As a national-security strategy, they were suicidal, providing terrorists with a banquet of information they could never have dreamed of acquiring on their own.
Under discovery rules that apply to American criminal proceedings, the government is required to provide to accused persons any information in its possession that can be deemed “material to the preparation of the defense” or that is even arguably exculpatory. The more broadly indictments are drawn (and terrorism indictments tend to be among the broadest), the greater the trove of revelation. In addition, the government must disclose all prior statements made by witnesses it calls (and, often, witnesses it does not call).
This is a staggering quantum of information, certain to illuminate not only what the government knows about terrorist organizations but the intelligence agencies’ methods and sources for obtaining that information. When, moreover, there is any dispute about whether a sensitive piece of information needs to be disclosed, the decision ends up being made by a judge on the basis of what a fair trial dictates, rather than by the executive branch on the basis of what public safety demands.
It is true that this mountain of intelligence is routinely surrendered along with appropriate judicial warnings: defendants may use it only in preparing for trial, and may not disseminate it for other purposes. Unfortunately, people who commit mass murder tend not to be terribly concerned about violating court orders (or, for that matter, about being hauled into court at all).
In 1995, just before trying the blind sheik (Omar Abdel Rahman) and eleven others, I duly complied with discovery law by writing a letter to the defense counsel listing 200 names of people who might be alleged as unindicted co-conspirators—i.e., people who were on the government’s radar screen but whom there was insufficient evidence to charge. Six years later, my letter turned up as evidence in the trial of those who bombed our embassies in Africa. It seems that, within days of my having sent it, the letter had found its way to Sudan and was in the hands of bin Laden (who was on the list), having been fetched for him by an al-Qaeda operative who had gotten it from one of his associates.
Intelligence is dynamic. Over time, foreign terrorists and spies inevitably learn our tactics and adapt: consequently, we must refine and change those tactics. When we purposely tell them what we know—for what is blithely assumed to be the greater good of ensuring they get the same kind of fair trials as insider traders and tax cheats—we enable them not only to close the knowledge gap but to gain immense insight into our technological capacities, how our agencies think, and what our future moves are likely to be.
In considering the asserted “intelligence failures” of September 11 and beyond, it is worth bearing in mind this information bounty, which our government consciously decided to provide from 1993 through 2001 even as it was increasingly manifest that the enemy was growing more proficient, its attacks more deadly.
Although I have thus far been concentrating on the collection and analysis of intelligence here at home, a similar and complementary history can be constructed for what happened to our capabilities overseas. There, too, our intelligence apparatus was thoroughly compromised.
In particular, the collapse of the Soviet Union in the early 1990’s dovetailed with a severe economic recession that ultimately cost George H. W. Bush his presidency. For the CIA, this constellation of circumstances had two major, detrimental consequences.
First, desperate to cut spending wherever politically palatable, the federal government declared a “peace dividend.” This was a fantasy. Although the fall of Soviet tyranny was an enormous blessing, it also presaged a more challenging international environment, filled with threats diffuse, unconventional, and less predictable. Nevertheless, at the urging of many of the same elected officials now complaining about failure, including Senator John F. Kerry, intelligence spending was repeatedly slashed.
The second nightmare for the CIA was President Clinton. For the first President Bush, himself a former CIA director, intelligence had been a priority. For Clinton, it was a nettlesome chore—and one he largely avoided. Clinton had no time even for James Woolsey, his own chosen director of Central Intelligence, declining to hold a single one-on-one meeting during Woolsey’s maddening two-year tenure. This freeze-out had the predictable effects: agency morale plummeted, officers abandoned ship, and Congress’s funding door slammed shut.
Human intelligence also fell into disrepair, having already fallen into disrepute. It is worth considering that almost all the terrorism prosecutions of the 1990’s took place after successful attacks. We managed to stop exactly two such attacks: the 1994-95 Bojinka plot against the airliners, and a 1993 conspiracy to bomb New York City landmarks. The former success was due to sheer luck (a fire, started by inept chemical mixing on the part of two terrorists, was detected by an alert Manila police officer), combined with a Pakistani informant who was induced to turn in the ringleader. The latter happened because an informant penetrated the blind sheik’s terror organization, recorded scores of conspiratorial conversations, and permitted agents to catch the plotters in flagrante delicto, stirring explosives. Sadly, that informant had actually infiltrated the group in 1991 but had been deactivated seven months before the 1993 WTC bombing (after which he was reinstated).
One cannot develop the necessary global network of intelligence informants without CIA case officers. As George Tenet, the current director, attested in a recent speech, by the time he took the helm in the fifth year of the Clinton administration the graduating class of case officers was at a historic nadir. As for the agency’s clandestine-services program, Tenet elaborated, that was in such a shambles that it will take until 2009 before it is functioning at an acceptable level.
Meanwhile, abjuring clandestine operatives, Clinton-era intelligence went hi-tech, making extensive use of satellite surveillance and other advances in remote eavesdropping. But with fewer agents to translate and analyze what was gathered, or to follow leads, the effort was ineffectual. Consider: the 1998 embassy bombings in Africa, carried out by an organization we had been focusing on for five years, took several months to plan; ditto the 2000 strike on the U.S.S. Cole (which would have happened eight months earlier, to the U.S.S. The Sullivans, had not the terrorists’ attack boat sunk from the heft of explosives). The attacks of September 11, 2001 were plotted on four continents for well over a year. We did not sniff out any of them.
As the CIA stumbled, the FBI was ascendant, opening a host of new legal-attache offices around the world. Generally speaking, this was a positive development: just as the terrorist threat was exploding, so too was the spread and sophistication of criminal syndicates, making it imperative for law-enforcement agencies to cooperate internationally. But timing is everything. The FBI was spreading its wings just as its most significant cases involved not ordinary crimes but national security.
Some of our best information is obtained from foreign intelligence services. Naturally, those services are much less forthcoming if they think that what they tell us will have to be revealed in court because of U.S. legal rules. Historically, that was not much of a problem when dealing with the CIA; it is, however, always a concern for a country weighing whether to share some sensitive or potentially embarrassing information with the FBI. The Saudis’ infamous obstruction of the FBI’s efforts to investigate the 1996 Khobar Towers bombing is an exquisite example.
In the Clinton years, no matter how many times we were attacked, all the world knew that our approach was to have the FBI build criminal cases. Indeed, Presidential Decision Directive (PDD) 39, issued in June 1995, announced that prosecuting terrorists and extraditing indicted terrorists held overseas were signature priorities of the administration. Nearly three years later, after several other attacks and public declarations of war by bin Laden, Clinton issued a press release that both trumpeted as a ringing success his strategy of having terrorists “apprehended, tried, and given severe prison sentences” and announced a new directive, PDD 62. This purported to “reinforce the mission of the many U.S. agencies charged with roles in defeating terrorism,” including by means of the “apprehension and prosecution of terrorists.” The embassies in Kenya and Tanzania were bombed less than three months later.
The mantra that “9/11 changed everything” is omnipresent. But is it true? It is certainly true in one crucial sense: our national anti-terrorism strategy is no longer to fight bombs and militias with indictments and press releases. The military has reemerged as the spearhead, with law enforcement in an important but subordinate role. The ramifications have already been positive: simply by responding with force to our enemies, we have not just eliminated thousands of terrorists but accumulated volumes of vital intelligence.
But much still needs to change, and the prognosis is not hopeful. For one thing, we speak of intelligence “failures” as if they were current lapses, to be laid at the feet of the poor saps left without a chair just as the music stopped. And we speak about “fixes” without coming to terms with the nature of the problem; until we do, any such fixes will at best be palliatives, and will more likely make things worse.
Take Iraq’s missing weapons of mass destruction. It may yet turn out that these will be found in Iraq itself, or that they were moved or hidden outside the country in the many months between when we first told Saddam Hussein we were coming and when at last we arrived to depose him. Still, for the moment the stubborn fact remains that the government said the WMD were there and they have not been located. Whose intelligence failure is that? Did our intelligence agencies “fail” in 2003, when, according to David Kay, even Saddam’s Republican Guard believed Iraq possessed the weapons? Or did they “fail” in the 1990’s when the government of the United States regarded the CIA, and spying, and human intelligence, and Iraq as one big pain that should just go away?
Hizballah killed well over 200 servicemen in the two Lebanon attacks of 1983. The blind sheik, and bin Laden after him, promised their adherents that a reprise or two of such “operations” would surely induce the Americans to cut and run from the Persian Gulf. Although we did not cut and run, we did stand by as Saddam Hussein put down a revolt we had incited with the materiel we let him keep. When Saddam tried to assassinate the first President Bush and when he expelled the UN inspectors, we lobbed a few missiles at useless targets—just as we did when bin Laden obliterated our embassies in Africa. In response to the Cole bombing, we did nothing.
Bin Laden struck us repeatedly in the eight years leading up to September 11. From the thousands in al Qaeda’s swelling international ranks, we plucked about 40 and indicted them, bathing them in all the rights of American defendants, and arming them with information from our intelligence files to prepare their defenses. One of these, Mohammed Daoud al-‘Owhali, had killed more than 200 people by helping to drive a car bomb to the entrance of our embassy in Nairobi, and later confessed. Al-‘Owhali was a soldier in a war on America, probably among the most effective ever. He was held not as a prisoner of war but as a criminal defendant, questioned not by the CIA but by FBI agents, who actually tried to give him Miranda warnings. When he was given a civilian trial, a U.S. judge initially ordered his confession suppressed—which would nearly have guaranteed his acquittal—because he had not been advised of his right to have an American defense lawyer present: a right that, since he was in the custody of Kenya, he did not have. The judge later relented, but only after issuing an opinion holding that foreign terrorists who attack America overseas should be accorded the benefits of the constitutional system it is their mission to destroy.
Was September 11 the worst intelligence failure in our country’s history? Or was it, rather, a national failure, the failure of a country that allowed its sense of decency to overwhelm its instinct for survival and that effectively convinced its enemies that they could strike with impunity?
The problem with our intelligence apparatus, to repeat, is that we went on a national nap for over two decades. If an entity is systematically warped and mismanaged for 20 or 30 years—not by a single agency director or American President, but by a philosophy—it cannot be fixed overnight. You cannot wake up on Monday and say, “We need more informants,” and expect to have them embedded and reporting by the close of the business day. If those lobbying for quick fixes to the intelligence mess do not appear to understand this, might it be because they do not want anyone to start probing whose mess it actually is?
This is not to say that the U.S. intelligence apparatus needs fundamental restructuring. In my opinion, it does not. Instead, its primary needs are, first, time to reverse a quarter-century of sloth, and, second, adequate resources to build a new human-intelligence network. Beyond that, a few other things need to happen, but it is here especially that pessimism sets in.
Although there is no need to restructure the CIA and FBI, the division of labor between them must take account of new realities. Without losing the benefits of rivalry, it is imperative to eliminate the structural barriers that, assuming they ever made sense, make none now. In particular, in a national-security investigation, the overriding assumption must be that we are dealing not with potential criminals presumed innocent but with foreign enemies who must be brought to heel. This means that the CIA must be able to follow the trail of its intelligence into the U.S.
In short, I am proposing that the CIA be permitted to work in the United States against those who have been colorably associated with foreign powers, including terrorist groups. A number of safeguards can be put in place to assure Americans that we have not authorized Big Brother to run amok. In addition to requiring that the FBI be given notice and periodic updates, we could mandate that the CIA obtain authorization within 72 hours of the start of domestic surveillance.
My own preference is that this approval come from a responsible executive-branch official rather than from the courts. The FISA model, in my view, violates the principle of separation of powers, gets courts (which have no institutional expertise in, or ready access to, intelligence) into the business of micro-managing national security, discourages agents from pursuing investigations essential to public welfare, and confers upon enemy operatives benefits they should not have. Still, given that FISA is not going away, I would rather have a requirement to obtain FISA court authorization than a continuation of the outdated system in which, while al Qaeda can freely cruise from Peshawar to Peoria, the CIA gets turned away at the border.
Complementing this change, the FBI and the CIA should continue their increasingly effective cooperation outside the United States, with two caveats. The first is that the CIA (and the Defense Department) should be in the lead, the FBI in a secondary role except when the executive branch determines it is in our national interest to extradite to our criminal-justice system a terrorist held by a foreign sovereign. The second is that, the targets in this war being enemy combatants and not criminal suspects, they should not get Miranda warnings, American constitutional protections (except minimal due process, which our government must always accord), or lavish access to our sensitive files. Instead, they should be captured, held for however long active hostilities last, squeezed (humanely) for information, and, if they have violated the laws of war, given military tribunals.
Other commonsense steps to promote competent intelligence-collection were incorporated in the Patriot Act, enacted six weeks after the September 11 attacks. This act, however, has come under blistering assault; so vicious has the campaign been that sensible Democrats like Senators Dianne Feinstein and Joseph Biden have been moved to join their voices to those of President Bush and Attorney General John Ashcroft in the act’s defense. But it may be too little, too late: there are now more than a half-dozen proposals making their way through Congress seeking rollbacks or repeal.
The Patriot Act’s intelligence improvements were vital, and nowhere more so than in the area of information-sharing. It dismantled the pernicious FISA firewall that prevented agents from pooling information. It authorized intelligence agents who were conducting FISA surveillance to “consult with federal law-enforcement officers to coordinate efforts to investigate or protect against” terrorism and other hostile acts. In addition, the act made it easier to obtain surveillance authorization, scotching the requirement that agents show that foreign counterintelligence was the “primary purpose” for their application in favor of the less burdensome certification that it was a “significant purpose.”
But it is these crucial improvements that have come under greatest fire. First, in 2002, the FISA court itself took umbrage at Congress’s demolition of the firewall and the (judicially invented) “primary purpose” test. Fortunately, the court’s attempt to reestablish the suicidal status quo ante was blocked. Next, however, an amalgam of libertarian Republicans and anti-Bush Democrats has vowed to block any extension of the bill’s crucial provisions, which are currently scheduled to “sunset” on December 31, 2005 unless extended or made permanent by new legislation.
This bipartisan Senate cabal (led by Democrats Patrick Leahy, Richard Durbin, and Harry Reid and Republicans Larry Craig and John Sununu) wants not only to terminate the FISA sharing provisions but to end the sharing of grand-jury information; to restrict the information that intelligence agencies may obtain from communications-service providers (the same kind of information long available to criminal investigators probing health-care fraud and gambling); and effectively to destroy the valuable “sneak-and-peek” search warrant (another longstanding tool in ordinary criminal investigations) that allows agents, with court approval, to search a location for intelligence purposes but not to seize anything, thus keeping the targets unaware. No doubt, the next time something goes boom, these Senators and their myriad sympathizers will be among the first to wail about unconnected dots.
A political class that appreciated the stakes involved would not indulge in this sort of recklessness. It would not hasten to dub every episodic setback an intelligence failure without asking searchingly whether we have set our agencies up to fail. It would have the necessary perseverance, through the inevitable torrent of catcalling, to retrace a quarter-century of missteps. And it would construct its remedies on the basis of a correct diagnosis of the disease. Right now, when we need it most, this is not the political class we have.
The Intelligence Mess: How It Happened, What to Do About It
Must-Reads from Magazine
A life well lived.
Charles Krauthammer made people understand their own thoughts. It was Charles who collated the various strands of Ronald Reagan’s foreign policy and codified them as the Reagan Doctrine in a Time Magazine essay in 1985. He did the same with the Bush Doctrine 16 years later—and his codification played a role in how Bush himself came to formulate his approach to the world following 9/11. And in 2009, Charles codified the Obama Doctrine as well, although not by that name, in a speech he turned into one of the great articles of our time, “Decline Is a Choice.” I was there when he delivered that speech and rushed up to him to ask that he allow me to publish it in COMMENTARY, but I was too late; he had already promised it to The Weekly Standard.
I couldn’t really object because, 14 years earlier, I had been part of the trio (with Bill Kristol and Fred Barnes) who had recruited him to come write for the Standard. I don’t think I was the reason he did so and I’m not sure I conducted myself in a way that helped our case. I was not then and am not now easily intimidated, but I always found Charles particularly intimidating. The early going at the Standard did little to ease that sense of intimidation. At early editorial meetings, he seemed particularly eager to challenge my ideas for articles and to make me defend them; he had come to know me primarily as the brother of his wife Robbie’s best friend Rachel and as the brother-in-law of his close friend Elliott and had no independent reason to think I had any particular business running a magazine or serving as his editor, which I would do.
You didn’t edit Charles, though. He edited himself. Over and over again. His work would come in from an assistant and be revised continually until the moment of publication. Expressions of frustration about the late hour nearing the time we had to send pieces off to the printer were greeted on the phone with stony silence. Any complaint to one of his assistants (Rich Lowry was one) generated what seemed to a kind of silent terror that crackled through the cables. He was civil, but not necessarily pleasant, in these moments.
Charles existed so apart from his quadraplegic disability in the minds and experiences of those of us who knew him—because of his willed insistence that it be so, a willed insistence that was all the more powerful because it was unspoken—that any anger I might have felt at the imposition of his writerly arrogance seemed entirely permissible … until the moment that I remembered. I would remember he could not put pen to paper. I would remember he wrote by dictating. I would remember it was a goddamned astonishing fact of facts that he could do any of this, let alone do it with such easy brilliance. Think of it. He read widely and paid attention to everything—a man who had some difficulty turning a page. He wrote weekly, this man who could not write.
How did he? He told me once that when he did rounds as a resident at Mass General, the hospital had a primitive voice-control system in which he and his colleagues would phone in their notes on patients. The system would start when they began speaking, but if they paused or said um or got lost in their thoughts, it would shut down and hang up on them. And so they would have to do it again. Charles, of course, couldn’t take notes. He had to dictate off the top of his head. Because this weirdly technical aspect of his medical training taught him how, he became a fluent dictater of words, and the only person I ever knew who could make one of those early computer voice thingies called Dragon work to his advantage.
Anyway, Charles liked the Standard, and he like the work we did, and when I left editing there, he wrote me the nicest letter I have ever received. It was not a necessary gesture. I didn’t expect it and he was in no way obliged to write it, but write it he did. It was the first act of pure kindness he had ever shown me, and it began a friendship—a very distant friendship, but a friendship nonetheless—that would last two decades. Over time he would share bits and pieces of the way he was compelled to live. He told me that the year Ford came out with the van he was able to drive was the greatest liberation of his life. He did love driving that van—and drove it with frightening flourish.
He had been one of the first people I met when I came to Washington looking for a possible job at the tail end of college. Martin Peretz, the editor of The New Republic and the man who had helped turn Charles from a Mondale speechwriter into a magazine writer, had invited me to lunch with the two of them at the Palm. I had been reading him for a while, and had no idea he was wheelchair-bound. The only note taken of it was that Marty occasionally offered to help Charles with his straw, or to cut a piece of gristle off the steak that had already been sliced thin for him.
A grandee of Washington at the time—I don’t remember who, maybe Lee Hamilton or the head of Brookings or some such—stopped by the table to complain about Israel. That day the Jewish state had annexed the Golan Heights, an act that was taken to be very bad by the conventional wisdom of Washington grandees then and now.
“Please tell me even you have no defense for this,” the grandee told Marty Peretz.
“Well, I’m no fan of Begin, and I’m sure he could have done this better,” Marty said, “but there are good strategic reasons for such a move, of course.”
And Charles said, “Israel does what Israel has to do, just like the United States.”
As I said, he made people understand their own thoughts. Every week. For decades. That day, for the first time, he made me understand mine. After hundreds of other such occasions, reading him in print and listening to him during his tenure as the most unexpected of TV stars, I can say I’m not sure anyone in my lifetime has ever done that better. It is a key role of the intellectual explicator, which is what Charles was nonpareil—to help you understand what you think.
He was the most extraordinary person I have ever known, and I have been blessed to know many. We roasted Charles a few years ago at our annual fundraiser. Of course, no one could think of a bad thing to say about him. He said bad things about us. They were hilarious, because that’s the other thing he was—funny. Very, very, very funny. We’ll release video of it over the coming days.
There is more to say—about Charles as a Jew, about Charles as a brilliant social commentator, and about Charles as a medical miracle. For that he was. He was a quadriplegic who lived to the age of 68—and died not of complications from his condition but from cancer. He told Bill and Fred and me back in 1995 that he did not know how much longer he had to live and he needed to earn as much as he could to ensure Robbie and his son Daniel were provided for if the end came unexpectedly. He lived for 23 years after that. He wrote a book that sold a million copies. People flocked to him at personal appearances as though he were a Beatle.
Has anyone ever done more with the life God handed him, or played a bad hand as astoundingly as Charles did?
He did not believe in God, but if there is a God and there is a heaven, I hope Charles is playing basketball right now and cracking wise with the wisest of men, for he was among the greatest-souled of men. Baruch Dayan Emet. And may Robbie and Danny be comforted among the mourners of Zion and Jerusalem.
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
Last Friday, the New York Times revealed that a lawsuit targeting Harvard University claims the school has systematically discriminated against minorities. That is, one particular minority. The school, it was alleged, has handicapped Asian-American students. Otherwise, they’d have to accept too many qualified Asian-Americans. For a peculiar type of activist for social equality, this was the good kind of prejudice–the kind that privileges accidents of birth over individual merit and achievement. Or, in the soft, docile Newspeak that suffices to comfort the enlightened elites charged with keeping the deserving down: “racial balancing.”
Harvard has objected to the allegations and provided statistics that purport to show that no negative racial discrimination exists. But many of those who you might expect to defend this elite institution are, in fact, comfortable with negative discrimination, even if the victims of that process are minorities themselves.
That’s the logic evinced by Minh-Ha T. Pham, a media studies professor at Brooklyn’s Pratt Institute and, as her bio prominently notes, a parent of a student in New York City’s school system. That note is important—more important than her background as a scholar of Asian-American studies because her argument in the New York Times is that her child deserves to be disadvantaged in the name of social leveling.
You see, New York City Mayor Bill de Blasio has introduced a plan to depopulate the city’s most prestigious high schools of the disproportionately high number of Asian students in the hopes of privileging more black and Latino students who otherwise cannot compete. Asian-American parents are, quite understandably, outraged by the naked effort to punish their hard work and rob their offspring of all the opportunities their work should, by rights, afford them. Not Professor Pham, though. Her eyes are wide open.
Pham argues that de Blasio’s plan to reserve seats at prestigious high schools for students who score below the threshold for admittance on a standardized test—and, ultimately, to eliminate the test altogether—“isn’t anti-Asian, it’s anti-racist.” But she appears to conflate racism with interclass disparities. Pham even notes that success on standardized testing can be a reflection of the resources some parents are or are not able to devote to their children’s’ study. For some on the left, the distinctions between racism and classism are fairly blurred, so Prof. Pham may not see the confusion her argument inspires among the uninitiated. Nor does she tackle a 2016 mayor’s office report, which found that New York City’s Asian-American population has the city’s highest poverty rate. Whether it’s Harvard University or Stuyvesant High School, these are often first-generation students who have seen what they’ve worked for stolen from them because they were simply too successful in the endeavor.
Pham goes on to preen about how all schools in New York City should be “elite schools,” a fluffy sentiment that, in practice, renders all the institutions she’s disparaging equally bad. She adds that Asian-Americans have not suffered from the kind of racism that black Americans have historically endured and with which they still struggle. “[T]oo many Asians have chosen to preserve the status quo by buying into racism against blacks and the white supremacist system built on it,” Pham laments. Therefore, she concludes, Asian Americans should commit to “fighting” the system, even if that means passively accepting the back seat on the bus.
At this point, we need to be reminded that the controversy here isn’t over whether American minorities deserve to benefit from positive social leveling but whether qualified Asian-Americans are benefiting too much from meritocracy. Professor Pham has managed to erect an elaborate intellectual construct to convince her of the righteousness of her view, but she will probably find it hard to get support beyond her overeducated peer group. If her problem is that students acculturated in Asian immigrant households are simply better prepared for standardized tests, and that eliminating those tests might help level the playing field, that’s one thing. But Pham’s argument sprawls and contains attacks on all disparities. From racial disparities to economic disparities to qualitative disparities; her problem doesn’t seem to be inequality of opportunity but the fact that tiered and hierarchical societal institutions exist in the first place.
Pham is not just arguing against the ethos at the heart of the American idea; as the outraged Chinatown-based activist Karlin Chan said, de Blasio’s plan “attacks the immigrants’ dream of bettering their children.” She is arguing against human nature itself. “[S]ome Asian-American parents in New York are protesting this proposal,” Pham laments. “They are on the wrong side of this educational fight.” One of biology’s most powerful overriding genetic imperatives is the desire to create the most optimal conditions for one’s offspring. Not everyone can reason themselves into believing that depriving their children of the opportunities that may be their due is a necessary sacrifice to the arbitrary diktats of social justice. Where would this country be if they could?
For the last 300 years or so, the most fundamental distinction among Western political factions has been between those who think that mankind can be perfected and those who do not. Professor Pham believes that reason should trump biology, in this case, even if it leaves her progeny worse off. There is a reason that those who believed in humanity’s perfectibility—from the Jacobins to the Bolsheviks—all resorted to the compelling power of the state to impose their dogma. They have rationalized themselves into an entirely irrational position.
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
Podcast: Battles at the border and in the UN.
Is the crisis the Trump administration inaugurated over the separation of children from their border-crossing families over? Or will the press and Democrats pursue this story even after at the risk of exposing the systemic flaws in the country’s immigration system? Also, the U.S. withdraws from the United Nations Human Rights Council, and good riddance.
Don’t forget to subscribe to our podcast on iTunes.
Choose your plan and pay nothing for six Weeks!
Radicalism and self-injury.
As a candidate, Donald Trump promised to be uncompromising when it came to immigration. For the most part, he has delivered. An executive order that restricted refugee intake and access to temporary visas in the first days of his administration sparked a wave of popular unrest, but the outrage subsided as Trump’s assaults on America’s permissive immigration regime became routinized. Only when Trump began breaking up the families of asylum seekers did the powerful public aversion we saw with the introduction of the “travel ban” again overtake the national consciousness. The abuse was so grotesque, the victims so sympathetic, and the administration’s insecurity so apparent that it broke the routine.
Opponents of this administration’s “zero tolerance policy” for border crossers and some asylum seekers currently have the upper hand. But as the debate over what to do next heads to Congress, where the mundanities of a legislative fix will come to dominate the national conversation, the liberal-activist wing risks sacrificing its sympathy. Such activists have convinced themselves that this is an extreme situation that requires extreme measures in response. Down that road lies marginalization and, ultimately, defeat.
On Tuesday night, Homeland Security Sec. Kirstjen Nielsen went out to dinner at a Mexican restaurant, and it was deemed by activists and reporters to be a galling provocation that could not stand. Activists descended on the restaurant, shouting “If kids don’t eat in peace, you don’t eat in peace!” Reporters marveled at Nielsen’s gauche “optics,” and even speculated that her choice of venue was a subtle effort by the White House to bait their opponents into an overreaction (as if baiting were necessary). Nielsen was forced to leave the restaurant.
This is the kind of mania that can only afflict those hysterical enough to disregard the fact that Mexicans no longer make up the majority of the illegal population in America, and that most border crossers travel north from violence-plagued “Northern Triangle,” which consists of Honduras, El Salvador, and Guatemala. That kind of reaction from activists in and out of journalism is understandable—a policy that amounts to state-sponsored child abuse is a terrible injustice—but it is also self-defeating.
The logic that led to Nielsen’s ordeal is the same logic that has convinced some on the radical left to endorse the outing of otherwise anonymous U.S. Immigration and Customs Enforcement officers in public fora. Activists on Tuesday night trolled through the online professional network LinkedIn to identify ICE officers, track where they live, and direct the most aggrieved of protesters to make their lives miserable. Online administrators had the presence of mind to suspend these users and scrub the web of their work, but those who want that information know where to get it. And this may not be a harmless activity. A popular activist Twitter account promoting the defunct leftist protest movement “Occupy Wall Street” posted an infographic on Tuesday glamorizing the murder of ICE agents for its more than 200,000 followers. Anyone of sound mind would ignore these incitements to radicalism, but it only takes one.
Those who are attracted to these tactics justify them as a necessarily extreme response to extremism. That might be explicable if the same tactics were not used to make Federal Communications Commission Chairman Ajit Pai’s life miserable when the left became convinced that a two-year-old supervisory regulation allowing Internet service providers to privilege content providers was a blow to the foundations of the republic.
In January, when the FCC approved a plan to phase out “Net Neutrality” regulations, the left determined that the only reasonable response was unreasonableness. HBO’s John Oliver mobilized his viewers to bombard the FCC’s website with comments. Some of those commenting took it upon themselves to threaten the murder of the chairman’s family. “Resistance” groups began putting literature up around Pai’s neighborhood accusing him of criminal abuses. They held vigils in his driveway, held up signs invoking his children by name, bombarded his house with pizza deliveries he never ordered, and phoned in bomb threats that cleared out the FCC’s offices. They “come up to our front windows and take photographs of the inside of the house,” Pai told the Wall Street Journal last May. “My kids are 5 and 3. It’s not pleasant.”
It seems as if conflating the conduct of public and private life holds greater and greater appeal for a certain segment of the left. The attention it generates ensures that it will become a regular feature of protest movements in the Trump era. What’s more, the targets of this tactic suggest that the left will make no distinction between irritants that offend liberal sensibilities and those things that are truly obscene. That’s a slippery slope, and traveling down it sap the left of the sympathy it needs from the general public. In making Trump appointees and their families the targets of personal harassment, Trump’s opponents are discrediting themselves more than they are shaming anyone in the White House.
A dangerous idea makes a comeback.
The word “ethics” appears prominently in the biographies of the authors who co-wrote a recent Washington Post op-ed lamenting the “taboo” associated with “talking about overpopulation.” Frances Kissling is the president of the Center for Health, Ethics, and Social Policy. Peter Singer is a professor of bioethics at Princeton University. Only Jotham Musinguzi, the “director general of Uganda’s National Population Council,” omits “ethics” from his bio. That is just as well, because the Malthusian views promulgated in the piece are anything but ethical.
Inauspiciously, the authors begin by applying a coat of gloss to Paul Ehrlich’s 1968 book The Population Bomb, which they note had a “major impact” on public policy but “spurred a backlash” that rendered discussion of its thesis “radioactive.” That backlash was only just: Ehrlich’s claims were dead wrong.
Ehrlich claimed that the Earth had a finite “carrying capacity,” and that its limits were about to be tested. He claimed that mass starvation was imminent; hundreds of millions would die. Neither the first world nor the third would be spared; the average American lifespan would decline to just 42 by 1980. Ehrlich continued to make apocalyptic predictions after his book became a sensation. “Most of the people who are going to die in the greatest cataclysm in the history of man have already been born,” he wrote in 1969. A year later: “The death rate will increase until at least 100-200 million people per year will be starving to death during the next ten years.” Between 1980 and 1989, most of the Earth’s population, including over one-third of all Americans, would die or be murdered in what he grimly dubbed “the Great Die-Off.” As recently as this year, Ehrlich—who still teaches at Stanford University—said that civilizational collapse remains a likely prospect and that the chief shortcoming of his most famous book was its failure to invoke the modern progressive Trinity: feminism, anti-racism, and inequality.
Our WaPo ethicists don’t tackle any of this. Indeed, they favorably observe that Ehrlich’s warnings rendered family planning a necessity, lest wealthy nations be forced to withhold food aid from the developing world and thereby induce “necessary and justifiable” chaos and starvation. Seriously.
Because population growth is not a problem in the developed world, where birthrates have declined below even replacement rates, population controllers tend to fixate on sexual habits in the developing world. The authors of this op-ed are no exception. They draw an almost always fallacious straight-line projection to conclude that, in the unlikely event that nothing changes between today and 2100, a population crisis will afflict a variety of Sub-Saharan African nations. To avert this crisis, they advocate promoting and supporting proper sexual hygiene, to which almost no one would object. But the authors’ core agenda isn’t the distribution of prophylactics. They seek to de-stigmatize abortion in the equatorial world, which is controversial for reasons that have nothing to do with faith. After all, it was The Population Bomb and its progenitors that lent renewed legitimacy to old arguments that inevitably result in targeting black and brown populations with sterilization and eugenics.
The title of Ehrlich’s book was lifted from a 1954 pamphlet issued by Gen. William Draper’s Population Crisis Committee, and it arguably inaugurated the overpopulation fad toward which pop intellectuals were drawn in the 20th Century. The effects this mania had on public policy were terrible. In the United States, population control hysteria led, in part, to the sterilization of “up to one-quarter” of the Native American women of childbearing age by 1977, according to Angela Franks’ 2005 book, Margaret Sanger’s Eugenic Legacy. “The large number of sterilizations began in earnest in 1966, when Medicaid came into existence and funded the operation for low-income people.” Thousands of Native American women in the early to mid-1970s were sterilized after signing consent forms that failed to comply with regulations.
With the assistance of the U.S. government and the International Planned Parenthood Federation, the Puerto Rican government operated a program of voluntary female sterilization for decades, but it was “voluntary” in the most perverse sense. Pressure from employers and public incentives united to “liberate” women from the drudgery of childbearing, leaving many without much of a choice in the matter. A 1965 survey found that one-third of Puerto Rican women in their prime childbearing years had undergone sterilization.
America’s minority populations were, however, a secondary concern to population controllers. It was, as ever, the so-called underdeveloped world that preoccupied the technocrats. Toward supposedly enlightened ends, the World Bank, working in quiet concert with the U.S. government, helped to advance Washington’s unstated goal of keeping population levels in the developing world down. “In some cases, strong direction has involved incentives such as payment to acceptors for sterilization, or disincentives such as giving low priorities in the allocation of housing or schooling to those with larger families,” a triumphant 1974 National Security Council memorandum read. As part of this campaign, American philanthropic institutions working with USAID reportedly distributed unsafe and untested contraceptive devices in the developing world. “USAID has been able to put some distance between itself and many of the more objectionable elements of its population agenda,” Population Research Institute’s James A. Miller wrote in a 1996 exposé.
For decades, a pseudoscientific religion that justified coercion and eugenics to achieve “optimal” population ratios quietly guided the development of Western public policy. In a comprehensive 2012 essay in The New Atlantis, Robert Zubrin demonstrated conclusively that 20th Century population control programs were “dictatorial,” “dishonest,” “coercive,” “medically irresponsible and negligent,” “cruel, callous, and abusive of human dignity and human rights,” and, perhaps most of all, “racist.” It was, in fact, their “neocolonial” aspects that led to a left-wing revolt against population controllers in the 1970s. But the left will never entirely divorce itself from the logic that led to population control, because it is Malthusian at heart. From peak Earth to peak oil, the left is possessed of a boundless pessimism. Its ideology is founded on the belief that life is a zero-sum game: all commodities are finite and can be distributed fairly only by enlightened elites. It will always underestimate humanity’s capacity to engineer itself out of a jam.
So, yes, overpopulation is a “taboo” subject because it has justified one of the most grotesque campaigns of industrialized human rights abuses the world has ever seen. In making a veiled argument in favor of abortion, our ethicists have inadvertently made their opponents’ case for them: reproductive controls targeting women in the developing world inevitably legitimize condescension, imperialism, and dehumanization. “The conversation about ethics, population and reproduction needs to shift from the perspective of white donor countries,” the authors conclude. And yet, as was ever the case, the “perspective of white donor countries” seems always to be the place from which dangerous ideas about the undesirable procreative habits of women in the equatorial world spring. Fifty years after the publication of a book that helped to legitimize the sterilization of millions in the developing world, that kind of noxious chauvinism remains a prominent feature of the population control movement.