With the opening of a new national museum, the evidence begs to be examined afresh.
On September 21, the National Museum of the American Indian will open its doors. In an interview early this year, the museum’s founding director, W. Richard West, declared that the new institution would not shy away from such difficult subjects as the effort to eradicate American-Indian culture in the 19th and 20th centuries. It is a safe bet that someone will also, inevitably, raise the issue of genocide.
The story of the encounter between European settlers and America’s native population does not make for pleasant reading. Among early accounts, perhaps the most famous is Helen Hunt Jackson’s A Century of Dishonor (1881), a doleful recitation of forced removals, killings, and callous disregard. Jackson’s book, which clearly captured some essential elements of what happened, also set a pattern of exaggeration and one-sided indictment that has persisted to this day.
Thus, according to Ward Churchill, a professor of ethnic studies at the University of Colorado, the reduction of the North American Indian population from an estimated 12 million in 1500 to barely 237,000 in 1900 represents a “vast genocide . . . , the most sustained on record.” By the end of the 19th century, writes David E. Stannard, a historian at the University of Hawaii, native Americans had undergone the “worst human holocaust the world had ever witnessed, roaring across two continents non-stop for four centuries and consuming the lives of countless tens of millions of people.” In the judgment of Lenore A. Stiffarm and Phil Lane, Jr., “there can be no more monumental example of sustained genocide—certainly none involving a ‘race’ of people as broad and complex as this—anywhere in the annals of human history.”
The sweeping charge of genocide against the Indians became especially popular during the Vietnam war, when historians opposed to that conflict began drawing parallels between our actions in Southeast Asia and earlier examples of a supposedly ingrained American viciousness toward non-white peoples. The historian Richard Drinnon, referring to the troops under the command of the Indian scout Kit Carson, called them “forerunners of the Burning Fifth Marines” who set fire to Vietnamese villages, while in The American Indian: The First Victim (1972), Jay David urged contemporary readers to recall how America’s civilization had originated in “theft and murder” and “efforts toward . . . genocide.”
Further accusations of genocide marked the run-up to the 1992 quincentenary of the landing of Columbus. The National Council of Churches adopted a resolution branding this event “an invasion” that resulted in the “slavery and genocide of native people.” In a widely read book, The Conquest of Paradise (1990), Kirkpatrick Sale charged the English and their American successors with pursuing a policy of extermination that had continued unabated for four centuries. Later works have followed suit. In the 1999 Encyclopedia of Genocide, edited by the scholar Israel Charny, an article by Ward Churchill argues that extermination was the “express objective” of the U.S. government. To the Cambodia expert Ben Kiernan, similarly, genocide is the “only appropriate way” to describe how white settlers treated the Indians. And so forth.
That American Indians suffered horribly is indisputable. But whether their suffering amounted to a “holocaust,” or to genocide, is another matter.
It is a firmly established fact that a mere 250,000 native Americans were still alive in the territory of the United States at the end of the 19th century. Still in scholarly contention, however, is the number of Indians alive at the time of first contact with Europeans. Some students of the subject speak of an inflated “numbers game”; others charge that the size of the aboriginal population has been deliberately minimized in order to make the decline seem less severe than it was.
The disparity in estimates is enormous. In 1928, the ethnologist James Mooney proposed a total count of 1,152,950 Indians in all tribal areas north of Mexico at the time of the European arrival. By 1987, in American Indian Holocaust and Survival, Russell Thornton was giving a figure of well over 5 million, nearly five times as high as Mooney’s, while Lenore Stiffarm and Phil Lane, Jr. suggested a total of 12 million. That figure rested in turn on the work of the anthropologist Henry Dobyns, who in 1983 had estimated the aboriginal population of North America as a whole at 18 million and of the present territory of the United States at about 10 million.
From one perspective, these differences, however startling, may seem beside the point: there is ample evidence, after all, that the arrival of the white man triggered a drastic reduction in the number of native Americans. Nevertheless, even if the higher figures are credited, they alone do not prove the occurrence of genocide.
To address this issue properly we must begin with the most important reason for the Indians’ catastrophic decline—namely, the spread of highly contagious diseases to which they had no immunity. This phenomenon is known by scholars as a “virgin-soil epidemic”; in North America, it was the norm.
The most lethal of the pathogens introduced by the Europeans was smallpox, which sometimes incapacitated so many adults at once that deaths from hunger and starvation ran as high as deaths from disease; in several cases, entire tribes were rendered extinct. Other killers included measles, influenza, whooping cough, diphtheria, typhus, bubonic plague, cholera, and scarlet fever. Although syphilis was apparently native to parts of the Western hemisphere, it, too, was probably introduced into North America by Europeans.
About all this there is no essential disagreement. The most hideous enemy of native Americans was not the white man and his weaponry, concludes Alfred Crosby, “but the invisible killers which those men brought in their blood and breath.” It is thought that between 75 and 90 percent of all Indian deaths resulted from these killers.
To some, however, this is enough in itself to warrant the term genocide. David Stannard, for instance, states that just as Jews who died of disease and starvation in the ghettos are counted among the victims of the Holocaust, Indians who died of introduced diseases “were as much the victims of the Euro-American genocidal war as were those burned or stabbed or hacked or shot to death, or devoured by hungry dogs.” As an example of actual genocidal conditions, Stannard points to Franciscan missions in California as “furnaces of death.”
But right away we are in highly debatable territory. It is true that the cramped quarters of the missions, with their poor ventilation and bad sanitation, encouraged the spread of disease. But it is demonstrably untrue that, like the Nazis, the missionaries were unconcerned with the welfare of their native converts. No matter how difficult the conditions under which the Indians labored—obligatory work, often inadequate food and medical care, corporal punishment—their experience bore no comparison with the fate of the Jews in the ghettos. The missionaries had a poor understanding of the causes of the diseases that afflicted their charges, and medically there was little they could do for them. By contrast, the Nazis knew exactly what was happening in the ghettos, and quite deliberately deprived the inmates of both food and medicine; unlike in Stannard’s “furnaces of death,” the deaths that occurred there were meant to occur.
The larger picture also does not conform to Stannard’s idea of disease as an expression of “genocidal war.” True, the forced relocations of Indian tribes were often accompanied by great hardship and harsh treatment; the removal of the Cherokee from their homelands to territories west of the Mississippi in 1838 took the lives of thousands and has entered history as the Trail of Tears. But the largest loss of life occurred well before this time, and sometimes after only minimal contact with European traders. True, too, some colonists later welcomed the high mortality among Indians, seeing it as a sign of divine providence; that, however, does not alter the basic fact that Europeans did not come to the New World in order to infect the natives with deadly diseases.
Or did they? Ward Churchill, taking the argument a step further than Stannard, asserts that there was nothing unwitting or unintentional about the way the great bulk of North America’s native population disappeared: “it was precisely malice, not nature, that did the deed.” In brief, the Europeans were engaged in biological warfare.
Unfortunately for this thesis, we know of but a single instance of such warfare, and the documentary evidence is inconclusive. In 1763, a particularly serious uprising threatened the British garrisons west of the Allegheny mountains. Worried about his limited resources, and disgusted by what he saw as the Indians’ treacherous and savage modes of warfare, Sir Jeffrey Amherst, commander-in-chief of British forces in North America, wrote as follows to Colonel Henry Bouquet at Fort Pitt: “You will do well to try to inoculate the Indians [with smallpox] by means of blankets, as well as to try every other method, that can serve to extirpate this execrable race.”
Bouquet clearly approved of Amherst’s suggestion, but whether he himself carried it out is uncertain. On or around June 24, two traders at Fort Pitt did give blankets and a handkerchief from the fort’s quarantined hospital to two visiting Delaware Indians, and one of the traders noted in his journal: “I hope it will have the desired effect.” Smallpox was already present among the tribes of Ohio; at some point after this episode, there was another outbreak in which hundreds died.
A second, even less substantiated instance of alleged biological warfare concerns an incident that occurred on June 20, 1837. On that day, Churchill writes, the U.S. Army began to dispense “ ‘trade blankets’ to Mandans and other Indians gathered at Fort Clark on the Missouri River in present-day North Dakota.” He continues:
Far from being trade goods, the blankets had been taken from a military infirmary in St. Louis quarantined for smallpox, and brought upriver aboard the steamboat St. Peter’s. When the first Indians showed symptoms of the disease on July 14, the post surgeon advised those camped near the post to scatter and seek “sanctuary” in the villages of healthy relatives.
In this way the disease was spread, the Mandans were “virtually exterminated,” and other tribes suffered similarly devastating losses. Citing a figure of “100,000 or more fatalities” caused by the U.S. Army in the 1836-40 smallpox pandemic (elsewhere he speaks of a toll “several times that number”), Churchill refers the reader to Thornton’s American Indian Holocaust and Survival.
Supporting Churchill here are Stiffarm and Lane, who write that “the distribution of smallpox-infected blankets by the U.S. Army to Mandans at Fort Clark . . . was the causative factor in the pandemic of 1836-40.” In evidence, they cite the journal of a contemporary at Fort Clark, Francis A. Chardon.
But Chardon’s journal manifestly does not suggest that the U.S. Army distributed infected blankets, instead blaming the epidemic on the inadvertent spread of disease by a ship’s passenger. And as for the “100,000 fatalities,” not only does Thornton fail to allege such obviously absurd numbers, but he too points to infected passengers on the steamboat St. Peter’s as the cause. Another scholar, drawing on newly discovered source material, has also refuted the idea of a conspiracy to harm the Indians.
Similarly at odds with any such idea is the effort of the United States government at this time to vaccinate the native population. Smallpox vaccination, a procedure developed by the English country doctor Edward Jenner in 1796, was first ordered in 1801 by President Jefferson; the program continued in force for three decades, though its implementation was slowed both by the resistance of the Indians, who suspected a trick, and by lack of interest on the part of some officials. Still, as Thornton writes: “Vaccination of American Indians did eventually succeed in reducing mortality from smallpox.”
To sum up, European settlers came to the New World for a variety of reasons, but the thought of infecting the Indians with deadly pathogens was not one of them. As for the charge that the U.S. government should itself be held responsible for the demographic disaster that overtook the American-Indian population, it is unsupported by evidence or legitimate argument. The United States did not wage biological warfare against the Indians; neither can the large number of deaths as a result of disease be considered the result of a genocidal design.
Still, even if up to 90 percent of the reduction in Indian population was the result of disease, that leaves a sizable death toll caused by mistreatment and violence. Should some or all of these deaths be considered instances of genocide?
We may examine representative incidents by following the geographic route of European settlement, beginning in the New England colonies. There, at first, the Puritans did not regard the Indians they encountered as natural enemies, but rather as potential friends and converts. But their Christianizing efforts showed little success, and their experience with the natives gradually yielded a more hostile view. The Pequot tribe in particular, with its reputation for cruelty and ruthlessness, was feared not only by the colonists but by most other Indians in New England. In the warfare that eventually ensued, caused in part by intertribal rivalries, the Narragansett Indians became actively engaged on the Puritan side.
Hostilities opened in late 1636 after the murder of several colonists. When the Pequots refused to comply with the demands of the Massachusetts Bay Colony for the surrender of the guilty and other forms of indemnification, a punitive expedition was led against them by John Endecott, the first resident governor of the colony; although it ended inconclusively, the Pequots retaliated by attacking any settler they could find. Fort Saybrook on the Connecticut River was besieged, and members of the garrison who ventured outside were ambushed and killed. One captured trader, tied to a stake in sight of the fort, was tortured for three days, expiring after his captors flayed his skin with the help of hot timbers and cut off his fingers and toes. Another prisoner was roasted alive.
The torture of prisoners was indeed routine practice for most Indian tribes, and was deeply ingrained in Indian culture. Valuing bravery above all things, the Indians had little sympathy for those who surrendered or were captured. Prisoners unable to withstand the rigor of wilderness travel were usually killed on the spot. Among those—Indian or European—taken back to the village, some would be adopted to replace slain warriors, the rest subjected to a ritual of torture designed to humiliate them and exact atonement for the tribe’s losses. Afterward the Indians often consumed the body or parts of it in a ceremonial meal, and proudly displayed scalps and fingers as trophies of victory.
Despite the colonists’ own resort to torture in order to extract confessions, the cruelty of these practices strengthened the belief that the natives were savages who deserved no quarter. This revulsion accounts at least in part for the ferocity of the battle of Fort Mystic in May 1637, when a force commanded by John Mason and assisted by militiamen from Saybrook surprised about half of the Pequot tribe encamped near the Mystic River.
The intention of the colonists had been to kill the warriors “with their Swords,” as Mason put it, to plunder the village, and to capture the women and children. But the plan did not work out. About 150 Pequot warriors had arrived in the fort the night before, and when the surprise attack began they emerged from their tents to fight. Fearing the Indians’ numerical strength, the English attackers set fire to the fortified village and retreated outside the palisades. There they formed a circle and shot down anyone seeking to escape; a second cordon of Narragansett Indians cut down the few who managed to get through the English line. When the battle was over, the Pequots had suffered several hundred dead, perhaps as many as 300 of these being women and children. Twenty Narragansett warriors also fell.
A number of recent historians have charged the Puritans with genocide: that is, with having carried out a premeditated plan to exterminate the Pequots. The evidence belies this. The use of fire as a weapon of war was not unusual for either Europeans or Indians, and every contemporary account stresses that the burning of the fort was an act of self-protection, not part of a pre-planned massacre. In later stages of the Pequot war, moreover, the colonists spared women, children, and the elderly, further contradicting the idea of genocidal intention.
A second famous example from the colonial period is King Philip’s War (1675-76). This conflict, proportionately the costliest of all American wars, took the life of one in every sixteen men of military age in the colonies; large numbers of women and children also perished or were carried into captivity. Fifty-two of New England’s 90 towns were attacked, 17 were razed to the ground, and 25 were pillaged. Casualties among the Indians were even higher, with many of those captured being executed or sold into slavery abroad.
The war was also merciless, on both sides. At its outset, a colonial council in Boston had declared “that none be Killed or Wounded that are Willing to surrender themselves into Custody.” But these rules were soon abandoned on the grounds that the Indians themselves, failing to adhere either to the laws of war or to the law of nature, would “skulk” behind trees, rocks, and bushes rather than appear openly to do “civilized” battle. Similarly creating a desire for retribution were the cruelties perpetrated by Indians when ambushing English troops or overrunning strongholds housing women and children. Before long, both colonists and Indians were dismembering corpses and displaying body parts and heads on poles. (Nevertheless, Indians could not be killed with impunity. In the summer of 1676, four men were tried in Boston for the brutal murder of three squaws and three Indian children; all were found guilty and two were executed.)
The hatred kindled by King Philip’s War became even more pronounced in 1689 when strong Indian tribes allied themselves with the French against the British. In 1694, the General Court of Massachusetts ordered all friendly Indians confined to a small area. A bounty was then offered for the killing or capture of hostile Indians, and scalps were accepted as proof of a kill. In 1704, this was amended in the direction of “Christian practice” by means of a scale of rewards graduated by age and sex; bounty was proscribed in the case of children under the age of ten, subsequently raised to twelve (sixteen in Connecticut, fifteen in New Jersey). Here, too, genocidal intent was far from evident; the practices were justified on grounds of self-preservation and revenge, and in reprisal for the extensive scalping carried out by Indians.
We turn now to the American frontier. In Pennsylvania, where the white population had doubled between 1740 and 1760, the pressure on Indian lands increased formidably; in 1754, encouraged by French agents, Indian warriors struck, starting a long and bloody conflict known as the French and Indian War or the Seven Years’ War.
By 1763, according to one estimate, about 2,000 whites had been killed or vanished into captivity. Stories of real, exaggerated, and imaginary atrocities spread by word of mouth, in narratives of imprisonment, and by means of provincial newspapers. Some British officers gave orders that captured Indians be given no quarter, and even after the end of formal hostilities, feelings continued to run so high that murderers of Indians, like the infamous Paxton Boys, were applauded rather than arrested.
As the United States expanded westward, such conflicts multiplied. So far had things progressed by 1784 that, according to one British traveler, “white Americans have the most rancorous antipathy to the whole race of Indians; and nothing is more common than to hear them talk of extirpating them totally from the face of the earth, men, women, and children.”
Settlers on the expanding frontier treated the Indians with contempt, often robbing and killing them at will. In 1782, a militia pursuing an Indian war party that had slain a woman and a child massacred more than 90 peaceful Moravian Delawares. Although federal and state officials tried to bring such killers to justice, their efforts, writes the historian Francis Prucha, “were no match for the singular Indian-hating mentality of the frontiersmen, upon whom depended conviction in the local courts.”
But that, too, is only part of the story. The view that the Indian problem could be solved by force alone came under vigorous challenge from a number of federal commissioners who from 1832 on headed the Bureau of Indian Affairs and supervised the network of agents and subagents in the field. Many Americans on the eastern seaboard, too, openly criticized the rough ways of the frontier. Pity for the vanishing Indian, together with a sense of remorse, led to a revival of the 18th-century concept of the noble savage. America’s native inhabitants were romanticized in historiography, art, and literature, notably by James Fenimore Cooper in his Leatherstocking Tales and Henry Wadsworth Longfellow in his long poem, The Song of Hiawatha.
On the western frontier itself, such views were of course dismissed as rank sentimentality; the perceived nobility of the savages, observed cynics, was directly proportional to one’s geographic distance from them. Instead, settlers vigorously complained that the regular army was failing to meet the Indian threat more aggressively. A large-scale uprising of the Sioux in Minnesota in 1862, in which Indian war parties killed, raped, and pillaged all over the countryside, left in its wake a climate of fear and anger that spread over the entire West.
Colorado was especially tense. Cheyenne and Arapahoe Indians, who had legitimate grievances against the encroaching white settlers, also fought for the sheer joy of combat, the desire for booty, and the prestige that accrued from success. The overland route to the East was particularly vulnerable: at one point in 1864, Denver was cut off from all supplies, and there were several butcheries of entire families at outlying ranches. In one gruesome case, all of the victims were scalped, the throats of the two children were cut, and the mother’s body was ripped open and her entrails pulled over her face.
Writing in September 1864, the Reverend William Crawford reported on the attitude of the white population of Colorado: “There is but one sentiment in regard to the final disposition which shall be made of the Indians: ‘Let them be exterminated—men, women, and children together.’ ” Of course, he added, “I do not myself share in such views.” The Rocky Mountain News, which at first had distinguished between friendly and hostile Indians, likewise began to advocate extermination of this “dissolute, vagabondish, brutal, and ungrateful race.”
With the regular army off fighting the Civil War in the South, the western settlers depended for their protection on volunteer regiments, many lamentably deficient in discipline. It was a local force of such volunteers that committed the massacre of Sand Creek, Colorado on November 29, 1864. Formed in August, the regiment was made up of miners down on their luck, cowpokes tired of ranching, and others itching for battle. Its commander, the Reverend John Milton Chivington, a politician and ardent Indian-hater, had urged war without mercy, even against children. “Nits make lice,” he was fond of saying. The ensuing orgy of violence in the course of a surprise attack on a large Indian encampment left between 70 and 250 Indians dead, the majority women and children. The regiment suffered eight killed and 40 wounded.
News of the Sand Creek massacre sparked an outcry in the East and led to several congressional inquiries. Although some of the investigators appear to have been biased against Chivington, there was no disputing that he had issued orders not to give quarter, or that his soldiers had engaged in massive scalping and other mutilations.
The sorry tale continues in California. The area that in 1850 was admitted to the Union as the 31st state had once held an Indian population estimated at anywhere between 150,000 and 250,000. By the end of the 19th century, the number had dropped to 15,000. As elsewhere, disease was the single most important factor, although the state also witnessed an unusually large number of deliberate killings.
The discovery of gold in 1848 brought about a fundamental change in Indian-white relations. Whereas formerly Mexican ranchers had both exploited the Indians and provided them with a minimum of protection, the new immigrants, mostly young single males, exhibited animosity from the start, trespassing on Indian lands and often freely killing any who were in their way. An American officer wrote to his sister in 1860: “There never was a viler sort of men in the world than is congregated about these mines.”
What was true of miners was often true as well of newly arrived farmers. By the early 1850’s, whites in California outnumbered Indians by about two to one, and the lot of the natives, gradually forced into the least fertile parts of the territory, began to deteriorate rapidly. Many succumbed to starvation; others, desperate for food, went on the attack, stealing and killing livestock. Indian women who prostituted themselves to feed their families contributed to the demographic decline by removing themselves from the reproductive cycle. As a solution to the growing problem, the federal government sought to confine the Indians to reservations, but this was opposed both by the Indians themselves and by white ranchers fearing the loss of labor. Meanwhile, clashes multiplied.
One of the most violent, between white settlers and Yuki Indians in the Round Valley of Mendocino County, lasted for several years and was waged with great ferocity. Although Governor John B. Weller cautioned against an indiscriminate campaign—“[Y]our operations against the Indians,” he wrote to the commander of a volunteer force in 1859, “must be confined strictly to those who are known to have been engaged in killing the stock and destroying the property of our citizens . . . and the women and children under all circumstances must be spared”—his words had little effect. By 1864 the number of Yukis had declined from about 5,000 to 300.
The Humboldt Bay region, just northwest of the Round Valley, was the scene of still more collisions. Here too Indians stole and killed cattle, and militia companies retaliated. A secret league, formed in the town of Eureka, perpetrated a particularly hideous massacre in February 1860, surprising Indians sleeping in their houses and killing about sixty, mostly by hatchet. During the same morning hours, whites attacked two other Indian rancherias, with the same deadly results. In all, nearly 300 Indians were killed on one day, at least half of them women and children.
Once again there was outrage and remorse. “The white settlers,” wrote a historian only 20 years later, “had received great provocation. . . . But nothing they had suffered, no depredations the savages had committed, could justify the cruel slaughter of innocent women and children.” This had also been the opinion of a majority of the people of Eureka, where a grand jury condemned the massacre, while in cities like San Francisco all such killings repeatedly drew strong criticism. But atrocities continued: by the 1870’s, as one historian has summarized the situation in California, “only remnants of the aboriginal populations were still alive, and those who had survived the maelstrom of the preceding quarter-century were dislocated, demoralized, and impoverished.”
Lastly we come to the wars on the Great Plains. Following the end of the Civil War, large waves of white migrants, arriving simultaneously from East and West, squeezed the Plains Indians between them. In response, the Indians attacked vulnerable white outposts; their “acts of devilish cruelty,” reported one officer on the scene, had “no parallel in savage warfare.” The trails west were in similar peril: in December 1866, an army detachment of 80 men was lured into an ambush on the Bozeman Trail, and all of the soldiers were killed.
To force the natives into submission, Generals Sherman and Sheridan, who for two decades after the Civil War commanded the Indian-fighting army units on the Plains, applied the same strategy they had used so successfully in their marches across Georgia and in the Shenandoah Valley. Unable to defeat the Indians on the open prairie, they pursued them to their winter camps, where numbing cold and heavy snows limited their mobility. There they destroyed the lodges and stores of food, a tactic that inevitably resulted in the deaths of women and children.
Genocide? These actions were almost certainly in conformity with the laws of war accepted at the time. The principles of limited war and of noncombatant immunity had been codified in Francis Lieber’s General Order No. 100, issued for the Union Army on April 24, 1863. But the villages of warring Indians who refused to surrender were considered legitimate military objectives. In any event, there was never any order to exterminate the Plains Indians, despite heated pronouncements on the subject by the outraged Sherman and despite Sheridan’s famous quip that “the only good Indians I ever saw were dead.” Although Sheridan did not mean that all Indians should be shot on sight, but rather that none of the warring Indians on the Plains could be trusted, his words, as the historian James Axtell rightly suggests, did “more to harm straight thinking about Indian-white relations than any number of Sand Creeks or Wounded Knees.”
As for that last-named encounter, it took place on December 29, 1890 on the Pine Ridge Reservation in South Dakota. By this time, the 7th Regiment of U.S. Cavalry had compiled a reputation for aggressiveness, particularly in the wake of its surprise assault in 1868 on a Cheyenne village on the Washita River in present-day Oklahoma, where about 100 Indians were killed by General George Custer’s men.
Still, the battle of Washita, although one-sided, had not been a massacre: wounded warriors were given first aid, and 53 women and children who had hidden in their lodges survived the assault and were taken prisoner. Nor were the Cheyennes unarmed innocents; as their chief Black Kettle acknowledged, they had been conducting regular raids into Kansas that he was powerless to stop.
The encounter at Wounded Knee, 22 years later, must be seen in the context of the Ghost Dance religion, a messianic movement that since 1889 had caused great excitement among Indians in the area and that was interpreted by whites as a general call to war. While an encampment of Sioux was being searched for arms, a few young men created an incident; the soldiers, furious at what they considered an act of Indian treachery, fought back fiercely as the guns surrounding the encampment opened fire with deadly effect. The Army’s casualties were 25 killed and 39 wounded, mostly as a result of friendly fire. More than 300 Indians died.
Wounded Knee has been called “perhaps the best-known genocide of North American Indians.” But, as Robert Utley has concluded in a careful analysis, it is better described as “a regrettable, tragic accident of war,” a bloodbath that neither side intended. In a situation where women and children were mixed with men, it was inevitable that some of the former would be killed. But several groups of women and children were in fact allowed out of the encampment, and wounded Indian warriors, too, were spared and taken to a hospital. There may have been a few deliberate killings of noncombatants, but on the whole, as a court of inquiry ordered by President Harrison established, the officers and soldiers of the unit made supreme efforts to avoid killing women and children.
On January 15, 1891, the last Sioux warriors surrendered. Apart from isolated clashes, America’s Indian wars had ended.
The Genocide Convention was approved by the General Assembly of the United Nations on December 9, 1948 and came into force on January 12, 1951; after a long delay, it was ratified by the United States in 1986. Since genocide is now a technical term in international criminal law, the definition established by the convention has assumed prima-facie authority, and it is with this definition that we should begin in assessing the applicability of the concept of genocide to the events we have been considering.
According to Article II of the convention, the crime of genocide consists of a series of acts “committed with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group as such” (emphases added). Practically all legal scholars accept the centrality of this clause. During the deliberations over the convention, some argued for a clear specification of the reasons, or motives, for the destruction of a group. In the end, instead of a list of such motives, the issue was resolved by adding the words “as such”—i.e., the motive or reason for the destruction must be the ending of the group as a national, ethnic, racial, or religious entity. Evidence of such a motive, as one legal scholar put it, “will constitute an integral part of the proof of a genocidal plan, and therefore of genocidal intent.”
The crucial role played by intentionality in the Genocide Convention means that under its terms the huge number of Indian deaths from epidemics cannot be considered genocide. The lethal diseases were introduced inadvertently, and the Europeans cannot be blamed for their ignorance of what medical science would discover only centuries later. Similarly, military engagements that led to the death of non-combatants, like the battle of the Washita, cannot be seen as genocidal acts, for the loss of innocent life was not intended and the soldiers did not aim at the destruction of the Indians as a defined group. By contrast, some of the massacres in California, where both the perpetrators and their supporters openly acknowledged a desire to destroy the Indians as an ethnic entity, might indeed be regarded under the terms of the convention as exhibiting genocidal intent.
Even as it outlaws the destruction of a group “in whole or in part,” the convention does not address the question of what percentage of a group must be affected in order to qualify as genocide. As a benchmark, the prosecutor of the International Criminal Tribunal for the Former Yugoslavia has suggested “a reasonably significant number, relative to the total of the group as a whole,” adding that the actual or attempted destruction should also relate to “the factual opportunity of the accused to destroy a group in a specific geographic area within the sphere of his control, and not in relation to the entire population of the group in a wider geographic sense.” If this principle were adopted, an atrocity like the Sand Creek massacre, limited to one group in a specific single locality, might also be considered an act of genocide.
Of course, it is far from easy to apply a legal concept developed in the middle of the 20th century to events taking place many decades if not hundreds of years earlier. Our knowledge of many of these occurrences is incomplete. Moreover, the malefactors, long since dead, cannot be tried in a court of law, where it would be possible to establish crucial factual details and to clarify relevant legal principles.
Applying today’s standards to events of the past raises still other questions, legal and moral alike. While history has no statute of limitations, our legal system rejects the idea of retroactivity (ex post facto laws). Morally, even if we accept the idea of universal principles transcending particular cultures and periods, we must exercise caution in condemning, say, the conduct of war during America’s colonial period, which for the most part conformed to then-prevailing notions of right and wrong. To understand all is hardly to forgive all, but historical judgment, as the scholar Gordon Leff has correctly stressed, “must always be contextual: it is no more reprehensible for an age to have lacked our values than to have lacked forks.”
The real task, then, is to ascertain the context of a specific situation and the options it presented. Given the circumstances, and the moral standards of the day, did the people on whose conduct we are sitting in judgment have a choice to act differently? Such an approach would lead us to greater indulgence toward the Puritans of New England, who fought for their survival, than toward the miners and volunteer militias of California who often slaughtered Indian men, women, and children for no other reason than to satisfy their appetite for gold and land. The former, in addition, battled their Indian adversaries in an age that had little concern for humane standards of warfare, while the latter committed their atrocities in the face of vehement denunciation not only by self-styled humanitarians in the faraway East but by many of their fellow citizens in California.
Finally, even if some episodes can be considered genocidal—that is, tending toward genocide—they certainly do not justify condemning an entire society. Guilt is personal, and for good reason the Genocide Convention provides that only “persons” can be charged with the crime, probably even ruling out legal proceedings against governments. No less significant is that a massacre like Sand Creek was undertaken by a local volunteer militia and was not the expression of official U.S. policy. No regular U.S. Army unit was ever implicated in a similar atrocity. In the majority of actions, concludes Robert Utley, “the Army shot noncombatants incidentally and accidentally, not purposefully.” As for the larger society, even if some elements in the white population, mainly in the West, at times advocated extermination, no official of the U.S. government ever seriously proposed it. Genocide was never American policy, nor was it the result of policy.
The violent collision between whites and America’s native population was probably unavoidable. Between 1600 and 1850, a dramatic surge in population led to massive waves of emigration from Europe, and many of the millions who arrived in the New World gradually pushed westward into America’s seemingly unlimited space. No doubt, the 19th-century idea of America’s “manifest destiny” was in part a rationalization for acquisitiveness, but the resulting dispossession of the Indians was as unstoppable as other great population movements of the past. The U.S. government could not have prevented the westward movement even if it had wanted to.
In the end, the sad fate of America’s Indians represents not a crime but a tragedy, involving an irreconcilable collision of cultures and values. Despite the efforts of well-meaning people in both camps, there existed no good solution to this clash. The Indians were not prepared to give up the nomadic life of the hunter for the sedentary life of the farmer. The new Americans, convinced of their cultural and racial superiority, were unwilling to grant the original inhabitants of the continent the vast preserve of land required by the Indians’ way of life. The consequence was a conflict in which there were few heroes, but which was far from a simple tale of hapless victims and merciless aggressors. To fling the charge of genocide at an entire society serves neither the interests of the Indians nor those of history.
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
Were American Indians the Victims of Genocide?
Must-Reads from Magazine
Last year, we asked experts to examine Candidate Trump’s policy proposals. This year, we’ve asked them to examine how he has executed these proposals in office.
On Trade By Scott Lincicome
Last year, economic, legal, and geopolitical calamity lurked in the shadows of almost every trade-policy promise made by presidential candidate Donald Trump. Eight months into the Trump presidency, those problems have—thankfully—not yet materialized. Instead, Trump trade policy has been a mixture of bluster, disappointment, relief, and uncertainty. This last category warrants close attention: In the coming months, Trump’s dangerous trade ambitions could remain in check, thus keeping a global trade system alive. Or politics, legal ambiguity, and Trump’s own emotional impulses could deal that system a fatal blow.
There is no doubt that President Trump has already done serious damage to the United States’ longstanding position as a world leader on trade policy, the American political consensus in favor of trade liberalization, and Republican views of trade and globalization. His constant vituperation has offended U.S. allies and trading partners, causing them to turn to Europe, Asia, or Latin America in search of alternatives to the once-welcoming and predictable U.S. market. He has accelerated (not started) the American retreat from the World Trade Organization, further wounding a multilateral trading system that was a U.S. invention—an invention that has, contrary to popular belief, served U.S. economic and foreign-policy interests well since the 1940s.
Trump’s day-one withdrawal from the Trans-Pacific Partnership—the flawed-yet-deserving Asia-Pacific trade agreement started by President Bush and ultimately signed by President Obama—has left vacuums in both Asia-Pacific trade and international economic law. TPP was far from perfect, but it was widely supported by U.S. trade and foreign-policy experts because of its economic and geopolitical benefits. The deal contained important new rules for 21st-century issues such as e-commerce, GMOs, and state-owned enterprises. Moreover, it would have provided small but significant benefits for U.S. workers and the economy, while cementing the United States’s influence in a region increasingly covered by China’s shadow. Now, TPP parties are working to complete a “TPP-11” deal that excludes the United States, while China is negotiating its own version of the TPP—the Regional Comprehensive Economic Partnership. And many of TPP’s novel provisions are being relitigated in contentious NAFTA renegotiations with Canada and Mexico (both TPP parties).
All of this is disappointing, but it’s probably survivable and hardly the fire and brimstone of the Trump campaign trail (hence, the relief). Trump has repeatedly threatened tariffs and other forms of dangerous unilateral protectionism, but economic, legal, and political realities have intervened. For example, when Trump promised new “national security” tariffs on steel and aluminum under Section 232 of the Trade Expansion Act of 1962, the opposition from Congress, business groups, strategic allies, NGOs, and even members of Trump’s administration was unrelenting. As a result, planned tariffs have quietly been shelved (for now). Other presidential threats have similarly come and gone without major action, giving market participants some heartburn but little long-term pain. Only in the opaque area of trade remedies—antidumping, countervailing duty, and safeguard measures—has there been a marked uptick in U.S. protectionism. But this is the result of long and technical administrative proceedings initiated by U.S. industries or unions that formally petitioned the government under relevant domestic law—hardly the wave-of-the-hand actions that Trump promised.
Some measure of relief is warranted, but we’re not out of the woods just yet. Indeed, in the last eight months, Trump has publicly threatened to
- block steel and aluminum imports for national-security reasons or bring new cases against semiconductors and ships, under the aforementioned Section 232;
- withdraw from the North American Free Trade Agreement and the U.S.-Korea FTA;
- slap tariffs on Chinese imports under Section 301 of the Trade Act of 1974 because of alleged Chinese intellectual-property-rights violations; and
- impose onerous new “Buy American” requirements on U.S. pipelines and government-funded infrastructure projects.
And those are just the public threats. Behind closed doors, Trump has reportedly considered enacting sweeping import restrictions under the International Emergency Economic Powers Act. The president reportedly yelled, “I want tariffs. Bring me some tariffs!” when told by his “globalist” advisers that legal and economic realities prevent him from imposing broad-based protectionism on a whim.
None of the threats on Trump’s wish list is officially off the table, and any one of them would have serious economic consequences: Steel tariffs alone would put more than 1.3 million American jobs at risk; NAFTA withdrawal could destroy 250,000 more; and several nations have promised immediate retaliation against American goods, services, or investment in response to Trumpian protectionism. Trump’s actions would also raise major legal issues. For example, the World Trade Organization’s broad, subjective “national security” exception wasn’t intended to be used as a get-out-of-jail free-card for steel tariffs, and a dispute over a member’s right to invoke it could imperil the multilateral trading system. Meanwhile, Trump’s withdrawal from a free-trade agreement without congressional consent would raise major constitutional questions as to whether the president had that authority and what would happen to the myriad U.S. tariffs and other commitments that were embedded in legislation and passed into law. Lawsuits over these and other issues surrounding presidential trade powers would throw billions of dollars of cross-border trade and investments into legal limbo.
The president’s unpredictability, political weakness, and clear affinity for protectionism, combined with ample (though ambiguous) legal authority to act unilaterally, mean that any one of his trade threats could still materialize in the coming months. The White House’s internationalists may have won the early battles, but the war will rage for as long as Trump is president. Continued vigilance and advocacy for the benefits of freer trade remain critical.
And congressional legislation clarifying and limiting the president’s trade powers might not be a bad idea either…just in case.
Click here to read what Scott Lincicome wrote about Candidate Trump and trade last year.
Scott Lincicome is an international trade attorney, adjunct scholar at the Cato Institute, and visiting lecturer at Duke University Law School. The views expressed herein are his own and do not necessarily reflect those of his employer.
On Taxes By James Pethokoukis
At some point in his first term, President Donald Trump will likely sign legislation that cuts taxes by some amount for somebody. This modest prediction is based less on reading the political tea leaves than understanding conservative politics. If any issue made the modern Republican Party, it was tax cuts. Not surprising, then, that candidate Trump promised big cuts for individuals and businesses. And with the GOP now holding the White House and Congress, failure to deliver is almost unimaginable.
Of course it’s almost equally unimaginable that the Trump tax cuts will at all resemble the ambitious plans devised by Trump advisers during the campaign. There were two of those blueprints. The first, rolled out September 2015, proposed lowering the top personal rate to 25 percent from the current 39.6 percent, and cutting the corporate rate to 15 percent from the current 35 percent. Along with other changes, including eliminating the alternative minimum tax and estate tax, this initial plan might have lowered annual government revenue by a whopping $1 trillion a year or more (even if one assumes much faster economic growth).
This was, in other words, more a fantasy proposal cooked up by Reagan-era supply-siders than a serious effort to reform the tax code without worsening our historically high federal debt. Indeed, Trump’s sole purpose in signing on to the plan may have been to win over that very same group, still influential among base voters. Trump himself talked little about the plan while on the hustings, especially compared with immigration, trade, and The Wall.
The Trump campaign’s second bite at the apple a year later was a scaled-back plan, but still a colossal one. Instead of losing a trillion bucks a year, maybe the government would be out just a half trillion or so. Again, since the plan was unaccompanied by spending cuts elsewhere in the budget, it was more a set of glorified campaign talking points than a serious proposal. And like the first, Trump didn’t talk much about it.
So after Trump’s shock election, there really was no realistic Trump tax plan. No worries, however, since there was a House Republican tax plan all ready to go, with an enthusiastic House Speaker Paul Ryan ready to push it hard through the lower chamber. It was an ambitious proposal but one within reality, especially with a bit of fiscal tweaking. That plan called for, among other things, lowering the top personal rate to 33 percent and the corporate rate to 20 percent, immediately expensing new capital investment, and expanding the child tax credit.
And more so than the Trump campaign plans, the House plan intended to reform the tax code, not just cut taxes. For example, it eliminated all personal itemized deductions other than mortgage interest and charitable contributions. The House plan also made a stronger attempt to pay for the tax through a border-adjustment tax and limiting business-interest deductibility. All in all, the plan cost a couple of trillion dollars over a decade, not assuming economic feedback. On such a dynamic basis, according to Tax Foundation modeling, the House plan would reduce 10-year revenues by just under $200 billion.
So if Republicans really wanted to make their plan revenue neutral, it was certainly doable through relatively minor changes, such as less dramatic corporate or personal rate cuts. Yet the plan would still be a massive improvement over the status quo, both in terms of encouraging more domestic investment and providing middle-class tax relief.
With a detailed plan at the ready and Republicans running Washington, it is easy to understand why many in the GOP thought it reasonable to predict that Trump would be signing a mega tax bill by August of this year, just as Ronald Reagan did in the first year of his first term. Reagan did it from his ranch in Santa Barbara, California. Maybe Trump would repeat the feat from his Trump Tower penthouse in Manhattan.
But that did not happen. Then again, very little of Trump’s ambitious domestic agenda has happened as planned. Repeal and replace was promised by Easter, leaving plenty of time to hash out the fine details of tax reform and move legislation through the House and Senate. But the GOP health reform was a long slog consuming valuable time, attention, and political capital. Also deserving blame was Trump’s inability to focus on pushing policy priorities rather than pounding political opponents on Twitter. As of now, it seems highly unlikely that significant tax reform will occur in 2017. And 2018 looks challenging as well.
Yes, Trump has provided more distraction than leadership on this issue. And trying to pass major legislation in a midterm year only adds to the political difficulties. But the biggest problem is that there is no tax-reform plan for Republicans to push.
What happened to the ready-to-serve House plan? It suffered from not being a fantasy. It acknowledged both political and policy constraints, something the populist president almost never does. For instance: the House plan tried to pay for the tax cuts—a political necessity to placate debt-hawk Republicans. That requires making somebody somewhere unhappy. Ryan knew that without such an effort, it would be extraordinarily difficult to reduce the corporate tax rate to anywhere close to 20 percent. But while exporters supported the border tax, importers hated it, complaining that it would raise costs. Nor was the Trump White House happy about axing business-interest deductibility.
Still, as problematic as those pay-fors were, the alternatives—limiting tax breaks for mortgages, 401(k)s, and state and local taxes—are equally if not more so. The state and local tax deduction is a case in point. Pushed hard by Republican leaders as the primary revenue generator to replace border adjustment, it seems unlikely to survive criticism from blue-state Republicans. Eventual legislation is likely to be a far smaller and less comprehensive bill than first envisioned—more cut than reform—with some temporary parts designed to satisfy congressional budget rules. Indeed, Senate budget writers cleared room for just a $1.5 trillion tax cut, and even that might be overly ambitious. Expect Trump and his people to call whatever passes a “down payment” on true tax reform. Pro-growth conservatives should call it a missed opportunity.
Click here to read what James Pethokoukis wrote about Candidate Trump and taxes last year.
James Pethokoukis is the DeWitt Wallace Fellow at the American Enterprise Institute. He is also an official CNBC contributor.
‘The Wall’ By Linda Chavez
“We’re going to build a wall. That wall will go up so fast, your head will spin.” Donald Trump made this promise on August 23, 2016, repeated it throughout his presidential campaign, and has reiterated it in tweets and at press conferences and rallies ever since. But the only spinning going on lately has been the president’s own efforts to assure his base that he will eventually build a wall, or a fence, or some barrier along the U.S. border with Mexico, except maybe for those areas that don’t need one or already have one. Oh, and someone will pay for it—preferably Mexico, as he promised—but if not, Congress, unless Democrats or even Republicans refuse to go along. A year after winning the presidency, Trump’s most ubiquitous pledge, The Great Wall separating the U.S. from Mexico, remains largely a figment of his imagination and evidence of his supporters’ gullibility.
No issue defined Trump’s campaign more viscerally than immigration, and on none was his position less ambiguous. Trump’s presidential record on immigration enforcement and policy, however, is decidedly more mixed. He continues to promise that construction of the wall is going to start soon: “Way ahead of schedule. Way ahead of schedule. Way, way, way ahead of schedule,” he said in February. But the cost, with estimates as high as $70 billion, and the sheer impracticality of erecting a solid barrier along 1,900 miles make little sense in light of recent trends in illegal immigration. Illegal immigration is at historically low levels today (roughly the same, in absolute numbers, as it was in the early 1970s) and has been falling more or less consistently since the peak in 2000, mostly because fewer people are crossing the border from Mexico. Apprehensions of Mexicans are at a 50-year low, as are all apprehensions along the southern border. Year-to-date in 2017, apprehensions at the Mexican border have dropped 24 percent compared with those in 2016, when a slight uptick occurred as more people tried to cross in advance of a feared Trump victory and border crackdown. The population of undocumented immigrants living in the U.S. is down as well and now stands at roughly 11 million, from a peak of 12.2 million in 2007; and two-thirds of these unauthorized immigrants have lived here a decade or longer. More Mexicans—whom Trump described as “bringing drugs. . . crime. They’re rapists”—are now leaving the U.S. than arriving. In 2013, for the first time since the 1960s, Mexico fell as the top source of immigrants to the U.S., behind both China and India.
Trump’s pledge to build a wall, of course, wasn’t his only promise on immigration, but he hasn’t lived up to his own hype in other areas either, which is a good thing. He said he’d end on day one the Obama administration’s Deferred Action for Childhood Arrivals (DACA), a program that provided temporary protection from removal for young people who arrived here illegally before age 16. Instead, Trump waited until September 5 to send his beleaguered Attorney General Jeff Sessions out to announce that DACA would end in six months unless Congress acted. Trump then almost immediately backtracked in a series of tweets and offhand statements. Polls show that large majorities of Americans, including some two-thirds of Trump voters, have no interest in deporting so-called Dreamers, half of whom came before they were seven years old and 90 percent of whom are employed and paying taxes. Trump’s own misgivings and the backlash over the policy’s announcement led him into a tentative deal with Democratic leaders Representative Nancy Pelosi and Senator Chuck Schumer in September to support legislation granting legal status for Dreamers who complete school, get jobs, or join the military. Trump’s most nativist supporters have already dubbed him “Amnesty Don” for even suggesting that Dreamers should be allowed to remain and gain temporary legal status, much less earn a path toward citizenship. But whether such legislation will make it through Congress is still uncertain. Similar bills have repeatedly passed one chamber and died in the other over the past 10 years, but the potential threat that the administration might begin deporting many of the 800,000 young adults who signed up for DACA should concentrate the minds of the Republican leadership to allow legislation to move forward. One of the complications in the House is the “Hastert Rule,” named after former Speaker Dennis Hastert, an informal agreement that binds the speaker from bringing a bill to the floor unless a majority of the majority party supports it.
To be sure, Trump’s rhetoric and his appointment of hard-line immigration restrictionists to posts in his administration have led to fear among immigrants, as have the administration’s erratic, irrational enforcement policies. Previous administrations, including Barack Obama’s, gave priority to detaining and deporting aliens convicted of serious crimes, but in one of his first executive orders and Department of Homeland Security memoranda, Trump broadened the priorities for detention and removal to include anyone even suspected of committing a crime, with or without charges or conviction. As a result, arrests for immigration offenses have increased under Trump and have swept up hundreds of individuals who pose no threat to safety or security, some picked up outside their children’s schools or when seeking court orders against domestic abuse. Actual deportations, on the other hand, are down slightly in Trump’s first eight months compared with the same period in Obama’s last year. This is largely because the overloaded system isn’t equipped for mass deportation. Trump promised to rid the country of a greatly exaggerated 2 million criminal aliens and “a vast number of additional criminal illegal immigrants who have fled or evaded justice.” But his boasting that “their days on the run will soon be over” has always been aimed less at promoting sensible immigration policy than at stoking nativist anger in pursuit of his own brand of identity politics. Trump’s America will be a less welcoming place for immigrants—legal as well as illegal—if Trump gets his way on proposed legislation to reduce legal immigration by half over the next decade. But labor shortages and an aging population make it unlikely that Trump’s efforts will succeed. The simple fact is that we need more, not fewer, immigrants if the economy is to grow. Building walls and deporting workers is exactly the wrong way to go about needed immigration reform, whether Trump and his hard-core base can admit it or not.
Linda Chavez is the president of the Becoming American Institute and a frequent contributor to Commentary.
On Infrastructure By Philip Klein
A massive infrastructure bill was supposed to be one of the early triumphs of President Trump’s administration. Instead, Trump’s inability to advance the ball on one of his signature issues has highlighted the lack of focus, inattention to detail, and difficulties working with Congress that are emblematic of his presidency to date.
The idea of rebuilding the nation’s infrastructure, though overshadowed by daily controversies during the wild 2016 campaign, wove together several elements of the Trump phenomenon.
His experience in building projects such as luxury hotels, resorts, skyscrapers, and golf courses became central to his argument that he had the skills required to get things done in Washington. By touting the economic benefits of infrastructure during his campaign, Trump also signaled that he was an unorthodox Republican, breaking with decades of conservative critiques of Keynesian stimulus projects. Trump also spoke of infrastructure in nationalist terms, integrating it into riffs about how the United States was constantly losing to China. “They have trains that go 300 miles per hour,” he said during the campaign. “We have trains that go: Chug. Chug. Chug.”
When Trump pulled off his election-victory upset, Washington insiders quickly focused on infrastructure as one issue on which he could get a legislative win and box Democrats into a corner. After all, could Democrats really resist passing a major policy priority that had eluded them when one of their own was in the White House?
In his Inaugural Address, Trump threw a jab at Bush-era Republicanism, declaring that the U.S. “spent trillions of dollars overseas while America’s infrastructure has fallen into disrepair and decay.” Going forward, he said, “America will start winning again, winning like never before.” He promised: “We will build new roads, and highways, and bridges, and airports, and tunnels, and railways all across our wonderful nation.”
Now in the fall of the first year of his presidency, any effort to advance infrastructure legislation has been drowned out by daily controversies involving White House intrigue, the investigation into Russian influence in the 2016 election, and Trump’s raucous Twitter feed. Congress, meanwhile, spent much of the year focused on repealing and replacing Obamacare.
This isn’t to say that the Trump administration didn’t try, in fits and starts, to push infrastructure. In May, with the release of his first budget, Trump included $200 billion in funding for infrastructure as the first step in his $1 trillion infrastructure initiative. He also released a six-page fact sheet outlining his vision for infrastructure, which remains the most detailed resource on his infrastructure goals.
The document, broadly speaking, argues that current infrastructure money is spent inefficiently. It proposes greater selectivity in using federal dollars for infrastructure investments that are in the national interest and recommends giving state and local governments more leeway over their own projects. It also calls for more public-private partnerships.
Specifically, the proposal would create a nongovernment entity to manage the nation’s air-traffic-control system. It would also support private rest stops, give states the ability to work with private companies to manage their toll roads, and streamline the environmental-review process. The proposal received little attention, as it was rolled out during a week when Russia hearings took center stage in Congress and Trump was traveling in Europe and the Middle East.
Such inattention was supposed to end in early June, when White House officials announced “Infrastructure Week.” This was a carefully orchestrated campaign in which Trump was supposed to deliver speeches and lead staged events to highlight different aspects of his infrastructure initiative. But during this week, Washington was captivated by the testimony of fired FBI Director James Comey, and Trump veered way off message in his speeches and on his favorite social-media platform.
He went on a Twitter tear. Trump attacked his own Justice Department for pursuing a “watered down” travel ban, took a shot at the mayor of London in the wake of a terrorist attack, unloaded on “fake news” outlets, and hit Comey as a liar. During a speech meant to make the case for both parties to get behind his infrastructure effort, Trump went off on a tangent, blasting Democrats as “obstructionists” on health care.
In truth, any hope of getting Democrats on board for the Trump infrastructure push had been fading even before this implosion. Liberals had already pressured lawmakers to pursue a policy of total resistance to Trump. But during Trump’s big policy push, Senate Minority Leader Chuck Schumer declared overtly that Democrats had no appetite for his infrastructure initiative due to its reliance on privatization.
Before long, the phrase “Infrastructure Week” had become a punch line—an ironic metaphor for a presidency gone off the rails.
Trump has made little progress on infrastructure since then, beyond issuing an executive order in August aimed at making the permitting process for building roads, bridges, and pipelines more efficient. But again, this announcement was overshadowed, as it came during the same news conference in which he blamed “both sides” for the violence in Charlottesville and complained about the slippery slope of removing the Robert E. Lee statue.
On the other hand, by striking a deal with Democratic leaders on the debt ceiling and negotiating with them on immigration, Trump has revived talk about the possibility that he could be ready to compromise with them to get infrastructure legislation passed as well. It is important to note, however, that in both cases—DACA and the debt ceiling—there was a ticking-time-bomb element that forced action. No such urgency exists when it comes to infrastructure.
From the perspective of a limited-government conservative, Trump’s inability thus far to negotiate a trillion-dollar federal infrastructure package with Democrats is nothing to shed tears about. But if we’re looking at the issue through the broader lens of whether or not Trump has been able to deliver on his ambitious campaign promises and make the transition from being a bombastic reality-television star to governing, it’s a case study in failure.
Philip Klein is managing editor of the Washington Examiner.
On NATO By Tod Lindberg
On the campaign trail, Donald Trump was unsparing in his disparagement of U.S. alliances. In a word, allies were freeloaders—complacent in their reliance on the United States to provide them security, contributing nothing like their “fair share” of the cost of their defense, and lavishing the dividend on their domestic needs. Maybe that was acceptable when they were flat on their backs after a war that left the United States on top, but now that they are prospering and the United States has pressing needs of its own, it’s time for the allies to pay up. He also mused about NATO being “obsolete.”
This was alarming (to put it mildly) to most American foreign-policy specialists—to say nothing of the reaction of U.S. allies. The postwar alliance structure in Europe has been the backbone of security on a continent where the United States fought two wars. The North Atlantic Treaty Organization underpinned the postwar revival of Western Europe and subsequently, after the collapse of the Warsaw Pact and the demise of the Soviet Union, of Central and Eastern Europe. The relevance of the alliance has gained renewed salience with Russia’s aggression against its neighbors, first in Georgia in 2008, then in Ukraine in 2014.
At the heart of the alliance is Article 5 of the Washington Treaty of 1949—the commitment of each member to regard an armed attack on any as an attack on all. In practical terms, the meaning of Article 5 is that American power provides a security guarantee for Europe, a commitment upheld and explicitly reiterated by U.S. presidents since Harry S. Truman. The treaty is binding, yet equally in practical terms, it is the American president whose commander-in-chief powers will dictate the response of the U.S. military to any attack—and by extension, the sincerity of his commitment determines the deterrent value of Article 5 against potential aggressors. Would a President Trump abrogate the U.S. commitment? Or hold it hostage to defense-spending increases by allies—perhaps even by demanding the payment of a much larger past-due bill, as the candidate suggested on at least one occasion?
In Asia, the biggest long-term challenge is the rise of China; the U.S. alliances with Japan, South Korea, Australia, and the Philippines (as well as the more complicated commitment enshrined in the Taiwan Relations Act) represent the underpinning of Pacific security. Would this, too, be up for grabs under Trump? Was “America First” shorthand for an isolationist retooling of U.S. relations with the rest of the world?
The short answer to these questions turns out to be no. Trump has no apparent intention to do away with U.S. alliance relationships, however cumbersome and expensive he perceives them to be, and he evinces no intention to try to replace the postwar security architecture with something new and different, whatever that might be. So what happened? Were his many critics sounding the alarm therefore wrong about his intentions? Did he change his mind? Is the question of alliances now settled?
Since Trump has taken office, alliance policy seems to have operated on two tracks within the U.S. government. The first track is the president’s own. He has continued to warn allies that they need to pay up—though his demands have moderated considerably, coalescing around the 2 percent of GDP that allies have pledged to spend on defense (though very few do). And although he has reaffirmed the U.S. Article 5 commitment on some occasions, on others when it would have been appropriate for him to do so, he has declined, apparently intentionally. Still, he has never repudiated the commitment. There seem to be two possibilities here: either a deliberate exercise in ambiguity, or incompetence and confusion of the kind his critics have long diagnosed.
I think the evidence points distinctly toward the former. That evidence is the second track of policy within the government. Vice President Mike Pence, Secretary of State Rex Tillerson, and Secretary of Defense James Mattis—as well as officials junior to them—have been on something close to a nonstop reassurance tour of U.S. allies and partners since the beginning of the administration. National Security Adviser H.R. McMaster has joined the chorus since he stepped in to replace the ousted Michael Flynn. Their message has been unambiguous: The United States stands by its security and alliance commitments, and allies must contribute more to collective defense. True, some allies continue to harbor doubts centered on the persona of Trump. Yet—therefore?—many are moving to spend more on defense.
Now, the simple fact is that Trump could order his Cabinet members and senior staff to desist from repeating the first half of their message—the reassurance. Trump might have had some resignations to cope with, but it is well within his power to issue such an edict, and he hasn’t done so. The most likely reason he hasn’t is that he has concluded that too much is riding on these alliances. To continue in this speculative vein, what Trump knew to be true about U.S. allies during the campaign season was that they weren’t contributing enough; that’s a message that Washington has been sending with little effect for decades. What he didn’t know on the campaign trail and has since determined is how central these alliances are to U.S. national security. U.S. alliances aren’t quite so fragile as some feared. The case for them, competently made by the likes of Mattis, must be compelling, including to the skeptic in chief.
It’s here that we may be getting a little lesson in the cunning of history. From his skeptical premise, Trump sparked a very broad debate over alliances. Senior officials of his administration have probably devoted more time and energy to making the public case for NATO and our Pacific alliances during his first 10 months in office than their predecessors did in the previous 10 years. The latter had taken the utility of alliances to U.S. national security as a given.
All this attention has had an effect on public opinion. But the effect has not been, as many feared, a groundswell of support for isolationist or anti-alliance sentiment. Just the opposite. For the past three years, the Chicago Council Survey has asked, “How effective do you think [maintaining effective alliances is] to achieving the foreign-policy goals of the United States?” In 2015, 32 percent of all respondents answered “very effective.” In 2016, the figure was 40 percent. In 2017? Forty-nine percent. Specifically on NATO, 69 percent say the alliance is “essential” to U.S. security, a slight increase from 65 percent in 2016 and well above the 57 percent who said the same when the Chicago Council first asked the question in 2002.
For the first time in the history of the survey, a majority of Americans, 52 percent, say they would support “the use of U.S. troops…if Russia invades a NATO ally like Latvia, Lithuania, or Estonia.” The Trump administration has had little to say about the Russian threat to the Baltics but a great deal to say about the danger of North Korea’s nuclear weapons and missile program. A year ago, 47 percent said they would favor “the use of U.S. troops…if North Korea invaded South Korea.” That was the view of 26 percent of Americans in 1990. Today, it’s what 62 percent think.
Finally, on the question of allies paying up, the survey asked which comes closer to the respondent’s views: “The United States should encourage greater allied defense spending through persuasion and diplomatic means” or “The United States should withhold its commitment to defend NATO members” until they actually spend more. Overall, 59 percent said persuasion and diplomacy; 38 percent (including 51 percent of Republicans) would put Article 5 at risk. Maybe I’m hearing things, but that sounds to me more like a warning to our allies to take seriously American insistence that they spend more on defense starting now than it does an abrogation of the commitments at the center of U.S. national-security strategy for 70 years.
Tod Lindberg is a member of the Chicago Council Survey’s foreign policy advisory board.
On Asia By Michael Auslin
Despite continued Russian threats in Eastern Europe and the lurking danger of an Iranian race to a nuclear bomb, it is Asia that has vaulted to the top of the national-security agenda. Barack Obama had warned Donald Trump that North Korea would be the major national-security threat he would face, and North Korean dictator Kim Jong Un has proved him right. Kim is on the threshold of fielding a reliable intercontinental ballistic missile (ICBM) that can reach U.S. territory in the Pacific and even the American homeland. He is within striking distance of achieving his family’s long-held dream of possessing the ultimate weapon. Not since 1994, when Bill Clinton initially ordered and then called back an air strike on Pyongyang’s nascent nuclear facilities, has the region seemed so close to war.
Beyond the Korean peninsula, Asia has arguably been Trump’s central foreign preoccupation since his entry into politics. He talked during his campaign about a 45 percent tariff on Chinese goods. And despite his noninterventionist affect, he began his transition phase by getting tough on China for its increasingly assertive actions during the Obama years, including the successful building and militarization of islands in contested waters in the South China Sea.
Then Trump retreated from his tough stance toward Beijing, initiating a period of seesawing between cooperation and confrontation and mixing together trade and economic concerns with security and diplomatic issues. His explicit linkage of the two, carefully separated by previous presidents, has been particularly unnerving to Beijing. China’s regime has warned of the risks of a larger trade war if Trump continues to threaten economic retaliation for disagreement on security issues. Of equal concern to Beijing has been his recent willingness to permit more frequent freedom-of-navigation operations by the U.S. Navy in the disputed South China Sea waters off the Spratly and Paracel Islands.
Trump’s initial hard line, including an unprecedented transition-period phone call to Taiwan’s president, put Beijing on its back foot. But his subsequent inconstancy has led to a reassertion of Chinese activism on economic and diplomatic issues. His withdrawal from the Trans-Pacific Partnership and general anti-free-trade stance have allowed Chinese President Xi Jinping to claim the mantle of global economic leadership—promoting free-trade alternatives and grandiose policies such as the “Belt and Road Initiative,” in which Xi has promised more than $1 trillion of infrastructure investment to link the world in a trading network centered in China.
In contrast, Trump’s relations with America’s Asian allies, particularly Japan and South Korea, have been surprisingly smooth. Again backing down from campaign rhetoric, Trump early on reaffirmed the importance of both alliances, and buried talk of making the two pay more for hosting U.S. forces on their territory. His bond with Japanese Prime Minister Shinzo Abe has been particularly close, and his conversations with South Korea’s new left-leaning president, Moon Jae In, have gone better than some expected. Far from scaling back the alliances, Trump and his top officials, including Secretary of Defense James Mattis, have put them at the center of American strategy in the Pacific, especially with respect to North Korea.
It is North Korea, however, that remains the first great test of the Trump administration. Trump clearly inherited a failed policy, stretching over past Democratic and Republican administrations alike, and was doubly cursed in coming to office on the eve of Kim Jong Un’s nuclear and ICBM breakout.
Yet despite Trump’s heated rhetoric, he and his team have actually moved cautiously on North Korea. Like its predecessors, the administration has combined shows of force, such as flying B-1 bombers over the peninsula, with appeals to the United Nations for further sanctions on Pyongyang. Two new rounds of sanctions, in July and September, may indeed have been harder than those previously levied, but, just as in the past, the administration had to settle for less than it wanted. More worrying, Trump appears to be adopting the long-held goal of presidents past: North Korean denuclearization. This is a strategic mistake that threatens to lock him into an unending series of negotiations that have served over the past quarter-century to buy time for Pyongyang to develop its nuclear and missile capabilities.
I believe it would be a far more realistic move for Trump to drop the chimera of denuclearization and instead tacitly acknowledge that North Korea is a nuclear-weapons-capable state. This would free up the administration to focus on the far more important job of deterring and containing a nuclear North Korea. Since Trump is almost certain to avoid a preventive war to remove Kim’s nuclear weapons, given the associated military and political risks, he will be forced in the end to accept them. That then mandates a credible and comprehensive policy to restrict North Korea’s actions abroad while making clear that any nuclear use will result in a devastating counterstrike. Washington has been deterring North Korea ever since the end of the Korean War. This new approach explicitly makes deterrence the center of U.S. policy, dropping the unobtainable goal of denuclearization or the imprudent goal of normalizing relations with North Korea. To be successful, Trump will need to get the support of both Seoul and Tokyo, which is a tall order. The alternative, however, is another round of Kabuki negotiations and the diversion of U.S. attention from the far more necessary task of ensuring that Kim Jong Un is kept in his nuclear box.
Michael Auslin is the Williams-Griffis Fellow in Contemporary Asia at the Hoover Institution, Stanford University, and the author of The End of the Asian Century (Yale).
On Israel By Daniella J. Greenbaum
As a candidate, Donald Trump took positions on Israel that were a blend of incoherence and inconsistency. He was an isolationist, except he was also Israel’s biggest supporter; he would enforce the Iran deal, except he wanted to rip it up on day one; he was the most pro-Israel candidate on the stage, except that he wanted to be “the neutral guy”; he wouldn’t commit to a policy on Jerusalem, except he declared his plan to immediately move the American Embassy to Israel’s eternal and undivided capital.
Words—especially a president’s—matter, but until Trump took office, it was impossible to predict how his administration would treat the Jewish state. Some Israel advocates became convinced that Trump’s victory would lead to the fulfillment of their bucket list of Middle East dreams—in particular, resolution of the long-simmering issue involving the location of the U.S. Embassy in Israel. The Jerusalem Embassy Act, which became law in 1995, recognized that “each sovereign nation, under international law and custom, may designate its own capital” and that “since 1950, the city of Jerusalem has been the capital of the State of Israel.” It ordered that “the United States Embassy in Israel should be established in Jerusalem no later than May 31, 1999.”
And yet, despite all that, the American Embassy has remained in Tel Aviv. (Presidents were given the power to push the date back on national-security grounds.) Much like then-candidates Bill Clinton and George W. Bush, Trump pledged to move the embassy if elected president. In a March 2016 speech to the American Israel Public Affairs Committee’s Policy Conference, Trump said unequivocally: “We will move the American Embassy to the eternal capital of the Jewish people, Jerusalem.”
The American Embassy belongs in Jerusalem, and Trump’s evolution on the issue was, for the most part, encouraging. (Early on in his candidacy, he was booed at the Republican Jewish Coalition’s annual meeting after refusing to take a position on Jerusalem’s status.) But for Israelis, who face myriad threats on a daily basis—both physically, from their many hostile neighbors, and economically, through an international boycott, divestment, and sanctions campaign—the location of the embassy ranks low on the list of urgent political matters. Even the most ardent proponents of this policy shift acknowledge it has the potential to inflame tensions in the region. Like his predecessors, Trump signed the waiver and suspended the move.
Next on the bucket list: discarding Barack Obama’s cataclysmic Iran deal. When Trump was a candidate, his intentions for the Joint Comprehensive Plan of Action (JCPOA) were anything but clear. He told AIPAC, “My number-one priority is to dismantle the disastrous deal with Iran.” But he also said, “We will enforce it like you’ve never seen a contract enforced before folks, believe me.” It’s hard to know which part of his schizophrenic speech the audience—and the country—was supposed to believe. The schizophrenia has continued during his tenure, with Trump certifying the Iran deal twice before announcing in October his decision not to recertify a third time. Despite signaling his extreme displeasure with the deal, Trump has so far opted not to terminate it. But, by refusing to recertify, he has instead left to Congress the decision whether or not to reimpose sanctions.
Most important, perhaps, to pro-Israel forces was Trump’s choice of foreign-policy team. While Jared Kushner’s lack of political experience made him an odd choice for Middle East maven—Trump exclaimed at an inauguration event: “if [he] can’t produce peace in the Middle East, nobody can”—there is no denying that Kushner is a Zionist. Along with Jason Greenblatt, Trump’s envoy to the Israeli–Palestinian peace process, Kushner visited Israel this summer to determine whether restarting peace talks was a viable course of action. The duo have articulated their desire to refrain from repeating the mistakes of previous administrations: “It is no secret that our approach to these discussions departs from some of the usual orthodoxy. … Instead of working to impose a solution from the outside, we are giving the parties space to make their own decisions about the future,” Greenblatt explained. Maybe that’s why Benjamin Netanyahu seems so elated. Bibi’s friction with Obama was well documented, and the prime minister has expressed his jubilation at the changed nature of his relationship to Washington. During the United Nations General Assembly, he tweeted: “Under your leadership, @realDonaldTrump, the alliance between the United States and Israel has never been stronger.”
During the campaign, it was hard to imagine that might be the case. Trump’s repeated use of the phrase “America First,” a classic isolationist trope with anti-Semitic overtones, was deeply concerning to pro-Israel voters. He continually insisted that foreign governments were a drain on the American economy: “I want to help all of our allies, but we are losing billions and billions of dollars. We cannot be the policemen of the world. We cannot protect countries all over the world…where they’re not paying us what we need.” According to a 2016 report from the Congressional Research Service, “Israel is the largest cumulative recipient of U.S. foreign assistance since World War II.” The report calculates that the United States has, over the years, provided Israel with more than $127 billion in bilateral assistance. If words and campaign promises meant anything to Trump, the candidate who insisted that Israel could pay “big league” would have metamorphosed into the president who ensured that it did.
But Trump’s campaign promises seem to have had no bearing on his actions. In an appropriations bill, Congress pledged an extra $75 million in aid to Israel, on top of the annual $3.1 billion already promised for this year. As part of negotiations for the 2016 Memorandum of Understanding, the Israeli government promised to return any funds that surpassed the pre-negotiated aid package. In what was doubtlessly a major disappointment to Trump’s America-first base, the State Department confirmed it will not be asking the Israelis to return the additional funds.
His behavior toward Israel during his eight months in office has confirmed what was evident throughout the campaign: Donald Trump’s words and actions have, at best, a haphazard relationship to each other. So far Israel has benefited. That may not always be the case.
Daniella J. Greenbaum is assistant editor of Commentary.
Of Hobbes and Harvey Weinstein
In man’s natural state, with no social or religious order to impose limits upon his hungers and passions, “notions of right and wrong, justice and injustice have there no place. Where there is no common power, force and fraud are…the cardinal virtues.” Thus did Thomas Hobbes, in 1651, anticipate and describe the sordid story of the film producer Harvey Weinstein.
The reason Weinstein’s three decades of monstrous personal and professional conduct are so appalling and fascinating in equal measure is that he was clearly functioning outside the “social compact” Hobbes said was necessary to save men from a perpetual state of war they would wage against one another in the state of nature. For that is what Weinstein was doing, in his own way: waging Hobbesian war against the women he abused and finding orgasmic pleasure in his victories.
And Weinstein did so while cleverly pretending to leadership within the social compact and disingenuously advocating for its improvement both through political change and artistic accomplishment. Hobbes said the life of man in the state of nature was nasty, brutish, and short, but he did not say the warrior could not be strategic. Rochefoucauld’s immortal declaration that hypocrisy is the tribute vice pays to virtue is entirely wrong in this case. Weinstein paid off feminists and liberals to extend his zone of protection and seduction, not to help support the virtues he was subverting with his own vices.
Hobbes said that in the state of nature there was “no arts; no letters; no society.” But if the man in the state of nature, the nihilistic warrior, coexists with people who live within the social compact, would it not be a brilliant strategy to use the arts, letters, and society as cover, and a means of infiltrating and suborning the social compact? Harvey Weinstein is a brutal thug, a man of no grace, more akin to a mafioso than a maker of culture. And yet as a movie producer he gravitated toward respectable, quality, middlebrow, elevated and elevating fare. People wanted to work with him because of the kinds of movies he made. I think we can see that was the whole point of the exercise: It was exciting to be called into his presence because you knew you would do better, more socially responsible, more praiseworthy work under his aegis than you would with another producer.
And then, garbed only in a bathrobe, Weinstein would strike.
Weinstein was universally known to be a terrible person long before the horrifying tales of his sexual predation, depredation, and assault were finally revealed. And—this is important—known to be a uniquely terrible person. His specific acts of repugnant public thuggishness were detailed in dozens of articles and blog items over the decades, and were notable precisely because they were and are not common currency in business or anywhere else. It was said of him after the latest revelations that he had mysterious abilities to suppress negative stories about himself, and perhaps he did; even so, it was a matter of common knowledge that he was the most disgusting person in the movie business, and that’s saying a lot. And that’s before we get to sex.
To take one example, Ken Auletta related a story in the New Yorker in 2001 about the director Julie Taymor and her husband, the composer Eliot Goldenthal. She had helmed a movie about Frida Kahlo produced by Weinstein. There was a preview screening at the Lincoln Square theater in Manhattan. The audience liked it, but some of its responses indicated that the plotline was confusing. Weinstein, whose hunger to edit the work of others had long since earned him the name “Harvey Scissorhands,” wanted to recut it to clarify the picture. Taymor didn’t, citing the audience’s favorable reaction. Then this happened:
He saw Taymor’s agent…and yelled at him, “Get the fuck out of here!” To Goldenthal, who wrote the score for Frida, Weinstein said, “I don’t like the look on your face.” Then, according to several witnesses, he moved very close to Goldenthal and said, “Why don’t you defend her so I can beat the shit out of you?” Goldenthal quickly escorted Taymor away. When asked about this incident, Weinstein insisted that he did not threaten Goldenthal, yet he concedes, “I am not saying I was remotely hospitable. I did not behave well. I was not physically menacing to anybody. But I was rude and impolite.” One member of Taymor’s team described Weinstein’s conduct as actually bordering on “criminal assault.”
Weinstein told the late David Carr in 2002 that his conduct in such cases had merely been the result of excess glucose in his system, that he was changing his diet, and he was getting better. That glucose problem was his blanket explanation for all the bad stories about him, like this one:
“You know what? It’s good that I’m the fucking sheriff of this fucking lawless piece-of-shit town.” Weinstein said that to Andrew Goldman, then a reporter for the New York Observer, when he took him out of a party in a headlock last November after there was a tussle for Goldman’s tape recorder and someone got knocked in the head.
Goldman’s then-girlfriend, Rebecca Traister, asked Weinstein about a controversial movie he had produced. Traister provided the predicate for this anecdote in a recent piece: “Weinstein didn’t like my question about O, there was an altercation…[and] he called me a c—.”
Auletta also related how Weinstein physically threatened the studio executive Stacey Snider. She went to Disney executive Jeffrey Katzenberg and told him the story. Katzenberg, “one of his closest friends in the business,” told Weinstein he had to apologize. He did, kind of. Afterward, Katzenberg told Auletta, “I love Harvey.”
These anecdotes are 15 years old. And there were anecdotes published about Weinstein’s behavior dating back another 15 years. What they revealed then is no different from what they reveal now: Weinstein is an out-and-out psychopath. And apparently this was fine in his profession…as long as he was successful and important, and the stories involved only violence and intimidation.
Flash-forward to October 2017. Katzenberg—the man who loved Harvey—publicly released an email he had sent to Weinstein after he was done for: “You have done terrible things to a number of women over a period of years. I cannot in any way say this is OK with me…There appear to be two Harvey Weinsteins…one that I have known well, appreciated, and admired and another that I have not known at all.”
So which Weinstein, pray tell, was the one from whom Katzenberg had had to protect Stacey Snider? The one he knew or the one he didn’t know? Because they are, of course, the same person. We know that sexual violence is more about power than sex—about the ultimate domination and humiliation. In these anecdotes and others about Weinstein, we see that his great passions in life were dominating and humiliating. Even if the rumors hadn’t been swirling around his sexual misconduct for decades, could anyone actually have been surprised he sought to secure his victory over the social compact in the most visceral way possible outside of murder?
The commentariat’s reaction to the Weinstein revelations has been desperately confused, and for once, the confusion is constructive, because there are strange ideological and moral convergences.
The most extreme argument has it that he’s really not a unique monster, that every working woman in America has encountered a Weinstein, and that the problem derives from a culture of “toxic masculinity.” This attitude is an outgrowth of the now-fashionable view that there have been no real gains for women and minorities over the past half-century, that the gains are illusory or tokenish, and that something more revolutionary is required to level the playing field.
As a matter of fact in the Weinstein case, this view is false. Women have indeed encountered boors and creeps in their workplaces. But a wolf-whistler is not a rapist. Someone who leers at a woman isn’t the same as someone who masturbates in front of her. Coping with grotesque and inappropriate co-workers and bosses is something every human being, regardless of gender, has had to deal with, and will have to deal with until we are all replaced by robots. It’s worse for women, to be sure. Still, no one should have to go through such experiences. But we all have and we all do. It’s one of the many unpleasant aspects of being human.
Still, the extreme view of “toxic masculinity” contains a deeper truth that is anything but revolutionary. It takes us right back to Hobbes. His central insight—indeed, the insight of civilization itself—is that every man is a potential Weinstein. This clear-eyed, even cold-eyed view of man’s nature is the central conviction of philosophical conservatism. Without limits, without having impressed upon us a fear of the legal sanction of punishment or the social sanction of shame and ostracism, we are in danger of seeking our earthly rewards in the state of nature.
The revolutionary and the conservative also seem to agree there’s something viscerally disturbing about sex crimes that sets them apart. But here is where the consensus between us breaks down. Logically, if the problem is that we live in a toxic culture that facilitates these crimes, then the men who commit them are, at root, cogs in an inherently unjust system. The fault ultimately is the system’s, not theirs.
Harvey Weinstein is an exceptionally clever man who spent decades standing above and outside the system, manipulating it and gaming it for his own ends. He’s no cog. Tina Brown once ran Weinstein’s magazine and book-publishing line. She wrote that “strange contracts pre-dating us would suddenly surface, book deals with no deadline attached authored by attractive or nearly famous women, one I recall was by the stewardess on a private plane.” Which means he didn’t get into book publishing, or magazine publishing, to oversee the production of books and articles. He did it because he needed entities through which he could pass payoffs both to women he had harassed and molested and to journalists whose silence he bought through options and advances. His primary interest wasn’t in the creation of culture. It was the creation of conditions under which he could hunt.
Which may explain his choice of the entertainment industry in the first place. In how many industries is there a specific term for demanding sexual favors in exchange for employment? There’s a “casting couch”; there’s no “insurance-adjustor couch.” In how many industries do people conduct meetings in hotel rooms at off hours anyway? And in how many industries could that meeting in a hotel room end up with the dominant player telling a young woman she should feel comfortable getting naked in front of him because the job for which she is applying will require her to get naked in front of millions?
Weinstein is entirely responsible for his own actions, but his predatory existence was certainly made easier by the general collapse of most formal boundaries between the genders. Young women were told to meet him in private at night in fancy suites. Half a century earlier, no young woman would have been permitted to travel alone in a hotel elevator to a man’s room. The world in which that was the norm imposed unacceptable limitations on the freedoms of women. But it did place serious impediments in the paths of predators whose despicable joy in life is living entirely without religious, spiritual, cultural, or moral impediment.
Hobbes was the great philosopher of limits. We Americans don’t accept his view of things; we tend to think better of people than he did. We tend to believe in the greater good, which he resolutely did not. We believe in self-government, which he certainly did not. But what our more optimistic outlook finds extraordinarily difficult to reckon with is behavior that challenges this complacency about human nature. We try to find larger explanations for it that place it in a more comprehensible context: It’s toxic masculinity! It’s the residue of the 1960s! It’s the people who enabled it! The truth is that, on occasion—and this is one such occasion—we are forced to come face to face with the worst of what any of us could be. And no one explanation suffices save Hamlet’s: “Use every man after his desert, and who should ’scape whipping?”
The education-reform outfit’s hard-left shift
In remaking itself, Teach for America (TFA) has subtly downgraded the principles that had won it allies across the spectrum. George W. Bush, Mitch McConnell, John Cornyn, Chris Christie, and Meg Whitman are a few of the Republicans who championed TFA. The group attracted such boldface names, and hundreds of millions of dollars from some of the largest American firms and philanthropies, because it stood for a simple but powerful idea: that teacher quality is the decisive factor in the educational outcomes produced by schools.
Judging by its interventions in recent debates, it isn’t all that clear that senior TFA executives still believe this. These days, TFA’s voice on charters, accountability, and curricular rigor is decidedly muffled. Such education-reform essentials have been eclipsed in TFA’s discourse by immigration, policing, “queer” and transgender-identity issues, and other left-wing causes. TFA’s message seems to be that until numerous other social ills are cured—until immigration is less restricted, policing becomes more gentle, and poverty is eliminated—an excellent education will elude the poor. That was the status-quo defeatism TFA originally set out to challenge.
Wendy Kopp conceived TFA when she was a senior at Princeton in 1989. Unable to get a New York City teaching job without a graduate degree and state certification, Kopp wrote a thesis calling for the creation of a nontraditional recruitment pipeline that would bring America’s most promising young people to its neediest classrooms. TFA members would teach for two years, applying their energy and ambition to drive achievement at the classroom level. She speculated that some would stay in education, while others would go on to careers in law, medicine, business, journalism, etc. But all would remain “lifelong leaders in the effort to end educational inequity.”
The following year, Kopp launched TFA with a corps of 489 new teachers who were dispatched to schools in six regions—a virtuoso feat of social entrepreneurship. Since then some 50,000 teachers have completed the program. This year’s corps counts around 6,400 members, serving 53 regions from coast to coast.
By the time I joined, in 2005, TFA had distilled the experience of its best corps members into a theory of educational transformation called “Teaching as Leadership.” Most people, it said, aren’t natural-born educators. But they could rise to classroom greatness by setting “big goals” for all students, planning engaging lessons, continually assessing their students, maintaining tough discipline, and investing parents and the wider community in their goals.
Mostly, great teachers work hard—really hard. TFA brought the work habits usually associated with large law firms and high-end management consultancies to America’s K–12 failure factories. Its “summer institute” for new recruits was a grueling ordeal of tears, sweat, and 16-hour days. When I was a corps member, we were told that this is what it would take to overcome the forces of the status quo, which were chronically low expectations; broken homes and criminality in the streets; messy, undisciplined classrooms; and bloated bureaucracies that put the needs of adults above those of children.
The TFA worldview diverged sharply from the one that predominated in the education industry. The leading lights of the profession held that the achievement gap was a product of inadequate funding and larger social inequalities. Thus they transferred blame for classroom outcomes from teachers to policymakers and society at large. Teachers’ unions were particularly fond of this theory, since it provided cover for resisting accountability and high expectations.
TFA raged against all this. The assumption that some kids were doomed to underachievement was wrong and, indeed, bigoted. Ditto for the notion that inner-city children couldn’t be expected to behave like young scholars. These children could pull themselves up, provided they had dedicated educators who believed in them. This wasn’t to say that external factors were discounted altogether. But TFA concentrated on the things that educators and school leaders could control. It would emphasize self-help and uplift. And it would accept friends and allies across political divides to fulfill the promise of educational equality.
Today’s Teach for America is a different story. TFA’s leaders have now fully enlisted the organization in the culture war—to the detriment of its mission and the high-minded civic sensibility that used to animate its work.
This has been most visible in TFA’s response to the 2016 election. TFA chief executive Elisa Villanueva Beard, who took over from Kopp four years ago, doesn’t bother to mask either her progressivism or her revulsion at the new administration. When, a couple of weeks after the election, the president-elect announced his choice of Betsy DeVos to lead the Department of Education, Beard’s response was swift and cold.
A November 23 TFA news release began by decrying Trump’s “indisputably hostile and racially charged campaign” and called on DeVos to uphold “diversity, equity, and inclusiveness.” The statement went on to outline 11 TFA demands. Topping the litany was protection of the previous administration’s Deferred Action for Childhood Arrivals, or DACA, program, which granted legal status to certain illegal immigrants brought into the country as children. Then came the identity-politics checklist: “SAFE classrooms for LGBTQ youth and teachers,” “safe classrooms for students and teachers with disabilities,” “safe classrooms for Muslim students and teachers,” “culturally responsive teaching,” and so on.
Of the 11 demands, only three directly touched core education-reform areas—high expectations, accountability, and data-driven instruction—and these were couched in the broadest terms possible. Most notably, there wasn’t a single kind word for DeVos: no well wishes, no hope of “working together to achieve common goals,” no call for dialogue, nothing but angry demands. This, even though the secretary-designee was a passionate charter advocate and came from the same corporate philanthropy and activism ecosystem that TFA had long inhabited.
It is true that inner-city educators were horrified at the election of a candidate who winked at David Duke and suggested that a federal judge’s Mexican heritage was disqualifying. TFA’s particular concern about DACA makes sense, since many corps members work with illegal-immigrant children in border states. (My own stint took me to the Rio Grande Valley region of South Texas.)
Even so, TFA’s allergic reaction to the Trump phenomenon reflects faulty strategic thinking. Beard isn’t Rachel Maddow, and TFA isn’t supposed to be an immigration-reform outfit, still less a progressive think tank. With Republicans having swept all three branches of the federal government, as well as a majority of statehouses and governors’ mansions, TFA must come to terms with the GOP. Condemning the new education secretary as barely legitimate wasn’t wise.
Beard is also making a grave mistake by attempting to banish legitimate conservative positions from the reform movement. In the wake of the bloody white-nationalist protests in Charlottesville, Virginia, she blasted an email to the organization that denounced in one breath opposition to affirmative action and “racist and xenophobic violence.” Some two-thirds of Americans oppose race-based affirmative action. Will these Americans give TFA a fair hearing on educational reform when the organization equates them with alt-right thugs? In a phone interview, Beard said she didn’t intend to link white nationalism with opposition to affirmative action.
As for DACA, the amount of attention TFA devotes to the fate of those affected is out of all proportion. TFA has a full-time director for DACA issues. A search of its website reveals at least 31 news releases, statements, and personal blogs on DACA—including a 2013 call for solidarity with “UndocuQueer students” that delved into the more exotic dimensions of intersectionality. As one education reformer told me in an interview, “They are super-concerned with ‘can’t wait’ issues—DACA and so on—and so much of their mental space [is filled up] by that kind of thing that less of their attention and time is being spent” on central priorities. “Personally, I think that’s such a shame.” (This reformer, and others I interviewed for this article, declined to speak on the record.)
By contrast, TFA didn’t call out Mayor Bill de Blasio on his attempts to roll back charter schools in New York. The organization has rarely targeted teachers’ unions the way it has ripped into Trump. But it is the National Education Association and the American Federation of Teachers that pose the main obstacle to expanding school choice and dismissing ineffective teachers. It is the unions that are bent on snuffing out data-driven instruction. It was a teachers’ union boss (Karen Lewis of Chicago), not the 45th president, who in 2012 accused TFA of supporting policies that “kill and disenfranchise children.”
Teach for America’s turn to the harder left predated Trump’s ascent, and it isn’t mainly about him. Rather, it tracks deeper shifts within American liberalism, from the meritocratic Clintonian ideas of the 1990s and early aughts to today’s socialist revival and the fervid politics of race, gender, and sexuality.
Culturally, TFA was always more liberal than conservative. Educators tend to be liberal Democrats, regardless of the path that brings them to the classroom. But education reformers are unwanted children of American liberalism. They are signed up for the Democratic program, but they clash with public-sector labor unions, the most powerful component of the party base.
As TFA went from startup to corporate-backed giant, it sustained withering attacks from leftist quarters. On her influential education blog, New York University’s Diane Ravitch (a one-time education reformer who changed sides) relentlessly hammered corps members as “woefully unprepared,” as scabs “used to take jobs away from experienced teachers,” as agents of “privatization” and the “neoliberal attack on the public sector.” It was Ravitch who publicized Lewis’s claim that TFAers “kill” kids.
Michelle Rhee, the Korean-American alumna who in 2007 was tapped as chancellor of the District of Columbia system, became a lightning rod for anti-TFA sentiment on the left. Rhee’s no-nonsense approach to failing schools was summed up in a Time magazine cover that showed her holding a broom in the middle of a classroom. When D.C. Mayor Adrian Fenty didn’t win reelection in 2010, it was seen as a popular verdict against this image of TFA-style reform.
In 2013, one university instructor, herself a TFA alumna, urged college professors not to write letters of recommendation for students seeking admission to the organization. Liberal pundits took issue with TFA’s alleged elitism and lack of diversity, portraying it as the latest in a long line of “effete” white reformist institutions that invariably let down the minorities they try to help. TFA, argued a writer in the insurgent leftist magazine Jacobin, is “another chimerical attempt in a long history of chimerical attempts to sell educational reform as a solution to class inequality. At worst, it’s a Trojan horse for all that is unseemly about the contemporary education-reform movement.” By “unseemly,” the writer meant conservative and corporate.
The assaults have had an effect. Applications to TFA dropped to 37,000 last year, down from 57,000 in 2013. Thus ended a growth spurt that had seen the organization increase the size of its corps by about a fifth each year since 2000. Partly this was due to more jobs and better salaries on offer to elite graduates in a rebounding private sector. But as Beard conceded in a statement in April 2016, partly it was the “toxic debate surrounding education” that was “pushing future leaders away from considering education as a space where they can have real impact.”
The temptation for any successful nonprofit crusade is to care more about viability and growth than the original cause. Wounded by the union-led attacks, TFA leaders have apparently concluded that identity politics and a progressive public presence can revive recruitment. With its raft of corporate donors and the massive Walton-family endowment, TFA would never fit in comfortably with an American liberalism moving in the direction of Bernie Sanders and Elizabeth Warren. But talk of Black Lives and “UndocuQueers” might help it reconnect with younger millennials nursed on race-and-gender theory.
Thus, TFA leads its current pitch by touting its diversity. Beard opened her keynote at last year’s 25th-anniversary summit in Washington by noting: “We are more diverse than we have ever been. . . . We are a community that is black, that is Latino, that is white, that is American Indian, that is Asian and Pacific Islander, that is multiracial. We are a community that is lesbian, gay, bisexual, queer and trans.” The organization’s first priority, Beard went on, will always be “to build an inclusive community.”
It makes sense to recruit diverse teachers to lead classrooms in minority-majority regions, to be sure. But one can’t help detecting a certain liberal guilt behind this rhetoric, as if TFA had taken all the attacks against it to heart: We aren’t elite, we swear! Yet the 90 percent of black children who don’t reach math proficiency by eighth grade need good math teachers, period. Their parents don’t care how teachers worship (if at all), what they look like, or what they get up to in the bedroom. They want teachers who will put their children on a trajectory out of poverty.
Minority parents, moreover, fear for their kids’ well-being in chaotic schools and gang-infested streets. Yet to hear many of the speakers at TFA’s summit, you would have thought that police and other authority figures represent the main threat to black and Hispanic children. At a session titled “#StayWoke,” a TFA teacher railed against the police:
I teach 22 second-graders in Southeast D.C., all of them students of color. Sixteen of them are beautiful, carefree black and brown boys, who, despite their charm and playfulness, could be slain in the streets by the power that be [sic], simply because of the color of their skin, what clothes they wear, or the music they choose to listen to.
Educators must therefore impart “a racial literacy, a literacy of resistance.” Their students “must grow up woke.” Another teacher-panelist condemned anti-gang violence initiatives that
come from the same place as the appetite to charge black and brown people with charges of self-destruction. The tradition of blaming black folk keeps us from aiming at real sources of violence. If we were really interested in ending violence, we would be asking who pulled the trigger to underfund schools in Philadelphia? Who poisoned our brothers and sisters in Flint, Michigan? Who and what made New Orleans the incarceration capital of the world? We would teach our students to raise these questions.
Throughout, he led the assembly in chants of “Stay Woke!”
Talk of teaching “resistance” represented a reversion to the radical pedagogy and racial separatism that left a legacy of broken inner-city schools in the previous century. TFA’s own experience, and that of TFA-linked charter networks such as the Knowledge Is Power Program, had taught reformers that, to thrive academically, low-income students need rigid structure and order. Racial resentment won’t set these kids up for success but for alienation and failure—and prison.
Another session, on “Academic Rigor, Social and Political Consciousness, and Culturally Relevant Pedagogy,” pushed similar ideas. Jeff Duncan-Andrade, an associate professor of “Raza studies” at San Francisco State University, urged teachers to develop an ultra-localized race-conscious curriculum:
Don’t even essentialize Oakland’s culture! If you’re from the town, you know it’s a big-ass difference between the west and the east [sic]. We talk differently, we walk differently, we dress differently, we speak differently. The historical elements are different. So if you use stuff from the west [of Oakland] you have to really figure out, ‘How do I modify this to be relevant to the communities I’m serving in East Oakland?’ Develop curriculum, pedagogy, assessment that is responsive to the community you serve. You gotta become an ethnographer. You gotta get on the streets, get into the neighborhoods and barrios…talk to the ancestors…
If your curriculum is not building pathways to self-love for kids who at every turn of their day are taught to hate themselves, hate the color of their skin, hate the texture of their hair, hate the color of their eyes, hate the language they speak, hate the culture they come from, hate the ‘hood that they come from, hate the countries that their people come from, then what’s the purpose of your schooling?
Other sessions included “Native American Community Academy: A Case Study in Culturally Responsive Pedagogy”; “What Is the Role of White Leaders?”; “Navigating Gender Dynamics”; “Beyond Marriage Equality: Safety and Empowerment in the Education of LGBTQ Youth”; “A Chorus of Voices: Building Power Together,” featuring the incendiary Black Lives Matter activist and TFA alumnus DeRay McKesson; “Every Student Counts: Moving the Equity Agenda Forward for Asian American and Pacific Islander Students”; “Intentionally Diverse Learning Communities”; and much more of the kind.
Lost amid all this talk of identitarian self-love was the educator’s role in leading poor children toward things bigger and higher than Oakland, with its no doubt edifying east–west street rivalries—toward the glories of the West and the civic and constitutional bonds that link Americans of all backgrounds. You can be sure that the people who participate in TFA see to it that their own children learn to appreciate Caravaggio and Shakespeare and The Federalist. The whole point of the organization was to ensure that kids from Oakland could do the same.
Twenty-seven years after its founding, Teach for America’s mission remains vital. Today fewer than 1 in 10 children growing up in low-income communities graduate from college. The basic political dynamics of education reform haven’t changed: Teach for America, and the other reform efforts it has inspired, have shown what works. The question is whether Teach for America is still determined to reform schools and fight for educational excellence for all—or whether it wants to become a cash-flush and slick vehicle for the new politics of identity.
Review of 'iGen' by Jean Twenge
In 1954, the scientists James Olds and Peter Milner ran a series of experiments on rats in a laboratory at McGill University, and what they found was remarkable and disturbing. If electrodes were implanted into a particular part of the rat brain—the lateral hypothalamus—the rats would voluntarily give themselves electric shocks. They would press a lever several thousand times per hour, for days on end, and even forgo food so that they could keep pressing. They were even prepared to endure pain to get their fix: the animals would run back and forth over an electrified grid, charring the bottoms of their feet, in order to keep receiving the shocks. For a long time afterward, Olds and Milner thought that they had discovered the “bliss center” of the brain—but this was wrong. They had discovered the reward center, the part of the brain that gives us our drives and our desires. The scientists assumed that the rats must have been in a deep state of pleasure while receiving these electric shocks, but in reality they were in a prolonged state of acute craving.
Jean Twenge’s important new book, iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood, talks about a new form of electronic stimulation that appears to be driving young people to extreme distraction. A professor of psychology at San Diego State University, Twenge has built her career on looking at patterns in very large samples of people across long periods of time. She draws on data from the General Social Survey, which has surveyed adults 18 and older since 1972; the American Freshman Survey, which has questioned college students since 1966; the Youth Risk Behavior Surveillance System; and the Monitoring the Future databases. She looks to see whether there have been any changes in behavior and personality across time for people the same age but from different generations. Prior to iGen, she was the author of The Narcissism Epidemic (2009), co-written with the psychologist W. Keith Campbell, and Generation Me (2013), a book about self-entitled Millennials. Twenge knows whereof she speaks.
The rising narcissism Twenge documented in earlier cohorts appears to have petered out: the trends of self-regard and self-entitlement do not continue among those born after 1995. What Twenge does find, however, is that they have been replaced by sharp increases in anxiety. Rates of anxiety and depression are spiking rapidly in young people, while at the same time their engagement with adult behaviors is declining. Using dozens of graphs, Twenge shows the reader how teenagers today drink less, go out less, socialize less, are less motivated to get their driver’s license, work less, date less, and even have sex less.
At first glance, the data seem counterintuitive, because the social pressures to abstain from alcohol and casual sex have never been more relaxed. But, on further reading, it appears that young people’s avoidance of adult behaviors has at least something to do with the addictive and distracting nature of smartphones and social media. Of course, Twenge is careful to point out that this is all “correlational.” She does not have a smoking gun and cannot prove causality. But the speculation seems plausible. All of the changes she observes started accelerating after 2007, when smartphones became ubiquitous. She writes:
I asked my undergraduate students what I thought was a very simple question: “What do you do with your phone while you sleep? Why?” Their answers were a profile in obsession. Nearly all slept with their phones, putting them under their pillows, on the mattress, or at the very least within arm’s reach of the bed. They checked social media websites and watched videos right before they went to bed and reached for their phones again as soon as they woke up in the morning (they had to—all of them used it as their alarm). Their phone was the last thing they saw before they went to bed and the first thing they saw when they woke up. If they woke up in the middle of the night they often ended up looking at their phones. They talked about their phones the way an addict would talk about crack: “I know I shouldn’t, but I just can’t help it.”
Recent experiments also lend support to the hypothesis. In an experiment carried out in 2013, the psychologists Larry Rosen and Nancy Cheever brought 163 university students into a room. Some students had their phones unexpectedly taken away, and others were told to put their phones on silent and out of sight. All students were then asked to fill out a brief anxiety questionnaire at 20-minute intervals. Those who were the heaviest smartphone and social-media users recorded anxiety levels that kept climbing over the 90-minute period. The students who used their smartphones the least showed no increase in anxiety. This experiment lends strong support to the hypothesis that smartphones, by their propensity to promote constant use, do in fact cause agitation.
Twenge’s chapter on mental health in the generation born after 1995 makes for the book’s most disturbing reading. Heavy smartphone and social-media use correlates with higher anxiety and increased feelings of loneliness, particularly in girls. Social media seems to allow girls to bully one another in much more subtle and effective ways than were previously available. They constantly include or exclude one another from online activities such as group “chats,” and they are forever surveilling their peers’ presentation and appearance. This means that if girls aren’t vigilantly checking their social-media accounts, they won’t know if they’re being gossiped about or excluded from some fun activity. Like the electrodes placed on Olds and Milner’s rats, this new technology seems to activate the reward center—but it does not induce states of contentment, satisfaction, or meaning. It also takes time away from other activities such as sports and in-person socializing that would induce feelings of contentment and satisfaction. For a young person who is developing his personality and his competencies in the real world, this could have a profound and long-lasting effect.
Twenge tries not to be alarmist, and she presents her findings in a cautious, conscientious manner. She takes care to make caveats and eschew emotionally laden language. But it’s hard not to be alarmed by what she has found. In the six years between 2009 and 2015, the number of high-school girls who attempted suicide increased by 43 percent and the number of college students who “seriously considered” ending their lives rose by 51 percent. Suicides in young people are carefully tracked—there can be no ambiguity in this data—and increasing rates of children killing themselves are strong evidence that something is seriously amiss. From 2007 (the year smartphones became omnipresent) to 2015, suicide among 15- to 19-year-olds rose by 46 percent, and among those aged 12 to 14, it rose by half. And this rise is particularly pronounced for young girls. Three times as many 12- to 14-year-old girls killed themselves in 2015 as in 2007; among boys that age, suicide doubled in the same period. The suicide rate is always higher for boys (partly because they use more violent methods), but girls are now beginning to close this gender gap.
Another startling chapter in Twenge’s book focuses on sex, relationships, and family formation. We all know that young people are putting off marriage and child-rearing until later years, often for sensible reasons. But what is less well known is that young people are dating a lot less and spending a lot more time alone. It appears that old-fashioned romance and courtship norms are out the window, and so too is sex among young people. Twenge writes:
[M]ore young adults are not having sex at all. More than twice as many iGen’ers and late Millennials (those born in the 1990s) in their early twenties (16 percent) had not had sex at all since age 18 compared to GenX’ers at the same age (6 percent). A more sophisticated statistical analysis that included all adults and controlled for age and time period confirmed twice as many “adult virgins” among those born in the 1990s than among those born in the 1960s.
But if 16 percent are virgins, that means 84 percent of young people are having sex. Perhaps, then, there’s only a small segment bucking the trend toward more libertine lifestyles? Not so. Twenge writes:
Even with age controlled [in samples], Gen X’ers born in the 1970s report having an average of 10.05 sexual partners in their lifetimes, whereas Millennials and iGen’ers born in the 1990s report having sex with 5.29 partners. So Millennials and iGen’ers, the generations known for quick, casual sex, are actually having sex with fewer people.
For decades, conservatives have worried about loosened social and sexual mores among young people. It’s true that sexual promiscuity poses meaningful risks to youths’ well-being, especially among women. But there are also risks that manifest at a broader level when there is a lack of sexual activity in young people. And this risk can be summed up in three words—angry young men. Anthropologists are well aware that societies without strong norms of monogamous pairing produce a host of negative outcomes. In such populations, crime and child abuse increase while savings and GDP decline. Those are just some of the problems that come from men’s directing their energies toward competing with one another for mates instead of providing for families. In monogamous societies, male-to-male competition is tempered by the demands of family life and planning for children’s futures.
These trends identified by Twenge—increased anxiety and depression, huge amounts of time spent on the Internet, and less time spent dating and socializing—do not bode well for the future of Western societies. It should come as no surprise that young people who struggle to connect with one another and young men who can’t find girlfriends will express their anxieties as political resentments. Twenge’s book reveals just how extensive those anxieties are.
Like the rats that forgo food to binge on electric shocks, teenagers are forgoing formative life experiences and human connection in order to satiate their desire for electronic rewards. But the problem is not necessarily insurmountable. Twenge identifies possible protective factors such as playing sports, real-life socializing, adequate sleep, sunlight, and good food. Indeed, phone apps designed to encourage good habits are becoming popular, as are those that lock people out of their social-media accounts for predetermined periods of time. Twenge also argues that iGen has several positive indicators: its members are less narcissistic and more industrious than the generation before them, and they are also more realistic about the demands of work and careers. But harnessing those qualities will require an effort that seems at once piddling and gargantuan. The future well-being of iGen, and our own, depends on whether they can just put down their phones.
Playwrights and politics
No similar incidents have been reported, but not for lack of opportunity. In the past year, references to Trump have been shoehorned into any number of theatrical productions in New York and elsewhere. One Trump-related play by a noted author, Robert Schenkkan’s Building the Wall, has already been produced off Broadway and across America, and various other Trump-themed plays are in the pipeline, including Tracy Letts’s The Minutes and Beau Willimon’s The Parisian Woman, both of which will open on Broadway later this season.
The first thing to be said about this avalanche of theatrical activity is that these plays and productions, so far as is known, all show Trump in a negative light. That was to be expected. Save for David Mamet, I am not aware of any prominent present-day American playwright, stage actor, director, or technician who has ever publicly expressed anything other than liberal or progressive views on any political subject whatsoever. It appears, however, that one can oppose Trump and still be skeptical about the artistic effects of such lockstep unanimity, for many left-of-center drama critics have had unfavorable things to say about the works of art inspired to date by the Trump presidency.
So even a political monoculture like that of the American theater can criticize the fruits of its own one-sidedness. But can such a culture produce any other kind of art? Or might the Theater of Trump be inherently flawed in a way that prevents it from transcending its limitations?
From Aristophanes to Angels in America, politics has always been a normal part of the subject matter of theater. Not until the end of the 19th century, though, did a major playwright emerge whose primary interest in writing plays was political rather than aesthetic. George Bernard Shaw saw himself less as an artist than as a propagandist for the causes to which he subscribed, which included socialism, vegetarianism, pacifism, and (late in his life) Stalinism. But Shaw took care to sugar the political pill by embedding his preoccupations in entertaining comedies of ideas, and he was just as careful to make his villains as attractive—and persuasive-sounding—as his heroes.
In those far-off days, the English-speaking theater world was more politically diverse than it is today both on and off stage. It was only in the late ’40s that the balance started to shift, at first slowly, then with steadily increasing speed. In England, this ultimately led to a theater in which it is now common to find explicit political statements embedded not merely in plays but also in such commercial musicals as Billy Elliot, a show about the British miners’ strike of 1984 in which a chorus of children sings a holiday carol whose refrain runs as follows: “Merry Christmas, Maggie Thatcher / We all celebrate today / Cause it’s one day closer to your death.”
As this example suggests, postwar English political theater is consumed with indictments of the evils arising from the existence of a rigid class system. American playwrights, by contrast, are typically more inclined to follow in the footsteps of Arthur Miller and Tennessee Williams, both of whose plays portray (albeit for different reasons) the spiritual and emotional poverty of middle-class life. In both countries, most theater is neither explicitly nor implicitly political. Nevertheless, the theater communities of England and America have for the last half-century or so been all but unanimous in their offstage political convictions. This means that when an English-language play is political, the views that it embodies will almost certainly be left-liberal.
This unanimity of opinion is responsible for what I called, in a 2009 Commentary essay about Miller, the “theater of concurrence.”1 Its practitioners, presumably because all of their colleagues share their political views, take for granted that their audiences will also share them. Hence they write political plays in which no attempt is made to persuade dissenters to change their minds, it being assumed that no dissenters are present in the theater. In the theater of concurrence, disagreement with left-liberal orthodoxy is normally taken to be the result either of invincible ignorance or a deliberate embrace of evil. In the U.S. and England alike, it has become rare to see old-fashioned Shavian political plays like David Hare’s Skylight (1995) in which the devil (in this case, a Thatcherite businessman in love with an upper-middle-class do-gooder) is given his due. Instead, we get plays whose villains are demoniacal monsters (Tony Kushner’s fictionalized portrayal of Roy Cohn in Angels in America is an example) rather than flawed humans who, like Tom in Skylight, have reached the point of no moral return.
All this being the case, it makes perfect sense that Donald Trump’s election should have come as so disorienting a shock to the American theater community, which took for granted that he was unelectable. No sooner were the votes tallied than theater people took to social media to angrily declare their unalterable resistance to the Trump presidency. Many of them believe both Trump and his supporters to be, in Hillary Clinton’s oft-quoted phrase, members of “the basket of deplorables . . . racist, sexist, homophobic, xenophobic, Islamophobic, you name it.”
What kind of theater is emerging from this shared belief? Building the Wall, the first dramatic fruit of the Trump era, is a two-character play set in the visiting room of a Texas prison. It takes place in 2019, by which time President Trump has been impeached after having responded to the detonation of a nuclear weapon in Times Square by declaring nationwide martial law and locking up every foreigner in sight. The bomb, it turns out, was a “false flag” operation planted not by terrorists but by the president’s men. Rick, the play’s principal character, has been imprisoned for doing something so unspeakably awful that he and his interlocutor, a sanctimonious black journalist who is interviewing him for a book, are initially reluctant to talk about it. At the end of an hour or so of increasingly broad hints, we learn that Rick helped the White House set up a Nazi-style death camp for illegal immigrants.
Schenkkan has described Building the Wall as “not a crazy or extreme fantasy,” an inadvertently revealing remark. It is possible to spin involving drama out of raging paranoia, but that requires a certain amount of subtlety, not to mention intelligence—and there is nothing remotely subtle or intelligent about Building the Wall. Rick is a blue-collar cartoon, a regular-guy Texan who claims not to be a racist but voted for Trump because “all our jobs were going to Mexico and China and places like that and then the illegals here taking what jobs are left and nobody gave a damn.” Gloria, his interviewer, is a cartoon of a different kind, a leftsplaining virtue signal in human form who does nothing but emit smug speeches illustrating her own enlightened state: “I mean, at some point in the past we were all immigrants, right, except for Native Americans. And those of us who didn’t have a choice in the matter.” The New York production of Building the Wall closed a month ahead of schedule, having received universally bad reviews (the New York Times described it as “slick and dispiriting”).
The Public Theater’s Julius Caesar, by contrast, received mixed but broadly positive reviews. But it, too, was problematic, albeit on an infinitely higher level of dramatic accomplishment. Here, the fundamental problem was that the director, Oskar Eustis, had superimposed a gratuitous gloss on Shakespeare’s play. There have been many other high-concept productions of Julius Caesar, starting with Orson Welles’s 1937 modern-dress Broadway staging, which similarly transformed Shakespeare’s play into an it-can-happen-here parable of modern-day fascism. But Eustis’s over-specific decision to turn Caesar into a broad-brush caricature of Trump hijacked the text instead of illuminating it. Rather than allowing the audience to draw its own parallels to the present situation, he pandered to its prejudices. The result was a quintessential example of the theater of concurrence, a staging that undercut its not-inconsiderable virtues by reducing the complexities of the Trump phenomenon to little more than boob-baiting by a populist vulgarian.
Darko Tresnjak committed a venial version of the same sin in his Hartford Stage revival of Shaw’s Heartbreak House (1919), which opened around the same time as Building the Wall and Julius Caesar. Written in the wake of World War I, Heartbreak House is a tragicomedy about a group of liberal bohemians who lack the willpower to reconstruct their doomed society along Shaw’s preferred socialist lines. Tresnjak’s lively but essentially traditional staging hewed to Shaw’s text in every way but one: He put a yellow Trump-style wig on Boss Mangan, the bloated, parasitical businessman who is the play’s villain. The effect was not unlike dressing a character in a play in a T-shirt with a four-letter word printed across the chest. The wig triggered a loud laugh on Mangan’s first entrance, but you were forced to keep on looking at it for the next two hours, by which time the joke had long since grown numbingly stale. It was a piece of cheap point-making unworthy of a production that was otherwise distinguished.
How might contemporary theater artists engage with the Trump phenomenon in a way that is both politically and artistically serious?
For playwrights, the obvious answer is to follow Shaw’s own example by allowing Trump (or a Trump-like character) to speak for himself in a way that is persuasive, even seductive. Shaw himself did so in Major Barbara (1905), whose central character is an arms manufacturer so engagingly urbane that he persuades his pacifist daughter to give up her position with the Salvation Army and embrace the gospel of high explosives. But the trouble with this approach is that it is hard to imagine a playwright willing to admit that Trump could be persuasive to anyone but the hated booboisie.
Then there is Lynn Nottage’s Sweat, which transferred to Broadway last March after successful runs at the Oregon Shakespeare Festival and the Public Theater. First performed in the summer of 2015, around the time that Trump announced his presidential candidacy, Sweat is an ensemble drama about a racially diverse group of unemployed steel workers in Reading, the Pennsylvania city that has become synonymous with deindustrialization. Trump is never mentioned in the play, which takes place between 2000 and 2008 and is not “political” in the ordinary sense of the word, since Nottage did not write it to persuade anyone to do anything in particular. Her purpose was simply to show how the people of Reading feel, and try to explain why they feel that way. Tightly structured and free of sermonizing, Sweat is a wholly personal drama whose broader political implications are left unsaid. Instead of putting Trump in the pillory, it takes a searching look at the lives of the people who voted for him, and it portrays them sympathetically, making a genuine good-faith attempt to understand why they chose to embrace Trumpian populism.
Sweat is a model for serious political art—artful political art, if you will. Are more such plays destined to be written about Donald Trump and his angry supporters? Perhaps, if their authors heed the wise words of Joseph Conrad: “My task which I am trying to achieve is, by the power of the written word, to make you hear, to make you feel—it is, before all, to make you see.” Only the very best artists can make political art with that kind of revelatory power. Shaw and Bertolt Brecht did it, and so has Lynn Nottage. Will Tracy Letts and Beau Willimon follow suit, or will they settle for the pandering crudities of Building the Wall? The answer to that question will tell us much about the future of political theater in the Age of Trump.
1 “Concurring with Arthur Miller” (Commentary, June 2009)