Since its first issue in 1945, COMMENTARY has published hundreds of articles about Jews and Judaism. As one would expect, they cover just about every important aspect of the topic. But there is a lacuna, and not one involving some obscure bit of Judaica. COMMENTARY has never published a systematic discussion of one of the most obvious topics of all: the extravagant overrepresentation of Jews, relative to their numbers, in the top ranks of the arts, sciences, law, medicine, finance, entrepreneurship, and the media.
I have personal experience with the reluctance of Jews to talk about Jewish accomplishment—my co-author, the late Richard Herrnstein, gently resisted the paragraphs on Jewish IQ that I insisted on putting in The Bell Curve (1994). Both history and the contemporary revival of anti-Semitism in Europe make it easy to understand the reasons for that reluctance. But Jewish accomplishment constitutes a fascinating and important story. Recent scholarship is expanding our understanding of its origins.
And so this Scots-Irish Gentile from Iowa hereby undertakes to tell the story. I cover three topics: the timing and nature of Jewish accomplishment, focusing on the arts and sciences; elevated Jewish IQ as an explanation for that accomplishment; and current theories about how the Jews acquired their elevated IQ.

From 800 B.C.E. through the first millennium of the Common Era, we have just two examples of great Jewish accomplishment, and neither falls strictly within the realms of the arts or sciences. But what a pair they are. The first is the fully realized conceptualization of monotheism, expressed through one of the literary treasures of the world, the Hebrew Bible. It not only laid the foundation for three great religions but, as Thomas Cahill describes in The Gifts of the Jews (1998), introduced a way of looking at the meaning of human life and the nature of history that defines core elements of the modern sensibility. The second achievement is not often treated as a Jewish one but clearly is: Christian theology expressed through the New Testament, an accomplishment that has spilled into every aspect of Western civilization.
But religious literature is the exception. The Jews do not appear in the annals of philosophy, drama, visual art, mathematics, or the natural sciences during the eighteen centuries from the time of Homer through the first millennium C.E., when so much was happening in Greece, China, and South Asia. It is unclear to what extent this reflects a lack of activity or the lack of a readily available record. For example, only a handful of the scientists of the Middle Ages are mentioned in most histories of science, and none was a Jew. But when George Sarton put a high-powered lens to the Middle Ages in his monumental Introduction to the History of Science (1927-48), he found that 95 of the 626 known scientists working everywhere in the world from 1150 to 1300 were Jews—15 percent of the total, far out of proportion to the Jewish population.
As it happens, that same period overlaps with the life of the most famous Jewish philosopher of medieval times, Maimonides (1135–1204), and of others less well known, not to mention the Jewish poets, grammarians, religious thinkers, scholars, physicians, and courtiers of Spain in the “Golden Age,” or the brilliant exegetes and rabbinical legislators of northern France and Germany. But this only exemplifies the difficulty of assessing Jewish intellectual activity in that period. Aside from Maimonides and a few others, these thinkers and artists did not perceptibly influence history or culture outside the confines of the Jewish world.
Generally speaking, this remained the case well into the Renaissance and beyond. When writing a book called Human Accomplishment (2003), I compiled inventories of “significant figures” in the arts and sciences, defined as people who are mentioned in at least half of the major histories of their respective fields. From 1200 to 1800, only seven Jews are among those significant figures, and only two were important enough to have names that are still widely recognized: Spinoza and Montaigne (whose mother was Jewish).

The sparse representation of Jews during the flowering of the European arts and sciences is not hard to explain. They were systematically excluded, both by legal restrictions on the occupations they could enter and by savage social discrimination. Then came legal emancipation, beginning in the late 1700’s in a few countries and completed in Western Europe by the 1870’s, and with it one of the most extraordinary stories of any ethnic group at any point in human history.
As soon as Jewish children born under legal emancipation had time to grow to adulthood, they started appearing in the first ranks of the arts and sciences. During the four decades from 1830 to 1870, when the first Jews to live under emancipation reached their forties, 16 significant Jewish figures appear. In the next four decades, from 1870 to 1910, the number jumps to 40. During the next four decades, 1910–1950, despite the contemporaneous devastation of European Jewry, the number of significant figures almost triples, to 114.
To get a sense of the density of accomplishment these numbers represent, I will focus on 1870 onward, after legal emancipation had been achieved throughout Central and Western Europe. How does the actual number of significant figures compare to what would be expected given the Jewish proportion of the European and North American population? From 1870 to 1950, Jewish representation in literature was four times the number one would expect. In music, five times. In the visual arts, five times. In biology, eight times. In chemistry, six times. In physics, nine times. In mathematics, twelve times. In philosophy, fourteen times.
Disproportionate Jewish accomplishment in the arts and sciences continues to this day. My inventories end with 1950, but many other measures are available, of which the best known is the Nobel Prize. In the first half of the 20th century, despite pervasive and continuing social discrimination against Jews throughout the Western world, despite the retraction of legal rights, and despite the Holocaust, Jews won 14 percent of Nobel Prizes in literature, chemistry, physics, and medicine/physiology. In the second half of the 20th century, when Nobel Prizes began to be awarded to people from all over the world, that figure rose to 29 percent. So far, in the 21st century, it has been 32 percent. Jews constitute about two-tenths of one percent of the world’s population. You do the math.
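For readers inclined to take up the invitation, here is a minimal back-of-the-envelope sketch in Python. It assumes nothing beyond the essay's own figures, a roughly 0.2 percent world population share and the stated Nobel shares, which are treated as exact for illustration.

```python
# Overrepresentation implied by the essay's Nobel figures, measured
# against a 0.2 percent share of the world's population. The shares
# below are taken directly from the text.
population_share = 0.002  # "about two-tenths of one percent"

periods = [
    ("first half of the 20th century", 0.14),
    ("second half of the 20th century", 0.29),
    ("21st century so far", 0.32),
]

for label, prize_share in periods:
    factor = prize_share / population_share
    print(f"{label}: {prize_share:.0%} of prizes, about {factor:.0f}x the population share")
```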

What accounts for this remarkable record? A full answer must call on many characteristics of Jewish culture, but intelligence has to be at the center of the answer. Jews have been found to have an unusually high mean intelligence as measured by IQ tests since the first Jewish samples were tested. (The widely repeated story that Jewish immigrants to this country in the early 20th century tested low on IQ is a canard.) Exactly how high has been difficult to pin down, because Jewish sub-samples in the available surveys are seldom perfectly representative. But it is currently accepted that the mean is somewhere in the range of 107 to 115, with 110 being a plausible compromise.
The IQ mean for the American population is “normed” to be 100, with a standard deviation of 15. If the Jewish mean is 110, then the mathematics of the normal distribution says that the average Jew is at the 75th percentile. Underlying that mean in overall IQ is a consistent pattern on IQ subtests: Jews are only about average on the subtests measuring visuo-spatial skills, but extremely high on subtests that measure verbal and reasoning skills.
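The percentile claim is easy to check. The following is a minimal sketch, assuming only the figures just given (a group mean of 110 on a scale normed to mean 100 with a standard deviation of 15) and using nothing beyond the Python standard library:

```python
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative probability for a normal distribution, via the error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(-z / math.sqrt(2))

# A person at the hypothesized group mean of 110, on a scale normed to
# mean 100 with SD 15, sits at roughly the 75th percentile:
print(f"{normal_cdf(110):.1%}")  # ~74.8%
```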
A group’s mean intelligence is important in explaining outcomes such as mean educational attainment or mean income. The key indicator for predicting exceptional accomplishment (like winning a Nobel Prize) is the incidence of exceptional intelligence. Consider an IQ score of 140 or higher, denoting the level of intelligence that can permit people to excel in fields like theoretical physics and pure mathematics. If the mean Jewish IQ is 110 and the standard deviation is 15, then the proportion of Jews with IQ’s of 140 or higher is somewhere around six times the proportion of everyone else.
The imbalance continues to increase for still higher IQ’s. New York City’s public-school system used to administer a pencil-and-paper IQ test to its entire school population. In 1954, a psychologist used those test results to identify all 28 children in the New York public-school system with measured IQ’s of 170 or higher. Of those 28, 24 were Jews.
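The same arithmetic covers the tail claims in the two preceding paragraphs. The sketch below assumes strictly normal distributions with means of 110 and 100 and a common standard deviation of 15; normality that far into the tail is a strong assumption, but under it the ratios come out close to the essay's figures:

```python
import math

def tail_fraction(threshold, mean, sd=15.0):
    """Fraction of a normal distribution scoring at or above `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

for iq in (140, 170):
    ratio = tail_fraction(iq, mean=110) / tail_fraction(iq, mean=100)
    print(f"IQ {iq}+: mean-110 group overrepresented about {ratio:.0f}x")
# IQ 140+: ~6x, matching the essay; IQ 170+: ~21x, illustrating why the
# imbalance widens at ever-higher thresholds.
```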
Exceptional intelligence is not enough to explain exceptional accomplishment. Qualities such as imagination, ambition, perseverance, and curiosity are decisive in separating the merely smart from the highly productive. The role of intelligence is nicely expressed in an analogy suggested to me years ago by the sociologist Steven Goldberg: intelligence plays the same role in an intellectually demanding task that weight plays in the performance of NFL offensive tackles. The heaviest offensive tackle is not necessarily the best. Indeed, the correlation between weight and performance among NFL offensive tackles is probably quite low. But they all weigh more than 300 pounds.
So with intelligence. The other things count, but you must be very smart to have even a chance of achieving great work. A randomly selected Jew has a higher probability of possessing that level of intelligence than a randomly selected member of any other ethnic or national group, by far.

Nothing that I have presented up to this point is scientifically controversial. The profile of disproportionately high Jewish accomplishment in the arts and sciences since the 18th century, the reality of elevated Jewish IQ, and the connection between the two are not to be denied by means of data. And so we come to the great question: how and when did this elevated Jewish IQ come about? Here, the discussion must become speculative. Geneticists and historians are still assembling the pieces of the explanation, and there is much room for disagreement.
I begin with the assumption that elevated Jewish intelligence is grounded in genetics. It is no longer seriously disputed that intelligence in Homo sapiens is substantially heritable. In the last two decades, it has also been established that obvious environmental factors such as high income, books in the house, and parental reading to children are not as potent as one might expect. A “good enough” environment is important for the nurture of intellectual potential, but the requirements for “good enough” are not high. Even the very best home environments add only a few points, if that, to a merely okay environment. It is also known that children adopted at birth do not achieve the IQ’s predicted by their adoptive parents’ IQ.
To put it another way, we have good reason to think that Gentile children raised in Jewish families do not acquire Jewish intelligence. Hence my view that something in the genes explains elevated Jewish IQ. That conclusion is not logically necessary but, given what we know about heritability and environmental effects on intelligence in humans as a species, it is extremely plausible.
Two potential explanations for a Jewish gene pool favoring high intelligence are so obvious that many people assume they must be true: winnowing by persecution (only the smartest Jews either survived or remained Jews) and marrying for brains (scholars and children of scholars were socially desirable spouses). I too think that both of these must have played some role, but how much of a role is open to question.
In the case of winnowing through persecution, the logic cuts both ways. Yes, those who remained faithful during the many persecutions of the Jews were self-selected for commitment to Judaism, and the role of scholarship in that commitment probably means that intelligence was one of the factors in self-selection. The foresight that goes with intelligence might also have had some survival value (as in anticipating pogroms), though it is not obvious that its effect would be large enough to explain much.
But once the Cossacks are sweeping through town, the kind of intelligence that leads to business success or rabbinical acumen is no help at all. On the contrary, the most successful people could easily have become the most likely to be killed, by virtue of being more visible and the targets of greater envy. Furthermore, other groups, such as the Gypsies, have been persecuted for centuries without developing elevated intelligence. Considered closely, the winnowing-by-persecution logic is not as compelling as it may first appear.
What of the marrying-for-brains theory? “A man should sell all he possesses in order to marry the daughter of a scholar, as well as to marry his daughter to a scholar,” advises the Talmud (Pesahim 49a), and scholarship did in fact have social cachet within many Jewish communities before (and after) emancipation. The combination could have been potent: by marrying the children of scholars to the children of successful merchants, Jews were in effect joining those selected for abstract reasoning ability with those selected for practical intelligence.
Once again, however, it is difficult to be more specific about how much effect this might have had. Arguments have been advanced that rich merchants were in fact often reluctant to entrust their daughters to penniless and unworldly scholars. Nor is it clear that the fertility rate of scholars, or their numbers, were high enough to account for a major effect on intelligence. The attractiveness of brains in prospective marriage partners surely played some role but, once again, the data for assessing how much have not been assembled.

Against this backdrop of uncertainty, a data-driven theory for explaining elevated Jewish IQ appeared in 2006 in the Journal of Biosocial Science. In an article entitled “Natural History of Ashkenazi Intelligence,” Gregory Cochran (a physicist) and Jason Hardy and Henry Harpending (anthropologists) contend that elevated Jewish IQ is confined to the Ashkenazi Jews of northern and central Europe, and developed from the Middle Ages onward, primarily from 800 to 1600 C.E.
In the analysis of these authors, the key factor explaining elevated Jewish intelligence is occupational selection. From the time Jews became established north of the Pyrenees-Balkans line, around 800 C.E., they were in most places and at most times restricted to occupations involving sales, finance, and trade. Economic success in all of these occupations is far more highly selected for intelligence than success in the chief occupation of non-Jews: namely, farming. Economic success is in turn related to reproductive success, because higher income means lower infant mortality, better nutrition, and, more generally, reproductive “fitness.” Over time, increased fitness among the successful leads to strong selection for the cognitive and psychological traits that produce that fitness, intensified when there is a low inward gene flow from other populations—as was the case with Ashkenazim.
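To make that dynamic concrete, here is a toy sketch using the standard breeder's equation, R = h²S (response to selection equals heritability times the selection differential). The parameter values are illustrative assumptions of mine, not estimates drawn from Cochran, Hardy, and Harpending:

```python
# Toy illustration of sustained selection on a heritable trait, via the
# breeder's equation R = h2 * S. All parameter values are illustrative
# assumptions, not estimates from the paper.
h2 = 0.3                          # assumed narrow-sense heritability of IQ
S = 1.0                           # assumed selection differential per generation, in IQ points
generations = (1600 - 800) // 25  # roughly 800-1600 C.E., at 25 years per generation

mean_iq = 100.0
for _ in range(generations):
    mean_iq += h2 * S             # per-generation response to selection

print(f"after {generations} generations: mean IQ ~ {mean_iq:.0f}")  # ~110
```

Under these assumed values, eight centuries of modest but consistent selection pressure would suffice to move a group mean from 100 to about 110.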
Sephardi and Oriental Jews—i.e., those from the Iberian peninsula, the Mediterranean littoral, and the Islamic East—were also engaged in urban occupations during the same centuries. But the authors cite evidence that, as a rule, they were less concentrated in occupations that selected for IQ and instead more commonly worked in craft trades. Thus, elevated intelligence did not develop among Sephardi and Oriental Jews—as manifested by contemporary test results in Israel that show the IQ’s of non-European Jews to be roughly similar to the IQ’s of Gentiles.
The three authors conclude this part of their argument with an elegant corollary that matches the known test profiles of today’s Ashkenazim with the historical experience of their ancestors:
The suggested selective process explains the pattern of mental abilities in Ashkenazi Jews: high verbal and mathematical ability but relatively low spatio-visual ability. Verbal and mathematical talent helped medieval businessmen succeed, while spatio-visual abilities were irrelevant.
The rest of their presentation is a lengthy and technical discussion of the genetics of selection for IQ, indirect evidence linking elevated Jewish IQ with a variety of genetically based diseases found among Ashkenazim, and evidence that most of these selection effects have occurred within the last 1,200 years.

No one has yet presented an alternative to the Cochran-Hardy-Harpending theory that can match it for documentation. But, as someone who suspects that elevated Jewish intelligence was (a) not confined to Ashkenazim and (b) antedates the Middle Ages, I will outline the strands of an alternative explanation that should be explored.
It begins with evidence that Jews who remained in the Islamic world exhibited unusually high levels of accomplishment as of the beginning of the second millennium. The hardest evidence is Sarton’s enumeration of scientists mentioned earlier, of whom 15 percent were Jews. These were not Ashkenazim in northern Europe, where Jews were still largely excluded from the world of scientific scholarship, but Sephardim in the Iberian peninsula, in Baghdad, and in other Islamic centers of learning. I have also mentioned the more diffuse cultural evidence from Spain, where, under both Muslim and Christian rule, Jews attained eminent positions in the professions, commerce, and government as well as in elite literary and intellectual circles.
After being expelled from Spain at the end of the 15th century, Sephardi Jews rose to distinction in many of the countries where they settled. Some economic historians have traced the decline of Spain after 1500, and the subsequent rise of the Netherlands, in part to the Sephardi commercial talent that was transferred from the one to the other. Centuries later, in England, one could point to such Sephardi eminences as Benjamin Disraeli and the economist David Ricardo.
In sum, I propose that a strong case could be assembled that Jews everywhere possessed unusually high intellectual resources that manifested themselves outside of Ashkenaz, and did so well before non-rabbinic Ashkenazi accomplishment emerged.
How is this case to be sustained in the face of contemporary test data indicating that non-Ashkenazi Jews do not have the elevated mean of today’s Ashkenazim? The logical inconsistency disappears if one posits that Jews circa 1000 C.E. had elevated intelligence everywhere, but that it subsequently was augmented still further among Ashkenazim and declined for Jews living in the Islamic world—perhaps because of the dynamics described by Cochran, Hardy, and Harpending (that is, Oriental Jews were concentrated in trades for which high intelligence did not yield wealth).
Recent advances in the use of genetic markers to characterize populations enable us to pursue such possibilities systematically. I offer this testable hypothesis as just one of many: if genetic markers are used to discriminate among non-Ashkenazi Jews, it will be found that those who are closest genetically to the Sephardim of Golden Age Spain have an elevated mean IQ, though perhaps not so high as the contemporary Ashkenazi IQ.

The next strand of an alternative to the Cochran-Hardy-Harpending theory involves reasons for thinking that some of the elevation of Jewish intelligence occurred even before Jews moved into occupations selected for intelligence, because of the shift in ancient Judaism from a rite-based to a learning-based religion.
All scholars who have examined the topic agree that about 80–90 percent of all Jews were farmers at the beginning of the Common Era, and that only about 10–20 percent of Jews were farmers by the end of the first millennium. No other ethnic group underwent this same kind of occupational shift. For the story of why this happened, I turn to a discussion by Maristella Botticini and Zvi Eckstein entitled “Jewish Occupational Selection: Education, Restrictions, or Minorities?” which appeared in the Journal of Economic History in 2005.
Rejecting the explanation that Jews became merchants because they were restricted from farming, Botticini and Eckstein point to cases in which Jews who were free to own land and engage in agriculture made the same shift to urban, skilled occupations that Jews exhibited where restrictions were in force. Instead, they focus on an event that occurred in 64 C.E., when the Palestinian sage Joshua ben Gamla issued an ordinance mandating universal schooling for all males starting at about age six. The ordinance was not only issued; it was implemented. Within about a century, the Jews, uniquely among the peoples of the world, had effectively established universal male literacy and numeracy.
The authors’ explanation for the subsequent shift from farming to urban occupations reduces to this: if you were educated, you possessed an asset that had economic value in occupations that required literacy and numeracy, such as those involving sales and transactions. If you remained a farmer, your education had little or no value. Over the centuries, this basic economic reality led Jews to leave farming and engage in urban occupations.
So far, Botticini and Eckstein have provided an explanatory backdrop to the shift in occupations that in turn produced the selection pressures for intelligence described by Cochran, Hardy, and Harpending. But selection pressure in this classic form was probably not the only force at work. Between the 1st and 6th centuries C.E., the number of Jews in the world plummeted from about 4.5 million to 1.5 million or fewer. About 1 million Jews were killed in the revolts against the Romans in Judea and Egypt. There were scattered forced conversions from Judaism to another religion. Some of the reduction may be associated with a general drop in population that accompanied the decline and fall of the Roman Empire. But that still leaves a huge number of Jews who just disappeared.
What happened to them? Botticini and Eckstein argue that an economic force was at work: for Jews who remained farmers, universal education involved a cost that had little economic benefit. As time went on, they drifted away from Judaism. I am sure this explanation has some merit. But a more direct explanation could involve the increased intellectual demands of Judaism.
Joshua ben Gamla’s ordinance mandating literacy occurred at about the same time as the destruction of the Second Temple—64 C.E. and 70 C.E., respectively. Both mark the moment when Judaism began actively to transform itself from a religion centered on rites and sacrifices at the Temple in Jerusalem to a religion centered on prayer and the study of the Torah at decentralized synagogues and study houses. Rabbis and scholars took on a much larger role as leaders of local communities. Since worship of God involved not only prayer but study, all Jewish males had to read if they were to practice their faith—and not only read in private but be able to read aloud in the presence of others.
In this context, consider the intellectual requirements of literacy. People with modest intelligence can become functionally literate, but they are able to read only simple texts. The Torah and the Hebrew prayer book are not simple texts; even to be able to read them mechanically requires fairly advanced literacy. To study the Talmud and its commentaries with any understanding requires considerable intellectual capacity. In short, during the centuries after Rome’s destruction of the Temple, Judaism evolved in such a way that to be a good Jew meant that a man had to be smart.
What happened to the millions of Jews who disappeared? It is not necessary to maintain that Jews of low intelligence were run out of town because they could not read the Torah and commentaries fluently. Rather, few people enjoy being in a position where their inadequacies are constantly highlighted. It is human nature to withdraw from such situations. I suggest that the Jews who fell away from Judaism from the 1st to 6th centuries C.E. were heavily concentrated among those who could not learn to read well enough to be good Jews—meaning those from the lower half of the intelligence distribution. Even before the selection pressures arising from urban occupations began to have an effect, I am arguing, the remaining self-identified Jews circa 800 C.E. already had elevated intelligence.
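A brief calculation shows how powerful such attrition would be, if taken literally. The sketch below assumes a normal distribution with mean 100 and standard deviation 15 and asks what the mean of the surviving upper half would be; the essay does not quantify this, and any gain would partly regress in later generations to the extent the trait is less than fully heritable:

```python
import math

# If those "from the lower half of the intelligence distribution" fell
# away, the mean of the remaining upper half of a normal(100, 15)
# population would be mu + sigma * phi(0) / 0.5, where phi is the
# standard normal density. Purely illustrative.
mu, sigma = 100.0, 15.0
phi0 = 1.0 / math.sqrt(2 * math.pi)   # standard normal density at 0
truncated_mean = mu + sigma * phi0 / 0.5
print(f"mean of surviving upper half: ~{truncated_mean:.0f}")  # ~112
```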

A loose end remains. Is it the case that, before the 1st century C.E., Jews were intellectually ordinary? Are we to believe that the Bible, a work compiled over centuries and incorporating everything from brilliant poetry to profound ethics, with stories that speak so eloquently to the human condition that they have inspired great art, music, and literature for millennia, was produced by an intellectually run-of-the-mill Levantine tribe?
In The Evolution of Man and Society (1969), the geneticist Cyril Darlington presented the thesis that Jews and Judaism were decisively shaped much earlier than the 1st century C.E., namely, by the Babylonian captivity that began with the fall of Jerusalem to the forces of Nebuchadnezzar in 586 B.C.E.
Darlington’s analysis touches on many issues, but I will focus on just the intelligence question. The biblical account clearly states that only a select group of Jews were taken to Babylon. We read that Nebuchadnezzar “carried into exile all Jerusalem: all the officers and fighting men, and all the craftsmen and artisans. . . . Only the poorest people of the land were left” (2 Kings 24:14).
In effect, the Babylonians took away the Jewish elites, selected in part for high intelligence, and left behind the poor and unskilled, selected in part for low intelligence. By the time the exiles returned, more than a century later, many of those remaining behind in Judah had been absorbed into other religions. Following Ezra’s command to “separate yourselves from the peoples around you and from your foreign wives” (Ezra 10:11), only those who renounced their foreign wives and children were permitted to stay within the group. The returned exiles, who formed the bulk of the reconstituted Jewish community, comprised mainly the descendants of the Jewish elites—plausibly a far more able population, on average, than the pre-captivity population.
I offer the Babylonian captivity as a concrete mechanism whereby Jewish intelligence may have been elevated very early, but I am not wedded to it. Even without that mechanism, there is reason to think that selection for intelligence antedates the 1st century C.E.
From its very outset, apparently going back to the time of Moses, Judaism was intertwined with intellectual complexity. Jews were commanded by God to heed the law, which meant they had to learn the law. The law was so extensive and complicated that this process of learning and reviewing was never complete. Moreover, Jewish males were not free to pretend that they had learned the law, for fathers were commanded to teach the law to their children. It became obvious to all when fathers failed in their duty. No other religion made so many intellectual demands upon the whole body of its believers. Long before Joshua ben Gamla and the destruction of the Second Temple, the requirements for being a good Jew had provided incentives for the less intelligent to fall away.
Assessing the events of the 1st century C.E. thus poses a chicken-and-egg problem. By way of an analogy, consider written Chinese with its thousands of unique characters. On cognitive tests, today’s Chinese do especially well on visuo-spatial skills. It is possible, I suppose, that their high visuo-spatial skills have been fostered by having to learn written Chinese; but I find it much more plausible that only people who already possessed high visuo-spatial skills would ever devise such a ferociously difficult written language. Similarly, I suppose it is possible that the Jews’ high verbal skills were fostered, through secondary and tertiary effects, by the requirement that they be able to read and understand complicated texts after the 1st century C.E.; but I find it much more plausible that only people who already possessed high verbal skills would dream of installing such a demanding requirement.
This reasoning pushes me even farther into the realm of speculation. Insofar as I am suggesting that the Jews may have had some degree of unusual verbal skills going back to the time of Moses, I am naked before the evolutionary psychologists’ ultimate challenge. Why should one particular tribe at the time of Moses, living in the same environment as other nomadic and agricultural peoples of the Middle East, have already evolved elevated intelligence when the others did not?
At this point, I take sanctuary in my remaining hypothesis, uniquely parsimonious and happily irrefutable. The Jews are God’s chosen people.
The Reckoning
Sex and Work in an Age Without Norms
Christine Rosen 2017-12-14
In the Beginning Was the ‘Hostile Work Environment’
In 1979, the feminist legal thinker Catharine MacKinnon published a book called Sexual Harassment of Working Women. Her goal was to convince the public (especially the courts) that harassment was a serious problem affecting all women whether or not they had been harassed, and that it was discriminatory. “The factors that explain and comprise the experience of sexual harassment characterize all women’s situation in one way or another, not only that of direct victims of the practice,” MacKinnon wrote. “It is this level of commonality that makes sexual harassment a women’s experience, not merely an experience of a series of individuals who happen to be of the female sex.” MacKinnon was not only making a case against clear-cut instances of harassment, but also arguing that the ordinary social dynamic between men and women itself created what she called “hostile work environments.”
The culture was ripe for such arguments. Bourgeois norms of sexual behavior had been eroding for at least a decade, a fact many on the left hailed as evidence of the dawn of a new age of sexual and social freedom. At the same time, however, a Redbook magazine survey published a few years before MacKinnon’s book found that nearly 90 percent of the female respondents had experienced some form of harassment on the job.
MacKinnon’s views might have been radical—she argued for a Marxist feminist jurisprudence reflecting her belief that sexual relations are hopelessly mired in male dominance and female submission—but she wasn’t entirely wrong. The postwar America in which women like MacKinnon came of age offered few opportunities for female agency, and the popular culture of the day reinforced the idea that women were all but incapable of it.
It wasn’t just the perfect housewives in the midcentury mold of Donna Reed and June Cleaver who “donned their domestic harness,” as the historian Elaine Tyler May wrote in her social history Homeward Bound. Popular magazines such as Good Housekeeping, McCall’s, and Redbook reinforced the message; so did their advertisers. A 1955 issue of Family Circle featured an advertisement for Tide detergent that depicted a woman with a rapturous expression on her face actually hugging a box of Tide under the line: “No wonder you women buy more Tide than any other washday product! Tide’s got what women want!” Other advertisements infantilized women by suggesting they were incapable of making basic decisions. “You mean a -woman can open it?” ran one for Alcoa aluminum bottle caps. It is almost impossible to read the articles or view the ads without thinking they were some kind of put-on.
The competing view of women in the postwar era was equally pernicious: the objectified pinup or sexpot. Marilyn Monroe’s hypersexualized character in The Seven Year Itch from 1955 doesn’t even have a name—she’s simply called The Girl. The 1956 film introducing the pulchritudinous Jayne Mansfield to the world was called The Girl Can’t Help It. The behavior of Rat Pack–era men has now been so airbrushed and glamorized that we’ve forgotten just how thoroughly debased their treatment of women was. Even as we thrill to Frank Sinatra’s “nice ’n’ easy” style, we overlook the classic Sinatra movie character’s enjoying an endless stream of showgirls and (barely disguised) prostitutes until forced to settle down with a killjoy ball-and-chain girlfriend. The depiction of women either as childish wives living under the protection of their husbands or brainless sirens sexually available to the first taker was undoubtedly vulgar, but it reflected a reality about the domestic arrangements of Americans after 1945 that was due for a profound revision when the 1960s came along.
And change they did, with a vengeance. The sexual revolution broke down the barriers between the sexes as the women’s-liberation movement insisted that bourgeois domesticity was a prison. The rules melted away, but attitudes don’t melt so readily; Sinatra’s ball-and-chain may have disappeared by common consent, but for a long time it seemed that the kooky sexpot of the most chauvinistic fantasy had simply become the ideal American woman. The distinction between the workplaces of the upper middle class and the singles bars where they sought companionship was pretty blurred.
Which is where MacKinnon came in—although if we look back at it, her objection seems not Marxist in orientation but almost Victorian. She described a workplace in which women were unprotected by old-fashioned social norms against adultery and general caddishness and found themselves mired in a “hostile environment.” She named the problem; it fell to the feminist movement as a whole to enshrine protections against it. They had some success. In 1986, the U.S. Supreme Court embraced elements of MacKinnon’s reasoning when it ruled unanimously in Meritor Savings Bank v. Vinson that harassment that was “sufficiently severe or pervasive” enough to create “a hostile or abusive work environment” was a violation of Title VII of the Civil Rights Act of 1964. The U.S. Equal Employment Opportunity Commission issued rules advising employers to create procedures to combat harassment, and employers followed suit by establishing sexual-harassment policies. Human-resource departments spent countless hours and many millions of dollars on sexual-harassment-awareness training for employees.
With new regulations and enforcement mechanisms, the argument went, the final, fusty traces of patriarchal, protective norms and bad behavior would be swept away in favor of rational legal rules that would ensure equal protection for women in the workplace. The culture might still objectify women, but our legal and employment systems would, in fits and starts, erect scaffolding upon which women who were harassed could seek justice.
But as the growing list of present-day harassers and predators attests—Harvey Weinstein, Louis C.K., Charlie Rose, Michael Oreskes, Glenn Thrush, Mark Halperin, John Conyers, Al Franken, Roy Moore, Matt Lauer, Garrison Keillor, et al.—the system appears to have failed the people it was meant to protect. There were searing moments that raised popular awareness about sexual harassment: (Anita Hill’s testimony about U.S. Supreme Court nominee Clarence Thomas in 1991; Senator Bob Packwood’s ouster for serial groping in 1995). There was, however, still plenty of space for men who harassed and assaulted women (and, in Kevin Spacey’s case, men) to shelter in place.
This wasn’t supposed to happen. Why did it?
Sex and Training
What makes sexual harassment so unnerving is not the harassment. It’s the sex—a subject, even a half-century into our so-called sexual revolution, about which we remain deeply confused.
The challenge going forward, now that the Hollywood honcho Weinstein and other notoriously lascivious beneficiaries of the liberation era have been removed, is how to negotiate the rules of attraction and punish predators in a culture that no longer embraces accepted norms for sexual behavior. Who sets the rules, and how do we enforce them? The self-appointed guardians of that galaxy used to be the feminist movement, but it is in no position to play that role today as it reckons not only with the gropers in its midst (Franken) but the ghosts of gropers past (Bill Clinton).
The feminist movement long ago traded MacKinnon’s radical feminism for political expedience. In 1992 and 1998, when her husband was a presidential candidate and then president, Hillary Clinton covered for Bill, enthusiastically slut-shaming his accusers. Her sin was and is at least understandable, if not excusable, given that the two are married. But what about America’s most glamorous early feminist, Gloria Steinem? In 1998, Steinem wrote of Clinton accuser Kathleen Willey: “The truth is that even if the allegations are true, the President is not guilty of sexual harassment. He is accused of having made a gross, dumb and reckless pass at a supporter during a low point in her life. She pushed him away, she said, and it never happened again. In other words, President Clinton took ‘no’ for an answer.” As for Monica Lewinsky, Steinem didn’t even consider the president’s behavior with a young intern to be harassment: “Welcome sexual behavior is about as relevant to sexual harassment as borrowing a car is to stealing one.”
The consequences of applying to Clinton what Steinem herself called the “one-free-grope” rule are only now becoming fully visible. Even in the case of a predator as malevolent as Weinstein, it’s clear that feminists no longer have a shared moral language or the credibility with which to condemn such behavior. Having tied their movement’s fortunes to political power, especially the Democratic Party, it is difficult to take seriously their injunctions about male behavior on either side of the aisle now (just as it was difficult to take seriously partisans on the right who defended the Alabama Senate candidate and credibly accused child sexual predator Roy Moore). Democrat Nancy Pelosi’s initial hemming and hawing about denouncing accused sexual harasser Representative John Conyers was disappointing but not surprising. As for Steinem, she’s gone from posing undercover as a Playboy bunny in order to expose male vice to sitting on the board of Playboy’s true heir, VICE Media, an organization whose bro-culture has spawned many sexual-harassment complaints. She’s been honored by Rutgers University, which created the Gloria Steinem Chair in Media, Culture, and Feminist Studies. One of the chair’s major endowers? Harvey Weinstein.
In place of older accepted norms or trusted moral arbiters, we have weaponized gossip. “S—-y Media Men” is a Google spreadsheet created by a woman who works in media and who, in the wake of the Weinstein revelations, wanted to encourage other women to name the gropers among us. At first a well-intentioned effort to warn women informally about men who had behaved badly, it quickly devolved into an anonymous unverified online litany of horribles devoid of context. The men named on the list were accused of everything from sending clumsy text messages to rape; Jia Tolentino of the New Yorker confessed that she didn’t believe the charges lodged against a male friend of hers who appeared on the list.
Others have found sisterhood and catharsis on social media, where, on Twitter, the phrase #MeToo quickly became the symbol for women’s shared experiences of harassment or assault. Like the consciousness-raising sessions of earlier eras, the hashtag supposedly demonstrated the strength of women supporting other women. But unlike in earlier eras, it led not to group hugs over readings of The Feminine Mystique, but to a brutally efficient form of insta-justice meted out on an almost daily basis against the accused. Writing in the Guardian, Jessica Valenti praised #MeToo for encouraging women to tell their stories but added, “Why have a list of victims when a list of perpetrators could be so much more useful?” Valenti encouraged women to start using the hashtag as a way to out predators, not merely to bond with one another. Even the New York Times has gone all-in on the assumption that the reckoning will continue: The newspaper’s “gender editor,” Jessica Bennett, launched a newsletter, The #MeToo Moment, described as “the latest news and insights on the sexual harassment and misconduct scandals roiling our society.”
As the also-popular hashtag #OpenSecret suggests, this #MeToo moment has brought with it troubling questions about who knew what and when—and a great deal of anger at gatekeepers and institutions that might have turned a blind eye to predators. The backlash against the Metropolitan Opera in New York is only the most recent example. Reports of conductor James Levine’s molestation of teenagers have evidently been widespread in the classical-music world for decades. And, as many social-media users hinted with their use of the hashtag #itscoming, Levine is not the only one who will face a reckoning.
To be sure, questioning and catharsis are welcome if they spark reforms such as crackdowns on the court-approved payoffs and nondisclosure agreements that allowed sexual predators like Weinstein to roam free for so long. And they have also brought a long-overdue recognition of the ineffectiveness of so much of what passes for sexual-harassment-prevention training in the workplace. As the law professor Lauren Edelman noted in the Washington Post, “There have been only a handful of empirical studies of sexual-harassment training, and the research has not established that such training is effective. Some studies suggest that training may in fact backfire, reinforcing gendered stereotypes that place women at a disadvantage.” One specific survey at a university found that “men who participated in the training were less likely to view coercion of a subordinate as sexual harassment, less willing to report harassment and more inclined to blame the victim than were women or men who had not gone through the training.”
Realistic Change vs. Impossible Revolution
Because harassment lies at the intersection of law, politics, ideology, and culture, attempts to re-regulate behavior, either by returning to older, more traditional norms, or by weaponizing women’s potential victimhood via Twitter, won’t work. America is throwing the book at foul old violators like Weinstein and Levine, but aside from warning future violators that they may be subject to horrible public humiliation and ruination, how is all this going to fix the problem?
We are a long way from Phyllis Schlafly’s ridiculous remark, made years ago during a U.S. Senate committee hearing, that “virtuous women are seldom accosted,” but Vice President Mike Pence’s rule about avoiding one-on-one social interactions with women who aren’t his wife doesn’t really scale up in terms of effective policy in the workplace, either. The Pence Rule, like corporate H.R. policies about sexual harassment, really exists to protect Pence from liability, not to protect women.
Indeed, the possibility of realistic change is made almost moot by the hysterical ambitions of those who believe they are on the verge of bringing down the edifice of American masculinity the way the Germans brought down the Berlin wall. Bennett of the Times spoke for many when she wrote in her description of the #MeToo newsletter: “The new conversation goes way beyond the workplace to sweep in street harassment, rape culture, and ‘toxic masculinity’—terminology that would have been confined to gender studies classes, not found in mainstream newspapers, not so long ago.”
Do women need protection? Since the rise of the feminist movement, it has been considered unacceptable to declare that women are weaker than men (even physically), yet, as many of these recent assault cases make clear, this is a plain fact. Men are, on average, physically larger and more aggressive than women; this is why for centuries social codes existed to protect women who were, by and large, less powerful, more vulnerable members of society.
MacKinnon’s definition of harassment at first seemed to acknowledge such differences; she described harassment as “dominance eroticized.” But like all good feminist theorists, she claimed this dominance was socially constructed rather than biological—“the legally relevant content of the term sex, understood as gender difference, should focus upon its social meaning more than upon any biological givens,” she wrote. As such, the reasoning went, men’s socially constructed dominance could be socially deconstructed through reeducation, training, and the like.
Culturally, this is the view that now prevails, which is why we pinball between arguing that women can do anything men can do and worrying that women are all the potential victims of predatory, toxic men. So which is it? Girl Power or the Fainting Couch?
Regardless, when harassment or assault claims arise, the cultural assumptions that feminism has successfully cultivated demand we accept that women are right and men are wrong (hence the insistence that we must believe every woman’s claim about harassment and assault, and the calling out of those who question a woman’s accusation). This gives women—who are, after all, flawed human beings just like men—too much accusatory power in situations where context is often crucial for understanding what transpired. Feminists with a historical memory should recall how they embraced this view after mandatory-arrest laws for partner violence that were passed in the 1990s netted many women for physically assaulting their partners. Many feminist legal scholars at the time argued that such laws were unfair to women precisely because they neglected context. (“By following the letter of the law… law enforcement officers often disregard the context in which victims of violence resort to using violence themselves,” wrote Susan L. Miller in the Violence Against Women journal in 2001.)
Worse, the unquestioned valorization of women’s claims leaves men in the position of being presumed guilty unless proven innocent. Consider a recent tweet by Washington Post reporter and young-adult author Monica Hesse in response to New York Times reporter Farhad Manjoo’s self-indulgent lament. Manjoo: “I am at the point where i seriously, sincerely wonder how all women don’t regard all men as monsters to be constantly feared. the real world turns out to be a legit horror movie that I inhabited and knew nothing about.”
Hesse’s answer: “Surprise! The answer is that we do, and we must, regard all men as potential monsters to be feared. That’s why we cross to the other side of the street at night, and why we sometimes obey when men say ‘Smile, honey!’ We are always aware the alternative could be death.” This isn’t hyperbole in her case; Hesse has so thoroughly internalized the message that men are to be feared, not trusted, that she thinks one might kill her on the street if she doesn’t smile at him. Such illogic makes the Victorian neurasthenics look like the Valkyrie.
But while most reasonable people agree that women and men both need to take responsibility for themselves and exercise good judgment, what this looks like in practice is not going to be perfectly fair, given the differences between men and women when it comes to sexual behavior. In her book, MacKinnon observed of sexual harassment, “Tacitly, it has been both acceptable and taboo; acceptable for men to do, taboo for women to confront, even to themselves.”
That’s one thing we can say for certain is no longer true. Nevertheless, if you begin with the assumption that every sexual invitation is a power play or the prelude to an assault, you are likely to find enemies lurking everywhere. As Hesse wrote in the Washington Post about male behavior: “It’s about the rot that we didn’t want to see, that we shoveled into the garbage disposal of America for years. Some of the rot might have once been a carrot and some it might have once been a moldy piece of rape-steak, but it’s all fetid and horrific and now, and it’s all coming up at once. How do we deal with it? Prison for everyone? Firing for some? …We’re only asking for the entire universe to change. That’s all.”
But women are part of that “entire universe,” too, and it is incumbent on them to make it clear when someone has crossed the line. Both women and men would be better served if they adopted the same rule—“If you see something, say something”—when it comes to harassment. Among the many details that emerged from the recent exposé at Vox about New York Times reporter Glenn Thrush was the setting for the supposedly egregious behavior: It was always after work and after several drinks at a bar. In all of the interactions described, one or usually both of the parties was tipsy or drunk; the women always agreed to go with Thrush to another location. The women also stayed on good terms with Thrush after he made his often-sloppy passes at them, in one case sending friendly text messages and ensuring him he didn’t need to apologize for his behavior. The Vox writer, who herself claims to have been victimized by Thrush, argues, “Thrush, just by his stature, put women in a position of feeling they had to suck up and move on from an uncomfortable encounter.” Perhaps. But he didn’t put them in the position of getting drunk after work with him. They put themselves in that position.
Also, as the Thrush story reveals, women sometimes use sexual appeal and banter for their own benefit in the workplace. If we want to clarify the blurred lines that exist around workplace relationships, then we will have to reckon with the women who have successfully exploited them for their own advantage.
None of this means women should be held responsible when men behave badly or illegally. But it puts male behavior in the proper context. Sometimes, things really are just about sex, not power. As New York Times columnist Ross Douthat bluntly noted in a recent debate in New York magazine with feminist Rebecca Traister, “I think women shouldn’t underestimate the extent to which male sexual desire is distinctive and strange and (to women) irrational-seeming. Saying ‘It’s power, not sex’ excludes too much.”
Social-Media Justice or Restorative Justice?
What do we want to happen? Do we want social-media justice or restorative justice for harassers and predators? The first is immediate, cathartic, and brutal, with little consideration for nuance or presumed innocence for the accused. The second is more painstaking because it requires reaching some kind of consensus about the allegations, but it is also ultimately less destructive of the community and culture as a whole.
Social-media justice deploys the powerful force of shame at the mere whiff of transgression, so as to create a regime of prevention. The thing is, Americans don’t really like shame (the sexual revolution taught us that). Our therapeutic age doesn’t think that suppressing emotions and inhibiting feelings—especially about sex—is “healthy.” So either we will have to embrace the instant and unreflective emotiveness of #MeToo culture and accept that its rough justice is better than no justice at all—or we will have to stop overreacting every time a man does something that is untoward—like sending a single, creepy text message—but not actually illegal (like assault or constant harassment).
After all, it’s not all bad news from the land of masculinity. Rates of sexual violence have fallen 63 percent since 1993, according to statistics from the Rape, Abuse, and Incest National Network, and as scholar Steven Pinker recently observed: “Despite recent attention, workplace sexual harassment has declined over time: from 6.1 percent of GSS [General Social Survey] respondents in 2002 to 3.6 percent in 2014. Too high, but there’s been progress, which can continue.”
Still, many men have taken this cultural moment as an opportunity to reflect on their own understanding of masculinity. In the New York Times, essayist Stephen Marche fretted about the “unexamined brutality of the male libido” and echoed Catharine MacKinnon when he asked, “How can healthy sexuality ever occur in conditions in which men and women are not equal?” He would have done better to ask how we can raise boys who will become men who behave honorably toward women. And how do we even raise boys to become honorable men in a culture that no longer recognizes and rewards honor?
The answers to those questions aren’t immediately clear. But one thing that will make answering them even harder is the promotion of the idea of “toxic masculinity.” New York Times columnist Charles Blow recently argued that “we have to re-examine our toxic, privileged, encroaching masculinity itself. And yes, that also means on some level reimagining the rules of attraction.” But the whole point of the phrase “rules of attraction” is to highlight that there aren’t any and never have been (if you have any doubts, read the 1987 Bret Easton Ellis novel that popularized the phrase). Blow’s lectures about “toxic masculinity” are meant to sow self-doubt in men and thus encourage some enlightened form of masculinity, but that won’t end sexual harassment any more than Lysistrata-style refusal by women to have sex will end war.
Parents should be teaching their sons about personal boundaries and consent from a young age, just as they teach their daughters, and unequivocally condemn raunchy and threatening remarks about women, whether they are uttered by a talk-radio host or by the president of the United States. The phrase “that isn’t how decent men behave” should be something every parent utters.
But such efforts are made more difficult by a liberal culture that has decided to equate caddish behavior with assault precisely because it has rejected the strict norms that used to hold sway—the old conservative norms that regarded any transgression against them as a seriousviolation and punished it accordingly. Instead, in an effort to be a kinder, gentler, more “woke” society that’s understanding of everyone’s differences, we’ve ended up arbitrarily picking and choosing among the various forms of questionable behavior for which we will have no tolerance, all the while failing to come to terms with the costs of living in such a society. A culture that hangs the accused first and asks questions later might have its virtues, but psychological understanding is not one of them.
And so we come back to sex and our muddled understanding of its place in society. Is it a meaningless pleasure you’re supposed to enjoy with as many people as possible before settling down and marrying? Or is it something more important than that? Is it something that you feel empowered to handle in Riot Grrrl fashion, or is getting groped once by a pervy co-worker something that prompts decades of nightmares and declarations that you will “never be the same”? How can we condemn people like Senator Al Franken, whose implicit self-defense is that it’s no big deal to cop a feel every so often, when our culture constantly offers up women like comedian Amy Schumer or Abbi and Ilana of the sketch show Broad City, who argue that women can and should be as filthy and degenerate as the most degenerate guy?
Perhaps it’s progress that the downfall of powerful men who engage in inappropriate sexual behavior is no longer called a “bimbo eruption,” as it was in the days of Bill Clinton, and that the men who harassed or assaulted women are facing the end of their careers and, in some cases, prison. But this is not the great awakening that so many observers have claimed it is. Awakenings need tent preachers to inspire and eager audiences to participate; our #MeToo moment has plenty of those. What it doesn’t have, unless we can agree on new norms for sexual behavior both inside and outside the workplace, is a functional theology that might cultivate believers who will actually practice what they preach.
That functional theology is out of our reach. Which means this moment is just that—a moment. It will die down, impossible though it seems at present. And every 10 or 15 years a new harassment scandal will spark widespread outrage, and we will declare that a new moment of reckoning and realization has emerged. After which the stories will again die down and very little will have changed.
No one wants to admit this. It’s much more satisfying to see the felling of so many powerful men as a tectonic cultural shift, another great leap forward toward equality between the sexes. But it isn’t, because the kind of asexual equality between the genders imagined by those most eager to celebrate our #MeToo moment has never been an ideal that most people embrace. It’s one that willfully overlooks significant differences between the sexes and assumes that thoughtful people can still agree on norms of sexual behavior.
They can’t. And they won’t.
The Disharmony of the Spheres
The U.S. will endanger itself if it accedes to Russian and Chinese efforts to change the international system to their liking
Hal Brands & Charles Edel 2017-12-14
A “sphere of influence” is traditionally understood as a geographical zone within which the most powerful actor can impose its will. And nearly three decades after the close of the superpower struggle that Churchill’s “Iron Curtain” speech heralded, spheres of influence are back. At both ends of the Eurasian landmass, the authoritarian regimes in China and Russia are carving out areas of privileged influence—geographic buffer zones in which they exercise diplomatic, economic, and military primacy. China and Russia are seeking to coerce and overawe their neighbors. They are endeavoring to weaken the international rules and norms—and the influence of opposing powers—that stand athwart their ambitions in their respective “near abroads.” Chinese island-building and maritime expansionism in the South China Sea and Russian aggression in Ukraine and intimidation of the Baltic states are part and parcel of the quasi-imperial projects these revisionist regional powers are now pursuing.
Historically speaking, a world made up of rival spheres is more the norm than the exception. Yet such a world is in sharp tension with many of the key tenets of the American foreign-policy tradition—and with the international order that the United States has labored to construct and maintain since the end of World War II.
To be sure, Washington carved out its own spheres of influence in the Western Hemisphere beginning in the 19th century, and America’s myriad alliance blocs in key overseas regions are effectively spheres by another name. And today, some international-relations observers have welcomed the return of what the foreign-policy analyst Michael Lind has recently called “blocpolitik,” hoping that it might lead to a more peaceful age of multilateral equilibrium.
But for more than two centuries, American leaders have generally opposed the idea of a world divided into rival spheres of influence and have worked hard to deny other powers their own. And a reversion to a world dominated by great powers and their spheres of influence would thus undo some of the strongest traditions in American foreign policy and take the international system back to a darker, more dangerous era.
In an extreme form, a sphere of influence can take the shape of direct imperial or colonial control. Yet there are also versions in which a leading power forgoes direct military or administrative domination of its neighbors but nonetheless exerts geopolitical, economic, and ideological influence. Whatever their form, spheres of influence reflect two dominant imperatives of great-power politics in an anarchic world: the need for security vis-à-vis rival powers and the desire to shape a nation’s immediate environment to its benefit. Indeed, great powers have throughout history pursued spheres of influence to provide a buffer against the encroachment of other hostile actors and to foster the conditions conducive to their own security and well-being.

The Persian Empire, Athens and Sparta, and Rome all carved out domains of dominance. The Chinese tribute system—which combined geopolitical control with the spread of Chinese norms and ideas—profoundly shaped the trajectory of East Asia for hundreds of years. The 19th and 20th centuries saw the British Empire, Japan’s Greater East Asia Co-Prosperity Sphere, and the Soviet bloc.
America, too, has played the spheres-of-influence game. From the early 19th century onward, American officials strove for preeminence in the Western Hemisphere—first by running other European powers off much of the North American continent and then by pushing them out of Latin America. With the Monroe Doctrine, first enunciated in 1823, America staked its claim to geopolitical primacy from Canada to the Southern Cone. Over the succeeding generations, Washington worked to achieve military dominance in that area, to tie the countries of the Western Hemisphere to America geopolitically and economically, and even to help pick the rulers of countries from Mexico to Brazil.
If this wasn’t a sphere of influence, nothing was. In 1895, Secretary of State Richard Olney declared that “the United States is practically sovereign on this continent and its fiat is law upon the subjects to which it confines its interposition.” After World War II, moreover, a globally predominant United States steadily expanded its influence into Europe through NATO, into East Asia through various military alliances, and into the Middle East through a web of defense, diplomatic, and political arrangements. The story of global politics over the past 200 years has, in large part, been the story of expanding U.S. influence.
Nonetheless, there has always been something ambivalent—critics would say hypocritical—about American views of this matter. For as energetic as Washington has been in constructing its geopolitical domain, a “spheres-of-influence world” is in perpetual tension with four strong intellectual traditions in U.S. strategy. These are hegemony, liberty, openness, and exceptionalism.
First, hegemony. The myth of America as an innocent isolationist country during its first 170 years is powerful and enduring; it’s also wrong. From the outset, American statesmen understood that the country’s favorable geography, expanding population, and enviable resource endowments gave it the potential to rival, and ultimately overtake, the European states that dominated world politics. America might be a fledgling republic, George Washington said, but it would one day attain “the strength of a giant.” From the revolution onward, American officials worried, with good reason, that France, Spain, and the United Kingdom would use their North American territories to strangle or contain the young republic. Much of early American diplomacy was therefore geared toward depriving the European powers of their North American possessions, using measures from coercive diplomacy to outright wars of conquest. “The world shall have to be familiarized with the idea of considering our proper dominion to be the continent of North America,” wrote John Quincy Adams in 1819. The only regional sphere of influence that Americans would accept as legitimate was their own.
By the late 19th century, the same considerations were pushing Americans to target spheres of influence further abroad. As the industrial revolution progressed, it became clear that geography alone might not protect the nation. Aggressive powers could now generate sufficient military strength to dominate large swaths of Europe or East Asia and then harness the accumulated resources to threaten the United States. Moreover, as America itself became an increasingly mighty country that sought to project its influence overseas, its leaders naturally objected to its rivals’ efforts to establish their own preserves from which Washington would be excluded. If much of America’s 19th-century diplomacy was dedicated to denying other powers spheres of influence in the Western Hemisphere, much of the country’s 20th-century diplomacy was an effort to break up or deny rival spheres of influence in Europe and East Asia.
From the Open Door policy, which sought to prevent imperial powers from carving up China, to U.S. intervention in the world wars, to the confrontation with the Soviet Empire in the Cold War, the United States repeatedly acted on the belief that it could be neither as secure nor influential as it desired in a world divided up and dominated by rival nations. The American geopolitical tradition, in other words, has long contained a built-in hostility to other countries’ spheres of influence.
The American ideological tradition shares this sense of preeminence, as reflected in the second key tenet: liberty. America’s founding generation did not see the revolution merely as the birth of a future superpower; they saw it as a catalyst for spreading political liberty far and wide. Thomas Paine proclaimed in 1776 that Americans had it in their power to “begin the world over again”; John Quincy Adams predicted, several decades later, that America’s liberal ideology was “destined to cover the surface of the globe.” Here, too, the new nation was not cursed with excessive modesty—and here, too, the existence of rival spheres of influence threatened this ambition.
Rival spheres of influence—particularly within the Western Hemisphere—imperiled the survival of liberty at home. If the United States were merely one great power among many on the North American continent, the founding generation worried, it would be forced to maintain a large standing military establishment and erect a sort of 18th-century “garrison state.” Living in perpetual conflict and vigilance, in turn, would corrode the very freedoms for which the revolution had been fought. “No nation,” wrote James Madison, “can preserve its freedom in the midst of continual warfare.” Just as Madison argued, in Federalist No. 10, that “extending the sphere”—expanding the republic—was a way of safeguarding republicanism at home, expanding America’s geopolitical domain was essential to providing the external security that a liberal polity required to survive.
Rival spheres of influence also constrained the prospects for liberty abroad. Although the question of whether the United States should actively support democratic revolutions overseas has been a source of unending controversy, virtually all American strategists have agreed that the country would be more secure and influential in a world where democracy was widespread. Given this mindset, Americans could hardly be desirous of foreign powers—particularly authoritarian powers—establishing formidable spheres of influence that would allow them to dominate the international system or suppress liberal ideals. The Monroe Doctrine was a response to the geopolitical dangers inherent in renewed imperial control of South America; it was also a response to the ideological danger posed by European nations that would “extend the political system to any portion” of the Western Hemisphere. Similar concerns have been at the heart of American opposition to the British Empire and the Soviet bloc.
Economic openness, the third core dynamic of American policy, has long served as a commercial counterpart to America’s ideological proselytism. Influenced as much by Adam Smith as by Alexander Hamilton, early American statecraft promoted free trade, neutral rights, and open markets, both to safeguard liberty and enrich a growing nation. This mission has depended on access to the world’s seas and markets. When that access was circumscribed—by the British in 1812 and by the Germans in 1917—Americans went to war to preserve it. It is unsurprising, then, that Americans also looked askance at efforts by other powers to establish areas that might be walled off from U.S. trade and investment—and from the spread of America’s capitalist ideology.
A brief list of robust policy endeavors underscores the persistent U.S. hostility to an economically closed, spheres-of-influence world: the Model Treaty of 1776, designed to promote free and reciprocal trade; John Hay’s Open Door policy of 1899, designed to prevent any outside power from dominating trade with China; Woodrow Wilson’s advocacy in his “Fourteen Points” speech of 1918 for the removal “of all economic barriers and the establishment of an equality of trade conditions among all nations”; and the focus of the 1941 Atlantic Charter on reducing trade restrictions while promoting international economic cooperation (assuming the Allies would emerge triumphant from World War II).
Fourth and finally, there’s exceptionalism. Americans have long believed that their nation was created not simply to replicate the practices of the Old World, but to revolutionize how states and peoples interact with one another. The United States, in this view, was not merely another great power out for its own self-interest. It was a country that, by virtue of its republican ideals, stood for the advancement of universal rights, and one that rejected the back-alley methods of monarchical diplomacy in favor of a more principled statecraft. When Abraham Lincoln said America represented “the last best hope of earth,” or when Woodrow Wilson scorned secret agreements in favor of “open covenants arrived at openly,” they demonstrated this exceptionalist strain in American thinking. There is some hypocrisy here, of course, for the United States has often acted in precisely the self-interested, cutthroat manner its statesmen deplored. Nonetheless, American exceptionalism has had a pronounced effect on American conduct.
Compare how Washington led its Western European allies during the Cold War—the extent to which NATO rested on the authentic consent of its members, the way the United States consistently sought to empower rather than dominate its partners—with how Moscow managed its empire in Eastern Europe. In the same way, Americans have often recoiled from arrangements that reeked of the old diplomacy. Franklin Roosevelt might have tolerated a Soviet-dominated Eastern Europe after World War II, for instance, but he knew he could not admit this publicly. Likewise, the Helsinki Accords of 1975, which required Washington to acknowledge the diplomatic legitimacy of the Soviet sphere, proved controversial inside the United States because they seemed to represent just the sort of cynical, old-school geopolitics that American exceptionalism abhors.
To be clear, U.S. hostility to a spheres-of-influence world has always been leavened with a dose of pragmatism; American leaders have pursued that hostility only so far as power and prudence allowed. The Monroe Doctrine warned European powers to stay out of the Americas, but the quid pro quo was that a young and relatively weak United States would accept, for a time, a sphere of monarchical dominance within Europe. Even during the Cold War, U.S. policymakers generally accepted that Washington could not break up the Soviet bloc in Eastern Europe without risking nuclear war.
But these were concessions to expediency. As America gained greater global power, it more actively resisted the acquisition or preservation of spheres by others. From gradually pushing the Old World out of the New, to helping vanquish the German and Japanese Empires by force of arms, to assisting the liquidation of the British Empire after World War II, to containing and ultimately defeating the Soviet bloc, the United States was present at the destruction of spheres of influence possessed by adversaries and allies alike.
The acme of this project came in the quarter-century that followed the Cold War. With the collapse of the Warsaw Pact and the Soviet Union itself, it was possible to envision a world in which what Thomas Jefferson called America’s “empire of liberty” could attain global dimensions, and traditional spheres of influence would be consigned to history. The goal, as George W. Bush’s 2002 National Security Strategy proclaimed, was to “create a balance of power that favors human freedom.” This meant an international environment in which the United States and its values were dominant and there was no balance of power whatsoever.
Under presidents from George H.W. Bush to Barack Obama, this project entailed working to spread democracy and economic liberalism farther than ever before. It involved pushing American influence and U.S.-led institutions into regions—such as Eastern Europe—that were previously dominated by other powers. It meant maintaining the military primacy necessary to stop regional powers from establishing new spheres of influence, as Washington did by rolling back Saddam Hussein’s 1990 conquest of Kuwait and by deterring China from coercing Taiwan in 1995–96. Not least, this American project involved seeking to integrate potential rivals—foremost Russia and China—into the post–Cold War order, in hopes of depriving them of even the desire to challenge it. This multifaceted effort reflected the optimism of the post–Cold War era, as well as the influence of tendencies with deep roots in the American past. Yet try as Washington might to permanently leave behind a spheres-of-influence world, that prospect is once again upon us.
Begin with China’s actions in the Asia-Pacific region. The sources of Chinese conduct are diverse, ranging from domestic insecurity to the country’s confidence as a rising power to its sense of historical destiny as “the Middle Kingdom.” All these influences animate China’s bid to establish regional mastery. China is working, first, to create a power vacuum by driving the United States out of the Western Pacific, and second, to fill that vacuum with its own influence. A Chinese admiral made this ambition clear when he remarked—supposedly in jest—to an American counterpart that, in the future, the two powers should simply split the Pacific with Hawaii as the dividing line. Yang Jiechi, then China’s foreign minister, echoed this sentiment in a moment of frustration by lecturing the nations of Southeast Asia. “China is a big country,” he said, “and other countries are small countries, and that’s just a fact.”

Policy has followed rhetoric. To undercut America’s position, Beijing has harassed American ships and planes operating in international waters and airspace. The Chinese have warned U.S. allies they may be caught in the crossfire of a Sino-American war unless Washington accommodates China or the allies cut loose from the United States. China has simultaneously worked to undermine the credibility of U.S. alliance guarantees by using strategies designed to shift the regional status quo in ways even the mighty U.S. Navy finds difficult to counter. Through a mixture of economic aid and diplomatic coercion, Beijing has also successfully divided international bodies, such as the Association of Southeast Asian Nations, through which the United States has sought to rally opposition to Chinese assertiveness. And in the background, China has been steadily building, over the course of more than two decades, formidable military tools designed to keep the United States out of the region and give Beijing a free hand in dealing with its weaker neighbors. As America’s sun sets in the Asia-Pacific, Chinese leaders calculate, the shadow China casts over the region will only grow longer.
To that end, China has claimed, dubiously, nearly all of the South China Sea as its own and constructed artificial islands as staging points for the projection of military power. Chinese military and paramilitary forces have harassed and confronted countries from Vietnam to the Philippines and violated their sovereignty; China is likewise intensifying the pressure on Japan in the East China Sea. Economically, Beijing uses its muscle to reward those who comply with China’s policies and punish those not willing to bow to its demands. It is simultaneously advancing geoeconomic projects, such as the Belt and Road Initiative, the Asian Infrastructure Investment Bank, and the Regional Comprehensive Economic Partnership (RCEP), that are designed to bring the region into its orbit.
Strikingly, China has also moved away from its long-professed principle of noninterference in other countries’ domestic politics by extending the reach of Chinese propaganda organs and using investment and even bribery to co-opt regional elites. Payoffs to Australian politicians are as critical to China’s regional project as the development of “carrier-killer” missiles. Finally, far from subscribing to liberal concepts of democracy and human rights, Beijing emphasizes its rejection of these values and its desire to create an “Asia for Asians.” In sum, China is pursuing a classic spheres-of-influence project. By blending intimidation with inducement, Beijing aims to sunder its neighbors’ bonds with America and force them to accept a Sino-centric order—a new Chinese tribute system for the 21st century.
At the other end of Eurasia, Russia is playing geopolitical hardball of a different sort. The idea that Moscow should dominate its “near abroad” is as natural to many Russians as American regional primacy is to Americans. The loss of the Kremlin’s traditional buffer zone was, therefore, one of the most painful legacies of the Cold War’s end. And so it is hardly surprising that, as Russia has regained a degree of strength in recent years, it has sought to reassert its supremacy.

It has done so, in fact, through more overtly aggressive means than those employed by China. Moscow has twice seized opportunities to humiliate and dismember former Soviet republics that committed the sin of tilting toward the West or throwing out pro-Russian leaders, first in Georgia in 2008 and then in Ukraine in 2014. It has regularly reminded its neighbors that they live on Russia’s doorstep, through coercive activities such as conducting cyberattacks on Estonia in 2007 and holding aggressive military exercises on the frontiers of the Baltic states. In the same vein, the Kremlin has essentially claimed a veto over the geopolitical alignments of neighbors from the Caucasus to Scandinavia, whether by creating frozen conflicts on their territory or threatening to target them militarily—perhaps with nuclear weapons—should they join NATO.
Military muscle is not Moscow’s only tool. Russia has simultaneously used energy exports to keep the states on its periphery economically dependent, and it has exported corruption and illiberalism to non-aligned states in the former Warsaw Pact area to prevent further encroachment of liberal values. Not least, the Kremlin has worked to undermine NATO and the European Union through political subversion and intervention in Western electoral processes. And while Russia’s activities are most concentrated in Eastern Europe and Central Asia, it’s also projecting its influence farther afield. Russian forces intervened successfully in Syria in 2015 to prop up Bashar al-Assad, preserve access to warm-water ports on the Mediterranean, and demonstrate the improved accuracy and lethality of Russian arms. Moscow continues to make inroads in the Middle East, often in cooperation with another American adversary: Iran.
To be sure, the projects that China and Russia are pursuing today are vastly different from each other, but the core logic is indisputably the same. Authoritarian powers are re-staking their claim to privileged influence in key geostrategic areas.
So what does this mean for American interests? Some observers have argued that the United States should make a virtue of necessity and accept the return of such arrangements. By this logic, spheres of influence create buffer zones between contending great powers; they diffuse responsibility for enforcing order in key areas. Indeed, for those who think that U.S. policy has left the country exhausted and overextended, a return to a world in which America no longer has the burden of being the dominant power in every region may seem attractive. The great sin of American policy after the Cold War, many realist scholars argue, was the failure to recognize that even a weakened Russia would demand privileged influence along its frontiers and thus be unalterably opposed to NATO expansion. Similarly, they lament the failure to understand that China would not forever tolerate U.S. dominance along its own periphery. It is not surprising, then, to hear analysts such as Australia’s Hugh White or America’s John Mearsheimer argue that the United States should learn to “share power” with China in the Pacific, or that it must yield ground in Eastern Europe in order to avoid war with Russia.

Such claims are not meritless; there are instances in which spheres of influence led to a degree of stability. The division of Europe into rival blocs fostered an ugly sort of stasis during the Cold War; closer to home, America’s dominance in the Western Hemisphere has long muted geopolitical competition in our own neighborhood. For all the problems associated with European empires, they often partially succeeded in limiting scourges such as communal violence.
And yet the allure of a spheres-of-influence world is largely an illusion, for such a world would threaten U.S. interests, traditions, and values in several ways.
First, basic human rights and democratic values would be less respected. China and Russia are not liberal democracies; they are illiberal autocracies that see the spread of democratic values as profoundly corrosive to their own authority and security. Just as the United States has long sought to create a world congenial to its own ideological predilections, Beijing and Moscow would certainly do likewise within their spheres of dominance.
They would, presumably, bring their influence to bear in support of friendly authoritarian regimes. And they would surely undermine democratic governments seen to pose a threat of ideological contagion or insubordination to Russian or Chinese prerogatives. Russia has taken steps to prevent the emergence of a Western-facing democracy in Ukraine and to undermine liberal democracies in Europe and elsewhere; China is snuffing out political freedoms in Hong Kong. Such actions offer a preview of what we will see when these countries are indisputably dominant along their peripheries. Further aggressions, in turn, would not simply be offensive to America’s ideological sensibilities. For given that the spread of democracy has been central to the absence of major interstate war in recent decades, and that the spread of American values has made the U.S. more secure and influential, a less democratic world will also be a more dangerous world.
Second, a spheres-of-influence world would be less open to American commerce and investment. After all, the United States itself saw geoeconomic dominance in Latin America as the necessary counterpart to geopolitical dominance. Why would China take a less self-interested approach? China already reaps the advantages of an open global economy even as it embraces protectionism and mercantilism. In a Chinese-dominated East Asia, all economic roads will surely lead to Beijing, as Chinese officials will be able to use their leverage to ensure that trade and investment flows are oriented toward China and geopolitical competitors like the United States are left on the outside. Beijing’s current geoeconomic projects—namely, RCEP and the Belt and Road Initiative—offer insight into a regional economic future in which flows of commerce and investment are subject to heavy Chinese influence.
Third, as spheres of influence reemerge, the United States will be less able to shape critical geopolitical events in crucial regions. The reason Washington has long taken an interest in events in faraway places is that East Asia, Europe, and the Middle East are the areas from which major security challenges have emerged in the past. Since World War II, America’s forward military presence has been intended to suppress incipient threats and instability; that presence has gone hand in glove with energetic diplomacy that amplifies America’s voice and protects U.S. interests. In a spheres-of-influence world, Washington would no longer enjoy the ability to act with decisive effect in these regions; it would find itself reacting to global events rather than molding them.
This leads to a final, and crucial, issue. America would be more likely to find its core security interests challenged because world orders based on rival spheres of influence have rarely been as peaceful and settled as one might imagine.
To see this, just work backward from the present. During the Cold War, a bipolar balance did help avert actual war between Moscow and Washington. But even in Europe—where the spheres of influence were best defined—there were continual tensions and crises as Moscow tested the Western bloc. And outside Europe, violence and proxy wars were common as the superpowers competed to extend their reach into the Third World. In the 1930s, the emergence of German and Japanese spheres of influence led to the most catastrophic war in global history. The empires of the 19th century—spheres of influence in their own right—continually jostled one another, leading to wars and near-wars over the course of decades; the Peace of Amiens between Britain and Napoleonic France lasted a mere 14 months. And looking back to the ancient world, there were not one, but three Punic Wars fought between Rome and Carthage as two expanding empires came into conflict. A world defined by spheres of influence is often a world characterized by tensions, wars, and competition.
The reasons for this are simple. As the political scientist William Wohlforth observed, unipolar systems—such as the U.S.-dominated post–Cold War order—are anchored by a hegemonic power that can act decisively to maintain the peace. In a unipolar system, Wohlforth writes, there are few incentives for revisionist powers to incur the “focused enmity” of the leading state. Truly multipolar systems, by contrast, have often been volatile. When the major powers are more evenly matched, there is a greater temptation to aggression by those who seek to change the existing order of things. And seek to change things they undoubtedly will.
The idea that spheres of influence are stabilizing holds only if one assumes that the major powers are motivated only by insecurity and that concessions to the revisionists will therefore lead to peace. Churchill described this as the idea that if one “feeds the crocodile enough, the crocodile will eat him last.”
Unfortunately, today’s rising or resurgent powers are also motivated—as is America—by honor, ambition, and the timeless desire to make their international habitats reflect their own interests and ideals. It is a risky gamble indeed, then, to think that ceding Russia or China an uncontested sphere of influence would turn a revisionist authoritarian regime into a satisfied power. The result, as Robert Kagan has noted, might be to embolden those actors all the more, by giving them freer rein to bring their near-abroads under control, greater latitude and resources to pursue their ambitions, and enhanced confidence that the U.S.-led order is fracturing at its foundations. For China, dominance over the first island chain might simply intensify desires to achieve primacy in the second island chain and beyond; for Russia, renewed mastery in the former Soviet space could lead to desires to bring parts of the former Warsaw Pact to heel, as well. To observe how China is developing ever longer-range anti-access/area denial capabilities, or how Russia has been projecting military power ever farther afield, is to see this process in action.
The reemergence of a spheres-of-influence world would thus undercut one of the great historical achievements of U.S. foreign policy: the creation of a system in which America is the dominant power in each major geopolitical region and can act decisively to shape events and protect its interests. It would foster an environment in which democratic values are less prominent, authoritarian models are ascendant, and mercantilism advances as economic openness recedes. And rather than leading to multipolar stability, this change could simply encourage greater revisionism on the part of powers whose appetite grows with the eating. This would lead the world away from the relative stability of the post–Cold War era and back into the darker environment it seemed to have relegated to history a quarter-century ago. The phrase “spheres of influence” may sound vaguely theoretical and benign, but its real-world effects are likely to be tangible and pernicious.

Fortunately, the return of a spheres-of-influence world is not yet inevitable. Even as some nations will accept incorporation into a Chinese or Russian sphere of influence as the price of avoiding conflict, or maintaining access to critical markets and resources, others will resist because they see their own well-being as dependent on the preservation of the world order that Washington has long worked to create. The Philippines and Cambodia seem increasingly to fall into the former group; Poland and Japan, among many others, make up the latter. The willingness of even this latter group to take actions that risk incurring Beijing and Moscow’s wrath, however, will be constantly calibrated against an assessment of America’s own ability to continue leading the resistance to a spheres-of-influence world. Averting that outcome is becoming steadily harder, as the relative power and ambition of America’s authoritarian rivals rise and U.S. leadership seems to falter.
Harder, but not impossible. The United States and its allies still command a significant preponderance of global wealth and power. And the political, economic, and military weaknesses of its challengers are legion. It is far from fated, then, that the Western Pacific and Eastern Europe will slip into China’s and Russia’s respective orbits. With sufficient creativity and determination, Washington and its partners might still be able to resist the return of a dangerous global system. Doing so will require difficult policy work in the military, economic, and diplomatic realms. But ideas precede policy, and so simply rediscovering the venerable tradition of American hostility to spheres of influence—and no less, the powerful logic on which that tradition is based—would be a good start.
The Art of Conducting
What does the man with the baton actually do?
Terry Teachout 2017-12-14
Why, then, are virtually all modern professional orchestras led by well-paid conductors instead of performing on their own? It’s an interesting question. After all, while many celebrity conductors are highly trained and knowledgeable, there have been others, some of them legendary, whose musical abilities were and are far more limited. It was no secret in the world of classical music that Serge Koussevitzky, the music director of the Boston Symphony from 1924 to 1949, found it difficult to read full orchestral scores and sometimes learned how to lead them in public by first practicing with a pair of rehearsal pianists whom he “conducted” in private.
Yet recordings show that Koussevitzky’s interpretations of such complicated pieces of music as Aaron Copland’s El Salón México and Maurice Ravel’s orchestral transcription of Mussorgsky’s Pictures at an Exhibition (both of which he premiered and championed) were immensely characterful and distinctive. What made them so? Was it the virtuosic playing of the Boston Symphony alone? Or did Koussevitzky also bring something special to these performances—and if so, what was it?
Part of what makes this question so tricky to answer is that scarcely any well-known conductors have spoken or written in detail about what they do. Only two conductors of the first rank, Thomas Beecham and Bruno Walter, have left behind full-length autobiographies, and neither one features a discussion of its author’s technical methods. For this reason, the publication of John Mauceri’s Maestros and Their Music: The Art and Alchemy of Conducting will be of special interest to those who, like my friend, wonder exactly what it is that conductors contribute to the performances that they lead.1
An impeccable musical journeyman best known for his lively performances of film music with the Hollywood Bowl Orchestra, Mauceri has led most of the world’s top orchestras. He writes illuminatingly about his work in Maestros and Their Music, leavening his discussions of such matters as the foibles of opera directors and music critics with sharply pointed, sometimes gossipy anecdotes. Most interesting of all, though, are the chapters in which he talks about what conductors do on the podium. To read Maestros and Their Music is to come away with a much clearer understanding of what its author calls the “strange and lawless world” of conducting—and to understand how conductors whose technique is deficient to the point of seeming incompetence can still give exciting performances.
Prior to the 19th century, conductors of the modern kind did not exist. Orchestras were smaller then—most of the ensembles that performed Mozart’s symphonies and operas contained anywhere from two to three dozen players—and their concerts were “conducted” either by the leader of the first violins or by the orchestra’s keyboard player.

As orchestras grew larger in response to the increasing complexity of 19th-century music, it became necessary for a full-time conductor both to rehearse them and to control their public performances, normally by standing on a podium placed in front of the musicians and beating time in the air with a baton. Most of the first men to do so were composers, including Hector Berlioz, Felix Mendelssohn, and Richard Wagner. By the end of the century, however, it was becoming increasingly common for musicians to specialize in conducting, and some of them, notably Arthur Nikisch and Arturo Toscanini, came to be regarded as virtuosos in their own right. Since then, only three important composers—Benjamin Britten, Leonard Bernstein, and Pierre Boulez—have also pursued parallel careers as world-class conductors. Every other major conductor of the 20th century was a specialist.
What did these men do in front of an orchestra? Mauceri’s description of the basic physical process of conducting is admirably straightforward:
The right hand beats time; that is, it sets the tempo or pulse of the music. It can hold a baton. The left hand turns pages [in the orchestral score], cues instrumentalists with an invitational or pointing gesture, and generally indicates the quality of the notes (percussive, smoothly linked, sustained, etc.).
Beyond these elements, though, all bets are off. Most of the major conductors of the 20th century were filmed in performance, and what one sees in these films is so widely varied that it is impossible to generalize about what constitutes a good conducting technique.2 Most of them used batons, but several, including Boulez and Leopold Stokowski, conducted with their bare hands. Bernstein and Beecham gestured extravagantly, even wildly, while others, most famously Fritz Reiner, restricted themselves to tightly controlled hand movements. Toscanini beat time in a flowing, beautifully expressive way that made his musical intentions self-evident, but Wilhelm Furtwängler and Herbert von Karajan often conducted so unclearly that it is hard to see how the orchestras they led were able to follow them. (One exasperated member of the London Philharmonic claimed, partly in jest, that Furtwängler’s baton signaled the start of a piece “only after the thirteenth preliminary wiggle.”) Conductors of the Furtwängler sort tend to be at their best in front of orchestras with which they have worked for many years and whose members have learned from experience to “speak” their gestural language fluently.
Nevertheless, all of these men were pursuing the same musical goals. Beyond stopping and starting a given piece, it is the job of a conductor to decide how it will be interpreted. How loud should the middle section of the first movement be—and ought the violins to be playing a bit softer so as not to drown out the flutes? Someone must answer questions such as these if a performance is not to sound indecisive or chaotic, and it is far easier for one person to do so than for 100 people to vote on each decision.
Above all, a conductor controls the tempo of a performance, varying it from moment to moment as he sees fit. It is impossible for a full-sized symphony orchestra to play a piece with any degree of rhythmic flexibility unless a conductor is controlling the performance from the podium. Bernstein put it well when he observed in a 1955 TV special that “the conductor is a kind of sculptor whose element is time instead of marble.” These “sculptural” decisions are subjective, since traditional musical notation cannot specify such matters with exactitude. As Mauceri reminds us, Toscanini and Beecham both recorded La Bohème, having previously discussed their interpretations with Giacomo Puccini, the opera’s composer, and Toscanini conducted its 1896 premiere. Yet Beecham’s performance is 14 minutes longer than Toscanini’s. Who is “right”? It is purely a matter of individual taste, since both interpretations are powerfully persuasive.
Beyond the not-so-basic task of setting, maintaining, and varying tempos, it is the job of a conductor to inspire an orchestra—to make its members play with a charged precision that transcends mere unanimity. The first step in doing so is to persuade the players of his musical competence. If he cannot run a rehearsal efficiently, they will soon grow bored and lose interest; if he does not know the score in detail, they will not take him seriously. This requires extensive preparation on the part of the conductor, and an orchestra can tell within seconds of the downbeat whether he is adequately prepared—a fact that every conductor knows. “I’m extremely humble about whatever gifts I may have, but I am not modest about the work I do,” Bernstein once told an interviewer. “I work extremely hard and all the time.”
All things being equal, it is better than not for a conductor to have a clear technique, if only because it simplifies and streamlines the process of rehearsing an orchestra. Fritz Reiner, who taught Bernstein among others, did not exaggerate when he claimed that he and his pupils could “stand up [in front of] an orchestra they have never seen before and conduct correctly a new piece at first sight without verbal explanation and by means only of manual technique.”
While orchestra players prefer this kind of conducting, a conductor need not have a technique as fully developed as that of a Reiner or Bernstein if he knows how to rehearse effectively. Given sufficient rehearsal time, decisive and unambiguous verbal instructions will produce the same results as a virtuoso stick technique. This was how Willem Mengelberg and George Szell distinguished themselves on the podium. Their techniques were no better than adequate, but they rehearsed so meticulously that their performances were always brilliant and exact.
It also helps to supply the members of the orchestra with carefully marked orchestra parts. Beecham’s manual technique was notoriously messy, but he marked his musical intentions into each player’s part so clearly and precisely that simply reading the music on the stand would produce most of the effects that he desired.
What players do not like is to be lectured. They want to be told what to do and, if absolutely necessary, how to do it, at which point the wise conductor will stop talking and start conducting. Mauceri recalls the advice given to a group of student conductors by Joseph Silverstein, the concertmaster of the Boston Symphony: “Don’t talk to us about blue skies. Just tell us ‘longer-shorter,’ ‘faster-slower,’ ‘higher-lower.’” Professional musicians cannot abide flowery speeches about the inner meaning of a piece of music, though they will readily respond to a well-turned metaphor. Mauceri makes this point with a Toscanini anecdote:
One of Toscanini’s musicians told me of a moment in a rehearsal when the sound the NBC Symphony was giving him was too heavy. … In this case, without saying a word, he reached into his pocket and took out his silk handkerchief, tossed it into the air, and everyone watched it slowly glide to earth. After seeing that, the orchestra played the same passage exactly as Toscanini wanted.
Conducting, like all acts of leadership, is in large part a function of character. The violinist Carl Flesch went so far as to call it “the only musical activity in which a dash of charlatanism is not only harmless, but positively necessary.” While that is putting it too cynically, Flesch was on to something. I did a fair amount of conducting in college, but even though I practiced endlessly in front of a mirror and spent hours poring over my scores, I lacked the personal magnetism without which no conductor can hope to be more than merely competent at best.
On the other hand, a talented musician with a sufficiently compelling personality can turn himself into a conductor more or less overnight. Toscanini had never conducted an orchestra before making his unrehearsed debut in a performance of Verdi’s Aida at the age of 19, yet the players hastened to do his musical bidding. I once saw the modern-dance choreographer Mark Morris, whose knowledge of classical music is profound, lead a chorus and orchestra in the score to Gloria, a dance he had made in 1981 to a piece by Vivaldi. It was no stunt: Morris used a baton and a score and controlled the performance with the assurance of a seasoned pro. Not only did he have a strong personality, but he had also done his musical homework, and he knew that one was as important as the other.
The reverse, however, is no less true: The success of conductors like Serge Koussevitzky is at least as much a function of their personalities as of their preparation. To be sure, Koussevitzky had been an instrumental virtuoso (he played the double bass) before taking up conducting, but everyone who worked with him in later years was aware of his musical limitations. Yet he was still capable of imposing his larger-than-life personality on players who might well have responded indifferently to his conducting had he been less charismatic. Leopold Stokowski functioned in much the same way. He was widely thought by his peers to have been far more a showman than an artist, to the point that Toscanini contemptuously dismissed him as a “clown.” But he had, like Koussevitzky, a richly romantic musical imagination coupled with the showmanship of a stage actor, and so the orchestras that he led, however skeptical they might be about his musical seriousness, did whatever he wanted.
All great conductors share this same ability to impose their will on an orchestra—and that, after all, is the heart of the matter. A conductor can be effective only if the orchestra does what he wants. It is not like a piano, whose notes automatically sound when the keys are pressed, but a living organism with a will of its own. Conducting, then, is first and foremost an act of persuasion, as Mauceri acknowledges:
The person who stands before a symphony orchestra is charged with something both impossible and improbable. The impossible part is herding a hundred musicians to agree on something, and the improbable part is that one does it by waving one’s hands in the air.
This is why so many famous conductors have claimed that the art of conducting cannot be taught. In the deepest sense, they are right. To be sure, it is perfectly possible, as Reiner did, to teach the rudiments of clear stick technique and effective rehearsal practice. But the mystery at the heart of conducting is, indeed, unteachable: One cannot tell a budding young conductor how to cultivate a magnetic personality, any more than an actor can be taught how to have star quality. What sets the Bernsteins and Bogarts of the world apart from the rest of us is very much like what James M. Barrie said of feminine charm in What Every Woman Knows: “If you have it, you don’t need to have anything else; and if you don’t have it, it doesn’t much matter what else you have.”

1 Maestros and Their Music: The Art and Alchemy of Conducting, by John Mauceri. Knopf, 2017.

2 Excerpts from many of these films were woven together into a two-part BBC documentary, The Art of Conducting, which is available on home video and can also be viewed in its entirety on YouTube.
Comedy Boy of the Senate
Washington Commentary
Andrew Ferguson 2017-12-14
Not that he tries. What was remarkable about the condescension in this instance was that Franken directed it at women who accused him of behaving “inappropriately” toward them. (In an era of strictly enforced relativism, we struggle to find our footing in judging misbehavior, so we borrow words from the prissy language of etiquette. The mildest and most common rebuke is unfortunate, followed by the slightly more serious inappropriate, followed by the ultimate reproach: unacceptable, which, depending on the context, can include both attempted rape and blowing your nose into your dinner napkin.) Franken’s inappropriateness entailed, so to speak, squeezing the bottoms of complete strangers, and cupping the occasional breast.
Franken himself did not use the word “inappropriate.” By his account, he had done nothing to earn the title. His earlier vague denials of the allegations, he told his fellow senators, “gave some people the false impression that I was admitting to doing things that, in fact, I haven’t done.” How could he have confused people about such an important matter? Doggone it, it’s that damn sensitivity of his. The nation was beginning a conversation about sexual harassment—squeezing strangers’ bottoms, stuff like that—and “I wanted to be respectful of that broader conversation because all women deserve to be heard and their experiences taken seriously.”
Well, not all women. The women with those bottoms and breasts he supposedly manhandled, for example—their experiences don’t deserve to be taken seriously. We’ve got Al’s word on it. “Some of the allegations against me are not true,” he said. “Others, I remember very differently.” His accusers, in other words, fall into one of two camps: the liars and the befuddled. You know how women can be sometimes. It might be a hormonal thing.
But enough about them, Al seemed to be saying: Let’s get back to Al. “I know the work I’ve been able to do has improved people’s lives,” Franken said, but he didn’t want to get into any specifics. “I have used my power to be a champion of women.” He has faith in his “proud legacy of progressive advocacy.” He’s been passionate and worked hard—not for himself, mind you, but for his home state of Minnesota, by which he’s “blown away.” And yes, he would get tired or discouraged or frustrated once in a while. But then that big heart of his would well up: “I would think about the people I was doing this for, and it would get me back on my feet.” Franken recently published a book about himself: Al Franken, Giant of the Senate. I had assumed the title was ironic. Now I’m not sure.
Yet even in his flights of self-love, the problem that has ever attended Senator Franken was still there. You can’t take him seriously. He looks as though God made him to be a figure of fun. Try as he might, his aspect is that of a man who is going to try to make you laugh, and who is built for that purpose and no other—a close cousin to Bert Lahr or Chris Farley. And for years, of course, that’s the part he played in public life, as a writer and performer on Saturday Night Live. When he announced nine years ago that he would return to Minnesota and run for the Senate—when he came out of the closet and tried to present himself as a man of substance—the effect was so disorienting that I, and probably many others, never quite recovered. As a comedian-turned-politician, he was no longer the one and could never quite become the other.
The chubby cheeks and the perpetual pucker, the slightly crossed eyes behind Coke-bottle glasses, the rounded, diminutive torso straining to stay upright under the weight of an enormous head—he was the very picture of Comedy Boy, and suddenly he wanted to be something else: Politics Boy. I have never seen the famously tasteless tearjerker The Day the Clown Cried, in which Jerry Lewis stars as a circus clown imprisoned in a Nazi death camp, but I’m sure watching it would be a lot like watching the ex-funnyman Franken deliver a speech about farm price supports.
Then he came to Washington and slipped right into place. His career is testament to a dreary fact of life here: Taken in the mass, senators are pretty much interchangeable. Party discipline determines nearly every vote they cast. Only at the margins is one Democrat or Republican different in a practical sense from another Democrat or Republican. Some of us held out hope, despite the premonitory evidence, that Franken might use his professional gifts in service of his new job. Yet so desperate was he to be taken seriously that he quickly passed serious and swung straight into obnoxious. It was a natural fit. In no time at all, he mastered the senatorial art of asking pointless or showy questions in committee hearings, looming from his riser over fumbling witnesses and hollering “Answer the question!” when they didn’t respond properly.
It’s not hard to be a good senator, if you have the kind of personality that frees you to simulate chumminess with people you scarcely know or have never met and will probably never see again. There’s not much to it. A senator has a huge staff to satisfy his every need. There are experts to give him brief, personal tutorials on any subject he will be asked about, writers to write his questions for his committee hearings and an occasional op-ed if an idea strikes him, staffers to arrange his travel and drive him here or there, political aides to guard his reputation with the folks back home, press aides to regulate his dealings with reporters, and legislative aides to write the bills should he ever want to introduce any. The rest is show biz.
Oddly, Franken was at his worst precisely when he was handling the show-biz aspects of his job. While his inquisitions in committee hearings often showed the obligatory ferocity and indignation, he could also appear baffled and aimless. His speeches weren’t much good, and he didn’t deliver them well. As if to prove the point, he published a collection of them earlier this year, Speaking Franken. Until Pearl Harbor, he’d been showing signs of wanting to run for president. Liberal pundits were talking him up as a national candidate. Speaking Franken was likely intended to do for him what Profiles in Courage did for John Kennedy, another middling senator with presidential longings. Unfortunately for Franken, Ted Sorensen is still dead.
The final question raised by Franken’s resignation is why so many fellow Democrats urged him to give up his seat so suddenly, once his last accuser came forward. The consensus view involved Roy Moore, in those dark days when he was favored to win Alabama’s special election. With the impending arrival of an accused pedophile on the Republican side of the aisle, Democrats didn’t want an accused sexual harasser in their own ranks to deflect what promised to be a relentless focus on the GOP’s newest senator. This is bad news for any legacy Franken once hoped for himself. None of his work as a senator will commend him to history. He will be remembered instead for two things: as a minor TV star, and as Roy Moore’s oldest victim.
Lioness of Judah
Review of 'Lioness,' by Francine Klagsbrun
Rick Richman 2017-12-14
Golda Meir, Israel’s fourth prime minister, moved to Palestine from America in 1921, at the age of 22, to pursue Socialist Zionism. She was instrumental in transforming the Jewish people into a state; signed that state’s Declaration of Independence; and served as its first ambassador to the Soviet Union, as labor minister for seven years, and as foreign minister for a decade. In 1969, she became the first female head of government in the Western world, serving from the aftermath of the 1967 Six-Day War through the nearly catastrophic but ultimately victorious 1973 Yom Kippur War. She resigned in 1974 at the age of 76, after five years as prime minister. Her involvement at the forefront of Zionism and the leadership of Israel thus spanned more than half a century.
This is the second major biography of Golda Meir in the last decade, after Elinor Burkett’s excellent Golda in 2008. Klagsbrun’s portrait is even grander in scope. Her epigraph comes from Ezekiel’s lamentation for Israel: What a lioness was your mother / Among the lions! / Crouching among the great beasts / She reared her cubs. The “mother” was Israel; the “cubs,” her many ancient kings; the “great beasts,” the hostile nations surrounding her. One finishes Klagsbrun’s monumental volume, which is both a biography of Golda and a biography of Israel in her time, with a deepened sense that the story of modern Israel, its prime ministers, and its survival is one of biblical proportions.
Golda Meir’s story spans three countries—Russia, America, and Israel. Before she was Golda Meir, she was Golda Meyerson; and before that, she was Golda Mabovitch, born in 1898 in Kiev in the Russian Empire. Her father left for America after the horrific Kishinev pogrom in 1903, found work in Milwaukee as a carpenter, and in 1906 sent for his wife and three daughters, who escaped using false identities and border bribes. Golda said later that what she took from Russia was “fear, hunger and fear.” It was an existential fear that she never forgot.
In Milwaukee, Golda found socialism in the air: The city had both a socialist mayor and a socialist congressman, and she was enthralled by news from Palestine, where Jews were living out socialist ideals in kibbutzim. She immersed herself in Poalei Zion (Workers of Zion), a movement synthesizing Zionism and socialism, and in 1917 married a fellow socialist, Morris Meyerson. As soon as conditions permitted, they moved to Palestine, where the marriage ultimately failed—a casualty of the extended periods she spent away from home working for Socialist Zionism and her admission that the cause was more important to her than her husband and children.
Klagsbrun writes that Meir might appear to be the consummate feminist: She asserted her independence from her husband, traveled continually and extensively on her own, left her husband and children for months to pursue her work, and demanded respect as an individual rather than special treatment based on her gender. But she never considered herself a feminist; indeed, she denigrated women’s organizations for reducing issues to women’s interests only, and she gave minimal assistance to other women. Klagsbrun concludes that questions about Meir as a feminist figure ultimately “hang in the air.”
Her American connection and her unaccented American English became strategic assets for Zionism. She understood American Jews, spoke their language, and conducted many fundraising trips to the United States, tirelessly raising tens of millions of dollars of critically needed funds. David Ben-Gurion called her the “woman who got the money which made the state possible.” Klagsbrun provides the schedule of her 1932 trip as an example of her efforts: Over the course of a single month, the 34-year-old Zionist pioneer traveled to Kansas City, Tulsa, Dallas, San Antonio, Los Angeles, San Francisco, Seattle, and three cities in Canada. She became the face of Zionism in America—“The First Lady,” in the words of a huge banner at a later Chicago event, “of the Jewish People.” She connected with American Jews in a way no other Zionist leader had done before her.
In her own straightforward way, she mobilized the English language and sent it into battle for Zionism. While Abba Eban denigrated her poor Hebrew—“She has a vocabulary of two thousand words, okay, but why doesn’t she use them?”—she had a way of crystallizing issues in plainspoken English. Of British attempts to prevent the growth of the Jewish community in Palestine, she said Britain “should remember that Jews were here 2,000 years before the British came.” Of expressions of sympathy for Israel: “There is only one thing I hope to see before I die, and that is that my people should not need expressions of sympathy anymore.” And perhaps her most famous saying: “Peace will come when the Arabs love their children more than they hate us.”
Once she moved to the Israeli foreign ministry, she changed her name from Meyerson to Meir, in response to Ben-Gurion’s insistence that ministers assume Israeli names. She began a decade-long tenure there as the voice and face of Israel in the world. At a Madison Square Garden rally after the 1967 Six-Day War, she observed sardonically that the world called Israelis “a wonderful people,” complimented them for having prevailed “against such odds,” and yet wanted Israel to give up what it needed for its self-defense:
“Now that they have won this battle, let them go back where they came from, so that the hills of Syria will again be open for Syrian guns; so that Jordanian Legionnaires, who shoot and shell at will, can again stand on the towers of the Old City of Jerusalem; so that the Gaza Strip will again become a place from which infiltrators are sent to kill and ambush.” … Is there anybody who has the boldness to say to the Israelis: “Go home! Begin preparing your nine- and ten-year-olds for the next war, perhaps in ten years.”
The next war would come not in ten years, but in six, and while Meir was prime minister.
Klagsbrun’s extended discussion of Meir’s leadership before, during, and after the 1973 Yom Kippur War is one of the most valuable parts of her book, enabling readers to make informed judgments about that war and assess Meir’s ultimate place in Israeli history. The book makes a convincing case that there was no pre-war “peace option” that could have prevented the conflict. Egypt’s leader, Anwar Sadat, was insisting on a complete Israeli withdrawal before negotiations could even begin, and Meir’s view was, “We had no peace with the old boundaries. How can we have peace by returning to them?” She considered the demand part of a plan to push Israel back to the ’67 lines “and then bring the Palestinians back, which means no more Israel.”
A half-century later, after three Israeli offers of a Palestinian state on substantially all the disputed territories—with the Palestinians rejecting each offer, insisting instead on an Israeli retreat to indefensible lines and recognition of an alleged Palestinian “right of return”—Meir’s view looks prescient.
Klagsbrun’s day-by-day description of the ensuing war is largely favorable to Meir, who relied on assurances from her defense minister, Moshe Dayan, that the Arabs would not attack, and assurances from her intelligence community that, even if they did, Israel would have 48 hours’ notice—enough time to mobilize the reserves that constituted more than 75 percent of its military force. Both sets of assurances proved false, and the joint Egyptian-Syrian attack took virtually everyone in Israel by surprise. Dayan had something close to a mental breakdown, but Meir remained calm and in control after the initial shock, making key military decisions. She was able to rely on the excellent personal relationships she had established with President Nixon and his national security adviser, Henry Kissinger, and the critical resupply of American arms that enabled Israel—once its reserves were called into action—to take the war into Egyptian and Syrian territories, with Israeli forces camped in both countries by its end.
Meir had resisted the option of a preemptive strike against Egypt and Syria when it suddenly became clear, 12 hours before the war started, that coordinated Egyptian and Syrian attacks were coming. On the second day of the war, she told her war cabinet that she regretted not having authorized the IDF to act, and she sent a message to Kissinger that Israel’s “failure to take such action is the reason for our situation now.” After the war, however, she testified that, had Israel begun the war, the U.S. would not have sent the crucial assistance that Israel needed (a point on which Kissinger agreed), and that she therefore believed she had done the right thing. A preemptive response, however, or a massive call-up of the reserves in the days before the attacks, might have avoided a war in which Israel lost 2,600 soldiers—the demographic equivalent of all the American losses in the Vietnam War.
It is hard to fault Meir’s decision, given the erroneous information and advice she was receiving from all her defense and intelligence subordinates, but it is a reminder that for Israeli prime ministers (such as Levi Eshkol in the Six-Day War, Menachem Begin with the Iraq nuclear reactor in 1981, and Ehud Olmert with the Syrian one in 2007), the potential necessity of taking preemptive action always hangs in the air. Klagsbrun’s extensive discussion of the Yom Kippur War is a case study of that question, and an Israeli prime minister may yet again face that situation.
The Meir story is also a tale of the limits of socialism as an organizing principle for the modern state. Klagsbrun writes about “Golda’s persistent—and hopelessly utopian—vision of how a socialist society should be conducted,” exemplified by her dream of instituting commune-like living arrangements for urban families, comparable to those in the kibbutzim, where all adults would share common kitchens and all the children would eat at school. She also tried to institute a family wage system, in which people would be paid according to their needs rather than their talents, a battle she lost when the unionized nurses insisted on being paid as professionals, based on their education and experience, and not the sizes of their families.
Socialism foundered not only on the laws of economics and human nature but also on the realities of foreign relations. In 1973, enraged that the socialist governments and leaders in Europe had refused to come to Israel’s aid during the Yom Kippur War, Meir convened a special London conference of the Socialist International, attended by eight heads of state and a dozen other socialist-party leaders. Before the conference, she told Willy Brandt, Germany’s socialist chancellor, that she wanted “to hear for myself, with my own ears, what it was that kept the heads of these socialist governments from helping us.”
In her speech at the conference, she criticized the Europeans for not even permitting “refueling the [American] planes that saved us from destruction.” Then she told them, “I just want to understand … what socialism is really about today”:
We are all old comrades, long-standing friends. … Believe me, I am the last person to belittle the fact that we are only one tiny Jewish state and that there are over twenty Arab states with vast territories, endless oil, and billions of dollars. But what I want to know from you today is whether these things are the decisive factors in Socialist thinking, too?
After she concluded her speech, the chairman asked whether anyone wanted to reply. No one did, and she thus effectively received her answer.
One wonders what Meir would think of the Socialist International today. On the centenary of the Balfour Declaration last year, the World Socialist website called it “a sordid deal” that launched “a nakedly colonial project.” Socialism was part of the cause for which she went to Palestine in 1921, and it has not fared well in history’s judgment. But the other half—Zionism—became one of the great successes of the 20th century, in significant part because of the lifelong efforts of individuals such as she.
Golda Meir has long been a popular figure in the American imagination, particularly among American Jews. Her ghostwritten autobiography was a bestseller; Ingrid Bergman played her in a well-received TV film; Anne Bancroft played her on the Broadway stage. But her image as the “71-year-old grandmother,” as the press frequently referred to her when she became prime minister, has always obscured the historic leader beneath that façade. She was a woman with strengths and weaknesses who willed herself into half a century of history. Francine Klagsbrun has given us a magisterial portrait of a lioness in full.
An Alumni Reunion of the Echo Chamber
Media Commentary
Matthew Continetti 2017-12-14
Back in 2016, then–deputy national-security adviser Ben Rhodes gave an extraordinary interview to the New York Times Magazine in which he revealed how President Obama exploited a clueless and deracinated press to steamroll opposition to the Iranian nuclear deal. “We created an echo chamber,” Rhodes told journalist David Samuels. “They”—writers and bloggers and pundits—“were saying things that validated what we had given them to say.”
Rhodes went on to explain that his job was made easier by structural changes in the media, such as the closing of foreign bureaus, the retirement of experienced editors and correspondents, and the shift from investigative reporting to aggregation. “The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns,” he said. “That’s a sea change. They literally know nothing.”
And they haven’t learned much. It was dispiriting to watch in December as journalists repeated the arguments against the Jerusalem decision that Rhodes presented on Twitter. On December 5, quoting Mahmoud Abbas’s threat that moving the U.S. Embassy to Jerusalem would have “dangerous consequences,” Rhodes tweeted, “Trump seems to view all foreign policy as an extension of a patchwork of domestic policy positions, with no regard for the consequences of his actions.” He seemed blissfully unaware that the same could be said of his old boss.
The following day, Rhodes tweeted, “In addition to making goal of peace even less possible, Trump is risking huge blowback against the U.S. and Americans. For no reason other than a political promise he doesn’t even understand.” On December 8, quoting from a report that the construction of a new American Embassy would take some time, Rhodes asked, “Then why cause an international crisis by announcing it?”
Rhodes made clear his talking points for the millions of people inclined to criticize President Trump: Acknowledging Israel’s right to name its own capital is unnecessary and self-destructive. Rhodes’s former assistant, Ned Price, condensed the potential lines of attack into a single tweet on December 5. “In order to cater to his political base,” Price wrote, “Trump appears willing to: put U.S. personnel at great risk; risk C-ISIL [counter-ISIL] momentum; destabilize a regional ally; strain global alliances; put Israeli-Palestinian peace farther out of reach.”
Prominent media figures happily reprised their roles in the echo chamber. Susan Glasser of Politico: “Just got this in my in box from Ayman Odeh, leading Arab Israeli member of parliament: ‘Trump is a pyromaniac who could set the entire region on fire with his madness.’” BBC reporter Julia Macfarlane: “Whether related or not, everything that happens from now on in Israel and the Pal territories will be examined in the context of Trump signaling to move the embassy to Jerusalem.” Neither Rhodes nor Price could have asked for more.
Network news broadcasts described the president’s decision as “controversial” but only reported on the views of one side in the controversy. Guess which one. “There have already been some demonstrations,” reported NBC’s Richard Engel. “They are expected to intensify, with Palestinians calling for three days of rage if President Trump goes through with it.” Left unmentioned was the fact that Hamas calls for days of rage like you and I call for pizza.
Throughout Engel’s segment, the chyron read: “Controversial decision could lead to upheaval.” On ABC, George Stephanopoulos said, “World leaders call the decision dangerous.” On CBS, Gayle King chimed in: “U.S. allies and leaders around the world say it’s a big mistake that will torpedo any chance of Middle East peace.” Oh? What were the chances of Middle East peace prior to Trump’s speech?
On CNN, longtime peace processor Aaron David Miller likened recognizing Jerusalem to hitting “somebody over the head with a hammer.” On MSNBC, Chris Matthews fumed: “Deaths are coming.” That same network featured foreign-policy gadfly Steven Clemons of the Atlantic, who said Trump “stuck a knife in the back of the two-state process.” Price and former Obama official Joel Rubin also appeared on the network to denounce Trump. “American credibility is shot, and in diplomacy, credibility relies on your word, and our word is, at this moment, not to be trusted from a peace-process perspective, certainly,” Rubin said. This from the administration that gave new meaning to the words “red line.”
Some journalists were so devoted to Rhodes’s tendentious narrative of Trump’s selfishness and heedlessness that they mangled the actual story. “He had promised this day would come, but to hear these words from the White House was jaw-dropping,” said Martha Raddatz of ABC. “Not only signing a proclamation reversing nearly 70 years of U.S. policy, but starting plans to move the embassy to Jerusalem. No one else on earth has an embassy there!” How dare America take a brave stand for a small and threatened democracy!
In fact, Trump was following U.S. policy as legislated by Congress in 1995, reaffirmed in the Senate by a 90–0 vote just last June, and supported (in word if not in deed) by his three most recent predecessors as well as the last four Democratic party platforms. Most remarkably, the debate surrounding the Jerusalem policy ignored a crucial section of the president’s address. “We are not taking a position on any final-status issues,” he said, “including the specific boundaries of Israeli sovereignty in Jerusalem, or the resolution of contested borders. Those questions are up to the parties involved.” What Trump did, then, was simply accept the reality that the city that houses the Knesset, and where the head of government receives foreign dignitaries, is the capital of Israel.
However, just as had happened during the debate over the Iran deal, the facts were far less important to Rhodes than the overarching strategic goal. In this case, the objective was to discredit and undermine President Trump’s policy while isolating the conservative government of Israel. Yet there were plenty of reasons to be skeptical of the disingenuous duo of Rhodes and Price. Trump’s announcement was bold, for sure, but the tepid protests from Arab capitals, more worried about the rise of Iran (which Rhodes and Price facilitated) than about the Palestinian issue, suggested that the “Arab street” would sit this one out.
Which is what happened. Moreover, verbal disagreement aside, there is no evidence that the Atlantic alliance is in jeopardy. Nor has the war on ISIS lost momentum. As for putting “Israeli–Palestinian peace farther out of reach,” if third-party recognition of Jerusalem as Israel’s capital forecloses a deal, perhaps no deal was ever possible. Rhodes and Price would like us to overlook the fact that the two sides weren’t even negotiating during the Obama administration—an administration that did as much as possible to harm relations between Israel and the United States.
This most recent episode of the Trump show was a reminder that some things never change. Jerusalem was, is, and will be the capital of the Jewish state. President Trump routinely ignores conventional wisdom and expert opinion. And whatever nonsense President Obama and his allies say today, the press will echo tomorrow.
