Diplomacy can work. But in the end, it all comes down to American power.
Have you ever found yourself in the position of asking, on your own behalf or on behalf of others, how many or precisely which people it would be useful to kill in order to secure a benefit for yourself or your cause? And just how to do it? No? Others have. Their answers have ranged from Cain’s original “Abel, with my bare hands” to Hitler’s “all the Jews, mainly by gas,” and the widespread Hutu view in the Rwanda of 1994, “the Tutsis, with machetes.” The question burns today for the government of Sudan and in the Congo.
Humanity will never be able to solve the problem of Cain, of fratricidal rage born of jealousy or some equivalent passion, nor of the more calculating retail impulse to profit in some way from doing someone in. Thus, for individuals, we maintain a system of laws, police forces, courts, prisons, mental hospitals, and, for extreme cases, the apparatus of the death penalty to punish those whom an impulse or cold calculation has led to murder—thereby deterring (so we hope) at least some others from embarking on a similar course of action. But we understand that our system is no solution to the problem of murder.
It is not obvious, however, or should not be, that because the human condition gives us no prospect of ridding the world of murder, we must be similarly pessimistic about our ability to rid the world of murder on the scale of populations. Mass atrocities, up to the point of genocide, are not simply collective acts of individual murder. Though genocides are not uniform in character, they are all political. Genocide constitutes the most extreme possible terms for settling differences: a stronger party’s decision to annihilate or extirpate the weaker. Genocide is organized. It entails a project, which in turn requires leaders with a purpose in mind and their acquisition of the means of death, including followers to do the dirty work.
We simply do not have to put up with this. By “we,” let me be clear. I do not mean “humanity,” although I would welcome the collective conclusion of mankind that genocide is unacceptable. I do not mean the “international community,” although a decision on the part of all national governments to refrain from engaging in mass atrocities at home or abroad would be most welcome, as would a collective intention to stop and punish leaders or would-be leaders seeking to deviate from the norm. What I really mean by “we” is “we who are strong enough to stop the murderous bastards before they can get away with it.”
This “we” is an inclusive group; everyone with a will and a way is welcome. But its purpose must go far beyond declaratory well-wishing. It is not a bad thing but a grossly insufficient thing to join in choruses of “never again,” the familiar refrain after something really bad has happened—say, 6 million dead Jews, 2 million dead Cambodians, or 800,000 dead Tutsis. No, we must act to stop the malefactors.
And by “we,” in the last analysis, I mean the United States.
We have the privilege to live at a time of unprecedented prosperity, and we know how to generate more of it. Anybody who thinks the present financial crisis has changed these fundamental facts is engaged in the time-honored human propensity for self-dramatization. Our prosperity is accompanied by a likewise unprecedented confluence of power and moral sensibility—or at least it seems to be. With regard to atrocities on a mass scale, we have the means at our disposal to stop what we and all right-thinking people know is wrong. It comes down to the choice of whether to act or not.
If we are unable to muster the political will to prevent or halt genocide and mass atrocities, the long-term consequences are truly chilling to contemplate. This is of course especially true with regard to future victims: the terror of being rounded up and held at gunpoint, especially in the final few seconds, as the shooting starts; of feeling the first slash of a swinging machete, knowing that more are coming. But it is also true for us. Future generations more committed to the principles we espouse but fail to act on may look back with disdain or disgust on our failure. Or, more horrifying still, future generations will conclude that all moral reasoning in political matters is sentimental superstructure that should be jettisoned in the interest of clarity about the first and only true principle of politics: the strong take care of themselves and the weak are on their own.
The progress of politics and civilization itself is nothing other than the long, difficult, incomplete struggle to overcome the original political principle of self-regard by instilling in the strong an empathetic regard for others. The first successes came in the mists of prehistory in the form of small groups ceasing to fight among themselves—clan, tribe, city. With the spread, in territory and clout, of rights-regarding nation-states in recent centuries, it became possible to imagine cooperative efforts among such states to extend a principle of regard for others across international boundaries, indeed globally. In 1999, the NATO alliance—led, of course, by the United States—went to war against Serbia to stop ethnic cleansing and atrocities in Kosovo, averting a potential genocide in close proximity to NATO territory. But in 2004, after the U.S. Secretary of State, Colin Powell, declared that atrocities in the Darfur region of Sudan amounted to genocide, the response of the United States and others was uncertain and halting at best. Hundreds of thousands of lives were lost and millions fled their homes for refugee and displaced-persons camps. There they remain.
So, in recent memory, “we” have acted effectively, showing that we can, and “we” have failed to act effectively, revealing a gap between our professed moral sense and what we are prepared to do to vindicate it. The test of progress for this generation is whether we will be able to extend the principle of regard for others by acting when necessary to prevent or halt genocide.
Words are not enough; however, words matter. All things considered, when it comes to the importance of preventing genocide and mass atrocities, we talk a pretty good game. First there are American words. It is (or should be) a point of pride for believer and atheist alike that our founding national document, the Declaration of Independence, affirms that people are endowed by their Creator with, first of all, a right to life. The right to live can be especially difficult to vindicate. There is no one to whom a drowning man can appeal; it is not wrong for the water to drown him. But it surely is wrong if governments, wholly the creations of people, deny or violate this basic right. The Declaration sets forth the correct aspiration. True, certain historical conduct—the treatment of Native Americans in particular—miserably fails to measure up to the stated aspiration. But should we therefore abandon the aspiration? Of course not. We discredit only ourselves when we fail to live up to our ideals. The ideals themselves are not discredited.
Then there are words inspired by America’s founding that, in their drafting, sought to extend those ideals to the rest of the world, words in the United Nations Charter and the Universal Declaration of Human Rights. These documents affirm the rights of the individual against states or other actors that violate those rights. But the affirmation is more theoretical than actual, since the UN Charter also embraces a doctrine of sovereign right according to which states may not interfere in the internal affairs of others.
This aspect of the Charter gives states so inclined a ready cloak behind which to repress their people—including by commission of mass atrocities. This is what I mean when I say that words matter but are not enough. The UN’s universalist human-rights creed is honored far more in the breach than in the observance. At the same time, the UN Security Council is also charged to act in the interest of peace and security, which can create an opening in response to extreme situations in which large numbers of lives are at risk.
In 1946, with the dimensions of the horror of the Holocaust still unfolding, the UN General Assembly passed a resolution declaring genocide a crime under international law. Genocide “shocks the conscience of mankind,” the resolution memorably declared. This effort to “internationalize” the crime of genocide might have been the world body’s finest hour. The ensuing Genocide Convention of 1948 provides for “the prevention and punishment of the crime of genocide” whether “committed in time of peace or time of war” and elaborates a definition, which includes “acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group.”
The Convention isn’t self-executing, in that it doesn’t compel its signatories to take any particular action if the terms of the treaty are violated. But it does provide an international legal and, more important, moral framework for preventive action in response to the risk of genocide.
Breakthrough though it was, one unintended consequence of the Genocide Convention has been a serious problem. The definition of genocide is good as far as it goes, and the prevention mandate seems to allow latitude for timely action against would-be perpetrators. But whether “genocide” as defined in the treaty is actually occurring or about to occur is a complicated question both epistemologically and legally. For if you act to prevent genocide and succeed, there is no genocide—and so you cannot prove you have prevented one. Moreover, those you act against can claim you have violated their sovereign rights, and the argument will carry weight.
If, on the other hand, there is a legal finding of genocide, then it is too late for prevention. All that is left is mitigation. Moreover, if “genocide” is the trigger for action, then the bar is rather high: Atrocities short of genocide may somehow end up as tolerable, or at least tolerated. In 2005, a year after Colin Powell announced the U.S. finding of a genocide in Darfur, a UN special inquiry issued a report saying that while criminal atrocities had taken place in Sudan for which perpetrators needed to be held accountable, it lacked the basis for a conclusion that those crimes amounted to genocide. The bloodstained rulers in Khartoum were delighted to characterize the report as a vindication.
A further attempt to “internationalize” the Declaration’s “right to life” came in 2005, when the World Summit at the United Nations embraced in its “Outcome Document” the principle of the “responsibility to protect.” The doctrine of “responsibility to protect,” known colloquially as “R2P,” holds that a state has an obligation to protect those living on its territory from atrocities (specified in the Outcome Document as “genocide, war crimes, ethnic cleansing, and crimes against humanity”). If a state is unable or unwilling to fulfill this requirement, the protection function falls to the international community, which can take measures up to and including the use of force in order to protect populations. With sovereign right comes sovereign responsibility. The principle of noninterference gives way in circumstances of mass atrocities.
I had a small role in the adoption of R2P. Congress (principally in the person of Frank Wolf, a Republican member of the House of Representatives from Virginia) chartered a bipartisan task force on UN reform run by the U.S. Institute of Peace and co-chaired by former House Speaker Newt Gingrich and former Senate Majority Leader George Mitchell. I ran the Task Force’s expert group on human rights. Not without difficulty, we were able to include in the June 2005 consensus report a strong endorsement of the “responsibility to protect.” This was the first major bipartisan statement on behalf of R2P, which before had mainly been the province of liberal internationalists and human-rights groups on the Left.
The Task Force recommendation in turn influenced the Bush State Department to back the concept at the World Summit. In the absence of the Gingrich-Mitchell recommendation, the State Department’s traditional institutional wariness as well as ideological conservative skepticism would likely have led to U.S. opposition, which would have doomed the project.
As for the objections, the main concern has been (and remains) that the United States, by embracing R2P, will subject itself to the whims of the “international community” on whether and when to intervene in fulfillment of the protection function. Thus Steven Groves of the Heritage Foundation has expressed alarm that “the United States would cede control—any control—of its armed forces to the caprice of the world community without the consent of the American people.” In the extreme case, in this view, the U.S. might incur a legal obligation to go to war whether it wants to or not. The latter concern is so far down a trail of speculation piled on intemperate inference on top of worst-case hypothesizing that it hardly bears consideration. In its less extreme form, this is the question of how much the U.S. should engage with others to find common ends or interests and pursue them jointly.
Power is power, and the United States has more of it than any other state. But international political support is of value, and the U.S. does benefit from seeking it in fora that others regard as legitimate. We will never give the UN Security Council the last word. Other countries don’t like that, but then a Kosovo comes along, Russia blocks Security Council action, and people of good will realize that the price of calling off war because the Security Council hasn’t authorized it will be several hundred thousand dead Kosovars.
In other words, one should try one’s best at the UN for the simple reason that one might succeed. But failure at the UN does not end the discussion, as the U.S. determination in the months leading up to the war in Iraq demonstrated, and certainly should not when a genocide is brewing.
A more practical concern is that R2P would simply be used against Israel. This is true, but no more of R2P than of everything else, alas. Given bad will, any principle can be distorted almost into its opposite in the application. Vladimir Putin’s Russia cleverly cited the responsibility to protect as a reason for its invasion of Georgia in 2008—it was just acting to protect Russians in the breakaway Georgian regions of Abkhazia and South Ossetia, don’t you see! It fell to the Swedish foreign ministry to inform the Russians that the “responsibility to protect” here was Georgia’s, since it was on Georgian territory that the supposed offense against Russian ethnics was taking place—and that in case Georgia failed, the responsibility would fall to the “international community.”
All of these documents, from the Declaration to the UN Charter to the R2P language in the Outcome Document, are subject to the criticism that, again, they are mere words on paper. Whom have these words actually protected? The answer is that these words are tools of moral suasion. The principles they espouse represent some of our best conclusions about how the world should be and what we should do in pursuit of such a world. They are, of course, works in progress and remain subject to refinement. But we can’t say we haven’t really thought about genocide and mass atrocities, whether they matter to us or what we should do when confronted with them. By now, we know.
Institutions cannot respond effectively to the threat of genocide and mass atrocities in the absence of political will on the part of their members. Nevertheless, institutions can be more or less adroit, responsive, and effective. Here, we have a long way to go, though a range of promising steps has been taken.
Let me offer two snapshots of the problem and the response. The first comes from 2005, during work on the Gingrich-Mitchell report. The second comes from work I did last year on the Genocide Prevention Task Force,1 which issued a report in December 2008 with recommendations to the U.S. government on forestalling the threat of atrocities. The institutional change over the course of three years has been staggering.
In 2005, all was confusion, and the Darfur situation in particular was a frustrating daisy chain of inaction: Everybody who was potentially in a position to do something useful—from the Secretary General’s office at the United Nations to the UN Security Council to the European Union to NATO to the African Union mission on the ground in Darfur to the United States government itself—was full of explanations about why somebody else had to do something first.
In 2004, the African Union (AU) deployed a small number of troops to Sudan to protect outside monitors of a cease-fire agreement. They were able to do little to contain the depredations Sudanese government forces were inflicting on Darfur in conjunction with the Janjaweed militia, irregular forces of nomadic Arabic-speaking tribes at odds with the sedentary population of Western Sudan. As was well known to everyone involved in early 2005, the AU force was too small and woefully underequipped and unprepared. To be even minimally effective, the African Union needed a package of assistance that would include communications and intelligence assets, lift, planning and headquarters help, and training. Where to get it?
Well, maybe a military alliance with serious capabilities along those lines, like NATO. Or maybe NATO acting in conjunction with the European Union, which was already providing the main funding for the AU mission. Or maybe the European Union itself, if it could get its act together on its desire for a “common foreign and security policy.” Or maybe just the United States, leading a coalition of the willing or even acting on its own, if necessary.
It turned out that in the previous year, in the summer of 2004, the NATO military command under General James Jones (now Barack Obama’s national-security adviser) had begun a “prudent planning” exercise on Darfur—essentially, an inquiry into what might be done to help out the AU. It was undertaken without the authorization of the North Atlantic Council, NATO’s political decision-making body.
That exercise was interrupted when several allies, notably France, objected to NATO assigning itself a role in Africa. Some saw in the objection an effort to protect the EU’s turf. The planning didn’t cease, but it moved out of NATO auspices to the U.S. European Command, our military’s headquarters on the continent. As matters stood, there was no prospect of a NATO mission—but it seemed to us that matters need not have stood there.
We knew that UN Secretary General Kofi Annan had given a couple of speeches urging NATO to assist the African Union’s Darfur efforts. One ambassador at NATO told us he thought this represented an opening. The Europeans who were reluctant to involve NATO would not change their minds based merely on a speech by Annan, but if the Secretary General actually sent a formal letter to NATO asking for alliance help, that might change the debate. It would be one thing to say NATO shouldn’t insert itself into Africa, quite another to decline a UN request for help.
Does this sound ridiculous? Hundreds of thousands of lives potentially at stake over whether the contents of a speech are transferred to a letter? It does, and this is an indication of just how ill-equipped the “international community” as a whole was to deal with an emergency on the scale of Darfur.
Skeptics at the European Union’s headquarters in Brussels, meanwhile, informed us that the African Union would be reluctant to accept assistance from the West’s military alliance, since doing so would smack of neo-imperialism and colonialism. A better avenue would be through the European Union, according to the European Union—not that the EU actually had a plan.
So why not have Annan send a letter? We asked that question at a meeting with senior UN officials on the top floor of the organization’s building in New York. The answer was that Annan’s representatives had sounded out NATO and determined that there was simply no support for the alliance’s involvement in Africa. Annan couldn’t possibly ask for help only to be rebuffed, explained Mark Malloch Brown, Annan’s top adviser (now an intimate of British Prime Minister Gordon Brown).
I found myself, to my surprise, shouting at Malloch Brown from the staff seats in the second row: Their information was simply wrong, there was substantial will at NATO to do just that. What was needed was a letter—Annan had already given speeches saying the same thing, all he needed to do was send a letter, just a letter. My importuning, though impolitic, got Malloch Brown’s attention and drew an invitation for follow-up on the matter. On the train on the way back to Washington, we drafted an e-mail explaining the situation as we had found it, why everything was so horribly stuck, and how it might at last get unstuck.
We quickly heard through intermediaries that though Annan was favorably disposed to the idea of a formal request, he didn’t think he had the authority to write such a letter—he didn’t want to get out too far in front of the Security Council on a matter that was subject to difficult ongoing negotiations there.
So now what? Another avenue to change the debate in NATO would be a letter directly from the African Union asking for assistance in Darfur—notwithstanding the patronizing assurance we had received that the African Union could not conceivably accept the neocolonialist assistance of monstrous Americans who had invaded poor Iraq.
Success. For what Annan could not write personally, he evidently could get written. After days of back-channel exchanges with Annan’s office, a letter arrived at NATO headquarters on April 26, 2005 from African Union chairman Alpha Konare specifically requesting NATO’s help in Darfur. Hard upon it, NATO’s North Atlantic Council—the same body that had insisted on an end to the previous year’s “prudent planning” exercise on how the genocide might be interrupted—formally approved the assistance.
I make no claim about the efficacy or adequacy of that NATO assistance. The best one can say about it is that things could have been worse. More than a million people in displaced-person and refugee camps are better than more than a million dead. The presence of peacekeepers, though woefully inadequate, seems nevertheless to have had some deterrent effect on the monstrous Janjaweed militia and the government.
The chief fact we found as we tried to manage the rules of the international system in 2005 was a high level of dysfunctionality. Nobody really knew what was on the minds of the key players in the African Union. The United Nations Secretary General didn’t know what was possible at NATO. NATO itself was uncertain about getting involved in Africa. Some Europeans seemed more interested in protecting their African turf than in action that might help those at risk. Meanwhile, the only organization that seemed genuinely interested in taking action, the African Union, was hobbled by a grievous lack of resources and capacity, and didn’t know how or whom to ask for help.
So what do you need to deal with a situation like Darfur? You need soldiers, and they had better be well trained and well led, otherwise you can end up (as the UN unfortunately has on more than one occasion) with peacekeepers who also dabble as sexual predators on the populations they are supposed to be protecting. You need equipment, like armored personnel carriers, and better still, helicopters. You need a mandate that enables your soldiers to take effective action, so they’re actually able to protect the locals in danger (not just to protect, as was notoriously the case in Darfur, the cease-fire monitors). Above all, you need the political will to take action.
And you really need to have figured out how to put together all of the above before a crisis spirals out of control. That means you’ve got to do the tedious work of getting people, governments, and institutions to think about what they need and plan in advance on how to get it. It means a hundred different letters and memorandums of understanding. The machinery of international politics was not developed to address problems such as Darfur. If we want to address them, and we must, then we have to retool and refine what we’ve got. To that end, the Gingrich-Mitchell report included a number of recommendations on things like “capacity-building,” an unlovely bit of foreign policy jargon, but one that nonetheless captures the imperative to close the gap between what you have and what you need.
Fast-forward a few years: Making the fact-finding rounds again, this time with the Genocide Prevention Task Force, I was astounded to see that all of the things we recommended in Gingrich-Mitchell were starting to happen. I don’t say these changes occurred because Gingrich-Mitchell recommended them. But we had clearly been onto something in terms of identifying the gaps and roadblocks in the international system.
Far from resisting American or European assistance on neocolonial or any other grounds, the African Union and other organizations on the continent welcome help. They are increasingly finding the political will to confront the continent’s malefactors. They have been working to develop “early warning” systems. They have the troops, but they need training and equipment before they will be fully prepared to act swiftly in response to trouble, and that’s where the developed world can be useful.
NATO, meanwhile, is in the process of figuring out how to do more in partnership with others and is favorably disposed to helping out with peace building and peacekeeping missions conducted under UN or other auspices. A deputy secretary general at NATO now has the responsibility to serve as the focal point for engagement with other organizations and institutions. A document outlining how NATO will work with the UN has been approved. And there is now a NATO liaison officer to the African Union.
The emphasis on Africa is obvious, but mass atrocities are not, of course, a problem unique to Africa. For the first time, the charter of the Association of Southeast Asian Nations now includes a provision on human rights. The UN Secretary General now has a special adviser on the “responsibility to protect” as well as a special adviser on the prevention of genocide. These offices are small, and they necessarily view their subjects from a UN perspective, which is too limiting for U.S. policymakers. But again, the more constructive the UN can be, the better.
One could go on. The point is that governments and international and regional organizations have begun to treat the problem of preventing genocide and mass atrocities with the seriousness the subject demands. On the home front, the Genocide Prevention Task Force offered a large dose of specific guidance on internal government reform that holds out the promise of more effective and timely policymaking. This is no place for a discussion of the specifics of the interagency process and military planning procedures. Suffice it to say that better internal organization is within reach.
The missing institutional piece on the international scene now, it seems to me, flows from the absence of coordination and mutual awareness among the various parties that are now taking the issue seriously. The Task Force recommended that the U.S. government undertake a “major diplomatic initiative” whose purpose would be to put together a formal network linking all the parties that engage on the issue—governments, non-governmental organizations, and regional and international institutions. The idea would be to share information and strategize responses to emerging threats.
The report does not quite say so, but it would be prudent to have someplace to go where people with a record of taking the issue seriously and with genuine moral authority gather, in the all-too-likely event that the UN Security Council finds itself paralyzed once again in the face of mass atrocities. Such a network would have no legal authority, but it might well have moral authority of the sort that contributes to the generation of political will.
In the end, unsurprisingly, effective action may come down to U.S. power and will. Those of us who see an imperative for action in these cases should welcome encouragement to that end from wherever it may come. And realistically, only very poor diplomacy would leave the United States without supporters and allies in preventing or stopping genocide.
The response to Darfur has to be judged a failure. But it has perhaps been a constructive failure that has galvanized people to think about how to make the system more nimble in response to gathering dangers. Those with a profound distaste for “nonconsensual military intervention”—that would be an “invasion” to the plain speakers among us—should be all the more concerned about timely action to identify the gathering danger of mass atrocities and nip the problem in the bud. Those with a will to argue for whatever is necessary to halt a slide into mass slaughter must realize that they will be most effective in galvanizing a response if they amass a chorus of the like-minded to speak as one on the moral imperative.
But we cannot assure ourselves that our best planning will always enable us to act early, nor can we count on having a phalanx of the like-minded alongside us. In the extreme case, halting or failing to halt genocide has come down to whether the political will exists within the United States to act. We will not be spared from such decisions in the future. If we are serious, we have to be willing to take upon ourselves the burden of providing the leadership, the arms, the troops, and the resources, and of bearing the casualties, the reversals of fortune, and the inevitable complaints and second-guessing.
Because the would-be genocidaires are out there, thinking about it: whom to kill; how many; how to do it. Whether they can get away with it.
1The Task Force, chaired by former Secretary of State Madeleine Albright and former Secretary of Defense William Cohen, was a joint project of the U.S. Holocaust Memorial Museum, the U.S. Institute of Peace, and the American Academy of Diplomacy.
The Only Way To Prevent Genocide
Must-Reads from Magazine
At the Met, distinguished singers and conductors, mostly born and trained in Europe, appeared in theatrically conservative big-budget productions of the popular operas of the 19th century, with a sprinkling of pre-romantic and modern works thrown in to leaven the loaf. City Opera, by contrast, presented younger artists—many, like Beverly Sills, born in this country—in a wider-ranging, more adventurously staged repertoire that often included new operas, some of them written by American composers, to which the public was admitted at what were then called “popular prices.”
Between them, the companies represented a feast for culture-consuming New Yorkers, though complaints were already being heard that their new theaters were too big. Moreover, neither the Met nor City Opera was having any luck at commissioning memorable new operas and thereby expanding and refreshing the operatic repertoire, to which only a handful of significant new works—none of them, then or since, premiered by either company—had been added since World War I.
A half-century later, the feast has turned to famine. In 2011, New York City Opera left Lincoln Center, declaring bankruptcy. It closed its doors forever two years later. The Met has weathered a nearly uninterrupted string of crises that climaxed earlier this year with the firing of James Levine, the company’s once-celebrated music director emeritus. He was accused in 2017 of molesting teenage musicians and was dismissed from all of his conducting posts in New York and elsewhere. Today the Met is in dire financial straits that threaten its long-term survival.
And while newer opera companies in such other American cities as Chicago, Houston, San Francisco, Santa Fe, and Seattle now offer alternative models of leadership, none has established itself as a potential successor either to the Met or the now-defunct NYCO.1
Is American opera as a whole in a terminal condition? Or are the collapse of the New York City Opera and the Met’s ongoing struggle to survive purely local matters of no relevance elsewhere? Heidi Waleson addresses these questions in Mad Scenes and Exit Arias: The Death of the New York City Opera and the Future of Opera in America.2 Waleson draws on her experience as the opera critic of the Wall Street Journal to speculate on the prospects for an art form that has never quite managed to set down firm roots in American culture.
In this richly informative chronicle of NYCO’s decline and fall, Waleson persuasively argues that what happened to City Opera (and, by extension, the Met) could happen to other opera companies as well. The days in which an ambitious community sought successfully to elevate itself into the first rank of world cities by building and manning an opera house are long past, and Mad Scenes and Exit Arias helps us understand why.

As Waleson reminds us, it was Fiorello LaGuardia, the New York mayor who played a central role in the creation of the NYCO, who dubbed the company “the people’s opera” when it was founded in 1943. According to LaGuardia, NYCO existed to perform popular operas at popular prices for a mass audience. In later years, it moved away from that goal, but the slogan stuck. Indeed, no opera company has ever formulated a clearer statement of its institutional mission.
Even after it moved to Lincoln Center in 1966, NYCO had an equally coherent and similarly appealing purpose: It was where you went to see the opera stars of tomorrow, foremost among them Sills and Plácido Domingo, in inexpensively but imaginatively staged productions of the classics. The company went out of its way to present modern operas, too, but it never did so at the expense of its central repertoire—and tickets to its performances cost half of what the Met charged. Well into the 21st century, City Opera stuck more or less closely to its redefined mission. Under Paul Kellogg, the general and artistic director from 1996 to 2007, it did so with consistent artistic success. But revenues declined throughout the latter part of Kellogg’s tenure, in part because younger New Yorkers were unwilling to become subscribers.
In those days, the Metropolitan Opera, NYCO’s next-door neighbor, was still one of the world’s most conservative opera houses. That changed when Peter Gelb became its general manager in 2006. Gelb was resolved to modernize the Met’s productions and, to a lesser extent, its repertoire, and he simultaneously sought to heighten its national profile by digitally simulcasting live performances into movie theaters throughout America.
Kellogg, frustrated by the chronic acoustic inadequacies of the New York State Theater, sought in vain to move City Opera to a three-theater complex that was to be built (but never was) on the World Trade Center site. He retired soon after Gelb came to the Met and was succeeded by Gérard Mortier, a European impresario accustomed to working in state-subsidized theaters. Mortier made a pair of fateful decisions. First, he canceled City Opera’s entire 2008–2009 season while the interior of the State Theater underwent much-needed renovations. Then he announced a follow-up season of 20th-century operas that lacked audience appeal.
That follow-up season never happened, because Mortier resigned in 2008 and fled New York. He was replaced by George Steel, who had previously served for just three months as general manager of the Dallas Opera. Under Steel, NYCO slashed its schedule to ribbons in a futile attempt to get back on its financial feet after Mortier’s financially ruinous year-long hiatus. Then he mounted a series of productions of nonstandard repertory that received mixed reviews and flopped at the box office.
The combined effect of Gelb’s innovations and the inept leadership of Mortier and Steel all but obliterated City Opera’s reason for existing. Under Gelb, the Met’s repertory ranged from such warhorses as Rigoletto and Tosca to 20th-century masterpieces like Benjamin Britten’s A Midsummer Night’s Dream and Alban Berg’s Wozzeck, and tickets could be bought for as little as $20. With the Met performing a more interesting repertoire under a wider range of directors, and in part at “people’s prices,” City Opera no longer did anything that the Met wasn’t already doing on a far larger and better-financed scale. What, then, was its mission now? The truth was that it had none, and when the company went under in 2013, few mourned its passing.
As it happened, Gelb’s own innovations were a mere artistic Band-aid, for he was unwilling or unable to trim the Met’s bloated budget to any meaningful extent. He made no serious attempt to cut the company’s labor costs until a budget crisis in 2014 forced him to confront its unions, which he did with limited success. In addition, his new productions of the standard-repertory operas on which the Met relied to draw and hold older subscribers were felt by many to be trashily trendy.
The Met has had particular difficulty managing the reduced circumstances of 21st-century opera. Its 3,800-seat theater has an 80-foot-deep stage with a proscenium opening that measures 54 feet on each side. (Bayreuth, by contrast, seats 1,925, La Scala 2,030, and the Vienna State Opera 2,200.) As a result, it is all but impossible to mount low-to-medium-budget shows in the Metropolitan Opera House, even as the company finds itself increasingly unable to fill the house. Two decades ago, the Met earned 90 percent of its potential box-office revenue. That figure had plummeted to 66 percent by 2015, forcing Gelb to raise ticket prices to an average of $158.50 per head. On Broadway, the average price of a ticket that season was $103.86.
Above all, Gelb was swimming against the cultural tide. Asked about the effects on audience development of the Met simulcasts, he admitted that three-quarters of the people who attended them were “over 65, and 30 percent of them are over 75.” As he explained: “Grand opera is in itself a kind of a dinosaur of an art form…. The question is not whether I think I’m doing a good job or not in trying to keep the [Metropolitan Opera] alive. It’s whether I’m doing a good job or not in the face of a cultural and social rejection of opera as an art form. And what I’m doing is fighting an uphill battle to try and maintain an audience in a very difficult time.”
Was that statement buck-passing defeatism, or a fair appraisal of the state of American opera? Other opera executives distanced themselves from Gelb’s remarks, and it was true—and still is—that smaller American companies have done a somewhat better job of attracting younger audiences than the top-heavy Met. But according to the National Endowment for the Arts, the percentage of U.S. adults who attend at least one operatic performance each year declined from 3.2 percent in 2002 to 2.1 percent in 2012. This problem, of course, is not limited to opera. As I wrote in these pages in 2010, the disappearance of secondary-school arts education and the rise of digital media may well be leading to “not merely a decline in public interest in the fine arts but the death of the live audience as a cultural phenomenon.”3

Does American opera have a future in an era of what Heidi Waleson succinctly describes as “flat ticket income and rising expenses”? In the last chapter of Mad Scenes and Exit Arias, she chronicles the activities of a group of innovative smaller troupes that are “rethinking what an opera company is, what it does, and who it serves.” Yet in the same breath, she acknowledges the possibility that “filling a giant theater for multiple productions of grand operas [is] no longer an achievable goal.”
If that is so, then it may be worth asking a different question: Did American opera ever have a past? It is true that opera in America has had a great and glorious history, but virtually the whole of that history consisted of American productions of 18th- and 19th-century European operas. By contrast, no opera by an American classical composer has ever entered the international major-house repertoire. Indeed, while new American operas are still commissioned and premiered at an impressive rate, few things are so rare as a second production of any of these works.
While a handful continue to be performed—John Adams’s Nixon in China (1987), André Previn’s A Streetcar Named Desire (1998), Mark Adamo’s Little Women (1998), and Jake Heggie’s Dead Man Walking (2000)—their success is a tribute to the familiarity of their subject matter and source material, not their musico-theatrical quality. As for the rest, the hard but inescapable truth is that with the exception of George Gershwin’s Porgy and Bess (1935), virtually all large-scale American operas have been purpose-written novelties that were shelved and forgotten immediately after their premieres.
The success of Porgy and Bess, which received its premiere not in an opera house but on Broadway, reminds us that American musical comedy, unlike American opera, is deeply rooted in our national culture, in much the same way that grand opera is no less deeply rooted in the national cultures of Germany and Italy, where it is still genuinely popular (if less so today than a half-century ago). By comparison with Porgy, Carousel, Guys and Dolls, or My Fair Lady, American opera as a homegrown form simply does not exist: It is merely an obscure offshoot of its European counterpart. Aaron Copland, America’s greatest composer, was not really joking when he wittily described opera as “la forme fatale,” and his own failed attempts to compose an audience-friendly opera that would be as successful as his folk-flavored ballet scores say much about the difficulties facing any composer who seeks to follow in his footsteps.
It is not that grand opera is incapable of appealing to American theatergoers. Even now, there are many Americans who love it passionately, just as there are regional companies such as Chicago’s Lyric Opera and San Francisco Opera that have avoided making the mistakes that closed City Opera’s doors. Yet the crises from which the Metropolitan Opera has so far failed to extricate itself suggest that in the absence of the generous state subsidies that keep European opera houses in business, large-house grand opera in America may simply be too expensive to thrive—or, ultimately, to survive. At its best, no art form is more thrilling or seductive. But none is at greater risk of following the dinosaurs down the cold road to extinction.
1 The “New York City Opera” founded in 2016 that now mounts operas in various New York theaters on an ad hoc basis is a brand-new enterprise that has no connection with its predecessor.