As if this were not bad enough, in the current year the United States has suffered two other major blows–in Iran and Nicaragua–of large and strategic significance. In each country, the Carter administration not only failed to prevent the undesired outcome, it actively collaborated in the replacement of moderate autocrats friendly to American interests with less friendly autocrats of extremist persuasion. It is too soon to be certain about what kind of regime will ultimately emerge in either Iran or Nicaragua, but accumulating evidence suggests that things are as likely to get worse as to get better in both countries. The Sandinistas in Nicaragua appear to be as skillful in consolidating power as the Ayatollah Khomeini is inept, and leaders of both revolutions display an intolerance and arrogance that do not bode well for the peaceful sharing of power or the establishment of constitutional governments, especially since those leaders have made clear that they have no intention of seeking either.
It is at least possible that the SALT debate may stimulate new scrutiny of the nation’s strategic position and defense policy, but there are no signs that anyone is giving serious attention to this nation’s role in Iranian and Nicaraguan developments–despite clear warnings that the U.S. is confronted with similar situations and options in El Salvador, Guatemala, Morocco, Zaire, and elsewhere. Yet no problem of American foreign policy is more urgent than that of formulating a morally and strategically acceptable, and politically realistic, program for dealing with non-democratic governments who are threatened by Soviet-sponsored subversion. In the absence of such a policy, we can expect that the same reflexes that guided Washington in Iran and Nicaragua will be permitted to determine American actions from Korea to Mexico–with the same disastrous effects on the U.S. strategic position. (That the administration has not called its policies in Iran and Nicaragua a failure–and probably does not consider them such–complicates the problem without changing its nature.)
There were, of course, significant differences in the relations between the United States and each of these countries during the past two or three decades. Oil, size, and proximity to the Soviet Union gave Iran greater economic and strategic import than any Central American “republic,” and closer relations were cultivated with the Shah, his counselors, and family than with President Somoza, his advisers, and family. Relations with the Shah were probably also enhanced by our approval of his manifest determination to modernize Iran regardless of the effects of modernization on traditional social and cultural patterns (including those which enhanced his own authority and legitimacy). And, of course, the Shah was much better looking and altogether more dashing than Somoza; his private life was much more romantic, more interesting to the media, popular and otherwise. Therefore, more Americans were more aware of the Shah than of the equally tenacious Somoza.
But even though Iran was rich, blessed with a product the U.S. and its allies needed badly, and led by a handsome king, while Nicaragua was poor and rocked along under a long-tenure president of less striking aspect, there were many similarities between the two countries and our relations with them. Both these small nations were led by men who had not been selected by free elections, who recognized no duty to submit themselves to searching tests of popular acceptability. Both did tolerate limited opposition, including opposition newspapers and political parties, but both were also confronted by radical, violent opponents bent on social and political revolution. Both rulers, therefore, sometimes invoked martial law to arrest, imprison, exile, and occasionally, it was alleged, torture their opponents. Both relied for public order on police forces whose personnel were said to be too harsh, too arbitrary, and too powerful. Each had what the American press termed “private armies,” which is to say, armies pledging their allegiance to the ruler rather than the “constitution” or the “nation” or some other impersonal entity.
In short, both Somoza and the Shah were, in central ways, traditional rulers of semi-traditional societies. Although the Shah very badly wanted to create a technologically modern and powerful nation and Somoza tried hard to introduce modern agricultural methods, neither sought to reform his society in the light of any abstract idea of social justice or political virtue. Neither attempted to alter significantly the distribution of goods, status, or power (though the democratization of education and skills that accompanied modernization in Iran did result in some redistribution of money and power there).
Both Somoza and the Shah enjoyed long tenure, large personal fortunes (much of which were no doubt appropriated from general revenues), and good relations with the United States. The Shah and Somoza were not only anti-Communist, they were positively friendly to the U.S., sending their sons and others to be educated in our universities, voting with us in the United Nations, and regularly supporting American interests and positions even when these entailed personal and political cost. The embassies of both governments were active in Washington social life, and were frequented by powerful Americans who occupied major roles in this nation’s diplomatic, military, and political life. And the Shah and Somoza themselves were both welcome in Washington, and had many American friends.
But once an attack was launched by opponents bent on destruction, everything changed. The rise of serious, violent opposition in Iran and Nicaragua set in motion a succession of events which bore a suggestive resemblance to one another and a suggestive similarity to our behavior in China before the fall of Chiang Kai-shek, in Cuba before the triumph of Castro, in certain crucial periods of the Vietnamese war, and, more recently, in Angola. In each of these countries, the American effort to impose liberalization and democratization on a government confronted with violent internal opposition not only failed, but actually assisted the coming to power of new regimes in which ordinary people enjoy fewer freedoms and less personal security than under the previous autocracy–regimes, moreover, hostile to American interests and policies.
The pattern is familiar enough: an established autocracy with a record of friendship with the U.S. is attacked by insurgents, some of whose leaders have long ties to the Communist movement, and most of whose arms are of Soviet, Chinese, or Czechoslovak origin. The “Marxist” presence is ignored and/or minimized by American officials and by the elite media on the ground that U.S. support for the dictator gives the rebels little choice but to seek aid “elsewhere.” Violence spreads and American officials wonder aloud about the viability of a regime that “lacks the support of its own people.” The absence of an opposition party is deplored and civil-rights violations are reviewed. Liberal columnists question the morality of continuing aid to a “rightist dictatorship” and provide assurances concerning the essential moderation of some insurgent leaders who “hope” for some sign that the U.S. will remember its own revolutionary origins. Requests for help from the beleaguered autocrat go unheeded, and the argument is increasingly voiced that ties should be established with rebel leaders “before it is too late.” The President, delaying U.S. aid, appoints a special emissary who confirms the deterioration of the government position and its diminished capacity to control the situation and recommends various measures for “strengthening” and “liberalizing” the regime, all of which involve diluting its power.
The emissary’s recommendations are presented in the context of a growing clamor for American disengagement on grounds that continued involvement confirms our status as an agent of imperialism, racism, and reaction; is inconsistent with support for human rights; alienates us from the “forces of democracy”; and threatens to put the U.S. once more on the side of history’s “losers.” This chorus is supplemented daily by interviews with returning missionaries and “reasonable” rebels.
As the situation worsens, the President assures the world that the U.S. desires only that the “people choose their own form of government”; he blocks delivery of all arms to the government and undertakes negotiations to establish a “broadly based” coalition headed by a “moderate” critic of the regime who, once elevated, will move quickly to seek a “political” settlement to the conflict. Should the incumbent autocrat prove resistant to American demands that he step aside, he will be readily overwhelmed by the military strength of his opponents, whose patrons will have continued to provide sophisticated arms and advisers at the same time the U.S. cuts off military sales. Should the incumbent be so demoralized as to agree to yield power, he will be replaced by a “moderate” of American selection. Only after the insurgents have refused the proffered political solution and anarchy has spread throughout the nation will it be noticed that the new head of government has no significant following, no experience at governing, and no talent for leadership. By then, military commanders, no longer bound by loyalty to the chief of state, will depose the faltering “moderate” in favor of a fanatic of their own choosing.
In either case, the U.S. will have been led by its own misunderstanding of the situation to assist actively in deposing an erstwhile friend and ally and installing a government hostile to American interests and policies in the world. At best we will have lost access to friendly territory. At worst the Soviets will have gained a new base. And everywhere our friends will have noted that the U.S. cannot be counted on in times of difficulty and our enemies will have observed that American support provides no security against the forward march of history.
Events in Nicaragua also departed from the scenario presented above both because the Cuban and Soviet roles were clearer and because U.S. officials were more intensely and publicly working against Somoza. After the Somoza regime had defeated the first wave of Sandinista violence, the U.S. ceased aid, imposed sanctions, and took other steps which undermined the status and the credibility of the government in domestic and foreign affairs. Between the murder of ABC correspondent Bill Stewart by a National Guardsman in early June and the Sandinista victory in late July, the U.S. State Department assigned a new ambassador who refused to submit his credentials to Somoza even though Somoza was still chief of state, and called for replacing the government with a “broadly based provisional government that would include representatives of Sandinista guerrillas.” Americans were assured by Assistant Secretary of State Viron Vaky that “Nicaraguans and our democratic friends in Latin America have no intention of seeing Nicaragua turned into a second Cuba,” even though the State Department knew that the top Sandinista leaders had close personal ties and were in continuing contact with Havana, and, more specifically, that a Cuban secret-police official, Julian Lopez, was frequently present in the Sandinista headquarters and that Cuban military advisers were present in Sandinista ranks.
In a manner uncharacteristic of the Carter administration, which generally seems willing to negotiate anything with anyone anywhere, the U.S. government adopted an oddly uncompromising posture in dealing with Somoza. “No end to the crisis is possible,” said Vaky, “that does not start with the departure of Somoza from power and the end of his regime. No negotiation, mediation, or compromise can be achieved any longer with a Somoza government. The solution can only begin with a sharp break from the past.” Trying hard, we not only banned all American arms sales to the government of Nicaragua but pressured Israel, Guatemala, and others to do likewise–all in the name of insuring a “democratic” outcome. Finally, as the Sandinista leaders consolidated control over weapons and communications, banned opposition, and took off for Cuba, President Carter warned us against attributing this “evolutionary change” to “Cuban machinations” and assured the world that the U.S. desired only to “let the people of Nicaragua choose their own form of government.”
Yet despite all the variations, the Carter administration brought to the crises in Iran and Nicaragua several common assumptions, each of which played a major role in hastening the victory of even more repressive dictatorships than had been in place before. These were, first, the belief that there existed at the moment of crisis a democratic alternative to the incumbent government; second, the belief that the continuation of the status quo was not possible; third, the belief that any change, including the establishment of a government headed by self-styled Marxist revolutionaries, was preferable to the present government. Each of these beliefs was (and is) widely shared in the liberal community generally. Not one of them can withstand close scrutiny.
Two or three decades ago, when Marxism enjoyed its greatest prestige among American intellectuals, it was the economic prerequisites of democracy that were emphasized by social scientists. Democracy, they argued, could function only in relatively rich societies with an advanced economy, a substantial middle class, and a literate population, but it could be expected to emerge more or less automatically whenever these conditions prevailed. Today, this picture seems grossly over-simplified. While it surely helps to have an economy strong enough to provide decent levels of well-being for all, and “open” enough to provide mobility and encourage achievement, a pluralistic society and the right kind of political culture–and time–are even more essential.
In his essay on Representative Government, John Stuart Mill identified three fundamental conditions which the Carter administration would do well to ponder. These are: “One, that the people should be willing to receive it [representative government]; two, that they should be willing and able to do what is necessary for its preservation; three, that they should be willing and able to fulfill the duties and discharge the functions which it imposes on them.”
Fulfilling the duties and discharging the functions of representative government make heavy demands on leaders and citizens, demands for participation and restraint, for consensus and compromise. It is not necessary for all citizens to be avidly interested in politics or well-informed about public affairs–although far more widespread interest and mobilization are needed than in autocracies. What is necessary is that a substantial number of citizens think of themselves as participants in society’s decision-making and not simply as subjects bound by its laws. Moreover, leaders of all major sectors of the society must agree to pursue power only by legal means, must eschew (at least in principle) violence, theft, and fraud, and must accept defeat when necessary. They must also be skilled at finding and creating common ground among diverse points of view and interests, and correlatively willing to compromise on all but the most basic values.
In addition to an appropriate political culture, democratic government requires institutions strong enough to channel and contain conflict. Voluntary, non-official institutions are needed to articulate and aggregate diverse interests and opinions present in the society. Otherwise, the formal governmental institutions will not be able to translate popular demands into public policy.
In the relatively few places where they exist, democratic governments have come into being slowly, after extended prior experience with more limited forms of participation during which leaders have reluctantly grown accustomed to tolerating dissent and opposition, opponents have accepted the notion that they may defeat but not destroy incumbents, and people have become aware of government’s effects on their lives and of their own possible effects on government. Decades, if not centuries, are normally required for people to acquire the necessary disciplines and habits. In Britain, the road from the Magna Carta to the Act of Settlement, to the great Reform Bills of 1832, 1867, and 1885, took seven centuries to traverse. American history gives no better grounds for believing that democracy comes easily, quickly, or for the asking. A war of independence, an unsuccessful constitution, a civil war, a long process of gradual enfranchisement marked our progress toward constitutional democratic government. The French path was still more difficult. Terror, dictatorship, monarchy, instability, and incompetence followed on the revolution that was to usher in a millennium of brotherhood. Only in the 20th century did the democratic principle finally gain wide acceptance in France and not until after World War II were the principles of order and democracy, popular sovereignty and authority, finally reconciled in institutions strong enough to contain conflicting currents of public opinion.
Although there is no instance of a revolutionary “socialist” or Communist society being democratized, right-wing autocracies do sometimes evolve into democracies–given time, propitious economic, social, and political circumstances, talented leaders, and a strong indigenous demand for representative government. Something of the kind is in progress on the Iberian peninsula and the first steps have been taken in Brazil. Something similar could conceivably have also occurred in Iran and Nicaragua if contestation and participation had been more gradually expanded.
But it seems clear that the architects of contemporary American foreign policy have little idea of how to go about encouraging the liberalization of an autocracy. In neither Nicaragua nor Iran did they realize that the only likely result of an effort to replace an incumbent autocrat with one of his moderate critics or a “broad-based coalition” would be to sap the foundations of the existing regime without moving the nation any closer to democracy. Yet this outcome was entirely predictable. Authority in traditional autocracies is transmitted through personal relations: from the ruler to his close associates (relatives, household members, personal friends) and from them to people to whom the associates are related by personal ties resembling their own relation to the ruler. The fabric of authority unravels quickly when the power and status of the man at the top are undermined or eliminated. The longer the autocrat has held power, and the more pervasive his personal influence, the more dependent a nation’s institutions will be on him. Without him, the organized life of the society will collapse, like an arch from which the keystone has been removed. The blend of qualities that bound the Iranian army to the Shah or the national guard to Somoza is typical of the relationships–personal, hierarchical, non-transferable–that support a traditional autocracy. The speed with which armies collapse, bureaucracies abdicate, and social structures dissolve once the autocrat is removed frequently surprises American policymakers and journalists accustomed to public institutions based on universalistic norms rather than particularistic relations.
Confusion concerning the character of the opposition, especially its intransigence and will to power, leads regularly to downplaying the amount of force required to counteract its violence. In neither Iran nor Nicaragua did the U.S. adequately appreciate the government’s problem in maintaining order in a society confronted with an ideologically extreme opposition. Yet the presence of such groups was well known. The State Department’s 1977 report on human rights described an Iran confronted
with a small number of extreme rightist and leftist terrorists operating within the country. There is evidence that they have received substantial foreign support and training … [and] have been responsible for the murder of Iranian government officials and Americans….
The same report characterized Somoza’s opponents in the following terms:
A guerrilla organization known as the Sandinista National Liberation Front (FSLN) seeks the violent overthrow of the government, and has received limited support from Cuba. The FSLN carried out an operation in Managua in December 1974, killing four people, taking several officials hostage … since then, it continues to challenge civil authority in certain isolated regions.
In 1978, the State Department’s report said that Sandinista violence was continuing–after the state of siege had been lifted by the Somoza government.
When U.S. policymakers and large portions of the liberal press interpret insurgency as evidence of widespread popular discontent and a will to democracy, the scene is set for disaster. For if civil strife reflects a popular demand for democracy, it follows that a “liberalized” government will be more acceptable to “public opinion.”
Thus, in the hope of strengthening a government, U.S. policymakers are led, mistake after mistake, to impose measures almost certain to weaken its authority. Hurried efforts to force complex and unfamiliar political practices on societies lacking the requisite political culture, tradition, and social structures not only fail to produce desired outcomes; if they are undertaken at a time when the traditional regime is under attack, they actually facilitate the job of the insurgents.
Vietnam presumably taught us that the United States could not serve as the world’s policeman; it should also have taught us the dangers of trying to be the world’s midwife to democracy when the birth is scheduled to take place under conditions of guerrilla war.
At the time the Carter administration came into office it was widely reported that the President had assembled a team who shared a new approach to foreign policy and a new conception of the national interest. The principal elements of this new approach were said to be two: the conviction that the cold war was over, and the conviction that, this being the case, the U.S. should give priority to North-South problems and help less developed nations achieve their own destiny.
More is involved in these changes than originally meets the eye. For, unlikely as it may seem, the foreign policy of the Carter administration is guided by a relatively full-blown philosophy of history which includes, as philosophies of history always do, a theory of social change, or, as it is currently called, a doctrine of modernization. Like most other philosophies of history that have appeared in the West since the 18th century, the Carter administration’s doctrine predicts progress (in the form of modernization for all societies) and a happy ending (in the form of a world community of developed, autonomous nations).
The administration’s approach to foreign affairs was clearly foreshadowed in Zbigniew Brzezinski’s 1970 book on the U.S. role in the “technetronic era,” Between Two Ages. In that book, Brzezinski showed that he had the imagination to look beyond the cold war to a brave new world of global politics and interdependence. To deal with that new world a new approach was said to be “evolving,” which Brzezinski designated “rational humanism.” In the new approach, the “preoccupation” with “national supremacy” would give way to “global” perspectives, and international problems would be viewed as “human issues” rather than as “political confrontations.” The traditional intellectual framework for dealing with foreign policy would have to be scrapped:
Today, the old framework of international politics … with their spheres of influence, military alliances between nation states, the fiction of sovereignty, doctrinal conflicts arising from 19th-century crises–is clearly no longer compatible with reality.
Only the “delayed development” of the Soviet Union, “an archaic religious community that experiences modernity existentially but not quite yet normatively,” prevented wider realization of the fact that the end of ideology was already here. For the U.S., Brzezinski recommended “a great deal of patience,” a more detached attitude toward world revolutionary processes, and a less anxious preoccupation with the Soviet Union. Instead of engaging in ancient diplomatic pastimes, we should make “a broader effort to contain the global tendencies toward chaos,” while assisting the processes of change that will move the world toward the “community of developed nations.”
The central concern of Brzezinski’s book, as of the Carter administration’s foreign policy, is with the modernization of the Third World. From the beginning, the administration has manifested a special, intense interest in the problems of the so-called Third World. But instead of viewing international developments in terms of the American national interest, as national interest is historically conceived, the architects of administration policy have viewed them in terms of a contemporary version of the same idea of progress that has traumatized Western imaginations since the Enlightenment.
In its current form, the concept of modernization involves more than industrialization, more than “political development” (whatever that is). It is used instead to designate “. . . the process through which a traditional or pre-technological society passes as it is transformed into a society characterized by machine technology, rational and secular attitudes, and highly differentiated social structures.” Condorcet, Comte, Hegel, Marx, and Weber are all present in this view of history as the working out of the idea of modernity.
The crucial elements of the modernization concept have been clearly explicated by Samuel P. Huntington (who, despite a period at the National Security Council, was assuredly not the architect of the administration’s policy). The modernization paradigm, Huntington has observed, postulates an ongoing process of change: complex, because it involves all dimensions of human life in society; systemic, because its elements interact in predictable, necessary ways; global, because all societies will, necessarily, pass through the transition from traditional to modern; lengthy, because time is required to modernize economic and social organization, character, and culture; phased, because each modernizing society must pass through essentially the same stages; homogenizing, because it tends toward the convergence and interdependence of societies; irreversible, because the direction of change is “given” in the relation of the elements of the process; progressive, in the sense that it is desirable, and in the long run provides significant benefits to the affected people.
This perspective on contemporary events is optimistic in the sense that it foresees continuing human progress; deterministic in the sense that it perceives events as fixed by processes over which persons and policies can have but little influence; moralistic in the sense that it perceives history and U.S. policy as having moral ends; cosmopolitan in the sense that it attempts to view the world not from the perspective of American interests or intentions but from the perspective of the modernizing nation and the end of history. It identifies modernization with both revolution and morality, and U.S. policy with all three.
The idea that it is “forces” rather than people which shape events recurs each time an administration spokesman articulates or explains policy. The President, for example, assured us in February of this year:
The revolution in Iran is a product of deep social, political, religious, and economic factors growing out of the history of Iran itself.
And of Asia he said:
At this moment there is turmoil or change in various countries from one end of the Indian Ocean to the other; some turmoil as in Indochina is the product of age-old enmities, inflamed by rivalries for influence by conflicting forces. Stability in some other countries is being shaken by the process of modernization, the search for national significance, or the desire to fulfill legitimate human hopes and human aspirations.
Harold Saunders, Assistant Secretary for Near Eastern and South Asian Affairs, commenting on “instability” in Iran and the Horn of Africa, states:
We, of course, recognize that fundamental changes are taking place across this area of western Asia and northeastern Africa–economic modernization, social change, a revival of religion, resurgent nationalism, demands for broader popular participation in the political process. These changes are generated by forces within each country.
Or here is Anthony Lake, chief of the State Department’s Policy Planning staff, on South Africa:
Change will come in South Africa. The welfare of the people there, and American interests, will be profoundly affected by the way in which it comes. The question is whether it will be peaceful or not.
Brzezinski makes the point still clearer. Speaking as chief of the National Security Council, he has assured us that the struggles for power in Asia and Africa are really only incidents along the route to modernization:
… all the developing countries in the arc from northeast Asia to southern Africa continue to search for viable forms of government capable of managing the process of modernization.
No matter that the invasions, coups, civil wars, and political struggles of less violent kinds that one sees all around do not seem to be incidents in a global personnel search for someone to manage the modernization process. Neither Brzezinski nor anyone else seems bothered by the fact that the political participants in that arc from northeast Asia to southern Africa do not know that they are “searching for viable forms of government capable of managing the process of modernization.” The motives and intentions of real persons are no more relevant to the modernization paradigm than they are to the Marxist view of history. Viewed from this level of abstraction, it is the “forces” rather than the people that count.
So what if the “deep historical forces” at work in such diverse places as Iran, the Horn of Africa, Southeast Asia, Central America, and the United Nations look a lot like Russians or Cubans? Having moved past what the President calls our “inordinate fear of Communism,” identified by him with the Cold War, we should, we are told, now be capable of distinguishing Soviet and Cuban “machinations,” which anyway exist mainly in the minds of cold warriors and others guilty of oversimplifying the world, from evolutionary changes, which seem to be the only kind that actually occur.
What can a U.S. President faced with such complicated, inexorable, impersonal processes do? The answer, offered again and again by the President and his top officials, is, not much. Since events are not caused by human decisions, they cannot be stopped or altered by them. Brzezinski, for example, has said: “We recognize that the world is changing under the influence of forces no government can control….” And Cyrus Vance has cautioned: “The fact is that we can no more stop change than Canute could still the waters.”
The American inability to influence events in Iran became the President’s theme song:
Those who argue that the U.S. should or could intervene directly to thwart [the revolution in Iran] are wrong about the realities of Iran…. We have encouraged to the limited extent of our own ability the public support for the Bakhtiar government…. How long [the Shah] will be out of Iran, we have no way to determine. Future events and his own desires will determine that…. It is impossible for anyone to anticipate all future political events…. Even if we had been able to anticipate events that were going to take place in Iran or in other countries, obviously our ability to determine those events is very limited [emphasis added].
Vance made the same point:
In Iran our policy throughout the current crisis has been based on the fact that only Iranians can resolve the fundamental political issues which they now confront.
Where once upon a time an American President might have sent Marines to assure the protection of American strategic interests, there is no room for force in this world of progress and self-determination. Force, the President told us at Notre Dame, does not work; that is the lesson he extracted from Vietnam. It offers only “superficial” solutions. Concerning Iran, he said:
Certainly we have no desire or ability to intrude massive forces into Iran or any other country to determine the outcome of domestic political issues. This is something that we have no intention of ever doing in another country. We’ve tried this once in Vietnam. It didn’t work, as you well know.
There was nothing unique about Iran. In Nicaragua, the climate and language were different but the “historical forces” and the U.S. response were the same. Military intervention was out of the question. Assistant Secretary of State Viron Vaky described as “unthinkable” the “use of U.S. military power to intervene in the internal affairs of another American republic.” Vance provided parallel assurances for Africa, asserting that we would not try to match Cuban and Soviet activities there.
But there is a problem. The conceivable contexts turn out to be mainly those in which non-Communist autocracies are under pressure from revolutionary guerrillas. Since Moscow is the aggressive, expansionist power today, it is more often than not insurgents, encouraged and armed by the Soviet Union, who challenge the status quo. The American commitment to “change” in the abstract ends up by aligning us tacitly with Soviet clients and irresponsible extremists like the Ayatollah Khomeini or, in the end, Yasir Arafat.
So far, assisting “change” has not led the Carter administration to undertake the destabilization of a Communist country. The principles of self-determination and nonintervention are thus both selectively applied. We seem to accept the status quo in Communist nations (in the name of “diversity” and national autonomy), but not in nations ruled by “right-wing” dictators or white oligarchies. Concerning China, for example, Brzezinski has observed: “We recognize that the PRC and we have different ideologies and economic and political systems. . . . We harbor neither the hope nor the desire that through extensive contacts with China we can remake that nation into the American image. Indeed, we accept our differences.” Of Southeast Asia, the President noted in February:
Our interest is to promote peace and the withdrawal of outside forces and not to become embroiled in the conflict among Asian nations. And, in general, our interest is to promote the health and the development of individual societies, not to a pattern cut exactly like ours in the United States but tailored rather to the hopes and the needs and desires of the peoples involved.
But the administration’s position shifts sharply when South Africa is discussed. For example, Anthony Lake asserted in late 1978:
… We have indicated to South Africa the fact that if it does not make significant progress toward racial equality, its relations with the international community, including the United States, are bound to deteriorate.
Over the years, we have tried through a series of progressive steps to demonstrate that the U.S. cannot and will not be associated with the continued practice of apartheid.
As to Nicaragua, Hodding Carter III said in February 1979:
The unwillingness of the Nicaraguan government to accept the [OAS] group’s proposal, the resulting prospects for renewed violence and polarization, and the human-rights situation in Nicaragua … unavoidably affect the kind of relationships we can maintain with that government….
And Carter commented on Latin American autocracies:
My government will not be deterred from protecting human rights, including economic and social rights, in whatever ways we can. We prefer to take actions that are positive, but where nations persist in serious violations of human rights, we will continue to demonstrate that there are costs to the flagrant disregard of international standards.
Inconsistencies are a familiar part of politics in most societies. Usually, however, governments behave hypocritically when their principles conflict with the national interest. What makes the inconsistencies of the Carter administration noteworthy are, first, the administration’s moralism, which renders it especially vulnerable to charges of hypocrisy; and, second, the administration’s predilection for policies that violate the strategic and economic interests of the United States. The administration’s conception of national interest borders on doublethink: it finds friendly powers to be guilty representatives of the status quo and views the triumph of unfriendly groups as beneficial to America’s “true interests.”
This logic is quite obviously reinforced by the prejudices and preferences of many administration officials. Traditional autocracies are, in general and in their very nature, deeply offensive to modern American sensibilities. The notion that public affairs should be ordered on the basis of kinship, friendship, and other personal relations rather than on the basis of objective “rational” standards violates our conception of justice and efficiency. The preference for stability rather than change is also disturbing to Americans whose whole national experience rests on the principles of change, growth, and progress. The extremes of wealth and poverty characteristic of traditional societies also offend us, the more so since the poor are usually very poor and bound to their squalor by a hereditary allocation of role. Moreover, the relative lack of concern of rich, comfortable rulers for the poverty, ignorance, and disease of “their” people is likely to be interpreted by Americans as moral dereliction pure and simple. The truth is that Americans can hardly bear such societies and such rulers. Confronted with them, our vaunted cultural relativism evaporates and we become as censorious as Cotton Mather confronting sin in New England.
But if the politics of traditional and semi-traditional autocracy is nearly antithetical to our own–at both the symbolic and the operational level–the rhetoric of progressive revolutionaries sounds much better to us; their symbols are much more acceptable. One reason that some modern Americans prefer “socialist” to traditional autocracies is that the former have embraced modernity and have adopted modern modes and perspectives, including an instrumental, manipulative, functional orientation toward most social, cultural, and personal affairs; a profession of universalistic norms; an emphasis on reason, science, education, and progress; a deemphasis of the sacred; and “rational,” bureaucratic organizations. They speak our language.
Because socialism of the Soviet/Chinese/Cuban variety is an ideology rooted in a version of the same values that sparked the Enlightenment and the democratic revolutions of the 18th century; because it is modern and not traditional; because it postulates goals that appeal to Christian as well as to secular values (brotherhood of man, elimination of power as a mode of human relations), it is highly congenial to many Americans at the symbolic level. Marxist revolutionaries speak the language of a hopeful future while traditional autocrats speak the language of an unattractive past. Because left-wing revolutionaries invoke the symbols and values of democracy–emphasizing egalitarianism rather than hierarchy and privilege, liberty rather than order, activity rather than passivity–they are again and again accepted as partisans in the cause of freedom and democracy.
Where concern about “socialist encirclement,” Soviet expansion, and traditional conceptions of the national interest inoculated his predecessors against such easy equations, Carter’s doctrine of national interest and modernization encourages support for all change that takes place in the name of “the people,” regardless of its “superficial” Marxist or anti-American content. Any lingering doubt about whether the U.S. should, in case of conflict, support a “tested friend” such as the Shah or a friendly power such as Zimbabwe Rhodesia against an opponent who despises us is resolved by reference to our “true,” our “long-range” interests.
Stephen Rosenfeld of the Washington Post described the commitment of the Carter administration to this sort of “progressive liberalism”:
The Carter administration came to power, after all, committed precisely to reducing the centrality of strategic competition with Moscow in American foreign policy, and to extending the United States’ association with what it was prepared to accept as legitimate wave-of-the-future popular movements around the world–first of all with the victorious movement in Vietnam. … Indochina was supposed to be the stage on which Americans could demonstrate their “post-Vietnam” intent to come to terms with the progressive popular element that Kissinger, the villain, had denied.
In other words, the Carter administration, Rosenfeld tells us, came to power resolved not to assess international developments in the light of “cold-war” perspectives but to accept at face value the claim of revolutionary groups to represent “popular” aspirations and “progressive” forces–regardless of the ties of these revolutionaries to the Soviet Union. To this end, overtures were made looking to the “normalization” of relations with Vietnam, Cuba, and the Chinese People’s Republic, and steps were taken to cool relations with South Korea, South Africa, Nicaragua, the Philippines, and others. These moves followed naturally from the conviction that the U.S. had, as our enemies said, been on the wrong side of history in supporting the status quo and opposing revolution.
Rosenfeld continues:
In this administration’s time, Vietnam has been transformed, for much of American public opinion, from a country wronged by the U.S. to one revealing a brutal essence of its own. This has been a quiet but major trauma to the Carter people (as to all liberals), scarring their self-confidence and their claim on public trust alike.
Presumably, however, the barbarity of the “progressive” governments in Cambodia and Vietnam has been less traumatic for the President and his chief advisers than for Rosenfeld, since there is little evidence of changed predispositions at crucial levels of the White House and the State Department. The President continues to behave as before–not like a man who abhors autocrats but like one who abhors only right-wing autocrats.
In fact, high officials in the Carter administration understand better than they seem to the aggressive, expansionist character of contemporary Soviet behavior in Africa, the Middle East, Southeast Asia, the Indian Ocean, Central America, and the Caribbean. But although the Soviet/Cuban role in Grenada, Nicaragua, and El Salvador (plus the transfer of MIG-23’s to Cuba) had already prompted resumption of surveillance of Cuba (which in turn confirmed the presence of a Soviet combat brigade), the President’s eagerness not to “heat up” the climate of public opinion remains stronger than his commitment to speak the truth to the American people. His statement on Nicaragua clearly reflects these priorities:
It’s a mistake for Americans to assume or to claim that every time an evolutionary change takes place in this hemisphere that somehow it’s a result of secret, massive Cuban intervention. The fact in Nicaragua is that the Somoza regime lost the confidence of the people. To bring about an orderly transition there, our effort was to let the people of Nicaragua ultimately make the decision on who would be their leader–what form of government they should have.
This statement, which presumably represents the President’s best thinking on the matter, is illuminating. Carter’s effort to dismiss concern about military events in this specific country as a manifestation of a national proclivity for seeing “Cuban machinations” under every bed constitutes a shocking effort to falsify reality. There was no question in Nicaragua of “evolutionary change” or of attributing such change to Castro’s agents. There was only a question about the appropriate U.S. response to a military struggle in a country whose location gives it strategic importance out of proportion to its size or strength.
But that is not all. The rest of the President’s statement graphically illustrates the blinding power of ideology on his interpretation of events. When he says that “the Somoza regime lost the confidence of the people,” the President implies that the regime had previously rested on the confidence of “the people,” but that the situation had now changed. In fact, the Somoza regime had never rested on popular will (but instead on manipulation, force, and habit), and was not being ousted by it. It was instead succumbing to arms and soldiers. However, the assumption that the armed conflict of Sandinistas and Somozistas was the military equivalent of a national referendum enabled the President to imagine that it could be, and should be, settled by the people of Nicaragua. For this pious sentiment even to seem true, the President would have had to be unaware that the insurgents were receiving a great many arms from non-Nicaraguans, and that the U.S. had played a significant role in disarming the Somoza regime.
The President’s mistakes and distortions are all fashionable ones. His assumptions are those of people who want badly to be on the progressive side in conflicts between “rightist” autocracy and “leftist” challenges, and to prefer the latter, almost regardless of the probable consequences.
To be sure, neither the President, nor Vance, nor Brzezinski desires the proliferation of Soviet-supported regimes. Each has asserted his disapproval of Soviet “interference” in the modernization process. But each, nevertheless, remains willing to “destabilize” friendly or neutral autocracies without any assurance that they will not be replaced by reactionary totalitarian theocracies, totalitarian Soviet client states, or worst of all, by murderous fanatics of the Pol Pot variety.
The foreign policy of the Carter administration fails not for lack of good intentions but for lack of realism about the nature of traditional versus revolutionary autocracies and the relation of each to the American national interest. Only intellectual fashion and the tyranny of Right/Left thinking prevent intelligent men of good will from perceiving the facts that traditional authoritarian governments are less repressive than revolutionary autocracies, that they are more susceptible of liberalization, and that they are more compatible with U.S. interests. The evidence on all these points is clear enough.
From time to time a truly bestial ruler can come to power in either type of autocracy–Idi Amin, Papa Doc Duvalier, Joseph Stalin, Pol Pot are examples–but neither type regularly produces such moral monsters (though democracy regularly prevents their accession to power). There are, however, systemic differences between traditional and revolutionary autocracies that have a predictable effect on their degree of repressiveness. Generally speaking, traditional autocrats tolerate social inequities, brutality, and poverty while revolutionary autocracies create them.
Traditional autocrats leave in place existing allocations of wealth, power, status, and other resources which in most traditional societies favor an affluent few and maintain masses in poverty. But they worship traditional gods and observe traditional taboos. They do not disturb the habitual rhythms of work and leisure, habitual places of residence, habitual patterns of family and personal relations. Because the miseries of traditional life are familiar, they are bearable to ordinary people who, growing up in the society, learn to cope, as children born to untouchables in India acquire the skills and attitudes necessary for survival in the miserable roles they are destined to fill. Such societies create no refugees.
Precisely the opposite is true of revolutionary Communist regimes. They create refugees by the million because they claim jurisdiction over the whole life of the society and make demands for change that so violate internalized values and habits that inhabitants flee by the tens of thousands in the remarkable expectation that their attitudes, values, and goals will “fit” better in a foreign country than in their native land.
Hoang Van Hoan, deputy chairman of Vietnam’s National Assembly from 1976 until his defection in early August 1979, recently described the impact of Vietnam’s ongoing revolution on that country’s more than one million Chinese inhabitants:
They have been expelled from places they have lived in for generations. They have been dispossessed of virtually all possessions–their lands, their houses. They have been driven into areas called new economic zones, but they have not been given any aid. How can they eke out a living in such conditions reclaiming new land? They gradually die for a number of reasons–diseases, the hard life. They also die of humiliation.
It is not only the Chinese who have suffered in Southeast Asia since the “liberation,” and it is not only in Vietnam that the Chinese suffer. By the end of 1978 more than six million refugees had fled countries ruled by Marxist governments. In spite of walls, fences, guns, and sharks, the steady stream of people fleeing revolutionary utopias continues.
There is a damning contrast between the number of refugees created by Marxist regimes and those created by other autocracies: more than a million Cubans have left their homeland since Castro’s rise (one refugee for every nine inhabitants) as compared to about 35,000 each from Argentina, Brazil, and Chile. In Africa more than five times as many refugees have fled Guinea and Guinea-Bissau as have left Zimbabwe Rhodesia, suggesting that civil war and racial discrimination are easier for most people to bear than Marxist-style liberation.
Moreover, the history of this century provides no grounds for expecting that radical totalitarian regimes will transform themselves. At the moment there is a far greater likelihood of progressive liberalization and democratization in the governments of Brazil, Argentina, and Chile than in the government of Cuba; in Taiwan than in the People’s Republic of China; in South Korea than in North Korea; in Zaire than in Angola; and so forth.
Since many traditional autocracies permit limited contestation and participation, it is not impossible that U.S. policy could effectively encourage this process of liberalization and democratization, provided that the effort is not made at a time when the incumbent government is fighting for its life against violent adversaries, and that proposed reforms are aimed at producing gradual change rather than perfect democracy overnight. To accomplish this, policymakers are needed who understand how actual democracies have actually come into being. History is a better guide than good intentions.
A realistic policy which aims at protecting our own interest and assisting the capacities for self-determination of less developed nations will need to face the unpleasant fact that, if victorious, violent insurgency headed by Marxist revolutionaries is unlikely to lead to anything but totalitarian tyranny. Armed intellectuals citing Marx and supported by Soviet-bloc arms and advisers will almost surely not turn out to be agrarian reformers, or simple nationalists, or democratic socialists. However incomprehensible it may be to some, Marxist revolutionaries are not contemporary embodiments of the Americans who wrote the Declaration of Independence, and they will not be content with establishing a broad-based coalition in which they have only one voice among many.
If, moreover, revolutionary leaders describe the United States as the scourge of the 20th century, the enemy of freedom-loving people, the perpetrator of imperialism, racism, colonialism, genocide, war, then they are not authentic democrats or, to put it mildly, friends. Groups which define themselves as enemies should be treated as enemies. The United States is not in fact a racist, colonial power, it does not practice genocide, it does not threaten world peace with expansionist activities. In the last decade especially we have practiced remarkable forbearance everywhere and undertaken the “unilateral restraints on defense spending” recommended by Brzezinski as appropriate for the technetronic era. We have also moved further, faster, in eliminating domestic racism than any multiracial society in the world or in history.
For these reasons and more, a posture of continuous self-abasement and apology vis-à-vis the Third World is neither morally necessary nor politically appropriate. No more is it necessary or appropriate to support vocal enemies of the United States because they invoke the rhetoric of popular liberation. It is not even necessary or appropriate for our leaders to forswear unilaterally the use of military force to counter military force. Liberal idealism need not be identical with masochism, and need not be incompatible with the defense of freedom and the national interest.
Sex and Work in an Age Without Norms
In the Beginning Was the ‘Hostile Work Environment’
In 1979, the feminist legal thinker Catharine MacKinnon published a book called Sexual Harassment of Working Women. Her goal was to convince the public (especially the courts) that harassment was a serious problem affecting all women whether or not they had been harassed, and that it was discriminatory. “The factors that explain and comprise the experience of sexual harassment characterize all women’s situation in one way or another, not only that of direct victims of the practice,” MacKinnon wrote. “It is this level of commonality that makes sexual harassment a women’s experience, not merely an experience of a series of individuals who happen to be of the female sex.” MacKinnon was not only making a case against clear-cut instances of harassment, but also arguing that the ordinary social dynamic between men and women itself created what she called “hostile work environments.”
The culture was ripe for such arguments. Bourgeois norms of sexual behavior had been eroding for at least a decade, a fact many on the left hailed as evidence of the dawn of a new age of sexual and social freedom. At the same time, however, a Redbook magazine survey published a few years before MacKinnon’s book found that nearly 90 percent of the female respondents had experienced some form of harassment on the job.
MacKinnon’s views might have been radical—she argued for a Marxist feminist jurisprudence reflecting her belief that sexual relations are hopelessly mired in male dominance and female submission—but she wasn’t entirely wrong. The postwar America in which women like MacKinnon came of age offered few opportunities for female agency, and the popular culture of the day reinforced the idea that women were all but incapable of it.
It wasn’t just the perfect housewives in the midcentury mold of Donna Reed and June Cleaver who “donned their domestic harness,” as the historian Elaine Tyler May wrote in her social history Homeward Bound. Popular magazines such as Good Housekeeping, McCall’s, and Redbook reinforced the message; so did their advertisers. A 1955 issue of Family Circle featured an advertisement for Tide detergent that depicted a woman with a rapturous expression on her face actually hugging a box of Tide under the line: “No wonder you women buy more Tide than any other washday product! Tide’s got what women want!” Other advertisements infantilized women by suggesting they were incapable of making basic decisions. “You mean a -woman can open it?” ran one for Alcoa aluminum bottle caps. It is almost impossible to read the articles or view the ads without thinking they were some kind of put-on.
The competing view of women in the postwar era was equally pernicious: the objectified pinup or sexpot. Marilyn Monroe’s hypersexualized character in The Seven Year Itch from 1955 doesn’t even have a name—she’s simply called The Girl. The 1956 film introducing the pulchritudinous Jayne Mansfield to the world was called The Girl Can’t Help It. The behavior of Rat Pack–era men has now been so airbrushed and glamorized that we’ve forgotten just how thoroughly debased their treatment of women was. Even as we thrill to Frank Sinatra’s “nice ’n’ easy” style, we overlook the classic Sinatra movie character’s enjoying an endless stream of showgirls and (barely disguised) prostitutes until forced to settle down with a killjoy ball-and-chain girlfriend. The depiction of women either as childish wives living under the protection of their husbands or brainless sirens sexually available to the first taker was undoubtedly vulgar, but it reflected a reality about the domestic arrangements of Americans after 1945 that was due for a profound revision when the 1960s came along.
And change they did, with a vengeance. The sexual revolution broke down the barriers between the sexes as the women’s-liberation movement insisted that bourgeois domesticity was a prison. The rules melted away, but attitudes don’t melt so readily; Sinatra’s ball-and-chain may have disappeared by common consent, but for a long time it seemed that the kooky sexpot of the most chauvinistic fantasy had simply become the ideal American woman. The distinction between the workplaces of the upper middle class and the singles bars where its members sought companionship was pretty blurred.
Which is where MacKinnon came in—although if we look back at it, her objection seems not Marxist in orientation but almost Victorian. She described a workplace in which women were unprotected by old-fashioned social norms against adultery and general caddishness and found themselves mired in a “hostile environment.” She named the problem; it fell to the feminist movement as a whole to enshrine protections against it. They had some success. In 1986, the U.S. Supreme Court embraced elements of MacKinnon’s reasoning when it ruled unanimously in Meritor Savings Bank v. Vinson that harassment “sufficiently severe or pervasive” to create “a hostile or abusive work environment” was a violation of Title VII of the Civil Rights Act of 1964. The U.S. Equal Employment Opportunity Commission issued rules advising employers to create procedures to combat harassment, and employers followed suit by establishing sexual-harassment policies. Human-resource departments spent countless hours and many millions of dollars on sexual-harassment-awareness training for employees.
With new regulations and enforcement mechanisms, the argument went, the final, fusty traces of patriarchal, protective norms and bad behavior would be swept away in favor of rational legal rules that would ensure equal protection for women in the workplace. The culture might still objectify women, but our legal and employment systems would, in fits and starts, erect scaffolding upon which women who were harassed could seek justice.
But as the growing list of present-day harassers and predators attests—Harvey Weinstein, Louis C.K., Charlie Rose, Michael Oreskes, Glenn Thrush, Mark Halperin, John Conyers, Al Franken, Roy Moore, Matt Lauer, Garrison Keillor, et al.—the system appears to have failed the people it was meant to protect. There were searing moments that raised popular awareness about sexual harassment: Anita Hill’s testimony about U.S. Supreme Court nominee Clarence Thomas in 1991; Senator Bob Packwood’s ouster for serial groping in 1995. There was, however, still plenty of space for men who harassed and assaulted women (and, in Kevin Spacey’s case, men) to shelter in place.
This wasn’t supposed to happen. Why did it?
Sex and Training
What makes sexual harassment so unnerving is not the harassment. It’s the sex—a subject, even a half-century into our so-called sexual revolution, about which we remain deeply confused.
The challenge going forward, now that the Hollywood honcho Weinstein and other notoriously lascivious beneficiaries of the liberation era have been removed, is how to negotiate the rules of attraction and punish predators in a culture that no longer embraces accepted norms for sexual behavior. Who sets the rules, and how do we enforce them? The self-appointed guardians of that galaxy used to be the feminist movement, but it is in no position to play that role today as it reckons not only with the gropers in its midst (Franken) but the ghosts of gropers past (Bill Clinton).
The feminist movement long ago traded MacKinnon’s radical feminism for political expedience. In 1992 and 1998, when her husband was a presidential candidate and then president, Hillary Clinton covered for Bill, enthusiastically slut-shaming his accusers. Her sin was and is at least understandable, if not excusable, given that the two are married. But what about America’s most glamorous early feminist, Gloria Steinem? In 1998, Steinem wrote of Clinton accuser Kathleen Willey: “The truth is that even if the allegations are true, the President is not guilty of sexual harassment. He is accused of having made a gross, dumb and reckless pass at a supporter during a low point in her life. She pushed him away, she said, and it never happened again. In other words, President Clinton took ‘no’ for an answer.” As for Monica Lewinsky, Steinem didn’t even consider the president’s behavior with a young intern to be harassment: “Welcome sexual behavior is about as relevant to sexual harassment as borrowing a car is to stealing one.”
The consequences of applying to Clinton what Steinem herself called the “one-free-grope” rule are only now becoming fully visible. Even in the case of a predator as malevolent as Weinstein, it’s clear that feminists no longer have a shared moral language or the credibility with which to condemn such behavior. Because feminists have tied their movement’s fortunes to political power, especially the Democratic Party, it is now difficult to take seriously their injunctions about male behavior on either side of the aisle (just as it was difficult to take seriously partisans on the right who defended the Alabama Senate candidate and credibly accused child sexual predator Roy Moore). Democrat Nancy Pelosi’s initial hemming and hawing about denouncing accused sexual harasser Representative John Conyers was disappointing but not surprising. As for Steinem, she’s gone from posing undercover as a Playboy bunny in order to expose male vice to sitting on the board of Playboy’s true heir, VICE Media, an organization whose bro-culture has spawned many sexual-harassment complaints. She’s been honored by Rutgers University, which created the Gloria Steinem Chair in Media, Culture, and Feminist Studies. One of the chair’s major donors? Harvey Weinstein.
In place of older accepted norms or trusted moral arbiters, we have weaponized gossip. “S—-y Media Men” is a Google spreadsheet created by a woman who works in media and who, in the wake of the Weinstein revelations, wanted to encourage other women to name the gropers among us. At first a well-intentioned effort to warn women informally about men who had behaved badly, it quickly devolved into an anonymous unverified online litany of horribles devoid of context. The men named on the list were accused of everything from sending clumsy text messages to rape; Jia Tolentino of the New Yorker confessed that she didn’t believe the charges lodged against a male friend of hers who appeared on the list.
Others have found sisterhood and catharsis on social media, where, on Twitter, the phrase #MeToo quickly became the symbol for women’s shared experiences of harassment or assault. Like the consciousness-raising sessions of earlier eras, the hashtag supposedly demonstrated the strength of women supporting other women. But unlike in earlier eras, it led not to group hugs over readings of The Feminine Mystique, but to a brutally efficient form of insta-justice meted out on an almost daily basis against the accused. Writing in the Guardian, Jessica Valenti praised #MeToo for encouraging women to tell their stories but added, “Why have a list of victims when a list of perpetrators could be so much more useful?” Valenti encouraged women to start using the hashtag as a way to out predators, not merely to bond with one another. Even the New York Times has gone all-in on the assumption that the reckoning will continue: The newspaper’s “gender editor,” Jessica Bennett, launched a newsletter, The #MeToo Moment, described as “the latest news and insights on the sexual harassment and misconduct scandals roiling our society.”
As the also-popular hashtag #OpenSecret suggests, this #MeToo moment has brought with it troubling questions about who knew what and when—and a great deal of anger at gatekeepers and institutions that might have turned a blind eye to predators. The backlash against the Metropolitan Opera in New York is only the most recent example. Reports of conductor James Levine’s molestation of teenagers have evidently been widespread in the classical-music world for decades. And, as many social-media users hinted with their use of the hashtag #itscoming, Levine is not the only one who will face a reckoning.
To be sure, questioning and catharsis are welcome if they spark reforms such as crackdowns on the court-approved payoffs and nondisclosure agreements that allowed sexual predators like Weinstein to roam free for so long. And they have also brought a long-overdue recognition of the ineffectiveness of so much of what passes for sexual-harassment-prevention training in the workplace. As the law professor Lauren Edelman noted in the Washington Post, “There have been only a handful of empirical studies of sexual-harassment training, and the research has not established that such training is effective. Some studies suggest that training may in fact backfire, reinforcing gendered stereotypes that place women at a disadvantage.” One specific survey at a university found that “men who participated in the training were less likely to view coercion of a subordinate as sexual harassment, less willing to report harassment and more inclined to blame the victim than were women or men who had not gone through the training.”
Realistic Change vs. Impossible Revolution
Because harassment lies at the intersection of law, politics, ideology, and culture, attempts to re-regulate behavior, either by returning to older, more traditional norms, or by weaponizing women’s potential victimhood via Twitter, won’t work. America is throwing the book at foul old violators like Weinstein and Levine, but aside from warning future violators that they may be subject to horrible public humiliation and ruination, how is all this going to fix the problem?
We are a long way from Phyllis Schlafly’s ridiculous remark, made years ago during a U.S. Senate committee hearing, that “virtuous women are seldom accosted,” but Vice President Mike Pence’s rule about avoiding one-on-one social interactions with women who aren’t his wife doesn’t scale up into effective workplace policy, either. The Pence Rule, like corporate H.R. policies about sexual harassment, really exists to protect Pence from liability, not to protect women.
Indeed, the possibility of realistic change is made almost moot by the hysterical ambitions of those who believe they are on the verge of bringing down the edifice of American masculinity the way the Germans brought down the Berlin wall. Bennett of the Times spoke for many when she wrote in her description of the #MeToo newsletter: “The new conversation goes way beyond the workplace to sweep in street harassment, rape culture, and ‘toxic masculinity’—terminology that would have been confined to gender studies classes, not found in mainstream newspapers, not so long ago.”
Do women need protection? Since the rise of the feminist movement, it has been considered unacceptable to declare that women are weaker than men (even physically), yet, as many of these recent assault cases make clear, this is a plain fact. Men are, on average, physically larger and more aggressive than women; this is why for centuries social codes existed to protect women who were, by and large, less powerful, more vulnerable members of society.
MacKinnon’s definition of harassment at first seemed to acknowledge such differences; she described harassment as “dominance eroticized.” But like all good feminist theorists, she claimed this dominance was socially constructed rather than biological—“the legally relevant content of the term sex, understood as gender difference, should focus upon its social meaning more than upon any biological givens,” she wrote. As such, the reasoning went, men’s socially constructed dominance could be socially deconstructed through reeducation, training, and the like.
Culturally, this is the view that now prevails, which is why we pinball between arguing that women can do anything men can do and worrying that women are all the potential victims of predatory, toxic men. So which is it? Girl Power or the Fainting Couch?
Regardless, when harassment or assault claims arise, the cultural assumptions that feminism has successfully cultivated demand we accept that women are right and men are wrong (hence the insistence that we must believe every woman’s claim about harassment and assault, and the calling out of those who question a woman’s accusation). This gives women—who are, after all, flawed human beings just like men—too much accusatory power in situations where context is often crucial for understanding what transpired. Feminists with a historical memory should recall that they themselves embraced the importance of context after mandatory-arrest laws for partner violence, passed in the 1990s, netted many women for physically assaulting their partners. Many feminist legal scholars at the time argued that such laws were unfair to women precisely because they neglected context. (“By following the letter of the law… law enforcement officers often disregard the context in which victims of violence resort to using violence themselves,” wrote Susan L. Miller in the Violence Against Women journal in 2001.)
Worse, the unquestioned valorization of women’s claims leaves men in the position of being presumed guilty unless proven innocent. Consider a recent tweet by Washington Post reporter and young-adult author Monica Hesse in response to New York Times reporter Farhad Manjoo’s self-indulgent lament. Manjoo: “I am at the point where i seriously, sincerely wonder how all women don’t regard all men as monsters to be constantly feared. the real world turns out to be a legit horror movie that I inhabited and knew nothing about.”
Hesse’s answer: “Surprise! The answer is that we do, and we must, regard all men as potential monsters to be feared. That’s why we cross to the other side of the street at night, and why we sometimes obey when men say ‘Smile, honey!’ We are always aware the alternative could be death.” This isn’t hyperbole in her case; Hesse has so thoroughly internalized the message that men are to be feared, not trusted, that she thinks one might kill her on the street if she doesn’t smile at him. Such illogic makes the Victorian neurasthenics look like the Valkyrie.
But while most reasonable people agree that women and men both need to take responsibility for themselves and exercise good judgment, what this looks like in practice is not going to be perfectly fair, given the differences between men and women when it comes to sexual behavior. In her book, MacKinnon observed of sexual harassment, “Tacitly, it has been both acceptable and taboo; acceptable for men to do, taboo for women to confront, even to themselves.”
That’s one thing we can say for certain is no longer true. Nevertheless, if you begin with the assumption that every sexual invitation is a power play or the prelude to an assault, you are likely to find enemies lurking everywhere. As Hesse wrote in the Washington Post about male behavior: “It’s about the rot that we didn’t want to see, that we shoveled into the garbage disposal of America for years. Some of the rot might have once been a carrot and some of it might have once been a moldy piece of rape-steak, but it’s all fetid and horrific now, and it’s all coming up at once. How do we deal with it? Prison for everyone? Firing for some? …We’re only asking for the entire universe to change. That’s all.”
But women are part of that “entire universe,” too, and it is incumbent on them to make it clear when someone has crossed the line. Both women and men would be better served if they adopted the same rule—“If you see something, say something”—when it comes to harassment. Among the many details that emerged from the recent exposé at Vox about New York Times reporter Glenn Thrush was the setting for the supposedly egregious behavior: It was always after work and after several drinks at a bar. In all of the interactions described, one or usually both of the parties was tipsy or drunk; the women always agreed to go with Thrush to another location. The women also stayed on good terms with Thrush after he made his often-sloppy passes at them, in one case sending friendly text messages and assuring him he didn’t need to apologize for his behavior. The Vox writer, who herself claims to have been victimized by Thrush, argues, “Thrush, just by his stature, put women in a position of feeling they had to suck up and move on from an uncomfortable encounter.” Perhaps. But he didn’t put them in the position of getting drunk after work with him. They put themselves in that position.
Also, as the Thrush story reveals, women sometimes use sexual appeal and banter for their own benefit in the workplace. If we want to clarify the blurred lines that exist around workplace relationships, then we will have to reckon with the women who have successfully exploited them for their own advantage.
None of this means women should be held responsible when men behave badly or illegally. But it puts male behavior in the proper context. Sometimes, things really are just about sex, not power. As New York Times columnist Ross Douthat bluntly noted in a recent debate in New York magazine with feminist Rebecca Traister, “I think women shouldn’t underestimate the extent to which male sexual desire is distinctive and strange and (to women) irrational-seeming. Saying ‘It’s power, not sex’ excludes too much.”
Social-Media Justice or Restorative Justice?
What do we want to happen? Do we want social-media justice or restorative justice for harassers and predators? The first is immediate, cathartic, and brutal, with little consideration for nuance or presumed innocence for the accused. The second is more painstaking because it requires reaching some kind of consensus about the allegations, but it is also ultimately less destructive of the community and culture as a whole.
Social-media justice deploys the powerful force of shame at the mere whiff of transgression, so as to create a regime of prevention. The thing is, Americans don’t really like shame (the sexual revolution taught us that). Our therapeutic age doesn’t think that suppressing emotions and inhibiting feelings—especially about sex—is “healthy.” So either we will have to embrace the instant and unreflective emotiveness of #MeToo culture and accept that its rough justice is better than no justice at all—or we will have to stop overreacting every time a man does something that is untoward—like sending a single, creepy text message—but not actually illegal (like assault or constant harassment).
After all, it’s not all bad news from the land of masculinity. Rates of sexual violence have fallen 63 percent since 1993, according to statistics from the Rape, Abuse, and Incest National Network, and as scholar Steven Pinker recently observed: “Despite recent attention, workplace sexual harassment has declined over time: from 6.1 percent of GSS [General Social Survey] respondents in 2002 to 3.6 percent in 2014. Too high, but there’s been progress, which can continue.”
Still, many men have taken this cultural moment as an opportunity to reflect on their own understanding of masculinity. In the New York Times, essayist Stephen Marche fretted about the “unexamined brutality of the male libido” and echoed Catharine MacKinnon when he asked, “How can healthy sexuality ever occur in conditions in which men and women are not equal?” He would have done better to ask how we can raise boys who will become men who behave honorably toward women. And how do we even raise boys to become honorable men in a culture that no longer recognizes and rewards honor?
The answers to those questions aren’t immediately clear. But one thing that will make answering them even harder is the promotion of the idea of “toxic masculinity.” New York Times columnist Charles Blow recently argued that “we have to re-examine our toxic, privileged, encroaching masculinity itself. And yes, that also means on some level reimagining the rules of attraction.” But the whole point of the phrase “rules of attraction” is to highlight that there aren’t any and never have been (if you have any doubts, read the 1987 Bret Easton Ellis novel that popularized the phrase). Blow’s lectures about “toxic masculinity” are meant to sow self-doubt in men and thus encourage some enlightened form of masculinity, but that won’t end sexual harassment any more than Lysistrata-style refusal by women to have sex will end war.
Parents should teach their sons about personal boundaries and consent from a young age, just as they teach their daughters, and should unequivocally condemn raunchy and threatening remarks about women, whether those remarks are uttered by a talk-radio host or by the president of the United States. The phrase “that isn’t how decent men behave” should be something every parent utters.
But such efforts are made more difficult by a liberal culture that has decided to equate caddish behavior with assault precisely because it has rejected the strict norms that used to hold sway—the old conservative norms that regarded any transgression against them as a serious violation and punished it accordingly. Instead, in an effort to be a kinder, gentler, more “woke” society that’s understanding of everyone’s differences, we’ve ended up arbitrarily picking and choosing among the various forms of questionable behavior for which we will have no tolerance, all the while failing to come to terms with the costs of living in such a society. A culture that hangs the accused first and asks questions later might have its virtues, but psychological understanding is not one of them.
And so we come back to sex and our muddled understanding of its place in society. Is it a meaningless pleasure you’re supposed to enjoy with as many people as possible before settling down and marrying? Or is it something more important than that? Is it something that you feel empowered to handle in Riot Grrrl fashion, or is getting groped once by a pervy co-worker something that prompts decades of nightmares and declarations that you will “never be the same”? How can we condemn people like Senator Al Franken, whose implicit self-defense is that it’s no big deal to cop a feel every so often, when our culture constantly offers up women like comedian Amy Schumer or Abbi and Ilana of the sketch show Broad City, who argue that women can and should be as filthy and degenerate as the most degenerate guy?
Perhaps it’s progress that the downfall of powerful men who engage in inappropriate sexual behavior is no longer called a “bimbo eruption,” as it was in the days of Bill Clinton, and that the men who harassed or assaulted women are facing the end of their careers and, in some cases, prison. But this is not the great awakening that so many observers have claimed it is. Awakenings need tent preachers to inspire and eager audiences to participate; our #MeToo moment has plenty of those. What it doesn’t have, unless we can agree on new norms for sexual behavior both inside and outside the workplace, is a functional theology that might cultivate believers who will actually practice what they preach.
That functional theology is out of our reach. Which means this moment is just that—a moment. It will die down, impossible though it seems at present. And every 10 or 15 years a new harassment scandal will spark widespread outrage, and we will declare that a new moment of reckoning and realization has emerged. After which the stories will again die down and very little will have changed.
No one wants to admit this. It’s much more satisfying to see the felling of so many powerful men as a tectonic cultural shift, another great leap forward toward equality between the sexes. But it isn’t, because the kind of asexual equality between the genders imagined by those most eager to celebrate our #MeToo moment has never been an ideal most people embrace. It is an ideal that willfully overlooks significant differences between the sexes and assumes that thoughtful people can still agree on norms of sexual behavior.
They can’t. And they won’t.
The U.S. will endanger itself if it accedes to Russian and Chinese efforts to change the international system to their liking
A “sphere of influence” is traditionally understood as a geographical zone within which the most powerful actor can impose its will. And nearly three decades after the close of the superpower struggle that Churchill’s 1946 “Iron Curtain” speech heralded, spheres of influence are back. At both ends of the Eurasian landmass, the authoritarian regimes in China and Russia are carving out areas of privileged influence—geographic buffer zones in which they exercise diplomatic, economic, and military primacy. China and Russia are seeking to coerce and overawe their neighbors. They are endeavoring to weaken the international rules and norms—and the influence of opposing powers—that stand athwart their ambitions in their respective “near abroads.” Chinese island-building and maritime expansionism in the South China Sea and Russian aggression in Ukraine and intimidation of the Baltic states are part and parcel of the quasi-imperial projects these revisionist regional powers are now pursuing.
Historically speaking, a world made up of rival spheres is more the norm than the exception. Yet such a world is in sharp tension with many of the key tenets of the American foreign-policy tradition—and with the international order that the United States has labored to construct and maintain since the end of World War II.
To be sure, Washington carved out its own spheres of influence in the Western Hemisphere beginning in the 19th century, and America’s myriad alliance blocs in key overseas regions are effectively spheres by another name. And today, some international-relations observers have welcomed the return of what the foreign-policy analyst Michael Lind has recently called “blocpolitik,” hoping that it might lead to a more peaceful age of multilateral equilibrium.
But for more than two centuries, American leaders have generally opposed the idea of a world divided into rival spheres of influence and have worked hard to deny other powers their own. And a reversion to a world dominated by great powers and their spheres of influence would thus undo some of the strongest traditions in American foreign policy and take the international system back to a darker, more dangerous era.

In an extreme form, a sphere of influence can take the shape of direct imperial or colonial control. Yet there are also versions in which a leading power forgoes direct military or administrative domination of its neighbors but nonetheless exerts geopolitical, economic, and ideological influence. Whatever their form, spheres of influence reflect two dominant imperatives of great-power politics in an anarchic world: the need for security vis-à-vis rival powers and the desire to shape a nation’s immediate environment to its benefit. Indeed, great powers have throughout history pursued spheres of influence to provide a buffer against the encroachment of other hostile actors and to foster the conditions conducive to their own security and well-being.
The Persian Empire, Athens and Sparta, and Rome all carved out domains of dominance. The Chinese tribute system—which combined geopolitical control with the spread of Chinese norms and ideas—profoundly shaped the trajectory of East Asia for hundreds of years. The 19th and 20th centuries saw the British Empire, Japan’s East Asian Co-Prosperity Sphere, and the Soviet bloc.
America, too, has played the spheres-of-influence game. From the early 19th century onward, American officials strove for preeminence in the Western Hemisphere—first by running other European powers off much of the North American continent and then by pushing them out of Latin America. With the Monroe Doctrine, first enunciated in 1823, America staked its claim to geopolitical primacy from Canada to the Southern Cone. Over the succeeding generations, Washington worked to achieve military dominance in that area, to tie the countries of the Western Hemisphere to America geopolitically and economically, and even to help pick the rulers of countries from Mexico to Brazil.
If this wasn’t a sphere of influence, nothing was. In 1895, Secretary of State Richard Olney declared that “the United States is practically sovereign on this continent and its fiat is law upon the subjects to which it confines its interposition.” After World War II, moreover, a globally predominant United States steadily expanded its influence into Europe through NATO, into East Asia through various military alliances, and into the Middle East through a web of defense, diplomatic, and political arrangements. The story of global politics over the past 200 years has, in large part, been the story of expanding U.S. influence.
Nonetheless, there has always been something ambivalent—critics would say hypocritical—about American views of this matter. For as energetic as Washington has been in constructing its geopolitical domain, a “spheres-of-influence world” is in perpetual tension with four strong intellectual traditions in U.S. strategy. These are hegemony, liberty, openness, and exceptionalism.
First, hegemony. The myth of America as an innocent isolationist country during its first 170 years is powerful and enduring; it’s also wrong. From the outset, American statesmen understood that the country’s favorable geography, expanding population, and enviable resource endowments gave it the potential to rival, and ultimately overtake, the European states that dominated world politics. America might be a fledgling republic, George Washington said, but it would one day attain “the strength of a giant.” From the revolution onward, American officials worried, with good reason, that France, Spain, and the United Kingdom would use their North American territories to strangle or contain the young republic. Much of early American diplomacy was therefore geared toward depriving the European powers of their North American possessions, using measures from coercive diplomacy to outright wars of conquest. “The world shall have to be familiarized with the idea of considering our proper dominion to be the continent of North America,” wrote John Quincy Adams in 1819. The only regional sphere of influence that Americans would accept as legitimate was their own.
By the late 19th century, the same considerations were pushing Americans to target spheres of influence further abroad. As the industrial revolution progressed, it became clear that geography alone might not protect the nation. Aggressive powers could now generate sufficient military strength to dominate large swaths of Europe or East Asia and then harness the accumulated resources to threaten the United States. Moreover, as America itself became an increasingly mighty country that sought to project its influence overseas, its leaders naturally objected to its rivals’ efforts to establish their own preserves from which Washington would be excluded. If much of America’s 19th-century diplomacy was dedicated to denying other powers spheres of influence in the Western Hemisphere, much of the country’s 20th-century diplomacy was an effort to break up or deny rival spheres of influence in Europe and East Asia.
From the Open Door policy, which sought to prevent imperial powers from carving up China, to U.S. intervention in the world wars, to the confrontation with the Soviet Empire in the Cold War, the United States repeatedly acted on the belief that it could be neither as secure nor influential as it desired in a world divided up and dominated by rival nations. The American geopolitical tradition, in other words, has long contained a built-in hostility to other countries’ spheres of influence.
The American ideological tradition shares this sense of preeminence, as reflected in the second key tenet: liberty. America’s founding generation did not see the revolution merely as the birth of a future superpower; they saw it as a catalyst for spreading political liberty far and wide. Thomas Paine proclaimed in 1775 that Americans could “begin the world anew”; John Quincy Adams predicted, several decades later, that America’s liberal ideology was “destined to cover the surface of the globe.” Here, too, the new nation was not cursed with excessive modesty—and here, too, the existence of rival spheres of influence threatened this ambition.
Rival spheres of influence—particularly within the Western Hemisphere—imperiled the survival of liberty at home. If the United States were merely one great power among many on the North American continent, the founding generation worried, it would be forced to maintain a large standing military establishment and erect a sort of 18th-century “garrison state.” Living in perpetual conflict and vigilance, in turn, would corrode the very freedoms for which the revolution had been fought. “No nation,” wrote James Madison, “can preserve its freedom in the midst of continual warfare.” Just as Madison argued, in Federalist No. 10, that “extending the sphere”—expanding the republic—was a way of safeguarding republicanism at home, expanding America’s geopolitical domain was essential to providing the external security that a liberal polity required to survive.
Rival spheres of influence also constrained the prospects for liberty abroad. Although the question of whether the United States should actively support democratic revolutions overseas has been a source of unending controversy, virtually all American strategists have agreed that the country would be more secure and influential in a world where democracy was widespread. Given this mindset, Americans could hardly be desirous of foreign powers—particularly authoritarian powers—establishing formidable spheres of influence that would allow them to dominate the international system or suppress liberal ideals. The Monroe Doctrine was a response to the geopolitical dangers inherent in renewed imperial control of South America; it was also a response to the ideological danger posed by European nations that would “extend the political system to any portion” of the Western Hemisphere. Similar concerns have been at the heart of American opposition to the British Empire and the Soviet bloc.
Economic openness, the third core dynamic of American policy, has long served as a commercial counterpart to America’s ideological proselytism. Influenced as much by Adam Smith as by Alexander Hamilton, early American statecraft promoted free trade, neutral rights, and open markets, both to safeguard liberty and enrich a growing nation. This mission has depended on access to the world’s seas and markets. When that access was circumscribed—by the British in 1812 and by the Germans in 1917—Americans went to war to preserve it. It is unsurprising, then, that Americans also looked askance at efforts by other powers to establish areas that might be walled off from U.S. trade and investment—and from the spread of America’s capitalist ideology.
A brief list of robust policy endeavors underscores the persistent U.S. hostility to an economically closed, spheres-of-influence world: the Model Treaty of 1776, designed to promote free and reciprocal trade; John Hay’s Open Door policy of 1899, designed to prevent any outside power from dominating trade with China; Woodrow Wilson’s advocacy in his “Fourteen Points” speech of 1918 for the removal “of all economic barriers and the establishment of an equality of trade conditions among all nations”; and the focus of the 1941 Atlantic Charter on reducing trade restrictions while promoting international economic cooperation (assuming the allies would emerge triumphant from World War II).
Fourth and finally, there’s exceptionalism. Americans have long believed that their nation was created not simply to replicate the practices of the Old World, but to revolutionize how states and peoples interact with one another. The United States, in this view, was not merely another great power out for its own self-interest. It was a country that, by virtue of its republican ideals, stood for the advancement of universal rights, and one that rejected the back-alley methods of monarchical diplomacy in favor of a more principled statecraft. When Abraham Lincoln said America represented “the last best hope of earth,” or when Woodrow Wilson scorned secret agreements in favor of “open covenants arrived at openly,” they demonstrated this exceptionalist strain in American thinking. There is some hypocrisy here, of course, for the United States has often acted in precisely the self-interested, cutthroat manner its statesmen deplored. Nonetheless, American exceptionalism has had a pronounced effect on American conduct.
Compare how Washington led its Western European allies during the Cold War—the extent to which NATO rested on the authentic consent of its members, the way the United States consistently sought to empower rather than dominate its partners—with how Moscow managed its empire in Eastern Europe. In the same way, Americans have often recoiled from arrangements that reeked of the old diplomacy. Franklin Roosevelt might have tolerated a Soviet-dominated Eastern Europe after World War II, for instance, but he knew he could not admit this publicly. Likewise, the Helsinki Accords of 1975, which required Washington to acknowledge the diplomatic legitimacy of the Soviet sphere, proved controversial inside the United States because they seemed to represent just the sort of cynical, old-school geopolitics that American exceptionalism abhors.
To be clear, U.S. hostility to a spheres-of-influence world has always been leavened with a dose of pragmatism; American leaders have pursued that hostility only so far as power and prudence allowed. The Monroe Doctrine warned European powers to stay out of the Americas, but the quid pro quo was that a young and relatively weak United States would accept, for a time, a sphere of monarchical dominance within Europe. Even during the Cold War, U.S. policymakers generally accepted that Washington could not break up the Soviet bloc in Eastern Europe without risking nuclear war.
But these were concessions to expediency. As America gained greater global power, it more actively resisted the acquisition or preservation of spheres by others. From gradually pushing the Old World out of the New, to helping vanquish the German and Japanese Empires by force of arms, to assisting the liquidation of the British Empire after World War II, to containing and ultimately defeating the Soviet bloc, the United States was present at the destruction of spheres of influence possessed by adversaries and allies alike.
The acme of this project came in the quarter-century that followed the Cold War. With the collapse of the Warsaw Pact and the Soviet Union itself, it was possible to envision a world in which what Thomas Jefferson called America’s “empire of liberty” could attain global dimensions, and traditional spheres of influence would be consigned to history. The goal, as George W. Bush’s 2002 National Security Strategy proclaimed, was to “create a balance of power that favors human freedom.” This meant an international environment in which the United States and its values were dominant and there was no balance of power whatsoever.
Under presidents from George H.W. Bush to Barack Obama, this project entailed working to spread democracy and economic liberalism farther than ever before. It involved pushing American influence and U.S.-led institutions into regions—such as Eastern Europe—that were previously dominated by other powers. It meant maintaining the military primacy necessary to stop regional powers from establishing new spheres of influence, as Washington did by rolling back Saddam Hussein’s conquest of Kuwait in 1990 and by deterring China from coercing Taiwan in 1995–96. Not least, this American project involved seeking to integrate potential rivals—foremost Russia and China—into the post–Cold War order, in hopes of depriving them of even the desire to challenge it. This multifaceted effort reflected the optimism of the post–Cold War era, as well as the influence of tendencies with deep roots in the American past. Yet try as Washington might to permanently leave behind a spheres-of-influence world, that prospect is once again upon us.

Begin with China’s actions in the Asia-Pacific region. The sources of Chinese conduct are diverse, ranging from domestic insecurity to the country’s confidence as a rising power to its sense of historical destiny as “the Middle Kingdom.” All these influences animate China’s bid to establish regional mastery. China is working, first, to create a power vacuum by driving the United States out of the Western Pacific, and second, to fill that vacuum with its own influence. A Chinese admiral made this ambition clear when he remarked—supposedly in jest—to an American counterpart that, in the future, the two powers should simply split the Pacific with Hawaii as the dividing line. Yang Jiechi, then China’s foreign minister, echoed this sentiment in a moment of frustration by lecturing the nations of Southeast Asia. “China is a big country,” he said, “and other countries are small countries, and that’s just a fact.”
Policy has followed rhetoric. To undercut America’s position, Beijing has harassed American ships and planes operating in international waters and airspace. The Chinese have warned U.S. allies they may be caught in the crossfire of a Sino-American war unless Washington accommodates China or the allies cut loose from the United States. China has simultaneously worked to undermine the credibility of U.S. alliance guarantees by using strategies designed to shift the regional status quo in ways even the mighty U.S. Navy finds difficult to counter. Through a mixture of economic aid and diplomatic coercion, Beijing has also successfully divided international bodies, such as the Association of Southeast Asian Nations, through which the United States has sought to rally opposition to Chinese assertiveness. And in the background, China has been steadily building, over the course of more than two decades, formidable military tools designed to keep the United States out of the region and give Beijing a free hand in dealing with its weaker neighbors. As America’s sun sets in the Asia-Pacific, Chinese leaders calculate, the shadow China casts over the region will only grow longer.
To that end, China has claimed, dubiously, nearly all of the South China Sea as its own and constructed artificial islands as staging points for the projection of military power. Military and paramilitary forces have teased, confronted, and violated the sovereignty of countries from Vietnam to the Philippines; China is likewise intensifying the pressure on Japan in the East China Sea. Economically, Beijing uses its muscle to reward those who comply with China’s policies and punish those not willing to bow to its demands. It is simultaneously advancing geoeconomic projects, such as the Belt and Road Initiative, the Asian Infrastructure Investment Bank, and the Regional Comprehensive Economic Partnership (RCEP), that are designed to bring the region into its orbit.
Strikingly, China has also moved away from its long-professed principle of noninterference in other countries’ domestic politics by extending the reach of Chinese propaganda organs and using investment and even bribery to co-opt regional elites. Payoffs to Australian politicians are as critical to China’s regional project as the development of “carrier-killer” missiles. Finally, far from subscribing to liberal concepts of democracy and human rights, Beijing emphasizes its rejection of these values and its desire to create “Asia for Asians.” In sum, China is pursuing a classic spheres-of-influence project. By blending intimidation with inducement, Beijing aims to sunder its neighbors’ bonds with America and force them to accept a Sino-centric order—a new Chinese tribute system for the 21st century.

At the other end of Eurasia, Russia is playing geopolitical hardball of a different sort. The idea that Moscow should dominate its “near abroad” is as natural to many Russians as American regional primacy is to Americans. The loss of the Kremlin’s traditional buffer zone was, therefore, one of the most painful legacies of the Cold War’s end. And so it is hardly surprising that, as Russia has regained a degree of strength in recent years, it has sought to reassert its supremacy.
It has done so, in fact, through more overtly aggressive means than those employed by China. Moscow has twice seized opportunities to humiliate and dismember former Soviet republics that committed the sin of tilting toward the West or throwing out pro-Russian leaders, first in Georgia in 2008 and then in Ukraine in 2014. It has regularly reminded its neighbors that they live on Russia’s doorstep, through coercive activities such as conducting cyberattacks on Estonia in 2007 and holding aggressive military exercises on the frontiers of the Baltic states. In the same vein, the Kremlin has essentially claimed a veto over the geopolitical alignments of neighbors from the Caucasus to Scandinavia, whether by creating frozen conflicts on their territory or threatening to target them militarily—perhaps with nuclear weapons—should they join NATO.
Military muscle is not Moscow’s only tool. Russia has simultaneously used energy exports to keep the states on its periphery economically dependent, and it has exported corruption and illiberalism to non-aligned states in the former Warsaw Pact area to prevent further encroachment of liberal values. Not least, the Kremlin has worked to undermine NATO and the European Union through political subversion and intervention in Western electoral processes. And while Russia’s activities are most concentrated in Eastern Europe and Central Asia, it’s also projecting its influence farther afield. Russian forces intervened successfully in Syria in 2015 to prop up Bashar al-Assad, preserve access to warm-water ports on the Mediterranean, and demonstrate the improved accuracy and lethality of Russian arms. Moscow continues to make inroads in the Middle East, often in cooperation with another American adversary: Iran.
To be sure, the projects that China and Russia are pursuing today are vastly different from each other, but the core logic is indisputably the same. Authoritarian powers are re-staking their claim to privileged influence in key geostrategic areas.

So what does this mean for American interests? Some observers have argued that the United States should make a virtue of necessity and accept the return of such arrangements. By this logic, spheres of influence create buffer zones between contending great powers; they diffuse responsibility for enforcing order in key areas. Indeed, for those who think that U.S. policy has left the country exhausted and overextended, a return to a world in which America no longer has the burden of being the dominant power in every region may seem attractive. The great sin of American policy after the Cold War, many realist scholars argue, was the failure to recognize that even a weakened Russia would demand privileged influence along its frontiers and thus be unalterably opposed to NATO expansion. Similarly, they lament the failure to understand that China would not forever tolerate U.S. dominance along its own periphery. It is not surprising, then, to hear analysts such as Australia’s Hugh White or America’s John Mearsheimer argue that the United States should learn to “share power” with China in the Pacific, or that it must yield ground in Eastern Europe in order to avoid war with Russia.
Such claims are not meritless; there are instances in which spheres of influence led to a degree of stability. The division of Europe into rival blocs fostered an ugly sort of stasis during the Cold War; closer to home, America’s dominance in the Western Hemisphere has long muted geopolitical competition in our own neighborhood. For all the problems associated with European empires, they often partially succeeded in limiting scourges such as communal violence.
And yet the allure of a spheres-of-influence world is largely an illusion, for such a world would threaten U.S. interests, traditions, and values in several ways.
First, basic human rights and democratic values would be less respected. China and Russia are not liberal democracies; they are illiberal autocracies that see the spread of democratic values as profoundly corrosive to their own authority and security. Just as the United States has long sought to create a world congenial to its own ideological predilections, Beijing and Moscow would certainly do likewise within their spheres of dominance.
They would, presumably, bring their influence to bear in support of friendly authoritarian regimes. And they would surely undermine democratic governments seen to pose a threat of ideological contagion or insubordination to Russian or Chinese prerogatives. Russia has taken steps to prevent the emergence of a Western-facing democracy in Ukraine and to undermine liberal democracies in Europe and elsewhere; China is snuffing out political freedoms in Hong Kong. Such actions offer a preview of what we will see when these countries are indisputably dominant along their peripheries. Further aggressions, in turn, would not simply be offensive to America’s ideological sensibilities. For given that the spread of democracy has been central to the absence of major interstate war in recent decades, and that the spread of American values has made the U.S. more secure and influential, a less democratic world will also be a more dangerous world.
Second, a spheres-of-influence world would be less open to American commerce and investment. After all, the United States itself saw geoeconomic dominance in Latin America as the necessary counterpart to geopolitical dominance. Why would China take a less self-interested approach? China already reaps the advantages of an open global economy even as it embraces protectionism and mercantilism. In a Chinese-dominated East Asia, all economic roads will surely lead to Beijing, as Chinese officials will be able to use their leverage to ensure that trade and investment flows are oriented toward China and geopolitical competitors like the United States are left on the outside. Beijing’s current geoeconomic projects—namely, RCEP and the Belt and Road Initiative—offer insight into a regional economic future in which flows of commerce and investment are subject to heavy Chinese influence.
Third, as spheres of influence reemerge, the United States will be less able to shape critical geopolitical events in crucial regions. The reason Washington has long taken an interest in events in faraway places is that East Asia, Europe, and the Middle East are the areas from which major security challenges have emerged in the past. Since World War II, America’s forward military presence has been intended to suppress incipient threats and instability; that presence has gone hand in glove with energetic diplomacy that amplifies America’s voice and protects U.S. interests. In a spheres-of-influence world, Washington would no longer enjoy the ability to act with decisive effect in these regions; it would find itself reacting to global events rather than molding them.
This leads to a final, and crucial, issue. America would be more likely to find its core security interests challenged because world orders based on rival spheres of influence have rarely been as peaceful and settled as one might imagine.
To see this, just work backward from the present. During the Cold War, a bipolar balance did help avert actual war between Moscow and Washington. But even in Europe—where the spheres of influence were best defined—there were continual tensions and crises as Moscow tested the Western bloc. And outside Europe, violence and proxy wars were common as the superpowers competed to extend their reach into the Third World. In the 1930s, the emergence of German and Japanese spheres of influence led to the most catastrophic war in global history. The empires of the 19th century—spheres of influence in their own right—continually jostled one another, leading to wars and near-wars over the course of decades; the Peace of Amiens between England and Napoleonic France lasted a mere 14 months. And looking back to the ancient world, there were not one, but three Punic Wars fought between Rome and Carthage as two expanding empires came into conflict. A world defined by spheres of influence is often a world characterized by tensions, wars, and competition.
The reasons for this are simple. As the political scientist William Wohlforth observed, unipolar systems—such as the U.S.-dominated post–Cold War order—are anchored by a hegemonic power that can act decisively to maintain the peace. In a unipolar system, Wohlforth writes, there are few incentives for revisionist powers to incur the “focused enmity” of the leading state. Truly multipolar systems, by contrast, have often been volatile. When the major powers are more evenly matched, there is a greater temptation to aggression by those who seek to change the existing order of things. And seek to change things they undoubtedly will.
The idea that spheres of influence are stabilizing holds only if one assumes that the major powers are motivated only by insecurity and that concessions to the revisionists will therefore lead to peace. Churchill described this as the idea that if one “feeds the crocodile enough, the crocodile will eat him last.”
Unfortunately, today’s rising or resurgent powers are also motivated—as is America—by honor, ambition, and the timeless desire to make their international habitats reflect their own interests and ideals. It is a risky gamble indeed, then, to think that ceding Russia or China an uncontested sphere of influence would turn a revisionist authoritarian regime into a satisfied power. The result, as Robert Kagan has noted, might be to embolden those actors all the more, by giving them freer rein to bring their near-abroads under control, greater latitude and resources to pursue their ambitions, and enhanced confidence that the U.S.-led order is fracturing at its foundations. For China, dominance over the first island chain might simply intensify desires to achieve primacy in the second island chain and beyond; for Russia, renewed mastery in the former Soviet space could lead to desires to bring parts of the former Warsaw Pact to heel, as well. To observe how China is developing ever longer-range anti-access/area-denial capabilities, or how Russia has been projecting military power ever farther afield, is to see this process in action.

The reemergence of a spheres-of-influence world would thus undercut one of the great historical achievements of U.S. foreign policy: the creation of a system in which America is the dominant power in each major geopolitical region and can act decisively to shape events and protect its interests. It would foster an environment in which democratic values are less prominent, authoritarian models are ascendant, and mercantilism advances as economic openness recedes. And rather than leading to multipolar stability, this change could simply encourage greater revisionism on the part of powers whose appetite grows with the eating. This would lead the world away from the relative stability of the post–Cold War era and back into the darker environment it seemed to have relegated to history a quarter-century ago. The phrase “spheres of influence” may sound vaguely theoretical and benign, but its real-world effects are likely to be tangible and pernicious.
Fortunately, the return of a spheres-of-influence world is not yet inevitable. Even as some nations will accept incorporation into a Chinese or Russian sphere of influence as the price of avoiding conflict, or maintaining access to critical markets and resources, others will resist because they see their own well-being as dependent on the preservation of the world order that Washington has long worked to create. The Philippines and Cambodia seem increasingly to fall into the former group; Poland and Japan, among many others, make up the latter. The willingness of even this latter group to take actions that risk incurring Beijing and Moscow’s wrath, however, will be constantly calibrated against an assessment of America’s own ability to continue leading the resistance to a spheres-of-influence world. Averting that outcome is becoming steadily harder, as the relative power and ambition of America’s authoritarian rivals rise and U.S. leadership seems to falter.
Harder, but not impossible. The United States and its allies still command a significant preponderance of global wealth and power. And the political, economic, and military weaknesses of its challengers are legion. It is far from fated, then, that the Western Pacific and Eastern Europe will slip into China’s and Russia’s respective orbits. With sufficient creativity and determination, Washington and its partners might still be able to resist the return of a dangerous global system. Doing so will require difficult policy work in the military, economic, and diplomatic realms. But ideas precede policy, and so simply rediscovering the venerable tradition of American hostility to spheres of influence—and no less, the powerful logic on which that tradition is based—would be a good start.
What does the man with the baton actually do?
Why, then, are virtually all modern professional orchestras led by well-paid conductors instead of performing on their own? It’s an interesting question. After all, while many celebrity conductors are highly trained and knowledgeable, there have been others, some of them legendary, whose musical abilities were and are far more limited. It was no secret in the world of classical music that Serge Koussevitzky, the music director of the Boston Symphony from 1924 to 1949, found it difficult to read full orchestral scores and sometimes learned how to lead them in public by first practicing with a pair of rehearsal pianists whom he “conducted” in private.
Yet recordings show that Koussevitzky’s interpretations of such complicated pieces of music as Aaron Copland’s El Salón México and Maurice Ravel’s orchestral transcription of Mussorgsky’s Pictures at an Exhibition (both of which he premiered and championed) were immensely characterful and distinctive. What made them so? Was it the virtuosic playing of the Boston Symphony alone? Or did Koussevitzky also bring something special to these performances—and if so, what was it?
Part of what makes this question so tricky to answer is that scarcely any well-known conductors have spoken or written in detail about what they do. Only two conductors of the first rank, Thomas Beecham and Bruno Walter, have left behind full-length autobiographies, and neither one features a discussion of its author’s technical methods. For this reason, the publication of John Mauceri’s Maestros and Their Music: The Art and Alchemy of Conducting will be of special interest to those who, like my friend, wonder exactly what it is that conductors contribute to the performances that they lead.1
An impeccable musical journeyman best known for his lively performances of film music with the Hollywood Bowl Orchestra, Mauceri has led most of the world’s top orchestras. He writes illuminatingly about his work in Maestros and Their Music, leavening his discussions of such matters as the foibles of opera directors and music critics with sharply pointed, sometimes gossipy anecdotes. Most interesting of all, though, are the chapters in which he talks about what conductors do on the podium. To read Maestros and Their Music is to come away with a much clearer understanding of what its author calls the “strange and lawless world” of conducting—and to understand how conductors whose technique is deficient to the point of seeming incompetence can still give exciting performances.
Prior to the 19th century, conductors of the modern kind did not exist. Orchestras were smaller then—most of the ensembles that performed Mozart’s symphonies and operas contained anywhere from two to three dozen players—and their concerts were “conducted” either by the leader of the first violins or by the orchestra’s keyboard player.
As orchestras grew larger in response to the increasing complexity of 19th-century music, however, it became necessary for a full-time conductor both to rehearse them and to control their public performances, normally by standing on a podium placed in front of the musicians and beating time in the air with a baton. Most of the first men to do so were composers, including Hector Berlioz, Felix Mendelssohn, and Richard Wagner. By the end of the century, however, it was becoming increasingly common for musicians to specialize in conducting, and some of them, notably Arthur Nikisch and Arturo Toscanini, came to be regarded as virtuosos in their own right. Since then, only three important composers—Benjamin Britten, Leonard Bernstein, and Pierre Boulez—have also pursued parallel careers as world-class conductors. Every other major conductor of the 20th century was a specialist.
What did these men do in front of an orchestra? Mauceri’s description of the basic physical process of conducting is admirably straightforward:
The right hand beats time; that is, it sets the tempo or pulse of the music. It can hold a baton. The left hand turns pages [in the orchestral score], cues instrumentalists with an invitational or pointing gesture, and generally indicates the quality of the notes (percussive, smoothly linked, sustained, etc.).
Beyond these elements, though, all bets are off. Most of the major conductors of the 20th century were filmed in performance, and what one sees in these films is so widely varied that it is impossible to generalize about what constitutes a good conducting technique.2 Most of them used batons, but several, including Boulez and Leopold Stokowski, conducted with their bare hands. Bernstein and Beecham gestured extravagantly, even wildly, while others, most famously Fritz Reiner, restricted themselves to tightly controlled hand movements. Toscanini beat time in a flowing, beautifully expressive way that made his musical intentions self-evident, but Wilhelm Furtwängler and Herbert von Karajan often conducted so unclearly that it is hard to see how the orchestras they led were able to follow them. (One exasperated member of the London Philharmonic claimed, partly in jest, that Furtwängler’s baton signaled the start of a piece “only after the thirteenth preliminary wiggle.”) Conductors of the Furtwängler sort tend to be at their best in front of orchestras with which they have worked for many years and whose members have learned from experience to “speak” their gestural language fluently.
Nevertheless, all of these men were pursuing the same musical goals. Beyond stopping and starting a given piece, it is the job of a conductor to decide how it will be interpreted. How loud should the middle section of the first movement be—and ought the violins to be playing a bit softer so as not to drown out the flutes? Someone must answer questions such as these if a performance is not to sound indecisive or chaotic, and it is far easier for one person to do so than for 100 people to vote on each decision.
Above all, a conductor controls the tempo of a performance, varying it from moment to moment as he sees fit. It is impossible for a full-sized symphony orchestra to play a piece with any degree of rhythmic flexibility unless a conductor is controlling the performance from the podium. Bernstein put it well when he observed in a 1955 TV special that “the conductor is a kind of sculptor whose element is time instead of marble.” These “sculptural” decisions are subjective, since traditional musical notation cannot specify tempo with exactitude. As Mauceri reminds us, Toscanini and Beecham both recorded La Bohème, having previously discussed their interpretations with Giacomo Puccini, the opera’s composer, and Toscanini conducted its 1896 premiere. Yet Beecham’s performance is 14 minutes longer than Toscanini’s. Who is “right”? It is purely a matter of individual taste, since both interpretations are powerfully persuasive.
Beyond the not-so-basic task of setting, maintaining, and varying tempos, it is the job of a conductor to inspire an orchestra—to make its members play with a charged precision that transcends mere unanimity. The first step in doing so is to persuade the players of his musical competence. If he cannot run a rehearsal efficiently, they will soon grow bored and lose interest; if he does not know the score in detail, they will not take him seriously. This requires extensive preparation on the part of the conductor, and an orchestra can tell within seconds of the downbeat whether he is adequately prepared—a fact that every conductor knows. “I’m extremely humble about whatever gifts I may have, but I am not modest about the work I do,” Bernstein once told an interviewer. “I work extremely hard and all the time.”
All other things being equal, it is better for a conductor to have a clear technique, if only because it simplifies and streamlines the process of rehearsing an orchestra. Fritz Reiner, who taught Bernstein among others, did not exaggerate when he claimed that he and his pupils could “stand up [in front of] an orchestra they have never seen before and conduct correctly a new piece at first sight without verbal explanation and by means only of manual technique.”
While orchestra players prefer this kind of conducting, a conductor need not have a technique as fully developed as that of a Reiner or Bernstein if he knows how to rehearse effectively. Given sufficient rehearsal time, decisive and unambiguous verbal instructions will produce the same results as a virtuoso stick technique. This was how Willem Mengelberg and George Szell distinguished themselves on the podium. Their techniques were no better than adequate, but they rehearsed so meticulously that their performances were always brilliant and exact.
It also helps to supply the members of the orchestra with carefully marked orchestra parts. Beecham’s manual technique was notoriously messy, but he marked his musical intentions into each player’s part so clearly and precisely that simply reading the music on the stand would produce most of the effects that he desired.
What players do not like is to be lectured. They want to be told what to do and, if absolutely necessary, how to do it, at which point the wise conductor will stop talking and start conducting. Mauceri recalls the advice given to a group of student conductors by Joseph Silverstein, the concertmaster of the Boston Symphony: “Don’t talk to us about blue skies. Just tell us ‘longer-shorter,’ ‘faster-slower,’ ‘higher-lower.’” Professional musicians cannot abide flowery speeches about the inner meaning of a piece of music, though they will readily respond to a well-turned metaphor. Mauceri makes this point with a Toscanini anecdote:
One of Toscanini’s musicians told me of a moment in a rehearsal when the sound the NBC Symphony was giving him was too heavy. … In this case, without saying a word, he reached into his pocket and took out his silk handkerchief, tossed it into the air, and everyone watched it slowly glide to earth. After seeing that, the orchestra played the same passage exactly as Toscanini wanted.
Conducting, like all acts of leadership, is in large part a function of character. The violinist Carl Flesch went so far as to call it “the only musical activity in which a dash of charlatanism is not only harmless, but positively necessary.” While that is putting it too cynically, Flesch was on to something. I did a fair amount of conducting in college, but even though I practiced endlessly in front of a mirror and spent hours poring over my scores, I lacked the personal magnetism without which no conductor can hope to be more than merely competent.
On the other hand, a talented musician with a sufficiently compelling personality can turn himself into a conductor more or less overnight. Toscanini had never conducted an orchestra before making his unrehearsed debut in a performance of Verdi’s Aida at the age of 19, yet the players hastened to do his musical bidding. I once saw the modern-dance choreographer Mark Morris, whose knowledge of classical music is profound, lead a chorus and orchestra in the score to Gloria, a dance he had made in 1981 to a piece by Vivaldi. It was no stunt: Morris used a baton and a score and controlled the performance with the assurance of a seasoned pro. Not only did he have a strong personality, but he had also done his musical homework, and he knew that one was as important as the other.
The reverse, however, is no less true: The success of conductors like Serge Koussevitzky is at least as much a function of their personalities as of their preparation. To be sure, Koussevitzky had been an instrumental virtuoso (he played the double bass) before taking up conducting, but everyone who worked with him in later years was aware of his musical limitations. Yet he was still capable of imposing his larger-than-life personality on players who might well have responded indifferently to his conducting had he been less charismatic. Leopold Stokowski functioned in much the same way. He was widely thought by his peers to have been far more a showman than an artist, to the point that Toscanini contemptuously dismissed him as a “clown.” But he had, like Koussevitzky, a richly romantic musical imagination coupled with the showmanship of a stage actor, and so the orchestras that he led, however skeptical they might be about his musical seriousness, did whatever he wanted.
All great conductors share this same ability to impose their will on an orchestra—and that, after all, is the heart of the matter. A conductor can be effective only if the orchestra does what he wants. An orchestra is not like a piano, whose notes automatically sound when the keys are pressed; it is a living organism with a will of its own. Conducting, then, is first and foremost an act of persuasion, as Mauceri acknowledges:
The person who stands before a symphony orchestra is charged with something both impossible and improbable. The impossible part is herding a hundred musicians to agree on something, and the improbable part is that one does it by waving one’s hands in the air.
This is why so many famous conductors have claimed that the art of conducting cannot be taught. In the deepest sense, they are right. To be sure, it is perfectly possible, as Reiner did, to teach the rudiments of clear stick technique and effective rehearsal practice. But the mystery at the heart of conducting is, indeed, unteachable: One cannot tell a budding young conductor how to cultivate a magnetic personality, any more than an actor can be taught how to have star quality. What sets the Bernsteins and Bogarts of the world apart from the rest of us is very much like what James M. Barrie said of feminine charm in What Every Woman Knows: “If you have it, you don’t need to have anything else; and if you don’t have it, it doesn’t much matter what else you have.”
2 Excerpts from many of these films were woven together into a two-part BBC documentary, The Art of Conducting, which is available on home video and can also be viewed in its entirety on YouTube.
Not that he tries. What was remarkable about the condescension in this instance was that Franken directed it at women who accused him of behaving “inappropriately” toward them. (In an era of strictly enforced relativism, we struggle to find our footing in judging misbehavior, so we borrow words from the prissy language of etiquette. The mildest and most common rebuke is unfortunate, followed by the slightly more serious inappropriate, followed by the ultimate reproach: unacceptable, which, depending on the context, can include both attempted rape and blowing your nose into your dinner napkin.) Franken’s inappropriateness entailed, so to speak, squeezing the bottoms of complete strangers, and cupping the occasional breast.
Franken himself did not use the word “inappropriate.” By his account, he had done nothing to earn the title. His earlier vague denials of the allegations, he told his fellow senators, “gave some people the false impression that I was admitting to doing things that, in fact, I haven’t done.” How could he have confused people about such an important matter? Doggone it, it’s that damn sensitivity of his. The nation was beginning a conversation about sexual harassment—squeezing strangers’ bottoms, stuff like that—and “I wanted to be respectful of that broader conversation because all women deserve to be heard and their experiences taken seriously.”
Well, not all women. The women with those bottoms and breasts he supposedly manhandled, for example—their experiences don’t deserve to be taken seriously. We’ve got Al’s word on it. “Some of the allegations against me are not true,” he said. “Others, I remember very differently.” His accusers, in other words, fall into one of two camps: the liars and the befuddled. You know how women can be sometimes. It might be a hormonal thing.
But enough about them, Al seemed to be saying: Let’s get back to Al. “I know the work I’ve been able to do has improved people’s lives,” Franken said, but he didn’t want to get into any specifics. “I have used my power to be a champion of women.” He has faith in his “proud legacy of progressive advocacy.” He’s been passionate and worked hard—not for himself, mind you, but for his home state of Minnesota, by which he’s “blown away.” And yes, he would get tired or discouraged or frustrated once in a while. But then that big heart of his would well up: “I would think about the people I was doing this for, and it would get me back on my feet.” Franken recently published a book about himself: Giant of the Senate. I had assumed the title was ironic. Now I’m not sure.
Yet even in his flights of self-love, the problem that has ever attended Senator Franken was still there. You can’t take him seriously. He looks as though God made him to be a figure of fun. Try as he might, his aspect is that of a man who is going to try to make you laugh, and who is built for that purpose and no other—a close cousin to Bert Lahr or Chris Farley. And for years, of course, that’s the part he played in public life, as a writer and performer on Saturday Night Live. When he announced nine years ago that he would return to Minnesota and run for the Senate—when he came out of the closet and tried to present himself as a man of substance—the effect was so disorienting that I, and probably many others, never quite recovered. As a comedian-turned-politician, he was no longer the one and could never quite become the other.
The chubby cheeks and the perpetual pucker, the slightly crossed eyes behind Coke-bottle glasses, the rounded, diminutive torso straining to stay upright under the weight of an enormous head—he was the very picture of Comedy Boy, and suddenly he wanted to be something else: Politics Boy. I have never seen the famously tasteless tearjerker The Day the Clown Cried, in which Jerry Lewis stars as a circus clown imprisoned in a Nazi death camp, but I’m sure watching it would be a lot like watching the ex-funnyman Franken deliver a speech about farm price supports.
Then he came to Washington and slipped right into place. His career is testament to a dreary fact of life here: Taken in the mass, senators are pretty much interchangeable. Party discipline determines nearly every vote they cast. Only at the margins is one Democrat or Republican different in a practical sense from another Democrat or Republican. Some of us held out hope, despite the premonitory evidence, that Franken might use his professional gifts in service of his new job. Yet so desperate was he to be taken seriously that he quickly passed serious and swung straight into obnoxious. It was a natural fit. In no time at all, he mastered the senatorial art of asking pointless or showy questions in committee hearings, looming from his riser over fumbling witnesses and hollering “Answer the question!” when they didn’t respond properly.
It’s not hard to be a good senator, if you have the kind of personality that frees you to simulate chumminess with people you scarcely know or have never met and will probably never see again. There’s not much to it. A senator has a huge staff to satisfy his every need. There are experts to give him brief, personal tutorials on any subject he will be asked about, writers to write his questions for his committee hearings and an occasional op-ed if an idea strikes him, staffers to arrange his travel and drive him here or there, political aides to guard his reputation with the folks back home, press aides to regulate his dealings with reporters, and legislative aides to write the bills should he ever want to introduce any. The rest is show biz.
Oddly, Franken was at his worst precisely when he was handling the show-biz aspects of his job. While his inquisitions in committee hearings often showed the obligatory ferocity and indignation, he could also appear baffled and aimless. His speeches weren’t much good, and he didn’t deliver them well. As if to prove the point, he published a collection of them earlier this year, Speaking Franken. Until Pearl Harbor, he’d been showing signs of wanting to run for president. Liberal pundits were talking him up as a national candidate. Speaking Franken was likely intended to do for him what Profiles in Courage did for John Kennedy, another middling senator with presidential longings. Unfortunately for Franken, Ted Sorensen is still dead.
The final question raised by Franken’s resignation is why so many fellow Democrats urged him to give up his seat so suddenly, once his last accuser came forward. The consensus view involved Roy Moore, in those dark days when he was favored to win Alabama’s special election. With the impending arrival of an accused pedophile on the Republican side of the aisle, Democrats didn’t want an accused sexual harasser in their own ranks to deflect what promised to be a relentless focus on the GOP’s newest senator. This is bad news for any legacy Franken once hoped for himself. None of his work as a senator will commend him to history. He will be remembered instead for two things: as a minor TV star, and as Roy Moore’s oldest victim.
Review of 'Lioness' by Francine Klagsbrun
Golda Meir, Israel’s fourth prime minister, moved to Palestine from America in 1921, at the age of 23, to pursue Socialist Zionism. She was instrumental in transforming the Jewish people into a state; signed that state’s Declaration of Independence; served as its first ambassador to the Soviet Union, as labor minister for seven years, and as foreign minister for a decade. In 1969, she became the first female head of state in the Western world, serving from the aftermath of the 1967 Six-Day War through the nearly catastrophic but ultimately victorious 1973 Yom Kippur War. She resigned in 1974 at the age of 76, after five years as prime minister. Her involvement at the forefront of Zionism and the leadership of Israel thus extended over more than half a century.
This is the second major biography of Golda Meir in the last decade, after Elinor Burkett’s excellent Golda in 2008. Klagsbrun’s portrait is even grander in scope. Her epigraph comes from Ezekiel’s lamentation for Israel: What a lioness was your mother / Among the lions! / Crouching among the great beasts / She reared her cubs. The “mother” was Israel; the “cubs,” her many ancient kings; the “great beasts,” the hostile nations surrounding her. One finishes Klagsbrun’s monumental volume, which is both a biography of Golda and a biography of Israel in her time, with a deepened sense that modern Israel, its prime ministers, and its survival make up a story of biblical proportions.
Golda Meir’s story spans three countries—Russia, America, and Israel. Before she was Golda Meir, she was Golda Meyerson; and before that, she was Golda Mabovitch, born in 1898 in Kiev in the Russian Empire. Her father left for America after the horrific Kishinev pogrom in 1903, found work in Milwaukee as a carpenter, and in 1906 sent for his wife and three daughters, who escaped using false identities and border bribes. Golda said later that what she took from Russia was “fear, hunger and fear.” It was an existential fear that she never forgot.
In Milwaukee, Golda found socialism in the air: The city had both a socialist mayor and a socialist congressman, and she was enthralled by news from Palestine, where Jews were living out socialist ideals in kibbutzim. She immersed herself in Poalei Zion (Workers of Zion), a movement synthesizing Zionism and socialism, and in 1917 married a fellow socialist, Morris Meyerson. As soon as conditions permitted, they moved to Palestine, where the marriage ultimately failed—a casualty of the extended periods she spent away from home working for Socialist Zionism and her admission that the cause was more important to her than her husband and children. Klagsbrun writes that Meir might appear to be the consummate feminist: She asserted her independence from her husband, traveled continually and extensively on her own, left her husband and children for months to pursue her work, and demanded respect as an individual rather than special treatment based on her gender. But she never considered herself a feminist; indeed, she denigrated women’s organizations for reducing issues to women’s interests only, and she gave minimal assistance to other women. Klagsbrun concludes that questions about Meir as a feminist figure ultimately “hang in the air.”
Her American connection and her unaccented American English became strategic assets for Zionism. She understood American Jews, spoke their language, and conducted many fundraising trips to the United States, tirelessly raising tens of millions of dollars of critically needed funds. David Ben-Gurion called her the “woman who got the money which made the state possible.” Klagsbrun provides the schedule of her 1932 trip as an example of her efforts: Over the course of a single month, the 34-year-old Zionist pioneer traveled to Kansas City, Tulsa, Dallas, San Antonio, Los Angeles, San Francisco, Seattle, and three cities in Canada. She became the face of Zionism in America—“The First Lady,” in the words of a huge banner at a later Chicago event, “of the Jewish People.” She connected with American Jews in a way no other Zionist leader had done before her.
In her own straightforward way, she mobilized the English language and sent it into battle for Zionism. While Abba Eban denigrated her poor Hebrew—“She has a vocabulary of two thousand words, okay, but why doesn’t she use them?”—she had a way of crystallizing issues in plainspoken English. Of British attempts to prevent the growth of the Jewish community in Palestine, she said Britain “should remember that Jews were here 2,000 years before the British came.” Of expressions of sympathy for Israel: “There is only one thing I hope to see before I die, and that is that my people should not need expressions of sympathy anymore.” And perhaps her most famous saying: “Peace will come when the Arabs love their children more than they hate us.”
Once she moved to the Israeli foreign ministry, she changed her name from Meyerson to Meir, in response to Ben-Gurion’s insistence that ministers assume Israeli names. She began a decade-long tenure there as the voice and face of Israel in the world. At a Madison Square Garden rally after the 1967 Six-Day War, she observed sardonically that the world called Israelis “a wonderful people,” complimented them for having prevailed “against such odds,” and yet wanted Israel to give up what it needed for its self-defense:
“Now that they have won this battle, let them go back where they came from, so that the hills of Syria will again be open for Syrian guns; so that Jordanian Legionnaires, who shoot and shell at will, can again stand on the towers of the Old City of Jerusalem; so that the Gaza Strip will again become a place from which infiltrators are sent to kill and ambush.” … Is there anybody who has the boldness to say to the Israelis: “Go home! Begin preparing your nine and ten year olds for the next war, perhaps in ten years.”
The next war would come not in ten years, but in six, and while Meir was prime minister.
Klagsbrun’s extended discussion of Meir’s leadership before, during, and after the 1973 Yom Kippur War is one of the most valuable parts of her book, enabling readers to make informed judgments about that war and assess Meir’s ultimate place in Israeli history. The book makes a convincing case that there was no pre-war “peace option” that could have prevented the conflict. Egypt’s leader, Anwar Sadat, was insisting on a complete Israeli withdrawal before negotiations could even begin, and Meir’s view was, “We had no peace with the old boundaries. How can we have peace by returning to them?” She considered the demand part of a plan to push Israel back to the ’67 lines “and then bring the Palestinians back, which means no more Israel.”
A half-century later, after three Israeli offers of a Palestinian state on substantially all the disputed territories—with the Palestinians rejecting each offer, insisting instead on an Israeli retreat to indefensible lines and recognition of an alleged Palestinian “right of return”—Meir’s view looks prescient.
Klagsbrun’s day-by-day description of the ensuing war is largely favorable to Meir, who relied on assurances from her defense minister, Moshe Dayan, that the Arabs would not attack, and assurances from her intelligence community that, even if they did, Israel would have 48 hours’ notice—enough time to mobilize the reserves that constituted more than 75 percent of its military force. Both sets of assurances proved false, and the joint Egyptian-Syrian attack took virtually everyone in Israel by surprise. Dayan had something close to a mental breakdown, but Meir remained calm and in control after the initial shock, making key military decisions. She was able to rely on the excellent personal relationships she had established with President Nixon and his national security adviser, Henry Kissinger, and on the critical resupply of American arms that enabled Israel—once its reserves were called into action—to take the war into Egyptian and Syrian territories, with Israeli forces camped in both countries by its end.
Meir had resisted the option of a preemptive strike against Egypt and Syria when it suddenly became clear, 12 hours before the war started, that coordinated Egyptian and Syrian attacks were coming. On the second day of the war, she told her war cabinet that she regretted not having authorized the IDF to act, and she sent a message to Kissinger that Israel’s “failure to take such action is the reason for our situation now.” After the war, however, she testified that, had Israel begun the war, the U.S. would not have sent the crucial assistance that Israel needed (a point on which Kissinger agreed), and that she therefore believed she had done the right thing. A preemptive response, however, or a massive call-up of the reserves in the days before the attacks, might have avoided a war in which Israel lost 2,600 soldiers—the demographic equivalent of all the American losses in the Vietnam War.
It is hard to fault Meir’s decision, given the erroneous information and advice she was uniformly receiving from all her defense and intelligence subordinates, but it is a reminder that for Israeli prime ministers (such as Levi Eshkol in the Six-Day War, Menachem Begin with the Iraq nuclear reactor in 1981, and Ehud Olmert with the Syrian one in 2007), the potential necessity of taking preemptive action always hangs in the air. Klagsbrun’s extensive discussion of the Yom Kippur War is a case study of that question, and an Israeli prime minister may yet again face that situation.
The Meir story is also a tale of the limits of socialism as an organizing principle for the modern state. Klagsbrun writes about “Golda’s persistent—and hopelessly utopian—vision of how a socialist society should be conducted,” exemplified by her dream of instituting commune-like living arrangements for urban families, comparable to those in the kibbutzim, where all adults would share common kitchens and all the children would eat at school. She also tried to institute a family wage system, in which people would be paid according to their needs rather than their talents, a battle she lost when the unionized nurses insisted on being paid as professionals, based on their education and experience, and not the sizes of their families.
Socialism foundered not only on the laws of economics and human nature but also in the realm of foreign relations. In 1973, enraged that the socialist governments and leaders in Europe had refused to come to Israel’s aid during the Yom Kippur War, Meir convened a special London conference of the Socialist International, attended by eight heads of state and a dozen other socialist-party leaders. Before the conference, she told Willy Brandt, Germany’s socialist chancellor, that she wanted “to hear for myself, with my own ears, what it was that kept the heads of these socialist governments from helping us.”
In her speech at the conference, she criticized the Europeans for not even permitting “refueling the [American] planes that saved us from destruction.” Then she told them, “I just want to understand … what socialism is really about today”:
We are all old comrades, long-standing friends. … Believe me, I am the last person to belittle the fact that we are only one tiny Jewish state and that there are over twenty Arab states with vast territories, endless oil, and billions of dollars. But what I want to know from you today is whether these things are the decisive factors in Socialist thinking, too?
After she concluded her speech, the chairman asked whether anyone wanted to reply. No one did, and she thus effectively received her answer.
One wonders what Meir would think of the Socialist International today. On the centenary of the Balfour Declaration last year, the World Socialist website called it “a sordid deal” that launched “a nakedly colonial project.” Socialism was part of the cause for which she went to Palestine in 1921, and it has not fared well in history’s judgment. But the other half—Zionism—became one of the great successes of the 20th century, in significant part because of the lifelong efforts of individuals such as she.
Golda Meir has long been a popular figure in the American imagination, particularly among American Jews. Her ghostwritten autobiography was a bestseller; Ingrid Bergman played her in a well-received TV film; Anne Bancroft played her on the Broadway stage. But her image as the “71-year-old grandmother,” as the press frequently referred to her when she became prime minister, has always obscured the historic leader beneath that façade. She was a woman with strengths and weaknesses who willed herself into half a century of history. Francine Klagsbrun has given us a magisterial portrait of a lioness in full.
Back in 2016, then–deputy national-security adviser Ben Rhodes gave an extraordinary interview to the New York Times Magazine in which he revealed how President Obama exploited a clueless and deracinated press to steamroll opposition to the Iranian nuclear deal. “We created an echo chamber,” Rhodes told journalist David Samuels. “They”—writers and bloggers and pundits—“were saying things that validated what we had given them to say.”
Rhodes went on to explain that his job was made easier by structural changes in the media, such as the closing of foreign bureaus, the retirement of experienced editors and correspondents, and the shift from investigative reporting to aggregation. “The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns,” he said. “That’s a sea change. They literally know nothing.”
And they haven’t learned much. It was dispiriting to watch in December as journalists repeated arguments against the Jerusalem decision presented by Rhodes on Twitter. On December 5, quoting Mahmoud Abbas’s threat that moving the U.S. Embassy to Jerusalem would have “dangerous consequences,” Rhodes tweeted, “Trump seems to view all foreign policy as an extension of a patchwork of domestic policy positions, with no regard for the consequences of his actions.” He seemed blissfully unaware that the same could be said of his old boss.
The following day, Rhodes tweeted, “In addition to making goal of peace even less possible, Trump is risking huge blowback against the U.S. and Americans. For no reason other than a political promise he doesn’t even understand.” On December 8, quoting from a report that the construction of a new American Embassy would take some time, Rhodes asked, “Then why cause an international crisis by announcing it?”
Rhodes made clear his talking points for the millions of people inclined to criticize President Trump: Acknowledging Israel’s right to name its own capital is unnecessary and self-destructive. Rhodes’s former assistant, Ned Price, condensed the potential lines of attack in a single tweet on December 5. “In order to cater to his political base,” Price wrote, “Trump appears willing to: put U.S. personnel at great risk; risk C-ISIL [counter-ISIL] momentum; destabilize a regional ally; strain global alliances; put Israeli-Palestinian peace farther out of reach.”
Prominent media figures happily reprised their roles in the echo chamber. Susan Glasser of Politico: “Just got this in my in box from Ayman Odeh, leading Arab Israeli member of parliament: ‘Trump is a pyromaniac who could set the entire region on fire with his madness.’” BBC reporter Julia Macfarlane: “Whether related or not, everything that happens from now on in Israel and the Pal territories will be examined in the context of Trump signaling to move the embassy to Jerusalem.” Neither Rhodes nor Price could have asked for more.
Network news broadcasts described the president’s decision as “controversial” but only reported on the views of one side in the controversy. Guess which one. “There have already been some demonstrations,” reported NBC’s Richard Engel. “They are expected to intensify, with Palestinians calling for three days of rage if President Trump goes through with it.” Left unmentioned was the fact that Hamas calls for days of rage like you and I call for pizza.
Throughout Engel’s segment, the chyron read: “Controversial decision could lead to upheaval.” On ABC, George Stephanopoulos said, “World leaders call the decision dangerous.” On CBS, Gayle King chimed in: “U.S. allies and leaders around the world say it’s a big mistake that will torpedo any chance of Middle East peace.” Oh? What were the chances of Middle East peace prior to Trump’s speech?
On CNN, longtime peace processor Aaron David Miller likened recognizing Jerusalem to hitting “somebody over the head with a hammer.” On MSNBC, Chris Matthews fumed: “Deaths are coming.” That same network featured foreign-policy gadfly Steven Clemons of the Atlantic, who said Trump “stuck a knife in the back of the two-state process.” Price and former Obama official Joel Rubin also appeared on the network to denounce Trump. “American credibility is shot, and in diplomacy, credibility relies on your word, and our word is, at this moment, not to be trusted from a peace-process perspective, certainly,” Rubin said. This from the administration that gave new meaning to the words “red line.”
Some journalists were so devoted to Rhodes’s tendentious narrative of Trump’s selfishness and heedlessness that they mangled the actual story. “He had promised this day would come, but to hear these words from the White House was jaw-dropping,” said Martha Raddatz of ABC. “Not only signing a proclamation reversing nearly 70 years of U.S. policy, but starting plans to move the embassy to Jerusalem. No one else on earth has an embassy there!” How dare America take a brave stand for a small and threatened democracy!
In fact, Trump was following U.S. policy as legislated by the Congress in 1995, reaffirmed in the Senate by a 90–0 vote just last June, and supported (in word if not in deed) by his three most recent predecessors as well as the last four Democratic party platforms. Most remarkable, the debate surrounding the Jerusalem policy ignored a crucial section of the president’s address. “We are not taking a position on any final-status issues,” he said, “including the specific boundaries of Israeli sovereignty in Jerusalem, or the resolution of contested borders. Those questions are up to the parties involved.” What we did, then, was simply accept the reality that the city that houses the Knesset, and where the head of government receives foreign dignitaries, is the capital of Israel.
However, just as had happened during the debate over the Iran deal, the facts were far less important to Rhodes than the overarching strategic goal. In this case, the objective was to discredit and undermine President Trump’s policy while isolating the conservative government of Israel. Yet there were plenty of reasons to be skeptical of the disingenuous duo of Rhodes and Price. Trump’s announcement was bold, for sure, but the tepid protests from Arab capitals, more worried about the rise of Iran (a rise that Rhodes and Price facilitated) than about the Palestinian issue, suggested that the “Arab street” would sit this one out.
Which is what happened. Moreover, verbal disagreement aside, there is no evidence that the Atlantic alliance is in jeopardy. Nor has the war on ISIS lost momentum. As for putting “Israeli–Palestinian peace farther out of reach,” if third-party recognition of Jerusalem as Israel’s capital forecloses a deal, perhaps no deal was ever possible. Rhodes and Price would like us to overlook the fact that the two sides weren’t even negotiating during the Obama administration—an administration that did as much as possible to harm relations between Israel and the United States.
This most recent episode of the Trump show was a reminder that some things never change. Jerusalem was, is, and will be the capital of the Jewish state. President Trump routinely ignores conventional wisdom and expert opinion. And whatever nonsense President Obama and his allies say today, the press will echo tomorrow.