A defeat so overwhelming as that which Governor Reagan inflicted on President Carter soon takes on the air of the inevitable. Before it does it may be useful to record that those who were defeated in no way looked upon the outcome as fated. To the contrary, the view in the White House was that things were going well until March 1, when Ambassador Donald F. McHenry voted in favor of a particularly vicious anti-Israel resolution in the Security Council of the United Nations, followed three weeks later by the appearance of Secretary of State Cyrus R. Vance before the Senate Foreign Relations Committee in which he refused to disavow the vote. Thereafter, in this view, everything spun out of control. The Carter administration left Washington convinced—and proclaiming—that defeat was brought on by malevolent incompetence at the U.S. Mission to the United Nations and the inability of the Secretary of State to control the Mission. What they did not proclaim and only dimly understood was that they themselves had put in place the ideas which helped bring them down; that indeed in that sense the outcome was fated.
Set forth by President Carter and others, the sequence of events was as follows. Senator Edward M. Kennedy’s challenge to the President began poorly. On March 18 the two met in Illinois, the first industrial state to hold a primary. The President won handily. If Kennedy could be beaten a week later, in New York, his candidacy would collapse. A private poll conducted by Dresner, Morris & Tortorello Research in late January and early February showed the President leading the Senator 54 percent to 28 percent among probable Democratic primary voters in New York, with only 13 percent undecided. Yet in the end, Kennedy won, 59 percent to 41 percent.
In a gracious gesture, after the results were in, Lieutenant Governor Mario M. Cuomo, who headed Mr. Carter’s campaign in New York, called the President to apologize. “No,” said Mr. Carter as reported in the New York Times, “it was the United Nations vote.” In an interview with Meg Greenfield of the Washington Post on March 27, Mr. Carter, speaking of an incumbent’s problems in running for reelection, repeated the point:
. . . Then to make a mistake like we did on the UN vote and have the Secretary of State testify a few days before the election. . . .
The same theme was sounded, finally, in a postmortem by Steven R. Weisman and Terence Smith of the Times after the national election:
This blunder, in which the administration first voted in favor of a March 1 resolution rebuking Israel on settlements in Arab-claimed territory and then disavowed it, cost the President dearly among Jewish voters in the March 18 [sic] New York primary. Senator Kennedy carried the state and attracted new contributions to his campaign, which carried on through the last batch of primaries on June 3.
“New York was our chance to knock Kennedy out of the box early,” said Mr. [Robert S.] Strauss, the campaign chairman. “We blew it with that vote.”
Jody Powell told the Times reporters that in consequence, “We sure as hell spent a lot of time and money fighting him that would have been better spent against Reagan.”
Now it will be clear that there are many reasons President Carter lost the election, of which the UN vote was only one and scarcely the most important. What is important, however, is that the administration had looked upon its United Nations record as a huge success. Other policies had failed, and that proved costly. But this had succeeded, and proved costly. When the fall of a President is involved, and possibly also the fall of a party, some notice should be taken. For I do not conceal my judgment that so long as the ideas underlying the Carter administration’s UN policy are dominant within the Democratic party, we Democrats will be out of power.
In normal circumstances UN affairs play a marginal role in United States foreign policy, the simple reason being that American foreign policy is normally preoccupied with the Soviet Union, and the UN, with its profusion of small, even mini, states, is the last setting in which two powers would wish to conduct their affairs. But four years ago, to the incoming Carter administration, the main attraction of the UN as a setting in which to conduct foreign policy was precisely the prominent role Third World nations play in UN affairs, and the North-South axis of the place. This was a setting in which the cold war could at last be put behind us. In his first major foreign-policy address, given at Notre Dame on May 22, 1977, President Carter reported that the United States had overcome its “inordinate fear of Communism,” and proposed that the two powers now join in a cooperative effort to improve North-South relations, specifically through economic assistance to the developing nations. In the meantime, the name of the UN Ambassador was promoted to second place on the directory of the State Department building, immediately below that of the Secretary.
In his Notre Dame speech, as in his appointments, President Carter brought together two strains in Democratic thinking on foreign affairs. The first was the old tradition of liberal internationalism—the extension of domestic standards of social justice to the world at large—exemplified by President Harry S. Truman’s Point Four program or President John F. Kennedy’s Alliance for Progress.
But there was another and newer strain of thought, one much at odds with the traditions of Truman and Kennedy. This was the view that had emerged in the course of the Vietnam war to the effect that the United States, by virtue of its enormous power, and in consequence of policies and perhaps even national characteristics that were anything but virtuous, had become a principal source of instability and injustice in the world. We were, in short, a status-quo power, and the status quo we were trying to preserve was abominable. By contrast, a more positive future was available to mankind if it could break out of the American dominion. Much has been written of this, and one need not expand. For my part the most evocative and excruciating memory of the onset of this point of view was the day that a group of former Peace Corps volunteers, protesting the war, ran down the American flag at Peace Corps headquarters in Washington and ran up that of the Vietcong.
Through the 1970’s this view grew in strength within the Democratic party. It was most often to be encountered when issues of defense were involved. In an article written in November 1980, R. James Woolsey, who served with distinction as Under Secretary of the Navy in the Carter administration, described how
leaders of many of the interest groups that claim to represent the traditional Democratic constituencies have convinced themselves over the last decade or so that they must be the enemies of increased American military power.
He explained why these constituencies had come to feel this way:
What you spend on tanks you can’t spend on schools or welfare, nor can you keep it. This is, however, an ageless problem of government. . . . Perhaps more important, the agony of Vietnam introduced a new element and led the interest-group spokesmen and many liberal Democratic politicians to attack the existence of American military power as a way to curtail its exercise. Throughout much of the 1970’s, the halls of the Senate Office Buildings, for example, were jammed with young staff members looking for a weapons system to have their Senator oppose. They, and their friends in the executive branch, are now typing up their resumés in no small measure because the voters understood what many of the elected officials did not—that caution in using military power is wise, but unilateral restraint in obtaining it in the face of a massive build-up by a potential enemy is extremely dangerous.
There was a precise corollary to this doctrine of self-denial in defense, and it flowed from the idea that the political hostility which the United States encountered around the world, and especially in the Third World, was, very simply, evidence of American aggression or at least of American wrongdoing. The aggression could be military, but just as often it would be diagnosed as economic (the role of the multinational corporation) or ecological (plundering the planet to sustain an obscenely gross standard of living). Often it would be presented as nothing more specific than not being “on the side of history” or “the side of change.” No matter, the prescription was the same. If the United States denied itself the means of aggression, it would cease to be aggressive. When it ceased to be aggressive, there would be peace—in the halls of the United Nations no less than in the rice paddies of Southeast Asia.
As tanks and missiles were the instrument of military aggression, so ideas were the means of diplomatic aggression—specifically that array of attitudes, judgments, and prejudices which led Americans to suppose they represented on balance a successful society, one model of how developing societies, if fortunate, might turn out, and in the interval a fair standard by which to measure the merits of other societies.
Here, in the interest of what lawyers call full disclosure, let me acknowledge that, from the first, those members of the Carter administration responsible for policy at the UN, and more generally for relations with the developing nations, regarded my own brief tenure as U.S. Permanent Representative at the UN in 1975-76 as the prime example of American diplomatic aggression.
This was notably the view of C. William Maynes, who left the Carnegie Endowment for International Peace to become President Carter’s Assistant Secretary of State for International Organization Affairs. It was the view of the Ambassadors who came and went at the U.S. Mission beginning with Andrew Young and ending with Donald McHenry. In an interview published in September 1980, contrasting his performance with mine, Ambassador McHenry said:
I don’t believe in confrontation politics, I don’t believe in name-calling. I do believe in communicating with them [i.e., Third World nations], in stating my views, listening to theirs, respecting their views, expecting them to respect mine.
A few weeks later, on October 1, 1980, taking issue with a New York Times Magazine article entitled “How the Third World Runs the UN,” he returned to this theme:
The article was reminiscent of the speeches about the “Tyranny of the Majority” that one of my predecessors used to deliver when he represented our country at what he later called “A Very Dangerous Place.”
Yet there was a fateful avoidance of reality in the new administration’s view: a denial that there is genuine hostility toward the United States in the world and true conflicts of interest between this nation and others—an illusion that a surface reasonableness and civility are the same as true cooperation.
To be sure, if there are conflicts of interest among states, there are also truly shared interests, and even genuine friendships. The world, alas, is complex, and although the new men of the Carter administration professed to understand complexity where others had missed it, they were in fact great simplifiers. They trivialized the sources of real conflict between the United States and other nations, and they exaggerated our ability to resolve them to everyone’s satisfaction.
Again, one notes a parallel with the approach of the new administration to defense and foreign policy. One of the first (and fateful) decisions of President Carter was to appoint Paul Warnke as negotiator for the strategic-arms-limitation talks with the Soviets. Warnke in his celebrated article “Apes on a Treadmill” had set forth the thesis that the Soviets essentially imitate American behavior in defense matters. Thus just as the United States could turn enmity into friendship merely by avoiding “confrontation politics” in its dealings with the Third World, so the United States could change Soviet behavior simply by changing its own.
But if these ideas had a parallel structure, they did not prove equally durable. Although President Carter had campaigned in 1976 on a pledge to cut the defense budget, his promise did not survive the first encounters with reality—the reality of conflicting interests and genuine danger. Instead, it was buried, and (admittedly modest) increases in defense spending commenced. The same readiness to retreat from unrealistic approaches was evident in the area of human rights (and indeed, here the administration’s retreat was almost over-eager). But if in these areas reality obliged the administration to think better of the ideas by which it had hoped to guide policy, no such perceptions ever managed to penetrate our approach to the United Nations. We would unilaterally change the whole international atmosphere simply by avoiding “confrontation politics.” The United States would make amends for its past failures by a greater responsiveness, by greater openness, by at last understanding the problems of others and their perspectives. Thus the psychological arrogance that lay behind the seeming humility of our new relations with the Third World—it was we who still determined how others behaved—remained intact.
At the UN the arrogance of this view was particularly risky, for those convinced of the abuse of American power found themselves representing the United States at a time when our power was in fact much reduced. Whether American interests could, even so, be protected would depend on how well this decline was perceived, on the suppleness of the new tactics that would be brought to bear, and above all on the ability to sense failure when it struck one across the face. The new administration was conducting an experiment of a sort; much would depend on whether it could tell the difference between good results and bad.
Before defeat in the 1980 election forced a different conclusion upon them, the Carter people were of the opinion that the experiment had been a brilliant success. From the 1980 Democratic platform—prepared in cooperation with the staff of the National Security Council—one learned that when the administration came to power in 1977,
relations with the Third World were at their nadir. The United States appeared hostile and indifferent to the developing world’s aspirations for greater justice, respect, and dignity. All this has changed.
Testifying before a House Subcommittee on March 27, 1980 (two days, mind, after the New York primary), Assistant Secretary Maynes spoke even more glowingly of changes that had come over the UN:
. . . the UN has become the crossroad of global diplomacy.
. . . [It] now appears to be less unfriendly and dangerous a place than some have led us to believe. It is also possible that we will find there a greater spirit of cooperation than before—not just in condemning the lawless but also in advancing the rule of law. But these promises may come to naught unless we adopt a more mature stance toward the UN itself.
We must remind ourselves that the United States needs the UN at least as much as it needs us.
One might have thought this assessment would be reflected in votes in the General Assembly or the Security Council. But it was not. Worse, the ideas of the new administration stood in the way of seeing opportunities to be seized and understanding problems to be met.
This was perhaps to be expected. The heavy emphasis on North-South relations, after all, was surely a way of coping with, or at least diverting attention from, the difficult realities of the post-Vietnam world. “American imperialism” had been defeated. Our defeat had been caused, to be sure, by overreaching and after a point it could not perhaps have been avoided. But its consequences, all the same, would have to be lived with, and adjusted to; foremost among them would be a major opening for, and stimulus to, Soviet imperialism. Susan Sontag has recently acknowledged how little she and others in the anti-war movement had understood this equation:
It was not so clear to many of us as we talked of American imperialism how few options many of these countries had except for Soviet imperialism, which was maybe worse. When I was in Cuba and North Vietnam, it was not clear to me then that they would become Soviet satellites, but history has been very cruel and the options available to these countries were fewer than we had hoped. It’s become a lot more complicated.
But the perception of such complexity was beyond the powers of the U.S. Mission to the UN under the Carter administration. Its members could not see the signs of a new phase of Soviet policy: military support for Ethiopia in 1977, coups in both Afghanistan and South Yemen in April 1978, the invasion of Cambodia in December 1978. Unable to explain all this or to fit it to the purposes they had set themselves, American diplomats at the UN grew increasingly silent.
It also emerged that our representatives had little sense of the UN Charter as law that had to be upheld, and to be expounded. A superb opportunity came in the fall of 1977 when the Soviets switched sides in the Horn of Africa. Abandoning Somalia, they actively entered the war in the Ogaden, an ethnically Somali territory, on the side of Ethiopia. Of a sudden the Somalis were pounding on our doors begging for help, pleading for us to understand the “nature of the Soviet threat,” Soviet “neocolonialism,” the “Soviet plot to encircle the Gulf,” the “Soviet contempt for human rights and the rights of small nations.”
Now it happens that in 1975 the principal sponsor of the resolution that declared Zionism to be a form of racism was none other than Somalia (acting in its then capacity as an especially fawning satellite of the Soviets). After the resolution was adopted, I rose in the General Assembly and addressed the following words directly to the Somalis:
Today we have drained the word “racism” of its meaning. Tomorrow terms like “national self-determination” and “national honor” will be perverted in the same way to serve the purposes of conquest and exploitation. And when these claims begin to be made . . . it is the small nations of the world whose integrity will suffer. And how will the small nations of the world defend themselves, on what grounds will others be moved to defend and protect them, when the language of human rights, the only language by which the small can be defended, is no longer believed and no longer has a power of its own?
With the Somalis bleating in terror, pleading for help, did the U.S. Mission to the UN make a single reference to their behavior in 1975, and our response? None. This would have been to engage in “confrontation,” a practice of the discredited past.
The United States helped found the UN, mostly wrote the Charter, has largely paid for the place. U.S. representatives have an obligation to insist that there are standards written into that Charter. Occasionally we would stand up for them. In 1978 William J. vanden Heuvel, the U.S. representative to the UN in Geneva, actually objected to the appointment of a KGB officer as director of personnel for UN activities in that city. (The appointment was a clear violation of article 100 of the Charter.) But there were few such instances. Not even when UNESCO, that embodiment of a decent liberal optimism, set about developing an international regime for state control of the press under the insolent euphemism of “A New World Information Order” did we engage in “confrontation.” No, never. And so it went.
But the crucial turning point came with Camp David, which involved an irony worthy to be called tragic. Perhaps the most impressive achievement of Henry Kissinger as Secretary of State had been to cooperate with Anwar Sadat in maneuvering the Soviets out of Egypt. Together Sadat and Kissinger had had to stand against the efforts of Soviet policy to scuttle the new pro-American alignment of Egypt and the step-by-step peace negotiations, which had scored a major success with the second disengagement agreement of May 1975. The Zionism-is-racism resolution in November of the same year was itself one part of this sabotage campaign.
That the UN and its Third World majority could be manipulated for the purposes of an assault on American policy was much more poorly apprehended after Kissinger’s departure. In fact, in its desire to dissociate itself from the past, the Carter administration set out to bring the Soviets back. The still startling Soviet-American communiqué (issued jointly but plainly Soviet-drafted) of October 1, 1977 proposed to reconvene the Geneva Conference, a meeting under UN auspices at which the two nations would be co-chairmen and to which all interested parties would be invited. To Sadat the meaning of this was clear: a veto in the hands of the radical forces, immediate stalemate, ultimately perhaps his overthrow. And so to avoid going to Geneva, he went to Jerusalem (where, he had every reason to know, a deal was waiting to be struck with the Begin government). This set in motion the events that ended with the Camp David accords of 1978, and the Egyptian-Israeli peace treaty of 1979—Carter’s single greatest achievement, albeit purchased only by a reversal of his original “Geneva” approach and by shifting negotiations over the Middle East away from the UN.
Inevitably forces at the UN would resent this. Thus it is not too much to say that the supreme test of the Carter policy at the United Nations was whether that body would leave him alone to make peace between Israel and its neighbors. Had his diplomats, through their new approach, acquired sufficient influence with the far-flung nations of the Third World to persuade them to stay out of disputes with which most of them had in any event only the remotest connection?
The answer was not long in coming. First, the remaining Arab states, with Iraq only momentarily absent, convened a “confrontation summit” in Damascus to fight the Camp David settlement. Iraq soon was brought in, and before the year was out leaders of all Arab states except Egypt had met in Baghdad to form a “rejection front” against Egypt and Israel. Simultaneously the Soviet Union (returning to the tactics it had used in 1975 to counter its expulsion from Egypt) escalated its campaign to delegitimate Israel by identifying it with the Nazis.
Having been sounded in 1971 with a two-part article in Pravda entitled “Anti-Sovietism is the Profession of Zionists,” this theme was steadily elaborated and diffused. (The original Pravda article, for example, asserted that the massacre at Babi Yar had been a collaboration of Nazis and Zionists.) Once the idea had been set, it proceeded to be popularized on television, in novels, and finally in children’s publications. Thus the October 10, 1980 issue of Pionerskaya Pravda, a tabloid-size weekly for children aged nine to fourteen who belong to the Soviet youth organization, Pioneers:
Zionists try to penetrate all spheres of public life, as well as ideology, science, and trade. Even Levi jeans contribute to their operations: the revenue obtained from the sale of these pants are used by the firm to help the Zionists.
Most of the largest monopolies in the manufacture of arms are controlled by Jewish bankers. Business made on blood brings them enormous profits. Bombs and missiles explode in Lebanon—the bankers Lazars and Leibs are making money. Thugs in Afghanistan torment schoolchildren with gases—the bundles of dollars are multiplying in the safes of the Lehmans and Guggenheims. It is clear that Zionism’s principal enemy—is peace on earth.
. . . The United Nations described Zionism as a form of racism and racial discrimination. More and more people today are beginning to realize that Zionism is present-day fascism.
This propaganda seemed to possess the Soviets internationally as well as at home, and they began to insist that other nations join in the campaign to treat Israel as an outlaw state, indeed a non-state, an entity without the rights of statehood. It began to work. In 1978 Cuba became head of the “nonaligned nations.” A summit meeting of these states in Havana between September 3 and 7, 1979 adopted a resolution that declared:
The heads of state or government reaffirmed that racism, including zionism [sic], racial discrimination, and especially apartheid constituted crimes against humanity and represented violations of the United Nations Charter and of the Universal Declaration of Human Rights [Paragraph 237, Final Declaration of the Conference].
In June 1980 at the ministerial meeting of the Organization of African Unity, held in Freetown, Sierra Leone, Israel was referred to in official documents merely as the “Zionist entity.” And on October 8, 1980 the Soviets signed a Friendship Treaty with Syria of which Article 3 declared:
The High Contracting Parties, guided by their belief in the equality of all peoples and states, regardless of race and religious beliefs, condemn colonialism, racism and zionism [sic] as one of the forms and manifestations of racism, and reaffirm their resolve to wage relentless struggle against them.
This was perhaps the clearest statement to date of the Soviet Union’s opposition to the very existence of the state of Israel, but its essential purpose had been evident for at least a decade.
No less evident was what the United States Mission to the United Nations should have done. The Arab nations were split; the United States was, in effect, allied with the largest of them, Egypt, and in the cause of peace in the Middle East. The Soviet Union, though it might declare that “thugs in Afghanistan” were “tormenting schoolchildren” for the profit of Zionists, had established itself beyond all question as a brutal conqueror of Third World peoples and as an anti-Semitic regime of near demented proportions. The moment to fragment or silence the opposition was at hand.
Faced with this assault on the UN Charter, on peace, on decency—and, not so incidentally, on the President of the United States—what did our people do? They took the other side.
To persons whose deepest conviction was that Third World nations were hostile to the United States because of our own neocolonial behavior; whose strong disposition was to believe that the Soviet Union in almost all instances supported the true liberationist forces in the former colonial world while the United States, on the wrong side of history, backed brutal but doomed dictatorships—the events from 1977 to 1980 could make no sense. It became ever more difficult for such people to understand and support their own government’s policy. For had not the Camp David framework, its peaceful appearances notwithstanding, called forth a more sustained disagreement between the U.S. and the Third World than even the “confrontationist” policies of the past? To understand this one had to entertain the possibility that the opposition we encountered there was not a matter of long-held grievances against our abuses of power. One had to entertain the possibility that there were those whose great fear was that in seeking peace we might succeed.
Confused, and after a point not altogether straightforward, the strategy of our diplomats in New York, backed up in the Department of State, started to undergo a subtle and disastrous transformation. They had begun with the proposition that if the United States put itself on the “right” side of history, we would find the nations of the world, most of which of course were “new,” coming over to our side in turn. Unaccountably, however, they were still not on our side. To the contrary, some were actively seeking to undo the greatest diplomatic achievement the administration had to its credit, and none—not one—was objecting to or trying to impede such efforts. Evidently, then, we must still be on the wrong side. Reasoning thus, our diplomats prepared themselves to vote for the Security Council Resolution of March 1, 1980 and (though this was certainly not their intention) to help bring down the administration they served.
The chain of resolutions passed in condemnation of Israel by the Security Council in 1979-80 forms a complex story. Yet to follow it only a single point needs to be understood. It is that, as a direct result of American policy, the Security Council was allowed to degenerate to the condition of the General Assembly.
Under the UN Charter the General Assembly reaches decisions by majority vote, but its decisions are purely recommendatory (Article 10). By contrast, the Security Council has power. In situations where it determines that there is a “threat to the peace, breach of the peace, or act of aggression,” the Council “shall make recommendations, or decide what measures shall be taken. . . .” These include “such action by air, sea, or land forces as may be necessary. . . .” The Security Council, in a word, may make war. And for that reason the Security Council does not operate by majority vote. Any permanent member may veto any action, simply by voting No. However, in the face of the increasingly vicious Soviet-Arab assaults that followed Camp David, the United States began to abstain. I have represented the United States on the Security Council; I have served as President of the Security Council. I state as a matter of plain and universally understood fact that for the United States to abstain on a Security Council resolution concerning Israel is the equivalent of acquiescing.
The first abstention in the sequence we are now tracing occurred on March 22, 1979 when the Council, in a resolution directed against Israel, established a three-member commission “to examine the situation relating to establishments in the Arab territories occupied since 1967, including Jerusalem.” The phrasing here was ominous: “Arab territories . . . including Jerusalem.” Jerusalem is the capital of Israel. How could its capital be in the territory of others?
Equally ominous, although at this point restrained, was the reaffirmation of earlier Council statements that the Fourth Geneva Convention “is applicable to the Arab territories occupied by Israel since 1967, including Jerusalem” and the strict injunction upon Israel “as the occupying Power, to abide scrupulously by the 1949 Fourth Geneva Convention.” Now, the Fourth Geneva Convention on the Protection of Civilian Persons in Time of War is one of a series of treaties designed to codify the behavior of Nazi Germany and make such behavior criminal under international law. This particular convention applied to the Nazi practice of deporting or murdering vast numbers of persons in Western Poland—as at Auschwitz—and plans for settling the territory with Germans. The assertion that the Geneva Convention also applied to the West Bank played, of course, perfectly into the Soviet propaganda position that “Zionism is present-day fascism.”
Within a year the new commission had submitted two reports. In response to these, on March 1, 1980, a resolution (465) was submitted to the Council that was as viciously anti-Israel—and as destructive of the Camp David accords—as any that had ever been encountered or could readily be devised. Israel was found to be in “flagrant violation of the Fourth Geneva Convention”: the first nation in history to be found guilty of behaving as the government of Nazi Germany had behaved. It was determined
that all measures taken by Israel to change the physical character, demographic composition, institutional structure or status of the Palestinian and other Arab territories occupied since 1967, including Jerusalem, or any part thereof, have no legal validity. . . .
In a word, according to Resolution 465, Israel is an outlaw state, guilty of war crimes. (Not the Vietnamese invaders of Cambodia, or the Soviets in Afghanistan. Israel!) Its alleged capital is not its capital at all—“Jerusalem or any part thereof”—and it is in illegal occupation of territory now for the first time designated “Palestinian.”
Here, then, was the triumph of everything the Soviets and the “Rejectionists” had stood for: the repudiation of everything Sadat, and for that matter Begin and Carter, had sought. Yet the United States voted in favor of this resolution. Shortly thereafter the administration stated that this had been a “mistake.” It was no mistake at all. Resolution 465 reflected the view of the majority of members of the United Nations, and the U.S. Mission there had simply come to accept that view. Their conception of the world, by now shared in Washington, gave them no alternative.
Once the vote was cast there came the shock of recognition, in Washington at least, that this was what that conception led to. But still they clung to it. The White House, sensing the disaster and the dilemma, did not want any testimony given before Congress. The State Department insisted, and so on March 20 the New York Times reported:
Vance Rebuffs Call for Full Disavowal of UN’s Israel Move
Yet it was more than that. Vance would neither disavow the episode nor acknowledge it. He could not bring himself to admit consequences he could not desire of a policy he could not repudiate.
The operative paragraphs of Resolution 465 began by stating that the Security Council:
- Commends the work done by the Commission in preparing the report. . . .;
- Accepts the conclusions and recommendations contained in the above-mentioned report of the Commission;
Yet Vance in his testimony on March 20 suggested that nothing, really, had happened, that voting for the resolution did not imply support for the commission report which had occasioned it.
Senator Paul S. Sarbanes of Maryland went directly to this point:
Senator Sarbanes. Mr. Secretary, the resolution that was passed and for which we voted, accepts the conclusions and recommendations contained in the report of the commission established by Security Council Resolution 446.
Do I take your assertion to be that the word “accepts” there means nothing more than “receives”?
Secretary Vance. You do correctly understand.
Senator Sarbanes. Why wasn’t the word “receives” used? I would understand the word “accepts” to carry with it some element of subscribing to the conclusions in the recommendation.
Secretary Vance. No; it was merely intended to connote receives. Accepts—they hand them to me, they are accepted.
I joined in the questioning:
Senator Moynihan. Very frankly, . . . Mr. Secretary, I am concerned with our reputation for plain dealing.
Did anyone at the U.S. Mission to the United Nations tell you that in a Security Council resolution, the word “accepts” should be read to mean “receives”?
Secretary Vance. Yes: I have been told that.
Senator Moynihan. You have been told that?
Secretary Vance. Yes.
Senator Moynihan. Sir, I once served as U.S. Permanent Representative there. I can tell you that I could not conceive telling a Secretary of State that the word “accepts” should be read to mean as in a letter, “Dear Sir, I have received your letter of” so and so.
The first paragraph, the preambular paragraph of a Security Council resolution starts out always, “Taking note of.” This is the paragraph that says, “We have received.”
“Accepts,” on the other hand, is a slight variation on the word “endorse.” It would be the only way it would have been understood in my time there, sir.
I think you have been misinformed, sir, and I think you have been done a disservice.2 . . .
Something quite extraordinary was happening here. It is of course possible that the members of the U.S. Mission had simply not told the truth to the Secretary of State. (They had evidently been less than candid on some other questions concerning the resolutions—informing him, for example, that references to Jerusalem had been excised from the text when they had not.) But how could a lawyer of Cyrus Vance’s ability believe such an untruth save that at high and low levels alike the men of our government were deceiving themselves? The Carter administration had failed in its objectives at the UN; but to admit that failure was to cast in doubt the view of the world that justified the very existence of the administration. And to protect itself from having to face this failure, the administration had begun to undermine Camp David itself—its one great success.
The March 1 vote, then, was a disaster and should have stimulated a reappraisal of the route by which the administration had traveled to it. Israel had been permanently damaged, and (unless their perceptions are perilously dulled) other allies of the United States permanently warned. Yet no more was said than that it was a mistake, and only a partial mistake at that. The administration never thought its way through the matter. Those publicly most identified with these policies had already begun to leave—first Ambassador Young, then Assistant Secretary Maynes, and finally Secretary Vance himself. But the policies persisted. By the end of the Carter administration the pattern had become all but impossible to overcome. One can measure it this way: on nine substantive votes on the Middle East taken in the Security Council between January 1979 and August 1980 the United States abstained seven times.
Just once we cast a veto—striking down a Tunisian resolution of last April 30 which called for the creation of a Palestinian state. This resolution, one might note, unlike the one we voted for on March 1, did refer to “secure and recognized boundaries”—the language of Resolution 242—but only for the Palestinian state. Not for Israel.
To be sure, we occasionally made our unhappiness known. In August 1980, for example, Secretary of State Edmund S. Muskie went to New York and defended the American approach to a Middle East peace settlement based on the Camp David accords:
Let me . . . repeat our belief that this constant recourse to debates and resolutions that are not germane to the peace process—and even harmful to it—should stop.
A salutary sentiment, but what must the other members have thought? For Secretary Muskie was asking on behalf of the United States for the end of a process that it was perfectly within our power to end. If we believed such resolutions to be harmful to the peace process, we were free to veto them. We were free to deny them the force of law they acquire when they pass. The same point could be made of such American statements on the March 1 resolution as this one by President Carter:
While our opposition to the establishment of the Israeli settlements is long-standing and well known, we made strenuous efforts to eliminate the language with reference to the dismantling of settlements in the resolution.
Yet when the strenuous efforts failed, the U.S. Permanent Representative had only to raise his hand, to vote No, and the resolution would have failed.
Having committed itself, however, to solidarity with the majority at the UN, the Carter administration could not bring itself to exercise the veto. Thus in our flight from “confrontation” did we end not by understanding the perspectives of others, but by adopting them.
In so doing, we have acquiesced in a very great deal.
After March 1 the application of the Fourth Geneva Convention became a routine of Security Council resolutions. It was invoked in Resolutions 468 (May 8, 1980), 469 (May 20, 1980), and 471 (June 5, 1980), all three of which dealt with Israel’s expulsion of two Palestinian mayors in the wake of terrorist attacks on Israeli civilian settlers. Where once there was the routine affirmation of Resolution 242, we now have routine indictments of Israel for Hitlerian crimes.
The U.S. abstained even when Israel’s sovereignty itself was at issue. The last Security Council resolutions in this cycle of attacks on Israel were adopted in the summer of 1980 and dealt specifically with Jerusalem. Resolution 476 of June 30, 1980, warned Israel about its pending legislation on the annexation of East Jerusalem. One might well question the prudence of this Israeli law—and many have done so—but it was something else again to find that in Resolution 476 (as in its successor Resolution 478 of August 20) Israel had become the “occupying power” of its own capital. Both resolutions, in fact, seemed to include the entire city of Jerusalem within this charge. And Resolution 478 went still further: it declared the Basic Law on Jerusalem, by then passed, to be null and void. It declared in effect that Israel was not entitled to fix the location of its own capital city, and called—in a wholly unprecedented step—on member states to withdraw their embassies from this capital (which all did).
An epilogue of sorts took place in the third week of December 1980, as the Carter administration and the 35th General Assembly began winding down. On Monday, December 15, the General Assembly adopted five resolutions on the Middle East more virulent and anti-Semitic than perhaps anything the UN had yet seen. The debate was obscene. Thus the Ambassador of Jordan speaking of the Ambassador of Israel:
The representative of the Zionist entity is evidently incapable of concealing his deep-seated hatred toward the Arab world for having broken loose from the notorious exploitation of its natural resources, long held in bondage and plundered by his own people’s cabal, which controls and manipulates and exploits the rest of humanity by controlling the money and wealth of the world.
The occasion was the receipt of the most recent Report of the Committee on the Exercise of the Inalienable Rights of the Palestinian People, a body established by General Assembly resolution on November 10, 1975, the same day Zionism was declared to be a form of “racism and racial discrimination.”
The first of the resolutions was breathtaking:
Security Council Resolution 242 of 22 November 1967 does not provide an adequate basis for a just solution for the question of Palestine.
One of the more dishonest (and debilitating because profoundly misleading) assertions of the U.S. Mission during the Carter years was that the 1975 Zionism resolution was somehow brought about by the United States. Having resisted, America was judged to have provoked. That resolution passed 72 to 35 with 32 abstentions. This resolution, potentially far more destructive, was adopted 98 to 16 with 32 abstentions.
The United States said nothing. No American delegate went to the podium to offer the smallest demur. Next, a resolution denounced the Camp David accords, declaring that the General Assembly
Expresses its strong opposition to all partial agreements and separate treaties which constitute a flagrant violation of the rights of the Palestinian people, the principles of the Charter, . . . [etc].
The United States said nothing. The last of the resolutions reasserted Israeli violation of the Fourth Geneva Convention. This time the United States, by abstaining, said all there was to say.
There was something of note in the sponsors of the resolutions. The familiar Soviet-leaning or Soviet-dominated nations were present: Afghanistan, Cuba, Lao People’s Democratic Republic. But present also were Nicaragua and Zimbabwe, two Third World nations with which the Carter administration had presumably established relations of friendship and respect.
As for North-South relations, on Wednesday of that week Ambassador McHenry acknowledged that the General Assembly’s Special Session on economic development which had convened in September had come to nothing. Finally, on Friday, December 19, the United States voted for a Security Council resolution condemning Israel for the expulsion of two Arab mayors (an expulsion which followed upon parliamentary debate, trials before an independent judiciary, and the usual processes of a possibly wrongheaded but stubbornly democratic society). Ambassador McHenry explained that the Fourth Geneva Convention “prohibits deportations, whatever the motive of the occupying power.”
In an editorial entitled “Joining the Jackals,” the Washington Post, which had supported the President for reelection, described this American vote against Israel in the Security Council on that Friday as representative of “the essential Carter.” Now the President himself was being held to account.
American failure was total. And it was squalid. These men, in New York and Washington, helped to destroy the President who appointed them, deeply injured the President’s party, hurt the United States, and hurt nations that have stood with the United States in seeking something like peace in the Middle East. They came to office full of themselves and empty of any steady understanding of the world. The world was a more dangerous place when at last they went away.
Those who now take office must deal with the aftermath of this massive failure of policy. The Security Council resolutions are time bombs. Ticking. The case has all but been made that Israel is an outlaw state, and indeed the General Assembly has now called on the Security Council to consider imposing sanctions against it. It will take the toughest-minded diplomacy to dismantle the indictment now in place—thanks to the Carter administration; thanks to those who brought the Democratic party to such confusions. The new administration will have to deal also with the whole question of the Third World. It should be clearer now that hostility toward the West, toward the United States, is abiding and, it may be, burgeoning.
Yet it remains for the United States to evolve a mode of dealing with the UN majority, and this in some measure turns on what kind of countries we think them to be. Irving Kristol has put the matter at its bleakest:
The radical-nationalist ideologies of these nations, so far from being a prelude to the liberal-constitutionalism we revere, are a kind of epilogue. They—or at least their ruling elites—have seen our present and reject it as their future. So long as we refuse to confront this reality, we do not have a clear vision of the world the U.S. inhabits. And so long as there is no such clear vision, there can be no coherent foreign policy.
My own view is more sanguine: consider India, Sri Lanka, Trinidad and Tobago, Jamaica. There are others—many others. Still, with the experience of these four years, we should at least have learned that foreign policy cannot be conducted under the pretense that we have no enemies in the world—or at any rate none whose enmity we have not merited by our own conduct. For it was this idea more than anything else, perhaps, that led the Carter administration into disaster abroad and overwhelming defeat at home.
1 It would be hard to pack more misinformation into a single sentence. It was President Gerald R. Ford, in an address at the opening of the General Assembly in the fall of 1974, who warned the UN against “the tyranny of the majority”; at the close of that session Ambassador John A. Scali repeated the warning. If I ever used the phrase, which I do not recall doing, it was only to cite them. As for “A Very Dangerous Place,” in 1978 I published a memoir about the UN with a passage on the first page: “I had first gone to Washington with John F. Kennedy and then stayed on with Lyndon Johnson. There I learned as an adult what I had known as a child, which is that the world is a dangerous place—and learned also that not everyone knows this.” My editor thought A Dangerous Place would be a good title; but I was not referring to the UN. As seamen are taught of the sea, the UN is not inherently a dangerous element, but is implacably punishing of carelessness.
2 A brief review of UN documents will make clear that “accept” has the everyday meaning of “endorse.” After the first commission report was submitted in July 1979, it became the subject of Security Council Resolution 452, in which the Council voted to “accept” its recommendations. The members of the commission easily understood what this meant. They wrote in their second report (which in turn became the subject of Resolution 465) that they had taken particular steps “bearing in mind that the Security Council, in Resolution 452 (1979), had accepted the recommendations contained in the commission’s first report . . .” (emphasis added).
Good intentions, tragic consequences.
Chicago, Illinois — Andy has little time to chitchat. There are hundreds of hot towels to sort and fold, and when that’s done, there are yet more to wash and dry. The 41-year-old is one of half a dozen laundry-room workers at Misericordia, a community for people with disabilities in the Windy City. He and his colleagues, all of whom are intellectually disabled and reside on the Misericordia “campus,” know that their work has purpose, and they delight in each task and every busy hour.
In addition to his job at the laundry room, Andy holds two others. “For two days I work at Sacred Heart”—a nearby Catholic school—“and at Target. Target is a store, a big super-store. At Sacred Heart, I sweep floors and tables.”
“Ah, so you’re the janitor there?” I follow up.
“No, no! I just clean. I love working there.”
Andy’s packed schedule is typical for the higher-functioning residents at Misericordia, many of whom juggle multiple jobs. Their work at Misericordia helps meet real community needs—laundry, recycling, gardening, cooking, baking, and so on—while preparing residents for the private labor market. Andy has already found competitive employment (at Target), but many others rely on Misericordia’s own programs to stay active and employed.
Yet if progressive lawmakers and minimum-wage crusaders have their way, many of these opportunities would disappear, along with the Depression-era law which makes them possible.
The law, Section 14(c) of the Fair Labor Standards Act, permits employers to pay people with disabilities a specialized wage based on their ability to perform various jobs. It thus encourages the hiring of the disabled while ensuring that they are paid a wage commensurate with their productivity. The law safeguards against abuse by, among other things, requiring employers to regularly review and adjust wages as disabled employees make productivity gains. Many of these employers are nonprofit entities that exist solely to provide meaningful work for the disabled.
Only 20 percent of Americans with disabilities participate in the labor force. The share is even smaller among those with intellectual and developmental disabilities. For this group, work isn’t mainly about money—most of the Misericordia residents are oblivious to how much they get paid—so much as it is about purpose and community. What the disabled seek from work is “the feeling of safety, the opportunity to work alongside friends, and an atmosphere of kindness and understanding,” says Scott Mendel, chairman of Together for Choice, which campaigns for freedom of choice for the disabled and their families. (Mendel’s daughter, who has cerebral palsy, lives and works at Misericordia.)
Abstract principles of economic justice, divorced from economic realities and the lived experience of people with disabilities, are a recipe for disaster in this area. Yet that’s the approach taken by too many progressives these days.
Last month, for example, seven Senate progressives led by Elizabeth Warren of Massachusetts wrote a letter to Labor Secretary Alexander Acosta denouncing Section 14(c) for setting “low expectations for workers with disabilities” and relegating them to “second-class” status. The senators also took issue with so-called sheltered workshops, like those at Misericordia, which are specifically designed to help the disabled find pathways to market employment. Activists at the state level, meanwhile, continue to press for the abolition of such programs, and they have already succeeded in restricting them in a number of jurisdictions, most notably Pennsylvania, where such settings have been all but eliminated.
While there have been a few, notorious cases of 14(c) and sheltered-workshop abuse over the years, existing law provides mechanisms for punishing firms for misconduct. Getting rid of 14(c) and sheltered workshops, however, could potentially leave hundreds of thousands of disabled people unemployed. Activists have yet to explain what it is they expect these newly jobless to do with their time.
Competitive employment simply isn’t an option for many of the most disabled. And even those like Andy, who are employed in the private economy, tend to work at most 20 hours a week at their competitive jobs. What would they do with the rest of their time, if sheltered workshops didn’t exist? Most likely, they would “veg out” in front of a television. Eliminating the 14(c) program and forcing private employers to pay the minimum wage to workers whose productivity falls far short of the norm wouldn’t improve the lot of the disabled; it would leave them jobless.
Economic reality is reality no less for the disabled.
Nor have progressives accounted for the effects on the lives of the disabled in jurisdictions that have restricted sheltered workshops. “None of these states have done an adequate job of ascertaining whether these actions actually enhanced the quality of life for the individuals affected,” a study in the Social Improvement Journal concluded last year. Less time in sheltered workshops, the study found, “was not replaced with a corollary increase in the use of more integrated forms of employment.” Rather, “these individuals were essentially unemployed, engaging in made-up day activities.”
Make-work is not what Andy and his colleagues are up to today at Misericordia. They complete real tasks, which benefit their fellow residents in concrete ways. “This work is training, but it also gives them meaning,” one Misericordia director told me. “It’s not just doing meaningless work, but it’s going toward something. We’re not setting them up to do something that someone else takes apart. This is something that’s needed.” Yet, in the name of economic justice, progressives are on the verge of depriving men and women like Andy of the dignity of work and the freedom of choice that non-disabled Americans take for granted.
Reminding voters what Democratic governance means.
To paraphrase New York Times columnist Ross Douthat (with apologies), the less Republicans do in office, the more popular they generally become. That is, when the GOP exists solely in voters’ minds as a bulwark against cultural and political liberalism, it can cobble together a winning coalition. Likewise, Democrats regain the national trust when they serve only as an obstacle to Republican objectives. It’s when both parties begin to talk about what they want to do with their power that they get into trouble.
That is an over-simplification, but the core thesis is an astute one. In an age of negative partisanship and without an acute foreign or domestic crisis to focus the national mind, it’s not unreasonable to presume that both parties’ chief value is defined in negative terms by the public. Considering how little of the national dialogue has to do with policy these days, general principles and heuristics are probably how most marginal voters navigate the political environment.
Somewhere along the way, though, Democrats managed to convince themselves that they cannot just be the anti-Donald Trump party. Their most influential members have become convinced that the party needs to articulate a positive agenda beyond a set of vague principles. For the moment, Democrats who merely want to present themselves as unobjectionable alternatives to Trumpism without going into much broader detail appear to be losing the argument.
According to a study of campaign-season advertisements released on Friday by the USA Today Network and conducted by Kantar Media’s Campaign Marketing Analysis Group, Democrats are not leaning into their opposition to Trump. While over 44,000 pro-Trump advertisements from Republican candidates have aired on local broadcast networks, only about 20,000 Democratic ads have highlighted a candidate’s anti-Trump bona fides. “Trump has been mentioned in 27 [percent] of Democratic ads for Congress, overwhelmingly in a negative light,” the study revealed. In the same period during the 2014 midterm election cycle, by contrast, 60 percent of Republican advertisements featured President Barack Obama in a negative light.
There are plenty of caveats that should prevent observers from drawing too many broad conclusions about what this means. First, comparing the political environment in 2018 to 2014 is apples and oranges. Recall that 2014 was Barack Obama’s second midterm election, so naturally enthusiasm among the incumbent party’s base to rally to the president’s defense wanes while the “out-party’s” anxiety over the incumbent president grows. If Donald Trump’s job-approval rating is still anemic in September, it is reasonable to expect that Republican candidates will soft-pedal their support for the president just as Democrats did in 2010. Second, Democrats running against Democrats in a Democratic primary race may not feel the need to emphasize their opposition to the president, since that doesn’t create a stark enough contrast with their opponent.
And yet, the net effect of the primary season is the same. Democrats aren’t just informing voters of their opposition to how Trump and the Republican Party have managed the nation’s affairs; they’re describing what they would do differently. By and large, the Democratic Party’s agenda consists of “doubling” spending on social-welfare programs, education, and infrastructure, and promising a series of five-year-plan prestige projects. But Democratic candidates are also leaning heavily into divisive social issues.
The themes that Democratic ads have embraced so far range from support for new gun-control measures (“f*** the NRA,” was one New Mexico candidate’s message), to protecting public funding for Planned Parenthood, to promoting support for same-sex marriage rights, to attacking Sinclair Broadcasting (which happened to own the network on which that particular ad ran). A number of Democratic candidates are running on their support for a single-payer health-care system, including the progressive candidate in Nebraska’s GOP-leaning 2nd Congressional District who narrowly defeated an establishment-backed former House member this week, putting that seat farther out of the reach of Democrats in November.
In the end, messages like these animate the Democratic Party’s progressive base, but they have the potential to alienate swing voters. That may not be enough to overcome the electorate’s tendency to reward the “out-party” in a president’s first midterm election. And yet, the risk Democrats run by being specific about what they actually want to do with renewed political power cannot be dismissed. Democrats in the activist base are convinced that embracing conflict-ridden identity politics is a moral imperative, and the party’s establishmentarian leaders appear to believe that being anti-Trump is not enough to ensure the party’s success in November. All the while, the Democratic Party’s position in the polls continues to deteriorate.
Meritocracy is in the eye of the beholder.
A running theme in Jonah Goldberg’s fantastic new book, Suicide of the West, is the extent to which those who were bequeathed the blessings associated with classically liberal capitalist models of governance are cursed with crippling insecurity. Western economic and political advancement has followed a consistently upward trajectory, albeit in fits and starts. Yet, the chief beneficiaries of this unprecedented prosperity seem unaware of that fact. In boom or bust, the verdict of many in the prosperous West remains the same: the capitalist model is flawed and failing.
Capitalism’s detractors are as likely to denounce the exploitative nature of free markets during a downturn as they are to lament the displacement and disorientation that follows when the economy roars. The bottom line is static; only the emphasis changes. Though this tendency is a bipartisan one, capitalism’s skeptics are still more at home on the left. With the lingering effects of the Great Recession all but behind us, the liberal argument against capitalism’s excesses has shifted from mitigating the effects on low-skilled workers to warnings about the pernicious effects of prosperity.
Matthew Stewart’s expansive piece in The Atlantic this month is a valuable addition to the genre. In it, Stewart attacks the rise of a permanent aristocracy resulting from the plague of “income inequality,” but his argument is not a recitation of the Democratic Party’s 2012 election themes. It isn’t just the mythic “1 percent” (or, in the author’s estimation, the “top 0.1 percent”) but the top 9.9 percent that has not only accrued unearned benefits from capitalist society but has fixed the system to ensure that those benefits are hereditary.
Stewart laments the rise of a new Gilded Age in America, which is anecdotally exemplified by his own comfort and prosperity—a spoil he appears to view as plunder stolen from the blue-collar service providers he regularly patronizes. You see, he is a member of a new aristocracy, which leverages its economic and social capital to wall itself off from the rest of the world and preserve its influence. He and those like him have “mastered the old trick of consolidating wealth and passing privilege along at the expense of other people’s children.” This corruption, and Stewart’s insecurity, is, he contends, a product of consumerism. “The traditional story of economic growth in America has been one of arriving, building, inviting friends, and building some more,” Stewart wrote. “The story we’re writing looks more like one of slamming doors shut behind us and slowly suffocating under a mass of commercial-grade kitchen appliances.”
Though he diverges from the kind of scientistic Marxism reanimated by Thomas Piketty, Stewart nevertheless appeals to some familiar Soviet-style dialectical materialism. “Inequality necessarily entrenches itself through other, nonfinancial, intrinsically invidious forms of wealth and power,” he wrote. “We use these other forms of capital to project our advantages into life itself.” In this way, Stewart can have it all. The privilege enjoyed by the aristocracy is a symptom of Western capitalism’s sickness, but so, too, are the advantages bestowed on the underprivileged. Affirmative action programs in schools, for example, function in part to “indulge rich people in the belief that their college is open to all on the basis of merit.”
It goes on like this for another 13,000 words and, thus, has the strategic advantage of being impervious to a comprehensive rebuttal outside of a book. Stewart does make some valuable observations about entrenched interests, noxious rent-seekers, and the perils of empowering the state to pick economic winners and losers. Where his argument runs aground is his claim that meritocracy in America is an illusion. The notion that capitalism is a brutal zero-sum game, in which true advancement is rendered unattainable by unseen forces, is a foundational plank of the progressive critique of the American ethos. This is not new. Not new at all.
Much of Stewart’s thesis can be found in a 2004 report in The Economist, which alleged that the American upper-middle class has created a set of “sticky” conditions that preserve its status and result in what Teddy Roosevelt warned could become an American version of a “hereditary aristocracy.” In 2013, the American economist Joseph Stiglitz warned that the American dream is dead and that the notion of the United States as a place of opportunity is a myth. “Since capitalism required losers, the myth of the melting pot was necessary to promote the belief in individual mobility through hard work and competition,” read a line from a 1973 edition of a National Council for the Social Studies-issued handbook for teachers. The Southern Poverty Law Center, which for some reason produces a curriculum for teachers, has long recommended that educators advise students that poverty is a result of systemic factors and not individual choices. Even today, a cottage industry has arisen around the notion that Western largess is decadence, that meritocracy is a myth, and that arguments to the contrary are acts of subversion.
The belief that American meritocracy is a myth persists despite wildly dynamic conditions on the ground. As the Brookings Institution noted, 60 percent of employed black women in 1940 worked as household servants, compared with just 2.2 percent today. Between 1940 and 1970, “black men cut the income gap by about a third,” wrote Abigail and Stephan Thernstrom in 1998. The black professional class, ranging from doctors to university lecturers, exploded in the latter half of the 20th century, as did African-American home ownership and life expectancy rates. The African-American story is not unique. The average American income in 1990 was just $23,730 annually. Today, it’s $58,700—a figure that well outpaces inflation and that outstrips most of the developed world. The American middle class is doing just fine, but that experience has not come at the expense of Americans at or near the poverty line. As the economic recovery began to take hold in 2014, poverty rates declined precipitously across the board, though that effect was more keenly felt by minority groups, which recovered at faster rates than their white counterparts.
As National Review’s Max Bloom pointed out last year, 13 of the world’s top 25 universities and 21 of the world’s 50 largest universities are located in America. The United States attracts substantial foreign investment, inflating America’s much-misunderstood trade deficit. The influx of foreign immigrants and legal permanent residents streaming into America looking to take advantage of its meritocratic system rivals or exceeds immigration rates at the turn of the 20th Century. You could be forgiven for concluding that American meritocracy is self-evident to all who have not been informed of the general liberal consensus. Indeed, according to an October 2016 essay in The Atlantic by Victor Tan Chen, the United States so “fetishizes” meritocracy that it has become “exhausting” and ultimately “harmful” to its “egalitarian ideals.”
Stewart is not wrong that there has been a notable decline in economic mobility in this decade. That condition is attributable to many factors, ranging from the collapse of the mortgage market to the erosion of the nuclear family among lower- to middle-class Americans (a charge supported by none-too-conservative venues like the New York Times and the Brookings Institution). But Mr. Stewart will surely rejoice in the discovery that downward economic mobility is alive and well among the upper class. National Review’s Kevin Williamson observed in March of this year that the Forbes billionaires list includes remarkably few heirs to old money. “According to the Bureau of Labor Statistics, inherited wealth accounts for about 15 percent of the assets of the wealthiest Americans,” he wrote. Moreover, that list is not static; it churns, and that churn is reflective of America’s economic dynamism. As one observer noted in 2017, “hedge fund managers have been displaced over the last two years not only by technology billionaires but by a fish stick king, meat processor, vodka distiller, ice tea brewer and hair care products peddler.”
There is plenty to be said in favor of America’s efforts to achieve meritocracy, imperfect as those efforts may be. But so few seem to be touting them, preferring instead to peddle the idea that the ideal of success in America is a hollow simulacrum designed to fool its citizens into toiling toward no discernible end. Stewart’s piece is a fine addition to a saturated marketplace in which consumers are desperate to reward purveyors of bad news. Here’s to his success.
An immigrant from Italy, Morais had taught himself English utilizing the King James Bible. Few Americans spoke in this manner, and at first not even Abraham Lincoln did. Three days later, the president himself reflected before an audience: “How long ago is it?—eighty-odd years—since on the Fourth of July for the first time in the history of the world a nation by its representatives assembled and declared as a self-evident truth that ‘all men are created equal.’” Only several months later, at the dedication of the Gettysburg cemetery, would Lincoln refer to the birth of our nation in Morais’s manner, making “four score and seven years ago” one of the most famous phrases in the English language and thereby endowing his address with a prophetic tenor and scriptural quality.
This has led historians, including Jonathan Sarna and Marc Saperstein, to suggest that Lincoln may have read Morais’s sermon, which had been widely circulated. Whether or not this was so, the Gettysburg address parallels Morais’s remarks in that it, too, joins mourning for the fallen with a recognition of American independence, allowing those who had died to define our appreciation for the day that our “forefathers brought forth a new nation conceived in liberty.” Lincoln’s words stressed that a nation must always link civic celebration of its independence with the lives given on its behalf. Visiting the cemetery at Gettysburg, he argued, requires us to dedicate ourselves to the unfinished work that “they who fought here have thus far so nobly advanced.” He went on: “From these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion,” thereby ensuring that “these dead shall not have died in vain.”
The literary link between Morais’s recalling of Jerusalem and Lincoln’s Gettysburg Address makes it all the more striking that it is the Jews of today’s Judea who make manifest the lessons of Lincoln’s words. Just as the battle of Gettysburg concluded on July 3, Israelis hold their Memorial Day commemorations on the day before their Independence Day celebrations. On the morning of the Fourth of Iyar, a siren sounds throughout the land, with all pausing their everyday activities in reverent memory of those who died. There are few more stunning images of Israel today than those of highways on which thousands of cars grind to a halt, all travelers standing at the roadside, and all heads bowing in commemoration. Throughout the day, cemeteries are visited by the family members of those lost. Only in the evening does the somber Yom Hazikaron give way to the joy of the Fifth of Iyar’s Yom Ha’atzmaut, Independence Day. For anyone who has experienced it, the two days define each other. Those assembled in Israel’s cemeteries facing the unbearable loss of loved ones do so in the knowledge that it is the sacrifice of their beloved family members that makes the next day’s celebration of independence possible. And the celebration of independence is begun with the acknowledgement by millions of citizens that those who lie in those cemeteries, who gave “their last full measure of devotion,” obligate the living to ensure that the dead did not die in vain.
The American version of Memorial Day, like the Gettysburg Address itself, began as a means of decorating and honoring the graves of Civil War dead. It is unconnected to the Fourth of July, which takes place five weeks later. Both holidays are observed by many (though not all) Americans as escapes from work, and too few ponder the link between the sacrifice of American dead and the freedom that we the living enjoy. There is thus no denying that the Israelis’ insistence on linking their Independence Day celebration with their Memorial Day is not only more appropriate; it is more American, a truer fulfillment of Lincoln’s message at Gettysburg.
In studying the Hebrew calendar of 1776, I was struck by the fact that the original Fourth of July, like that of 1863, fell on the 17th of Tammuz. It is, perhaps, another reminder that Gettysburg and America’s birth must always be joined in our minds, and linked in our civic observance. It is, of course, beyond unlikely that Memorial Day will be moved to adjoin the Fourth of July. Yet that should not prevent us from learning from the Israeli example. Imagine if the third of July were dedicated to remembering the battle that concluded on that date. Imagine if “Gettysburg Day” involved a brief moment of commemoration by “us, the living” for those who gave the last full measure of devotion. Imagine if tens—perhaps hundreds—of millions of Americans paused in unison from their leisure activities for a minute or two to reflect on the sacrifice of generations past. Surely our observance of the Independence Day that followed could not fail to be affected; surely the Fourth of July would be marked in a manner more worthy of a great nation.