COMMENTARY recently sent the following inquiry to a number
of American intellectuals of varying political views:
The 80’s are more and more coming to be characterized by journalists, historians, and intellectuals as a costly if not a disastrous decade for America. At home, it is charged, the economic and social policies of the Reagan administration stimulated greed on Wall Street and in the business community, encouraged a general mood of selfishness, and were responsible for an actual increase of poverty (as symbolized in particular by the plight of the homeless). Abroad, the indictment continues, indiscriminate bellicosity on a global scale succeeded mainly in provoking a dangerous arms race, needlessly exacerbating and prolonging conflicts with adversaries, alienating friends, and sowing the seeds of future mistrust of American intentions.
Do you accept these characterizations? If so, how would you account for the worldwide triumph by the end of the decade of the prevailing ideas and policies of the American 80’s? If not, how do you account for the fact that so many people in America itself have been condemning those ideas and policies?
The responses—nineteen in all—follow.
This symposium is sponsored by the Harry Elson COMMENTARY Fund.
Jeane J. Kirkpatrick:
“Will history give Ronald Reagan credit for his accomplishments?” I have been asked many times by relatively nonpolitical American Reagan supporters.
COMMENTARY’s symposium reminds us that there is a more basic question: will the very existence of these accomplishments be acknowledged, or will they be ignored and obscured by an interpretation of events that jumps from Jimmy Carter’s human-rights policy to Carter monitoring elections in Nicaragua in 1990; or from the end of negotiations on SALT II to George Bush signing a new START treaty?
Will the collapse of Communism be ascribed simply to internal Soviet weaknesses—as if U.S. and Western policies played no significant role?
All this could happen. The writing of history is the job of a largely adversary professoriate and a largely adversary media, not much affected by the events of the past year.
It is possible that the collapse of Communism as a world revolutionary movement, its elimination as a framework for the critical analysis of capitalism and as an alternative focus of loyalty, may eventually have an effect on the relationship of the American Left with the United States, but I doubt it. I believe the Left’s attitudes toward America have influenced its attitudes toward Communism, rather than the reverse. These attitudes toward the United States are deeply rooted, and shielded from evidence because they are not based on evidence.
I believe one of the distinctive attributes of the American Left is a broad, though not universal, alienation from the dominant American society and culture which constitutes a cognitive screen through which new information about the world must pass before it is assimilated. This creates the negative evaluation of the 80’s confronted in this symposium.
What kind of alienation am I talking about? An example at hand is in a recent column by William Pfaff on Canada in the International Herald Tribune. He writes:
We shot up our West—murdered the Indians, overgrazed and overplowed the land into dust. Now we are de-treeing it. You sent your Redcoats to police and bring order to the West, singing—we were led to suppose—a neat baritone while going about it. How impressive to have won the West your way.
This excerpt (part of a longer equally prejudiced and prejudicial comparison) has no special significance. But it is typical of the tendency on the Left to sweeping condemnation of U.S. national character and practices and the easy praise of others.
Let me be clear. I understand that not everyone on the American Left is alienated from the United States. Like all broad political movements the American Left contains a number of potentially incompatible tendencies. Traditional Democrats, including many blacks and most of the labor movement, are not alienated from America. There are also some fairly traditional Marxists and quasi-Marxists for whom alienation is not central. But there are also denizens of the counterculture who contribute to the American Left a broad streak of romanticism, anarchism, and alienation largely missing from the serious social-democratic and Communist parties of Europe (though this streak has surfaced in Germany’s “Greens” and on the edges of Britain’s Labor party).
The anti-establishment, anti-bourgeois romantic Left has had in common with democratic-socialist and major Marxist parties the rejection of capitalism. But serious socialists reject capitalism because they think socialism can provide a better life and a higher standard of living for more people, not because they reject bourgeois styles of life or are contemptuous of bourgeois values. Persons whose politics are motivated chiefly by the desire to promote the best life for the most people change their minds and their policies on the basis of new evidence, and in recent years, many democratic-socialist parties and some non-democratic ones have abandoned key socialist policies in favor of market strategies on the basis of accumulating evidence of socialism’s failure and capitalism’s economic success. But the American Left has had special problems assimilating these facts.
I believe the reason is that many have been attracted to socialism less as an economic system that would work than as a system which disdained capitalism and capitalist society.
I believe they disdain capitalism for much the same reason they disdain John Wayne and Ronald Reagan—because it seems so American. Americans like Horatio Alger, the American dream, the homely virtues, the promise of reward for hard work. The heroes of the counterculture also work hard and live by the rules—and lose their farms to greedy bankers who divert depositors’ funds to support Ronald Reagan and his ilk. In this world of the counterculture, the good guys are all anti-establishment. Economics becomes a subfield in an ongoing morality play. In that morality play it matters less that socialists are unproductive than that they are anti-capitalist.
The sociologist Seymour Martin Lipset has recently described the process by which one Western country after another has abandoned key socialist strategies after their economies floundered, and adopted market mechanisms instead. But not American Democrats. In attempting to explain why “the national Democrats have been more disposed to adhere to a redistributionist, progressive tax, anti-business orientation than most social-democratic parties elsewhere,” Lipset returns—as I think we must—to the thesis of Richard Hofstadter and Lionel Trilling that for American intellectuals, “the attachment [to Marxism] has been inspired and sustained more by a desire to be anti-establishment, to be adversarial toward bourgeois and national patriotic values, than by a concern to implement specific political and social programs.” I rebel, therefore I am.
These aspects of the American Left’s relationship to its society are present in speeches at Democratic conventions and in Democratic primaries: the America they depict is one of debt and depression, of decline, homelessness, and hopelessness, of insecurity and intolerance, of neglected children and neglected aged. Ronald Reagan’s America—as they depict it—is a callous, careless society with polluted air and water and drug-ridden, violent cities in which the homeless are left to freeze on winter streets. It is an America of greed and need.
This crime-ridden America projects its lawlessness abroad. It is described as the “world’s biggest deadbeat,” as a great “international lawbreaker.” At the Democratic convention of 1988, Ronald Reagan’s administrations were decried as irresponsible in economic policy, callous in social policy, and trigger-happy in foreign policy.
Do I exaggerate? No. They exaggerate.
The alienation of the American Left is manifest in the eager magnification of American failings and the reluctance to acknowledge American achievements. It is clear in the enthusiasm for such interesting “rights” as flag burning and in the eager condemnation of such “crimes” as John Poindexter’s. This alienation is nowhere more obvious than in the tendency to blame America first—for almost everything almost anywhere. It has cost the national Democrats votes and has been recognized by that party as a problem.
How can people who see the American economy in terms of depression, deficit, and decline accept the news that others find it the most promising system of all?
How can people who believed Ronald Reagan’s description of the Soviet Union as an “Evil Empire” to be dangerous and ludicrous accept the news from people who lived under it that it was a repressive regime in which alien rule was sustained by force?
How can they bear the pain of learning definitively that Eastern-bloc socialism had never won the allegiance of those who lived under it? Only by denial. Suddenly, all the old arguments have been settled: virtually no one chooses to live under Communism. Given a chance, people choose free governments and free markets. The speed with which Poland, Hungary, East Germany, and Czechoslovakia seized the opportunity to dump Communism and Communist rulers, transform their regimes, and loosen ties to the Warsaw Pact resolved once and for all the outstanding questions about whether those people of Eastern Europe had “chosen” socialism.
Worse, from the point of view of the Left, must be the widely acknowledged failure of socialist strategies even in a democratic context and the subsequent abandonment of central planning, redistributionist tax policies, and systems of comprehensive social insurance. “From Scandinavia to the South Pacific, socialist and other left-wing parties have taken the road back to capitalism,” writes Lipset.
This decision to travel “the road back” from socialism to capitalism rests on three conclusions—each of which contradicts central aspects of Marxist and quasi-Marxist perspectives: first, that socialism is an inefficient, unproductive economic system; second, that capitalism is a more effective, more productive system; and third, that it is possible to “go backward” in history. Of all these, I think the news most difficult for the American Left has been capitalism’s success, because accepting it means conceding America’s success. It means conceding that the Left got it all wrong: in fact there was no increase in poverty, no sudden explosion of greed, no adventurism in foreign policy, no increased misery of the poor in the 80’s.
There was a dramatic increase in drug addiction and in certain kinds of social ills for which the counterculture and liberal extremists bear a special responsibility. Homelessness is the very best example of an ill dramatically exacerbated by “liberals” who push out of mental hospitals those too sick to remember to take the medicine that makes them relatively sane, and who keep in the streets in the name of freedom people who have long since become slaves to addiction.
The alienated leftists make it clear day after day that they would rather curse the society than help the people in it cope. Only recently, for example, the New York Times reported new findings concerning the biological basis of addiction which, it said, “will result in an entirely new strategy for fighting drug addiction.” One might have thought this good news, but no. . . .
“I object to seeing the vulnerability in the person rather than in their poverty,” said a University of Colorado sociologist, apparently unmoved by the prospective good the new medicine might do. “Even if the scientific evidence turns out to be strong, people have the right to refuse being tested or given medication,” added the director of an “advocacy group for the mentally ill,” less concerned about the misery than the “right” to be ill.
Our homeless problem does reflect a failure of Ronald Reagan: his failure to solve the problem of “liberal” extremism which is ready to sacrifice persons to abstractions.
It matters that there are influential, highly educated Americans who think that the United States is a cold, careless society in debt and decline. But their views are not a conclusion based on evidence, and thus cannot be disproved by evidence. Too bad, because the evidence is now so abundantly at hand.
Robert B. Reich:
How can the decade just ended exemplify both the failure of American capitalism and, simultaneously, the triumph of American capitalism? There are two plausible explanations. The second is more convincing.
1. The warts-and-all hypothesis. For all its obvious shortcomings, American capitalism is still so superior to Communism that it will dramatically improve the lives of Eastern Europeans, Soviets, and others around the world nonetheless, and they know it.
Yes, certain blemishes were revealed during the last decade. American productivity has risen at a snail’s pace; the nation went from being the largest creditor to being the largest debtor in the world; it has lost market share in many of the technologies of the future; American companies are being bought up by non-Americans; the average real income of Americans has stagnated; American schools are falling apart, aptitude tests are down, and one out of every five young American adults is functionally illiterate; the rates of infant mortality in many of America’s major cities rival those of Third World nations; and a growing portion of the American population—including one out of every four children under the age of six—is impoverished, at least according to the way Americans define poverty.
But these are quibbles. In the world economy, everything is relative. During the same decade, Communist systems fared much worse. In fact, the gap between the two systems—American-style capitalism, and Soviet-style central planning—became so wide that the latter could not endure.
The relative collapse of central planning was due largely to new demands placed on all economic systems by rapidly changing information technologies. Central planning is up to the task of producing masses of identical objects, like steel ingots. Indeed, Soviet steel production increased by a brisk 9 percent a year between 1945 and 1960—enabling Nikita Khrushchev credibly to boast in 1959 that at the rate his economy was growing the Soviets would overtake America within twenty years. But when it comes to smaller runs of more intricate things that must be carefully synchronized with one another and continuously adapted to new opportunities and circumstances, central planning is hopeless. Bureaucratic coordination is impossible; there are too many variables. For designing and producing networks of computers, software, semiconductors, and fiber optics, and utilizing them to accomplish all sorts of tasks, there is no substitute for a decentralized price system that constantly signals how and where new ideas can be put to their best uses.
2. The it’s-not-American-capitalism-they-want-anyway hypothesis. Granted all of the above about the inevitable failure of central planning, there are capitalist alternatives other than the American Way. And the record of the 80’s suggests that the alternatives might be superior.
If you happen to live near Japan, South Korea, Taiwan, or Singapore you are probably considering adopting some variant on their highly successful forms of neo-mercantilist capitalism. To this end, take the following measures: hold down consumption; push exports; subsidize high technologies; pour huge amounts of money into education, training, and research and development; avoid the bureaucratic inefficiencies of central planning but keep some of its scale economies by consolidating your firms into giant industrial groups, each with its own lead bank; keep everyone on his toes by forcing the groups to compete vigorously with one another; but also create among your business, labor, and government elites a tight coordinating system, so that whoever wins from economic change can quickly compensate whoever loses. Most importantly, don’t be overly enamored of the price system. Use it to signal growth opportunities, but then rapidly mobilize the nation’s resources to capture them.
If you live near Western Europe, on the other hand, you’re probably considering adopting social-democratic capitalism. Western European productivity soared throughout the 80’s, and the anticipated gain from economic union in 1992 has summoned a deluge of global capital. How to institute social-democratic capitalism? Easy: rely on free markets, but establish public health insurance and generous unemployment insurance; set up a highly progressive income tax; give organized labor a major say in corporate governance; have your big banks, labor unions, and finance ministries steer macroeconomic policy; so long as a strong social safety net is in place, tolerate unemployment for the sake of rapid productivity growth; pool public and private resources for major research-and-development projects; invest heavily in public works; provide workers with an abundance of apprenticeship, training, and retraining programs, and help them find jobs and relocate themselves. Also, consider sacrificing some political sovereignty in order to create economic opportunities that originate across your borders.
None of this is meant to suggest that the average Pole, Hungarian, Russian, Vietnamese, or Thai is carefully weighing and balancing these possibilities. Their overwhelming desire, presumably, is to get food on their tables and VCR’s on their television sets, and they care little about what the economic system is called or exactly how it is run. The point, rather, is that the alternative to central planning is not necessarily American capitalism. Those who are emerging from Communism are more likely to be attracted by a wartless variety.
If the recent past is any guide, it can be expected that the future ideological battleground will not be between Communism and capitalism, but among these three fundamentally different kinds of capitalism: East Asian neo-mercantilist, European social-democratic, and American. If they stick to their paths and we stick to ours, the differences between the American kind and the others are likely to be far greater by the year 2000 than they are today, and the relative unpopularity of our errant variety, far more apparent.
Free at last! Let us rejoice at the end of the 80’s. Even as I write, a cleansing wave of retrospective rage sweeps through the media at the greed and venality, jingoism and philistinism that emerged in America with the election of Ronald Reagan to the presidency. A wave of euphoria engulfs us as Americans celebrate more inspiring leaders, leftist leaders—Time’s Man of the Decade Mikhail Gorbachev and the media’s man of the millennium Nelson Mandela.
Mandela opens the 90’s by proclaiming his creed of racial harmony from Yankee Stadium to the U.S. Congress, hailing such pacific prophets as Yasir Arafat, Muammar Qaddafi, the PLO, the IRA, and the Puerto Rican redeemers who shot five U.S. Congressmen in 1954. In contrast to the bellicose Reagan, Gorbachev now pursues a policy of peace, raising defense spending by 20 percent as a share of Soviet output (from 10 to 12 percent of GNP).
Free at last! The American intelligentsia now revels in the prospect of higher taxes and higher spending and looks longingly toward environmental catastrophe. Taxes will bring a new spirit of “equal sacrifice” to our “plutocratic” and “undertaxed” nation in which government now spends a “niggling” one-third of GNP (without the help of socialized medicine that balloons the totals of other Western nations). Environmental disasters will put politicians (and intellectuals) back into control once and for all, and end the business bid for primacy in our national life.
Crime rates, too, seem likely to improve soon, as the clogged court system prepares to end “wilding” sprees in Washington and New York through the incarceration of John Poindexter and Michael Milken. Faced with the threat to public health and safety posed by these men, the government refreshingly eschewed the folderol of specific criminal charges and is putting these men away merely as symbols of 80’s excess.
Now in the 90’s, the U.S. can turn away from the “sterile” conflicts of the cold war and focus instead on the real threats to America—Israel and Japan. In a bold response to the threat of high-quality Japanese imports, the U.S. has already moved to dissuade the Japanese from working more than forty hours a week. Perhaps in time the Israelis can be persuaded to stop defending themselves and the West from terrorism so effectively, as this distresses potential American allies such as Iran, Syria, Iraq, Libya, Cuba, and Uganda.
It is easy to make fun of the fashionable attitudes toward the 80’s. But the question remains: why does the memory of the past decade make all these highbrows ache and furrow and the media mouths froth and sneer? The reason is obvious. For most of the American intelligentsia, the 80’s were nearly a total disaster.
The 80’s were the era when the leftist dreams all collapsed in travesty. The mock-heroic youth of the 60’s emerged from schools full of self-importance, sure that the world was evil and owed them a living for their moral superiority, and capable of making no contribution to society except teaching their crippling creeds to future generations. The 80’s taught them the unwanted lesson that Marxist slogans, a sense of grievance, and a rhetoric of rights are economically useless. Disdainful of science, enterprise, and other practical learning, they moved into law, teaching, and politics. Incapable of performing any useful task for a business, they thronged into the environmental movement where they could harass businesses from a moral pinnacle without submitting to the humbling discipline of serving customers. The youth of the 60’s crowded onto the pulpits of the media and the academy in such numbers that a reporter or professor made less money than a garbage man. Seething at such obvious inequities of capitalism, they castigated the prosperous for “greed” and “workaholism.”
The 80’s were the decade when socialism died and left nothing but a bristling carcass of weapons pointed toward the West. It was the decade when tax rates were cut in 55 nations and revenues dropped only in nations that raised their rates. It was the era when capitalism at last demonstrated conclusively its superiority as an economic system. It was the era when U.S. economic growth rates, long lagging behind the rest of the world, surged ahead of Europe, Africa, and Latin America, and nearly caught up with Japan’s for the first time since the early 50’s.
The 80’s also saw the longest peacetime expansion on record, continuing today, with the highest rates of investment in capital equipment and the highest rates of manufacturing productivity growth of any postwar recovery. During the 80’s, the U.S. increased its share of global manufacturing output, global exports, and global GNP. Contrary to thousands of reports, U.S. balance sheets mostly improved, with debt as a share of assets dropping drastically for both businesses and households, as equity and real-estate values rose far more rapidly than indebtedness. Even government debt, as a share of GNP or in relation to real national assets, remained modest by historic and international standards.
After tax-rate reductions in the early 80’s took effect in 1983 and 1984, revenues rose some 9 percent a year in real terms, far faster than during the high-tax 70’s. During the 80’s recovery, industrial output rose nearly 40 percent, personal income 20 percent, and all segments of American society benefited from the creation of 22 million new jobs at rising real wages. Black employment rose 30.3 percent and Hispanic employment nearly 50 percent.
Why then all the gloom-mongering? Chiefly because by the critical indices of marital stability, which is what mostly matters, the 80’s brought tragedy to millions of American families and their children. Contrary to all the claims of the Left, female-headed families are mostly a disaster, incapable of disciplining boys or of escaping poverty. Poverty, however, is not the key problem. Census figures showed that the poorest Americans spent some three times more money than they reported as income. Their problem was not chiefly monetary but a breakdown of the moral codes of civilized society.
Although not economic in nature, this persistent problem of family breakdown cast a pall of failure on all the economic triumphs of the 80’s. Both sought and desired by the Left as a form of liberation, family breakdown accounted for most of the crime, drug, and other “poverty” problems widely blamed on Reagan policy. Without paternal discipline and role models, teenage boys run amok in nearly all societies.
Most of all, family dissolution was crucial to the single greatest propaganda triumph of the Left since the Great Depression: the homeless who haunt all the proud towers of 80’s prosperity. Like the Great Depression, which was caused by massive tax and tariff hikes, the homeless problem is a harvest of failed government policies: favoring divorce and illegitimacy over marriage, deinstitutionalizing the mentally ill, rescinding vagrancy laws, and stifling cheap housing with regulations, codes, and controls. But in the morbid feedback loops of liberalism, the answer is always more government subsidies for homelessness and family breakdown and thus more propaganda for the enemies of America. As the most gullible of Americans, many intellectuals blamed the homeless on the capitalist successes of the 80’s.
A more basic reason so many intellectuals resent the 80’s is that they uphold an adversary culture, and the decade saw the triumph of mainstream values. American intellectuals prefer socialist regimes where intellectuals seem to rule rather than an America where power and status accrue chiefly to the providers of useful goods and services to others. Many intellectuals prefer a so-called “society of poets” where most people live in fear and famine (Nicaragua or Cuba) to a society where capitalists surpass intellectuals in income and status.
This preference of the intellectuals became obvious in the debates over taxes. Objectively the debate should be over and the Left in rout. It was predicted in Congress that the Reagan tax cuts would reduce the share of real taxes paid by the “rich”—those earning over $200,000—to 7.8 percent of total income-tax receipts; the same was predicted for the relatively poor, those earning under $15,000. Instead, by 1987 (the latest available detailed data), payments by the rich had soared to nearly 20 percent of all income-tax revenues and payments by the poor plummeted to 2.8 percent. As supply-siders predicted, the rich did pay a lower share of their incomes than previously, but they earned and declared more income, thus increasing both their absolute and proportionate tax contribution. This healthy and desirable result, which meant that American elites were working harder and investing more effectively, makes the New Republic, Kevin Phillips, and other levelers seethe.
In the long run, it is impossible to get more money out of the rich unless they earn and produce more. More rich people and more production are the essence of economic growth, which is indispensable both to any enduring improvement of the lot of the poor and to any enduring increase in government revenues.
The fact is that people in America with enough gumption to work hard and keep their families intact thrived during the 80’s. The share of all families with real incomes over $50,000 rose from 20 percent to 26 percent and the share earning more than $35,000 rose from 41 percent to 46 percent.
In capitalism, the rich and well-off usually deserve their money, not as a prize for virtue or a celebration of consumption, but for their superiority as investors. Capitalism works because people who demonstrate their ability to create wealth also win the right to reinvest it. The experience of the 80’s around the world showed the effectiveness of this system both in increasing government revenues and expanding opportunity and mobility for the poor.
High tax rates do nothing to stop people from being rich; if you are already rich, you can always manipulate your funds and properties in ways that avoid taxation. British aristocrats and Swedish tycoons have demonstrated this for decades. High tax rates, however, are very effective in preventing poor and middle-class people from getting rich by working harder and more resourcefully than the classes above them. Thus these high rates impoverish entire societies. The 80’s forced the leftist intellectuals to make clear that they prefer equally distributed poverty to the enriching inequalities of capitalism.
The greatest triumph of the 80’s, however, was not fully evident in any economic data. It was the computer revolution, entirely a product of relentless discipline and creative genius in capitalist nations. Computer-industry revenues more than quadrupled, unit sales rose by a factor of hundreds, and computer cost effectiveness rose ten-thousandfold. At the end of the decade, U.S. companies still hold some two-thirds of the world market, and in critical software and leading-edge microchips their market share is above 70 percent and growing. In particular, the U.S. leads in using personal computers, with well over half of the world’s 100 million PC’s located in the U.S. in 1990. The U.S. has three times as much computer power per capita as the Japanese.
This development, which impelled most of the world’s economic growth during the decade, was also disastrous for the Left. The Left has always pinned its hopes on politics. The converging technologies of computers and telecommunications are radically reducing the power of politicians. An ever-increasing share of world wealth assumes the mobile form of information technologies which, unlike the massive industrial systems of the past, are difficult to measure, capture, or tax. The computer age is an age of mind, elusive and hard to control. This ascent of mind is devaluing all the entrenchments of material resources and geography within the ken and command of politicians. As Gorbachev himself has observed, the computer revolution was critical to the crisis of Communism: “We were among the last to understand that in the age of information sciences the most valuable asset is knowledge, springing from human imagination and creativity. We will be paying for our mistake for many years to come.”
Perhaps Gorbachev was the leftist most visibly shaken by the experience of the 80’s. But all suffered a similar discomfiture and few enjoyed it. In the coming decade, politicians will fight back with an array of seductive appeals to intellectuals. Obsolete notions of national autonomy and self-determination—favoring the ruling elites and bureaucracies against the forces of global capitalism—will ring out at the UN, in every trade-policy forum, and on every campus. Using trade-gap data that are meaningless in an information age, politicians will try to divide business with protectionist demagoguery against Japanese rivals. “Balanced trade” will never again be seen on earth, and it can be pursued only by a globally impoverishing expansion of government power over international flows of goods and investments. A key issue for the future of America is which side the intellectuals will now take: the side of materialist envy and political prejudice, or the side of opportunity and intellect.
In the conflict between matter and mind, the euphoria of the 90’s poses a real threat to the U.S. no less serious than cold-war Communism. Abetted by nihilist and terrorist forces lashing out against the wealth they can neither earn nor produce, and exploiting the vast destructive powers of modern weaponry, the enemies of civilization are more dangerous than ever. Of all the Western nations, only Israel fully comprehends this menace and is appropriately resolute and resourceful in resisting it. Thus the new socialist assault, with its nihilist edge, will often target Israel as the world’s leading obstacle to “peace.” Israel’s ability to resist these pressures ironically will depend on its own capacity to transcend the socialist forces within its own borders that have turned what should be one of the world’s most prosperous nations into a dependent of American charity.
In the 90’s, intellectuals must understand that the reason to defend free societies is not their legal rectitude in international disputes or any obsolete shibboleth of national self-determination in little despotic satrapies around the globe. The reason to defend free nations is their role as the capitalist bearers of civilization and human progress, freedom and technology, in a world that will sink into unspeakable horror without their leadership.
The characterization of American failures in the 80’s offered by the editors of COMMENTARY seems to me excessively mild. For it’s not just that Wall Street has gone piratical and that buccaneers have looted the savings-and-loan (S & L) banks and that the 80’s turn out to have been, as ought to be noted, probably the most corrupt business decade in American history. Government, too, has descended further into corruption than at any time in living memory. Nor is it just that, in regard to foreign affairs, American bellicosity has had a bad effect on national interests. The election of the Reagan administration at the start of the decade produced horrific massacres in El Salvador, not without CIA complicity, and led to the first death squads in Honduras. In Nicaragua, tens of thousands of people have been killed as a result of our policies, and for no purpose that would not better have been served by following the peaceful advice of that sagacious social democrat, Oscar Arias of Costa Rica—whose own administration was, by the way, shamelessly undercut and subverted by White House shenanigans during the period in question. So there has been a mortal cost as well as costs that are moral and monetary.
There has been a cost to democracy in Washington itself, since the 80’s were marked by still another major Republican political scandal (after McCarthyism and Watergate), showing once again how, in America, the greatest threat to liberty comes from fanaticism of the Right. On a national scale, the 80’s have seen such a cheapening of electoral debate that Eastern European democrats, when they picture the kind of political life they would like to encourage, make a point of citing the recent American elections as the sort of failing that ought to be avoided. And all of these developments have come on top of a shift of wealth so vast and undemocratic that, as the editors correctly remind us through a quiet reference to homelessness, walking to work in a city like New York means stepping over the diseased addicted bodies of the most abject persons that anyone can imagine.
How to account, then, for “the worldwide triumph . . . of the prevailing ideas and policies of the American 80’s”? The question is delusionary. It is like the stuttering airplane passenger in Salman Rushdie’s Satanic Verses who flaps his arms every time his plane rushes down the runway, and after take-off turns to his seat-mate and says: “Wowoworks every time.” Yes, the federal budget grew amid a lot of rhetoric affirming otherwise; social welfare was diminished; law, conscience, justice, and Congress were flouted; wars were fanned. And sure enough, some happy events soon occurred, namely, the world democracy movement, the collapse of Communism plus that of two or three fascistic dictatorships on the Right. But the prior existence of A does not indicate a causal relation to B.
In one respect, though, a broad if not exactly worldwide “triumph” of the “prevailing ideas and policies of the American 80’s” may indeed have occurred. There has been a public-relations triumph. In many parts of the world, something called “Reaganism” has come to be admired. This can be explained by two factors. The United States still plays an emancipatory role in many parts of the world and especially in Europe—the role that was described most powerfully by Henry Adams a century ago. Morally you might suppose that the American ideal would long ago have drowned in seas of blood and injustice. But the ideal is unsinkable; not even the worst and most reactionary of decades can overwhelm it. The thing bobs back into view, and people look at America from far distances and keep noticing that political liberty and individual freedom and the possibility of self-improvement are not the ideological mirages that sometimes they are said to be—no matter how grim may be America’s other problems. And since those faraway people cannot easily distinguish between the ancient American ideal and a given ephemeral administration or era, they are likely to assign to that ideal whatever name appears most current. If some recent elections had gone a little differently, that name might well have been “Carterism,” “Mondaleism,” or “Dukakisism.”
Instead “Reaganism” enjoys a wide vogue. (No one will ever speak of “Bushism”; a man requires character to become a doctrine.) But apart from an identification with America, what does Reaganism (“the prevailing ideas and policies of the American 80’s”) mean? In this country, Reaganism means, I would say: laissez-faire and supply-side economics, diminished social welfare, diminished concern for the rights of minorities, militarism, anti-Communist bombast, religion (especially fundamentalism), cultural conservatism, and a populist tolerance of such tendencies as the anti-Darwin movement in education. Reaganism’s meaning abroad is not the same. An Eastern European who uses the term is likely to have in mind “liberalism” in the European sense (market economics, secularism, enlightenment)—combined with sympathy for American styles that belong more to the Left than to the Right. An Eastern European “Reaganite” might well favor unregulated market economics, bebop, rock-and-roll, and “We Shall Overcome” as an all-purpose anthem. The students of Prague, who tend to admire Ronald Reagan personally, sang precisely that non-Reaganite song in Czech and in English as they marched through the streets on the day that the 1989 revolution broke out.
If we are to bandy about the relative triumphs of American decades, I would argue, in fact, for the radical 60’s, whose global successes would make a fine theme for a further symposium. The role of the American (and Anglo-American) 60’s in spreading democratic ideas turns out to have been enormous. Anyone can verify that by walking past the scruffy idealistic guitar-twangers of Prague’s Wenceslas Square. The enduring popularity of certain aspects of 60’s radicalism, stripped of some political and chemical errors of that era, may prove to be one of the main ways in which America’s emancipatory role continues to be performed. Imagine! (Lyrics from “Imagine,” the radical utopian song by John Lennon, are anti-Communist graffiti in Prague. And why not?) Of course, in the matter of decades, the principal credit for the miraculous events of 1989 still belongs to the 1770’s and 1780’s. History moves a lot slower than we have sometimes thought it would.
The editors of COMMENTARY have addressed the right issues, but they have put these issues cart foremost. They speak of the failings of the American 80’s as a debatable point (“Do you accept these characterizations?”). But they speak of the “worldwide triumph” of the “prevailing ideas and policies” of that decade as a simple reality. This reversing of the debatable and the factual is an example, I think, of the American habit of dousing ourselves with celebratory champagne in order not to notice the evidence of capitalist horrors that sprawls nightly across our own doorsteps. Even so, the editors’ question has the virtue of at least noticing, if only to dismiss its significance, a possible contrast between American failings and successes. That contrast has always been the great tragedy of our national life. If today there has been in some respect an American triumph (no matter whether we ascribe that success to the American 80’s or to other, grander, more radical factors), the contrast, therefore the tragedy, is only the greater. What are we going to do about that?
Did the Reagan administration sponsor a culture of greed and selfishness? Is there a culture of greed and selfishness? Oddly, the answer to the first question is “yes, to some extent,” even though the answer to the second is probably “not really.”
Kevin Phillips has handed up the most unrelenting indictment of the Reagan administration in his recent book, The Politics of Rich and Poor. Much as I disagree with his economic analysis, I think Phillips is right about the political ambiance. The Reagan administration often seemed grandly indifferent to the ethics of limited government.
My own exposure to this indifference involved the policy debates about poverty, welfare, and the underclass. There were bright spots. The second echelon of officials included some people who thought deeply and cared passionately about these problems. Sometimes they managed to get the right paragraph into a presidential speech; sometimes they managed to get the right provision into a legislative proposal. At the cabinet level, William J. Bennett was a splendid exception, doing for the educational debate what needed doing for the welfare debate. But the dominant impression conveyed by the Reagan White House was that an expanding economy would minimize poverty, the social safety net would handle the residual problems, and that the best way to cut the domestic budget was by getting rid of all that fraud and waste in programs like Medicaid and food stamps. When it came right down to it, one suspected, the top people in the administration didn’t lie awake nights worrying about the poor.
This would not have been so bad—people who lie awake nights worrying about the poor have done much mischief—except that the indifference coexisted with the Reagan administration’s failure to confront the central standard of fairness involved in reducing the size of government: what’s sauce for the goose also must be sauce for the gander.
Take, for example, the administration’s attitude toward cutting the budget. Large sums of money can be cut from the domestic side of the budget if one is willing to yank the middle class and the corporations from their many federal teats, politics be damned. In Reagan’s first term, several idealistic officials in the administration were eager to do just that. They were largely rebuffed, however, either by more senior officials in the administration or by Congress. They needed leadership from the top. Only Ronald Reagan had the stature and the podium to articulate a coherent program for cutting domestic spending in which everyone would lose some of his special privileges, but the payoff would be balanced budgets, less intrusive government, and simpler, fairer rules for everyone. He didn’t give that leadership. The implied message was that the Reagan administration was comfortable with government benefits if they went to the right people.
The S & L fiasco is another example of the way in which the administration exhibited the ethical carelessness that lends credibility to the greed-and-selfishness indictment. The blame for the fiasco is complicated and diffuse, much of it antedating 1981. But a bedrock principle of Reaganism (as I understand Reaganism) is that people must bear the consequences of their actions. Giving S & L lenders license to make risky loans without incurring commensurate risk themselves should have caused instant consternation in every administration official who understood the specifics of the deregulation. Apparently it didn’t. Whatever the excuses, the result was that the Reagan administration permitted precisely the same error—government-created incentives for irresponsible behavior—concerning affluent S & L bankers that it regarded with such horror when it saw criminals let off without punishment, teenage girls rewarded for having babies, or students given diplomas without having to study.
So I will not defend the Reagan administration against accusations that it encouraged an “I’m all right, Jack” mentality that in turn encouraged greed and selfishness. That said, it still remains unclear to me how much real effect this mentality had on the nation, and how much of the hoopla about the greed and selfishness of the 80’s described a reality. In this brief COMMENTARY, I will limit myself to two observations.
The first is that you can’t have sustained, sizable economic growth without producing something very like the phenomena that have gotten journalists so upset about greed and selfishness. Economic growth means that a lot of people get rich. When people get rich, some of them are going to buy fancy new houses and splashy jewelry and big boats. A look back in American history reminds one of the late 19th-century Gilded Age, the Roaring 20’s, or the post-World War II prosperity that prompted Vance Packard to write The Status Seekers. Kevin Phillips refers to this conspicuous consumption as the “capitalist blowout” and sees it as something that could have been prevented. I’m not convinced. If you want exuberant economic growth, you’re also going to get nouveaux riches and buccaneers and con artists and splendiferous camp followers as part of the package. It is one thing to argue that this will eventually produce a political counter-swing (which might well be true), and another to say that it was a product of public policy.
Preventing such effects in the 80’s would have been especially difficult because of the increasing size of the economic base. Measured in proportional terms, growth in Gross National Product from the end of the recession in 1983 to Reagan’s last year in 1988 has been matched by many other episodes in American history. But as the base gets larger, even modest percentage increases in GNP can produce unprecedented raw quantities of new wealth. The increase in just the five years from 1983 to 1988 was larger than the increase during the entire three decades from 1870 to 1900 that constituted the Gilded Age. It was larger than the increase from 1900 through the next three decades, to the peak of the boom in 1929. And by “larger,” I refer to per-capita changes in real GNP, controlling for both inflation and population size.
The 80’s were also exceptional in terms of the size of the newly affluent population. This is no place for a lengthy analysis of income distribution, but this fundamental fact about the Reagan years has been obscured by the rhetoric about the rich getting richer at the expense of the poor: using the most basic government data on distribution of wealth, money income of households, constant 1988 dollars, and the Bureau of the Census’s breakdown into nine income brackets, the proportion of households in every income bracket up to $35,000 decreased during the Reagan years. The proportion in the low-income brackets (up to $14,999) decreased from 29.4 to 27.3 percent of households from 1980 to 1988, while the proportion in the working- to middle-income brackets (from $15,000 to $34,999) decreased from 37.5 to 34.6 percent.
This simple set of facts leaves much to be debated. It may be argued that the proportions of people in the lowest brackets decreased trivially, for example, or that the improvements reflect an increase in two-income families, not increases in wages. On the other side, it may be argued that household-income figures, which include welfare households, understate the true improvement among families who remained in the labor market. But it cannot be argued that during the Reagan years the poor got poorer or even that working-class households got poorer. They didn’t. They got somewhat less poor, though not dramatically so. Meanwhile, the big change occurred in the income brackets from $50,000 on up, which increased from 15.8 percent of households to 20.8 percent. This translates into a remarkable 6.3 million new households who moved above the $50,000 level during the Reagan years. It seems reasonable to infer that this newly prosperous group contributed to an atmosphere in which people seemed to be unusually preoccupied with material possessions. Affluent readers of this article may remind themselves of their spending behavior when they first found themselves in possession of discretionary income, even if that happy event occurred before the selfish and greedy 80’s.
In sum: the Reagan boom involved the creation of widespread wealth and sometimes great wealth, accompanied by the predictable side-effects. Radically redistributionist tax policies could have prevented these effects, but also would surely have killed the economic growth.
My second observation is that the behaviors that get in the newspapers don’t necessarily have much to do with the day-to-day life of most people. Even given the large numbers of people who became more prosperous during the 80’s, and granting that some of them behaved as the tabloids breathlessly describe, it is not obvious that many people behaved in ways that fit the stereotype. My question (I don’t have the answer) is this: what proportion of the American population experienced at first hand an increase in greed and selfishness, either in themselves or their neighbors, in the 80’s? Maybe my experience is atypical, but somehow I missed out. For me, the 80’s were the polar opposite of the image of the decade, a time for sinking roots and thinking more, not less, about obligations to family and community. As I mentally run down my list of friends, I can think of one couple who became noticeably more prosperous during the 80’s, but they, along with all the others, seem to have gotten less acquisitive, less self-absorbed, more concerned with others during the supposedly greedy and selfish decade.
There is a catch in this exercise, of course: I was in my forties for most of the decade, and so were most of my friends. I suggest that one final reason the decade was so commonly characterized as acquisitive and self-absorbed is that the Baby Boomers weren’t in their forties like me, but in their thirties. It is not a novel argument, but it bears repeating: ever since the 60’s, the Baby Boomers have defined the decade’s Zeitgeist—rebellion in the 60’s (the modal Baby Boomers were in their teens), narcissism in the 70’s (they were in their twenties), and acquisitiveness in the 80’s (they were in their thirties). I’m at a loss to explain precisely why this bulge in the population distribution so decisively affects the popular culture—they comprise only a modest proportion of the total population—but they have had that effect for three decades and there is no reason to think it will change during the next three.
Even as I write, the modal Baby Boomer is approaching his fortieth birthday and the characteristic attitudes of middle age. One may, therefore, confidently predict that no matter what happens in the White House, the 90’s will be a decade that celebrates family and community and traditional values. The Zeitgeist will change, and commentators on the culture will need to have explanations. Presidents are convenient for this purpose. (Could any presidential couple be a more convenient explanation for a return to family values than the quintessentially Dad-like George Bush and Mom-like Barbara Bush?) But that will not tell us how much American life has really changed, or how much the Bush administration has had to do with it—just as it is not clear how much American life changed during the 80’s, or how much the Beverly Hills Reagans had to do with it.
The collapse of Communism represents a victory, of sorts, for the West—but not much of a victory for the United States and certainly not for the “prevailing ideas and policies of the American 80’s.” The real winners, as everybody knows, are the West Germans and the Japanese, who owe their power and prosperity to a combination of circumstances having nothing to do with the “policies of the American 80’s.” While the United States and the Soviet Union were exhausting themselves in the production of armaments, West Germany and Japan, unburdened by competition in the arms race, rebuilt their shattered economies and cultivated the arts of peace. Unlike the United States, those countries learned something from the devastating experiences of the 30’s and 40’s. Whereas Americans learned only the dangers of “appeasement,” the West Germans and Japanese made a serious attempt to come to terms with their recent past. They gave up atavistic dreams of imperial conquest and racial destiny. Having been drawn more reluctantly into the modern world than other industrial nations, they embraced modernity, after World War II, with the enthusiasm of converts. They invested heavily in the modernization of infrastructure, replaced technologies geared to mass production with technologies geared to production for specialized markets, perfected first-rate systems of education that assured a trained and disciplined work force, built up an efficient civil service, and taxed their citizens in order to maintain elaborate health and welfare services. They drew on traditions of solidarity and collective discipline, inherited from the pre-modern past, and put them to work in the service of enlightened, secular objectives—peace, prosperity, health, social welfare.
These are not very exalted goals, and the success of West Germany and Japan illustrates not only the benefits of successful modernization but the price paid in the form of spiritual shallowness and a shallow concept of democracy—one that stresses the equitable distribution of material comforts rather than the character-forming effects of civic participation. But the point is that the “ideas and policies of the American 80’s” do not explain the success of West Germany and Japan or the revolutions in Eastern Europe. It is presumably the West German model, not the American or Soviet model, that attracts the populations of Eastern Europe, together with the prospect of membership in a European community that will give economically backward nations access to the benefits of development. Many people in Eastern Europe still think of the United States, no doubt, as a land of fabulous wealth; but further exposure will correct that impression and teach them to regard America as an example, if anything, of social and economic stagnation. A country that lacks many of the social services and amenities taken for granted in other advanced-industrial countries—national health insurance, good schools, efficient systems of public transportation, civic order—is unlikely to serve as a model or inspiration for Eastern Europe.
None of this is meant to deny that the collapse of the Soviet empire adds up to a diplomatic, military, and ideological victory for the West and even, in a limited sense, for the United States. Many years ago, George Kennan held out the hope that containment would eventually lead to the “breakup or mellowing of Soviet power.” He turns out to have been right: the United States forced the Soviet Union into an arms race that has wrecked its economy, perpetuated a corrupt and inefficient bureaucracy, and retarded the development of democratic institutions. Gorbachev’s resignation from the cold war is an admission of defeat, no doubt about it. It is above all an ideological defeat. Socialism can no longer claim to be the wave of the future. The hope that sustained a generation of leftists—that “actually existing socialism” would evolve toward “socialism with a human face,” while social-democratic regimes in the West would move toward a more thoroughgoing form of socialism—has suffered a blow from which it is not likely to recover. The removal of the socialist alternative from political debate (at least in its classic form) will have profound effects both in the East and in the West.
Still, it is too much to claim a victory for the type of free-market ideology promoted by Thatcher and Reagan. Britain and the United States are declining powers, and the privatization of public services has not only failed to arrest their decline but contributed to it. The ideological victory, if there is one, belongs to proponents of the “mixed economy” we used to hear so much about in the 50’s and 60’s. That we don’t hear much about the “mixed economy” today does not mean that it has disappeared, only that American liberals have lost control of the political agenda. Elsewhere in the industrial world, it remains the dominant polity—uninspiring though it may be.
The victory of the West, then, is not a victory for the free-market ideology that seeks to privatize everything in sight. Nor is it a vindication of American diplomacy, except in a highly provisional sense. We are witnessing the mellowing and possibly the breakup of Soviet power, but these developments may lead to destabilizing effects we will come to regret. In any case, we have already paid a heavy price for anything we can be said to have won. The containment policy, as critics pointed out from the beginning, required the creation of a global network of client states; and the need to sustain its credibility as the protector of those clients forced the United States into police actions, and finally into the disastrous war in Vietnam, that were inconsistent with its national interests. The cold war, moreover, caused serious distortions in the American economy. Military spending created jobs and promoted economic growth in the short run, but in the long run it deflected investment from plant expansion and modernization, making the United States weak in exports and more and more vulnerable to imports.
Besides undermining America’s position in world markets, the cold war brought about an institutional interpenetration of the corporations, government, and universities. It thus contributed to the centralization of economic and political power and widened the gap between the affluent, heavily subsidized sector of the American economy, which rests on technologies originally developed in connection with national defense, and the impoverished, technologically backward sector for which public subsidies are unavailable. The flexible technologies required by advanced societies—technologies compatible with decentralized control over production, a high degree of workers’ skill and initiative, and efficient use of energy (as opposed to heavy reliance on nonrenewable resources)—are not the kind of technologies that were encouraged by an economy based on the production of weapons. At a time when our competitors were beginning to perfect more sophisticated technologies, the defense economy forced the United States to adopt technologies suited only for mass production—technologies that enforced a rigid separation between the design and the execution of work, eradicated every vestige of workers’ control, and left the work force demoralized and apathetic.
Preoccupation with external affairs, during the long years of the cold war, led to the neglect of domestic reforms, even of basic services. Medical services, public health, and education are notoriously backward in this country. In the matter of education, the most dramatic instance of American decline, obsession with the Soviet threat led to ill-conceived reforms designed to educate a scientific elite, at the expense of an intelligent, enterprising, and politically knowledgeable work force. Here again, the Soviet threat proved to be far less serious than the economic threat from nations that understood the importance of education and rapidly outstripped both superpowers in the quality of their schools.
To list all the bad effects of the cold war would take more space than I have here. The development of a secret police, the erosion of civil liberties, the stifling of political debate in the interest of bipartisan consensus, the concentration of decision-making in the executive branch, the sheer growth of the executive and its declining accountability, the secrecy surrounding executive actions, the lying that has come to be accepted as routine in American politics—all these things derive either directly or indirectly from the cold war. Their worst effect has been to undermine confidence in government, to weaken our public culture, and to destroy the delicate fabric of trust on which civic life depends. If the West can be said to have won the cold war, the United States can hardly be said to have shared in the fruits of that victory. It would be closer to the truth to say that in the course of their long rivalry, the Soviet Union and the United States have destroyed each other as major powers, just as many critics of the cold war predicted. No doubt we have a long way to go before we reach Soviet levels of economic inefficiency, political apathy and cynicism, and bureaucratic intrigue; but that is pretty clearly the road we are traveling. We can rejoice that Eastern Europe has been delivered from Soviet control. For ourselves, however, there is little to celebrate.
Ronald Reagan is the first Republican I ever voted for for President, and I did it not once but twice. In 1976 I voted for Jimmy Carter, the man who should go down in history as the first President to use a blow dryer in the White House, but I couldn’t bring myself to do it a second time. A photograph of Carter, collapsed while jogging, weak in the knees, held up by secret-service men, provided a symbolism I felt I could not ignore. A collapsed jogger with a hot-comb hairdo and a big smile—that seemed to me precisely the condition of the United States under the man who preferred to be called Jimmy.
I am not at all sorry about having voted for Ronald Reagan, except perhaps only for the fact that having done so prevented me from scoring some fairly easy jokes off him. He made, let us face it, an easy target. A former and not particularly adept actor, a man entirely without intellectual interests or even pretensions (though consistently grammatical and well-spoken), married to a woman who seemed a damned icy little proposition—plenty of room here for the expression of casual contempt. Acquaintances, not under the burden of having voted for Reagan, were regularly telling me how “stupid” he was. “What,” I used to say in defense of my man, “has intelligence got to do with running the United States?”
That, I now realize, was only a half-joke. I don’t know how intelligent Ronald Reagan may be, though in ways that matter he is, in my view, more intelligent than either his predecessor (a graduate of Annapolis) or the man who succeeded him (Yale, Phi Beta Kappa). More important than intelligence, Reagan had, I think, correct beliefs. He believed (rightly) that the United States was slipping badly in almost every regard; he believed (again rightly) that Americans had lost confidence in themselves as a nation; he believed (bingo!) that Communism was no good thing and therefore was not to be encouraged by weakness or accorded undue respect; and he believed (more dubiously) that a common-sense approach could solve most contemporary problems. Ronald Reagan not only believed these things but he believed them deeply and absolutely, and, given the state of demoralization of the nation at large when he came into office, it was a good and useful thing that he did so believe them.
Most journalists, however, saw it quite differently. The Worst Years of Our Lives is how Barbara Ehrenreich, a heavy contributor to Mother Jones and Ms., titles her recent book of essays on the Reagan years; “Irreverent Notes from a Decade of Greed” runs her subtitle. Miss Ehrenreich bangs down on the melodeon of clichés with thick fingers. She blames Ronald Reagan for everything short of earthquakes that she doesn’t like in the United States. In her introduction, she tells us that her father, who has Alzheimer’s disease, when asked, as a test of mental competency, who was President of the country, would snort back, “Reagan, that dumb son of a bitch.” The little vegetables in nouvelle cuisine seem to Miss Ehrenreich the last word in decadence. Yuppie is a word that, in her moral lexicon, is right up there with fascist. With the thick-fingered method, you never have to worry about missing a note; this is accomplished by hitting all the keys at once:
Greed, the ancient lubricant of commerce, was declared a wholesome stimulant [during the Reagan years]. Nancy Reagan observed the deep recession of ’82 and ’83 by redecorating the White House, and continued with this Marie Antoinette theme while advising the underprivileged, the alienated, and the addicted to “say no.” Young people, mindful of their elders’ Wall Street capers, abandoned the study of useful things for finance banking and other occupations derived, ultimately, from three-card Monte. While the poor donned plastic outer-ware and cardboard coverings, the affluent ran nearly naked through the streets, working off power meals of goat cheese, walnut oil, and crème fraîche.
Bet you didn’t know that you had lived through such hellish times. But, then, at Ms. and Mother Jones the tendency is not to accentuate the positive.
Barbara Ehrenreich’s clichés are not restricted to her or to her favorite publications. When it comes to writing about the Reagan years, most journalists sidle up to the melodeon. Here, for a notable example, is the New York Times columnist Russell Baker, bemoaning the increased expense of life on Nantucket, where he has had a summer home for more than twenty years: “During the Reagan years the island experienced an onset of decamillionaires; that is, people so rich they could spend a million dollars decorating a house that in 1965 might have sold for $25,000.” A tricky business complaining about the wealth of others, especially when one has one of the better jobs in journalism and a few best-sellers under one’s own belt. But let that pass. The larger point is that the chief clichés about the Reagan years, which are now beginning to harden, have to do with unexampled greed: decamillionaires jogging off their nouvelle cuisine while chewing up the countryside.
Or consider the Yuppie, the principal cliché villain of the 80’s (though one heard a bit less about him as the decade drew to its close). “It’s one of those goddamn Yuppie restaurants,” was a sentence I seem often to have heard during the 80’s. “The Yuppies have moved in and wrecked the neighborhood,” was another. So widespread was the contempt connected with the term that it wasn’t easy to find anyone ready to own up to being one, though they were usually readily enough spotted: a Yuppie was generally the other fellow. And yet what was so heinous about being a Yuppie? If the Yuppies can be said truly to have existed, then they were young men and women who worked twelve- and fourteen-hour days in order to indulge their penchant for designer clothes, BMW’s and Saab’s, gorgeous grub, and minor appliances. The world, surely, has known worse villains, yet the Yuppie, as a social type, a real Reagan phenomenon, could certainly get people worked up into a frenzy. Perhaps it was because they weren’t as spiritual as you and I—or at least I.
Even though I twice voted for Ronald Reagan, I am not one of those decamillionaires, or even close to a unimillionaire. I drove into the 80’s in a 1978 Chevy Malibu and drove out of the decade in a 1988 Oldsmobile Cutlass Ciera. Not much evidence of taking advantage of such things as oil-depletion allowances and leveraged buyouts here. During these years, my taxes seemed regularly to go up, my expenses never down. Two children in private universities kept me typing much faster than I thought I could. At least I never felt the need to jog, having spent so much of the decade, financially, running in place. No doubt I was not well-positioned to take advantage of the famous greed of the 80’s. But then, those of us mired in the middle class are used to finding ourselves just there, smack in the middle—and where better to feel the squeeze?
Yet I am far from certain that I, and people in similar conditions, would have fared better under a Democratic presidency. Certainly we didn’t under the effable Jimmy Carter. I have the most limited economic knowledge, but I am, at least in a general way, for reducing government spending, not out of any high principle but mainly because, in my experience, it is almost invariably inefficient spending. But whenever the Reagan administration attempted to reduce federal spending, out came the journalistic clichés. Two of the great cliché phrases invoked endlessly during the Reagan years were “savage cuts,” which the administration was supposed to be attempting in all realms but that of defense—and “savage cuts,” as everyone surely knows, result in (all together now) “chilling effects.” When, for example, the Reagan administration was confronted with the awkward fact that a great many people were defaulting on repayment of their federal student loans, and consequently made student aid of this kind more difficult to obtain and to evade repaying, a chorus of journalistic shock went up over these “savage cuts,” which could only have (how did you guess?) “chilling effects.” You would think that once, just once, the writers on the New York Times would discover a chilling cut that had a savage effect, but if they ever did I missed it.
The paramount economic cliché about the Reagan years is that under Reagan’s economic policies only the rich got richer. Kevin Phillips, who is usually advertised as writing from a conservative standpoint, recently reiterated this cliché in the New York Times Magazine, writing: “It was the truly wealthy, more than anyone else, who flourished under Reagan.” Phillips adds: “Meanwhile, everyone knew there was pain in society’s lower ranks, from laid-off steelworkers to foreclosed farmers. A disproportionate number of female, black, Hispanic, and young Americans lost ground in the 1980’s, despite the progress of upscale minorities in each category.” Now, doing a turn on this cliché by reversing it, Forbes went to a putatively liberal Harvard economist named Lawrence Lindsey, who reports that, in his view, owing to the Reagan tax cuts, the budget will be balanced by the middle of the 1990’s and the American economy will be generating enormous surpluses. Even more tax cuts are required, according to Professor Lindsey, “to preserve the incentive and to avoid giving the politicians money for pork.” So there you have it: a conservative announces that everything bad you have ever heard about the Reagan economic policies is true, and a liberal reports something like the reverse. What’s a simple-minded fellow in a 1988 Olds Cutlass Ciera to think?
From behind the tilt wheel and tinted glass of that Olds, it begins to look as if economists of all shades of political opinion are not that smart. “If You’re So Smart, How Come You Ain’t Rich?” is the subtitle of an essay by Donald N. McCloskey (“The Limits of Expertise,” American Scholar, Summer 1988) that discusses, in an amusing and penetrating way, the extreme limits of the predictive powers of economists. (Some economists, it is true, are rich, but generally from giving advice and not from taking their own.) Along with not being very good at predicting the economy, economists are not especially good at describing it, either, which makes it very difficult to know what is going on at any moment. On this morning’s radio news, for example, I learned two perfectly and typically contradictory economic facts: that the American economy continues to show expansion and that the consumer confidence index is well down. Go, as they say, figure.
Again from my windshield’s-eye view, it strikes me that Reagan’s economic policies were successful at reducing inflation and unemployment, neither a small thing. As for these same policies encouraging unprecedented greed, I suspect that there is no act of greed without plenty of precedent. Moreover, I was interested to read in George Russell’s recent “Trashing Wall Street” (COMMENTARY, July 1990) that it was a Democratic Congress that paved the way for the great junk-bond fiasco by passing legislation freeing S & L institutions to invest in high-risk enterprises. Reagan’s economic policy is blamed, too, for the creation of the homeless. AIDS is also often blamed on what are felt to be Reagan’s rotten economic priorities. Dubious stuff, but then, as they used to say in the 60’s, if you’re not part of the solution, you’re part of the problem. All such accusations, though, it seems to me, lend to policy a greater prestige, and reality, than it perhaps merits.
Permit me here to bring in my own rather drab experience, this with government policy over a good portion of the past decade in the arts. In 1984 I was nominated by the Reagan White House to be a member, for a six-year term, of something called the National Council of the National Endowment for the Arts. The Council, which meets quarterly, is an advisory group to the Endowment and its chairman, and for the most part discusses past and present and plans for future policy. Everyone currently on the Council is either a Reagan or Bush appointee, which, one would think, might make them conservative in outlook. Not quite so. A clichémeister such as Robert Brustein might write, in the New Republic, of “Reagan’s thin-lipped dismissal of the arts” (as opposed, one wonders, to whose fat-lipped acceptance?), but in fact the arts budget grew consistently during the Reagan years, and the spirit of the Council, as it was when I first joined it, remains preponderantly liberal. Most members have never met a work of art they didn’t like, and such policy as the Council creates, such public positions as it takes, tend to be very far from anything anyone would be likely to describe as conservative. Such, then, is the policy-enforcing power of modern Presidents.
If enforcing policy in one’s own country is a tough enough job in a democracy, obviously it is even more difficult to estimate the effect of one’s foreign policy on other nations. And yet there is small doubt that United States foreign policy under Ronald Reagan—and insofar as this same policy was continued under George Bush—had a good deal to do with the astonishing collapse of several Eastern European Communist regimes. A less intransigent-sounding foreign policy than Reagan’s would, I suspect, have delayed this collapse, might even have postponed it indefinitely. In Central America, similarly, a less hostile policy toward the Sandinistas on our part would likely have put off for years the free election that removed the Sandinista government from power at the behest of the Nicaraguan people.
Liberal journalists used to get a hearty laugh out of what they felt was Ronald Reagan’s archaic, not to say troglodytic, anti-Communism, making many a smug little joke about his view of the Soviet Union as the “Evil Empire.” But as everyone now knows—and ought to have known all along—the people who were forced to live under Soviet rule viewed it essentially as Ronald Reagan did. Reagan’s foreign policy, by keeping up a high degree of pressure on the Soviet Union and other Communist regimes, and by its implicit encouragement of the hopes of adversaries forced to live under these regimes, contributed greatly to the devastation of this, yes, “Evil Empire.” There are many other reasons, in my less than perfectly disinterested view, for feeling good about having voted for Ronald Reagan, but this is surely chief among them.
John B. Judis:
As a political era, the 80’s began in the 70’s, if not earlier—as a response to the antiwar and civil-rights movements, the slowdown of the American economy, and the decline of American power overseas. The reaction reached a climax in Ronald Reagan’s first term and then began to dissipate, though not to disappear. We are still in the 80’s.
The reaction was fundamentally conservative, rooted in a kind of imperial nostalgia—a sense that the United States, once the unchallenged leader of world capitalism, was in decline, and that to halt, if not reverse, that decline, it was necessary to go back to what had worked in the past, ranging from free-market individualism and the virtue of the Founding Fathers to cold-war preparedness and the sexual mores of Muncie.
In politics, the reaction first emerged in Barry Goldwater’s and George Wallace’s 1964 presidential campaigns. Its political success—meaning its ability to forge a majority coalition of Republicans and erstwhile Democrats—depended on two factors: first, the growing racial backlash among white Southern and urban working-class Democrats; second, the transformation by conservative politicians and intellectuals of 50’s-style radical rightism into a majority, governing philosophy.
By the last two years of the Carter administration, the reaction already dominated politics and policy. Just as liberals took charge of the Nixon administration’s domestic agenda, conservatives called the tune in the Carter administration’s last years. Carter’s tax reform became a capital-gains cut; labor-law reform was defeated; and the airlines were deregulated. Even on foreign policy, Carter, by 1980, had capitulated to his conservative critics: he shelved SALT II, and his projected military budget increases from FY 1981 through 1985 were comparable to those achieved during the Reagan administration.
Reagan carried these conservative initiatives forward and added some of his own—appointing a judiciary opposed to the post-1964 civil-rights rulings and removing restraints on S & L’s and mergers—but his primary innovations were ideological rather than substantive. Where Carter had come to stand for American defeat, Reagan stood for the possibility of victory—whether over the Soviet Union or scarce oil supplies. In years to come, Reagan’s first term and the 1984 “morning in America” election will be seen as a kind of ideological Indian summer—the final time in which American goodness and purity would be highlighted against the shadow of the “Evil Empire” to the east. Indeed, by 1986, Reagan and Secretary of State George Shultz had sharply turned course on foreign policy and were steering America out of the cold war and—to that extent—out of the 80’s.
How one evaluates this era of conservative reaction depends on how one defines its objectives. For instance, if one sees the U.S. in the late 70’s as having fallen woefully behind the Soviet Union militarily and strategically, if one sees Western Europe as having been on the verge of “Finlandization,” and if one’s primary goal was to remove this looming Soviet threat, then the program of conservative reaction was an enormous success, and the 80’s a time of American triumph.
On the other hand, if one believes that in 1979 the window of strategic vulnerability was a canard and that the Soviet Union posed little threat to the U.S. and Western Europe, then, from a standpoint of American national security, much of the military buildup of the early 80’s was wasted. What it may have accomplished was to accelerate the collapse of Soviet Communism and of the Soviet empire in Eastern Europe and to speed the Soviet withdrawal from Afghanistan. If this is so, then the buildup did benefit peoples under the Soviet thumb. This is a signal achievement, but its direct benefit to Americans is highly debatable.
This is particularly true if one’s measure of success is economic and social. By this measure, the period from 1971 through the present has been one of decline and disintegration, particularly in the 80’s. From 1980 to 1988, the U.S. share of world exports in automobiles dropped 46 percent, computers 36 percent, microelectronics 26 percent, and machine tools 17 percent. In 1980, the U.S. controlled 60 percent of the world market in semiconductors; by 1988, it controlled 38 percent and the Japanese 50 percent. As the political scientist Chalmers Johnson has remarked, the Soviet Union lost the cold war, but Japan won it.
Decline in these industries means that the American standard of living will sink as American workers are stuck with the less productive and lower-paying jobs within the international division of labor. If the decline continues, America will have the same relation economically to Japan and Germany that Britain or even Brazil has at present to the U.S. And the U.S. will become like Manhattan writ large, divided between a wealthy, parasitic banking class, beholden to foreign capital, and an increasingly unemployed and unemployable working class.
This decline would have occurred no matter who won the elections in 1980 and 1984. It is based less on immediate policies than on broader structural factors. As microelectronic-era manufacturing has required greater capital and long-term planning, the U.S. has been at a disadvantage because of the historic antagonism between industry and government. Japan and Western Europe have used government-industry consortia to pull even with or ahead of the U.S. in chip manufacturing, high-speed railroads, and now even commercial aircraft.
As manufacturing and services have required greater education and teamwork, U.S. industry has also been hampered by frayed relations between capital and labor and by a deteriorating educational system. In the 80’s, for instance, U.S. automakers discovered belatedly that the secret of Japan’s auto-making success was not its robots, but its greater reliance on team production and worker innovation.
In addition, the U.S., as the leader of world capitalism after World War II, incurred military and economic obligations that forced it to divert scarce fiscal and scientific resources to the military and to adhere to the canons of free trade even while other countries ignored them. Japan’s meteoric rise from 1965 to 1971 was largely due to demand created by the Vietnam war and to its protectionist trade and industrial strategies.
Thus, considerable damage had already been done by the late 70’s, when conservative policies began to prevail, but there is no question that these policies—the hostility toward government, the attitude toward labor exemplified by the reaction to the air-traffic-controllers’ strike, the neglect of the human and physical infrastructure, the increase in military spending, and the refusal to protect vital American industries—reinforced these structural causes of economic decline. In his book Trading Places, former Reagan-administration Commerce Department official Clyde V. Prestowitz, Jr. offers eloquent testimony to how the Reagan administration’s blind adherence to free trade allowed the U.S. semiconductor industry—perhaps the most important industry of the 21st century—to be gutted by the Japanese “dumping” chips at below cost in the U.S. market.
In addition, the Reagan administration contributed to an ongoing fiscal crisis that has severely limited government’s ability to boost economic growth. On one side of the ledger, the administration not only increased military spending, but through its reckless deregulatory policies, laid the foundation for the multibillion-dollar S & L bailout; on the other side, its crusade against government has encouraged opposition—to the point of paranoia—to any tax increases. Even if the Bush administration wanted to undertake a significant public-works or educational program, it would have difficulty doing so.
During the 80’s, the only bright spot in economic policy came because either massive political pressure or the imperatives of the cold war overrode the government’s commitment to laissez-faire economics. This occurred during the Carter administration’s bailout of Chrysler, which saved the company and made money for the government, and the Reagan administration’s belated funding of Sematech, a semiconductor consortium in Austin, Texas. But as the cold war has ended, conservatives in the Bush administration have abandoned any effort to protect and foster key American industries. In economic policy, the Bush administration is probably more conservative than the Reagan administration. (The Bush administration’s policy-makers, looking at Eastern Europe, even interpret the collapse of totalitarian planning as a victory for Friedmanite economics—a mistake that the Western Europeans and the Japanese are unlikely to make.)
During this era, liberal Democrats have not presented a viable alternative to conservative reaction, but have in effect either justified it or been part of it. In the 70’s, for instance, liberal jurists, backed by civil-rights organizations, virtually invited a white backlash by pressing for busing as a means of school integration. Liberal economic programs, like the Humphrey-Hawkins Full Employment Bill, conceived of the state as a gigantic version of the post office, dispensing jobs to the unemployed. With few exceptions, liberal policy on trade, foreign investment, and multinationals was the same as conservative policy. In 1984 and 1988, for instance, there were no significant differences between the presidential candidates on these issues.
As the U.S. enters the 90’s, it is necessary to move beyond both the liberalism of the 60’s and the conservatism of the 80’s. In economic policy, it will be important to recognize that the U.S. is no longer the unchallenged leader of world capitalism and, in important respects, has fallen behind both the Japanese and Germans. As the U.S. did in the 19th century and as the Japanese did after World War II, America will have to use the power of government to protect and nourish key industries—not so much to be number one, but simply to be part of the action in the most advanced industrial sectors. If the U.S. doesn’t do this—if it allows its economic future to be dictated by financial speculators or foreign lobbyists—it will continue to decline.
In foreign policy, the U.S. must begin reconceptualizing its role in the world. Crucial to this, of course, is the recognition that economics has once more become primary—not only in relation to former adversaries, but also in relation to Latin American countries whose debt to U.S. banks has imperiled our trade balance. In the 80’s, America’s share of Latin American markets increased but its total exports dropped precipitously. If American exports to Latin America during the 80’s had increased at the same rate as they had increased for the prior three decades, the U.S. would not have had a trade deficit.
The U.S. will also have to recognize that America’s alliances will shift—that it may well become in America’s interest to strengthen rather than weaken the Soviet Union as a counterweight internationally to Germany and Japan and to promote stability through aid in those regions where traditional ethnic rivalries, if allowed to fester, could return the world to the situation it faced in 1914. The Bush administration, to its credit, has moved in this direction, sometimes over opposition from both liberal Democrats and conservative Republicans.
For the purpose of this symposium, however, the most important point is that if the program of conservative reaction did confer benefits upon the U.S. and the world, these have now been entirely exhausted. There is no longer any reason whatsoever for seeking the destruction of the Soviet Union. On the contrary, the U.S. now has a distinct interest in preventing the Soviet Union from descending into chaos. Nor is there any reason to continue deregulating business and finance and defunding American inner cities. The question for the next decades will not be how to get government off people’s backs, but how to use it so that Americans can once more stand straight.
The 80’s, it is true, did indeed end with a bang of anti-Americanism even louder than the usual thumping of the American intelligentsia. To be fair, however, New York City was its epicenter, and those who live not quite so close to ground zero were perhaps less perturbed. New York is the headquarters for only one kind of anti-Americanism, which blames the system. There is another traditional critique of the country, which blames individuals. The meaning of redemption in the two cases is radically different, for one is Marxist, the other Christian. Both partake of a persistent theme of our history—the juxtaposition of the country’s noble goals and our inherent personal unworthiness. And there is much more in our cultural and religious tradition which warns of the pitfalls of too much wealth, power, and glory.
One cannot blame the Left for its sputtering and near-hysteria at the end of the 80’s. It had been a tough decade for the Left, its worst ever. At home and abroad, in economics and politics, in strategy and diplomacy, the Left had been dispersed and wholly discredited. And there is worse to come. The Soviet government’s embrace of the Right’s traditional reading of Soviet history and practice was bad enough. Once the archives of the former Communist satellites begin to yield their assorted facts, even the worst of “right-wing anti-Communist paranoia” will come to appear as sweet naiveté.
Though the Left has thus been routed, it still retains its hegemony in matters cultural, so it is natural to expect a redoubling of its efforts there. And ironically—or better yet, dialectically—it is all possible because 80’s capitalist intrigues brought about the further embedding of the leftist view of things in the great telecommunications-amplification machinery in New York, and in the now relatively few publications/communications/entertainment combines that really matter. Growing economic concentration and monopolization in this sector—the result of 80’s “greed,” junk bonds, insider trading, and whatnot—have placed the Left in firm control, ideologically speaking, of these enormous combines.
The Left also benefited from the good fortune that Reaganesque “greed” produced elsewhere. The universities, the foundations, and other places which generate the ideas that trickle down, or up, into the plebeian culture, were all enriched by the Reagan, now Bush, bull market. It increased endowments and incomes enormously. The Left is now camped out in its version of Bohemian Grove, not quite clear about how or why it has become unimaginably rich, but knowing only that it deserves to be, and is thus all the more able to be spoiled, self-indulgent, and dissociated from reality in the blessed way of those who do not have to work.
Yet it is not obvious what any of this means. Even more than the rest of America, those sectors of society which sustain the Left became rich indeed. But as all the novels remind us, just because you’re rich doesn’t mean you’re happy. Is this An American Tragedy? As the Left’s real income and power seemed to grow, its psychic income can only have declined. We may have one-party control of our major media and of our cultural, intellectual, and academic life—but so did East Germany. And while our one party—let us for convenience call it the Anti-America party (or AAP), because this has nothing to do with being a mere Republican or Democrat—seemed on the one hand to be tightening its control intellectually, on the other it could hardly be pleased by its problems politically. Was anybody listening?
The Anti-America party intervened visibly and vociferously in all three presidential campaigns in the 80’s, and in all three it failed. The 1988 fiasco and its aftermath were especially galling. The AAP had done everything it could for Michael Dukakis; it had pulled out all the stops. Walter Mondale had had the good grace to disappear, but Dukakis had turned out to be not even an Adlai Stevenson who could be mythologically inflated in defeat. Instead, Dukakis shrank to the point where not even those who most ardently promoted him felt much other than embarrassment. Meanwhile, the Bush presidency itself supplies nonstop frustration for the AAP which worked so energetically, and nastily, to bring about his defeat. First, it turns out that eight years of anti-Reagan indoctrination did not prevent what was, in effect, a vote for a third term for the “ex-actor.” As for Bush himself, he has become hugely popular, principally because times are good—better, in fact, than they have ever been, and everybody knows it—but also because Americans really do prefer old-fashioned Georges and Barbaras to newfangled Mikes and Kittys. Again, it turns out that a decade’s attack on the traditional family, and the traditional male role and all that, merely reminds people of how desperately they want traditional families and traditional men.
Still and all, this is only the minor part of the Left’s discomfort. Our homegrown Anti-America party has been stripped of its global pretensions and connections. It has turned out to be the American Century after all. The present ascendancy of the United States—strategic, economic, and ideological—is really quite breathtaking. No single country—not even Britain at its height—has ever held an equivalent position. We who have been taught to be magnanimous and humble in times of triumph can therefore be grateful that the noisy petulance of the American Left has drowned out any unseemly gloating about America’s successes.
What can explain the dichotomy between the AAP’s seeming successes and its real failures, between our familiarity with every detail of its thinking about every conceivable subject, and the triumph of things of which it has tried to keep us ignorant? The explanation might be as simple as the propensity of almost all people to sort through the junk and reject descriptions of reality which diverge radically from reality as they know it. In particular, the hateful anti-Americanism which suffuses public discourse is rejected by Americans to the degree that they know something about the subject under discussion. Besides, even citizens of a free society have learned the techniques of passive resistance to attempts at thought control. Consider, for example, what is permissible in public discourse about some of our domestic problems, what people really think about those same problems, and how they really discuss them among themselves. Do they flee Park Avenue apartments because they fear marauding bands of students from the Dalton School? In this respect, there is a certain harmonic resonance between the concepts of “workers’ paradise” and “gorgeous mosaic,” for they are joined by the propensity of sane people to flee from the true nature of both. People have learned from Lenin’s famous question “who/whom?”—they are seldom if ever confused about who is doing what to them.
One likes to think that this is nature’s way of mitigating the seeming cultural hegemony of the Left. And one also likes to think that those loci of the Left’s power not normally answerable either electorally or economically—the university professoriate, the federal judiciary, the civil service—will also be impressed, even though, formally speaking, they are immune to the normal disciplines imposed by our society. Nor will the kind of anti-Americanism propounded by our intellectual classes persuade many people in a world which has experienced the political and philosophical revolutions of the 80’s. Indeed, this mode of American academic and intellectual and journalistic discourse has entered the genre of self-parody, dotty and idiosyncratic when it is not just plain loony. More and more, people will react to it as they would react if North Korea’s Kim Il Sung were to resume his old practice of buying full-page advertisements in the New York Times.
Then, too, there is the 80’s transformation of the communications industry into a relative handful of enormous commercial organizations. True, the people who actually own or run these businesses dread denunciation for philistinism or illiberalism by their hired help, and do not much bother these employees when business is good. But this is still capitalism, and when television networks continue to lose viewers, when film investors start to lose money, and when the returns from publishing are not what they need to be, then all the old bets may be off. Time, for example, has lost hundreds of thousands of readers. So far, at least, the increasingly bizarre ideological bent of the magazine’s editorial content has not been identified, in public anyway, as a possible culprit, but Time-Warner, Inc. may soon be driven to try anything, even normality.
Yet one cannot be matter of fact, or overly analytical, or in any way cavalier about the enormous damage inflicted by the application of Left doctrines, or about the enormous effort required to repair the damage. It is easy enough to hope that the damage will be repaired sooner or later, but that phrase can encompass a long, painful, and frustrating period for those who have paid the real costs. “Sooner or later,” for example, we will follow the lead of Leipzig and do away with the East German-like social and economic system which, in the name of progress and justice, the Left has managed to foist on our inner cities. This system, like socialism in general, contrives to make people poorer than they once were and then to keep them that way, all the while suppressing their capacities for normal civic, social, and cultural life.
The contribution of the 80’s, then, is to make the repairs seem possible. For unlike many earlier decades which we remember as times of muddle, the 80’s were the decade of clarity. As Kenneth Minogue of the London School of Economics has pointed out, there has never been a comparable test case in social, economic, and political life which was so plainly decided in favor of one side—the side of democracy and capitalism, our side. The decade, moreover, saw more freedom, more opportunity, more hope, and more prosperity for more people than any other ten-year period in the history of the world. That much of this was inspired and presided over by the United States validates our civic creed and vindicates the efforts of our citizens. It was a decade filled with accomplishments in this country and in the world on a scale so great that the mere memory of it will remind people of what can be achieved.
Meanwhile, the 90’s have begun and the new decade already wants to know what the 80’s have done for it lately. In praising the clarifying power of the 80’s, however, one need not lose sight of other instructive eras. Winston Churchill, we remember, wrote a six-volume history of World War II. He inscribed a theme in each one; the last, Triumph and Tragedy, was written, he said, to recount how the great democracies triumphed and were thereby free to resume the follies which had nearly cost them their lives. It is always possible that the 90’s will be a time for the resumption of such folly. But no one will be able to blame Ronald Reagan for that.
Have the ideas and policies prevailing in the United States in the 80’s triumphed worldwide?
The coming of liberal democracy to Eastern Europe and the unraveling of Communism in the Soviet Union vindicate enduring principles of the West—those of open societies, competitive economies, and democratic political life with guarantees of individual liberty. It is wishful thinking, or at least premature, to talk of those ideas triumphing worldwide. Communism remains entrenched in East Asia; Islamic fundamentalism convulses nations from North Africa through South Asia; regimes in Africa and Latin America continue to be wracked by intractable social problems and political instability. The ultimate political destiny of the Soviet Union itself remains dark and uncertain.
But if it is an error to take the past year’s progress for a consolidated victory and the Soviet bloc for the world, it is worse to confuse the common traditions of the West with particular policies of the Reagan administration. Liberals and conservatives alike have reason to celebrate the advance of principles that unite us. But the policies of the Reagan era that divide us have had no similar triumph, not among our allies and not at home.
I have no idea whether the Reagan administration’s economic and social policies “stimulated greed on Wall Street” or “encouraged a general mood of selfishness.” Greed and selfishness have never been in short supply on Wall Street or anywhere else. But the policies of the last decade have unquestionably left us with a colossal financial burden. The S & L fiasco is perhaps the most graphic illustration of the national costs of misconceived deregulation. The 1981 tax legislation helped to stimulate a wave of mergers and acquisitions that converted equity to debt on a massive scale and left many companies excessively leveraged and vulnerable to collapse, as the fall of the house of Drexel itself illustrated. Federal deficits at previously unimaginable levels added more to the national debt than had been accumulated in all of our previous history. Interest expenses now represent the third biggest item in the federal budget. As interest costs and the defense budget absorbed an increased share of federal spending, productive public investment—the share of government spending that goes to infrastructure and other productivity-improving purposes—declined to the lowest levels in four decades. Supply-side policies were supposed to increase the private savings rate; they failed to do so. America as a whole consumed 3 percent of GNP a year more than it produced; we made up the difference by borrowing from foreigners, and we shall be paying them back for years. All of this will be part of the long financial hangover from the 80’s.
So, a costly decade, yes; a disastrous one, no. Relative to our Gross National Product, the federal debt is still smaller than it was after World War II. And while the proportion of that debt owed to foreigners is higher, the burden is manageable. So, too, are the staggering costs of the S & L bailout. But to say burdens are manageable is not to say we should celebrate or continue the policies that brought them upon us. It was not these policies that gave the United States, much less the West as a whole, a victory over Soviet Communism. Soviet power is collapsing because of the deep and endemic problems of Communism. The liberal democracies of the West are now gaining ground because they have discovered the means of reconciling initiative and innovation with political and economic stability. The deep strength of our system is that it is forgiving: it permits us to make errors in policy, and to recover. But the errors made by the United States have a cost to our power and prosperity. Relative to Western Europe and Japan, the United States has slipped—because we have been unwise, and because they have graduated from the junior position in the alliance that they accepted for nearly a half century after World War II. Only in that restricted, relative sense is it possible to speak of American “decline.”
In a larger sense, the United States faces extraordinary opportunities. Advances in science and technology are changing the basic relations of time and space in the economy and promoting a global expansion that should enable us to continue to grow and to cope with the problems of the environment, an aging society, and the persistence of hard-core poverty. The end of the cold war should enable us to shift resources, such as scientific talent, from the military to more economically productive uses. Nothing, of course, guarantees that we will succeed in capitalizing on those opportunities, but fate has not decreed that we go the way of the British empire.
Internally the picture is highly uneven, socially and regionally. Whether you think the 80’s were, from an economic standpoint, a good decade or a bad one depends exactly on that—your economic standpoint. For roughly the top third, the 80’s were exceptionally good. The value of financial assets soared. Between 1980 and 1988 America’s upper 20 percent of households increased their share of total income from 41.6 percent to 44.0 percent, while the share going to the bottom fifth fell from 5.1 percent to 4.6 percent. To take an even more stark contrast—this for the years 1977 to 1987—average incomes for the top 1 percent rose from $174,000 to $304,000 (up 74 percent), while average incomes for the bottom tenth dropped from $3,528 to $3,157 (down 10.5 percent). Forbes’s 1990 survey of 800 top chief executives found them making an average of $1.4 million, more than double the $620,000 they made in 1985, even though profits had increased only 40 percent over the same period. Average wages for production workers did not increase in real terms at all between 1980 and 1988. By the end of the decade, according to the economist Frank Levy, the median incomes (in 1989 dollars) of male high-school graduates, ages 25 to 34, had actually fallen from $23,000 to $20,000. One of four children under age six in America is growing up in a household with an income below the poverty line. Lower-income Americans have not just experienced a loss relative to the rich; their real level of living has declined. The homeless are but the most visible sign of that deterioration.
Increasing inequality has diverse sources. Since pre-tax incomes became more unequal during the 80’s, tax policy cannot be the entire explanation. While the global economy offers new opportunities for those with skills and resources, it puts America’s less skilled into direct competition with low-wage workers in poor countries. Economic changes unfriendly to less affluent Americans, however, did not appear for the first time in the 80’s. In other periods, national policy sought successfully to reduce the gap between rich and poor. The difference in the last decade is that national policy has widened the gap.
Poverty is only one aspect of the continuing failures of social policy. The system of public schools that once gave us one of the highest educational levels in the world is now in deep trouble. Even apart from the poor, our children perform nowhere near the level of children in Japan and Western Europe. America’s health-care system is by far the most expensive in the world, both absolutely and relative to our national income—yet some 37 million Americans, the majority of them members of families with a working adult, have no health insurance; and on every major indicator of health, we lag among the advanced societies. By the same comparison, we are beset by higher levels of drug use and violence and have the highest rate of imprisonment, with over a million Americans behind bars—the majority of them young men who ought to be contributing to rather than subtracting from America’s wealth. The “prevailing ideas and policies” of the United States in the 80’s show no sign of remedying these problems and are not models the rest of the world is driven to adopt.
In the great struggle between capitalism and socialism, socialism has lost. But among the capitalist countries, the variations in the design of institutions and policies are considerable, and the Reagan-era, laissez-faire model is by no means triumphing over the diverse alternatives. Highly interventionist governments in East Asia have had the highest rates of economic growth in recent years. The Western Europeans are committed to more comprehensive social policies and higher levels of public expenditure than is the United States; European unification is reinforcing that pattern. Throughout the world, environmental concerns are a reminder of the limits of the market.
In the United States, the two principal elements of the Reagan agenda of 1980—the drive to deregulate markets, privatize services, and cut back government, on the one hand, and to build up America’s military posture on the other—are now played out. In some areas, such as telecommunications and trucking, deregulation reflected a political consensus, gotten under way before 1980, and is now well established. In others, such as the environment, the Reagan initiatives were divisive and quickly cut short. And in still others—finance, cable television, airlines—the results have been unimpressive, in some respects dismal. As a politically popular movement, deregulation is finished: the S & L scandal is the coup de grâce.
So, too, with cutbacks in government spending. In fact, overall government spending did not decline during the 80’s because of higher defense budgets and interest costs. The cutbacks came entirely in discretionary social expenditures (that is, exclusive of Social Security), down from 9.7 to 7.3 percent of GNP (a cut of one-fourth). With the end of the cold war, that change may partially be reversed. California voters’ recent approval of higher taxes for public infrastructure suggests that the acute taxophobia of the 80’s is receding. President Bush’s retreat on taxes may also be a signal.
Did the United States need to shift spending from domestic programs to defense as much as it did? Believers will insist higher defense budgets were essential to bringing about the breakdown of Soviet Communism. But it is a hard case to make, given the severity and systemic origins of the Soviets’ problems. Some things are now clear. First, the CIA estimates of the Soviet economy that backed up our own high defense spending were grossly exaggerated; the Soviets were in much worse shape than conservatives, ironically, could admit. Second, the long-term contracts for weapons procurement made during the 80’s hang over us like the S & L bailout. They are now costly to terminate even when the rationale behind some weapons systems is disappearing.
I do not believe, on the other hand, that there is any particular ill legacy to Reagan’s foreign policy. The world has changed so radically in two years that the question of American intervention abroad takes on an entirely different meaning today. The premises of 1980—that the United States needed to take a harder line against Soviet expansionism, deploying new weapons in Europe and opposing Soviet-supported regimes and movements elsewhere—lose their force in a world where the Europeans can take care of themselves, thank you, and many local and regional conflicts in the Third World are losing their former global significance. Confronted by an unfriendly dictator in some misbegotten Third World country, we ought to evaluate our interests in the cool recognition that it probably will not make much difference to us whether he is a white zebra with black stripes or a black zebra with white stripes. Indeed, the less the United States needs to vie with the Soviets for influence in the Third World, the less opportunity will such regimes have for playing one superpower against the other to extract arms and assistance. This is one of the less apparent “peace dividends” from the end of the cold war, and it will enable us to defend our real national interests in theaters of conflict where America’s moral and political role is genuinely needed.
America’s great opportunity now is to become the country we have held ourselves out to be. What people elsewhere in the world acclaim about the United States is not the policies that produce our budget deficits and trade deficits, our leveraged buyouts and financial bailouts, our failing public schools, costly and inequitable health-insurance system, welfare programs, housing policies, or drug-enforcement techniques. They admire our basic liberties, the openness of our society, the ingenuity of our technology, the freshness and energy of our culture. These deep springs of vitality continue to offer us our best hope for the future. The trick will be to apply that American energy to the common purposes and responsibilities to each other neglected during the last decade.
One of my favorite passages from presidential speeches runs as follows:
Entertaining a due sense of our equal right to the use of our own faculties [and] to the acquisitions of our own industry . . . ; enlightened by a benign religion, professed . . . and practiced in various forms, yet all of them inculcating honesty, truth, temperance, gratitude, and the love of man; acknowledging and adoring an overruling Providence, which by all its dispensations proves that it delights in the happiness of man here and his greater happiness hereafter—with all these blessings, what more is necessary to make us a happy and a prosperous people? Still one more thing, fellow citizens—wise and frugal Government, which shall restrain men from injuring one another, shall leave them otherwise free to regulate their own pursuits of industry and improvement, and shall not take from the mouth of labor the bread it has earned. This is the sum of good government, and this is necessary to close the circle of our felicities.
This passage, of course, is not from Ronald Reagan in 1981 but from Thomas Jefferson in 1801; but the reader will perhaps agree that the spirit is Ronald Reagan’s.
Indeed, Ronald Reagan spoke often of his presidency as “a new beginning,” a fresh start on the original American experiment. In the classic sense, every successful revolution (re-volvere) is a turning back to first principles, and in the American case turning back to principles “conceived in liberty and dedicated to the proposition that all men are created equal.” Reagan preached on democracy in London and Moscow, and made the principles of Jefferson and Lincoln echo far away: Jefferson’s Declaration (“We hold these truths . . .”) was used by Zdenek Janecek, a brewery worker in Prague in November 1989, to explain to his fellow workers the meaning of “the velvet revolution”; just as in Shanghai the preceding June, the students had carried aloft a white plaster replica of the Statue of Liberty. The Reagan revolution, like its predecessor, was a shot heard ’round the world.
Barely three weeks into office in 1981, President Reagan told guests at a White House dinner for Margaret Thatcher that Communism was being swept into the dustbin of history. His “simplistic” confidence made the sophisticated cringe, as did his later assertion that Communism was “the focus of evil” in the world. Described as Manichean and vociferously objected to by progressives in the West, this sentiment was solemnly assented to by those weighed down by that evil in the East and has become a commonplace in Eastern Europe today.
Indeed, the implication of the Reagan era for foreign policy is simple and inescapable. If he was right, the Left was wrong, wildly wrong, on just about every important issue of our time: on Communism, on human rights, on the arms buildup vs. the nuclear freeze, and even on the character of Mikhail Gorbachev.
I have some particular claim to knowledge of Reagan’s strategy on human rights since by his tenth day in office, the President had me flying to the Human Rights Commission (HRC) in Geneva, the first Reagan appointee to speak on any matter of foreign policy at all and “the first Reaganaut in European captivity,” as my ambassadorial colleagues were amusingly to describe me. Before my arrival, these colleagues were said to be very curious, as if a Reaganaut might show up with holster and cowboy boots.
I remember answering one question after another at my first formal dinner, at the home of a Scandinavian ambassador, in terms supplied me by the Reagan administration: that America was founded in the name of human rights, so that on the central propositions our allies could expect unwavering consistency; that my instructions were to condone no violation of human rights anywhere in the world, but to criticize all abuses by a consistent standard; that on most practical questions our voting instructions would most likely (97 times out of 100) be substantially like those of the Carter administration, since a great nation like an aircraft carrier changes course only slowly; but that our allies could expect our speeches and our actions to be more consistent with one another and more steady than the Carter administration’s. In particular, Reagan would insist on holding the USSR, its satellites, Africa, and the Islamic countries to the same standard as Latin America was held: one single standard for all. Again, Reagan intended to be judged by results, not words: that is, by whether, after his term was through, the actual human-rights situation around the world had improved.
By this measure, the revolution in human rights and in the building of democracy around the world since 1981, not only in the Philippines and in Chile and in eleven other new democracies in Latin America but also in the heart of the “Evil Empire” itself, is in line with Reagan’s policy. The world by 1990 is much closer to where Reagan wanted it to be—where Reagan pushed it to be—than anyone but he predicted. Let me dwell a little longer on the underlying strategy.
My colleague in Geneva, Richard Schifter (now Assistant Secretary of State for Human Rights), argued openly that Western military power was the only reliable guarantor of human rights in the world, and that a serious and careful use of words was absolutely essential to the protection of human rights. He and I expressed our loathing for the Orwellian usages of the HRC (particularly its inversion of such words as “racist” and “terrorist” in its relentless annual assault upon Israel). We urged our allies to unite behind a strategy to advance Western concepts of human rights, rather than surrendering defensively one step after another in their erosion. We helped to form the first majority in the (then) thirty-seven-year history of the HRC to discover and to denounce in public even one human-rights abuse behind the Iron Curtain (Poland under martial law), and we began the process of including Cuba under human-rights scrutiny. Ronald Reagan wanted to signal to the world that human rights are not protected by slogans, promises, or words, but by institutions of democracy and civil society. This was a conscious long-term strategy.
Thus, when Mikhail Gorbachev came belatedly upon the scene as General Secretary in 1985, he faced a different ideological momentum in the world and a new correlation of forces. He was not at first any champion of glasnost or perestroika. People may forget how closed, stubborn, and testy Gorbachev was in the beginning about human rights, as evidenced by the glacial resistance of the Soviet delegation in Bern, Switzerland, at the Helsinki follow-up talks of May 1986, whose best offer Reagan turned down as totally inadequate. This was at the time when the disaster at Chernobyl was being hushed up, just before glasnost altered the climate.
To whom goes the credit for the great and sudden change of direction in the USSR? So Time asked Agostino Cardinal Casaroli at the close of 1989. Earlier, the prudent and reserved Vatican Secretary of State had sought to moderate Reagan’s military buildup, especially the Strategic Defense Initiative. Nonetheless, the experienced Vatican Cardinal carefully assigned credit as follows: “Ronald Reagan obligated the Soviet Union to increase its military spending to the limit of insupportability. He made everyone understand that rearmament was a dead-end street.” This is a great tribute to Ronald Reagan.
Further, it is probable that Mikhail Gorbachev could not have agreed to the process leading to the liberation of Eastern Europe in 1989—and perhaps not even to glasnost and perestroika in 1986—in the face of any other American President except Ronald Reagan. First of all, only Reagan could have engineered both the decisive military buildup and the change in ideological warfare. Second, any liberal Democrat, even a liberal Republican, who would have “trusted” Gorbachev in those early days would have been sharply attacked by the very conservatives who were willing to be led by Reagan. For his part, Reagan laid the perfect foundation for Gorbachev’s desperate attempt to save Communism and the USSR. Before he went to Geneva, President Reagan told a small group of friends that if he could have four or five hours alone with the new Soviet leader he was sure he could persuade him that he intended the Soviet Union no harm; and that a man of his age could be concerned only with the children of his children’s generation. About Gorbachev, those present may have thought this hope terribly naive; but about Ronald Reagan it was thoroughly convincing. The record shows that Gorbachev also found it so.
In his earlier profession, Reagan had mastered both an instinctive grasp of the ideological drama within which persons speak their lines and a knack for reading quickly the essence of character, and so he quickly recognized Gorbachev as “a different kind of Communist . . . certainly no Lenin, . . . a new type of Russian,” and played his own presidential lines accordingly. More than anyone else, Ronald Reagan made recent history happen as it did, and this marked his presidency with greatness. In Clare Boothe Luce’s game of epitaphs: “Ronald Reagan toppled Communism.”
Domestically, however, most of the New Class—leaders of culture, journalism, and social activism—have loathed what Ronald Reagan stands for since before he became President, while he was President, and still today. They were accustomed, in the age of television, to getting their way: they had driven out Johnson, destroyed Nixon, ridiculed Ford, humiliated Carter. They were astonished that Reagan could be elected, and it killed them that the public loved him more than them, which they blamed on the public’s gullibility. And so the years 1981-88 were the darkest in their lives. They pretend now that it was a troubling illusion, an unhappy memory, a phase already ended. Having failed to vanquish Reagan in the hearts of the people while he was in office, they hope at least to blacken his memory.
To the decade that saw the greatest outpouring of private philanthropy in history ($115 billion in 1989, 88 percent of it given by individuals) and the largest outpouring of volunteer hours (more than 40 percent of American adults giving at least five hours per week) they have fraudulently tried to attach the name of “greed.” Tens of millions of evangelicals, never politically active before, went from being “the silent majority” to taking an active part in politics. Millions rallied for “Back to Basics” programs in the schools. One of the greatest human-rights programs of all times, the pro-life movement, rose up from the grassroots with virtually no support whatever from the nation’s cultural elites—indeed against the visible contempt of elites—to protest against the excesses of Roe v. Wade in 1973. (The same elites who protested loudly against “police brutality” toward civil-rights marchers in 1964, and against “the pigs” during 1968’s youth rebellion, nodded with satisfaction as the police used unnecessary cruelty against the prayerful participants of Operation Rescue.)
Against much early scorn for Reaganomics, Ronald Reagan argued that cuts in income-tax rates were necessary for two reasons: both to hold level the percentage of national income going to government, and to generate economic growth. By arduous persuasion, brilliant tactics, and exceedingly close votes, Reagan got his tax cuts—and the country got the longest period of peacetime economic growth in American history, to the point that his critics stopped calling this success Reaganomics. When Reagan took office, the prime interest rate was 19 percent, inflation was 13.5 percent, and unemployment was over 7 (for a misery index of 40). When he left, the prime rate was 8, inflation less than 4, unemployment 5.2, and no one was talking about a misery index.
These successes both in foreign and in domestic affairs left the Left desperate. Thus, even as socialism was being discredited internationally (not only Communist socialism in Eastern Europe, but also Mitterrand’s socialist turn of 1981), and even as Mitterrand, González, Soares, Kinnock, and other European social democrats turned speedily to market principles, free enterprise, and lower tax rates, the American Left reflexively turned to the old Marxist tactic of class warfare, envy, and resentment.
This line of attack has become the conventional wisdom of those American journalists who are easily intimidated by the Left. Reagan’s tax cuts, they say, helped the rich and hurt the poor. Blacks especially, they say, suffered under Ronald Reagan. In America, inequality is growing, they say, and the U.S. leads all other major industrial countries in the gap dividing the upper fifth of the population from the lowest fifth.
All these assertions are false, as anyone who examines all the numbers sees; they are fashioned by selecting some numbers and neglecting others. Thus, Kevin Phillips in The Politics of Rich and Poor claims that “among major Western nations, the United States has displayed one of the sharpest cleavages between rich and poor.” Phillips supports this with exceptionally rough World Bank data (about which the Bank itself signals caution) from the years 1978-80 (before the Reagan inauguration). He forgets that the U.S. is a continent-sized, multi-ethnic nation, not a small homogeneous nation like Denmark. Measures of inequality in our most Swedish state, Minnesota, come quite close to those of Sweden; those of the U.S. as a whole are better compared with those of Canada, Australia, or Europe as a whole. Besides, unlike social-democratic nations (Kevin Phillips, social democrat?), the U.S. is not and has never been committed to income equality. “The protection of different and unequal faculties of acquiring property,” Madison wrote in Federalist #10, “is the first object of government.” The people of the U.S. define fairness in terms of opportunity, not of outcomes, as the social-democratic writer Jennifer L. Hochschild plaintively reports in What’s Fair? (1981).
Similar tricks are played with the lowest quintile in annual income. During the 80’s, the total percentage of income going to the bottom quintile declined—but the total amount of income received by that quintile rose. This last point is encouraging; moreover, since at least 1960 the characteristics of those in the bottom quintile have been steadily changing. By now, almost two-thirds of the householders in the bottom quintile are single women, and relatively few are married men. About half of these female householders are elderly widows. For this and other reasons most householders in the bottom quintile are nowadays not in the work force. Thus, even the unprecedented economic growth of the 80’s could scarcely increase the cash income of those who are neither employed nor looking for work. And comparing the bottom quintile of 1988 with the bottom quintile of earlier decades is no longer comparing like with like.
The characteristics of householders in the top quintile have also changed. It is not so new that nearly all are married, have had at least four years of college, work full time, and are in their highest-earning years (45-64). What is new is that the highly educated spouses of nearly all of them are also working full time for comparably high incomes—thus doubling the advantage of the top quintile over lower quintiles. As long as this phenomenon continues, so will “growing inequality.” The Census Bureau figures on income, incidentally, are pre-tax and so entirely unaffected by cuts in tax rates.
Many falsehoods have also been written about how the “Reagan tax cuts” have helped the rich at the expense of the poor. Yet Reagan’s economic growth with low inflation was not nearly so damaging to the poor as was Carter’s stagnation with high inflation. Carter’s inflation raised the poverty level from $5,815 in 1976 to $8,414 in 1980, sweeping some 4.3 million persons below the poverty line simply by eroding the value of incomes. Nothing hurts the poor like inflation; Reagan stopped that.
More important, though, the Reagan tax cuts exempted most of the officially poor from paying any federal income taxes at all. (It was not Reagan, but the Congress, that raised Social Security taxes.) By 1987, those in the upper brackets paid both the highest amounts of federal income tax ever paid and the highest proportion of all income taxes ever paid. For example, those in the top 5 percent paid 37.6 percent of all income taxes in 1979, but 43 percent in 1987. The actual taxes they paid were also significantly higher, both as a gross amount and as an average effective tax rate (22.4 percent). The top 10 percent of returns in 1987 paid 55.4 percent of all federal income taxes. Indeed, the top half of taxpayers in 1987 paid 94.1 percent of all federal income taxes. The entire bottom half paid only the other 5.9 percent.
Moreover, in certain respects, the condition of blacks in the U.S. improved significantly. In 1980, only 9 million blacks were employed; by 1988 this number had jumped to 11.4 million. The total annual income earned by all U.S. blacks rose steeply (in constant 1988 dollars) from $191 billion in 1980 to $259 billion in 1988—a sum larger than the GDP of all but ten nations in the world. The number of black families earning more than $50,000 per year jumped from 392,000 in 1982 to 936,000 in 1988. The median income of black married-couple families in 1988 climbed to $30,424. Of course, the bleak side of this picture is that the number of single-parent black families rose by 15 percent between 1980 and 1988, from 1.9 million to 2.2 million. The vast majority of single female householders were not in the work force, and a majority of black children in 1988 were born out of wedlock. Here median income was far lower; and nearly all indices of suffering and disability were higher. No wonder the measure of inequality (Gini coefficient) among blacks was far higher (.450) than among whites (.382).
Choices of family pattern are not entirely amenable to government intervention, although some argue that federal welfare policies enable certain self-injuring behaviors. Reagan was able to set welfare thinking on a new course, but not to reverse the damaging trends.
More than these great foreign and domestic achievements, however, Reagan deserves credit for inspiring this large, diverse, and important country to keep faith with its own destiny. For him that destiny was manifest, and manifest he made it to the world—even to the Communist party of the USSR. Ronald Reagan may not have caused a revolution in America, but he did renew the original one and give it universal reach.
The two questions posed by the editors of COMMENTARY actually come down to a single one: the relation of prevailing opinion among academics, intellectuals, and the media to political realities. The answer, unfortunately for the state of social thought, is that these opinions represent largely what people want to believe rather than sustained observation and analysis. Essentially, they reflect the disorder that has invaded much of our thinking about literature, philosophy, and politics. I suspect that this period may become known as the age of confusion.
In the political chaos that has surrounded the last decade—polemically known as the Reagan years or Reaganism—it is not easy to sort out the false from the true characterizations. On the whole, however, it is patently absurd to characterize the last decade as a plunge into darkness. Despite many of its failings, the country is scarcely in a catastrophic condition domestically. As for our foreign policies in the 80’s, our standing in the world is higher than it ever has been since soon after World War II. And the movement toward democratization and the free market in the Soviet Union and Eastern Europe certainly has vindicated our policies and done nothing to blacken the image of our own brand of democracy.
To be sure, not everything has been idyllic, even if it has not been disastrous. But to deny that it has been disastrous does not mean we should deny the failings. We cannot ignore the wild takeovers, the widespread corruption, and the frequent disregard of the national interest. Deregulation seems to have produced the S & L disaster. More might have been done to reduce the gap between the rich and the poor which appears to be widening; more could have been done to improve the environment; greater and more effective measures could have been taken to reduce crime and to do something about the abysmal state of education. But the confusion and the distortion arise when these failures are inflated so as to darken the entire picture and to dismiss the 80’s as the black years of Reaganism. (Perhaps to make clear my own perspective, I should say I have never been a Reaganite.) But surely Reagan was not responsible for every failure; some were just as much the responsibility of Congress (controlled by the Democrats). And many of the vulgarities and the unlimited pursuit of self-interest that we abhor are rooted in the laissez-faire system—which, to be sure, Reagan did little to control—and in the emergence of the much-publicized “me” generation and the Yuppies with radical politics. This seems to be the price we pay for our liberties and our prosperity.
As for our foreign policies, they have been more successful than our domestic ones, though they have not always been intelligent or responsible. Nor are they sufficiently long-range. For example, the Iran-contra affair, however well-intentioned it might have been, was stupidly planned and executed. Also, relations with Israel were often strained because of some almost inexplicable blindness toward the machinations of the Arabs and an unwillingness to understand the demagogy and treachery of Arafat. The misleading slogan of evenhandedness masks the specific causes of the Israeli-Arab conflict. On the other hand, the arms buildup and the stiffening attitude toward the Soviet Union under Reagan, which came under so much liberal criticism, turned out to be not only correct but successful—as can be seen in the democratization of Eastern Europe and the easing of relations with Russia. (It must be said in this connection that Bush does not seem to be following Reagan’s lead, particularly in his captious attitude to Israel and in his almost uncritical support of Gorbachev.)
Part of the negative characterization of the 80’s is a cultural matter. For a good deal of the criticism of Reagan by academics and the media was a reaction to his style and his personality. The image of the actor who could read his own lines fluently but was not at ease with the language on his own did not command much respect from the educated classes. Indeed, Reagan did not have any of the aura of an intellectual leader, of a statesman. As a matter of fact, the gap between Reagan’s reputed intellectual laziness and lowbrow tastes on the one hand, and the fact that he was able to pull the country out of its post-Vietnam sloth on the other, remains something of a mystery. Perhaps the explanation is to be found in the political genius of the country, noted early by Tocqueville. Unlike the nations of Europe, which produce statesmen, we create political leaders who, like Reagan, Eisenhower, and Truman, represent the essence of the common man elevated to a leader.
But this is only half of the equation between the political realities and the popular perceptions of them. The other half, I think, lies in the area of ideology, the ideology of liberals and the Left, who have had much to do with our thinking about the events of the last decade. And here the question is not so much whether the ideas in this quarter have been right or wrong as that they have been the source of much muddled thinking. The point therefore is not whether Left-liberals have been wrong about Reagan and his years in office; in some respects they have been right. It is, rather, that the interminable Reagan-bashing with its automatic dismissal of everything connected with the Reagan years has made it difficult to arrive at a clear understanding of the ideas and events of the 80’s. As I have suggested, in some matters, as in the new tax law, which is not what it was claimed to be, the Democrats have been as responsible as the Republicans. In other areas, such as crime, drugs, and economic inequities, the culpability that belongs to both parties and to the condition of our society as a whole has been attributed to the Reagan administration alone. On the other hand, the anti-anti-Communism of many liberals and much of the Left, which recent events have shown to be foolhardy, has generally escaped critical examination. The complete failure of socialism in the Soviet Union, for example, and the admission of terror and the slave-labor camps are themselves commentaries on those who regarded anti-Communist criticism as a species of conservatism.
The confusion scarcely lets up. We are now in the era of Bush-bashing, which is an automatic continuation of Reagan-bashing, even though on many key questions, with the notable exception of the issue of abortion and to some extent that of taxation, Bush is almost indistinguishable from liberal Democrats. On Lithuania and China, for example, there has been an ironic reversal, with the Democrats pressing for a tougher policy, and Bush leaning toward a softer one. On the other hand, extreme right-wingers, such as Patrick Buchanan, have made their own contributions to mindless policies by insisting that nothing basic has changed in the Soviet Union.
In essence, the fact that Left-liberals have made it more difficult to think seriously about political and social questions is at least as important as their mistaken views. This is particularly unfortunate now when we are entering a new, uncharted era, and when we are still fed the ideological platitudes that have served to conceal the real issues in the past. The two most important questions today are the unification of Germany and the extent of the changes in the Soviet Union under Gorbachev. And in both cases a certain amount of confusion comes from the persistence of old liberal views. In the matter of unification, fears of re-Nazification—which happen to coincide with Russian rather than American interests—are relics of liberal ideology (as well as the understandable feelings of many Jews). As for Gorbachev’s Russia, we have difficulty in assessing its power and stability to some extent because of the lingering effects of the tendency to play down Russian economic backwardness, political controls, and military strength. No doubt, glasnost and perestroika represent significant liberating changes, but so far they have been limited in scope: the Communist regime is still intact, and genuine democracy and pluralism are not in sight.
Another example of the will to cling to outworn beliefs is the response of some socialists to the changes in the Soviet Union and Eastern Europe. Many of them insist that socialism has not been repudiated and that there is no movement toward a free market. For example, Samuel Bowles, who describes himself as a leftist, says, in a recent comment in the Chronicle of Higher Education, that the death of socialism in Eastern Europe will revitalize leftism in this country. There is also the continuing obfuscation in the use of the term “socialism.” Usually it is not the abolition of the private ownership of the means of production advocated by Marx and Lenin that is meant by socialism. What is being referred to is some version of social democracy, generally a mild one, mainly little more than a welfare state.
These intellectual habits, it might be noted, are not limited to the area of practical politics. Much has been written criticizing the latest trends in deconstruction, feminist literary theory, black aesthetics—and the radical assault on the cultural tradition. (Helen Vendler’s recent piece in the New York Review of Books is a devastating criticism of the feminist approach to literature.) But, to my knowledge, it has not been pointed out that aside from the question of their validity, these new cultural theories actually have produced an enormous amount of intellectual confusion. For to deny the primary meaning of texts and the authenticity of the Western tradition has created a state of intellectual anarchy, in which any theory seems to have as much credibility as any other. It is difficult to say which is cause and which is effect. But the confusion in political matters is of a piece with that in cultural ones. One is reminded of Dostoevsky’s remark—through one of his characters—that if God doesn’t exist, then anything is possible. Similarly, if there is no authority informed by tradition, then anything goes.
If, as I have suggested, we are intellectually handicapped by political confusion as well as by the profusion of trendy theories, it would seem to be less profitable to argue the validity of specific positions than to clear the air of ideological commitments—of those habits of mind that are bound to twist our thinking about many political and cultural questions.
The characterization of the 80’s that introduces this symposium is taken seriously only by those who are not themselves serious. It is, not to put too fine a point on the matter, loser’s history. This is history as seen by the adversary culture, dominant among “journalists, historians, and intellectuals,” but quite unrecognizable to the unbeguiled majority. It is history as seen by those who don’t much like America, and most Americans like America just fine, even if they are sometimes bemused by the inability of the country’s cultural elites to appreciate the benefits of this almost chosen nation.
The antagonism of the adversary culture toward America’s bourgeois democratic reality is of course nothing new. Earlier traces of it can be found among losing Federalist elites in the late 18th and early 19th centuries, in the moral fastidiousness of the New England Transcendentalists, in the goo-goo restorationism of the post-Civil War era (of which the supercilious Adams brothers were the perfect expression). It has come fully into its own in the 20th century, first in the 20’s, again in the 50’s, and now in overripe maturity in the 80’s.
Two brief and partial points of concession. First, the current vogue of the adversary-culture perspective is part of the inevitable ebb and flow of political opinion. No political mood lasts forever, and it is in the nature of things that the conservatism of the 80’s should produce something of a left-wing reaction. That reaction is exacerbated by the Bush succession: had Richard Nixon rather than John Kennedy succeeded Eisenhower in 1961, the relatively subdued reaction of the cultural elites of the time against the 50’s would have been as exaggerated as is that of their counterparts against the 80’s today. Second, there are elements of truth in the critical view of the 80’s. There were, given the entrepreneurial enthusiasms of the time, excesses on Wall Street and elsewhere in the business community (though the idea that the Reagan administration somehow invented greed and selfishness is one of those stock moralisms that it takes an American liberal to believe), and there was a widening of the spread in income distribution during the period (though the vast majority of Americans improved their situation in the course of the decade and the desperate condition of the homeless and of the underclass of which they are a part owes far more to cultural and structural economic factors than to the policies of the Reagan administration).
The great reality of the 80’s, ignored almost as much on the Right as on the Left, is the ascendancy of conservatism. (The massive exception here—there are other partial ones—is the advance of feminism; its against-the-grain victories would require a separate essay properly to explore.) We have witnessed in the past decade a transformation of the vocabulary and agenda of the American political economy. Liberals (and Bourbon-restorationist conservatives) note that Americans have not fully repudiated the welfare state; they ignore the far more significant point that no one today calls for the return of the Great Society. Of course the Reagan administration did not restore laissez-faire; neither did the Harding, Coolidge, or Hoover administrations of the 20’s. (For that matter, there was never in the American experience a condition of full laissez-faire to be restored.) What matters is that not only was the once seemingly inexorable march toward an ever-greater statism retarded, the idea behind it was brought fundamentally into question.
The temptations of hyperbole in political discourse duly noted, it is still fair to say that there occurred during the 80’s a Reagan counterrevolution, and the further removed from it we become in time the more obvious in substance it will appear. I will leave to others in this symposium the making of the easy case in international affairs. Suffice it to say here that those who believe that the revolution of 1989 in Eastern Europe had little or nothing to do with the policies of the Reagan administration might as readily believe in the ministrations of the tooth fairy.
Reagan’s achievements in the domestic sphere were, if less dramatic in effect than those in foreign affairs, no less significant in fact. Here a comparison with the presidency of Franklin Roosevelt suggests itself. Thirty years ago the noted American historian Carl Degler referred to FDR’s New Deal as the Third American Revolution (after 1776 and 1861-65). By the term “revolution” he meant a number of things, but more than anything else he meant the idea that the federal government had taken ultimate responsibility for the functioning of the economy, that from that time forward it was to the public sector, not the private, that Americans would look as guarantor of economic health and security. Degler represented the conventional wisdom of the pre-Reagan postwar era, as was confirmed in President Richard Nixon’s observation—then considered quite unexceptionable—that “we are all Keynesians now.” (Today one wonders if there are any Keynesians left.)
The comparison of the Reagan 80’s with the Roosevelt 30’s can be pursued further. FDR’s critics on the Left, much like those of Reagan on the Right, complained that he had not done enough, that he had squandered his chances truly to set things right, truly to seize the moment and rearrange America’s political and economic landscape. It was only in hindsight that the significance of his changes became clear, only from Degler’s perspective a quarter-century later that observers could perceive the outlines, within the context of an extraordinarily long-lived national consensus, of a peaceful American “revolution.” It is in precisely such a context that one can speak soberly of a Reagan “counterrevolution,” of a return, mutatis mutandis, to the classical Lockean ideals of bourgeois republicanism that the New Deal and the realities of economic modernity had presumably relegated to the dustbin of history. Not only did Reagan frustrate the Left’s long-nurtured ambition to transform America’s modest welfare state into a social-democratic redistributive state, he revitalized faith in the traditional (and presumably outmoded) American national virtues: individualism, private enterprise, the work ethic, voluntarism, personal freedom and responsibility.
The magnitude of Reagan’s accomplishment becomes fully apparent only in historical perspective. Anyone who has taught American history is acutely aware of the degree to which modern American political thought has been dominated by what might be called the progressive paradigm. The paradigm is disarmingly simple in outline. It suggests that the growth of big government is the inevitable result of the development of a complex national economy. In that view—prevalent since the emergence of Populism in the 1890’s—modern industrial conditions required the development of a national governing authority equipped to manage the economy, regulate private corporations in the public interest, and provide a welfare system to meet the vagaries of modern life.
Originating in a form of economic determinism, the paradigm carried obvious implications for political ideology, not only in America but everywhere the modernization process operates. From its premises flowed implied justification for ever more fully developed manifestations of strong central government: the logic of historical development moved from laissez-faire to mild regulation to the welfare state to the planned economy to social democracy to one or another form of socialism. Within that perspective, liberal periods in the American past such as the Populist and Progressive eras and the New Deal represented necessary accommodations with the future, while conservative decades like the 20’s, the 50’s, and, by extension, the 80’s represented anachronistic and futile protests against the imperatives of history. The progressive paradigm provided, in other words, the story line for modern American development, the framework within which one could place and make sense of particular events in the nation’s history. And in that story line liberals were always right and conservatives always wrong.
The hegemony of the paradigm followed from its apparent unanswerability. The most dogmatic of libertarians, after all, would be hard-pressed to deny the paradigm altogether. It is the case that big government did, to some extent, flow inexorably from the processes of modern industrial development. Bigness begat bigness. The flaw in the paradigm issues from a subtle and insidious fallacy: the assumption that if, given modern conditions, some increase in government is good, even necessary, then more of that necessary good thing must be even better. But the conclusion does not follow—not in logic, and far less in history.
Americans learned that lesson in the episode of the Great Society. The Great Society represented a logical extension of the spirit of the Depression-era New Deal—the latter’s innovations having been consolidated in modified terms during the Eisenhower 50’s—into the prosperous 60’s. But the Great Society was, by any standards, a great fiasco. It did not solve the social problems at which it threw such huge amounts of money (it made many of them worse); it was, in macroeconomic terms, fiscally debilitating; it created horrendous problems of social dependency; and it stimulated social divisions and social unrest from which the nation is still suffering. The lesson of the Great Society is inescapable: we would have been better off without it.
American liberalism has in fact never recovered from the Great Society. The Watergate debacle diverted America’s attention for a considerable period, and the diversion continued during the odd and inept Carter interlude, but the larger pattern remained clear: the progressive paradigm had been broken and the “L-word” with which it was associated had become an ideological onus from which prudent politicians carefully distanced themselves. Activist government was no longer in the public mind an unambiguous good thing; it had become at best uncertain and problematic, at worst the unwitting source of our deepest problems.
It was Ronald Reagan’s achievement to raise all this from the level of inchoate intuition to conscious perception. Perhaps, Americans came to understand, the 80’s were not an aberration at all, any more than had been the 20’s or 50’s before them. Perhaps it was the decade of the 30’s, with its unprecedented experience of economic catastrophe, that was the aberration. Perhaps a dominant free-enterprise economy, undergirded by a safety net for the small minority who could not make it on their own, made more sense of the American experience than did visions of collectivist social democracy. Perhaps—heretical thought—Reagan told the American story better than Roosevelt had. A counterrevolution, indeed—and one given ideological confirmation by the collapse of socialism and the socialist ideal everywhere except in circles of the willfully self-deceived.
Not that Reaganism has fully carried the day. It has had to deal with a problem of cultural lag. Even while the politics of the progressive paradigm came under radical reconsideration, its moral correlative remained more or less in force. Commonplace political rhetoric—rhetoric dominated, of course, by the “journalists, historians, and intellectuals” responsible for the tendentious characterization of the 80’s that introduces this symposium—continued to equate social morality with government activism on behalf of those deemed to be the least-advantaged members of society. Concepts of “compassion” and “caring” translated inescapably in that view into new laws passed, new monies spent, new bureaucracies set in place. Despite the by-now innumerable cautionary experiences of the law of (doleful) unintended consequences of social policy, it remained the case into the Bush era that an administration skeptical of activist government placed itself by definition in the moral wrong. (There is also the troubled world of social issues—abortion, quotas, gay rights, feminism, crime and punishment, the family, moral and religious values—in which conservatives have made, at best, mixed advances and which will for the indefinite future remain an arena of Kulturkampf.)
But if conservatism is not unambiguously victorious, liberalism has, for the moment and into the foreseeable future, quite definitively lost. The liberals are writing coterie history to the contrary—history that tries to make its case by changing the subject—but there is no good reason why the rest of us should indulge their fantasies.
I do not think the 80’s have by any means been a disastrous decade for America. At the beginning of that decade, the Soviet Union was a totalitarian state with a vast gulag of imprisoned souls, not to mention the oppressed people of Eastern Europe under its heel—all a far cry from what now obtains. To a significant extent the collapse of Communism had its own rhythms and causes—yet surely the military strength of the West and its relative unity under NATO account for some of that collapse. Who knows what Brezhnev and his heirs (not to mention the dreary and dreadful bureaucrats who ran the Eastern-bloc countries) might have done were the West a pushover for them politically and militarily! I think, all along, the implacable critique of Leninism and Stalinism has proven to be one of the glories of conservatism. Of course, not only naive, gullible, or wrong-headed leftists or liberals have made mistakes with respect to their judgment of Stalinist totalitarianism in its various forms. I would like to see some of our political scientists look carefully at what Jeane J. Kirkpatrick said about the possibilities for change in Communist dictatorships, and what Hannah Arendt handed down as virtually a doctrine in The Origins of Totalitarianism—the inviolability of such a political system. What would Arendt think and say were she alive about what has happened during the last few years of the 80’s?
Moreover, who is checking up on the remarks and predictions made by many leaders of the so-called nuclear-freeze movement upon the election of Ronald Reagan? I listened to Dr. Helen Caldicott speak during that time, and I heard her warning us that a nuclear war was right around the corner (and I am here toning down severely her various statements).
A factory worker in the General Motors Chevrolet plant in Framingham, Massachusetts, told me he had heard Dr. Caldicott speak on his car radio, and thought this:
She’s half-crazy, and so are some who hold her up as the wisest one around. You can hear it in her voice, not just her words—all that scary talk, all that screeching, it is! And if you disagree with them, they point fingers and shout and try to tell you they’re smart and you’re dumb, and they’re right and you’re—well, you’re not just wrong, you’re sick!
A strange silence from such people now—and no eagerness to look critically at their own past judgments and, too, the accusations they leveled at others. For example, in the early 80’s a few of us suggested to some leading nuclear-freeze activists that their rhetoric was terribly exaggerated—and that we really did not think it fair that they should go to schools, as they were doing, with a message that Reagan’s election meant that a nuclear war was “inevitable” during his presidency. The response, I fear, showed that hysteria and meanness are by no means absent in people who regard themselves as sensitive and thoughtful and well educated. We were told we had this or that “problem,” we were “denying reality,” we were “rationalizing”—yet another reminder of a peculiar kind of cussing that is to be found among some of us in the psychoanalyzed segment of the liberal intelligentsia: if you don’t agree with someone, quickly slap some psychological label on him or her, cast doubt about the person’s motives, and in general use psychiatry as a discrediting device. In any event, all that seemed a strangely distant and irrelevant controversy by the end of the 80’s.
As for our domestic scene, I applauded the emphasis in the 80’s upon “family values”—a solid home life as the mainstay of society. I also was horrified and disgusted by an increasing indifference to the needs of millions of our most vulnerable citizens who were in jeopardy. I was, too, appalled by the various scandals of this past decade, the parade of crooks and phonies and liars who have walked before us on television and in our newspapers and magazines: the Boeskys and Milkens, the Swaggarts and Bakkers, and, of course, the high federal officials who have been shown to be part of all that—nothing new in our history, granted, but a reminder that those who denounce government excess and proclaim a new morality, all too readily, on the assumption of power, can exemplify in abundance what they have condemned.
Sometimes, as I look at this country’s recent political life, I think we are shaping a system that casts doubt on any federal assistance to the ordinary working people of this country, while granting, in the form of tax advantages and credits and privileges, tariff laws, subsidies, all sorts of encouragement and support to the well-to-do and the wealthy—a sort of free enterprise for the working class and socialism for the rich. I grant that some of our welfare laws have not worked—as I well know from my work with teenage mothers, who so often need most the moral and spiritual life they badly lack. Without such a life they do, indeed, become the apathetic, depressed, “welfare-dependent” single parents their harshest critics accuse them of being. Still, we have not in the 80’s reached out strenuously and adequately to such people—tried to figure out how we can engage with them so that they can, in turn, engage with our social and economic system. There is, too, plenty of old-fashioned, out-and-out racial suspicion and hate in this country—and, at times, I worry that such a side of our continuing history has been more than heeded (even catered to) by those who have succeeded politically in this past decade.
Personally, I have felt utterly out of keeping with much that has happened politically and culturally during the 80’s. I have strongly endorsed Christopher Lasch’s brilliant critique of the “culture of narcissism,” including the role played in that “culture” by my ilk, the ubiquitous shrinks. (We’ve removed the Bible from our classrooms, and all too commonly downplayed the significance of the flag, but the school psychologists are a mainstay, it seems, and the ideology they propound—the moral and philosophical assumptions they often unwittingly affirm—seems in no danger from either our Supreme Court or our liberal critics.) I associate myself with Lasch and with Jean Bethke Elshtain, and with much that Daniel Bell has written—conservative on so-called social and cultural issues, and populist or egalitarian on economic issues: anxious to give working families and the poor the kind of economic boost our Congress gladly gives to already rich farmers and industrialists and bankers (the S & L scandal!) and defense contractors through various kinds of “special legislation”—again, tax breaks, credits, price supports, and on and on: a kind of “welfare dependency” we hear less discussed than is the case with that to be found in our urban ghettos. For some of us, then, the 80’s may well have been a decade of political loneliness—unable to stomach the cultural side of American liberalism as well as the class bias of American conservatism, and tempted, as I surely was in 1988, not even to vote in a presidential election.
The characterizations are wide of the mark. How, then, to explain why “so many people in America” accept the indictment? Well, let’s see now . . . I think it goes something like this:
Throughout the 70’s, the combination of inflation and an unindexed, “progressive” tax code moved all middle-income earners into tax brackets designed for the wealthy. This created great uneasiness across the land and led to the election of Ronald Reagan. Supply-side economics came in at about the same time. Stripped of all jargon, the new theory turned out to be timeless. Its central claim was that economic policy must be compatible with human nature. “Humans are rational,” the supply-siders said, in effect, “and they will not work very hard if unjust and excessive taxation strips them of the fruits of their labor.”
This insistence that human rationality must be acknowledged and deferred to by economic theorists may now seem unsurprising, but ten years ago the economics profession had strayed so far from all such “psychological” considerations that it seemed revolutionary. There ensued a tremendous outburst of indignation—echoes of which may still occasionally be heard on the editorial pages of the New York Times, the Nation, etc., and almost nightly on the CBS Evening News.
Economic journalists in particular led the charge against the restoration of human nature to economics. According to the cherished world view of the economics profession, there was something called “the economy” which worked hydraulically. There were income streams and savings sumps and liquidity traps—and worrisome imbalances if the levers of trade or fiscal or monetary policy weren’t properly handled. (The pressure might drop alarmingly throughout the whole system and we’d all be in a pickle.) Tax cuts? They would be inflationary and the math was there to prove it. But as long as wisely trained, well-intentioned people were at the control panels in Washington, “the economy” would perform satisfactorily enough. Human nature had nothing to do with it. It was absurd and dangerous even to suggest such a thing. (Would you amateurs mind staying out of this serious business, best left to professionals?)
The insistence that people wouldn’t be productive if deprived of the fruits of their labor was treated as though it were the kind of observation that a century earlier had been discarded as irrelevant by Alfred Marshall of Cambridge, the teacher of John Maynard Keynes. Marshall’s Principles of Economics, published in 1890, had transformed “political economy” into “economics,” and in the process philosophizing had been displaced by science, and much reactionary baggage discarded. In particular, the idea that the economic performance of a country depended on the self-interested behavior of its citizens was called into question.
Marshall believed, quaintly as it now seems, that human nature was changing rapidly, especially in the fifty years from 1840 to 1890 (corresponding to Charles Darwin’s ascendancy), and this in turn, Marshall thought, had brought about a “change in the point of view of economics,” which was beginning to pay “every year a greater attention to the pliability of human nature, and to the way in which the character of man affects and is affected by the prevalent methods of the production, distribution, and consumption of wealth.” The need for private property, then, earlier regarded as axiomatic by economists, “doubtless reaches no deeper than the qualities of human nature,” Marshall wrote. (Describing the intellectual climate of the late Victorian period, Bertrand Russell dryly noted that “everything was supposed to be evolving.”)
But now, in 1980, there came this retrograde development. Supply-side amateurs were insisting that it was back to square one for human nature! The reintroduction of the idea of incentives into economics threatened to overturn whole libraries and faculties, institutions and schools of thought—a carefully nurtured way of looking at the economic universe which excluded the individual almost completely (except as a consumer of goods). So Arthur Laffer was laughed at and George Gilder ridiculed, but that damnable Ronald Reagan seemed to be putting their ideas into practice, more or less; and in England the pushy Margaret Thatcher, the grocer’s daughter who became Prime Minister, was promoting the same silly ideas. How lacking in idealism they were, these parvenu advocates of the nouveaux riches! Was there to be no new society built? (No transformation of human nature after all?) No room for social justice? Were we to be thrown back on the tired old nostrums of private property, selfishness, and “greed”? Quite a setback for progress and civilization.
We all know what happened next. The detested ideas were put into effect (in 1982) and the U.S. economy grew rapidly; it has not stopped growing since. There was likewise a great improvement of the British economy in the 80’s. Furthermore, by the late 80’s it was no longer plausible to pretend (as the CIA had been pretending) that the socialist economies of Eastern Europe were chugging along quite nicely in their own quirky way. (The 1989 edition of The Statistical Abstract of the United States claims that the GNP per capita of East Germany is higher than that of West Germany!) Then, in the fall of 1989, the Communist governments of these Eastern-bloc countries gave up without a struggle. Today the only question is: how do you build capitalism on socialist ruins?
My own belief is that the final failure of socialist economics has been quite distressing for many of the same people whose daily avocation has been accusing Americans of greed and selfishness. In 1982, the left-wing economist Robert Heilbroner was candid enough to admit that
the collapse of the vision of socialism is one of the great intellectual traumas of the West. . . . As inefficiencies and indecencies have become evident in the Soviet Union, Cuba, China, East Germany, Poland, not to mention Yugoslavia itself, the once hallowed term “socialism” has become emptied of content. Moreover, as we look at the ideas of socialism apart from the forms it has taken in specific countries—ideas of central planning, nationalization, the “dictatorship of the proletariat”—we find the same sickening sense of vanishing ideals, empty slogans, terrible mistakes.
I’m sure that this “sickening sense” has been experienced by many intellectuals much more recently than 1982, but we hear few confessions from these people. (You only have to look at the worshipful reaction to Nelson Mandela to realize that the hunger for a socialist order is still very much alive.)
Don’t forget, the Soviet Union was supposed to create “New Soviet Man.” And that failure has been another big disappointment. The idea that human nature could be transformed came from Western Europe, and it is useful to think of the Soviet Union as having been for seventy years a passive laboratory for an evil experiment on man, suggested by Western intellectuals and carried out on unwilling subjects. As long as there was still some hope that this experiment might produce positive results, the Soviet Union remained effectively immune from Western criticism. And don’t forget that these hopes were still alive as late as the 60’s. But the experiment finally failed in the Gulag Archipelago, and today it has been effectively abandoned. The Soviet Union itself is unlikely to survive the experiment. It was a very costly and dangerous failure in applied sociology, and again, quietly traumatic for many in the West.
Only recently, it had seemed, we were well on the way to building new societies all over the world. They would be run by incorruptible, highly qualified leaders of men, beloved by their peoples, pragmatic in outlook, but driven by moral rather than merely material ambition. Julius Nyerere of Tanzania was the ideal. We would provide the cash and the condoms and the know-how where necessary (Robert McNamara and his World Bank would see to that); but when it came to soul and authenticity, by golly they would have something to teach us for a change.
Now it is becoming clear that if other countries are to attain our standard of living there is only one way they will be able to do so: they will have to forget about tribalism and adopt Western institutions. I am talking about the rule of law, secure private-property rights, the freedom of contract, government preferably limited constitutionally, and so on. (We are ourselves in danger of forgetting about, or even actively undermining, these institutions. Our leaders don’t talk about them enough, or even really understand them. We talk too exclusively about democracy—an insufficient prescription.)
Those who take pleasure in condemning America find all this very galling. They may pay lip-service to the free market but in practice they despise it and will continue to do so. It creates wealth, yes, but it also denies power to intellectuals. It reduces lawyers to service-providers—working on wills and estates. I’ll never forget Barry Bosworth, then as now with the Brookings Institution (he also worked for the Carter administration), telling me in 1976 that the trouble with free-market economists was that they were always talking themselves out of a job. True.
In the 1860’s, an English legal historian named Sir Henry Maine noted that “the movement of progressive societies has hitherto been a movement from status to contract.” Those who were formerly born to greatness were more and more being displaced by those who had succeeded through their own efforts. But even as Maine wrote, the socialist movement was gathering steam. There followed Sidney and Beatrice Webb, the Fabians (to whom Alfred Marshall was always deferential), the revolution of 1905, the Bolshevik Revolution, “New Soviet Man,” Mao Zedong, Khrushchev, Sputnik, Fidel Castro, the 60’s, “people’s democracies,” the Communist triumph in Vietnam . . . followed by Thatcher and Reagan and the 1980’s. In retrospect, what we have been through is a century-long attempt to reverse Henry Maine’s dictum: in other words, to replace contract with another kind of status—one deriving from credentials rather than birth. The 80’s saw the end of this counterrevolution of expertise, much to the annoyance of the experts.
It is rarely wise, professional historians will warn you, to judge a decade—or a government—immediately upon its passing. Not only are we too close to the events and the personalities to be objective, we simply lack the longer-term perspective to provide us with the proper comparative measure of historical “success” or “failure.” A monarch or a president regarded as mediocre by his contemporaries looks a lot better in posterity’s eyes if it turns out that he is followed by several totally incompetent successors; a decade characterized by economic recovery and increased business confidence (the 20’s, say) appears much more suspect retrospectively if it leads toward financial collapse and industrial decay (the 30’s). If COMMENTARY is still flourishing in twenty-five years’ time, it will be instructive to return to this issue, and check how valid our early rush to judgment upon the 80’s appears by then.
Moreover, in this particular case the problems of premature historical assessment are compounded by the wildly conflicting images of—and opinions about—the Reagan presidency. While some adore Reagan, others denounce him vehemently. Already one has the sense that the more unbalanced critics have forgotten just how popular and appealing the previous President was in the eyes of many of his countrymen. His sense of humor, his gallantry, his natural charm, his preference for riding on the range rather than chairing committees, all in their way complemented his determination to “stand up” to Communism, to make the United States stronger militarily, to assert basic Western values. The Soviet Union had to be dealt with from a position of strength; international nuisances like Libya had to be taught a lesson; if military measures were called for, they would be carried out, be it in Grenada or the Persian Gulf. To an American public, angry and humiliated at various setbacks from Vietnam to Iran, this was a welcome relief. It was as if a frontier town, previously terrorized by outlaws, had at last received a new sheriff. The B-movie actor of the 40’s cowboy films came into his own in the 80’s White House. And the palpable weakening of resolve of the Soviet outlaw (or “Evil Empire”) by the close of that decade, with Gorbachev virtually begging for a compromise, was ample vindication of this Reaganite policy of peace through strength. Clearly, in the view of many Americans, this had been a “good” presidency.
Yet while the Reagan presidency was restoring the American position in world affairs in the short term, it was also weakening it over the longer term. This is not to blame the post-50’s relative decline of the United States solely upon Republican mismanagement, as the Dukakis electoral campaign of 1988 tended to do. The shrinking American share of global GNP and world-manufacturing output, the erosion of its lead in high-tech industries, the failures in its public educational system, the dreadful poverty of its inner cities, the aging of its infrastructure, were long-term developments which neither Democratic nor Republican administrations had succeeded in reversing.
But Reagan’s presidency was “bad,” not simply because it tended to ignore such issues, but also—and more seriously—because it exacerbated them. Its reckless fiscal policies swiftly turned the United States from being the world’s greatest creditor-nation to being its greatest debtor-nation. Its propensity to live beyond its means increased its deficits, and worsened its international indebtedness (and its overall current accounts); by 1988, it was no longer a truly independent nation financially. Its military buildup, based upon exaggerated estimates of Soviet “power,” had siphoned off engineers, scientists, and skilled craftsmen from export-oriented industry, and given a further advantage to Japanese and European competitors. Its encouragement of a totally laissez-faire mentality had prevented any long-term industrial planning on the Japanese model. Its fondness for consumption had hurt savings rates and capital investment. Its concentration upon such symbolic issues as the pledge of allegiance in schools was accompanied by a neglect of any fundamental reforms of the educational system as a whole.
Above all, the sunny, upbeat tone of the Reagan presidency and the positive response of the American public to his “good” years in office—aspects of which were repeatedly emphasized in the Bush campaign of 1988—naturally left the country less ready and prepared to accept the “bad” years of confronting the harsher reality that lay ahead. Increasing taxes, reducing consumption, cutting entitlements, investing in science and infrastructure rather than automobiles and household goods, were going to be that much harder to achieve in future years because Reagan had succeeded in convincing a majority of the American people that all was well and that there was no need to change.
Ironically, one suspects that by early next century the greatest critics of American policies in the 80’s are not going to be the liberals (although they no doubt will still have much to complain about), but the true conservatives: that is, those who prefer fiscal rectitude to profligacy, believe that the United States should possess and protect key strategic industries, distrust laissez-faire economics, and instinctively feel that the country’s long-term prospects as a Great Power do rest upon a manufacturing and financial base that was badly eroded during the period of Reagan’s carefree, good-humored, but essentially feckless presidency. It is from that quarter that there will be the greatest resentment of this 80’s legacy of short-term charm and long-term harm.
Richard John Neuhaus:
Since I do not accept the characterization offered of the policies and ideas of the 80’s, I have only to address the second question. To ask why “so many people” do accept the characterization offered is to ask why so many people identify with the general drift of the Left. Answering that would require an extended excursus on the intellectual history of the West. The more manageable question in this context is: if there has been something like a “worldwide triumph” of American ideas and policies in the 80’s, why does the Left persist in denying the lessons to be drawn from that?
In trying to understand attitudes, whether on the Right or the Left, one should not underestimate the stupidity factor. Also, there is the fact that people value continuity. Their sense of self is at stake. They do not want to let down the side or to be accused of betraying the cause. Even the most putatively radical are traditionalists, as evident in the exhortation to “keep the faith.” In addition, the revolution of 1989 took almost everyone by surprise. People were not prepared for the dramatic demise of Communism and the vindication of democracy and market economics. These events are still very new and have not been intellectually assimilated, never mind formulated into a usable partisan line. When in doubt, people who are paid to explain the world go on saying what they said before. Since recent turns of events falsify more propositions of the Left than of the Right, the Left is the more embarrassed and ends up offering more manifestly silly explanations.
We are currently in a period of post-cold-war Newspeak that, unfortunately, may not be short-lived. Thus people in comatose socialist regimes who still subscribe to the conventional doctrines of the Left are now called conservatives. Thus we are told that the Soviets’ loss of the cold war proves that there never was a cold war, except in the fevered American imagination. Thus it is explained that the revolution of 1989 demonstrates the superiority of Communism, since they are having their revolution while there is no sign of revolution in our society. There is at present a deep, and frequently amusing, incoherence in the explanations being proffered by the Left.
That will not change any time soon. A turn toward coherence would require acknowledging that political arch-enemies (e.g., Reagan and his capitalist gang) were, at least in large part, right, and the Left was wrong. Self-examination, contrition, repentance, and amendment of life do not come easily. That is true on the Left and on the Right, but it is more true on the Left. It is more true on the Left because the secularized Left has made a deeper investment, even a religious investment, in political construals of reality.
The Right is more inclined to relativize the imperiousness of the political, and to take alarm at the slightest whiff of utopianism. The Left is more attractive to political heavy breathers who seek a perpetual high from visions of peace, justice, equality, and cosmic harmony. They seek, in sum, a world very much like that promised in the messianic age. Everyone, knowingly or not, shares that yearning, but the Left is different in that it thought it had the political and economic formulas for the realization of that hope in history. In the aftermath of 1989, a measure of sympathy is in order for people whose entire construction of reality has been rudely destroyed by the very history in which they had placed their faith. In the absence of an alternative belief system, they will, at least for a time, continue to sing the tunes they know, even if they are the tunes of the gods that failed.
Others on the Left are less ideologically driven but are extremely nervous about any talk of the “triumph” of American ideas and policies. They might, sotto voce, admit that America and the West come out of the 80’s looking pretty good. They might even admit that their domestic political opponents turned out to be right on some important scores. But any talk of triumph, they fear, distracts attention from the many problems that need to be addressed in our society. So they tell us that, while it is clear that Communism and even socialism have failed, it is by no means evident that democracy and capitalism have triumphed. In this telling, the failure of the former seems almost causeless. Or, if there are causes, they are to be explained in terms of the internal contradictions, so to speak, of Communism.
Underlying the strategic concern to keep attention focused on America’s faults and failures, there is a moral anxiety about any talk of “triumph.” Many in our elite cultures have learned well Reinhold Niebuhr’s cautions against hubris, while ignoring what he had to say about historical responsibility. Patriotism, in their view, necessarily bespeaks arrogance. Patriotism and nationalism go together, and both offend against the vaulting universalism of the liberal vision. In the Niebuhrian view, patriotism may be grounded not in arrogance but in gratitude joined to a sense of responsibility. The “so many people” mentioned by the editors have great difficulties with that. The National Council of Churches, for example, this year issued a jeremiad declaring that 1992, the 500th anniversary of the landing by Columbus, should be marked not by celebration but by repentance for the exploitation, racism, colonialism, and genocide perpetrated by Europeans in the Americas.
The jeremiad is a venerable genre on both the Right and the Left. Jeremiads from the Right tend to deplore our decline from the high standards and achievements of the culture that defines who we are. Jeremiads from the Left tend to deplore the conceit that we are anything special to begin with. It is therefore much more embarrassing for the Left when Americans find themselves admired and emulated by almost the entire world. It was all right for Lincoln to talk about this experiment as the “last, best hope of earth” in the context of our bleeding for our sin of slavery, but it is intolerable that we should derive any satisfaction from the world’s agreeing with Lincoln in the present situation. In the view of the Left, invoking American ideals against America is acceptable. Thus, during the Vietnam war, we were incessantly reminded that the constitution promulgated by the gangsters in Hanoi was redolent with rhetoric lifted from Jefferson and friends. In short, in both its religious and more secularized forms, the Left typically confuses the Christian virtue of humility with self-hatred. More precisely, what is flaunted as humility is hatred of those who dissent from the Left’s construal of America’s crimes and failures.
The question of why “so many people in America itself” condemn the ideas and policies that seem now to be carrying the day has many parts. The aforementioned time factor accounts, partly, for the difficulty the Left is having in coming to terms with the fact that, in the last twenty years or so, the “stupid party” got smart. The Center and Right-of-Center is where the ideas are. That is hard to accept for a Left that has historically understood itself to be the party of innovation, imagination, and creativity.
Examples abound. Only this year did an institution of the moderate Left, Brookings, get around to recognizing the imperative of parental choice in educational reform. The explanation merchants of the Left are tying themselves in knots to make sense of “conservative bleeding hearts” such as Jack Kemp who advance new ideas about what might be good for the poor. Others find themselves having second thoughts about, horribile dictu, censorship as they simultaneously advocate government support for pornography and the prohibition of politically incorrect speech on university campuses. And yet others are, reluctantly, beginning to acknowledge that the interests of black America are not entirely congruent with continued support for a superannuated civil-rights leadership. A growing impatience is expressed not only with the Al Sharptons but with the upmarket Al Sharptons such as Jesse Jackson. And so it is that—on education, poverty, censorship, race, and much else—the moderate Left, slowly and painfully, makes “respectable” the ideas that have for years been current on the Right-of-Center. To be sure, even this sluggish process of change has not touched the “so many people” that the editors have in mind.
The really big new thing in the aftermath of the revolution of 1989 is the now indisputable centrality of the cultural questions in public life. Public debate, in any society, is basically about three clusters of questions: the political, the economic, and the cultural. In the modern era, the great debates have concentrated on the political and economic. Indeed, for a very long time there has been a self-conscious effort to steer away from the cultural. The chief reason for this is that at the heart of the cultural are the most powerful beliefs and passions that, as in the wars of religion in the 16th and 17th centuries, can destroy civil discourse altogether. But now the arguments over the political and economic have come to an end, at least for a time. While there are marginal disagreements, it is now evident to all rational parties that representative democracy is the way to go politically, and the mainly free market is the way to go economically. For purposes of significant public debate, that leaves the cultural questions.
Both the Right and the Left will have a hard time getting accustomed to this new situation. But the Left will have the harder time, for the Right generally eschewed the notions of economic determinism and political utopianism. The Right, in most of its expressions, is more comfortable with the “social and moral questions,” which is to say the cultural questions. The emergence of the cultural questions at front stage center returns us, interestingly enough, to what Aristotle understood as authentic politics. Ethics and politics, according to Aristotle, are the same inquiry, both asking, “How ought we to order our life together?” Public debate in our century was distracted from that question by the relatively brief political madness of Hitler, and by the prolonged political and economic madness of Marxism-Leninism. The energies of the best, brightest, and most sane of our thinkers were expended in combating those bloody absurdities.
Now, at long last, we are returned to the real business of politics. The new situation cannot but be deeply distressing to the “so many people.” The Right has generally understood itself to be the guardian of culture, of the historically transmitted, of the “givens” of how we order our life together. In literature, the arts, and social arrangements so basic as the family, the Left understands itself to be countercultural. The self-understanding of the Left assumes that others will attend to preserving the culture that it is its business to change. Now, more than at any time since 1914, it becomes evident that the continuing argument is between Edmund Burke and John Stuart Mill, between traditional responsibility and the unlimited play of critical consciousness.
If I am right about the recentering of the cultural, we should prepare ourselves for ever more concentrated debate on what are called the social and moral issues. On this view, for example, the abortion debate is not a distraction from politics but raises one of the most urgent political questions of our time: who belongs to the community for which we accept common responsibility? A reporter for the New York Times who covered both the big pro-choice march of 1989 and the pro-life march of earlier this year remarked on the phenomenon of “two Americas” and “two cultures,” as indicated by the different vocabularies employed. The dominant language of the pro-choicers, she said, was about “rights and laws,” while the dominant language of the pro-lifers was about “rights and wrongs.”
In the era that we have now entered, those who can, with public persuasiveness, speak of rights and wrongs have the advantage. The “triumph” of American ideas and policies is the continuing triumph of the moral claims of the West after the two great barbarous aberrations of this century. This is not occasion for triumphalism, but for gratitude, true humility, and renewed seriousness about transmitting the civilizational story to the next generation. For many reasons, some of them mentioned above, this way of viewing the matter is abhorrent to the “so many people” who worry the editors, and who should worry all of us.
Eugene D. Genovese:
If the dreary remains of the left-wing press may be taken as a guide, the doughty survivors of the radical Left and of Left-liberalism—to the extent that the two can any longer be distinguished—are once again determined to make fools of themselves. Confronted by a victorious worldwide counterrevolution against everything they have stood for, they happily dwell on the evil legacy of Ronald Reagan. Ever ready to display a sense of humor, they claim credit for the wonderful doings in Europe, which, of course, reflect their own highest aspirations and the realization of what they themselves have wanted all along. Simultaneously, they condemn the ideology and policies of the American Right, which the leaders of those wonderful doings, not to mention the voters, take as their model.
The Left, viewing the disgraceful rout of its troops on all fronts, proudly claims victory abroad, while it undertakes the small task of convincing the American people that only the implementation of a thoroughly discredited agenda at home could save us from the horrors it claims as democratic triumphs everywhere else. Lacking the satirical genius of a Jonathan Swift, I respectfully ask to be allowed to pass over the spectacle in silence.
The pros and cons of Reaganism and the depth of the changes it has introduced will take a long time to sort out, and nothing is to be gained by continuing the present exchanges of encomiums and laments. But surely, the 80’s will be remembered as the decade in which socialism met its Waterloo. No amount of blather about the collapse of Communism’s having opened the way to “real” and “democratic” socialism will serve. The Communists, for better or worse, introduced the only socialism we have had, whereas the social democrats have everywhere settled for one or another form of state-regulated capitalism. Many things went into the making of the collapse of the Communist regimes, but, as every honest Communist from Gorbachev on admits, the immediate cause has been exposed as state ownership of the means of production—and for reasons that, alas, Ludwig von Mises, among other right-wingers, long ago identified.
That the collapse of socialism proves the virtue of a (nonexistent) free-market capitalism is another matter. The left-wing critics of Reaganism score heavily in their condemnation of the social and cultural barbarism that now reigns triumphant, even if they fail to notice that much of their critique was prefigured by traditionalist conservatives, who never have been enamored of the market and its businessman’s culture. The crisis that has wracked the socialist world has obscured the crisis that is wracking the capitalist world in general and the United States in particular. And the many-sided crisis of both demonstrates the truth of the witticism, uttered more than a decade ago by John Lukacs, that the “isms” have become “wasms.”
Socialism failed to do the one thing that might have saved it: generate the economic prosperity that could have provided the time for it to disassemble its Byzantine political systems and to establish the moral legitimacy without which no sense of political obligation is possible. In contrast, capitalism has once again proven superior to all alternatives in generating economic growth and technological development. It is easy to forget that the entire project of the socialist reconstruction of society proceeded on the bold Marxist assertion of the superiority of socialism as an economic system. Nothing could be more naive than the radical-Left daydream, which is today stronger than ever, that the economy would fare better in the wake of the destruction of all hierarchy and stratification and the transfer of economic power to the workplace. Everywhere and always, when such nostrums have been tried, the result has been a catastrophic erosion of the work ethic and a retreat into modern equivalents of peasant self-sufficiency and attendant economic devolution.
Yet the beat goes on. Almost nowhere on the American Left do we find an inclination to subject its time-honored premises to the “radical critique” called for with respect to everything else. To begin with, the classic premise of the Left has been the inherent goodness or quasi-infinite malleability of human nature—a premise shared by much of the free-market Right, occasional pretenses and qualifications aside. This classic premise has especially triumphed in the mainline churches, which have traded the hard wisdom of Christian theology for a neo-transcendentalist reduction of sin to the passing embarrassment of a lapse from the good, and which have embraced a neo-universalist doctrine of the salvation of everyone. (Personally, I have always been thrilled by the prospect of meeting Adolf Hitler in heaven.) To this day it has not occurred to radical leftists and Left-liberals that the central contradiction in the socialist countries has been the vain attempt to combine an unrealizable goal of personal liberation with a form of social organization that, above all, requires maximum social discipline. Meanwhile, the Right, having sealed off its traditionalists, embraces the illusion of personal freedom and, with sound logic, hails the market as the one great social force for its realization.
Personal liberation we are getting with a vengeance. Unfortunately, the great theologians were right and the radical leftists and free-market right-wingers—for that is what these “conservatives” really are—are wrong. With high if unintended humor, the Left demands an “involved,” “concerned,” “compassionate,” and “activist” central government to save the afflicted, and it does so, at least if we are to take seriously the rhetoric on abortion and gay liberation, by campaigning for the individual’s absolute right to his—I trust not only her—own body. With even higher, if also unintended humor, the Right weeps and wails over the collapse of the family, of morality, of respect for religion, of education and higher learning, of our “social fabric,” and it does so while campaigning for the extension of the very marketplace of consumer choice that has stamped capitalism as the greatest revolutionary solvent of traditional values in world history. Apparently, it never occurs to these defenders of God, family, and social order that “consumers,” left to choose, should be expected, more often than not, to choose self-indulgence, corruption, and scintillating filth over more mundane commodities. What a pity we cannot heed William J. Bennett and Allan Bloom and restore a genuine core curriculum to all our schools. For then we might be able to require a four-semester sequence in Christian theology or at least in common decency and elementary good sense.
As is, the rot deepens: we are being overwhelmed by drugs; mass homelessness; the poisoning of our children with pornography, perversion, and impossible aspirations; the transformation of our cities into Third World metropolises for the ostentatiously rich and the miserably poor; and the steady decline of the national productivity, work ethic, and production of basic goods that undergird the whole. I very much doubt that sane, decent, and honorable right-wingers are sleeping any better than their left-wing counterparts.
The free market has always been a myth, never more so than in this age of international conglomerates and such evidence of massive corruption, greed, favoritism, and mismanagement as the S & L scandal. Irving Kristol may be right that much of the “peace dividend” will prove illusory, but, surely, the end of the cold war ought to offer fresh opportunities for some expansion and redirection of investment into socially more constructive efforts. The collapse of socialism should reinforce general acceptance of private property and a strong market sector, but it need not invite further irresponsibility by uncontrolled or inadequately controlled private interests.
We are living through the early stage of a massive worldwide economic and social transformation, the outcome of which is neither fated nor readily predictable. Private property and markets are compatible with a wide variety of social, political, and economic systems. If socialism (state ownership of the means of production) has been economically and politically discredited, capitalism (private ownership of the means of production under minimal control) is daily being morally and socially discredited. Probably, if the best elements of the Left and Right were speaking to each other, they could agree on that much and on the need for a new departure. But no part of the political spectrum seems able to lay its own ghosts.
The Left has carried Marx’s utopian view of human nature to its logical—and, ironically, ultra-individualist—conclusion and embraced every mutually exclusive call for personal as well as group liberation. The left wing of liberalism is following a similar trajectory, as the comical course of the national Democratic party attests. The right wing of liberalism stands for nothing discernible, at least at this moment. The Right, celebrating its electoral victories, finds itself in ideological disarray and on the verge of a political split. Morally responsible free-marketeers, including such libertarians as Murray Rothbard, stand aghast at the cultural unraveling that is accompanying the victories of their political and economic program. Unwilling to acknowledge the unraveling—the libertinism against which Rothbard admirably protests—to be the predictable outcome of a market mentality that reduces everything to the status of commodities, they content themselves with exhortations to consumers to make the right moral choices.
The free-marketeers, absurdly called “conservatives,” either find our cultural situation perfectly acceptable or at least bearable, or blame the atrocities on continued statism, bureaucracy, and welfare paternalism. They may have a point about those diseases, but their projected cure of heavier doses of individualism promises nothing except exacerbation. The sickening racial crisis alone should be enough to make clear that no solution short of the ghastly would be possible without measures that bring communities and groups to center stage and reaffirm the traditionalist and once-Marxist principle that individual freedom must be understood as a product of social organization. The free society most of us want can be free only to the extent that social safety and the requisites of social discipline permit. A society deserves to be called free to the extent that it places the burden of proof on the state when limitations on individual freedom are called for. No one has an absolute right to his own body, but no state should be tolerated if, without compelling reason, it restricts the individual’s claims to privacy and to rights in his body.
None of us has an immediate solution to the drug problem, which has become a life-threatening cancer on our body politic. But who could believe that a country with our level of political culture and material resources could be overawed by a gangster empire, if it had the will to prevail? Undoubtedly, action would require curtailment or redefinition of civil liberties, not to mention the high cost of incarceration and some judiciously selected executions. The requisite action carries with it potential evils that ought not to be taken lightly. But if the drug problem is indeed a cancer, if we are indeed in a “war,” if we mean what we say, then the only matter left to discuss ought to be how to provide maximum guarantees against excesses of a police-state nature.
And the same principle holds for racism, for poverty, for crime, for homelessness, for pornography, for all other conditions and practices incompatible with civilized life. I do not believe that the necessary measures could be carried out without the restoration of a productive national economy. And even then, I do not believe that such measures could be carried out safely—with respect for the genuine claims of freedom—without the restructuring of our socioeconomic relations in the context of a corporatism that respects private property but makes it subject to social control and the guidelines of a national moral consensus. Traditionalists, especially the Southern traditionalists to whom the Reagan and Bush administrations have given short shrift, have been saying much of this for a long time. Those on the Left who did not get drunk on the anarchism of the 60’s have too. And unless I badly misread the signs of the times, large numbers of people across the political spectrum are ready to say Amen! Whether we remain able to hammer out the specifics of an appropriate vision and find leaders worthy of the challenge remains to be seen.
Of the very few things that we can believe with absolute certainty about the 80’s—a decade more difficult to assess than any I have lived through—one is that the cold-war anti-Communists were proved right not only in their moral condemnation of Communism but in the policies they devised for its destruction. Another is that these same cold-war anti-Communists are never going to be forgiven by the Left—which nowadays includes almost (if not quite) everybody in the media, the academy, the arts, the literary and publishing worlds, the entertainment industry, and the Democratic Party—for being right about the most important political question of this century. The Left will fight to the last ditch—or the last op-ed article, anyway—to uphold the notion, which no sane person east of the now-dismantled Berlin Wall believes for one moment, that the steadfast cold-war policies of the West had nothing whatever to do with the collapse of Communism in Eastern Europe and its accelerating disintegration in the Soviet Union.
Indeed, the claim is now made—not only in places like the Nation but even in Time and other mainstream publications—that the United States actually impeded the collapse of Communism by resisting its triumph. (I know this sounds crazy, but that is the kind of craziness with which we are now obliged to contend.) It was left to Arthur M. Schlesinger, Jr., always ready to bend history to the purposes of liberal mythology, to cap such claims by arguing that the events of 1989 represent a vindication of FDR’s concessions to Stalin at Yalta! “In the short run,” Schlesinger wrote recently in the Wall Street Journal, “Stalin gained. But statesmanship is tested by the long run. Now, forty-five years later, the Soviet Union is at last honoring the Yalta agreements.” In other words, it was Franklin Roosevelt, not Ronald Reagan, who won the cold war. Or are we being asked to believe that there wasn’t any cold war to win? In this kind of Alice-in-Wonderland historiography, reality—even if it has cost millions of lives and caused suffering and loss beyond measure—is easily dispensed with.
It would be nice to be able to attribute such absurdities to sheer cynicism or fatuousness—factors that are never to be discounted in the thinking of the liberal old guard. But in fact something far more significant and sinister is involved, and in this development old-guard liberals like Professor Schlesinger, still harboring their daydream of an eternal return to the New Deal or the New Frontier, are mere supernumeraries. They can still be wheeled in on ceremonial occasions to lend intellectual respectability to certain ideas—the idea, for example, that the policies of the Reagan administration had nothing whatever to do with the collapse of Communism in Eastern Europe—but they no longer play much of a role in setting the Left’s agenda, which in fact is a good deal more radical than anything dreamt of in Professor Schlesinger’s philosophy.
That agenda, it is worth recalling, had its origin in the radical protest movements and counterculture of the 60’s—above all, in the idea that America, or rather “Amerika,” was at once an intolerably repressive society at home and the principal source of political evil abroad. The rapidity with which this bizarre and fundamentally mendacious political fantasy became so deeply rooted in our cultural life and the power it has continued to exert over the course of the last two decades—power that has now come to dominate larger and larger areas of American life—have never, to my satisfaction, been adequately explained. Perhaps there is such a thing as a collective death-wish that comes to afflict certain societies when so many of their material needs are met and so many of their spiritual hungers are left unmet, thus opening them to the appeal of strange gods. I would like to think otherwise, but the evidence of our own society just now makes one cautious about rejecting such distasteful theories. Whatever our explanation for the state of affairs in which we find ourselves, however, the fact remains that this thoroughly bogus and pernicious idea of American civilization is the one that currently prevails at almost every level of cultural life. It is as much in evidence in our law schools as it is in our wretched pop music. It governs the ideological outlook of the commercial television networks quite as much, though not quite as vehemently or systematically, as it dominates PBS programming and National Public Radio. And it is now a permanent part of the academic curriculum—the only part that meets with no protests about “canonization”—in our elite schools and universities. It is the reason why there is no longer anything in American life that can reasonably be described as a political center. 
The polarization that the radical movement of the 60’s set out to achieve—the division of American society into “them” and “us”—is now a fact of cultural life, and looks to be an enduring one. We have become, in effect, two nations—or at least two societies—that are so deeply divided about the fundamental issues of American life that public questions as diverse as Supreme Court appointments, the grant-making procedures of the National Endowment for the Arts, and the problems entailed in dealing with the AIDS crisis are inevitably escalated into battles resembling a form of cultural civil war.
Given these divisions, there was never any chance that the real achievements of America in the 80’s—a prosperous economy at home and a triumphant victory in the cold war abroad—would be treated by the culture as anything but a further index of American failure. Toward the Reagan administration in particular and American society in general our cultural establishment has long operated on the model of the Soviet justice system: the first thing to be determined is a verdict—which is always the same verdict: guilty!—and only then is it necessary to cite an appropriate crime. Is it any wonder that the media are so universally despised or that so many newspapers and magazines are suffering a historic loss of readership? The other America—the America that is so benighted that it cannot understand why its grade-school kids, while barely able to read, must be instructed in the use of condoms, not to mention the niceties of anal intercourse, or why its tax money should be devoted to exhibiting pictures of young men inflicting sexual torture upon each other—this America knows when it is being lied to about issues of great moral import. This other America—which, I daresay, encompasses a larger portion of the “enlightened” middle class than can nowadays ever admit to doubts on these matters, lest it be branded yahoos, reactionaries, or (worst of all) Reaganites—feels itself morally disenfranchised and more than a little terrified. Wherever it turns for moral support—the classroom, the pulpit, the legal system, or even its representatives in government—it meets with the same response: elaborate, “caring,” sophistical explanations as to why yesterday’s road to perdition must now be regarded as tomorrow’s path to salvation. No wonder there is such a widespread feeling of moral panic in this country.
As for all those pronouncements about greed and selfishness and the disorderly life of our cities and schools, I am not myself inclined to look for lessons in ethics and morals—not to mention social policy—from the folks whose lethal programs and ideas set us on the course that has led to our present horrors. We know from what quarter the concentrated assault on the family as an institution was initiated, and with what results. We know what elements in our political life organized the destruction of the New York City school system—until then, one of the best in the world—back in the 60’s. We know in what part of our culture the legitimation—nay, the celebration—of drugs as a way of life originated. (Allen Ginsberg, please take a bow here.) We know who launched the “black-power” movement, thereby destroying the civil-rights movement of the early 60’s, a development that has cost blacks even more than it has cost the rest of us in this society. In the politics of the social disintegration that now besets us, as in the decadent culture that is now swamping us, we are witnessing the denouement of every rotten idea the 60’s bequeathed us a generation or more ago. If ours was a society with a highly developed sense of shame, the folks who championed these ideas would be reduced to a repentant silence. Instead they are more loudly than ever urging more of the same for the future.
It was the New Republic—not exactly a Reaganite journal—that spoke in its 70th-anniversary editorial, in 1984, of “the terrible social pathologies emerging from the welfare state,” and went on to point out that “liberals are in crisis because they have hardly begun to figure out a coherent response, first, to the unhappy social facts; and second, to the vast defection of voters from the liberal dispensation in public policy.” The only thing that has changed since those words were written is that six years later the same “terrible social pathologies” are even more advanced than hitherto and are now invoked on the Left as if they were simply a product of the Reagan administration. Whether this is a case of historical amnesia or political cynicism—or both—I shall leave for others to determine.
Something like this same combination of historical amnesia and political cynicism, together with a large element of moral insensibility, now beclouds the discussion of the momentous events taking place abroad. The ease with which the media and the academy in this country have been allowed to claim the collapse of Communism as, of all things, a vindication of the idea of revolution is yet another sign of the bad faith that permeates virtually all public discourse about politics on the Left today. What has been happening in Eastern Europe and the Soviet Union is, in actual fact, the most sweeping example of a counterrevolution known to modern history. Yet our press, our pundits, and—most unforgivable of all—our Republican President prattle on, without challenge, about the “revolutionary” developments that are attempting, not always very successfully, to bring democracy and capitalism to societies devastated by the destructive consequences of revolution. This is something more than a quarrel about words. It is a battle of ideas—ideas about how to live our lives—and in that battle we seem unable to articulate the moral imperatives of the counterrevolutionary movement we have done so much to set in motion. This, too, is a legacy of the 60’s.
The American 80’s: Disaster or Triumph?
It can be said that the Book of Samuel launched the American Revolution. Though antagonistic to traditional faith, Thomas Paine understood that it was not Montesquieu or Locke who was inscribed on the hearts of his fellow Americans. Paine’s pamphlet Common Sense is a biblical argument against British monarchy, drawing largely on the text of Samuel.
Today, of course, universal biblical literacy no longer exists in America, and sophisticated arguments from Scripture are all too rare. It is therefore all the more distressing when public intellectuals, academics, or religious leaders engage in clumsy acts of exegesis and political argumentation by comparing characters in the Book of Samuel to modern political leaders. The most common victim of this tendency has been the central character in the Book of Samuel: King David.
Most recently, this tendency was made manifest in the writings of Dennis Prager. In a recent defense of his own praise of President Trump, Prager wrote that “as a religious Jew, I learned from the Bible that God himself chose morally compromised individuals to accomplish some greater good. Think of King David, who had a man killed in order to cover up the adultery he committed with the man’s wife.” Prager similarly argued that those who refuse to vote for a politician whose positions are correct but whose personal life is immoral “must think God was pretty flawed in voting for King David.”
Prager’s invocation of King David was presaged on the left two decades ago. The records of the Clinton Presidential Library reveal that at the height of the Lewinsky scandal, an email from Dartmouth professor Susannah Heschel made its way into the inbox of an administration policy adviser with a similar comparison: “From the perspective of Jewish history, we have to ask how Jews can condemn President Clinton’s behavior as immoral, when we exalt King David? King David had Batsheva’s husband, Uriah, murdered. While David was condemned and punished, he was never thrown off the throne of Israel. On the contrary, he is exalted in our Jewish memory as the unifier of Israel.”
One can make the case for supporting politicians who have significant moral flaws. Indeed, America’s political system is founded on an awareness of the profound tendency to sinfulness not only of its citizens but also of its statesmen. “If men were angels, no government would be necessary,” James Madison informs us in the Federalist. At the same time, anyone who compares King David to the flawed leaders of our own age reveals a profound misunderstanding of the essential nature of David’s greatness. David was not chosen by God despite his moral failings; rather, David’s failings are the lens that reveal his true greatness. It is in the wake of his sins that David emerges as the paradigmatic penitent, whose quest for atonement is utterly unlike that of any other character in the Bible, and perhaps in the history of the world.
While the precise nature of David’s sins is debated in the Talmud, there is no question that they are profound. Yet it is in comparing David to other faltering figures—in the Bible or today—that the comparison falls flat. This point is stressed by the very Jewish tradition in whose name Prager claimed to speak.
It is the rabbis who note that David’s predecessor, Saul, lost the kingship when he failed to fulfill God’s command to destroy the egregiously evil nation of Amalek, whereas David commits more severe sins and yet remains king. The answer, the rabbis suggest, lies not in the sin itself but in the response. Saul, when confronted by the prophet Samuel, offers obfuscations and defensiveness. David, meanwhile, is similarly confronted by the prophet Nathan: “Thou hast killed Uriah the Hittite with the sword, and hast taken his wife to be thy wife, and hast slain him with the sword of the children of Ammon.” David’s immediate response is clear and complete contrition: “I have sinned against the Lord.” David’s penitence, Jewish tradition suggests, sets him apart from Saul. Soon after, David gave voice to what was in his heart at the moment, and gave the world one of the most stirring of the Psalms:
Have mercy upon me, O God, according to thy lovingkindness: according unto the multitude of thy tender mercies blot out my transgressions.
Wash me thoroughly from mine iniquity, and cleanse me from my sin. For I acknowledge my transgressions: and my sin is ever before me.
. . . Deliver me from bloodguiltiness, O God, thou God of my salvation: and my tongue shall sing aloud of thy righteousness.
O Lord, open thou my lips; and my mouth shall shew forth thy praise.
For thou desirest not sacrifice; else would I give it: thou delightest not in burnt offering.
The sacrifices of God are a broken spirit: a broken and a contrite heart, O God, thou wilt not despise.
The tendency to link David to our current age lies in the fact that we know more about David than any other biblical figure. The author Thomas Cahill has noted that in a certain literary sense, David is the only biblical figure that is like us at all. Prior to the humanist autobiographies of the Renaissance, he notes, “we can count only a few isolated instances of this use of ‘I’ to mean the interior self. But David’s psalms are full of I’s.” In David’s Psalms, Cahill writes, we “find a unique early roadmap to the inner spirit—previously mute—of ancient humanity.”
At the same time, a study of the Book of Samuel and of the Psalms reveals how utterly incomparable David is to anyone alive today. Haym Soloveitchik has noted that even the most observant of Jews today fail to feel a constant intimacy with God that the simplest Jew of the premodern age might have felt, that “while there are always those whose spirituality is one apart from that of their time, nevertheless I think it safe to say that the perception of God as a daily, natural force is no longer present to a significant degree in any sector of modern Jewry, even the most religious.” Yet for David, such intimacy with the divine was central to his existence, and the Book of Samuel and the Psalms are an eternal testament to this fact. This is why simple comparisons between David and ourselves, as tempting as they are, must be resisted. David Wolpe, in his book about David, attempts to make the case as to why King David’s life speaks to us today: “So versatile and enduring is David in our culture that rare is the week that passes without some public allusion to his life…We need to understand David better because we use his life to comprehend our own.”
The truth may be the opposite. We need to understand David better because we can use his life to comprehend what we are missing, and how utterly unlike our lives are to his own. For even the most religious among us have lost the profound faith and intimacy with God that David had. It is therefore incorrect to assume that because of David’s flaws it would have been, as Amos Oz has written, “fitting for him to reign in Tel Aviv.” The modern State of Israel was blessed with brilliant leaders, but to which of its modern warriors or statesmen should David be compared? To Ben Gurion, who stripped any explicit invocation of the Divine from Israel’s Declaration of Independence? To Moshe Dayan, who oversaw the reconquest of Jerusalem, and then immediately handed back the Temple Mount, the locus of King David’s dreams and desires, to the administration of the enemies of Israel? David’s complex humanity inspires comparison to modern figures, but his faith, contrition, and repentance—which lie at the heart of his story and success—defy any such engagement.
And so, to those who seek comparisons to modern leaders from the Bible, the best rule may be: Leave King David out of it.
Three attacks in Britain highlight the West’s inability to see the threat clearly
This lack of seriousness manifests itself in several ways. It’s perhaps most obvious in the failure to reform Britain’s chaotic immigration and dysfunctional asylum systems. But it’s also abundantly clear from the grotesque underfunding and under-resourcing of domestic intelligence. In MI5, Britain has an internal security service that is simply too small to do its job effectively, even if it were not handicapped by an institutional culture that can seem willfully blind to the ideological roots of the current terrorism problem.
In 2009, Jonathan Evans, then head of MI5, confessed at a parliamentary hearing about the London bus and subway attacks of 2005 that his organization only had sufficient resources to “hit the crocodiles close to the boat.” It was an extraordinary metaphor to use, not least because of the impression of relative impotence that it conveys. MI5 had by then doubled in size since 2001, but it still boasted a staff of only 3,500. Today it’s said to employ between 4,000 and 5,000, an astonishingly, even laughably, small number given a UK population of 65 million and the scale of the security challenges Britain now faces. (To be fair, the major British police forces all have intelligence units devoted to terrorism, and the UK government’s overall counterterrorism strategy involves a great many people, including social workers and schoolteachers.)
You can also see that unseriousness at work in the abject failure to coerce Britain’s often remarkably sedentary police officers out of their cars and stations and back onto the streets. Most of Britain’s big-city police forces have adopted a reactive model of policing (consciously rejecting both the New York Compstat model and British “bobby on the beat” traditions) that cripples intelligence-gathering and frustrates good community relations.
If that weren’t bad enough, Britain’s judiciary is led by jurists who came of age in the 1960s, and who have been inclined since 2001 to treat terrorism as an ordinary criminal problem being exploited by malign officials and politicians to make assaults on individual rights and to take part in “illegal” foreign wars. It has long been almost impossible to extradite ISIS or al-Qaeda–linked Islamists from the UK. This is partly because today’s English judges believe that few if any foreign countries—apart from perhaps Sweden and Norway—are likely to give terrorist suspects a fair trial, or able to guarantee that such suspects will be spared torture and abuse.
We have a progressive metropolitan media elite whose primary, reflexive response to every terrorist attack, even before the blood on the pavement is dry, is to express worry about an imminent violent anti-Muslim “backlash” on the part of a presumptively bigoted and ignorant indigenous working class. Never mind that no such “backlash” has yet occurred, not even when the young off-duty soldier Lee Rigby was hacked to death in broad daylight on a South London street in 2013.
Another sign of this lack of seriousness is the choice by successive British governments to deal with the problem of internal terrorism with marketing and “branding.” You can see this in the catchy consultant-created acronyms and pseudo-strategies that are deployed in place of considered thought and action. After every atrocity, the prime minister calls a meeting of the COBRA unit—an acronym that merely stands for Cabinet Office Briefing Room A but sounds like a secret organization of government superheroes. The government’s counterterrorism strategy is called CONTEST, which has four “work streams”: “Prevent,” “Pursue,” “Protect,” and “Prepare.”
Perhaps the ultimate sign of unseriousness is the fact that police, politicians, and government officials have all displayed more fear of being seen as “Islamophobic” than of any carnage that actual terror attacks might cause. Few are aware that this short-term, cowardly, and trivial tendency may ultimately foment genuine, dangerous popular Islamophobia, especially if attacks continue.
Recently, three murderous Islamist terror attacks in the UK took place in less than a month. The first and third were relatively primitive improvised attacks using vehicles and/or knives. The second was a suicide bombing that probably required relatively sophisticated planning, technological know-how, and the assistance of a terrorist infrastructure. As they were the first such attacks in the UK, the vehicle and knife killings came as a particular shock to the British press, public, and political class, despite the fact that non-explosive and non-firearm terror attacks have become common in Europe and are almost routine in Israel.
The success of all three plots indicates troubling problems in British law-enforcement practice and culture, quite apart from any other failings on the parts of the state in charge of intelligence, border control, and the prevention of radicalization. At the time of writing, the British media have been full of encomia to police courage and skill, not least because it took “only” eight minutes for an armed Metropolitan Police team to respond to and confront the bloody mayhem being wrought by the three Islamist terrorists (who had ploughed their rented van into people on London Bridge before jumping out to attack passersby with knives). But the difficult truth is that all three attacks would be much harder to pull off in Manhattan, not just because all NYPD cops are armed, but also because there are always police officers visibly on patrol at the New York equivalents of London’s Borough Market on a Saturday night. By contrast, London’s Metropolitan police is a largely vehicle-borne, reactive force; rather than use a physical presence to deter crime and terrorism, it chooses to monitor closed-circuit street cameras and social-media postings.
Since the attacks in London and Manchester, we have learned that several of the perpetrators were “known” to the police and security agencies that are tasked with monitoring potential terror threats. That these individuals were nevertheless able to carry out their atrocities is evidence that the monitoring regime is insufficient.
It also seems clear that there were failures on the part of those institutions that come under the leadership of the Home Office and are supposed to be in charge of the UK’s border, migration, and asylum systems. Journalists and think tanks like Policy Exchange and Migration Watch have for years pointed out that these systems are “unfit for purpose,” but successive governments have done little to take responsible control of Britain’s borders. When she was home secretary, Prime Minister Theresa May did little more than jazz up the name, logo, and uniforms of what is now called the “Border Force,” and she notably failed to put in place long-promised passport checks for people flying out of the country. This dereliction means that it is impossible for the British authorities to know who has overstayed a visa or whether individuals who have been denied asylum have actually left the country.
It seems astonishing that Youssef Zaghba, one of the three London Bridge attackers, was allowed back into the country. The Moroccan-born Italian citizen (his mother is Italian) had been arrested by Italian police in Bologna, apparently on his way to Syria via Istanbul to join ISIS. When questioned by the Italians about the ISIS decapitation videos on his mobile phone, he declared that he was “going to be a terrorist.” The Italians lacked sufficient evidence to charge him with a crime but put him under 24-hour surveillance, and when he traveled to London, they passed on information about him to MI5. Nevertheless, he was not stopped or questioned on arrival and had not become one of the 3,000 official terrorism “subjects of interest” for MI5 or the police when he carried out his attack. One reason Zaghba was not questioned on arrival may have been that he used one of the new self-service passport machines installed in UK airports in place of human staff after May’s cuts to the border force. Apparently, the machines are not yet linked to any government watch lists, thanks to the general chaos and ineptitude of the Home Office’s efforts to use information technology.
The presence in the country of Zaghba’s accomplice Rachid Redouane is also an indictment of the incompetence and disorganization of the UK’s border and migration authorities. He had been refused asylum in 2009, but as is so often the case, Britain’s Home Office never got around to removing him. Three years later, he married a British woman and was therefore able to stay in the UK.
But it is the failure of the authorities to monitor ringleader Khuram Butt that is the most baffling. He was a known and open associate of Anjem Choudary, Britain’s most notorious terrorist supporter, ideologue, and recruiter (he was finally imprisoned in 2016 after 15 years of campaigning on behalf of al-Qaeda and ISIS). Butt even appeared in a 2016 TV documentary about ISIS supporters called The Jihadist Next Door. In the same year, he assaulted a moderate imam at a public festival, after calling him a “murtad” or apostate. The imam reported the incident to the police—who took six months to track him down and then let him off with a caution. It is not clear if Butt was one of the 3,000 “subjects of interest” or the additional 20,000 former subjects of interest who continue to be the subject of limited monitoring. If he was not, it raises the question of what a person has to do to get British security services to take him seriously as a terrorist threat; if he was in fact on the list of “subjects of interest,” one has to wonder if being so designated is any barrier at all to carrying out terrorist atrocities. It’s worth remembering, as few do here in the UK, that terrorists who carried out previous attacks were also known to the police and security services and nevertheless enjoyed sufficient liberty to go at it again.
But the most important reason for the British state’s ineffectiveness in monitoring terror threats, which May addressed immediately after the London Bridge attack, is a deeply rooted institutional refusal to deal with or accept the key role played by Islamist ideology. For more than 15 years, the security services and police have chosen to take note only of people and bodies that explicitly espouse terrorist violence or have contacts with known terrorist groups. The fact that a person, school, imam, or mosque endorses the establishment of a caliphate, the stoning of adulterers, or the murder of apostates has not been considered a reason to monitor them.
This seems to be why Salman Abedi, the Manchester Arena suicide bomber, was not being watched by the authorities as a terror risk, even though he had punched a girl in the face for wearing a short skirt while at university, had attended the Muslim Brotherhood-controlled Didsbury Mosque, was the son of a Libyan man whose militia is banned in the UK, had himself fought against the Qaddafi regime in Libya, had adopted the Islamist clothing style (trousers worn above the ankle, beard but no moustache), was part of a druggy gang subculture that often feeds individuals into Islamist terrorism, and had been banned from a mosque after confronting an imam who had criticized ISIS.
It was telling that the day after the Manchester Arena suicide-bomb attack, you could hear a security official informing listeners of the BBC’s flagship morning-radio news show that it’s almost impossible to predict and stop such attacks because the perpetrators “don’t care who they kill.” They just want to kill as many people as possible, he said.
Surely, anyone with even a basic familiarity with Islamist terror attacks over the last 15 or so years and a nodding acquaintance with Islamist ideology could see that the terrorist hadn’t just chosen the Ariana Grande concert in Manchester Arena because a lot of random people would be crowded into a conveniently small area. Since the Bali bombings of 2002, nightclubs, discotheques, and pop concerts attended by shameless unveiled women and girls have been routinely targeted by fundamentalist terrorists, including in Britain. Among the worrying things about the opinion offered on the radio show was that it suggests that even in the wake of the horrific Bataclan attack in Paris during a November 2015 concert, British authorities may not have been keeping an appropriately protective eye on music venues and other places where our young people hang out in their decadent Western way. Such dereliction would make perfect sense given the resistance on the part of the British security establishment to examining, confronting, or extrapolating from Islamist ideology.
The same phenomenon may explain why authorities did not follow up on community complaints about Abedi. All too often when people living in Britain’s many and diverse Muslim communities want to report suspicious behavior, they have to do so through offices and organizations set up and paid for by the authorities as part of the overall “Prevent” strategy. Although criticized by the left as “Islamophobic” and inherently stigmatizing, Prevent has often brought the government into cooperative relationships with organizations even further to the Islamic right than the Muslim Brotherhood. This means that if you are a relatively secular Libyan émigré who wants to report an Abedi and you go to your local police station, you are likely to find yourself speaking to a bearded Islamist.
From its outset in 2003, the Prevent strategy was flawed. Its practitioners, in their zeal to find and fund key allies in “the Muslim community” (as if there were just one), routinely made alliances with self-appointed community leaders who represented the most extreme and intolerant tendencies in British Islam. Both the Home Office and MI5 seemed to believe that only radical Muslims were “authentic” and would therefore be able to influence young potential terrorists. Moderate, modern, liberal Muslims who are arguably more representative of British Islam as a whole (not to mention sundry Shiites, Sufis, Ahmadis, and Ismailis) have too often found it hard to get a hearing.
Sunni organizations that openly supported suicide-bomb attacks in Israel and India and that justified attacks on British troops in Iraq and Afghanistan nevertheless received government subsidies as part of Prevent. The hope was that in return, they would alert the authorities if they knew of individuals planning attacks in the UK itself.
It was a gamble reminiscent of British colonial practice in India’s northwest frontier and elsewhere. Not only were there financial inducements in return for grudging cooperation; the British state offered other, symbolically powerful concessions. These included turning a blind eye to certain crimes and antisocial practices such as female genital mutilation (there have been no successful prosecutions relating to the practice, though thousands of cases are reported every year), forced marriage, child marriage, polygamy, the mass removal of girls from school soon after they reach puberty, and the epidemic of racially and religiously motivated “grooming” rapes in cities like Rotherham. (At the same time, foreign jihadists—including men wanted for crimes in Algeria and France—were allowed to remain in the UK as long as their plots did not include British targets.)
This approach, simultaneously cynical and naive, was never as successful as its proponents hoped. Again and again, Muslim chaplains approved to work in prisons and other institutions have turned out to be Islamist extremists whose words have inspired inmates to join terrorist organizations.
Much to his credit, former Prime Minister David Cameron fought hard to change this approach, even though it meant difficult confrontations with his home secretary (Theresa May), as well as police and the intelligence agencies. However, Cameron’s efforts had little effect on the permanent personnel carrying out the Prevent strategy, and cooperation with Islamist but currently nonviolent organizations remains the default setting within the institutions on which the United Kingdom depends for security.
The failure to understand the role of ideology is one of imagination as well as education. Very few of those who make government policy or write about home-grown terrorism seem able to escape the limitations of what used to be called “bourgeois” experience. They assume that anyone willing to become an Islamist terrorist must perforce be materially deprived, or traumatized by the experience of prejudice, or provoked to murderous fury by oppression abroad. They have no sense of the emotional and psychic benefits of joining a secret terror outfit: the excitement and glamor of becoming a kind of Islamic James Bond, bravely defying the forces of an entire modern state. They don’t get how satisfying or empowering the vengeful misogyny of ISIS-style fundamentalism might seem for geeky, frustrated young men. Nor can they appreciate the appeal to the adolescent mind of apocalyptic fantasies of power and sacrifice (mainstream British society does not have much room for warrior dreams, given that its tone is set by liberal pacifists). Finally, they have no sense of why the discipline and self-discipline of fundamentalist Islam might appeal so strongly to incarcerated lumpen youth who have never experienced boundaries or real belonging. Their understanding is an understanding only of themselves, not of the people who want to kill them.
Review of 'White Working Class' by Joan C. Williams
Williams is a prominent feminist legal scholar with degrees from Yale, MIT, and Harvard. Unbending Gender, her best-known book, is the sort of tract you’d expect to find at an intersectionality conference or a Portlandia bookstore. This is why her insightful, empathic book comes as such a surprise.
Books and essays on the topic have accumulated into a highly visible genre since Donald Trump came on the American political scene; J.D. Vance’s Hillbilly Elegy planted itself at the top of bestseller lists almost a year ago and still isn’t budging. As with Vance, Williams’s interest in the topic is personal. She fell “madly in love with” and eventually married a Harvard Law School graduate who had grown up in an Italian neighborhood in pre-gentrification Brooklyn. Williams, on the other hand, is a “silver-spoon girl.” Her father’s family was moneyed, and her maternal grandfather was a prominent Reform rabbi.
The author’s affection for her “class-migrant” spouse and respect for his family’s hardships—“My father-in-law grew up on blood soup,” she announces in her opening sentence—add considerable warmth to what is at bottom a political pamphlet. Williams believes that elite condescension and “cluelessness” played a big role in Trump’s unexpected and dreaded victory. Enlightening her fellow elites is essential to the task of returning Trump voters to the progressive fold where, she is sure, they rightfully belong.
Liberals were not always so dense about the working class, Williams observes. WPA murals and movies like On the Waterfront showed genuine fellow feeling for the proletariat. In the 1970s, however, the liberal mood changed. Educated boomers shifted their attention to “issues of peace, equal rights, and environmentalism.” Instead of feeling the pain of Arthur Miller and John Steinbeck characters, they began sneering at the less enlightened. These days, she notes, elite sympathies are limited to the poor, people of color (POC), and the LGBTQ population. Despite clear evidence of suffering—stagnant wages, disappearing manufacturing jobs, declining health and well-being—the working class gets only fly-over snobbery at best and, more often, outright loathing.
Williams divides her chapters into a series of explainers to questions she has heard from her clueless friends and colleagues: “Why Does the Working Class Resent the Poor?” “Why Does the Working Class Resent Professionals but Admire the Rich?” “Why Doesn’t the Working Class Just Move to Where the Jobs Are?” “Is the Working Class Just Racist?” She weaves her answers into a compelling picture of a way of life and worldview foreign to her targeted readers. Working-class Americans have had to struggle for whatever stability and comfort they have, she explains. Clocking in for midnight shifts year after year, enduring capricious bosses, plant closures, and layoffs, they’re reliant on tag-team parenting and stressed-out relatives for child care. The campus go-to word “privileged” seems exactly wrong.
Proud of their own self-sufficiency and success, however modest, they don’t begrudge the self-made rich. It’s snooty professionals and the dysfunctional poor who get their goat. From their vantage point, subsidizing the day care for a welfare mother when they themselves struggle to manage care on their own dime mocks both their hard work and their beliefs. And since, unlike most professors, they shop in the same stores as the dependent poor, they’ve seen that some of them game the system. Of course that stings.
White Working Class is especially good at evoking the alternate economic and mental universe experienced by Professional and Managerial Elites, or “PMEs.” PMEs see their non-judgment of the poor, especially those who are “POC,” as a mark of their mature understanding that we live in an unjust, racist system whose victims require compassion regardless of whether they have committed any crime. At any rate, their passions lie elsewhere. They define themselves through their jobs and professional achievements, hence their obsession with glass ceilings.
Williams tells the story of her husband’s faux pas at a high-school reunion. Forgetting his roots for a moment, the Ivy League–educated lawyer asked one of his Brooklyn classmates a question that is the go-to opener in elite social settings: “What do you do?” Angered by what must have seemed like deliberate humiliation by this prodigal son, the man hissed: “I sell toilets.”
Instead of stability and backyard barbecues with family and long-time neighbors and maybe the occasional Olive Garden celebration, PMEs are enamored of novelty: new foods, new restaurants, new friends, new experiences. The working class chooses to spend its leisure in comfortable familiarity; for the elite, social life is a lot like networking. Members of the professional class may view themselves as sophisticated or cosmopolitan, but, Williams shows, to the blue-collar worker their glad-handing is closer to phony social climbing and their abstract, knowledge-economy jobs more like self-important pencil-pushing.
White Working Class has a number of proposals for creating the progressive future Williams would like to see. She wants to get rid of college-for-all dogma and improve training for middle-skill jobs. She envisions a working-class coalition of all races and ethnicities bolstered by civics education with a “distinctly celebratory view of American institutions.” In a saner political environment, some of this would make sense; indeed, she echoes some of Marco Rubio’s 2016 campaign themes. It’s little wonder White Working Class has already gotten the stink eye from liberal reviewers for its purported sympathies for racists.
Alas, impressive as Williams’s insights are, they do not always allow her to transcend her own class loyalties. Unsurprisingly, her own PME biases mostly come to light in her chapters on race and gender. She reduces immigration concerns to “fear of brown people,” even as she notes elsewhere that a quarter of Latinos also favor a wall at the southern border. This contrasts startlingly with her succinct observation that “if you don’t want to drive working-class whites to be attracted to the likes of Limbaugh, stop insulting them.” In one particularly obtuse moment, she asserts: “Because I study social inequality, I know that even Malia and Sasha Obama will be disadvantaged by race, advantaged as they are by class.” She relies on dubious gender theories to explain why the majority of white women voted for Trump rather than for his unfairly maligned opponent. That Hillary Clinton epitomized every elite quality Williams has just spent more than a hundred pages explicating escapes her notice. Williams’s own reflexive retreat into identity politics is itself emblematic of our toxic divisions, but it does not invalidate the power of this astute book.
When music could not transcend evil
he story of European classical music under the Third Reich is one of the most squalid chapters in the annals of Western culture, a chronicle of collective complaisance that all but beggars belief. Without exception, all of the well-known musicians who left Germany and Austria in protest when Hitler came to power in 1933 were either Jewish or, like the violinist Adolf Busch, Rudolf Serkin’s father-in-law, had close family ties to Jews. Moreover, most of the small number of non-Jewish musicians who emigrated later on, such as Paul Hindemith and Lotte Lehmann, are now known to have done so not out of principle but because they were unable to make satisfactory accommodations with the Nazis. Everyone else—including Karl Böhm, Wilhelm Furtwängler, Walter Gieseking, Herbert von Karajan, and Richard Strauss—stayed behind and served the Reich.
The Berlin and Vienna Philharmonics, then as now Europe’s two greatest orchestras, were just as willing to do business with Hitler and his henchmen, firing their Jewish members and ceasing to perform the music of Jewish composers. Even after the war, the Vienna Philharmonic was notorious for being the most anti-Semitic orchestra in Europe, and it was well known in the music business (though never publicly discussed) that Helmut Wobisch, the orchestra’s principal trumpeter and its executive director from 1953 to 1968, had been both a member of the SS and a Gestapo spy.
The management of the Berlin Philharmonic made no attempt to cover up the orchestra’s close relationship with the Third Reich, no doubt because the Nazi ties of Karajan, who was its music director from 1956 until shortly before his death in 1989, were a matter of public record. Yet it was not until 2007 that a full-length study of its wartime activities, Misha Aster’s The Reich’s Orchestra: The Berlin Philharmonic 1933–1945, was finally published. As for the Vienna Philharmonic, its managers long sought to quash all discussion of the orchestra’s Nazi past, steadfastly refusing to open its institutional archives to scholars until 2008, when Fritz Trümpi, an Austrian scholar, was given access to its records. Five years later, the Viennese, belatedly following the precedent of the Berlin Philharmonic, added a lengthy section to their website called “The Vienna Philharmonic Under National Socialism (1938–1945),” in which the damning findings of Trümpi and two other independent scholars were made available to the public.
Now Trümpi has published The Political Orchestra: The Vienna and Berlin Philharmonics During the Third Reich, in which he tells how they came to terms with Nazism, supplying pre- and postwar historical context for their transgressions.1 Written in a stiff mixture of academic jargon and translatorese, The Political Orchestra is ungratifying to read. Even so, the tale that it tells is both compelling and disturbing, especially to anyone who clings to the belief that high art is ennobling to the spirit.
Unlike the Vienna Philharmonic, which has always doubled as the pit orchestra for the Vienna State Opera, the Berlin Philharmonic started life in 1882 as a fully independent, self-governing entity. Initially unsubsidized by the state, it kept itself afloat by playing a grueling schedule of performances, including “popular” non-subscription concerts for which modest ticket prices were levied. In addition, the orchestra made records and toured internationally at a time when neither was common.
These activities made it possible for the Berlin Philharmonic to develop into an internationally renowned ensemble whose fabled collective virtuosity was widely seen as a symbol of German musical distinction. Furtwängler, the orchestra’s principal conductor, declared in 1932 that the German music in which it specialized was “one of the very few things that actually contribute to elevating [German] prestige.” Hence, he explained, the need for state subsidy, which he saw as “a matter of [national] prestige, that is, to some extent a requirement of national prudence.” By then, though, the orchestra was already heavily subsidized by the city of Berlin, thus paving the way for its takeover by the Nazis.
The Vienna Philharmonic, by contrast, had always been subsidized. Founded in 1842 when the orchestra of what was then the Vienna Court Opera decided to give symphonic concerts on its own, it performed the Austro-German classics for an elite cadre of longtime subscribers. By restricting membership to local players and their pupils, the orchestra cultivated what Furtwängler, who spent as much time conducting in Vienna as in Berlin, described as a “homogeneous and distinct tone quality.” At once dark and sweet, it was as instantly identifiable—and as characteristically Viennese—as the strong, spicy bouquet of a Gewürztraminer wine.
Unlike the Berlin Philharmonic, which played for whoever would pay the tab and programmed new music as a matter of policy, the Vienna Philharmonic chose not to diversify either its haute-bourgeois audience or its conservative repertoire. Instead, it played Beethoven, Brahms, Haydn, Mozart, and Schubert (and, later, Bruckner and Richard Strauss) in Vienna for the Viennese. Starting in the ’20s, the orchestra’s recordings consolidated its reputation as one of the world’s foremost instrumental ensembles, but its internal culture remained proudly insular.
What the two orchestras had in common was a nationalistic ethos, a belief in the superiority of Austro-German musical culture that approached triumphalism. One of the darkest manifestations of this ethos was their shared reluctance to hire Jews. The Berlin Philharmonic employed only four Jewish players in 1933, while the Vienna Philharmonic contained only 11 Jews at the time of the Anschluss, none of whom was hired after 1920. To be sure, such popular Jewish conductors as Otto Klemperer and Bruno Walter continued to work in Vienna for as long as they could. Two months before the Anschluss, Walter led and recorded a performance of the Ninth Symphony of Gustav Mahler, his musical mentor and fellow Jew, who from 1897 to 1907 had been the director of the Vienna Court Opera and one of the Philharmonic’s most admired conductors. But many members of both orchestras were open supporters of fascism, and not a few were anti-Semites who ardently backed Hitler. By 1942, 62 of the 123 active members of the Vienna Philharmonic were Nazi party members.
The admiration that Austro-German classical musicians had for Hitler is not entirely surprising since he was a well-informed music lover who declared in 1938 that “Germany has become the guardian of European culture and civilization.” He made the support of German art, music very much included, a key part of his political program. Accordingly, the Berlin Philharmonic was placed under the direct supervision of Joseph Goebbels, who ensured the cooperation of its members by repeatedly raising their salaries, exempting them from military service, and guaranteeing their old-age pensions. But there had never been any serious question of protest, any more than there would be among the members of the Vienna Philharmonic when the Nazis gobbled up Austria. Save for the Jews and one or two non-Jewish players who were fired for reasons of internal politics, the musicians went along unhesitatingly with Hitler’s desires.
With what did they go along? Above all, they agreed to the scrubbing of Jewish music from their programs and the dismissal of their Jewish colleagues. Some Jewish players managed to escape with their lives, but seven of the Vienna Philharmonic’s 11 Jews were either murdered by the Nazis or died as a direct result of official persecution. In addition, both orchestras performed regularly at official government functions and made tours and other public appearances for propaganda purposes, and both were treated as gems in the diadem of Nazi culture.
As for Furtwängler, the most prominent of the Austro-German orchestral conductors who served the Reich, his relationship to Nazism continues to be debated to this day. He had initially resisted the firing of the Berlin Philharmonic’s Jewish members and protected them for as long as he could. But he was also a committed (if woolly-minded) nationalist who believed that German music had “a different meaning for us Germans than for other nations” and notoriously declared in an open letter to Goebbels that “we all welcome with great joy and gratitude . . . the restoration of our national honor.” Thereafter he cooperated with the Nazis, by all accounts uncomfortably but—it must be said—willingly. A monster of egotism, he saw himself as the greatest living exponent of German music and believed it to be his duty to stay behind and serve a cause higher than what he took to be mere party politics. “Human beings are free wherever Wagner and Beethoven are played, and if they are not free at first, they are freed while listening to these works,” he naively assured a horrified Arturo Toscanini in 1937. “Music transports them to regions where the Gestapo can do them no harm.”
Once the war was over, the U.S. occupation forces decided to enlist the Berlin Philharmonic in the service of a democratic, anti-Soviet Germany. Furtwängler and Herbert von Karajan, who succeeded him as principal conductor, were officially “de-Nazified” and their orchestra allowed to function largely undisturbed, though six Nazi Party members were fired. The Vienna Philharmonic received similarly privileged treatment.
Needless to say, there was more to this decision than Cold War politics. No one questioned the unique artistic stature of either orchestra. Moreover, the Vienna Philharmonic, precisely because of its insularity, was now seen as a living museum piece, a priceless repository of 19th-century musical tradition. Still, many musicians and listeners, Jews above all, looked askance at both orchestras for years to come, believing them to be tainted by Nazism.
Indeed they were, so much so that they treated many of their surviving Jewish ex-members in a way that can only be described as vicious. In the most blatant individual case, the violinist Szymon Goldberg, who had served as the Berlin Philharmonic’s concertmaster under Furtwängler, was not allowed to reassume his post in 1945 and was subsequently denied a pension. As for the Vienna Philharmonic, the fact that it made Helmut Wobisch its executive director says everything about its deep-seated unwillingness to face up to its collective sins.
Be that as it may, scarcely any prominent musicians chose to boycott either orchestra. Leonard Bernstein went so far as to affect a flippant attitude toward the morally equivocal conduct of the Austro-German artists whom he encountered in Europe after the war. Upon meeting Herbert von Karajan in 1954, he actually told his wife Felicia that he had become “real good friends with von Karajan, whom you would (and will) adore. My first Nazi.”
At the same time, though, Bernstein understood what he was choosing to overlook. When he conducted the Vienna Philharmonic for the first time in 1966, he wrote to his parents:
I am enjoying Vienna enormously—as much as a Jew can. There are so many sad memories here; one deals with so many ex-Nazis (and maybe still Nazis); and you never know if the public that is screaming bravo for you might contain someone who 25 years ago might have shot me dead. But it’s better to forgive, and if possible, forget. The city is so beautiful, and so full of tradition. Everyone here lives for music, especially opera, and I seem to be the new hero.
Did Bernstein sell his soul for the opportunity to work with so justly renowned an orchestra—and did he get his price by insisting that its members perform the symphonies of Mahler, with which he was by then closely identified? It is a fair question, one that does not lend itself to easy answers.
Even more revealing is the case of Bruno Walter, who never forgave Furtwängler for staying behind in Germany, informing him in an angry letter that “your art was used as a conspicuously effective means of propaganda for the regime of the Devil.” Yet Walter’s righteous anger did not stop him from conducting in Vienna after the war. Born in Berlin, he had come to identify with the Philharmonic so closely that it was impossible for him to seriously consider quitting its podium permanently. “Spiritually, I was a Viennese,” he wrote in Theme and Variations, his 1946 autobiography. In 1952, he made a second recording with the Vienna Philharmonic of Mahler’s Das Lied von der Erde, whose premiere he had conducted in 1911 and which he had recorded in Vienna 15 years earlier. One wonders what Walter, who had converted to Christianity but had been driven out of both his native lands for the crime of being Jewish, made of the text of the last movement: “My friend, / On this earth, fortune has not been kind to me! / Where do I go?”
As for the two great orchestras of the Third Reich, both have finally acknowledged their guilt and been forgiven, at least by those who know little of their past. It would occur to no one to decline on principle to perform with either group today. Such a gesture would surely be condemned as morally ostentatious, an exercise in what we now call virtue-signaling. Yet it is impossible to forget what Samuel Lipman wrote in 1993 in Commentary apropos the wartime conduct of Furtwängler: “The ultimate triumph of totalitarianism, I suppose it can be said, is that under its sway only a martyred death can be truly moral.” For the only martyrs of the Berlin and Vienna Philharmonics were their Jews. The orchestras themselves live on, tainted and beloved.
He knows what to reveal and what to conceal, understands the importance of keeping the semblance of distance between oneself and the story of the day, and comprehends the ins and outs of anonymous sourcing. Within days of his being fired by President Trump on May 9, for example, little green men and women, known only as his “associates,” began appearing in the pages of the New York Times and Washington Post to dispute key points of the president’s account of his dismissal and to promote Comey’s theory of the case.
“In a Private Dinner, Trump Demanded Loyalty,” the New York Times reported on May 11. “Comey Demurred.” The story was a straightforward narrative of events from Comey’s perspective, capped with an obligatory denial from the White House. The next day, the Washington Post reported, “Comey associates dispute Trump’s account of conversations.” The Post did not identify Comey’s associates, other than saying that they were “people who have worked with him.”
Maybe they were the same associates who had gabbed to the Times. Or maybe they were different ones. Who can tell? Regardless, the story these particular associates gave to the Post was readable and gripping. Comey, the Post reported, “was wary of private meetings and discussions with the president and did not offer the assurance, as Trump has claimed, that Trump was not under investigation as part of the probe into Russian interference in last year’s election.”
On May 16, Michael S. Schmidt of the Times published his scoop, “Comey Memo Says Trump Asked Him to End Flynn Investigation.” Schmidt didn’t see the memo for himself. Parts of it were read to him by—you guessed it—“one of Mr. Comey’s associates.” The following day, Robert Mueller was appointed special counsel to oversee the Russia investigation. On May 18, the Times, citing “two people briefed” on a call between Comey and the president, reported, “Comey, Unsettled by Trump, Is Said to Have Wanted Him Kept at a Distance.” And by the end of that week, Comey had agreed to testify before the Senate Intelligence Committee.
As his testimony approached, Comey’s people became more aggressive in their criticisms of the president. “Trump Should Be Scared, Comey Friend Says,” read the headline of a CNN interview with Brookings Institution fellow Benjamin Wittes. This “Comey friend” said he was “very shocked” when he learned that President Trump had asked Comey for loyalty. “I have no doubt that he regarded the group of people around the president as dishonorable,” Wittes said.
Comey, Wittes added, was so uncomfortable at the White House reception in January honoring law enforcement—the one where Comey lumbered across the room and Trump whispered something in his ear—that, as CNN paraphrased it, he “stood in a position so that his blue blazer would blend in with the room’s blue drapes in an effort for Trump to not notice him.” The integrity, the courage—can you feel it?
On June 6, the day before Comey’s prepared testimony was released, more “associates” told ABC that the director would “not corroborate Trump’s claim that on three separate occasions Comey told the president he was not under investigation.” And a “source with knowledge of Comey’s testimony” told CNN the same thing. In addition, ABC reported that, according to “a source familiar with Comey’s thinking,” the former director would say that Trump’s actions stopped short of obstruction of justice.
Maybe those sources weren’t as “familiar with Comey’s thinking” as they thought or hoped? To maximize the press coverage he already dominated, Comey had authorized the Senate Intelligence Committee to release his testimony ahead of his personal interview. That testimony told a different story than what had been reported by CNN and ABC (and by the Post on May 12). Comey had in fact told Trump the president was not under investigation—on January 6, January 27, and March 30. Moreover, the word “obstruction” did not appear at all in his written text. The senators asked Comey if he felt Trump obstructed justice. He declined to answer either way.
My guess is that Comey’s associates lacked Comey’s scalpel-like, almost Jesuitical ability to make distinctions, and therefore misunderstood what he was telling them to say to the press. Because it’s obvious Comey was the one behind the stories of Trump’s dishonesty and bad behavior. He admitted as much in front of the cameras in a remarkable exchange with Senator Susan Collins of Maine.
Comey said that, after Trump tweeted on May 12 that he’d better hope there aren’t “tapes” of their conversations, “I asked a friend of mine to share the content of the memo with a reporter. Didn’t do it myself, for a variety of reasons. But I asked him to, because I thought that might prompt the appointment of a special counsel. And so I asked a close friend of mine to do it.”
Collins asked whether that friend had been Wittes, known to cable news junkies as Comey’s bestie. Comey said no. The source for the New York Times article was “a good friend of mine who’s a professor at Columbia Law School,” Daniel Richman.
Every time I watch or read that exchange, I am amazed. Here is the former director of the FBI just flat-out admitting that, for months, he wrote down every interaction he had with the president of the United States because he wanted a written record in case the president ever fired or lied about him. And when the president did fire and lie about him, that director set in motion a series of public disclosures with the intent of not only embarrassing the president, but also forcing the appointment of a special counsel who might end up investigating the president for who knows what. And none of this would have happened if the president had not fired Comey or tweeted about him. He told the Senate that if Trump hadn’t dismissed him, he most likely would still be on the job.
Rarely, in my view, are high officials so transparent in describing how Washington works. Comey revealed to the world that he was keeping a file on his boss, that he used go-betweens to get his story into the press, that “investigative journalism” is often just powerful people handing documents to reporters to further their careers or agendas or even to get revenge. And as long as you maintain some distance from the fallout, and stick to the absolute letter of the law, you will come out on top, so long as you have a small army of nightingales singing to reporters on your behalf.
“It’s the end of the Comey era,” A.B. Stoddard said on Special Report with Bret Baier the other day. On the contrary: I have a feeling that, as the Russia investigation proceeds, we will be hearing much more from Comey. And from his “associates.” And his “friends.” And persons “familiar with his thinking.”
In April, COMMENTARY asked a wide variety of writers, thinkers, and broadcasters to respond to this question: Is free speech under threat in the United States? We received twenty-seven responses. We publish them here in alphabetical order.
Floyd Abrams
Free expression threatened? By Donald Trump? I guess you could say so.
When a president engages in daily denigration of the press, when he characterizes it as the enemy of the people, when he repeatedly says that the libel laws should be “loosened” so he can personally commence more litigation, when he says that journalists shouldn’t be allowed to use confidential sources, it is difficult even to suggest that he has not threatened free speech. And when he says to the head of the FBI (as former FBI director James Comey has said that he did) that Comey should consider “putting reporters in jail for publishing classified information,” it is difficult not to take those threats seriously.
The harder question, though, is this: How real are the threats? Or, as Michael Gerson put it in the Washington Post: Will Trump “go beyond mere Twitter abuse and move against institutions that limit his power?” Some of the president’s threats against the institution of the press, wittingly or not, have been simply preposterous. Surely someone has told him by now that neither he nor Congress can “loosen” libel laws; while each state has its own libel law, there is no federal libel law and thus nothing for him to loosen. What he obviously takes issue with is the impact that the Supreme Court’s 1964 First Amendment opinion in New York Times v. Sullivan has had on state libel laws. The case determined that public officials who sue for libel may not prevail unless they demonstrate that the statements made about them were false and were made with actual knowledge or suspicion of that falsity. So his objection to the rules governing libel law is to nothing less than the application of the First Amendment itself.
In other areas, however, the Trump administration has far more power to imperil free speech. We live under an Espionage Act, adopted a century ago, which is both broad in its language and uncommonly vague in its meaning. As such, it remains a half-open door through which an administration that is hostile to free speech might walk. Such an administration could initiate criminal proceedings against journalists who write about defense- or intelligence-related topics on the basis that classified information was leaked to them by present or former government employees. No such action has ever been commenced against a journalist. Press lawyers and civil-liberties advocates have strong arguments that the law may not be read so broadly and still be consistent with the First Amendment. But the scope of the Espionage Act and the impact of the First Amendment upon its interpretation remain unknown.
A related area in which the attitude of an administration toward the press may affect the latter’s ability to function as a check on government relates to the ability of journalists to protect the identity of their confidential sources. The Obama administration prosecuted more Espionage Act cases against sources of information to journalists than all prior administrations combined. After a good deal of deserved press criticism, it agreed to expand the internal guidelines of the Department of Justice designed to limit the circumstances under which such source revelation is demanded. But the guidelines are none too protective and are, after all, simply guidelines. A new administration is free to change or limit them or, in fact, abandon them altogether. In this area, as in so many others, it is too early to judge the ultimate treatment of free expression by the Trump administration. But the threats are real, and there is good reason to be wary.
Floyd Abrams is the author of The Soul of the First Amendment (Yale University Press, 2017).
Ayaan Hirsi Ali
Freedom of speech is being threatened in the United States by a nascent culture of hostility to different points of view. As political divisions in America have deepened, a conformist mentality of “right thinking” has spread across the country. Increasingly, American universities, where no intellectual doctrine ought to escape critical scrutiny, are some of the most restrictive domains when it comes to asking open-ended questions on subjects such as Islam.
Legally, speech in the United States is protected to a degree unmatched in almost any industrialized country. The U.S. has avoided unpredictable Canadian-style restrictions on speech, for example. I remain optimistic that as long as we have the First Amendment in the U.S., any attempt at formal legal censorship will be vigorously challenged.
Culturally, however, matters are very different in America. The regressive left is the forerunner threatening free speech on any issue that is important to progressives. The current pressure coming from those who call themselves “social-justice warriors” is unlikely to lead to successful legislation to curb the First Amendment. Instead, censorship is spreading in the cultural realm, particularly at institutions of higher learning.
The way activists of the regressive left achieve silence or censorship is by creating a taboo, and one of the most pernicious taboos in operation today is the word “Islamophobia.” Islamists are similarly motivated to rule any critical scrutiny of Islamic doctrine out of order. There is now a university center (funded by Saudi money) in the U.S. dedicated to monitoring and denouncing incidents of “Islamophobia.”
The term “Islamophobia” is used against critics of political Islam, but also against progressive reformers within Islam. The term implies an irrational fear that is tainted by hatred, and it has had a chilling effect on free speech. In fact, “Islamophobia” is a poorly defined term. Islam is not a race, and it is very often perfectly rational to fear some expressions of Islam. No set of ideas should be beyond critical scrutiny.
To push back in this cultural realm—in our universities, in public discourse—those favoring free speech should focus more on the message of dawa, the set of ideas that the Islamists want to promote. If the aims of dawa are sufficiently exposed, ordinary Americans and Muslim Americans will reject it. The Islamist message is a message of divisiveness, misogyny, and hatred. It’s anachronistic and wants people to live by tribal norms dating from the seventh century. The best antidote to Islamic extremism is the revelation of what its primary objective is: a society governed by Sharia. This is the opposite of censorship: It is documenting reality. What is life like in Saudi Arabia, Iran, the Northern Nigerian States? What is the true nature of Sharia law?
Islamists want to hide the true meaning of Sharia, Jihad, and the implications for women, gays, religious minorities, and infidels under the veil of “Islamophobia.” Islamists use “Islamophobia” to obfuscate their vision and imply that any scrutiny of political Islam is hatred and bigotry. The antidote to this is more exposure and more speech.
As pressure on freedom of speech increases from the regressive left, we must reject the notions that only Muslims can speak about Islam, and that any critical examination of Islamic doctrines is inherently “racist.”
Instead of contorting Western intellectual traditions so as not to offend our Muslim fellow citizens, we need to defend the Muslim dissidents who are risking their lives to promote the human rights we take for granted: equality for women, tolerance of all religions and orientations, our hard-won freedoms of speech and thought.
It is by nurturing and protecting such speech that progressive reforms can emerge within Islam. By accepting the increasingly narrow confines of acceptable discourse on issues such as Islam, we do dissidents and progressive reformers within Islam a grave disservice. For truly progressive reforms within Islam to be possible, full freedom of speech will be required.
Ayaan Hirsi Ali is a research fellow at the Hoover Institution, Stanford University, and the founder of the AHA Foundation.
Lee C. Bollinger
I know it is too much to expect that political discourse mimic the measured, self-questioning, rational, footnoting standards of the academy, but there is a difference between robust political debate and political debate infected with fear or panic. The latter introduces a state of mind that is visceral and irrational. In the realm of fear, we move beyond the reach of reason and a sense of proportionality. When we fear, we lose the capacity to listen and can become insensitive and mean.
Our Constitution is well aware of this fact about the human mind and of its negative political consequences. In the First Amendment jurisprudence established over the past century, we find many expressions of the problematic state of mind that is produced by fear. Among the most famous and potent is that of Justice Brandeis in Whitney v. California in 1927, one of the many cases involving aggravated fears of subversive threats from abroad. “It is the function of (free) speech,” he said, “to free men from the bondage of irrational fears.” “Men feared witches,” Brandeis continued, “and burned women.”
Today, our “witches” are terrorists, and Brandeis’s metaphorical “women” include the refugees (mostly children) and displaced persons, immigrants, and foreigners whose lives have been thrown into suspension and doubt by policies of exclusion.
The same fears of the foreign that take hold of a population inevitably infect our internal interactions and institutions, yielding suppression of unpopular and dissenting voices, victimization of vulnerable groups, attacks on the media, and the rise of demagoguery, with its disdain for facts, reason, expertise, and tolerance.
All of this poses a very special obligation on those of us within universities. Not only must we make the case in every venue for the values that form the core of who we are and what we do, but we must also live up to our own principles of free inquiry and fearless engagement with all ideas. This is why the recent incidents on a handful of college campuses, in which speakers were disrupted and effectively censored, are so alarming. Such acts not only betray a basic principle but also inflame a rising prejudice against the academic community, and they feed efforts to delegitimize our work, at the very moment when it’s most needed.
I do not for a second support the view that this generation has an unhealthy aversion to engaging differences of opinion. That is a modern trope of polarization, as is the portrayal of universities as hypocritical about academic freedom and political correctness. But now, in this environment especially, universities must be at the forefront of defending the rights of all students and faculty to listen to controversial voices, to engage disagreeable viewpoints, and to make every effort to demonstrate our commitment to the sort of fearless and spirited debate that we are simultaneously asking of the larger society. Anyone with a voice can shout over a speaker; but being able to listen to and then effectively rebut those with whom we disagree—particularly those who themselves peddle intolerance—is one of the greatest skills our education can bestow. And it is something our democracy desperately needs more of. That is why, I say to you now, if speakers who are being denied access to other campuses come here, I will personally volunteer to introduce them, and listen to them, however much I may disagree with them. But I will also never hesitate to make clear why I disagree with them.
Lee C. Bollinger is the 19th president of Columbia University and the author of Uninhibited, Robust, and Wide-Open: A Free Press for a New Century. This piece has been excerpted from President Bollinger’s May 17 commencement address.
Richard A. Epstein
Today, the greatest threat to the constitutional protection of freedom of speech comes from campus rabble-rousers who invoke this very protection. In their book, the speech of people like Charles Murray and Heather Mac Donald constitutes a form of violence, bordering on genocide, that receives no First Amendment protection. Enlightened protestors are both bound and entitled to shout them down, by force or other disruptive actions, if their universities are so foolish as to extend them an invitation to speak. Any indignant minority may take the law into its own hands to eradicate the intellectual cancer before it spreads on their own campus.
By such tortured logic, a new generation of vigilantes distorts the First Amendment doctrine: Speech becomes violence, and violence becomes heroic acts of self-defense. The standard First Amendment interpretation emphatically rejects that view. Of course, the First Amendment doesn’t let you say what you want when and wherever you want to. Your freedom of speech is subject to the same limitations as your freedom of action. So you have no constitutional license to assault other people, to lie to them, or to form cartels to bilk them in the marketplace. But folks such as Murray, Mac Donald, and even Yiannopoulos do not come close to crossing into that forbidden territory. They are not using, for example, “fighting words,” rightly limited to words or actions calculated to provoke immediate aggression against a known target. Fighting words are worlds apart from speech that provokes a negative reaction in those who find your speech offensive solely because of the content of its message.
This distinction is central to the First Amendment. Fighting words have to be blocked by well-tailored criminal and civil sanctions lest some people gain license to intimidate others from speaking or peaceably assembling. The remedy for mere offense is to speak one’s mind in response. But it never gives anyone the right to block the speech of others, lest everyone be able to unilaterally increase his sphere of action by getting really angry about the beliefs of others. No one has the right to silence others by working himself into a fit of rage.
Obviously, it is intolerable to let mutual animosity generate factional warfare, whereby everyone can use force to silence rivals. To avoid this war of all against all, each side claims that only its actions are privileged. These selective claims quickly degenerate into a form of viewpoint discrimination, which undermines one of the central protections that traditional First Amendment law erects: a wall against each and every group out to destroy the level playing field on which robust political debate rests. Every group should be at risk for having its message fall flat. The new campus radicals want to upend that understanding by shutting down their adversaries if their universities do not. Their aggression must be met, if necessary, by counterforce. Silence in the face of aggression is not an acceptable alternative.
Richard A. Epstein is the Laurence A. Tisch Professor of Law at the New York University School of Law.
David French
We’re living in the midst of a troubling paradox. At the exact same time that First Amendment jurisprudence has arguably never been stronger and more protective of free expression, millions of Americans feel they simply can’t speak freely. Indeed, talk to Americans living and working in the deep-blue confines of the academy, Hollywood, and the tech sector, and you’ll get a sense of palpable fear. They’ll explain that they can’t say what they think and keep their jobs, their friends, and sometimes even their families.
The government isn’t cracking down or censoring; instead, Americans are using free speech to destroy free speech. For example, a social-media shaming campaign is an act of free speech. So is an economic boycott. So is turning one’s back on a public speaker. So is a private corporation firing a dissenting employee for purely political reasons. Each of these actions is largely protected from government interference, and each one represents an expression of the speaker’s ideas and values.
The problem, however, is obvious. The goal of each of these kinds of actions isn’t to persuade; it’s to intimidate. The goal isn’t to foster dialogue but to coerce conformity. The result is a marketplace of ideas that has been emptied of all but the approved ideological vendors—at least in those communities that are dominated by online thugs and corporate bullies. Indeed, this mindset has become so prevalent that in places such as Portland, Berkeley, Middlebury, and elsewhere, the bullies and thugs have crossed the line from protected—albeit abusive—speech into outright shout-downs and mob violence.
But there’s something else going on, something that’s insidious in its own way. While politically correct shaming still has great power in deep-blue America, its effect in the rest of the country is to trigger a furious backlash, one characterized less by a desire for dialogue and discourse than by its own rage and scorn. So we’re moving toward two Americas—one that ruthlessly (and occasionally illegally) suppresses dissenting speech and the other that is dangerously close to believing that the opposite of political correctness isn’t a fearless expression of truth but rather the fearless expression of ideas best calculated to enrage your opponents.
The result is a partisan feedback loop where right-wing rage spurs left-wing censorship, which spurs even more right-wing rage. For one side, a true free-speech culture is a threat to feelings, sensitivities, and social justice. The other side waves high the banner of “free speech” to sometimes elevate the worst voices to the highest platforms—not so much to protect the First Amendment as to infuriate the hated “snowflakes” and trigger the most hysterical overreactions.
The culturally sustainable argument for free speech is something else entirely. It reminds the cultural left of its own debt to free speech while reminding the political right that a movement allegedly centered around constitutional values can’t abandon the concept of ordered liberty. The culture of free speech thrives when all sides remember their moral responsibilities—to both protect the right of dissent and to engage in ideological combat with a measure of grace and humility.
David French is a senior writer at National Review.
Pamela Geller
The real question isn’t whether free speech is under threat in the United States, but rather, whether it’s irretrievably lost. Can we get it back? Not without war, I suspect, as is evidenced by the violence at colleges whenever there’s the shamefully rare event of a conservative speaker on campus.
Free speech is the soul of our nation and the foundation of all our other freedoms. If we can’t speak out against injustice and evil, those forces will prevail. Freedom of speech is the foundation of a free society. Without it, a tyrant can wreak havoc unopposed, while his opponents are silenced.
With that principle in mind, I organized a free-speech event in Garland, Texas. The world had recently been rocked by the murder of the Charlie Hebdo cartoonists. My version of “Je Suis Charlie” was an event here in America to show that we can still speak freely and draw whatever we like in the Land of the Free. Yet even after jihadists attacked our event, I was blamed—by Donald Trump among others—for provoking Muslims. And if I tried to hold a similar event now, no arena in the country would allow me to do so—not just because of the security risk, but because of the moral cowardice of all intellectual appeasers.
Under what law is it wrong to depict Muhammad? Under Islamic law. But I am not a Muslim, I don’t live under Sharia. America isn’t under Islamic law, yet for standing for free speech, I’ve been:
- Prevented from running our advertisements in every major city in this country. We have won free-speech lawsuits all over the country, which officials circumvent by prohibiting all political ads (while making exceptions for ads from Muslim advocacy groups);
- Shunned by the right, shut out of the Conservative Political Action Conference;
- Shunned by Jewish groups at the behest of terror-linked groups such as the Council on American-Islamic Relations;
- Blacklisted from speaking at universities;
- Prevented from publishing books, for security reasons and because publishers fear shaming from the left;
- Banned from Britain.
A Seattle court accused me of trying to shut down free speech after we merely tried to run an FBI poster on global terrorism, because authorities had banned all political ads in other cities to avoid running ours. Seattle blamed us for that, which was like blaming a woman for being raped because she was wearing a short skirt.
This kind of vilification and shunning is key to the left’s plan to shut down all dissent from its agenda—they make legislation restricting speech unnecessary.
The same refusal to allow our point of view to be heard has manifested itself elsewhere. The foundation of my work is individual rights and equality for all before the law. These are the foundational principles of our constitutional republic. That is now considered controversial. Truth is the new hate speech. Truth is going to be criminalized.
The First Amendment doesn’t only protect ideas that are sanctioned by the cultural and political elites. If “hate speech” laws are enacted, who would decide what’s permissible and what’s forbidden? The government? The gunmen in Garland?
There has been an inversion of the founding premise of this nation. No longer is it the subordination of might to right, but right to might. History is repeatedly deformed by the bloody consequences of this inversion.
Pamela Geller is the editor in chief of the Geller Report and president of the American Freedom Defense Initiative.
Jonah Goldberg
Of course free speech is under threat in America. Frankly, it’s always under threat in America because it’s always under threat everywhere. Ronald Reagan was right when he said in 1961, “Freedom is never more than one generation away from extinction. We didn’t pass it on to our children in the bloodstream. It must be fought for, protected, and handed on for them to do the same.”
This is more than political boilerplate. Reagan identified the source of the threat: human nature. God may have endowed us with a right to liberty, but he didn’t give us all a taste for it. As with most finer things, we must work to acquire a taste for it. That is what civilization—or at least our civilization—is supposed to do: cultivate attachments to certain ideals. “Cultivate” shares the same Latin root as “culture,” cultus, and properly understood they mean the same thing: to grow, nurture, and sustain through labor.
In the past, threats to free speech have taken many forms—nationalist passion, Comstockery (both good and bad), political suppression, etc.—but the threat to free speech today is different. It is less top-down and more bottom-up. We are cultivating a generation of young people to reject free speech as an important value.
One could mark the beginning of the self-esteem movement with Nathaniel Branden’s 1969 book, The Psychology of Self-Esteem, which claimed that “feelings of self-esteem were the key to success in life.” This understandable idea ran amok in our schools and in our culture. When I was a kid, Saturday-morning cartoons were punctuated with public-service announcements telling kids: “The most important person in the whole wide world is you, and you hardly even know you!”
The self-esteem craze was just part of the cocktail of educational fads. Other ingredients included multiculturalism, the anti-bullying crusade, and, of course, that broad phenomenon known as “political correctness.” Combined, they’ve produced a generation that rejects the old adage “sticks and stones can break my bones but words can never harm me” in favor of the notion that “words hurt.” What we call political correctness has been on college campuses for decades. But it lacked a critical mass of young people who were sufficiently receptive to it to make it a fully successful ideology. The campus commissars welcomed the new “snowflakes” with open arms; truly, these are the ones we’ve been waiting for.
“Words hurt” is a fashionable concept in psychology today. (See Psychology Today: “Why Words Can Hurt at Least as Much as Sticks and Stones.”) But it’s actually a much older idea than the “sticks and stones” aphorism. For most of human history, it was a crime to say insulting or “injurious” things about aristocrats, rulers, the Church, etc. That tendency didn’t evaporate with the Divine Right of Kings. Jonathan Haidt has written at book length about our natural capacity to create zones of sanctity, immune from reason.
And that is the threat free speech faces today. Those who inveigh against “hate speech” are in reality fighting “heresy speech”—ideas that do “violence” to sacred notions of self-esteem, racial or gender equality, climate change, and so on. Put whatever label you want on it, contemporary “social justice” progressivism acts as a religion, and it has no patience for blasphemy.
When Napoleon’s forces converted churches into stables, the clergy did not object on the grounds that regulations regarding the proper care and feeding of animals had been violated. They complained of sacrilege and blasphemy. When Charles Murray or Christina Hoff Sommers visits college campuses, the protestors are behaving like the zealous acolytes of St. Jerome. Appeals to the First Amendment have as much power over the “antifa” fanatics as appeals to Odin did to champions of the New Faith.
That is the real threat to free speech today.
Jonah Goldberg is a senior editor at National Review and a fellow at the American Enterprise Institute.
KC Johnson:
In early May, the Washington Post urged universities to make clear that “racist signs, symbols, and speech are off-limits.” Given the extraordinarily broad definition of what constitutes “racist” speech at most institutions of higher education, this demand would single out most right-of-center (and, in some cases, even centrist and liberal) discourse on issues of race or ethnicity. The editorial provided the highest-profile example of how hostility to free speech, once confined to the ideological fringe on campus, has migrated to the liberal mainstream.
The last few years have seen periodic college protests—featuring claims that significant amounts of political speech constitute “violence,” thereby justifying censorship—followed by even more troubling attempts to appease the protesters. After the mob scene that greeted Charles Murray upon his visit to Middlebury College, for instance, the student government criticized any punishment for the protesters, and several student leaders wanted to require that future speakers conform to the college’s “community standard” on issues of race, gender, and ethnicity. In the last few months, similar attempts to stifle the free exchange of ideas in the name of promoting diversity occurred at Wesleyan, Claremont McKenna, and Duke. Offering an extreme interpretation of this point of view, one CUNY professor recently dismissed dialogue as “inherently conservative,” since it reinforced the “relations of power that presently exist.”
It’s easy, of course, to dismiss campus hostility to free speech as affecting only a small segment of American public life—albeit one that trains the next generation of judges, legislators, and voters. But, as Jonathan Chait observed in 2015, denying “the legitimacy of political pluralism on issues of race and gender” has broad appeal on the left. It is only most apparent on campus because “the academy is one of the few bastions of American life where the political left can muster the strength to impose its political hegemony upon others.” During his time in office, Barack Obama generally urged fellow liberals to support open intellectual debate. But the current campus environment previews the position of free speech in a post-Obama Democratic Party, increasingly oriented around identity politics.
Waning support on one end of the ideological spectrum for this bedrock American principle should provide a political opening for the other side. The Trump administration, however, seems poorly suited to make the case. Throughout his public career, Trump has rarely supported free speech, even in the abstract, and has periodically embraced legal changes to facilitate libel lawsuits. Moreover, the right-wing populism that motivates Trump’s base has a long tradition of ideological hostility to civil liberties of all types. Even in campus contexts, conservatives have defended free speech inconsistently, as seen in recent calls that CUNY disinvite anti-Zionist fanatic Linda Sarsour as a commencement speaker.
In a sharply polarized political environment, awash in dubiously-sourced information, free speech is all the more important. Yet this same environment has seen both sides, most blatantly elements of the left on campuses, demand restrictions on their ideological foes’ free speech in the name of promoting a greater good.
KC Johnson is a professor of history at Brooklyn College and the CUNY Graduate Center.
Laura Kipnis:
I find myself with a strange-bedfellows problem lately. Here I am, a left-wing feminist professor invited onto the pages of Commentary—though I’d be thrilled if it were still 1959—while fielding speaking requests from right-wing think tanks and libertarians who oppose child-labor laws.
Somehow I’ve ended up in the middle of the free-speech-on-campus debate. My initial crime was publishing a somewhat contentious essay about campus sexual paranoia that put me on the receiving end of Title IX complaints. Apparently I’d created a “hostile environment” at my university. I was investigated (for 72 days). Then I wrote up what I’d learned about these campus inquisitions in a second essay. Then I wrote about it all some more, in a book exposing the kangaroo-court elements of the Title IX process—and the extra-legal gag orders imposed on everyone caught in its widening snare.
I can’t really comment on whether more charges have been filed against me over the book. I’ll just say that writing about being a Title IX respondent could easily become a life’s work. I learned, shortly after writing this piece, that I and my publisher were being sued for defamation, among other things.
Is free speech under threat on American campuses? Yes. We know all about student activists who wish to shut down talks by people with opposing views. I got smeared with a bit of that myself, after a speaking invitation at Wellesley—some students made a video protesting my visit before I arrived. The talk went fine, though a group of concerned faculty circulated an open letter afterward also protesting the invitation: My views on sexual politics were too heretical, and might have offended students.
I didn’t take any of this too seriously, even as right-wing pundits crowed, with Wellesley as their latest outrage bait. It was another opportunity to mock student activists, and the fact that I was myself a feminist rather than a Charles Murray or a Milo Yiannopoulos, made them positively gleeful.
I do find myself wondering where all my new free-speech pals were when another left-wing professor, Steven Salaita, was fired (or, if you prefer the euphemism, “his job offer was withdrawn”) from the University of Illinois after he tweeted criticism of Israel’s Gaza policy. Sure, the tweets were hyperbolic, but hyperbole and strong opinions are protected speech, too.
I guess free speech is easy to celebrate until it actually challenges something. Funny, I haven’t seen Milo around lately—so beloved by my new friends when he was bashing minorities and transgender kids. Then he mistakenly said something authentic (who knew he was capable of it!), reminiscing about an experience a lot of gay men have shared: teenage sex with older men. He tried walking it back—no, no, he’d been a victim, not a participant—but his fan base was shrieking about pedophilia and fleeing in droves. Gee, they were all so against “political correctness” a few minutes before.
It’s easy to be a free-speech fan when your feathers aren’t being ruffled. No doubt what makes me palatable to the anti-PC crowd is having thus far failed to ruffle them enough. I’m just going to have to work harder.
Laura Kipnis’s latest book is Unwanted Advances: Sexual Paranoia Comes to Campus.
Eugene Kontorovich:
The free and open exchange of views—especially politically conservative or traditionally religious ones—is being challenged. This is taking place not just at college campuses but throughout our public spaces and cultural institutions. James Watson was fired from the lab he had led since 1968 and could not speak at New York University because of petty, censorious students who would not know DNA from LSD. Our nation’s founders and heroes are being “disappeared” from public commemoration, like Trotsky from a photograph of Soviet rulers.
These attacks on “free speech” are not the result of government action. They are not what the First Amendment protects against. The current methods—professional and social shaming, exclusion, and employment termination—are more inchoate, and their effects are multiplied by self-censorship. A young conservative legal scholar might find himself thinking: “If the late Justice Antonin Scalia can posthumously be deemed a ‘bigot’ by many academics, what chance have I?”
Ironically, artists and intellectuals have long prided themselves on being the first defenders of free speech. Today, it is the institutions of both popular and high culture that are the censors. Is there one poet in the country who would speak out for Ann Coulter?
The inhibition of speech at universities is part of a broader social phenomenon of making longstanding, traditional views and practices sinful overnight. Conservatives have not put up much resistance to this. To paraphrase Martin Niemöller’s famous dictum: “First they came for Robert E. Lee, and I said nothing, because Robert E. Lee meant nothing to me.”
The situation with respect to Israel and expressions of support for it deserves separate discussion. Even as university administrators give political power to favored ideologies by letting them create “safe spaces” (safe from opposing views), Jews find themselves and their state at the receiving end of claims of apartheid—modern-day blood libels. It is not surprising if Jewish students react by demanding that they get a safe space of their own. It is even less surprising if their parents, paying $65,000 a year, want their children to have a nicer time of it. One hears Jewish groups frequently express concern about Jewish students feeling increasingly isolated and uncomfortable on campus.
But demanding selective protection from the new ideological commissars is unlikely to bring the desired results. First, this new ideology, even if it can be harnessed momentarily to give respite to harassed Jews on campus, is ultimately illiberal and will be controlled by “progressive” forces. Second, it is not so terrible for Jews in the Diaspora to feel a bit uncomfortable. It has been the common condition of Jews throughout the millennia. The social awkwardness that Jews at liberal arts schools might feel in being associated with Israel is of course one of the primary justifications for the Jewish State. Facing the snowflakes incapable of hearing a dissonant view—but who nonetheless, in the grip of intersectional ecstasy, revile Jewish self-determination—Jewish students should toughen up.
Eugene Kontorovich teaches constitutional law at Northwestern University and heads the international law department of the Kohelet Policy Forum in Jerusalem.
Nicholas Lemann:
There’s an old Tom Wolfe essay in which he describes being on a panel discussion at Princeton in 1965 and provoking the other panelists by announcing that America, rather than being in crisis, is in the middle of a “happiness explosion.” He was arguing that the mass effects of 20 years of post–World War II prosperity made for a larger phenomenon than the Vietnam War, the racial crisis, and the other primary concerns of intellectuals at the time.
In the same spirit, I’d say that we are in the middle of a free-speech explosion, because of 20-plus years of the Internet and 10-plus years of social media. If one understands speech as disseminated individual opinion, then surely we live in the free-speech-est society in the history of the world. Anybody with access to the unimpeded World Wide Web can say anything to a global audience, and anybody can hear anything, too. All threats to free speech should be understood in the context of this overwhelming reality.
It is a comforting fantasy that a genuine free-speech regime will empower mainly “good,” but previously repressed, speech. Conversely, repressive regimes that are candid enough to explain their anti-free-speech policies usually say that they’re not against free speech, just “bad” speech. We have to accept that more free speech probably means, in the aggregate, more bad speech, and also a weakening of the power, authority, and economic support for information professionals such as journalists. Welcome to the United States in 2017.
I am lucky enough to live and work on the campus of a university, Columbia, that has been blessedly free of successful attempts to repress free speech. Just in the last few weeks, Charles Murray and Dinesh D’Souza have spoken here without incident. But, yes, the evidently growing popularity of the idea that “hate speech” shouldn’t be permitted on campuses is a problem, especially, it seems, at small private liberal-arts colleges. We should all do our part, and I do, by frequently and publicly endorsing free-speech principles. Opposing the BDS movement falls squarely into that category.
It’s not just on campuses that free-speech vigilance is needed, though. The number-one threat to free speech, to my mind, is that the wide-open Web has been replaced by privately owned platforms such as Facebook and Google as the way most people experience the public life of the Internet. These companies are committed to banning “hate speech,” and they are eager to operate freely in countries, like China, that don’t permit free political speech. That makes for a far more consequential constrained environment than any campus’s speech code.
Also, Donald Trump regularly engages in presidentially unprecedented rhetoric demonizing people who disagree with him. He seems to think this is all in good fun, but, as we have already seen at his rallies, not everybody hears it that way. The place where Trumpism will endanger free speech isn’t in the center—the White House press room—but at the periphery, for example in the way that local police handle bumptious protestors and the journalists covering them. This is already happening around the country. If Trump were as disciplined and knowledgeable as Vladimir Putin or Recep Tayyip Erdogan, which so far he seems not to be, then free speech could be in even more serious danger from government, which in most places is its usual main enemy.
Nicholas Lemann is a professor at Columbia Journalism School and a staff writer for the New Yorker.
Michael J. Lewis:
Free speech is a right but it is also a habit, and where the habit shrivels so will the right. If free speech today is in headlong retreat—everywhere threatened by regulation, organized harassment, and even violence—it is in part because our political culture allowed the practice of persuasive oratory to atrophy. The process began in 1973, an unforeseen side effect of Roe v. Wade. Legislators were delighted to learn that by relegating this divisive matter of public policy to the Supreme Court and adopting a merely symbolic position, they could sit all the more safely in their safe seats.
Since then, one crucial question of public policy after another has been punted out of the realm of politics and into the judiciary. Issues that might have been debated with all the rhetorical agility of a Lincoln and a Douglas, and then subjected to a process of negotiation, compromise, and voting, have instead been settled by decree: e.g., Chevron, Kelo, Obergefell. The consequences for speech have been pernicious. Since the time of Pericles, deliberative democracy has been predicated on the art of persuasion, which demands the forceful clarity of thought and expression without which no one has ever been persuaded. But a legislature that relegates its authority to judges and regulators will awaken to discover its oratorical culture has been stunted. When politicians, rather than seeking to convince and win over, prefer to project a studied and pleasant vagueness, debate withers into tedious defensive performance. It has been decades since any presidential debate has seen any sustained give and take over a matter of policy. If there is any suspense at all, it is only the possibility that a fatigued or peeved candidate might blurt out that tactless shard of truth known as a gaffe.
A generation accustomed to hearing platitudes smoothly dispensed from behind a teleprompter will find the speech of a fearless extemporaneous speaker to be startling, even disquieting; unfamiliar ideas always are. Unhappily, they have been taught to interpret that disquiet as an injury done to them, rather than as a premise offered to them to consider. All this would not have happened—certainly not to this extent—had not our deliberative democracy decided a generation ago that it preferred the security of incumbency to the risks of unshackled debate. The compulsory contraction of free speech on college campuses is but the logical extension of the voluntary contraction of free speech in our political culture.
Michael J. Lewis’s new book is City of Refuge: Separatists and Utopian Town Planning (Princeton University Press).
Heather Mac Donald:
The answer to the symposium question depends on how powerful the transmission belt is between academia and the rest of the country. On college campuses, violence and brute force are silencing speakers who challenge left-wing campus orthodoxies. These totalitarian outbreaks have been met with listless denunciations by college presidents, followed by . . . virtually nothing. As of mid-May, the only discipline imposed for 2017’s mass attacks on free speech at UC Berkeley, Middlebury, and Claremont McKenna College was a letter of reprimand inserted—sometimes only temporarily—into the files of several dozen Middlebury students, accompanied by a brief period of probation. Previous outbreaks of narcissistic incivility, such as the screaming-girl fit at Yale and the assaults on attendees of Yale’s Buckley program, were discreetly ignored by college administrators.
Meanwhile, the professoriate unapologetically defends censorship and violence. After the February 1 riot in Berkeley to prevent Milo Yiannopoulos from speaking, Déborah Blocker, associate professor of French at UC Berkeley, praised the rioters. They were “very well-organized and very efficient,” Blocker reported admiringly to her fellow professors. “They attacked property but they attacked it very sparingly, destroying just enough University property to obtain the cancellation order for the MY event and making sure no one in the crowd got hurt” (emphasis in original). (In fact, perceived Milo and Donald Trump supporters were sucker-punched and maced; businesses downtown were torched and vandalized.) New York University’s vice provost for faculty, arts, humanities, and diversity, Ulrich Baer, displayed Orwellian logic by claiming in a New York Times op-ed that shutting down speech “should be understood as an attempt to ensure the conditions of free speech for a greater group of people.”
Will non-academic institutions take up this zeal for outright censorship? Other ideological products of the left-wing academy have been fully absorbed and operationalized. Racial victimology, which drives much of the campus censorship, is now standard in government and business. Corporate diversity trainers counsel that bias is responsible for any lack of proportional racial representation in the corporate ranks. Racial disparities in school discipline and incarceration are universally attributed to racism rather than to behavior. Public figures have lost jobs for violating politically correct taboos.
Yet Americans possess an instinctive commitment to the First Amendment. Federal judges, hardly an extension of the Federalist Society, have overwhelmingly struck down campus speech codes. It is hard to imagine that they would be any more tolerant of the hate-speech legislation so prevalent in Europe. So the question becomes: At what point does the pressure to conform to the elite worldview curtail freedom of thought and expression, even without explicit bans on speech?
Social stigma against conservative viewpoints is not the same as actual censorship. But the line can blur. The Obama administration used regulatory power to impose a behavioral conformity on public and private entities. School administrators may have technically still possessed the right to dissent from novel theories of gender, but they had to behave as if they were fully on board with the transgender revolution when it came to allowing boys to use girls’ bathrooms and locker rooms.
Had Hillary Clinton been elected president, the federal bureaucracy would have mimicked campus diversocrats with even greater zeal. That threat, at least, has been avoided. Heresies against left-wing dogma may still enter the public arena, if only by the back door. The mainstream media have lurched even further left in the Trump era, but the conservative media, however mocked and marginalized, are expanding (though Twitter and Facebook’s censorship of conservative speakers could be a harbinger of more official silencing).
Outside the academy, free speech is still legally protected, but its exercise requires ever greater determination.
Heather Mac Donald is a fellow at the Manhattan Institute and the author of The War on Cops.
John McWhorter:
There is a certain mendacity, as Brick put it in Cat on a Hot Tin Roof, in our discussion of free speech on college campuses. Namely, none of us genuinely wish that absolutely all issues be aired in the name of education and open-mindedness. To insist so is to pretend that civilized humanity makes nothing we could call advancement in philosophical consensus.
I doubt we need “free speech” on issues such as whether slavery and genocide are okay, whether it has been a mistake to view women as men’s equals, or whether to banish as antique the idea that whites are a master race while other peoples represent a lower rung on the Darwinian scale. With all due reverence for John Stuart Mill’s advocacy of the regular airing of even noxious views in order to reinforce clarity on why they were rejected, we are also human beings with limited time. A commitment to the Enlightenment justifiably will decree that certain views are, indeed, no longer in need of discussion.
However, our modern social-justice warriors are claiming that this no-fly zone of discussion is vaster than any conception of logic or morality justifies. We are being told that questions regarding the modern proposals about cultural appropriation, about whether even passing infelicitous statements constitute racism in the way that formalized segregation and racist disparagement did, or about whether social disparities can be due to cultural legacies rather than structural impediments, are as indisputably egregious, backwards, and abusive as the benighted views of the increasingly distant past.
That is, the new idea is not only that discrimination and inequality still exist, but that to even question the left’s utopian expectation on such matters justifies the same furious, sloganistic, and even physically violent resistance that was once levelled against those designated heretics by a Christian hegemony.
Of course the protesters in question do not recognize themselves in a portrait as opponents of something called heresy. They suppose that Galileo’s opponents were clearly wrong but that they, today, are actually correct in a way that no intellectual or moral argument could coherently deny.
As such, we have students allowed to decree college campuses as “racist” when they are the least racist spaces on the planet—because they are, predictably given the imperfection of humans, not perfectly free of passingly unsavory interactions. Thinkers invited to talk for a portion of an hour from the right rather than the left and then have dinner with a few people and fly home are treated as if they were reanimated Hitlers. The student of color who hears a few white students venturing polite questions about the leftist orthodoxy is supported in fashioning these questions as “racist” rhetoric.
The people on college campuses who openly and aggressively spout this new version of Christian (or even Islamist) crusading—ironically justifying it as a barricade against “fascist” muzzling of freedom when the term applies ominously well to the regime they are fostering—are a minority. However, the sawmill spinning blade of their rhetoric has succeeded in rendering opposition as risky as espousing pedophilia, such that only those natively open to violent criticism dare speak out. The latter group is small. The campus consensus thereby becomes, if only at moralistic gunpoint à la the ISIS victim video, a strangled hard-leftism.
Hence freedom of speech is indeed threatened on today’s college campuses. I have lost count of how many of my students, despite being liberal Democrats (many of whom sobbed at Hillary Clinton’s loss last November), have told me that they are afraid to express their opinions about issues that matter, despite the fact that their opinions are ones that any liberal or even leftist person circa 1960 would have considered perfectly acceptable.
Something has shifted of late, and not in a direction we can legitimately consider forwards.
John McWhorter teaches linguistics, philosophy, and music history at Columbia University and is the author of The Language Hoax, Words on the Move, and Talking Back, Talking Black.
Kate Bachelder Odell:
It’s 2021, and Harvard Square has devolved into riots: Some 120 people are injured in protests, and the carnage includes fire-consumed cop cars and smashed-in windows. The police discharge canisters of tear gas, and, after apprehending dozens of protesters, enforce a 1:45 A.M. curfew. Anyone roaming the streets after hours is subject to arrest. About 2,000 National Guardsmen are prepared to intervene. Such violence and disorder is also roiling Berkeley and other elite and educated areas.
Oh, that’s 1970. The details are from the Harvard Crimson’s account of “anti-war” riots that spring. The episode is instructive in considering whether free speech is under threat in the United States. Almost daily, there’s a new YouTube installment of students melting down over viewpoints of speakers invited to one campus or another. Even amid speech threats from government—for example, the IRS’s targeting of political opponents—nothing has captured the public’s attention like the end of free expression at America’s institutions of higher learning.
Yet disruption, confusion, and even violence are not new campus phenomena. And it’s hard to imagine that young adults who deployed brute force in the 1960s and ’70s were deeply committed to the open and peaceful exchange of ideas.
There may also be reason for optimism. The rough and tumble on campus in the 1960s and ’70s produced a more even-tempered ’80s and ’90s, and colleges are probably heading for another course correction. In covering the ruckuses at Yale, Missouri, and elsewhere, I’ve talked to professors and students who are figuring out how to respond to the illiberalism, even if the reaction is delayed. The University of Chicago put out a set of free-speech principles last year, and other schools such as Princeton and Purdue have endorsed them.
The NARPs—Non-Athletic Regular People, as they are sometimes known on campus—still outnumber the social-justice warriors, who appear to be overplaying their hand. Case in point is the University of Missouri, which experienced a precipitous drop in enrollment after instructor Melissa Click and her ilk stoked racial tensions last spring. The college has closed dorms and trimmed budgets. Which brings us to another silver lining: The economic model of higher education (exorbitant tuition to pay ever more administrators) may blow up traditional college before the fascists can.
Note also that the anti-speech movement is run by rich kids. A Brookings Institution analysis from earlier this year discovered that “the average enrollee at a college where students have attempted to restrict free speech comes from a family with an annual income $32,000 higher than that of the average student in America.” Few rank higher in average income than those at Middlebury College, where students evicted scholar Charles Murray in a particularly ugly scene. (The report notes that Murray was received respectfully at Saint Louis University, “where the median income of students’ families is half Middlebury’s.”) The impulses of over-adulated 20-year-olds may soon be tempered by the tyranny of having to show up for work on a daily basis.
None of this is to suggest that free speech is enjoying some renaissance either on campus or in America. But perhaps as the late Wall Street Journal editorial-page editor Robert Bartley put it in his valedictory address: “Things could be worse. Indeed, they have been worse.”
Kate Bachelder Odell is an editorial writer for the Wall Street Journal.
Jonathan Rauch:
Is free speech under threat? The one-syllable answer is “yes.” The three-syllable answer is: “Yes, of course.” Free speech is always under threat, because it is not only the single most successful social idea in all of human history, it is also the single most counterintuitive. “You mean to say that speech that is offensive, untruthful, malicious, seditious, antisocial, blasphemous, heretical, misguided, or all of the above deserves government protection?” That seemingly bizarre proposition is defensible only on the grounds that the marketplace of ideas turns out to be the most powerful engine of knowledge, prosperity, liberty, social peace, and moral advancement that our species has had the good fortune to discover.
Every new generation of free-speech advocates will need to get up every morning and re-explain the case for free speech and open inquiry—today, tomorrow, and forever. That is our lot in life, and we just need to be cheerful about it. At discouraging moments, it is helpful to remember that the country has made great strides toward free speech since 1798, when the Adams administration arrested and jailed its political critics; and since the 1920s, when the U.S. government banned and burned James Joyce’s great novel Ulysses; and since 1954, when the government banned ONE, a pioneering gay journal. (The cover article was a critique of the government’s indecency censors, who censored it.) None of those things could happen today.
I suppose, then, the interesting question is: What kind of threat is free speech under today? In the present age, direct censorship by government bodies is rare. Instead, two more subtle challenges hold sway, especially, although not only, on college campuses. The first is a version of what I called, in my book Kindly Inquisitors, the humanitarian challenge: the idea that speech that is hateful or hurtful (in someone’s estimation) causes pain and thus violates others’ rights, much as physical violence does. The other is a version of what I called the egalitarian challenge: the idea that speech that denigrates minorities (again, in someone’s estimation) perpetuates social inequality and oppression and thus also is a rights violation. Both arguments call upon administrators and other bureaucrats to defend human rights by regulating speech rights.
Both doctrines are flawed to the core. Censorship harms minorities by enforcing conformity and entrenching majority power, and it no more ameliorates hatred and injustice than smashing thermometers ameliorates global warming. If unwelcome words are the equivalent of bludgeons or bullets, then the free exchange of criticism—science, in other words—is a crime. I could go on, but suffice it to say that the current challenges are new variations on ancient themes—and they will be followed, in decades and centuries to come, by many, many other variations. Memo to free-speech advocates: Our work is never done, but the really amazing thing, given the proposition we are tasked to defend, is how well we are doing.
Jonathan Rauch is a senior fellow at the Brookings Institution and the author of Kindly Inquisitors: The New Attacks on Free Thought.
Nicholas Quinn Rosenkranz:
Speech is under threat on American campuses as never before. Censorship in various forms is on the rise. And this year, the threat to free speech on campus took an even darker turn, toward actual violence. The prospect of Milo Yiannopoulos speaking at Berkeley provoked riots that caused more than $100,000 worth of property damage on the campus. The prospect of Charles Murray speaking at Middlebury led to a riot that put a liberal professor in the hospital with a concussion. Ann Coulter’s speech at Berkeley was cancelled after the university determined that none of the appropriate venues could be protected from “known security threats” on the date in question.
The free-speech crisis on campus is caused, at least in part, by a more insidious campus pathology: the almost complete lack of intellectual diversity on elite university faculties. At Yale, for example, the number of registered Republicans in the economics department is zero; in the psychology department, there is one. Overall, there are 4,410 faculty members at Yale, and the total number of those who donated to a Republican candidate during the 2016 primaries was three.
So when today’s students purport to feel “unsafe” at the mere prospect of a conservative speaker on campus, it may be easy to mock them as “delicate snowflakes,” but in one sense, their reaction is understandable: If students are shocked at the prospect of a Republican behind a university podium, perhaps it is because many of them have never before laid eyes on one.
To see the connection between free speech and intellectual diversity, consider the recent commencement speech of Harvard President Drew Gilpin Faust:
Universities must be places open to the kind of debate that can change ideas. . . . Silencing ideas or basking in intellectual orthodoxy independent of facts and evidence impedes our access to new and better ideas, and it inhibits a full and considered rejection of bad ones. . . . We must work to ensure that universities do not become bubbles isolated from the concerns and discourse of the society that surrounds them. Universities must model a commitment to the notion that truth cannot simply be claimed, but must be established—established through reasoned argument, assessment, and even sometimes uncomfortable challenges that provide the foundation for truth.
Faust is exactly right. But, alas, her commencement audience might be forgiven a certain skepticism. After all, the number of registered Republicans in several departments at Harvard—e.g., history and psychology—is exactly zero. In those departments, the professors themselves may be “basking in intellectual orthodoxy” without ever facing “uncomfortable challenges.” This may help explain why some students will do everything in their power to keep conservative speakers off campus: They notice that faculty hiring committees seem to do exactly the same thing.
In short, it is a promising sign that true liberal academics like Faust have started speaking eloquently about the crucial importance of civil, reasoned disagreement. But they will be more convincing on this point when they hire a few colleagues with whom they actually disagree.
Nicholas Quinn Rosenkranz is a professor of law at Georgetown. He serves on the executive committee of Heterodox Academy, which he co-founded, on the board of directors of the Federalist Society, and on the board of directors of the Foundation for Individual Rights in Education (FIRE).
Ben Shapiro
In February, I spoke at California State University in Los Angeles. Before my arrival, professors informed students that a white supremacist would be descending on the school to preach hate; threats of violence soon prompted the administration to cancel the event. I vowed to show up anyway. One hour before the event, the administration backed down and promised to guarantee that the event could go forward, but police officers were told not to stop the 300 students, faculty, and outside protesters who blocked and assaulted those who attempted to attend the lecture. We ended up trapped in the auditorium, with the authorities telling students not to leave for fear of physical violence. I was rushed from campus under armed police guard.
Is free speech under assault?
Of course it is.
On campus, free speech is under assault thanks to a perverse ideology of intersectionality that claims victim identity is of primary value and that views are a merely secondary concern. As a corollary, if your views offend someone who outranks you on the intersectional hierarchy, your views are treated as violence—threats to identity itself. On campus, statements that offend an individual’s identity have been treated as “microaggressions”—actual aggressions against another, ostensibly worthy of violence. Words, students have been told, may not break bones, but they will prompt sticks and stones, and rightly so.
Thus, protesters around the country—leftists who see verbiage as violence—have, in turn, used violence in response to ideas they hate. Leftist local authorities then use the threat of violence as an excuse to discriminate ideologically against conservatives. This means public intellectuals like Charles Murray being run off campus and his leftist professorial escort viciously assaulted; it means Ann Coulter being targeted for violence at Berkeley; it means universities preemptively banning me and Ayaan Hirsi Ali and Condoleezza Rice and even Jason Riley.
The campus attacks on free speech are merely the most extreme iteration of an ideology that spans from left to right: the notion that your right to free speech ends where my feelings begin. Even Democrats who say that Ann Coulter should be allowed to speak at Berkeley say that nobody should be allowed to contribute to a super PAC (unless you’re a union member, naturally).
Meanwhile, on the right, the president’s attacks on the press have convinced many Republicans that restrictions on the press wouldn’t be altogether bad. A Vanity Fair/60 Minutes poll in late April found that 36 percent of Americans thought freedom of the press “does more harm than good.” Undoubtedly, some of that is due to the media’s obvious bias. CNN’s Jeff Zucker has targeted the Trump administration for supposedly quashing journalism, but he was silent when the Obama administration’s Department of Justice cracked down on reporters from the Associated Press and Fox News, and when hacks like Deputy National Security Adviser Ben Rhodes openly sold lies regarding Iran. But for some on the right, the response to press falsities hasn’t been to call for truth, but to instead echo Trumpian falsehoods in the hopes of damaging the media. Free speech is only important when people seek the truth. Leftists traded truth for tribalism long ago; in response, many on the right seem willing to do the same. Until we return to a common standard under which facts matter, free speech will continue to rest on tenuous grounds.
Ben Shapiro is the editor in chief of The Daily Wire and the host of The Ben Shapiro Show.
Judith Shulevitz
It’s tempting to blame college and university administrators for the decline of free speech in America, and for years I did just that. If the guardians of higher education won’t inculcate the habits of mind required for serious thinking, I thought, who will? The unfettered but civil exchange of ideas is the basic operation of education, just as addition is the basic operation of arithmetic. And universities have to teach both the unfettered part and the civil part, because arguing in a respectful manner isn’t something anyone does instinctively.
So why change my mind now? Schools still cling to speech codes, and there still aren’t enough deans like the one at the University of Chicago who declared his school a safe-space-free zone. My alma mater just handed out prizes for “enhancing race and/or ethnic relations” to two students caught on video harassing the dean of their residential college, one screaming at him that he’d created “a space for violence to happen,” the other placing his face inches away from the dean’s and demanding, “Look at me.” All this because they deemed a thoughtful if ill-timed letter about Halloween costumes written by the dean’s wife to be an act of racist aggression. Yale should discipline students who behave like that, even if they’re right on the merits (I don’t think they were, but that’s not the point). They certainly don’t deserve awards. I can’t believe I had to write that sentence.
But in abdicating their responsibilities, the universities have enabled something even worse than an attack on free speech. They’ve unleashed an assault on themselves. There’s plenty of free speech around; we know that because so much bad speech—low-minded nonsense—tests our constitutional tolerance daily, and that’s holding up pretty well. (As Nicholas Lemann observes elsewhere in this symposium, Facebook and Google represent bigger threats to free speech than students and administrators.) What’s endangered is good speech.
Universities were setting themselves up to be used. Provocateurs exploit the atmosphere on campus to goad overwrought students, then gleefully trash the most important bastion of our crumbling civil society. Higher education and everything it stands for—logical argument, the scientific method, epistemological rigor—start to look illegitimate. Voters perceive tenure and research and higher education itself as hopelessly partisan and unworthy of taxpayers’ money.
The press is a secondary victim of this process of delegitimization. If serious inquiry can be waved off as ideology, then facts won’t be facts and reporting can’t be trusted. All journalism will be equal to all other journalism, and all journalists will be reduced to pests you can slam to the ground with near impunity. Politicians will be able to say anything and do just about anything and there will be no countervailing authority to challenge them. I’m pretty sure that that way lies Putinism and Erdoganism. And when we get to that point, I’m going to start worrying about free speech again.
Judith Shulevitz is a critic in New York.
Harvey Silverglate
Free speech is, and has always been, threatened. The title of Nat Hentoff’s 1993 book Free Speech for Me—But Not for Thee is no less true today than at any time, even as the Supreme Court has accorded free speech a more absolute degree of protection than in any previous era.
Since the 1980s, the high court has decided most major free-speech cases in favor of speech, with most of the major decisions being unanimous or nearly so.
Women’s-rights advocates were turned back by the high court in 1986 when they sought to ban the sale of printed materials that, because deemed pornographic by some, were alleged to promote violence against women. Censorship in the name of gender-based protection thus failed to gain traction.
Despite the demands of civil-rights activists, the Supreme Court in 1992 declared cross-burning to be a protected form of expression in R.A.V. v. City of St. Paul, a decision later refined to strengthen a narrow exception for when cross-burning occurs primarily as a physical threat rather than merely an expression of hatred.
Other attempts at First Amendment circumvention have been met with equally decisive rebuff. When the Reverend Jerry Falwell sued Hustler magazine publisher Larry Flynt for defamation growing out of a parody depicting Falwell’s first sexual encounter as a drunken tryst with his mother in an outhouse, a unanimous Supreme Court lectured on the history of parody as a constitutionally protected, even if cruel, form of social and political criticism.
When the South Boston Allied War Veterans, sponsor of Boston’s Saint Patrick’s Day parade, sought to exclude a gay veterans’ group from marching under its own banner, the high court unanimously held that as a private entity, even though marching in public streets, the Veterans could exclude any group marching under a banner conflicting with the parade’s socially conservative message, notwithstanding public-accommodations laws. The gay group could have its own parade but could not rain on that of the conservatives.
Despite such legal clarity, today’s most potent attacks on speech are coming, ironically, from liberal-arts colleges. Ubiquitous “speech codes” limit speech that might insult, embarrass, or “harass,” in particular, members of “historically disadvantaged” groups. “Safe spaces” and “trigger warnings” protect purportedly vulnerable students from hearing words and ideas they might find upsetting. Student demonstrators and threats of violence have forced the cancellation of controversial speakers, left and right.
It remains unclear how much campus censorship results from politically correct faculty, control-obsessed student-life administrators, or students socialized and indoctrinated into intolerance. My experience suggests that the bureaucrats are primarily, although not entirely, to blame. When sued, colleges either lose or settle, pay a modest amount, and then return to their censorious ways.
This trend threatens the heart and soul of liberal education. Eventually it could infect the entire society as these students graduate and assume influential positions. Whether a resulting flood of censorship ultimately overcomes legal protections and weakens democracy remains to be seen.
Harvey Silverglate, a Boston-based lawyer and writer, is the co-author of The Shadow University: The Betrayal of Liberty on America’s Campuses (Free Press, 1998). He co-founded the Foundation for Individual Rights in Education in 1999 and is on FIRE’s board of directors. He spent some three decades on the board of the ACLU of Massachusetts, two of those years as chairman. Silverglate taught at Harvard Law School for a semester during a sabbatical he took in the mid-1980s.
Christina Hoff Sommers
When Heather Mac Donald’s “blue lives matter” talk was shut down by a mob at Claremont McKenna College, the president of neighboring Pomona College sent out an email defending free speech. Twenty-five students shot back a response: “Heather Mac Donald is a fascist, a white supremacist . . . classist, and ignorant of interlocking systems of domination that produce the lethal conditions under which oppressed peoples are forced to live.”
Some blame the new campus intolerance on hypersensitive, over-trophied millennials. But the students who signed that letter don’t appear to be fragile. Nor do those who recently shut down lectures at Berkeley, Middlebury, DePaul, and Cal State LA. What they are is impassioned. And their passion is driven by a theory known as intersectionality.
Intersectionality is the source of the new preoccupation with microaggressions, cultural appropriation, and privilege-checking. It’s the reason more than 200 colleges and universities have set up Bias Response Teams. Students who overhear potentially “otherizing” comments or jokes are encouraged to make anonymous reports to their campus BRTs. A growing number of professors and administrators have built their careers around intersectionality. What is it exactly?
Intersectionality is a neo-Marxist doctrine that views racism, sexism, ableism, heterosexism, and all forms of “oppression” as interconnected and mutually reinforcing. Together these “isms” form a complex arrangement of advantages and burdens. A white woman is disadvantaged by her gender but advantaged by her race. A Latino is burdened by his ethnicity but privileged by his gender. According to intersectionality, American society is a “matrix of domination,” with affluent white males in control. Not only do they enjoy most of the advantages, they also determine what counts as “truth” and “knowledge.”
But marginalized identities are not without resources. According to one of intersectionality’s leading theorists, Patricia Collins (former president of the American Sociology Association), disadvantaged groups have access to deeper, more liberating truths. To find their voice, and to enlighten others to the true nature of reality, they require a safe space—free of microaggressive put-downs and imperious cultural appropriations. Here they may speak openly about their “lived experience.” Lived experience, according to intersectional theory, is a better guide to the truth than self-serving Western and masculine styles of thinking. So don’t try to refute intersectionality with logic or evidence: That only proves that you are part of the problem it seeks to overcome.
How could comfortably ensconced college students be open to a convoluted theory that describes their world as a matrix of misery? Don’t they flinch when they hear intersectional scholars like bell hooks refer to the U.S. as an “imperialist, white-supremacist, capitalist patriarchy”? Most take it in stride because such views are now commonplace in high-school history and social studies texts. And the idea that knowledge comes from lived experience rather than painstaking study and argument is catnip to many undergrads.
Silencing speech and forbidding debate is not an unfortunate by-product of intersectionality—it is a primary goal. How else do you dismantle a lethal system of oppression? As the protesting students at Claremont McKenna explained in their letter: “Free speech . . . has given those who seek to perpetuate systems of domination a platform to project their bigotry.” To the student activists, thinkers like Heather Mac Donald and Charles Murray are agents of the dominant narrative, and their speech is “a form of violence.”
It is hard to know how our institutions of higher learning will find their way back to academic freedom, open inquiry, and mutual understanding. But as long as intersectional theory goes unchallenged, campus fanaticism will intensify.
Christina Hoff Sommers is a resident scholar at the American Enterprise Institute. She is the author of several books, including Who Stole Feminism? and The War Against Boys. She also hosts The Factual Feminist, a video blog. @Chsommers
John Stossel
Yes, some college students do insane things. Some called police when they saw “Trump 2016” chalked on sidewalks. The vandals at Berkeley and the thugs who assaulted Charles Murray are disgusting. But they are a minority. And these days people fight back.
Someone usually videotapes the craziness. Yale’s “Halloween costume incident” drove away two sensible instructors, but videos mocking Yale’s snowflakes, like “Silence U,” make such abuse less likely. Groups like Young America’s Foundation (YAF) publicize censorship, and the Foundation for Individual Rights in Education (FIRE) sues schools that restrict speech.
Consciousness has been raised. On campus, the worst is over. Free speech has always been fragile. I once took cameras to Seton Hall law school right after a professor gave a lecture on free speech. Students seemed to get the concept. Sean, now a lawyer, said, “Protect freedom for thought we hate; otherwise you never have a society where ideas clash, and we come up with the best idea.” So I asked, “Should there be any limits?” Students listed “fighting words,” “shouting fire in a theater,” malicious libel, etc.—reasonable court-approved exceptions. But then they went further. Several wanted bans on “hate” speech. “No value comes out of hate speech,” said Javier. “It inevitably leads to violence.”
No, it doesn’t, I argued. “Also, doesn’t hate speech bring ideas into the open, so you can better argue about them, bringing you to the truth?”
“No,” replied Floyd. “With hate speech, more speech is just violence.”
So I pulled out a big copy of the First Amendment and wrote, “exception: hate speech.”
Two students wanted a ban on flag desecration “to respect those who died to protect it.”
One wanted bans on blasphemy:
“Look at the gravity of the harm versus the value in blasphemy—the harm outweighs the value.”
Several wanted a ban on political speech by corporations because of “the potential for large corporations to improperly influence politicians.”
Finally, Jillian, also now a lawyer, wanted hunting videos banned.
“It encourages harm down the road.”
I asked her, incredulously, “You’re comfortable locking up people who make a hunting film?”
“Oh, yeah,” she said. “It’s unnecessary cruelty to feeling and sentient beings.”
So, I picked up my copy of the Bill of Rights again. After “no law . . . abridging freedom of speech,” I added: “Except hate speech, flag burning, blasphemy, corporate political speech, depictions of hunting . . . ”
That embarrassed them. “We may have gone too far,” said Sean. Others agreed. One said, “Cross out the exceptions.” Free speech survived, but it was a close call. Respect for unpleasant speech will always be thin. Then-Senator Hillary Clinton wanted violent video games banned. John McCain and Russ Feingold tried to ban political speech. Donald Trump wants new libel laws, and if you burn a flag, he tweeted, consequences might be “loss of citizenship or a year in jail!” Courts or popular opinion killed those bad ideas.
Free speech will survive, assuming those of us who appreciate it use it to fight those who would smother it.
John Stossel is a FOX News/FOX Business Network Contributor.
Warren Treadgold
Even citizens of dictatorships are free to praise the regime and to talk about the weather. The only speech likely to be threatened anywhere is the sort that offends an important and intolerant group. What is new in America today is a leftist ideology that threatens speech precisely because it offends certain important and intolerant groups: feminists and supposedly oppressed minorities.
So far this new ideology is clearly dominant only in colleges and universities, where it has become so strong that most controversies concern outside speakers invited by students, not faculty speakers or speakers invited by administrators. Most academic administrators and professors are either leftists or have learned not to oppose leftism; otherwise they would probably never have been hired. Administrators treat even violent leftist protestors with respect and are ready to prevent conservative and moderate outsiders from speaking rather than provoke protests. Most professors who defend conservative or moderate speakers argue that the speakers’ views are indeed noxious but say that students should be exposed to them to learn how to refute them. This is very different from encouraging a free exchange of ideas.
Although the new ideology began on campuses in the ’60s, it gained authority outside them largely by means of several majority decisions of the Supreme Court, from Roe (1973) to Obergefell (2015). The Supreme Court decisions that endanger free speech are based on a presumed consensus of enlightened opinion that certain rights favored by activists have the same legitimacy as rights explicitly guaranteed by the Constitution—or even more legitimacy, because the rights favored by activists are assumed to be so fundamental that they need no grounding in specific constitutional language. The Court majorities found restricting abortion rights or homosexual marriage, as large numbers of Americans wish to do, to be constitutionally equivalent to restricting black voting rights or interracial marriage. Any denial of such equivalence therefore opposes fundamental constitutional rights and can be considered hate speech, advocating psychological and possibly physical harm to groups like women seeking abortions or homosexuals seeking approval. Such speech may still be constitutionally protected, but acting upon it is not.
This ideology of forbidding allegedly offensive speech has spread to most of the Democratic Party and the progressive movement. Rather than seeing themselves as taking one side in a free debate, progressives increasingly argue (for example) that opposing abortion is offensive to women and supporting the police is offensive to blacks. Some politicians object so strongly to such speech that despite their interest in winning votes, they attack voters who disagree with them as racists or sexists. Expressing views that allegedly discriminate against women, blacks, homosexuals, and various other minorities can now be grounds for a lawsuit.
Speech that supposedly offends women or minorities has already cost some people their careers, their businesses, and their opportunities to deliver or hear speeches. Such intimidation is the intended result of an ideology that threatens free speech.
Warren Treadgold is a professor of history at Saint Louis University.
Matt Welch
Like a sullen zoo elephant rocking back and forth from leg to leg, there is an oversized paradox we’d prefer not to see standing smack in the sightlines of most of our policy debates. Day by day, even minute by minute, America simultaneously gets less free in the laboratory, but more free in the field. Individuals are constantly expanding the limits and applications of their own autonomy, even as government transcends prior restraints on how far it can reach into our intimate business.
So it is that the Internal Revenue Service can charge foreign banks with collecting taxes on U.S. citizens (thereby causing global financial institutions to shun many of the estimated 6 million-plus Americans who live abroad), even while blockchain virtuosos make illegal transactions wholly undetectable to authorities. It has never been easier for Americans to travel abroad, and it’s never been harder to enter the U.S. without showing passports, fingerprints, retinal scans, and even social-media passwords.
What’s true for banking and tourism is doubly true for free speech. Social media has given everyone not just a platform but a megaphone (as unreadable as our Facebook timelines have all become since last November). At the same time, the federal government during this unhappy 21st century has continuously ratcheted up prosecutorial pressure against leakers, whistleblowers, investigative reporters, and technology companies.
A hopeful bulwark against government encroachment unique to the free-speech field is the Supreme Court’s very strong First Amendment jurisprudence in the past decade or two. Donald Trump, like Hillary Clinton before him, may prattle on about locking up flag-burners, but Antonin Scalia and the rest of SCOTUS protected such expression back in 1990. Barack Obama and John McCain (and Hillary Clinton—she’s as bad as any recent national politician on free speech) may lament the Citizens United decision, but it’s now firmly legal to broadcast unfriendly documentaries about politicians without fear of punishment, no matter the electoral calendar.
But in this very strength lies what might be the First Amendment’s most worrying vulnerability. Barry Friedman, in his 2009 book The Will of the People, made the persuasive argument that the Supreme Court typically ratifies, post facto, where public opinion has already shifted. Today’s culture of free speech could be tomorrow’s legal framework. If so, we’re in trouble.
For evidence of free-speech slippage, just read around you. When both major-party presidential nominees react to terrorist attacks by calling to shut down corners of the Internet, and when their respective supporters are actually debating the propriety of sucker punching protesters they disagree with, it’s hard to escape the conclusion that our increasingly shrill partisan sorting is turning the very foundation of post-1800 global prosperity into just another club to be swung in our national street fight.
In the eternal cat-and-mouse game between private initiative and government control, the former is always advantaged by the latter’s fundamental incompetence. But what if the public willingly hands government the power to muzzle? It may take a counter-cultural reformation to protect this most noble of American experiments.
Matt Welch is the editor at large of Reason.
Adam J. White
Free speech is indeed under threat on our university campuses, but the threat did not begin there and it will not end there. Rather, the campus free-speech crisis is a particularly visible symptom of a much more fundamental crisis in American culture.
The problem is not that some students, teachers, and administrators reject traditional American values and institutions, or even that they are willing to menace or censor others who defend those values and institutions. Such critics have always existed, and they can be expected to use the tools and weapons at their disposal. The problem is that our country seems to produce too few students, teachers, and administrators who are willing or able to respond to them.
American families produce children who arrive on campus unprepared for, or uninterested in, defending our values and institutions. For our students who are focused primarily on their career prospects (if on anything at all), “[c]ollege is just one step on the continual stairway of advancement,” as David Brooks observed 16 years ago. “They’re not trying to buck the system; they’re trying to climb it, and they are streamlined for ascent. Hence they are not a disputatious group.”
Meanwhile, parents bear incomprehensible financial burdens to get their kids through college, without a clear sense of precisely what their kids will get out of these institutions in terms of character formation or civic virtue. With so much money at stake, few can afford for their kids to pursue more than career prospects.
Those problems are not created on campus, but they are exacerbated there, as too few college professors and administrators see their institutions as cultivators of American culture and republicanism. Confronted with activists’ rage, they offer no competing vision of higher education—let alone a compelling one.
Ironically, we might borrow a solution from the Left. Where progressives would leverage state power in service of their health-care agenda, we could do the same for education. State legislatures and governors, recognizing the present crisis, should begin to reform and renegotiate the fundamental nature of state universities. By making state universities more affordable, more productive, and more reflective of mainstream American values, they will attract students—and create incentives for competing private universities to follow suit.
Let’s hope they do it soon, for what’s at stake is much more than just free speech on campus, or even free speech writ large. In our time, as in Tocqueville’s, “the instruction of the people powerfully contributes to the support of a democratic republic,” especially “where instruction which awakens the understanding is not separated from moral education which amends the heart.” We need our colleges to cultivate—not cut down—civic virtue and our capacity for self-government. “Republican government presupposes the existence of these qualities in a higher degree than any other form,” Madison wrote in Federalist 55. If “there is not sufficient virtue among men for self-government,” then “nothing less than the chains of despotism” can restrain us “from destroying and devouring one another.”
Adam J. White is a research fellow at the Hoover Institution.
Cathy Young
A writer gets expelled from the World Science Fiction Convention for criticizing the sci-fi community’s preoccupation with racial and gender “inclusivity” while moderating a panel. An assault on free speech, or an exercise of free association? How about when students demand the disinvitation of a speaker—or disrupt the speech? When a critic of feminism gets banned from a social-media platform for unspecified “abuse”?
Such questions are at the heart of many recent free-speech controversies. There is no censorship by government; but how concerned should we be when private actors effectively suppress unpopular speech? Even in the freest society, some speech will—and should—be considered odious and banished to unsavory fringes. No one weeps for ostracized Holocaust deniers or pedophilia apologists.
But shunned speech needs to remain a narrow exception—or acceptable speech will inexorably shrink. As current Federal Communications Commission chairman Ajit Pai cautioned last year, First Amendment protections will be hollowed out unless undergirded by cultural values that support a free marketplace of ideas.
Sometimes, attacks on speech come from the right. In 2003, an Iraq War critic, reporter Chris Hedges, was silenced at Rockford College in Illinois by hecklers who unplugged the microphone and rushed the stage; some conservative pundits defended this as robust protest. Yet the current climate on the left—in universities, on social media, in “progressive” journalism, in intellectual circles—is particularly hostile to free expression. The identity-politics left, fixated on subtle oppressions embedded in everyday attitudes and language, sees speech-policing as the solution.
Is hostility to free-speech values on the rise? New York magazine columnist Jesse Singal argues that support for restrictions on public speech offensive to minorities has remained steady, and fairly high, since the 1970s. Perhaps. But the range of what qualifies as offensive—and which groups are to be shielded—has expanded dramatically. In our time, a leading liberal magazine, the New Republic, can defend calls to destroy a painting of lynching victim Emmett Till because the artist is white and guilty of “cultural appropriation,” and a feminist academic journal can be bullied into apologizing for an article on transgender issues that dares to mention “male genitalia.”
There is also a distinct trend of “bad” speech being squelched by coercion, not just disapproval. That includes the incidents at Middlebury College in Vermont and at Claremont McKenna in California, where mobs not only prevented conservative speakers—Charles Murray and Heather Mac Donald—from addressing audiences but physically threatened them as well. It also includes the use of civil-rights legislation to enforce goodthink in the workplace: Businesses may face stiff fines if they don’t force employees to call a “non-binary” co-worker by the singular “they,” even when talking among themselves.
These trends make a mockery of liberalism and enable the kind of backlash we have seen with Donald Trump’s election. But the backlash can bring its own brand of authoritarianism. It’s time to start rebuilding the culture of free speech across political divisions—a project that demands, above all, genuine openness and intellectual consistency. Otherwise it will remain, as the late, great Nat Hentoff put it, a call for “free speech for me, but not for thee.”
Cathy Young is a contributing editor at Reason.
Robert J. Zimmer:
Free speech is not a natural feature of human society. Many people are comfortable with free expression for views they agree with but would withhold this privilege from those whose views they deem offensive. People justify such restrictions by various means: appeals to moral certainty, political agendas, the demand for change, opposition to change, the retention of power, resistance to authority, or, more recently, not wanting to feel uncomfortable. Moral certainty about one’s views or a willingness to indulge one’s emotions makes it easy to assert that others are doing true damage or creating unacceptable offense simply by presenting a fundamentally different perspective.
The resulting challenges to free expression may come in the form of laws, threats, pressure (whether societal, group, or organizational), or self-censorship in the face of a prevailing consensus. Specific forms of challenge may be more or less pronounced as circumstances vary. But the widespread temptation to consider the silencing of “objectionable” viewpoints as acceptable implies that the challenge to free expression is always present.
The United States today is no exception. We benefit from the First Amendment, which asserts that the government shall make no law abridging the freedom of speech. However, fostering a society supporting free expression involves matters far beyond the law. The ongoing and increasing demonization of one group by another creates a political and social environment conducive to suppressing speech. Even violent acts opposing speech can become acceptable or encouraged. Such behavior is evident at both political rallies and university events. Our greatest current threat to free expression is the emergence of a national culture that accepts the legitimacy of suppression of speech deemed objectionable by a segment of the population.
University and college campuses present a particularly vivid instance of this cultural shift. There have been many well-publicized episodes of speakers being disinvited or prevented from speaking because of their views. However, the problem is much deeper, as there is significant self-censorship on many campuses. Both faculty and students sometimes find themselves silenced by social and institutional pressures to conform to “acceptable” views. Ironically, the very mission of universities and colleges to provide a powerful and deeply enriching education for their students demands that they embrace and protect free expression and open discourse. Failing to do so significantly diminishes the quality of the education they provide.
My own institution, the University of Chicago, through the words and actions of its faculty and leaders since its founding, has asserted the importance of free expression and its essential role in embracing intellectual challenge. We continue to do so today as articulated by the Chicago Principles, which strongly affirm that “the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.” It is only in such an environment that universities can fulfill their own highest aspirations and provide leadership by demonstrating the value of free speech within society more broadly. A number of universities have joined us in reinforcing these values. But it remains to be seen whether the faculty and leaders of many institutions will truly stand up for these values, and in doing so provide a model for society as a whole.
Robert J. Zimmer is the president of the University of Chicago.