Commentary asked Edward Banfield, Nathan Glazer, Michael Harrington, Tom Kahn, Christopher Lasch, Robert Lekachman, Bayard Rustin, Gus Tyler, and George…
In recent months a heated debate has been taking place on the past, present, and future of social policy in the United States. In an effort to explore the major issues involved in this debate, we invited a number of writers representing diverse political and social views to discuss the following questions:
- It is frequently said that the Great Society was a failure. Do you agree with this assessment? If not, to what particular features of the Johnson program would you attribute its success? If so, where would you place the blame? Was it insufficient financing? Was it the “services” approach?
- It is also frequently said that Richard Nixon is leading a “counterrevolution” against the general trend of social policy of the last thirty or forty years. Do you agree with this characterization of the Nixon program? If so, toward what objectives would you say the “counterrevolution” is aimed? If not, how would you characterize the general thrust of the Nixon administration’s policies, especially as reflected in the new budget?
- What, in your judgment, would constitute a sound program in the area of social policy? In the light of the experience of the past ten years, what do you think the possibilities are in the foreseeable future? What are the limits?
Edward C. Banfield:
1) The Great Society was a Great Cornucopia overflowing with all sorts of goodies: civil-rights laws, the Office of Economic Opportunity (a cornucopia within a cornucopia), Model Cities (another cornucopia), manpower-training programs, compensatory-education programs, and so on—hundreds of items altogether. Their diversity resists any one-word verdict.
Many millions of dollars were spent monitoring and evaluating, but the products of these efforts are of little value for the present purpose. The evaluations are mostly of bits and pieces of certain programs—Community Action in some cities and from certain standpoints, for example. So far as I am aware, not a single program has been evaluated systematically and in detail. That would be impossible in most cases because of the vague and contradictory nature of the goals. Model Cities, for example, was intended to concentrate resources in order to make a “substantial impact” on poor neighborhoods, to improve decision-making procedures in the offices of mayors and city managers, and (among other things) to foster coordination, innovation, and institutional change. No one had, or has, any way of knowing exactly what was meant by such words and, apart from that, there was, and is, no way of judging how much of one goal (say, innovation) ought to have been sacrificed in order to secure more in terms of another (say, coordination).
It must be remembered, too, that much of what is associated in the public mind with the Great Society had long been in existence. When Secretary George Romney called the Federal Housing Administration programs “a $100-billion mistake,” he was not referring to those of the Great Society: the national housing goal was set by Congress in 1949 and even at that time the principal housing programs were a decade old.
Despite these considerations, I believe it is possible to separate out the Great Society programs and pass judgment upon them collectively. The Great Society had two general goals. The more widely publicized was that of bringing incomes up to what came to be called the poverty line. The other, which was more important from the standpoint of most of the professionals who participated in the design of the programs, was to bring “culturally deprived” persons into the “mainstream.” The chronically poor, especially the young among them, were to be given training in schools and work places so that they could get steady, high-paying jobs; their civil rights were to be protected and extended; and they were to be provided with better health, housing, and recreational facilities—all with “maximum feasible participation” on their part. This, it was thought, would reduce frustration and “alienation” and engender self-confidence, self-respect, and a healthy desire for political and economic independence.
Judged against these goals, almost all of the Great Society programs (the exceptions that I have in mind are the civil-rights laws) range from unsuccessful to counterproductive.
The number of the poor did, it is true, decline by one-fourth between 1965 and 1970. Without any doubt the Great Society programs accounted for some of this (OEO alone spent nearly $10 billion) but the Social Security program established by the New Deal accounted for more (mostly old people whose incomes had not been much below the line to begin with) and there were others—no one seems to know how many—whose increase of income was due to the natural growth of the economy. Indeed, on the whole poverty seems to have decreased at a slower rate in the 1960’s than before. Robert J. Lampman, in his valuable Ends and Means of Reducing Income Poverty (Markham, 1971), reports that the percentage of the total population in low-income status fell from 26 in 1947 to 19 in 1957 and to 12 in 1967. The principal factors affecting the rate of movement out of poverty are not the good intentions of legislators or the generosity of taxpayers: rather they are changes in the composition of the population, in occupations (especially farm versus non-farm), and in the size of the gross national product. According to Lampman and other economists, it is reasonable to expect that by 1980 no one will be below the existing poverty line. This would be the case no doubt even if the War on Poverty had never been declared.
There has also been progress toward the other goal of the Great Society programs—that of bringing the “culturally deprived” into the “mainstream.” But here again most of it is not attributable to the programs. The income return to blacks who finished school in recent years is now about equal to that of whites; this has encouraged young blacks to want—and to get—somewhat more schooling than do whites of the same ability as measured by test performance. The Great Society’s civil-rights laws deserve some of the credit for these developments, but surely the fundamental fact of the situation (accounting, among other things, for the passage of the laws) is that there has been a dramatic decline in white bigotry and insensitivity since the Second World War, resulting in vastly improved employment opportunities for the “culturally deprived” and making it possible for the motivated among them to find their way into the “mainstream.” There is no evidence, so far as I have been able to discover, that the Community Action and other Great Society programs designed to stimulate upward mobility have succeeded in doing so. Where motivation developed it may have done so in spite of these programs rather than because of them. There is no doubt but that the injection of many billions of dollars of public funds benefited the black communities. The people who gained most, however, were middle class, not “culturally deprived.”
That compensatory-education programs have not worked and probably cannot be made to work is a conclusion now widely accepted even among those who expected most from them. Robert Levine, OEO’s director of research in the Johnson administration, has written (in The Poor Ye Need Not Have with You, MIT Press, 1970) that in general “. . . the evaluation of educational programs shows that very little is known about what [will work] and even throws doubt on the importance of anything that might work. . . .” Much the same has been said by others with respect to delinquency control, manpower training, community action, coordination, and most of the other programs. Indeed, in a paper presented in March at the annual meeting of the Southwestern Political Science Association, Professor Robert J. Leonard of the University of Evansville showed that anti-poverty expenditures (OEO, VISTA, Community Action, and Head Start) in 1968 had no effect on poverty, education, employment, and crime in the cities to which they went, so far as could be judged by such crude but plausible measures as the proportion of employed males before and after.
Whatever judgment one makes as to the benefits of these programs, one must face the fact of their costs. These have been, and are still, very large. It has been estimated (by Charles Schultze and his collaborators of the Brookings Institution in Setting National Priorities: The 1973 Budget) that federal expenditures on the major Great Society programs increased from $1.7 billion in fiscal 1963 to $35.7 billion in fiscal 1973.
These money costs are far exceeded, in my judgment, by many other, more or less intangible, costs, especially the following: the multiplication of categorical-grant programs beyond the capacity of the executive branch to administer them, with the result being delay, confusion, waste and corruption, and the “elbowing aside” (as the President recently put it) of state and local governments and of the private sector and their further decline in vigor and capacity; the raising of expectations to unreasonable levels, leading to widespread disappointment and frustration and, on the part of quite a few, to the conclusion that this is a “sick” society “not worth saving”; the use of public funds in some cities to underwrite the “leadership” of known criminals, revolutionaries, and mountebanks who exploited—and in some instances terrorized and ultimately destroyed—neighborhoods and institutions over which they were enabled to gain control (for a case in point, see the account of the destruction of the Woodlawn area of Chicago in the Winter 1973 issue of the Public Interest); and, finally, the cooptation of most of the young potential leaders of the poor neighborhoods and their subsequent neutralization and, in many instances, demoralization.
It was not for lack of money that the Great Society programs failed. Some of the principal efforts—Model Cities, for example—had more of it than they could spend. Others spent prodigally without measurable achievement. Public-school expenditures, for example, increased from $19 billion to $52 billion in the 1960’s, but the test scores of pupils, which had previously been rising, declined. (However, it should be remembered that in this decade the schools were holding more low-achieving pupils for longer periods; presumably they had some success with a considerable number of them.)
If an “income” (as opposed to “services”) strategy means giving money to people rather than to governments, it is doubtful whether—except perhaps in the very long run—it would have succeeded any better in changing the style of life of those whose handicap was not simply, or even primarily, lack of income—that is, the “culturally deprived.” In my opinion, putting millions more on welfare (which is what the “income strategy” seems to mean in practice) would permanently seal a great many people off from the world of work. I agree with the authors of Work in America (MIT Press, 1973) that “the key to reducing familial dependency on the government lies in the opportunity [I would add also the disposition to accept the opportunity] for the central provider to work full-time at a living wage.”
It is a virtue of the “services strategy” that the conspicuous failure of a “service” makes it politically vulnerable. Not so with the “income strategy”; whatever ill effects it produced would probably pass unnoticed or, in any event, not be charged against it. Politically it would be unassailable. Thus we have recently been told by Joel Handler that the theory that “if enough people get on welfare it will be politically untenable to treat them as ‘undeserving’ . . .” is “both brilliant and humane” (Reforming the Poor, Basic Books, 1972).
2) President Nixon no doubt wishes that Americans would do more for themselves and expect less to be done for them by government. Nevertheless I do not think that he is in the least likely to lead a “counterrevolution” against the trend of social policy. He knows, better perhaps than any man alive, that it is an indispensable condition of the working of the American political system that it offer strong incentives to all sorts of interests to press for advantages (not always “selfish” ones of course) and that incentives exist only as there is expectation of at least partial and occasional successes. He knows, therefore, that unwise and even outrageous measures must frequently be adopted, and that if it were otherwise (if, say, Congress were somehow made “responsible”) the result would be to deprive the system of the energy that makes it work. That he is himself a very strenuous exerter of influence is evidence that he knows how to act effectively within the system, not that he wants to change it.
Although he may deplore it, the President must also be fully aware that the volume of demands placed upon the government is bound to increase. As Americans become more affluent, schooled, and leisured they discover (and also invent) more and more “social problems” which (they fondly suppose) can be “solved” if the government “really cares” (that is, if it passes enough laws, hires enough officials, and spends enough money). That one whose business it is to come to terms with reality, and who has shown himself to be extraordinarily adept at this business, will lead a “counterrevolution” against so conspicuous a feature of reality seems most unlikely. The President is a politician, not a preacher. His task and talent are for making things work, not for changing them.
The view that I am taking is in no way contradicted by the current budget proposals. The President is trying to curb inflation, avoid increases in taxes, get rid of programs that almost everyone knows have not worked, consolidate others for better administration, and turn responsibility for a wide range of matters back to the states and cities. Even if his budget contained no new initiatives (in fact it contains several major ones), it could not reasonably be taken as a portent of “counterrevolution.”
The President’s efforts to shift responsibilities to the states and local governments might perhaps be judged “counterrevolutionary” if he were leaving it to them to finance the programs. But this is not what he is doing. The fact is—although one would never guess it from the howls of mayors and governors—that the 1974 budget proposes to give state and local governments more federal aid than they received this year (to make the figures comparable one must take into account that in 1974 public assistance for the aged, blind, and disabled will go to them directly rather than via grants to the states) and about four and one-half times what they received a decade ago. And this although state and local governments are presently enjoying an aggregate revenue surplus which, if they do not lower their tax rates, is expected to reach some $13 billion in 1975.
I see revenue sharing and the New Federalism in general as a sort of domestic Vietnamization strategy under which Washington will provide the “villagers” with material resources and technical advice while allowing them to fight the “war” in their own way. This is not necessarily a strategy for winding down the “war.” It may merely represent a facing of the fact, obvious in the Johnson administration but not faced by it, that federal programs have become too many and too complex to be administered from Washington. Another possibility—I find this more probable, although I do not suppose that it is the President’s wish—is that the new strategy is preparatory to an escalation of the “war” and the opening of vast new fronts (health seems to be the most likely one now that education and welfare are both stalemated).
3) In my judgment a sound program in the area of social policy would involve a radical devolution of federal activities to state and local government and, beyond that, of many public ones to competitive markets. Such a program is, however, incompatible with the nature of our political system, which is energized by the pressures that interests exert to get things from government. Since I believe that despite its evident faults this political system is vastly better than any practical alternative, I am in the awkward position of having to conclude that a sound program is really unsound. When constituents begin asking politicians, “What have you undone for me lately?” the situation will improve.
Nathan Glazer:
1) Writing in 1973, ten years after the planning of the poverty program began in Washington, I still find it very hard to give summary judgments on the Great Society programs. Whether one speaks of Medicare and Medicaid, the Elementary and Secondary Education Act and all its titles, the programs of the Office of Economic Opportunity; and whether one considers Community Action programs, Head Start and its progeny, Legal Services, Neighborhood Health Care Centers, Neighborhood Youth Corps, Job Corps, or all the others; or leased housing, rent supplements, Model Cities, Section 235 (homeownership for the poor) or Section 236 (rental housing for the poor), or a great range of manpower programs (and one can of course expand this list considerably)—it is almost impossible for those of us who are not experts in given programs and their variants in fifty states and thousands of communities to give a simple verdict of “success” or “failure.”
Even though I have spent most of my time for ten years on the study of such programs and teaching about them, I find it difficult to say with any great confidence that this should go or that should stay; or to propose improvements with any greater confidence. I believe some of the implications of such a confession of non-competence are: (a) because of the federal system and the great variety of situations we find in regions, states, and cities, and indeed in neighborhoods of cities, many of these programs—such as Model Cities and Neighborhood Youth Corps and Community Action—have meant very different things in different places; (b) the design of these programs (e.g., emphasis on local variation, experiment, demonstration) often guaranteed that they would be infinitely various and thus difficult to evaluate as programs; (c) we have not had in any case very good systems of reporting and evaluation to enable us to make summary judgments—in the nature of the case (see points a and b) we probably couldn’t; (d) the objectives of these programs—were they to relieve state and city budgets? employ more blacks and minorities? overcome deeply-rooted social problems? pay off the professionals enough to permit some non-professionals to get into the act? punish Republican governors and mayors and reward Democrats, or vice versa, depending on who was in power?—were often so mixed that for this reason alone the question of success or failure was a difficult one to decide; (e) the independent evaluations of these programs by outside social scientists suffered of course from all the problems described in a, b, c, and d above, as well as some special ones. Thus, it turned out we had much better studies of the beginnings of programs than of how they operated once started and once the initial phases of conflict were overcome. We know from Stephan Thernstrom’s Poverty, Planning, and Politics in the New Boston: The Origins of ABCD, Peter Marris and Martin Rein’s Dilemmas of Social Reform, Daniel P. Moynihan’s Maximum Feasible Misunderstanding, and other studies, what was wrong with the design of the central part of the poverty program, the Community Action program, and how much unrewarding conflict was built into it. After it was more or less running at the end of the 1960’s our data—at least in the form of good studies by social scientists—became sparser. Yet we may well be ending the program on the basis of what it was intended to be when it began, the confusion with which it started, the criticisms made of it then, rather than on the basis of how it is actually conducted in hundreds of cities.
I say we may be—I don’t know. In each of these areas one must go, not to general pundits or experts, but to special experts, who have detailed and comprehensive knowledge. Yet when one does go looking one discovers in despair that no one has detailed and comprehensive knowledge. This is not a general confession that we never know enough to act; in some philosophical sense, undoubtedly we don’t. It is a confession that we have designed our programs so that it is very hard to know enough to act. Thus, we know enough generally about Social Security to act (although there are plenty of complexities in that program, too). The same amount of money is sent out from a central point to everyone in the same status. The program’s objectives are clear and simple, and thus it is easy to evaluate. It is not meant to solve the complex social, emotional, personal problems of the aged—what could? It has no input at all from cities and states and thus does not vary from location to location. It cannot be used to reward some areas and punish others, though of course Congress and the President scheme to see how they may take credit for one or another proposed change in this huge program.
Obviously Social Security is not a sound model for all social programs. There may be no way of dealing with the problems of juvenile delinquency, or declining neighborhoods, or manpower training, or medical care for the poor, using such a model. At the same time, of course, it could have been predicted that the design of many of the Great Society programs was one which meant that evaluation would be difficult or impossible. It was a design for pluralism, for conflict, for a mix of forces, for varied outcomes in different places.
Having said all this, let me pronounce a tentative summary judgment: the Great Society program was clearly not a uniform failure. No one basically challenges Medicare, Medicaid, the Civil Rights Act of 1964, the affirmative-action programs, expanded manpower-training programs. The housing programs are more of a mixed story. The shift away from conventional public housing represented by the legislation of 1965 and 1968 was necessary and a major achievement. It increased the rate of building of subsidized housing units tenfold by fiscal 1972. Most of these units were being built outside the blighted central city areas. It is also true that the new programs offered a good deal of possibility for corruption and excessive profit-making, and increased the required subsidy payments on subsidized housing units enormously. Clearly some changes in these programs are necessary. But certainly one must pronounce partial success.
The most difficult judgments to pronounce are in the most original area, that of neighborhood-controlled social programs, as in Community Action and Model Cities. My overall guess is that as of 1973 most of the money in these programs (it is, by the way, not very much—about a billion dollars) is providing useful social services to children, youth, and old people, and a good deal of employment to people who are doing no damage and who will not in the future find it easy to get jobs as good as the ones they now hold.
If we ask, did these programs overcome “poverty” and its problems, clearly we have to answer no. I put poverty in quotation marks because I do not think it was really poverty we were talking about. We were concerned in the early 1960’s with the anti-social behavior of large numbers of young people, with the failure of such institutions as family and school to socialize them into some form of decent behavior, and with the destructive effects of their behavior on city life. We were confused about why the transition from school to work was not taking place easily for so very many. If anything, the situation became considerably worse between 1963 and 1973. I do not know what the sources of such behavior are. Clearly they are in some way related to the experience of blacks in the United States, though Puerto Ricans in New York share in the behavior, and whites in other groups have also shown such behavior—perhaps not as marked—in the past, and still show some of it in the early 1970’s. I will not list all the potential hypotheses. The fact is we did not know what to do about it, and still don’t.
Would more money have helped? Frankly, I do not know—and I also do not know where it might have been spent to be a help. At first we defined the issue as that of the “drop-out.” We have had a good deal of research on dropouts since, and our conclusion seems to be that there is no particularly useful way of keeping all young people in school until eighteen or whatever age we define as being appropriate to earn a high-school diploma, and it is not particularly clear that drop-outs suffer by dropping out if there is an alternative for them to drop into (the army, an apprenticeship program, or something else). We tried to set up alternatives or supplements to school for drop-outs and potential drop-outs—Job Corps, Neighborhood Youth Corps, Upward Bound, and endless other programs. Have they been successes or failures? Should we have spent more on them? Could we have? Once such programs were launched they were not necessarily overwhelmed with potential clients—nor did those selected necessarily stay the term. It is not easy to pronounce an overall judgment. One can tell horror stories on the one side ($11,000 a year per trainee for training), and presumably success stories on the other—a rather substantial number of people who had gone through episodes as drug addicts and juvenile delinquents ended up in college programs (how we evaluate this is another story, yet in almost any college I would guess there is some attrition of the most self-defeating and community-damaging kinds of juvenile criminal behavior, as well, of course, as a considerable importation of such behavior into institutions that would not have experienced it had they been more selective).
Among the rather severe criticisms of this type of 60’s program was the argument that it represented a “services” strategy rather than a “jobs” or an “income” strategy. One should give Paul Goodman credit for performing the simple division of the sums that were expected to be spent on Mobilization for Youth on the Lower East Side in the early 60’s by the number of juveniles to be served, and for pointing out that we would be better off simply giving them the money. An ingenious observation: but was it true? I wonder. As it was, we hired great numbers of social workers and consultants, increasing their income. Many of these—few at the beginning, more later—came from minority groups; we were providing the jobs through these programs for the barely college-trained that other programs were producing; and from the same communities. This was a minor benefit, but many pointlessly overpaid social workers are now finding it hard to get a job where their qualifications or experience mean much. (I say this without being sure it is true. Perhaps private business is taking them in, in view of the affirmative-action pressure. I think it may be.) The services themselves generally provided a place out of the cold for young people, might have led back to school or to college, and, less positively, offered opportunities for hustles. If the sums had been distributed in income to young people, their effects would have been frightfully disruptive. The already damaged families, with fathers absent or earning little, would have found even more power shifting to the adolescent young. Presumably they might have been paid not to be delinquent, as they often were in summer-work programs, and one assumes that would have helped somewhat in reducing anti-social behavior. But it sounds like a difficult program to police and evaluate. Had it been income instead of services, we would have been in the same quandary of evaluation we are in today.
Of course, “jobs” instead of “services” was always the more attractive approach. And yet we should not underestimate the difficulty of this approach, either. It was no simple matter to provide jobs for the young unemployed (the area where the problem was most concentrated and most severe) or better jobs for the large numbers of black men who had already dropped out by their twenties, and indeed were not to be found by the census until they began returning to be counted in their forties and fifties, burnt out. After all, we were not dealing with a mass unemployment situation during most of this period as we had been in the 1930’s, when men with the experience of working, with the expectation of working, with habits that permitted work, were easily and usefully employed in vast numbers at subsistence wages. By the mid-1960’s, such a population did not exist, and many of those who were not working would not work at subsistence wages. All sorts of developments, apparently good in themselves, had conspired to produce such a result: lengthening the years of required schooling, raising the minimum wage, liberalizing welfare. At the bottom was the race problem in American life. So even if we had moved to a large job-creation program, we would have had to build services into it.
The other great alternative to services, the guaranteed annual income, as it finally emerged as a feasible political proposal in the Family Assistance Plan in 1969, was superior to what we had, but it too did not reach the problem of the anti-social behavior that was destroying the possibility of decent life in the older central cities. While formally of course not a product of the Great Society, FAP was created by Johnson men, even if accepted by Nixon and Nixon men, and was part of the thinking of Great Society architects from the middle 60’s. Its great virtue was that it was a way of distributing money to persons in families, money that would be less stigmatized than welfare, would go to men, and would go to workers. It would lift people some part of the way out of poverty without imposing the concurrent damage to family and work that the welfare system did. One of the reasons some of us were less than fully enthusiastic was that we saw something like FAP already in existence in the advanced states (New York) without the concurrent advantages which might have been expected.
I do not feel the problem was insufficient money—there was often more than anyone knew how to spend usefully. I do not think the problem was services instead of income and jobs—many people needed the services. Jobs were certainly the best area for reform, and job-creation was neglected. Yet it is also true that unemployment was low in the later 1960’s, and inflation a threat, and even job-creation would have involved an array of supportive services and considerable care to limit inflation. I would conclude: the new services were necessary, but they turned out not to be adequate for dealing with our severe problems; more jobs would have been the best supplement, but a job program would have had its own problems. Of course the awful Vietnamese war was in a way a job program—we were spending up to $30 billion annually more than we would have in job-creation, which served to devastate Vietnam rather than to rebuild the United States. But it did employ minorities, and in jobs which bore no relief stigma. Would things have been strikingly different if this money had gone into domestic jobs, doing all the things we need so badly in this country—rebuilding the parks, cleaning the streets, more housing and public facilities, etc.? This would have been infinitely better for the country, but one wonders how much it would have done for our social problems.
2) The Nixon program, as presented in the budget for 1973-74 and the supporting statements, can be read in many ways. As I see it, there are a number of major decisions: (a) taxes will not be raised; (b) if they are not, there is not much money available for new social programs, in view of the enormous share of the “human services” budget that now goes to Social Security and Medicare; (c) as for the programs of the Great Society, the aim is to shift more and more of them into revenue sharing which will go to states and cities.
What is the effect of these decisions? It is not particularly to reduce money to functions, but it is to change the way in which these functions will be met. Instead of having education money mandated to the use of poor children, we will have education money to be spent as the local authorities decide. Instead of having manpower money spread over a variety of federal programs, it will be spent the way local, state, and municipal authorities decide. Money for community plans and programs will now come in the form of revenue sharing for community development, spent as states and cities wish, rather than under Community Action programs and Model Cities programs. There are a number of still undetermined questions here. Will there be as much money as there was, growing to take account of inflation? Apparently less. But that is owing to decisions (a) and (b) above, decisions in which Congress concurs and which indeed it has made. Will less of this money go to targeted groups, the poor and minorities? That will depend on the local political situation. On the whole, less will. But will the agencies that have developed a claim on funds in New York City, Chicago, San Francisco, and elsewhere, have any less claim on these funds if they come as different kinds of revenue sharing rather than as grants from the federal government? Certainly the politics will change. If the poor and minorities get less it will be because of insufficient clout at the local level—just as the proposals themselves reflect insufficient clout at the national level.
What of the programs for minorities specifically, stemming from the Civil Rights Acts of the 1960’s? Obviously nothing can undo Negro voting power established by the Civil Rights Acts in the South—unless the Negroes of the South follow those of the North in not registering and voting. I do not believe that the administration has been particularly remiss (the comparison for purposes of defining a “counterrevolution” must be with previous administrations) in implementing the laws against discrimination in education and employment. The problems we are facing in this area are problems any administration would have faced, and the thrust to implementation from the courts and the lower bureaucracy seems to me scarcely to have weakened—in many cases it has been strengthened under Nixon, though of course it is less the administration than the bureaucracy and the courts that have been responsible. The fight for the opening of the suburbs to the minorities and the poor is a complex one—no one could claim there has been less progress under Nixon than under Kennedy and Johnson, though once again whatever progress has been made is, one assumes, less the result of administration initiatives than of the change in the economic capacity of the blacks.
A counterrevolution? Not quite. Little will change in the gross sums available for education, though a good deal may change in how they are spent. Medicare and Medicaid will grow (the new charges which add to the outrageous complexity of Medicare will hopefully be dropped, and both systems may yet be folded into a necessary larger plan for health insurance). The increase in families on welfare seems to have stopped, but that is owing to the actions of state and local elected officials, liberal as well as conservative. The neighborhood-controlled programs are now threatened—Community Action programs and Model Cities. They are already under mayoral authority, however, and already have their claws clipped, or have been incorporated into the legitimate political system, depending on one’s political perspective.
Low-cost housing programs are under moratorium—but everyone agrees they must be coordinated, clarified, replaced by something else. The pipeline is fairly full, and Congress, which almost passed new housing legislation last year (and which in any case tends to pass its own rather than the administration’s), can get to work again.
3) This is not to say I would not argue with the Nixon budget, but I would not use so large and loose a term as “counterrevolution.” My own thinking goes along the following lines:
(a) When I read the criticism of the effect of dropping Community Action, Model Cities, and other programs oriented to neighborhood-level services, I am struck by how many of these services seem to be doing things that should be a part of any decent modern society. Thus, I see on the local Boston television that a program which provides one hot meal a day to older people confined to their homes is funded under Community Action. This, it is my impression, is what much of Community Action has become—various kinds of day care, summer and other recreation programs for youth, programs for older people. We huffed and puffed and fought to do what every modern welfare state considers a matter of course. We should have hot meals for old people, places where mothers can leave children so they can get to medical appointments or attend courses, more recreational facilities for youth, more centers where one can get advice on schooling and jobs, and so on and so on. All these require money. I do not think that on the whole the level of public services in this country is so high we can cut back on them, even if they are being maintained through untidy arrangements. I would prefer best of all to tidy up and upgrade the arrangements, so that the level of services available in such a well-run upper-middle-class community as Palo Alto, let us say, is increasingly available everywhere. The most important service is hardly available anywhere yet: comprehensive health care available to all, including the poorest.
My point can be expanded. If schools are in bad physical condition they should be improved. If classes are too large they should be reduced. If poor areas have insufficient clinics and doctors, something must be done to provide them. And so on, through the range of services of the modern state. Our object, to use the English term, would be simply “provision,” not solution.
(b) Of course we should understand that these services will do little to reduce the truly agonizing problems of the inner cities (though some modest effect might be expected—reducing the strain on families reduces the need to break up families, better services for the very young may help in reducing anti-social behavior, and so on). We should undertake these programs because this is what a society should do—just as it should clean the streets, maintain lighting, insure clean water, etc.—rather than because we expect any “output” that deals with basic social problems.
Is there any direct attack, however, that might help? I believe the most useful tack for new social programs at the present is to consider the problem of low-paid, unstable work with poor or no fringe benefits, and to see what can be done to make it more stable, to attach fringe benefits, increase its security, and in effect make the low-income work which at present supports and must be the major support of the low-income population more rewarding and more attractive. Here is the one great area in which social inventiveness is needed, and surprisingly enough the one in which the least effort has been made. Consider—in order to see what is possible in such a field—what happened when the government introduced insured amortized mortgages on government-inspected houses in the 1930’s. The entire trauma of house-owning—the prospect of losing a house because one could not meet the payments—disappeared. A mortgage was turned into something like rent, simply by a financial and administrative reform. Because the mortgages were insured and given on a property of known worth, they became marketable. Thus more funds flowed into housing, and more housing could be built. The banks still earned interest—indeed, more than ever—and people who owned homes still owned them basically on the payment of interest, but home-owning became possible for a good part of the American population because the Federal Housing Administration-inspected and mortgage-guaranteed house was devised. Can one devise such a government-guaranteed job for the worker—a job which offers a minimum wage, a guarantee of that wage as long as he is willing to work, health insurance, disability insurance, unemployment insurance and social security, vacations with pay, all attached to the job? It would take organizational innovation in the low-wage and unstable-job area, akin to the innovation once exercised in the unstable home-mortgage area.
Much of our ingenuity in recent years has gone into the effort to provide more money to the family without a worker or with a worker who provides less than a poverty income. We have made non-work more and more attractive in at least our more advanced states. Is it possible to do something now for work? While there are few incentives for employers to provide the work benefits that go with standard jobs to those less qualified (primarily in service occupations), it might be possible for the government to set up something like a “minimum job standard guarantee” program. The government would not provide the job, any more than it provides the FHA-guaranteed house. But it would provide the fringe benefits and the security that make it a real job. I have defined the objective of a program rather than a program. It has a good deal in common with other proposals—for example, house-cleaning by a salaried, uniformed service, rather than by an individual hired by the hour by a homemaker.
In addition to this form of job development, one can see almost limitless possibilities for additional employment in local public services, along the lines of the work done by paraprofessionals and in the Emergency Employment program. There are problems here of the relationship of such work to the civil service, of job security, and the like, but I think our experience by now should permit us to design good programs for public employment of the underqualified. Should the federal government design and mandate such programs in view of the shift to revenue sharing? I believe we should have a mechanism whereby, if the cities and states do not do their share, a federal job-creation program for public services would come into operation.
If one could provide a full array of welfare-state services on the one hand, and on the other hand do something to upgrade and stabilize the kind of jobs into which and out of which the low-income population drifts, we would, I believe, be doing the most important things the state can do to deal with our domestic problems. Our hopes would be modest. We would provide the means for people to lift themselves out of the mire, rather than requiring that they do so. And more than this a democratic state cannot do.
Even this much would be expensive. But there is no reason to think that the United States will be able to solve its problems by taxing its people less than do the governments of England, France, Germany, or Sweden. And here is a task for Congress, the administration, and for all those among us who criticize both. When we are ready to support higher taxes, we will be in a better position to criticize the Nixon program.
Michael Harrington:
1) The Great Society scored some significant successes which must be defended against Nixon’s reactionary onslaught in 1973. At the same time, it was a fundamental—and structural—failure, and that point must be carefully understood. The Great Society’s inadequacies were not due, as the neo-conservative wisdom now has it, to “throwing money at problems,” to the hubris of planners who acted far beyond the limits of their knowledge, or to too casual, and radical, innovation. The hidden agenda of the Great Society was suffused with corporate and anti-social priorities, and the disappointing outcome of the effort proves not that we are somehow unable to translate our intentions into federal policies, but that basically flawed intentions articulated in a progressive rhetoric will yield basically flawed policies.
The point of my critique is not to argue for a retreat from the Great Society, as Nixon does. It is to demand new departures in which the excellent and far-reaching goals so often enunciated by Lyndon Johnson will be expressed in programs which can achieve them.
First, the Great Society did not “throw money at problems.” In The Politics of a Guaranteed Income, Daniel P. Moynihan summarizes the evidence shrewdly: “The social reforms of the mid-decade had been oversold, and, with the coming of the war, underfinanced to the degree that seeming failure could be ascribed almost to intent.” Yet in the Brookings Institution study, Setting National Priorities: The 1973 Budget (from which the statistics in this article, unless otherwise noted, will be taken), it is shown that, during the 60’s, “federal civilian expenditures as a percentage of GNP almost doubled.” It is clearly this latter statement which provides the statistical basis for Nixon’s assertion that Kennedy and Johnson (or, more precisely, Johnson, since the quantum leap occurred between 1965 and 1970 and not at all under the New Frontier) were prodigal with public monies. How, then, does one reconcile Moynihan and Brookings? Why do I insist that the Great Society was not so lavish?
The answer is to be found when one looks at where the expenditure increases took place. Between 1960 and 1970, there was a $44.3 billion rise in the funds spent on Social Security and on Medicare ($33.9 billion for the pensions, $10.4 billion for health). That was three times as much as all the increased expenditures on public assistance (welfare, Medicaid, food stamps, housing subsidies, and student aid). Therefore, the overwhelming bulk of the 60’s increments came, not in radical innovation, but in providing money for one program which was a generation old and inadequate (Social Security) and in achieving a very limited installment of a proposal (national health insurance) which had first been seriously urged by Harry S. Truman in 1949. Moreover, both of these expenditures were, and are, overwhelmingly popular precisely among those white working-class voters who are said to be the chief critics of the liberal prodigality of the 60’s.
Among the Great Society programs which might be termed innovative, Medicare alone accounts for well over half the total expenditure during this period. The Office of Economic Opportunity, low-cost housing, and other ventures which might be thought of as new departures were, precisely as Moynihan says, “underfinanced to the degree that seeming failure could be ascribed almost to intent.”
But if the problem with the Great Society was not the result of giving princely governmental support to the untested schemes of social engineers, where was the central flaw? It was not in the adoption of a “services strategy,” though some failures in that area are particularly relevant to the underlying error. In the case of medical and legal care for the poor (and, for that matter, for the not-so-poor, too, most of whom are priced out of both markets) a service strategy is obviously needed. Fee-for-service medicine and law are neither practical nor economical ways to deliver such protection; both should be socialized.
However, Medicare (and Medicaid) does give hints of the basic problem, which is, that for all the grandiloquent rhetoric, the Great Society acted quite timidly. Medicare and Medicaid were partial installments on national health insurance, an area in which this society can hardly be accused of super-innovation (we have yet to catch up with Lloyd George—or perhaps even Bismarck). Because they were such limited increments they had perverse effects. They bid up the price of health, but not the ability to pay, for the working people who were neither retired nor poor enough to qualify for Medicaid; and Medicaid did provide an incentive for staying poor. In both cases, if the United States had acted boldly we would not have had such difficulties.
We did not act boldly, and the cause was inherent in the Great Society strategy itself. Lyndon Johnson built his programs upon the foundation of that amazing coalition which he assembled in the election of 1964. It included Henry Ford II, Walter Reuther and George Meany, the more moderate Dixiecrats and Martin Luther King, Jr. It was therefore imperative that no Great Society proposal disturb the fundamental power relations of the society. As a result, the social programs were often suffused with corporate priorities.
Manpower training is a paradigmatic case in point. Throughout the decade, there were important individuals and social forces insisting that full-employment policies required that government become, at a minimum, the employer of last resort (some of us wanted it to act in some cases as an employer of first resort). This was the position taken by the labor movement and by economists like Leon Keyserling and John Kenneth Galbraith during the 1961 debates within the Kennedy administration on the relative merits of a tax cut and social investments. It inspired A. Philip Randolph’s Freedom Budget in 1966 and was a recommendation of the National Commission on Technology, Automation, and Economic Progress in the same year. This proposal was repeated with even greater urgency by the National Advisory Commission on Civil Disorders in 1968.
At no point did the Great Society respond to this suggestion. The manpower-training programs of the Kennedy administration were continued—but without the crucial element of guaranteeing a job to every graduate. And then Mr. Johnson created the Job Opportunities in the Business Sector program (JOBS) in which the government subsidized those employers who hired marginal, hard-core unemployed workers. JOBS announced remarkable achievements, most of which we now know were imaginary. It scored some very limited success in the auto industry but that became problematic with the recession so carefully started by Richard Nixon in 1969 as part of his “game plan” to deal with inflation.
The point is that the Great Society rhetoric often concealed corporate priorities. Thus, there was tremendous talk of great strides forward in housing for the poor, including the 1968 Housing and Urban Development Act’s target of 26 million new units for the nation in ten years, 10 per cent of them for low-income people. But the actual outlays were small—a recent estimate has us 45 per cent behind the pace—while the subsidies for the rich were enormous. A Joint Economic Committee staff study last year estimated that tax expenditures for the homes of the (primarily) affluent were worth more than $12 billion a year. That is a radical housing program; it is also, one might add, a conservative, and antisocial, one.
Therefore I would not argue that the failures of the Great Society derive from utopianism, or from the hubris of the planners. They were the predictable consequence of the relatively conservative way in which so many of the programs were designed. (I say predictable because I did predict these results in Toward a Democratic Left, a book which was completed in 1967 and published in the spring of 1968.)
But if I am thus fundamentally critical of the very foundations of the Great Society, how can I propose to defend any of its works against Nixon? The answer is that there were some real successes, even if they were partial and inadequate. Under Kennedy and Johnson, unemployment was steadily reduced; under Nixon it was deliberately raised to Eisenhower levels and still remains at an intolerable 5.1 per cent. Medicare and Medicaid have all the flaws which I have noted, but they do represent an advance in human decency. Thus I find it outrageous that Mr. Nixon proposes to deal with some cost “overruns” in Medicare by cruelly pricing some of the aging out of care they desperately need. This is done on the basis of the undocumented assertion that it is the overutilization of services by patients which is the problem and without even a thought of the fact that doctors’ fees are the main inflationary factor.
And the War on Poverty—even though it turned out to be only a skirmish before Mr. Nixon announced his peace without honor—made gains. Certainly the legal services for the poor, for all their limitations, were an effective innovation. Governor Reagan’s rage, and Mr. Nixon’s maneuvering to get the program under his centralized control, are witness to that. Community Action, for all of the hustling and foolishness done in its name, did contribute to legitimating the struggles of the impoverished. And the various employment programs—which the President will now phase out of existence—were progress, even if insufficient progress.
2) Does all this mean that Nixon’s 1974 budget is a “counterrevolution”? I think not. It amounts to a reactionary turn, a cheap, mean-spirited version of the already inadequate welfare state which he inherited, but not a new departure. Indeed, I take the first Nixon administration as the story of the President’s conversion to corporate collectivism and the beginnings of his second administration as simply a process of making the conservative priorities in that approach more barefaced. When he took office, Nixon tried to act upon the true faith of Milton Friedman and the Chicago School, by trading off unemployment for price stability. The 1970 elections convinced him that the recession-inflation he produced instead would ruin his hopes for reelection, so in 1971 he announced his conversion to Keynesianism, applied wage-and-price controls, and indulged in the largest deficits since Franklin Roosevelt. Some liberals mistakenly thought that the President was coopting their program. That was not at all true. He was using Democratic techniques for impeccably Republican ends.
The result of these policies was, as George Meany told the Ways and Means Committee in March, to produce a “one-sided shift in the nation’s income and wealth into the hands of corporations and stockholders.” It was designed that way. Now the President is applying his conservative—but newly sophisticated—priorities to the welfare state. He pretends that he is simply attacking the inefficient and overly lavish innovations of Lyndon Johnson, but he really assaults the aging, the poor, and, as Meany understands, the working people. In some cases he is truly audacious. He attacks the scandalous mismanagement of the housing programs under his own administration as if it had occurred under Kennedy or Johnson and ignores the fact that the debacle of the “235” program for housing subsidies resulted, precisely, from relying on the genius of the free market with a minimum of government intervention. Only the genius of the free marketeers turned out to be a talent for mulcting the public rather than housing the poor.
But the basic thrust of Nixon’s proposals, like everything else about the man, is political. He pretends to want to return power to the grass roots, to avoid bureaucratic centralization in America. That is the rationale of revenue sharing, and it is a fraud. The real point is to return power to the conservatives.
City Politics by Edward C. Banfield and James Q. Wilson is helpful in elucidating this point (and not the least because its authors can hardly be accused of Bolshevism, of membership in the “new class” or the “conscience constituency” or any other such pariah group). It was the conservatives, Banfield and Wilson write, who insisted on planning provisions in the housing programs of the 50’s. They “thought that local planning commissions, which in most cities had always been closely allied with real estate and other business interests, would afford some kind of check on the liberals who (as it seemed to the conservatives) dominated the housing and urban renewal programs.”
So it is with Mr. Nixon today. He wants to refer as many decisions as possible back to state, municipal, and county government because he understands—perhaps the point became clear to him in the fight over urban renewal—that the “grass roots” as thus defined are usually conservative. Most of them can be counted on not to spend funds on behalf of the poor or the minorities or even the white working class, as the first reports on the use of revenue-sharing monies already make clear. And those big-city mayors who, because of the new political potency of the poor and minorities, might want to act on behalf of those in greatest need can simply be shortchanged. That is already being done.
In all this I do not want to picture Richard Nixon as some kind of moral monster who rejoices in the suffering of widows and orphans. That is neither true nor serious social analysis. He is a conservative ideologue, politically bound to corporate interests which demand that all innovations put the rich first. Out of sincere conviction, Nixon has been, and is, acting to make life worse for those who have the greatest claim upon our concern.
3) What, then, should be done? I assume that the foregoing makes it perfectly clear that nothing serious can be done unless the Republicans are defeated in 1976. Now that they have become avowed reactionary Keynesians, they are much more dangerous, much shrewder, than when they remained the spiritual descendants of Herbert Hoover. (I am quite sure, for instance, that, if the current inflation goes on, as it likely will, Nixon will reintroduce some controls.) Therefore, the program which I want to urge is one which will unite a majority around a liberal Democratic candidacy. Those who think that the Meanyites should purge the McGovernites, or vice versa, as a way of winning in ‘76 are objectively playing the Republicans’ game. So my proposals will seek to bring together the recently fratricidal factions of the Democratic party, for that is the only way to win three-and-a-half years hence.
Secondly, it should also be obvious that I cannot agree to the neo-conservative (or deradicalized liberal) proposition that, if only we would humbly aspire less, we would achieve more. The programs of the 60’s worked; they effected the priorities for which they were designed. The problem was that too often those were the wrong priorities, and that when they were the right ones, we acted timidly and unimaginatively. Within this general framework I would propose:
(a) Tax Reform. The excellent proposal for some $20 billion of loophole-closing made by George Meany in March is quite close to George McGovern’s campaign plank of last fall. Both Meany and McGovern understand that taxes must be raised by levies on unearned income. So I am for this idea on the political grounds that it can unify the college-educated alumni of the anti-war movement and the trade-union doves and the trade-union hawks.
However, my concern with the issue is not simply tactical. Insofar as tax reform might achieve even a modest redistribution of wealth—or at least neutralize the present impact of the tax code which is to shift burdens from the rich to the workers—it has a structural character. Tax reform might, for instance, limit the enormous power which the wealthy achieve because they own the assets of the society (the top 1 per cent of income recipients, Business Week recently reported, has 25 per cent of the wealth; the bottom 50 per cent has 3 per cent). And that could even mean a redistribution of political power.
(b) Full Employment. Nixon has been scandalously inept in this department, and his failure is apparent not simply to blue-collar workers but to Ph.D.’s, engineers, and college students. Moreover, a genuine full-employment program provides the context in which affirmative action to eliminate discrimination against racial, national, and sexual minorities can really work. A tight labor market in which employers desperately need blacks, Chicanos, and women would do wonders for their relative position (as World War II so dramatically proved within the domestic economy). Moreover, the way to full employment is through massive social investments to deal with the needs, not simply of the poor and the minorities, but of the majority.
(c) Health Security. Everyone in this society, except the very rich, is threatened by the inflated cost of medicine. The Kennedy-Griffiths bill does not, of course, go as far as socialized medicine (which is desirable) but it is a gigantic step in the right direction. It even has some ingenious proposals for controlling costs whereas Nixon’s bill—for all the President’s rattling on about the Protestant ethic—would socialize the profits of the private health-insurance industry, i.e., of one of the main sources of inflation in this field.
(d) Housing. The nation should take a radical stand and attempt to redeem the promise made by the late Senator Robert Taft, among others, in the 1949 Housing Act to provide every citizen with a decent dwelling. We have had a quarter of a century of ignoring this promise—even though the 1968 Act reiterated it—and the time has come to fulfill it. This will mean the building of new cities and towns from the ground up, an idea which has been endorsed in recent years by such gentlemen as Spiro Agnew and David Rockefeller. However, they want such a scheme as a means of publicly funding guaranteed profits: I urge it as an exercise in democratic social planning.
In summary, I think that the Great Society failed, to the extent that it did, because it did so few of the right things it proposed and followed a hidden and corporate-oriented agenda when it acted decisively. Its gains must be protected against Nixon, but they must be seen as points of departure, not model accomplishments. There should be a guaranteed right to a job for every able-bodied citizen; to an income for all those who cannot work; and to a decent dwelling for everyone. I think that these things could make good politics in 1976; I even think they might help make, not a Great Society, but a Better Society. Such a modest first step is utterly necessary. Once it is taken we might even begin to act on the truly radical priorities these times demand. But if, as those who would limit social policy propose, we aspire less, then our problems will overwhelm us. There is only one way out and it leads forward.
1) I would not want to say that the Great Society was a failure because, for one thing, such talk gives undeserved comfort to the conservatives—which I am not eager to do, even if, on the other hand, I would not greatly mind seeing discomforts rained upon certain liberals who, either directly or owing to the spread of their wrong ideas, are accountable for much that went awry in the Great Society—and because, for another thing, it’s not true.
No program that lifted fifteen million people out of poverty, reduced unemployment from 5.2 to 3.6 per cent, extended the right to vote to millions of disenfranchised black citizens, effectively ended segregation in public accommodations, and managed the longest period of uninterrupted economic expansion in our history—to name a few of the presumably uncontested accomplishments of the Great Society—no such program can be dismissed as a failure. Would that we had such failures from the Nixon administration!
What strikes me about these historic accomplishments is that they are mainly products of what have been called the traditional liberal-labor approaches to social policy—the “old New Deal stuff”—and are very little indebted to the vague social concepts that came to be enshrined in the New Politics. That is to say, the successes of the Great Society, so far as I can tell, were based on the pursuit of relatively high employment, economic expansion, accelerated public spending, higher minimum wages, increased education and manpower training, legislation of long sought-after civil-rights goals, and so forth, rather than on the self-organization of the poor, the overcoming of powerlessness, the decentralization of institutions, or the maximum feasible participation of anybody in anything (except in expanded job opportunities)—or on any other manifestation of that cruelly fraudulent claptrap that was fashioned jointly by New Left theorists, wily conservatives, and confused liberals, and whose main effects were to promote destructive confrontations, legitimize social bribery, enrich a new bureaucratic caste of anti-poverty officials, elevate racial hustling to a sophisticated art form, deracinate a stratum of inarticulate poor people and transmogrify them into professional meeting-goers and logorrheic media performers, and in other ways to injure, disappoint, mislead, demean, or neglect people in need.
If there is exaggeration here, it is only slight.
To trace the roots of what went wrong, let us go back to the beginning. In the beginning was The Other America, Michael Harrington’s moving and important attempt to bring poverty to the attention of the literate middle class. Universally credited with sparking the war on poverty, the book shaped the way poverty was perceived by many intellectual and political leaders. It bears rereading after eleven years. Its “most important analytic point,” according to its author, “is the fact that poverty in America forms a culture, a way of life and feeling, that it makes a whole.” Indeed,
One of the most important things about the new poverty is that it cannot be defined in simple, statistical terms. Throughout this book a crucial term is used: aspiration. If a group has internal vitality, a will—if it has aspiration—it may live in dilapidated housing, it may eat an inadequate diet, and it may suffer poverty, but it is not impoverished.
(What is it, then? Enriched?) And, finally:
If statistics and sociology can measure a feeling as delicate as loneliness . . . the other America is becoming increasingly populated by those who do not belong to anybody or anything. They are no longer participants in an ethnic culture from the old country; they are less and less religious; they do not belong to unions or clubs. They are not seen, and because of that they themselves cannot see. [All emphases added.]
This is very poetic, but what does it mean for social policy? Is it really true that a group may suffer poverty but not be impoverished? How can we measure impoverishment (as opposed to poverty)? By whose standards do we judge vitality (especially internal vitality), will, aspiration? What does it mean to say poverty is a way of feeling? Is one wrong to believe that one can see here the seeds of what later went wrong in the War on Poverty?
I do not mean—and surely not at this late date—to pin the blame for the debacles of “community action” on Michael Harrington, who has other burdens to bear. Actually, the programs proposed in his book run pretty much along the lines of the “old New Deal stuff”—and, indeed, that is the point. Like the Great Society itself, The Other America was schizoid—marking a midway passage between an inadequate past and a wrong future, between yesterday’s Rooseveltian reformism which didn’t go far enough and the “now” reformism of the new radicalism which went in the wrong direction. I cite the Harrington book to indicate how early this dichotomy, this fork in the road, appeared. Insofar as Harrington’s portrait of poverty adumbrated the wrong future, it did so in two key respects. First, it focused on the personal characteristics of the poor—the old poor, the young poor, the black poor, the hillbilly poor, the rural poor, the urban poor, the alcoholic poor, the neurotic poor—an approach Leon Keyserling was repeatedly and brilliantly to ridicule as laying the basis for a grab-bag of fragmented, helter-skelter, and mutually contradictory programs instead of a strategic plan, comprehensive and coordinated, built around key national programs of full employment, economic growth, and income redistribution, and set within timetables, as projected in the 1966 “Freedom Budget.”
Second, by playing on the theme of the poor as lonely non-participants, The Other America helped set the stage for the notion that we had to give the poor something to belong to, somebody to belong to. The somebody turned out to be John Lindsay—God rest his political soul—and the something turned out to be boards—poverty boards, planning boards, school boards, boards of directors, boards of trustees—all in the unshakable conviction of the middle-class activist that everybody shares his fondness, or need, for going to meetings, articulating like blazes, and “relating” all over the place. The result? Instead of the invisible poor and the silent poor, we got the vivid poor and the noisy poor—and the terribly participatory poor.
To sum up my answer to the first question: the Great Society was not a failure, but it can be usefully criticized on two counts. It took a wrong turn when it translated the “culture-of-poverty” theory into community-action programs; and insofar as its other programs were on the right track, they did not go far enough—more on which later.
2) Yes, I believe Richard Nixon is leading a “counterrevolution”—of sorts. But it is not a fascist counterrevolution; it is not a racist counterrevolution (although it may well have adverse effects on the rate of black progress). It’s not even a counterrevolution against freedom of the press or the Bill of Rights. It’s a conservative Republican counterrevolution against the labor-liberal Democratic New Deal-Great Society programs in housing, education, employment, manpower training, health, pollution, and poverty. Specifically, the President proposes to slash these programs by $6.5 billion in fiscal 1973, $17 billion in fiscal 1974, and a cumulative $100 billion, at least, over the next five years. The rationale for this dismantling operation is that revenue sharing will enable the states to take up the slack. The answer, of course, is that the proposed revenue-sharing funds fall far short of the proposed cutbacks, and besides history has taught us that state and local governments cannot be relied upon to carry on these programs without strong federal intervention. But in any case this is not a terribly new debate—it’s been going on for decades, generations. I don’t see anything very surprising in the Nixon second-term program. He is a pragmatic ideologue who is trying to implement a conservative philosophy that has been a familiar part of the American landscape for a long time. Perhaps what is surprising, to some, is that he is actually trying to implement it. To salt old wounds, their surprise may stem from an earlier failure to have taken Mr. Nixon seriously. To have yelled “fascism,” “racism,” and “repression” was not to have taken Mr. Nixon seriously. To have taken Mr. Nixon seriously was to have supported the ABM (Anybody but McGovern) drive at the Democratic convention and to have gone all-out for Hubert Humphrey in 1968.
So, in sum, we have a conservative, though modern, President seeking to carry out a conservative, though modern, program. What is new and different is that he recently got 60 per cent of the people’s votes—though not for his program. Our response should be not to deny his legitimacy or the moral capacity of the voters but to oppose his program on the simple grounds that undoing the gains of the New Deal and Great Society is not in the interest of the average American. Clearly—though perhaps not as readily apparent to the flakier New York intellectuals as to Washington observers—our best hope for preserving these gains, and thus the possibility of further gains, rests with the political clout of the labor movement and with the more sensible liberals on Capitol Hill. Other liberals on the Hill seem to believe that the worst thing Mr. Nixon is doing is dismantling the Community Action program of OEO. They are making a blunder: that albatross should be allowed to sink.
3) The third question asks: What should be done? I think that what we should do is what we said we were going to do in the 30’s and 40’s, when we passed legislation committing the government to full employment and a decent home for every American family. How’s that for a start?
I am much disturbed by the new fashionable talk that the trouble with the Great Society is that it tried to do too much, went too far, failed to recognize that “there are limits to what government can do.” Of course there are limits to what government can do—just as there are limits to what private enterprise can do. But do those who use this language have any precise limits in mind—or, one cannot help but wonder, is Vietnam what they have in mind? Do they speak from a need, a wish, to see government power curbed?
In any event, I do not believe that establishing full employment is beyond the capacity of the government of the United States. I do not believe that it is beyond the capacity of the government of the United States to insure an adequate supply of good housing for the American people at every level of income. I do not believe it is beyond the capacity of the government of the United States, through the Social Security system, to guarantee a decent income to all senior citizens. I do not believe it is beyond the capacity of the government of the United States to provide, through something like the Kennedy-Griffiths Health Security program, quality health care for all citizens, regardless of age or income. I do not believe it is beyond the capacity of the government of the United States to give us real tax and welfare reforms, to control inflation, or to overhaul our broken-down system of criminal justice. And I do not believe that our failures in any of these areas had a damn thing to do with the war in Vietnam.
I am not here going to attempt to spell out the specific programs we need to achieve the goals mentioned above. The programs have been spelled out again and again in the publications of the labor movement; they were detailed in the “Freedom Budget.” Our problem at the moment is not an absence of specifics but an overall mood of retreat, a sense of political exhaustion following the turbulent 60’s. The mood may be understandable, but we ought to be prepared to combat it among ourselves, at least insofar as it leads us into that dreary can’t-do-and-besides-there-never-was-a-problem-in-the-first-place syndrome that characterizes the new intellectual conservatism. Perhaps if we were to get on more vigorously with what government can do, we would acquire along the way a clearer idea of what government cannot do. The trouble is, the kind of rhetoric we are hearing now usually arises in periods when we are not doing, or intending to do, very much. One must hope that this time the talk does not mirror the times. It is all very well to complain that expectations soared too high under the Great Society. But there are legitimate expectations which the bulk of the people are entitled to have of their government, expectations having to do with jobs and housing and schools and safety. We have not really made an effort to meet those expectations in ways or proportions commensurate with our capacity. The effort to do so, I come increasingly to believe, can provide the basis for a political thrust which, if successful, will lead to social change at once deeper, more benign, and even more egalitarian than the 60’s produced. I think our task now is to reawaken and reassure the people as to these possibilities—not further demoralize or confuse them, or ourselves. The New Left did damage enough. Let us not, in reaction, give it a posthumous victory over hope in the 70’s.
To say that Nixon is leading a counterrevolution against the New Deal, Fair Deal, New Frontier, etc., implies that there was something revolutionary about those programs in the first place. Unfortunately this was not the case. Most of the social legislation of the last forty years has attacked symptoms rather than causes. For this reason it has done little to reduce inequality and poverty, limit corporate power, make people more secure against disasters that threaten their savings or even their lives, or extend popular control over government. Merely to list these objectives, which are scarcely utopian or radical, is to suggest how ludicrously liberal programs have fallen short of the minimal goals of a democratic social policy—goals on which almost all of us, presumably, can agree.
In most of these areas, the people of the United States are visibly worse off now than they were in 1933. Gross inequalities remain. Poverty remains. The big corporations are bigger than ever and no more responsible to the public. Most people still have no effective protection against the hardships of old age, sickness, or unemployment, while the rise in crime and violence makes it laughable even to talk of security. As for popular control of government, an irresponsible executive branch has achieved a near monopoly of political power, in the face of which state legislatures, Congress, and increasingly the judiciary as well are almost completely helpless.
Timid as the Roosevelt-Truman-Johnson programs were, they gave rise to intense alarm not only among the very rich, as might have been expected, but among large sections of the electorate. In the 1930’s Republicans feared that the Democratic party, through its control of the new welfare and relief programs, would transform itself into a permanent political dynasty. Experience since the 30’s suggests, on the contrary, that liberal reforms tend to be politically self-defeating. Because they seek only to redistribute inequality instead of eliminating it, liberal reformers are constantly dividing their own constituencies. The pro-labor legislation of the New Deal set off an anti-labor backlash when unorganized white-collar workers and professionals, originally attracted to the New Deal, began to feel themselves ground between the millstones of Big Business and Big Labor, victimized by inflation, and generally ignored. By 1952 the middle classes were ready for Eisenhower.
In the 70’s Richard Nixon has made a similar appeal to the great American middle, the “silent majority,” the people who supported the Johnson programs until it became clear that they themselves, and not the rich, would have to pay the price in one form or another. Johnson’s poverty program, such as it was, too often pitted blacks against ethnics. Meanwhile the white working class also bore the main burden of school desegregation, while suburban liberals applauded from the sidelines. In addition the working class and the lower-middle class had to suffer the indignity of being called white racists. It is not altogether surprising that the white working class now supports Nixon, though not with much enthusiasm.
The ethnic backlash against the blacks is only one aspect of what has been called, perhaps too sweepingly, a cultural civil war. There is also a Jewish backlash, compounded of the bitter aftermath of the New York teachers’ strike, fears of a black-Wasp alliance, and reaction to loose pro-Arab rhetoric on the New Left. Beyond that, there is a generalized, ill-defined revulsion against “permissiveness.” This revulsion is widespread; it is confined to no single class or ethnic group. A vague sense that things are out of joint, that values and standards are collapsing, that respect for authority has declined, troubles people at almost every social level.
Because the Left has only ridiculed these fears, those who are troubled by the growing disorder they see around them turn to the Right, which promises to restore order even though in reality it has no idea of how to do so. The Left has nothing to offer except an extension of the meaningless personal freedom Americans already enjoy or are in the process of acquiring, the freedom to do your own thing. (What a rich social vision this threadbare slogan implies!) The Left is unable to understand the validity of the demand that authority speak authoritatively, that standards be upheld, that people be held to responsibilities beyond those of pleasing themselves, that work be respected, that life be regarded as a serious business and not as an endless series of experiments with “alternate life-styles.” The Left sees in these demands—which are imperfectly but insistently manifested in the malaise of Middle America—only the death-throes of a discredited work ethic, an outmoded “middle-class morality.” Accordingly the Left finds it impossible to understand the revulsion produced by the Democratic convention and by the McGovern campaign, seemingly a proliferation of new, anarchic, undisciplined “life-styles.”
Nixon is riding high on the crest of this revulsion. An opportunist, a charlatan, a politician who has no idea what is wrong and no idea of how to right it, he nevertheless appears to take seriously the old values, the cultural distress of the common people. To be sure, he upholds these values in their crudest form and appeals to all that is meanest in the American character. Those who won’t work can expect no more “handouts.” Those who break the law must pay the price. The work ethic must and shall be restored (by the sheer force of Presidential preaching, no doubt). Thin, poor stuff. But to many people it is some satisfaction, after the turmoil and anxiety of the past decade, to hear the President defend the old ways without embarrassment, especially when his hearers are themselves uncertain about the value of patriotism, hard work, and other such austere virtues.
To say that Nixon is leading a counterrevolution or even that he is leading an attack on the welfare state gives him too much credit for leadership. Nixon is adhering to a time-honored strategy of Republican non-rule, following the line of least resistance. Republicans figure that while Americans periodically become excited over a New Deal or a Fair Deal, what they really understand is a business deal—a wheat deal, an I.T.T. deal, a deal with the Chinese, a deal with the Russians. Wheeling and dealing, Nixon keeps his friends happy and his enemies off balance. Meanwhile inflation soars, the cities stagnate, crime continues, a staggering load of unsolved problems accumulates—and the administration assures everybody that all these problems are being solved through its own vigorous action. It announced recently that the urban problem had been solved. This news will surprise and doubtless gratify those who live in cities.
The Democrats, divided and bewildered, are reduced to making an issue of aid to North Vietnam. They too now appeal to the worst in their constituents.
Nixon has no economic or social policy, no policy at all except to reduce federal spending (defense spending excepted, of course), to crack down on the press, to prevent public or Congressional examination of executive policy, and to punish “lawbreakers.” The most imaginative act of his administration is the attempt to make “theft” of federal secrets a crime in the absence of the faintest legislative authorization for such a policy. If the courts uphold this interpretation in the Ellsberg case, the federal executive will have gone a long way toward making its actions completely immune to scrutiny or criticism. Favoritism, touching solicitude for the plight of the rich, suppression of criticism—these are the Nixon “policies,” the policies also of his Republican predecessors. Peace, prosperity, and petulance.
If there is a counterrevolution in progress, it is a cultural counterrevolution. Nixon’s real ambitions, it is clear, lie in the direction of moral leadership. He loves to address the nation on such questions as amnesty, an issue tailor-made to his purposes. Nixon was trained as a lawyer, as he rarely fails to remind us, and knows that amnesty means, literally, forgiveness. (Never mind that it means legal forgetting. Who can explain this distinction to people who have been taught since childhood to “forgive and forget”?) The President asks: Can we forgive those who have deserted their country in its hour of need? Those who rallied to the colors, the parents of those who rallied to the colors, the parents of the war dead—how can they in particular forgive those who betrayed their country? The deserters broke the law and must take the consequences. You can’t get something for nothing. The draft-dodgers, the welfare bums, the radicals, have in common that they all demand something for nothing. They act as if society owed them a living. “Ask not what your country can do for you,” Nixon has recently reminded us, borrowing the rhetoric of the New Frontier if not its celebrated “style.”
Since this kind of language, echoed by many lesser orators, seems to be finding a ready public response, it is tempting to conclude that a resurgence of old-fashioned values is in progress. But this is to misinterpret the nature of the backlash. Traumatized by the events of the 60’s, the public dotes on the old slogans, but few people believe them. Most of us now believe that it is a very good thing to get something for nothing. Indeed we believe, not without reason, that getting something for nothing is the key to success. We know that hard work is very likely to be its own, indeed its only reward, and we avoid work accordingly. We admire gamblers and con men; that is why we admire, grudgingly, Nixon.
The brutalizing effect of the war, the collapse of authority, the decline of public order and safety, Presidential lies, the increasingly obvious cynicism of the rich and powerful, have made Americans today cynical about everything. Everyone is presumed to be on the make, everyone is out for himself, everybody lies. The other side of this cynicism is nostalgia. People cast a sentimental eye on the old days. Nixon appeals to this nostalgia with his empty talk of the work ethic, even as he embodies the rampant cynicism, the total insincerity of our culture: truly a leader for our times.
Ever since the Second World War, international politics has presented itself as a struggle between socialism and the “free world.” After 1945 national rivalries within the “free world” were swallowed up in the struggle against Communism. The capitalist powers appeared to have achieved an unprecedented unity; wars between capitalist powers seemed as much a thing of the past as major depressions. These appearances contributed to the larger illusion that we had entered “a new era.”
Today rivalry among capitalist powers is once again on the rise. Western Europe and Japan are challenging American economic supremacy. Sluggish, weakened by war and inflation, the American economy is highly vulnerable to this competition, both at home and abroad. The political structures of international capitalist unity have collapsed. Neither Western Europe nor Japan depends on American arms to defend itself against Communism. Far from regarding Communism as a threat, they have instituted closer diplomatic and economic relations with the Communist powers—an example the United States has now rather belatedly followed.
In this highly unstable situation, in which conflict between capitalist powers has become even more threatening than conflict between capitalism and Communism, both the economic and political power of the United States has been vastly reduced. The Vietnam war will appear in retrospect as marking—and as partially responsible for—the end of a brief era of American supremacy. Many of the current troubles derive from the loss of this supremacy and from desperate efforts to recover it. An extended economic analysis would show in detail that the very measures necessary to improve the competitive position of the United States vis-à-vis the economies of Western Europe and Japan intensify the domestic difficulties that are so alarmingly accumulating. In general, it is obvious that the competitive disadvantages of American corporations cannot be reduced without lowering the standard of living of the American working class, either through attempts to speed up production—already in effect in many plants—through direct attacks on wages, or through increased automation. Until recently the technological superiority of American industry and the advantages of large-scale production compensated for the wage differential between American workers and workers in other industrial countries, but American industry is no longer more productive than or technically superior to its chief competitors. Any attempt to recapture its technical supremacy would place still greater burdens on the state, which has had to assume most of the costs of the research, development, and technical training on which technological advances in industry depend. This “socialization of the indirect costs of production” has already contributed, along with enormous military expenditures, to the well-known disparity between the public and private sectors and to the collapse of public services, and it appears unlikely, therefore, that the state can continue these indirect subsidies to American industry without further impoverishing itself or, on the other hand, without taxing the corporations—in which case the subsidies would cease to be subsidies.
A truly “sound program in the area of social policy,” it appears, would have to rest on measures completely opposed to those favored by the corporations and, it must be confessed, by the majority of American voters at this time. Even the minimal goals to which most Americans are nominally committed—greater equality, security, etc.—cannot be accomplished by traditional liberal methods: this becomes clearer every day. The defense of the American system of “free” enterprise is no longer consistent, if it ever was, with the ideal of a humane society based on equality, freedom, and justice. As the quality of life deteriorates, the truth of this simple proposition will be generally acknowledged; the question is whether this recognition will come too late to arrest the spread of authoritarianism into every sphere of American life—for the collapse of legitimate authority will be resolved either by attacking the problem at its roots or by forcibly imposing authority, by institutionalizing official violence.
A sound social policy should adopt as minimal goals the reduction of defense spending, equalization of incomes, free medical care, a subsidized system of mass transportation, and political reforms designed to curb the power of the executive branch of the federal government. This is hardly a very sweeping program, but it would go some distance toward easing the plight of the cities and the desperate people living in them, and toward restoring a measure of political democracy. Since the current difficulties are cultural as much as they are political, any merely political program, even a much more ambitious program than this, is inadequate to the present conditions. Social reform designed to make our cities inhabitable is, however, indispensable; and even the modest program I suggest will encounter fierce resistance. In the light of recent experience, there is certainly no reason to be excessively optimistic about the prospects for any program of reform.
1) In a great legislative explosion, Lyndon Johnson’s Great Society sought simultaneously and quickly to cure poverty, revitalize urban life, achieve racial equality, beautify the environment, equalize educational opportunity, redistribute political power, and liberate all groups in search of liberation. In its brief period of glory, circa 1964-65, Great Society programs enlisted shifting coalitions of Washington bureaucrats, university intellectuals, black and other organized minorities, and businessmen in alert search of new markets for educational hardware, medical supplies, and job-training services.
Large goals invite disappointment. Between 1965 and 1973 the number of poor people diminished, but the principal explanation appears to be economic expansion rather than the strivings of the Office of Economic Opportunity. Moreover, as Daniel P. Moynihan and others have emphasized, the black middle class has flourished to the degree that little distinguishes the economic performance of young black middle-class couples from their white counterparts. Nevertheless, the condition of the cities is not reassuring, least of all to their residents. Even economic expansion does not suffice to alleviate the problem of welfare dependency and the continuing destruction of whole sections of the inner cities. Even for white liberals, the fear of crime threatens to overwhelm social altruism and increase the political attractiveness of local candidates like Mario Biaggi in New York.
The most obvious sign of the Great Society’s failure is the weakness of general and Congressional resistance to present administration designs on the categorical-grant social programs. If Congress ultimately accepts special revenue sharing, its motive will probably be a reading of low public esteem for present arrangements. Current reassessments often make glum reading. Housing subsidy programs have been dreadfully administered and better adapted to the financial interests of builders and developers than to the needs of low- and moderate-income tenants. Manpower-training efforts have from the outset been misconceived: the necessary complement of appropriate public service jobs has never been forthcoming. Title I poverty education grants were often misused and, in any case, spread far too thinly to do much good. And so on.
A reasonable plea in mitigation is of course readily available. Most of these experiments were underfunded. Very little time has elapsed since the major statutes were enacted, too little to render final judgments. Great Society programs were compelled to compete for public attention and Congressional energy with the continuing national tragedy of Vietnam. Wars are notoriously hostile to social reform. But these explanations do not disguise the fact that, for reasons good and bad, the Great Society does not command general popular favor.
In part, however, this requiem mass for the dear departed is premature. Of the untidy collage of service, income maintenance, tax, and political strategies involved in the Great Society, one at least has a fair claim to durable impact. This is the legal transformation, the rights revolution, accomplished by an effective combination of legislation, Supreme Court decisions, and legal services ingenuity. In the South the Voting Rights Act of 1965 has substantially increased black voting participation and led to the election of rising numbers of black mayors, sheriffs, legislators, and Congressmen even in recalcitrant states like Alabama and Mississippi. The new law generated by successful class-action proceedings has considerably enlarged the rights of welfare recipients, public-housing tenants, migrants, and women, and correspondingly narrowed the discretionary powers of housing and welfare bureaucracies. Even though the future of legal services is clouded and a more conservative Supreme Court looks with diminished favor on imaginative readings of the equal-protection clause of the Fourteenth Amendment, it is unlikely that the courts will take away these recently acquired rights of hearing and appeal.
There is a related point. In all probability, White House plans to terminate Community Action and dismantle Economic Opportunity will succeed. All the same, Community Action has probably led to such lasting consequences as the development of new black leaders and the opening of opportunities for paraprofessionals who in New York City have secured effective union advocates. What works in the long run is political power. Even though the Nixon administration is neither beholden to the black community nor particularly sympathetic to its aspirations, the impetus to black political organization given by the Great Society is likely in a more promising post-Nixon national administration to produce both legislative and administrative benefits.
A final word. Viewed from the Left, the Great Society was always a highly precarious enterprise, founded on a fragile combination of service and income benefits for the poor and tax reduction for the community at large. Its triumphs were matters less of national yearning for social change and a more just distribution of income and wealth than of a division of the spoils of economic growth between winners and losers in American economic competition. Once expansion slowed and foreign commitments expanded, the soothing era of annual tax reduction and enjoyable skirmishes against poverty came to an abrupt halt. No one can say with confidence how in the absence of Vietnam the Great Society would have fared. It is certainly accurate to note that the Vietnam escalation diverted money and moral energy from domestic affairs, required tax increases in place of tax benefits, and dissolved the momentary harmony of interest between the poor and the prosperous which for a historic moment in the mid-1960’s had aroused great expectations.
2) President Nixon has constructed the most coherent conservative program in recent memory. In all seriousness, I think that his critics, themselves in serious intellectual disarray, ought to thank Mr. Nixon for organizing their thoughts and energies at least in opposition.
Premised on the psychological proposition that most people prefer to define themselves as successful rather than unsuccessful, the Nixon appeal is to the self-interest of the reasonably prosperous. His now famous rephrasing of the Kennedy inaugural slogan to read “In our own lives, let each of us ask not just what will government do for me, but what can I do for myself” is a poultice to the conscience of men and women tired of social strife and eager to pursue private gain in peace.
As a design, the White House legislative program incorporates five internally consistent propositions:
- It is a far, far better thing to curtail non-defense federal spending than it is to increase personal or corporate income taxes, close tax loopholes, or raise inheritance and gift levies, for the sufficient reason that private spending is socially more useful and individually more rewarding than government spending.
- If taxes are changed at all, it should be in the direction of still less progression. Low progressive personal and corporate tax rates encourage investment and innovation upon which rising standards of living depend. The weak and feckless will learn to walk more quickly on their own feet if they are encouraged to discard the crutches which the Great Society specialized in fabricating. Heavier federal reliance upon regressive payroll taxes to finance social benefits is a good idea because the practical impossibility of greatly increasing these taxes imposes a useful lid on foolish public altruism.
- In general, social spending should be diverted from urban and rural low-income beneficiaries for whom it has failed to do much good, toward suburbanites and pensioners, away from the morally suspect to the morally worthy.
- It is accordingly essential to dissolve alliances as old as the New Deal between federal agencies and their black, urban, and welfare clients. The categorical grants must go because they are the fiscal fruits of these alliances. As the importance of OEO and the Departments of Health, Education, and Welfare, Housing and Urban Development, and Labor diminishes either because of curtailment of funds or administrative reorganization, the power of the President and of elected local officials correspondingly expands. This is the crucial political implication of special revenue sharing. The legislation allows but crucially fails to compel local governments to continue existing social programs.
- In short, the President proposes to substitute local goals and program preferences for the painfully evolved national criteria of the federal agencies. Since city halls and state capitols are considerably more responsive to conservative and local business pressures than Congressmen and government regulators, the outlook under special revenue sharing is desperate for the future of federal programs targeted to racial minorities, inner cities, and poverty groups.
The President’s politics are as astute as his code message to the majority is clear. If blacks, welfare mothers, and other idle loafers lose much of the little they now derive from the public trough, so much the more will be available for self-respecting practitioners of the work ethic like thee and me. Furthermore, once they have come to think a bit, the mayors and the governors will still their childish clamor over reduced categorical assistance and come to realize just how politically profitable special revenue sharing can become. Services to the prosperous and the politically important can be increased while, conceivably, property taxes are actually reduced.
The name of the game is fragmentation of old alliances among blacks, liberal intellectuals, unionists, and the Democratic party. In the context of the President’s domestic scheme, some unionists gain and others lose. Prosperous building tradesmen and other blue-collar suburbanites are conspicuously included in the appeals to the middle class. The losers are likely to be relatively low-paid and vulnerable members of Cesar Chavez’s United Farm Workers, Leon Davis’s Local 1199, and Victor Gotbaum’s District 37 which depend either upon publicly supported government jobs, continued flows of Washington money to hospitals, or the indulgence of the Department of Labor. President Nixon’s payoff to George Meany for AFL-CIO benign neutrality during the Presidential season was the lifting of Phase II controls and the issuance to the powerful unions of hunting licenses for better 1973 contract gains than the old Pay Board could have sanctioned.
There is no place in the New Order for a Family Assistance Plan, although once this scheme was proclaimed by Mr. Nixon as the centerpiece of a New American Revolution. Property tax relief is also postponed or jettisoned. And in the interests of price stability and labor discipline, the Kennedy-Johnson 4-per-cent unemployment target is discarded. As the 1973 Economic Report charmingly explains:
When the condition that persons who want work can find it through serious search is met, the rate of unemployment as we measure it will not be zero. What it would be we do not know. Undoubtedly the number would change from time to time. But it is the condition which is important, not the statistic.
Nixon’s plans pose several threats to the employment of the poor. As Title I school funds, Model Cities, and Community Action diminish or disappear, paraprofessional job openings will decrease. If Congress ratifies the President’s phasing out of the Emergency Employment Act, 150,000 public jobs will disappear. And, as usual, acceptance of higher general unemployment penalizes most severely low-income, sketchily credentialed blacks, women, and teen-agers. An unemployment average of 5 per cent translates into 10 per cent black and 15-20 per cent teen-age idleness.
The Nixon thrust eliminates some social expenditure and redirects the remainder away from Washington agencies which have developed sympathies and alliances with their clients toward cities and states which are likely to spend the available money on their own working- and middle-class members. As an object of special national concern, blacks and poor people are to be dismissed. More money for general community services is to come from money now spent on compensatory education, regional mental-health centers, neighborhood health clinics, and other benefits which are currently means- or income-tested.
Counterrevolution is possibly a pretentious term. Still, the Nixon redesign of the federal social effort does reverse forty years of liberal effort to broaden the scope of social protection to include the very neediest and most vulnerable. The predictable consequence of the substitution of special revenue sharing for categorical grants will be to narrow the benefits available to low-income persons and enlarge those which flow to middle- and upper-income groups. As the political scenario is played out in the current Congressional session, it will become clearer whether or not this consequence of their choice is what the voters had in mind last November.
3) Democratic societies pass through conservative periods. Many have suggested that Mr. Nixon’s landslide was enlarged by uneasiness over the prospect of “radical” alterations of familiar arrangements threatened by Senator McGovern. In conservative times, liberals, radicals, and egalitarians cannot sensibly anticipate that much of what they cherish will come to fruition. Allying themselves with the current suspicion of change, liberals and radicals would do better to try to preserve what can and should be preserved of recent social advances. In the immediate future, this ought not include last-gasp defense of farm subsidies (which raise urban food prices mostly for the benefit of large farmers and agri-corporations), unyielding support of housing subsidy programs without substantial reform, or insistence on all aspects of present versions of manpower training and health-care delivery systems.
The more feasible strategies focus on jobs and legal rights. A unifying rather than a divisive objective, full employment is also the most effective of anti-poverty programs. Thus during three Eisenhower recessions, the ratio of non-white to white income dropped from 57 to 51 per cent. According to a 1965 OEO study, a decline in unemployment from 5.4 per cent to 3.5 per cent generated 1,042,000 full-time positions for low-income workers and lifted 1,811,000 persons above the poverty line. A challenge to the administration on full employment is the sort of issue upon which blacks, liberal intellectuals, and the AFL-CIO can agree.
Of the Great Society programs, legal services, by general concession, has been the most successful. The preservation of the program in some form has been endorsed even by the American Bar Association. It is hard for Americans to oppose the extension to everybody of constitutional guarantees. Even if the legal-services program which emerges from the present uncertainty is less innovative than the poverty lawyers of the 1960’s were, it will be a valuable symbol of the continuity of some aspects of the Great Society’s commitment to equal protection and due process.
It may be politically feasible to revive a more generous and less coercive version of the Family Assistance Plan, although here the President may have read the entrails accurately in discarding the idea. In sum, there are non-quixotic battles to be waged at least on the legal and job fronts and possibly on the income-maintenance front as well. Partly because of their own deficiencies, there is less hope of saving the service programs and least of all to be anticipated are tax strategies calculated to diminish the shelters now available to the affluent.
In the longer run, a program for a 1976 Democratic restoration begins with the mustering in a single army of the troops which kept Democrats in national office most of the last four decades. The emphasis ought to be on unifying issues, among them health, jobs, transportation, occupational safety, product protection, and sensible anti-pollution policy.
In the still longer run, the looming issue is whether political democracy and plutocracy are compatible. Those who like myself answer in the negative necessarily advocate redistribution of income and wealth. The tools are available if the will is present: permanent controls over prices and profits, serious taxation of gifts and inheritance, and increased reliance upon progressive imposts upon personal and business income. Extremes of poverty and affluence occasion social envy, auction public office, poison the channels of public communication, and damage the fellow feeling upon which viable, libertarian societies depend.
Here there is a small paradox. “Radical” egalitarianism is less open to technical criticism on efficiency grounds than are liberal service experiments. Egalitarianism is defensible not on the ground that it is more “efficient” but on the argument that it is ethically and politically preferable to the available alternatives. One can discredit a program of compensatory education or job training by familiar computation of costs and benefits. Equality of condition and equality of opportunity, by contrast, are moral valuations challengeable only by those who are attached to opposing valuations.
1) Did the Great Society fail? One need only examine the substantial progress blacks have made during the past decade, both in securing their constitutional rights and in narrowing the economic gap between the races, to understand why the answer to this question must be an emphatic “no.”
Let me make it clear that I by no means celebrate the Great Society as totally successful; government was in fact guilty of promising more than it could deliver and of raising the aspirations of the poor to heights that could not be fulfilled, given the political realities. But acknowledging its shortcomings is quite a different thing from dismissing the programs of the Johnson era as outright, though well intended, failures. The truth is that the poor have benefited from these programs; that government bureaucracy has been reformed, in the sense that serving the needs of the poor has become an essential function of the bureaucracy; and that, for the first time since the New Deal, the direction of social policy has been determined by a commitment to the principle of equality.
What, then, brought about the gap between promise and performance? I think there were three broad, fundamental weaknesses in the Johnson administration’s War on Poverty strategy which substantially reduced its effectiveness. First, the objectives were short-sighted, with an emphasis on providing services; a more effective strategy would have had more ambitious and permanent goals, such as a guaranteed annual income for the poor. Second, the administration devoted a good deal of effort to programs which sought to correct the political powerlessness of minorities, at the expense of attacking, head on, the economic roots of inequality. (I am referring here to Community Action programs and not, obviously, to the Voting Rights Act.) And, finally, a most serious weakness was the enormous under-financing of the majority of programs.
The consequences of under-financing were crucial; in some cases, the lack of financial commitment has paved the way for President Nixon’s current campaign to discredit liberal social formulas. As Leon Keyserling has demonstrated in his analysis of the federal program to provide decent housing for the poor, the refusal to provide massive financial aid can undercut an entire social program:
The reason why the federal effort to rehouse low income families “failed” is that it has not been tried, except in token form. An annual building program averaging one-twentieth of the annual need has inevitably raised almost as many problems as it has solved. It has tended to make “poor houses” of the public projects. It has permitted the slums to remain in full force. It has perpetrated one of the most dangerous of all social errors, to offer promises rather than performance.
By citing Keyserling, I am not proclaiming, as some on the Left have, that the failure of this or that Great Society program invalidates the traditional objectives of liberal social policy. Keyserling himself observes that the federal housing program has done an “immense amount of good.” Between 1960 and 1970, the percentage of metropolitan-area black families occupying housing with inadequate plumbing facilities declined from 24 per cent to 7 per cent. While this can be traced, in part, to the general improvement in the economic status of blacks, it must certainly be a partial result of the expansion of public-housing projects, programs of apartment rehabilitation launched under federal aegis, and reforms of the discriminatory policies of government-assisted housing-finance agencies—all of which were a part of the much maligned Great Society housing effort.
And while the services approach contributed to the haphazardness of federal social programs, wherever the services provided were those basic to the day-to-day experience of the poor, much good was accomplished. The reduction in the educational gap between the races is a good illustration of this. Recent statistics indicate that black high-school graduates are enrolling in college in roughly the same proportion as their white classmates. The dropout rate has also been substantially reduced. The average black youngster today has completed twelve years of schooling; in 1960, only 36 per cent of non-white males and 41 per cent of non-white females had completed four years of high school. I consider it inconceivable that the educational programs of the Johnson years did not have something to do with this dramatic improvement.
2) Is Richard Nixon, then, waging a “counterrevolution” against Johnson’s liberal social policy? I think the answer must be “yes.” The “New Federalism” implies a fundamental departure from the basic relationship between government and those whose interests are served by social progress, a relationship that has been a cornerstone of liberal doctrine.
The current budget contains but an inkling of the massive transformation of government functioning which I see as the ultimate Nixon objective. Nixon seeks, first, to institutionalize the philosophy that liberal social change is unworkable and that the federal government’s role should therefore be reduced to that of caretaker, reacting, perhaps, only in crisis situations. Second, Nixon hopes to reinforce the preeminent position business enjoys among the forces striving to dominate the formulation of government policy. While this has been a traditional function of Republican administrations, Nixon’s approach is more calculated than that of his GOP predecessors.
It is this commitment to business interests that has produced the massive public-relations campaign to discredit the Great Society, that has impelled Nixon to appoint those with corporate backgrounds to a high percentage of policymaking posts, and that has caused him to refuse to countenance tax reforms. It has even influenced the administration’s approach to minority advancement: black capitalism, a scheme which the President has wholeheartedly endorsed, has proved as ineffective as any program which the administration proposes to cut back. Yet, precisely because it appeals to such a narrow constituency—there are, after all, few black capitalists—and therefore raises few expectations, it has a stabilizing effect which broad social-spending programs do not.
The most “revolutionary” aspect of the New Federalism, and also the most ominous, is the transfer of spending powers from Washington to the states and municipalities. The implications of the revenue-sharing concept cut much deeper than the specifics of the budget cutbacks. For while Nixon may refer to what he is doing as decentralization, local autonomy, or whatever, I propose that he is really paving the way for a renaissance of a familiar tactic—a new States’ Rights manifesto.
This is not to suggest that Nixon seeks the reintroduction of Jim Crow, although the abrupt dismissal of Father Hesburgh as chairman of the Civil Rights Commission, the handling of Supreme Court appointments, and the efforts to emasculate the Voting Rights Act are indicative of Nixon’s lack of regard for the issues which the civil-rights movement saw as basic. But by handing over to local authorities the responsibility for spending policies, Nixon has at one stroke enhanced the power of those who have been the least able to respond to the need for social change.
This is why I find the widespread favorable response of liberals to the principles of revenue sharing unsettling. The civil-rights movement, it should be kept in mind, did not succeed simply in extending the rights of minorities; it was also largely responsible for discrediting the principle of states’ rights as applied to a broad range of policy questions. The movement taught us that the states and cities could not respond to the social needs of the poor and disadvantaged; eventually, this idea began to be reflected in basic government policy. It appeared that the liberal ideal of a strong, socially responsible federal government was about to be realized.
At this juncture, acceptance of the revenue-sharing philosophy could have profound consequences. There exists, for example, a genuine housing crisis in the ghettos, a crisis that will only be exacerbated by the moratorium on housing programs imposed by the administration. If, as seems likely, the responsibility for determining urban-development priorities is handed over to the mayors, there is little likelihood that those areas which most critically need help will see much of that money. What we may in fact witness is a revival of the urban-renewal policies of the past, with high-rise luxury apartments going up amid the slums.
The Left bears substantial responsibility for this situation. Decentralization has become attractive at least partially because of those who have decried the powers of Washington while insisting, from a leftward vision, that the fundamental task is bringing government “closer to the people.” Among whites, this attitude permeated the New Left and ran strongly among the reformers who dominated the McGovern campaign. For blacks, it was reflected in the emphasis on “nation building” at the expense of broad strategies for social change.
The antagonism to government which has sprung up among liberals is also to be found in the criticism of what is often referred to as the “distant and unresponsive” bureaucracy. Such criticism does not take into account the often progressive role played by the federal bureaucracy during the Johnson administration. It was a bureaucracy which compelled reluctant school boards to use funds for compensatory education programs for the low-income pupils for whom such money was intended. At the same time the Department of Housing and Urban Development began to insist that—regardless of real-estate and other interests—urban-renewal projects were to include provisions for low-income housing as well as middle-income housing and that relocation programs be carried out in a more equitable manner. HUD thus assured that these projects would serve as instruments for social progress.
3) Considering the current state of affairs, I find the question of what constitutes the limits of social policy irrelevant. The issue is not an academic debate over whether we have reached the outer boundaries of orderly liberal change; we have not even succeeded in achieving the modest goals of the Great Society.
Nor is the problem, as some would have it, that the Great Society attempted too much; on the contrary, it did not go far enough, and, more to the point, it employed strategies that, from both political and policy standpoints, were often counterproductive.
Nothing that has transpired in the past decade has led me to conclude that we cannot achieve massive, fundamental change or that such change cannot come about in a democratic and orderly fashion. The specifics of what might constitute a truly progressive program were spelled out in 1966, when A. Philip Randolph proposed the “Freedom Budget.” The Budget outlines a plan for social reform that would accomplish the following:
Provide full employment for all who are willing and able to work, including those who need education or training to make them willing and able; assure decent and adequate wages to all who work; assure a decent standard for those who cannot or should not work; wipe out slums and ghettos and provide decent homes for all Americans; provide decent medical care and adequate educational opportunities for all Americans, at a cost all can afford; purify our air and water and develop our transportation and natural resources on a scale suitable to our growing needs.
While its scope is massive, such a program should not be labeled utopian. It can be achieved precisely because it is broad enough to deal with the political problems which plagued the Great Society and which, during the Nixon administration, have been magnified by the administration’s economics of scarcity. The basic thrust of the “Freedom Budget” is a full-employment, full-production economy, and a program of fundamental tax reform, all of which the majority of Americans finds acceptable. It rejects the assumption that social spending must be accompanied by reductions in defense spending. Nor does it accept the current theory that economic growth is incompatible with a humane and progressive society; it rather seeks to stimulate the economy to produce the homes, schools, anti-pollution devices—whatever is necessary to fulfill the basic needs of society.
Today, many people are despairing of the possibility of implementing any sort of liberal social program. I think these people are mistaken. They are misinterpreting the mood of the electorate after the recent election. To perceive McGovern’s defeat as a repudiation of liberalism is, I submit, a misreading of what the voters were saying, a misreading which could have disastrous consequences. Liberalism was not rejected; the failure of the Republicans to improve their position in Congress demonstrated a continued faith in the traditional principles of liberal economic policy. McGovern’s assertion that he lost because the voters came to see him as the representative of blacks is incorrect; a more accurate interpretation is that McGovern lost because some felt he was speaking only to the needs of minorities. Although McGovern’s tax-reform package was the most radical ever proposed by a Presidential candidate, the candidate who did the proposing had, as a matter of fact, never addressed himself to economic issues during his Senate career. He simply lacked credibility in this area.
Nevertheless, both before and after the election Americans have expressed a continuing faith in liberal economic reform. Public-opinion polls have shown two-thirds of the American voters expressing dissatisfaction with Nixon’s economic policies. A majority feels he is too closely aligned with business. At the same time, national health insurance, tax reform, and other programs that would require basic changes have elicited strong support.
Finally, I do not consider the proposals entailed in the “Freedom Budget” as the final expression of liberal social policy. Beyond the “Freedom Budget,” however, I prefer not to speculate on the direction of sound social policy. I consider such speculation, at a time when we are so far from the basic goals of a decent society, as gratuitous as theorizing over whether we have reached the limits of social change.
Gus Tyler:
The Great Society of President Johnson was an extension of the New Deal, as were Truman’s Fair Deal and Kennedy’s New Frontier. The Nixon era—as it begins to emerge in his second term—is a reversion to the old deal. Although the conflict between “old” and “new” may be simplistically put as the contrast between laissez faire and the welfare state, these labels are—at best—just code names for more complex and concealed differences.
The welfare state rested on four tacit theorems: first, government can—and should—play an active role in bringing about social change. Second, the prime political push has to come from the federal government. Third, a basic commitment of federal government is to full employment, to be realized by expanding aggregate demand. Fourth, there has to be public investment to provide people with those services that would otherwise be out of their reach—even if they have jobs at reasonable pay.
None of the Democratic Presidents has been able to hold unswervingly to these dicta without making concessions to passing pressures. But the general direction has been the use of federal initiative and intervention in the economic life of the nation to “promote the general welfare.”
On all counts, Nixon acts on assumptions contrary to those of the New Deal. First, the government can do little to effectuate social change. Second, the “muscle-bound” federal government should turn over initiative and responsibility to “grass-roots governments” at the local level. Third, full employment is not a cardinal commitment, and, if the economy needs a lift, the encouragement should come not by increasing labor’s buying power but by enlarging capital’s investing power. Fourth, government—especially federal government—should get out of the business of providing “services” and ought to let people “buy” what they need in the open market.
While Nixon, like his Democratic forerunners, has had to adjust policy to political necessities (Phase I and Phase II being dramatic examples), the thrust of his administration, especially in the second term, is toward the pre-Rooseveltian precept of “rugged individualism.”
Ironically, Democratic policies were far less ideologically motivated than are current Nixon programs. Roosevelt and his successors invented ad hoc answers to crying questions. Nixon is trying to apply a theory out of the 19th century—that government is best which governs least—to the challenges of the late 20th century.
Will this work? I don’t think so—if anything can be learned from past experiences or present exigencies.
The mother of the New Deal invention was necessity. Politics had to step into the economy if the country was not to fall apart. So the government acted, doing all those things that private enterprise could not do (too fragmented) and would not do (no profit in it). As a consequence of these Rooseveltian initiatives, continued by his successors, profound and enduring social changes took place in America: mainly, an economic lift that restored the national spirit.
While it is customary to describe this transition as a shift from laissez faire to the welfare state, such a definition is unduly simplistic. The government has always had a big hand in the economy, from the Colonies to Coolidge. British kings (government) made land grants to favorites who converted their real estate into private preserves for self-enrichment. Hamilton pushed successfully for national policies to foster business. After the War of 1812, the government wrote tariffs to protect infant industries. Commercial monopolies, such as the Bank of the U.S., were chartered by the government. Private corporations were subsidized to build and run highways, canals, and railroads. Federal troops broke strikes; courts handed out labor injunctions; public properties were “given away” to the politically influential; sales taxes were enacted to soak the poor and loopholes were contrived to exempt the rich; corporations bought judges, legislators, governors, Senators, because they knew the economic power of politics.
The New Deal was not really the beginning of governmental intervention in the economy. But it was the first continuing program of government on behalf of the unempowered against an Establishment that used the mouthings of laissez faire to make certain that “government of the people” would not be “for the people.” Bluntly—laissez faire as a practice never existed. The real question has always been: on whose side shall the government intervene? Prior to Roosevelt, the government had—by and large—been the “executive committee of the ruling class.” After Roosevelt, the common man put a few of his people on the committee.
The grand lever of the liberal era has been the federal government—again not because of any ideological enchantment with Washington but because of the total incapacity of state and local government to cope with the complexities of a metro-industrial society. The states were not inclined to act because their legislatures had been districted and apportioned to maintain a rural regnancy for eternity. This control of states by the 18th-century mind meant (a) they would not respond to 20th-century urban problems; and (b) the states would not give their cities the power to do so. The few states that did respond somewhat—like New York and Wisconsin—did so at a high risk: factories and finance would flee the “better” states for the bad states, where they could get bargain rates on labor and taxes. The life-and-death crisis of the society—the great Depression—arose from the total system, nationwide, and could not be resolved locally—no matter how much Hoover exhorted folks to organize “block aid” to help a distressed neighbor.
The federal impetus stimulated the growth of positive social action at the state and local level. While the number of federal employees has been almost constant (between 2.2 and 2.4 million), the number of state, county, and municipal employees has risen from 4.6 million (1955) to 10.5 million (1970). Federal funds to cities have liberated them from the stultifying shackles of state legislatures. The “better” states have found it easier to move ahead because standards in the “bad” states have been elevated by federal action. Whole new categories of social action have been opened as the federal government has moved funds to lower levels for housing, hospitals, education, mental health, nurseries, research, transportation, law enforcement, senior citizens, youth activities, job training, Headstart, sewage treatment, outreach programs, etc. It was federal initiative and funds that vitalized state and local government.
The heart of these policies was full employment, to be obtained by increasing the buying power of “the people.” This was done by work and make-work projects, federal minimum wages, unemployment insurance, old-age pensions, aid to the blind and handicapped and dependent children, and—the encouragement of collective bargaining. While all the Democratic Presidents, in greater or lesser measure, also made government funds directly available to business enterprises, they assumed that “trickle down” was a far less effective way to stimulate the economy than “perk up.” In due time, this “aggregate demand” was significantly “expanded” by direct public spending (investment) in necessary social efforts.
Public investment, like the rest of the liberal program, was born out of need. Once people got jobs and made some money, they were still left with unmet needs: schools for the children, homes for the aged, hospitals for the sick, medical attention for the family, day-care centers for children of working parents—to name a few examples. These services could, of course, all be bought on the open market—if you could afford them. But most low- and middle-income families could not, since such services were not available at budget prices from entrepreneurs motivated by profits. To meet these unmet social needs, the government went into the business of housing, education, day care, Medicare, Medicaid, mental health, rural electrification—either directly or indirectly. The “services” strategy became a necessary part of the “income strategy.”
By the 1960’s, it was sensed that even “full employment” was inadequate. There was a large chunk of America—black and white—that had been so crippled by its past that it could not function in an urban society, whether as workers, parents, students, or law-abiding citizens. To leave these families to themselves would perpetuate the poverty cycle. Hence, government had to lend a helping hand: to acculturate new generations, to train for jobs, to prepare for education, to foster a self-image of worth and dignity. This was an attempt to undo the damage of decades and generations and to do so quickly with the totally inadequate funds offered by a House of Representatives in the hands of a conservative coalition. A small and hasty start was made with consequences that can only be measured decades and generations later. But the first daring, dangerous step was made in the long journey out of night.
Although I have been hesitant, almost embarrassed, to present this short history of the Democratic purpose from Franklin Roosevelt to Lyndon Johnson, I find it necessary to do so to offset the neo-conservative arguments for the current Nixonian doctrines that totally ignore the lessons of our most recent and significant historic experiences. I feel that the liberal progression has provided tools that were valuable in the past and that will be indispensable for the future.
This is not to claim that the application of liberal principles over the last generation has remedied all our social ills: a few were cured; others were checked; some were overlooked. In the last few years, new illnesses developed, the most crucial of which—in my opinion—are the urban crisis, the maldistribution of income, and the environmental peril. In all cases, I believe, these maladies call for a strong dose of governmental medicine at the federal level.
The urban crisis cannot be resolved by the cities in the cities because the mess did not begin in the cities; it began in rural America. In two decades some twenty million dispossessed were driven from soil to city by the contrary policies of increased agricultural productivity and curtailed agricultural production. On a tidal wave of migration, these unacculturated hordes poured into urban America, carrying with them the classic upset of the uprooted, expressed in poverty, dependency, delinquency, crime, and riot—just as in Dickens’s England. To the inevitable conflict was added the ingredient of race, since so much of rural America (the Southeast) was black. The cities were totally unprepared for such an invasion of the “vandals.” The nation was—until quite recently—even unaware of this demographic disaster.
The crisis hit us precisely because there was no national policy on what to do with the agricultural “fall-out”: the new nomads of America. Their inpourings have packed about 75 per cent of the American people into less than 2 per cent of the land area, while half the counties of the nation have been losing population. A rational remedy for this madness has been repeatedly proposed, most recently by a National Commission on Urban Growth Policy, suggesting a redistribution of populations by a reversal of the flow: the creation of some two hundred new communities by the year 2000. The impetus for such a program will never come from the cities or small-town America. The drive—policy, power, funds, plans—must come from Washington. Without such an “inner space” program, New York, Los Angeles, and even Philadelphia may all become just another Calcutta: an urban jungle.
How does one finance such a multi-trillion-dollar project? At present, Uncle Sam does not have the money. But—at present—some $50-60 billion a year is lost to the federal treasury through “loopholes.” The American people are too poor to do many things because the affluence of the nation is outside the reach of the people. There is a top 2 per cent that has annual income equal to that of the bottom 40 per cent; a top 10 per cent of families has income equal to that of the bottom 60 per cent. The same top 2 per cent is the beneficiary of the great tax exemptions. So long as the riches of the nation are thus locked up at the top, every decision to redistribute the remaining income becomes a bloody battle among the “tribes” in the lower 80 per cent, especially mean and violent among the bottom 40 per cent.
Despite the New Deal and its progeny, income in the U.S. has not been “redistributed” since the beginning of the century. The reason? Ownership has become—if anything—more concentrated than in the past and—as always—income is a function of income-producing wealth. (A top 1 per cent owns about 75 per cent of the privately-held corporate stocks, 85 per cent of the bonds, and 95 per cent of the municipal bonds.)
To redistribute national income will require radical changes in the system of taxation and/or ownership of income-producing wealth: lands, corporations, financial institutions. Clearly, such policies must flow from a national commitment to restructure the economy. Bedford-Stuyvesant, on its own, cannot redistribute the wealth of America.
In the past, the U.S. has avoided the violent political consequences arising from a gross maldistribution of income primarily because of phenomenal economic growth: the gross national product has grown more rapidly than the population, so that without changing the “shares” (slices of our Great National Pie) there has been a bigger piece for everyone.
Continuation of growth, however, is seriously challenged at present by the “ecology” people who fear the rapid exhaustion of resources like fuel, water, air, usable soil—even living room. Whether these fears are alarmist or well-founded, it is agreed that the nation can no longer go its blithe way unmindful of what happens to our environment and our limited resources. Hence, we need national—perhaps international—policies on ecological problems.
Such policies are necessarily economic and political. Some examples: (a) Much of the GNP is made up of useless or even noxious items that devour valuable, sometimes irreplaceable, resources. The public interest may well require a discouragement (prohibition) of such production while shifting manpower and funding (almost necessarily public) to projects like scrubbing Lake Erie and detoxifying poisoned soil; (b) steps to combat pollution, to restore damaged environments, to ration resources are a cost that must be absorbed by someone: either the consumer, or the worker who must do with a smaller wage, or the corporation that will accept a smaller profit, or the public by taxation. The last (taxation) will, in turn, require political decisions as to who (what class) shall pay the tax.
These challenges can only be met with a series of interlocking policies on population control, new communities, income distribution, fuel, water, air, soil, temperature, transportation, resources. None of these policies is possible—and certainly not possible in gestalt—without an initial decision to have national policies.
The entire Nixonian predisposition is to the contrary: the administration favors little over big government—local over federal, cheap over expensive, weak over strong. This plea for pusillanimity is puffed up grandiloquently as “the New Federalism,” or “the Second American Revolution,” or “grass-roots government,” or “the fulfillment of the American dream.” But as the New York Times put it: “The dream looks remarkably like right-wing nostalgia for the days when individual self reliance and states’ rights were regarded as a sufficient answer for every complex social problem.”
Currently, this retreat from federal social responsibility is paraded as a magnanimous advance to revenue sharing—Uncle Sam being kind to his nieces and nephews. Originally—under its inventor, Democrat Walter Heller—that is exactly what revenue sharing was: a way to use the federal tax power to give more money to local governments for their general purposes. When Congress last year voted a five-year $30-billion plan for revenue sharing, it thought it was giving the cities additional money—over and above what they were getting in categorical aids from the federal government for housing, education, and a wide spectrum of social services. “But things don’t seem to be working out that way,” reports the Wall Street Journal (February 26). “For, right now at least, revenue sharing is promising little revenue and less sharing. . . . The President has decided to use revenue sharing not only as a device for radically altering the way federal money is disbursed to the states and localities but also as a mechanism for dismantling many of the Great Society social-aid programs inherited from the Johnson administration. . . . What’s shaping up is a substantial—in some cases, perhaps, even massive—withdrawal of federal funds from a wide variety of programs.”
The “wide variety” goes far beyond controversial items like Community Action programs. “He [Nixon] is suspending some programs (subsidized housing), terminating others (hospital construction), and impounding funds and sharply reducing spending for still others (manpower training),” reports the Journal. Nixon plans to “withdraw federal money for neighborhood mental clinics, hospital construction, and regional medical programs,” reports the Times on March 5. He is withholding funds for a rural environment program; he is rewriting the rules for day care so that working mothers will not be able to place their children in federally subsidized facilities; he is rewriting the regulations on Medicare to step up the cost by about half a billion dollars to sick seniors. He has announced that he would veto pending bills dealing with such various matters as “flood control and rural electrification . . . airport security and veterans’ burial benefits.” And if the veto is overridden, declared domestic affairs adviser John Ehrlichman, the President will simply impound the funds. The Nixonian assault is not on a few aberrations of the Johnsonian effort but on as much of the New Deal accomplishments as it dares assail.
One popular rationale for this turnabout is simply that Uncle Sam cannot afford these past luxuries. Explaining Nixon’s pre-announced veto of pending legislation, John Ehrlichman warned that these proposals represented a “$9 billion dagger aimed at the heart of the American taxpayer.”
Can the U.S. Treasury—in this post-Vietnam era—afford this $9 billion? Testifying before Congress, economist Joseph Pechman of the Brookings Institution put it on the record that “if income was more fully taxable and the unnecessary deductions were removed, it would be possible both to reduce taxes and to buy more of the public services that the recent budget suggests we cannot afford.” Stanley Surrey points out that while this administration pleads poverty, it gives away between $50 and $60 billion a year in the form of tax exemptions to the rich, a sum that exceeds all other government subsidy, grant, or credit programs and is, in addition, “immune from scrutiny at any time when the regular budget is being carefully scrutinized for every possible savings.”
However erroneous Ehrlichman’s argument may be, it is popular: people don’t want to pay more taxes and they have ready ears for an American Poujadiste. But for how long? In a recent column, John P. Roche offered a prediction: “I’m convinced that after the American people have lived a while with the Nixon Doctrine of ‘the withering away of the state’ they will begin to yearn for the good old days.” Especially—may one add—when they discover that there are ways to get the rich to help pay the bill.
George F. Will:
My Philosophy tutor at Oxford was fond of quoting to me the late J. L. Austin’s remark that he did not want to make philosophy foolproof, he wanted to make it genius-proof. I was never an impediment to Austin’s program, but I understood his point. Philosophic geniuses tend to clutter up the intellectual landscape with portentous, often intriguing, but usually unclear and confusing ideas. Philosophy should be analytic, a solvent, in the sense that it dissolves rather than solves most “philosophic” questions, whatever they might be. The imposing constructions of the most ingenious philosophers tend to evaporate under the scrutiny of reasonable men committed, first and foremost, to clearing up logical muddles. So the rule for genius-proofing philosophy is: do first things first—clarify—and you may find that first things are all you have to do. Of course, clarity is not the sort of philosophic goal that sends men to the barricades, and that is not its only merit. Its principal merit is that it deals competently with the manageable task it prudently sets for itself, the task of tidying up our thought. And it does no harm. Austin’s conception of philosophy might seem unimaginative or timid but, in matters of the mind, there are many virtues on which unimaginativeness and timidity are improvements.
Not even my tutor’s heroic efforts made me competent to judge whether Austin’s axiom is really sound. But having just spent several years watching the federal government function, I feel competent to advocate a political version of the axiom. We should make government ingeniousness-proof, especially with regard to social policies, and most especially with regard to policies concerning poor people. We have tried ingenious anti-poverty measures, from manpower programs through Community Action, designed to change the nature of poor people. The theory was that if the government gave poor people marketable talents and a feisty, demanding demeanor, they would be able to cope with American life. The government undoubtedly did some good. But the government is better, much better at delivering material goods to particular places—moon rockets, even the mail—than at delivering particular skills and character traits to a segment of the population. So instead of tinkering with their temperaments, the government should content itself with directly improving the economic condition of poor people. The problem with that condition is too little money. So the government should quit decreasing the purchasing power of the money poor people have, and if that does not help enough, it should give them more money. But first things first: the government would improve its performance if it would quit harming the poor, economically.
If America’s poor people were politically organized—or even organizable—and sophisticated and adept, they would be a very odd kind of poor people, and they could compete as just another interest group in the normal scramble for government favors. If poor people were malleable, adjustable, and able easily to acquire marketable new skills, most would not be poor. So if we begin with more realistic expectations for the poor, we will devise less ingenious ways of helping them. And we may actually help them. Certainly we will help them more if we think about them—explicitly about them—less. Some people believe the efficient way to fight poverty is to load “poverty money” into the federal budget. But the effective force in generating such money virtually guarantees that the money will not be an efficient force against poverty. (The bureaucracy gets Congress to appropriate an average of $8,000 for every Oglala Sioux family at Wounded Knee. The average Oglala family income is $1,900 per year.) The best way to help the poor is by provoking the non-poor majority into making some self-interested demands that also help poor people. This will be a very limited war on poverty, waged by indirection and, if not in defiance of public hostility to such a war, at least without asking or receiving explicit support. It will be like Vietnam: an untidy, inconclusive, little war, but the only war possible, given the political mood of the nation.
Politically, “poverty programs”—designed and advertised as “for the poor”—are not popular. Such programs are generally believed to have failed in the 60’s. Given this belief, it is instructive to note that the 60’s saw some of the most significant domestic reform in the nation’s history.
The most important social policies of that decade were not service programs, and did not fail. They addressed facets of the permanent great problem of our nation, and they were stunning successes. One decade ago, in April 1963, “Bull” Connor was a household word: the Birmingham demonstrations were under way. With a little help from Connor, public opinion moved further, faster, in a humane direction than anyone could have anticipated. This was primarily the work of a politically skillful civil-rights movement that identified salient issues, devised clear legislative goals and strategies, and built a constituency and a consensus nationally, thereby enabling the politicians to perform creditably. The leading poverty fighters never did this, in part because they were not related to “their problem” the way the civil-rights leaders were: civil-rights leaders were fighting for their own rights. Historians will record that the most significant legislative landmarks of the decade concerned the distribution of rights, not wealth. They concerned access to voting booths, jobs, barbershops, the Pickrick restaurant. And they were noble achievements. The tendency to ignore them when considering social policy in the 60’s reflects only the pernicious habit of defining institutions and policies by the goals they fail to achieve.
The “failure” of poverty programs is what really interests people, and that is interesting. Such an assessment of poverty programs often involves dubious criteria of failure, and distinctively American social impatience. A significantly smaller percentage of our population lives in poverty today than a decade ago. It is hard (but, in my judgment, not crucial) to know precisely how much credit for this goes to poverty programs. In any case, this improved percentage is not evidence of failure of those programs—unless, of course, one assumes the programs were irrelevant to the improvement; or that the improvement came in spite of a negative net effect of the programs; or that they “failed” because some poverty remains. These are unreasonable assumptions. Reasonable people disagree as to whether poverty programs failed. But today the salient fact is political: most Americans are not much interested in poverty, and not at all interested in poverty programs. The next phase of the attack on poverty must begin by accepting that fact.
To avoid a distracting argument, let us assume the poverty programs “worked,” at least a bit. As Milton Friedman has said of a controlled economy: it works, and so does a horse and buggy. The 1960’s poverty programs were the horse-and-buggy approach to improving the lot of poor people. Because some programs seemed cleverly inventive, they had a certain fascination for many Americans. From Robert Fulton and Eli Whitney through Samuel Colt, Cyrus McCormick, Thomas Edison, and Henry Ford, the nation has profited from a series of brilliant strokes of practical intelligence that improved the conditions of living. The spirit of modernity—intelligence is practical; man can cope; problems are tractable, being rooted in past behavior rather than permanent human attributes—that stimulated such inventiveness seemed vindicated by it. True, today some people are obsessed by the utterly unsurprising fact that revolutionary inventions often have disagreeable unanticipated consequences: Whitney’s made slavery pay, Ford’s democratized a polluting activity. But more significant than today’s fashionable resentment of the men who invented our conveniences is the fact that our nation reveres its inventors above all philosophers and most politicians. So it is not surprising in a politicized age that Americans’ social genes make them impatient for benevolent government inventiveness. Perhaps they are encouraged to expect an Edison-like genius from public officials because the Manhattan Project, the Apollo moon program, and the interstate highway program—all dealing in hard goods, and backed by powerful interests—have been the most successful government programs of modern times: they did what they were supposed to do. But no program, least of all a “service” program explicitly “for” poor people, will deal in the kind of things, or be backed by the kind of interests, that enables the government to perform skillfully. And that is why the so-called “human resources” portion of the budget (by far the largest portion, 47 per cent, compared with 30 per cent for defense) is not the place to look for evidence that the government has a serious “program” for helping poor people.
Of course many conspicuous defenders of the poor argue—many sincerely, some cynically—that the quantity of “poverty money” in the federal budget is the most important fact about the relation of poor people to their government. That mistaken notion accounts for some of the overwrought reaction to the President’s Fiscal Year 1974 budget. The reaction casts doubt on the sobriety of some people who want to command divisions in the next declared war on poverty. The asperity with which they have attacked the new budget suggests the lengths to which they will go to confirm their belief that the President would rather grind the faces of the poor than watch the Superbowl.
In fact, the budget calls for a $13 billion deficit. The President proposes to spend $268.7 billion this year, double what Lyndon Johnson spent in 1966 while escalating wars against poverty and North Vietnam. The President proposes to spend $19 billion (7.5 per cent) more than last year and $22 billion (9 per cent) more than he originally proposed to spend. (This year’s increase is larger than the sum of all the federal budgets from 1789 through 1906.) Last year’s outlays by government at all levels came to nearly 40 per cent of the national income. The increase in outlays in the last five years is equal to 46 per cent of the increase in national income, and if you delete defense outlays the figure is 45 per cent. As has been said, we are lucky we do not get all the government we pay for.
Some of the programs that people defend against the President’s cuts (e.g., cheap loans for non-poor agriculture interests) are like Pascal’s hare that one would chase all day but would not bother to buy. These people are applying to the federal government the rule Hitler applied to the Wehrmacht in Russia: where it sets its foot, it shall never withdraw. These people (like Hitler, but with less reason) believe that retrenchment will become pell-mell retreat. Also, because there was a time when it was hard to get money for social programs, some people cannot believe that today’s government is awash with systemic pressures for more spending, particularly for existing programs, and especially for programs that are not working (because, their patrons say, they are “insufficiently funded”). Such dogmatic resistance to (non-defense) budget cuts calls to mind a proverb: “When he was young he burned his tongue on the soup, so now he blows on the yogurt.” The federal budget runneth over. Federal spending is increasing faster than economic growth is increasing revenues through the tax system, and virtually the only existing consensus (remember that word from the Great Society salad days?) is that there should be no tax increase.
But for the poor, the budget is not the point. The budget, like the rest of government, is a playground for those who are physically fit, politically. It usually takes organization, leadership, money, and a leavening dash of political moxie to place an item in the budget. Thus one can almost assume that any item for the poor is not just for them, and perhaps not primarily for them. Much of the demand for day-care facilities, and virtually all the effective demand, is from middle-class mothers who look upon the program as a monument to their “right” to have a career outside the home. For the Family Assistance Plan, the President requested, and Congress refused, extra billions for poor people. Jerry Wurf, whose State, County, and Municipal Employees Union includes 30,000 welfare workers, opposed FAP: “This legislation threatens to eliminate the jobs of our people.” Daniel P. Moynihan, FAP’s father, notes with a trace of wonder that the poor “never showed any sign of comprehending the opportunity being offered them, nor of resenting those who in their name rejected this offer.” But what is less surprising than the fact that people who cannot cope with the economic system have not mastered the complexities of the political process?
When President Johnson submitted his Fiscal Year 1969 budget he said $27.7 billion was for all “assistance to help reduce the number of people living in poverty.” Probably a similarly derived figure today would be higher—and just as useless—for measuring the federal government’s effect on poor people. The real effect comes from the myriad measures that influence the size, shape, and vigor of the economy. Many of these measures do not show up in the budget, and of those that do, few are labeled as what they are—subsidies. But they all are part of the evolving American political economy that now deserves to be known as a “subsidy system.” The working of this system is neither arcane nor secret. It is all very public business, aboveboard and on the record, open to the inspection of everyone who heeds Yogi Berra’s epistemological principle: “You can observe a lot by just watching.”
For the purpose of understanding this system it is useful to define a subsidy as direct or indirect government assistance (sometimes but not always monetary) to a private interest to improve the lot of that interest. A subsidy system exists when government adopts a comprehensive (although perhaps unsystematic) use of subsidies to influence the amount and distribution of wealth throughout society. Our subsidy system recently received some attention when the Joint Economic Committee of Congress, chaired by Senator William Proxmire (D., Wis.), claimed that subsidies involved $63 billion in Fiscal Year 1970. As Senator Proxmire says, subsidies are “one of the most controversial, . . . and neglected areas of government activity . . . [and] are the dominant form of government intervention into the incentive structure of particular private markets.”
One reason the subject of subsidies is neglected is that the word “subsidy” does not make many official appearances. Subsidizing usually is done by things called regulations, guidelines, aid, reimbursements, incentives, tax credits, loan guarantees.
Direct intervention in the market, as with agricultural products, is a subsidy.
Regulations that set railroad freight rates are subsidies, as are limitations on truck routes that restrict competition.
Tariffs, import quotas, and export assistances are subsidies.
Licensing and apprenticeship requirements that have the intended effect of restricting entry into a profession are subsidies.
Payments of public funds to induce the production of goods (e.g., Ph.D.’s and museums) or delivery of services (e.g., airline service to small communities) that would not otherwise be produced or delivered are subsidies.
The humanitarian rhetoric used by AFL-CIO lobbyists last year in the effort to increase the minimum wage could not obscure the fact that minimum-wage laws are indirect subsidies for organized labor, which enhance its general negotiating position by forcing up the base wage. The fact that every increase in the minimum wage generates unemployment among the most vulnerable workers—usually young, often black, in marginal jobs—underscores the fact that subsidies usually work for the strong at the expense of the weak. The fact that the giant McDonald’s hamburger chain was not motivated by altruism last year does not alter the fact that, by opposing the minimum-wage increase, it was acting in the interests of a lot of poor black teen-agers.
Subsidies are everywhere, and much of the political activity today at the federal level concerns their allocation. Each element of this interlocking, overlapping quilt of disparate government measures, like most things in a modern state, has its own momentum, nurtured by the attentive special interest it serves. Like Socrates’s death, the subsidy system injures incrementally, and injures the poor most of all. Fortunately, it does not injure only the poor. Its inevitable product is inflation, which is a very democratic aggravation: it affects everybody. That is what gives poor people reason for hope.
The subsidy system’s politicized cost of living injures poor, disorganized, inarticulate people but it angers non-poor, articulate people, and anger is politically more important than mere injury. So the most direct and obvious way to help poor people, at least a little bit, is to get non-poor people angry about the subsidy system that produces the inflation that most severely injures people with little discretionary income. Non-poor people vote regularly and can be made to complain loudly. They must be made to notice the costs of unnoticed government measures.
We have come a long way from the day Alexander von Humboldt described the U.S. government to the King of Prussia as “a government which no one sees or feels, yet is far more powerful than Your Majesty’s government.” Today the unfortunate thing is that too many potentially obstreperous people do not know when they are seeing or feeling their government. There are some encouraging signs. More and more people are cottoning on to the fact that they are seeing their government’s agriculture programs in high food prices and feeling their government’s import quotas when fuel oil is scarce. It would be grand if this were the beginning of a lot of boisterous resentment on the part of middle-class consumers. If it is, it will not be the first time resentment fueled benevolent social change. Today, with the federal budget under severe pressure and the public out of patience with poverty programs, many measures that will help poor people also will benefit the non-poor but very irritable majority.
The subsidy system was built by intense interests exploiting the fact that most people have better things to do than monitor the details of government. Cattlemen are attentive to meat imports. Hamburger eaters are not, at least not yet. But the widespread agitation about inflation in general and food prices in particular has caused the administration’s tentative retreat from some farm subsidies. That retreat is a victory for poor people.
There probably will not be a substantial dismantling of the subsidy system. And maybe there should not be: a subsidy, like free trade, is an expedient, not a principle. Some subsidies are useful. The sudden withdrawal of others would cause unjust and insupportable injury to interests that have come to depend on them. In any case, there is scant evidence that a generalized resentment, generated by particular instances of subsidized inflation, will soon become a demand for a substantially less politicized economy. Certainly as long as the ideology and bureaucracy of wage-and-price controls function as they now do we will continue our resolutely downhill trudge toward a comprehensively controlled economy in which the big economic battalions bake and cut the pie.
If that is where we are going, the distribution of income in the U.S. will continue to be unfair, not so much because of inequities inherent in any particular disparities of income, but because the rewards of life in America are so significantly influenced by government measures that would be fairer if they were as capricious as they may appear to the casual observer. And so, because any dismantling of the subsidy system is apt to be very partial, that kind of aid to the poor, valuable though it is, should be supplemented by—a subsidy. Because our subsidy system probably cannot be made to wither away, people need and are owed something like FAP, some form of income support—which, by the way, is a crackerjack example of anti-ingeniousness in social policy: if people are poorer than you think they should be, give them cash.
Poor people should—but probably will not—have many conservative allies in promoting income support. To understand why many conservatives will not be allies, consider Howard Phillips, the enthusiast unleashed upon the Office of Economic Opportunity. Mr. Phillips plunged into his demolition duties with a zest derived from a tiresomely familiar conservative doctrine: poverty is an economic problem, not a “political” problem. That is absurd. Poverty is a political problem and a governmental responsibility because it is an economic problem. Conservatives should understand this better than most people. Conservatives cherish the paradigm of a limited government holding the ring for a fluid, competitive, unregulated market that metes out justice. They understand the extent to which we have departed from that. But too many do not seem to understand the moral imperatives that devolve from that departure. Indeed, the current debate-cum-autopsy about Great Society poverty programs is bewildering because those who oppose treating poverty as a “political” problem derive their indictments of liberal programs—most of which are indictable—from a social doctrine that is refuted by the most obvious facts of American political and economic life. Today the government is a significant, pervasive factor determining the rewards of life in America, and the government did not become so last Tuesday. If conservatism is going to help make things more fair, it must consist of more than a tissue of inclement truths about the futility of government. It must take seriously the political economy that has evolved in the atmosphere (fostered by liberalism) of casual government intervention in economic affairs, a political economy that conservatives cannot realistically hope to dismantle.
We are stuck with a very political economy for as far into the future as any sensible person would claim to see. The public sector is too much with us. As a result, much of the private sector, including the economic condition of poor people, is not a “private” matter any more; it is permeated with the public interest. Regarding the poor, politicians should swear an oath—“I will do no harm”—and seek a social strategy based on fewer harmful subsidies for the strong and one subsidy—income support—for the poor. Such a strategy will not involve new, electrifying, ingenious programs, but at least it will not violate the rule that there should be a government way where there is a government will. Such a strategy may seem unimaginative, even timid, but in government, as in matters of the mind, there are many virtues on which unimaginativeness and timidity are improvements.
Edward C. Banfield is Kenan Professor of Public Policy Analysis and Political Science at the University of Pennsylvania. His most recent book is The Unheavenly City.
Nathan Glazer, professor of education and social structure at Harvard, is the author of Remembering the Answers, American Judaism, and (with Daniel P. Moynihan) Beyond the Melting Pot.
Michael Harrington is editor of the Newsletter of the Democratic Left, a new publication. His books include Socialism, Toward a Democratic Left, and The Other America.
Tom Kahn is assistant to the president of the AFL-CIO and former executive director of the League for Industrial Democracy.
Christopher Lasch teaches history at the University of Rochester. He is the author of The Agony of the American Left, The New Radicalism in America, and The World of Nations (to be published this fall).
Robert Lekachman, a Guggenheim Fellow for 1972-73, is professor of economics at the Stony Brook campus of SUNY and the author of the recent National Income and the Public Welfare.
Bayard Rustin is executive director of the A. Philip Randolph Institute and the author of Down the Line.
Gus Tyler is assistant president of the International Ladies Garment Workers Union and the author of, among other works, Labor in the Metropolis.
George F. Will, who has contributed to the Washington Post and the National Review, was for several years an aide to former Senator Gordon Allott (R.) of Colorado.
Choose your plan and pay nothing for six Weeks!
For a very limited time, we are extending a six-week free trial on both our subscription plans. Put your intellectual life in order while you can. This offer is also valid for existing subscribers wishing to purchase a gift subscription. Click here for more details.
Nixon, the Great Society, and the Future of Social Policy-A Symposium
Must-Reads from Magazine
t can be said that the Book of Samuel launched the American Revolution. Though antagonistic to traditional faith, Thomas Paine understood that it was not Montesquieu, or Locke, who was inscribed on the hearts of his fellow Americans. Paine’s pamphlet Common Sense is a biblical argument against British monarchy, drawing largely on the text of Samuel.
Today, of course, universal biblical literacy no longer exists in America, and sophisticated arguments from Scripture are all too rare. It is therefore all the more distressing when public intellectuals, academics, or religious leaders engage in clumsy acts of exegesis and political argumentation by comparing characters in the Book of Samuel to modern political leaders. The most common victim of this tendency has been the central character in the Book of Samuel: King David.
Most recently, this tendency was made manifest in the writings of Dennis Prager. In a recent defense of his own praise of President Trump, Prager wrote that “as a religious Jew, I learned from the Bible that God himself chose morally compromised individuals to accomplish some greater good. Think of King David, who had a man killed in order to cover up the adultery he committed with the man’s wife.” Prager similarly argued that those who refuse to vote for a politician whose positions are correct but whose personal life is immoral “must think God was pretty flawed in voting for King David.”
Prager’s invocation of King David was presaged on the left two decades ago. The records of the Clinton Presidential Library reveal that at the height of the Lewinsky scandal, an email from Dartmouth professor Susannah Heschel made its way into the inbox of an administration policy adviser with a similar comparison: “From the perspective of Jewish history, we have to ask how Jews can condemn President Clinton’s behavior as immoral, when we exalt King David? King David had Batsheva’s husband, Uriah, murdered. While David was condemned and punished, he was never thrown off the throne of Israel. On the contrary, he is exalted in our Jewish memory as the unifier of Israel.”
One can make the case for supporting politicians who have significant moral flaws. Indeed, America’s political system is founded on an awareness of the profound tendency to sinfulness not only of its citizens but also of its statesmen. “If men were angels, no government would be necessary,” James Madison informs us in the Federalist. At the same time, anyone who compares King David to the flawed leaders of our own age reveals a profound misunderstanding of the essential nature of David’s greatness. David was not chosen by God despite his moral failings; rather, David’s failings are the lens that reveal his true greatness. It is in the wake of his sins that David emerges as the paradigmatic penitent, whose quest for atonement is utterly unlike that of any other character in the Bible, and perhaps in the history of the world.
While the precise nature of David’s sins is debated in the Talmud, there is no question that they are profound. Yet it is in comparing David to other faltering figures—in the Bible or today—that the comparison falls flat. This point is stressed by the very Jewish tradition in whose name Prager claimed to speak.
It is the rabbis who note that David’s predecessor, Saul, lost the kingship when he failed to fulfill God’s command to destroy the egregiously evil nation of Amalek, whereas David commits more severe sins and yet remains king. The answer, the rabbis suggest, lies not in the sin itself but in the response. Saul, when confronted by the prophet Samuel, offers obfuscations and defensiveness. David, meanwhile, is similarly confronted by the prophet Nathan: “Thou hast killed Uriah the Hittite with the sword, and hast taken his wife to be thy wife, and hast slain him with the sword of the children of Ammon.” David’s immediate response is clear and complete contrition: “I have sinned against the Lord.” David’s penitence, Jewish tradition suggests, sets him apart from Saul. Soon after, David gave voice to what was in his heart at the moment, and gave the world one of the most stirring of the Psalms:
Have mercy upon me, O God, according to thy lovingkindness: according unto the multitude of thy tender mercies blot out my transgressions.
Wash me thoroughly from mine iniquity, and cleanse me from my sin. For I acknowledge my transgressions: and my sin is ever before me.
. . . Deliver me from bloodguiltiness, O God, thou God of my salvation: and my tongue shall sing aloud of thy righteousness.
O Lord, open thou my lips; and my mouth shall shew forth thy praise.
For thou desirest not sacrifice; else would I give it: thou delightest not in burnt offering.
The sacrifices of God are a broken spirit: a broken and a contrite heart, O God, thou wilt not despise.
The tendency to link David to our current age lies in the fact that we know more about David than any other biblical figure. The author Thomas Cahill has noted that in a certain literary sense, David is the only biblical figure that is like us at all. Prior to the humanist autobiographies of the Renaissance, he notes, “we can count only a few isolated instances of this use of ‘I’ to mean the interior self. But David’s psalms are full of I’s.” In David’s Psalms, Cahill writes, we “find a unique early roadmap to the inner spirit—previously mute—of ancient humanity.”
At the same time, a study of the Book of Samuel and of the Psalms reveals how utterly incomparable David is to anyone alive today. Haym Soloveitchik has noted that even the most observant of Jews today fail to feel the constant intimacy with God that the simplest Jew of the premodern age might have felt, writing that "while there are always those whose spirituality is one apart from that of their time, nevertheless I think it safe to say that the perception of God as a daily, natural force is no longer present to a significant degree in any sector of modern Jewry, even the most religious." Yet for David, such intimacy with the divine was central to his existence, and the Book of Samuel and the Psalms are an eternal testament to this fact. This is why simple comparisons between David and ourselves, as tempting as they are, must be resisted. David Wolpe, in his book about David, attempts to make the case that King David's life speaks to us today: "So versatile and enduring is David in our culture that rare is the week that passes without some public allusion to his life…We need to understand David better because we use his life to comprehend our own."
The truth may be the opposite. We need to understand David better because we can use his life to comprehend what we are missing, and how utterly unlike his own our lives are. For even the most religious among us have lost the profound faith and intimacy with God that David had. It is therefore incorrect to assume that because of David's flaws it would have been, as Amos Oz has written, "fitting for him to reign in Tel Aviv." The modern State of Israel has been blessed with brilliant leaders, but to which of its modern warriors or statesmen should David be compared? To Ben-Gurion, who stripped any explicit invocation of the Divine from Israel's Declaration of Independence? To Moshe Dayan, who oversaw the reconquest of Jerusalem, and then immediately handed back the Temple Mount, the locus of King David's dreams and desires, to the administration of the enemies of Israel? David's complex humanity inspires comparison to modern figures, but his faith, contrition, and repentance—which lie at the heart of his story and success—defy any such engagement.
And so, to those who seek comparisons to modern leaders from the Bible, the best rule may be: Leave King David out of it.
Three attacks in Britain highlight the West’s inability to see the threat clearly
This lack of seriousness manifests itself in several ways. It’s perhaps most obvious in the failure to reform Britain’s chaotic immigration and dysfunctional asylum systems. But it’s also abundantly clear from the grotesque underfunding and under-resourcing of domestic intelligence. In MI5, Britain has an internal security service that is simply too small to do its job effectively, even if it were not handicapped by an institutional culture that can seem willfully blind to the ideological roots of the current terrorism problem.
In 2009, Jonathan Evans, then head of MI5, confessed at a parliamentary hearing about the London bus and subway attacks of 2005 that his organization only had sufficient resources to “hit the crocodiles close to the boat.” It was an extraordinary metaphor to use, not least because of the impression of relative impotence that it conveys. MI5 had by then doubled in size since 2001, but it still boasted a staff of only 3,500. Today it’s said to employ between 4,000 and 5,000, an astonishingly, even laughably, small number given a UK population of 65 million and the scale of the security challenges Britain now faces. (To be fair, the major British police forces all have intelligence units devoted to terrorism, and the UK government’s overall counterterrorism strategy involves a great many people, including social workers and schoolteachers.)
You can also see that unseriousness at work in the abject failure to coerce Britain’s often remarkably sedentary police officers out of their cars and stations and back onto the streets. Most of Britain’s big-city police forces have adopted a reactive model of policing (consciously rejecting both the New York Compstat model and British “bobby on the beat” traditions) that cripples intelligence-gathering and frustrates good community relations.
If that weren’t bad enough, Britain’s judiciary is led by jurists who came of age in the 1960s, and who have been inclined since 2001 to treat terrorism as an ordinary criminal problem being exploited by malign officials and politicians to make assaults on individual rights and to take part in “illegal” foreign wars. It has long been almost impossible to extradite ISIS or al-Qaeda–linked Islamists from the UK. This is partly because today’s English judges believe that few if any foreign countries—apart from perhaps Sweden and Norway—are likely to give terrorist suspects a fair trial, or able to guarantee that such suspects will be spared torture and abuse.
We have a progressive metropolitan media elite whose primary, reflexive response to every terrorist attack, even before the blood on the pavement is dry, is to express worry about an imminent violent anti-Muslim “backlash” on the part of a presumptively bigoted and ignorant indigenous working class. Never mind that no such “backlash” has yet occurred, not even when the young off-duty soldier Lee Rigby was hacked to death in broad daylight on a South London street in 2013.
Another sign of this lack of seriousness is the choice by successive British governments to deal with the problem of internal terrorism with marketing and “branding.” You can see this in the catchy consultant-created acronyms and pseudo-strategies that are deployed in place of considered thought and action. After every atrocity, the prime minister calls a meeting of the COBRA unit—an acronym that merely stands for Cabinet Office Briefing Room A but sounds like a secret organization of government superheroes. The government’s counterterrorism strategy is called CONTEST, which has four “work streams”: “Prevent,” “Pursue,” “Protect,” and “Prepare.”
Perhaps the ultimate sign of unseriousness is the fact that police, politicians, and government officials have all displayed more fear of being seen as "Islamophobic" than of any carnage that actual terror attacks might cause. Few are aware that this short-term, cowardly, and trivial tendency may ultimately foment genuine, dangerous popular Islamophobia, especially if attacks continue.
Recently, three murderous Islamist terror attacks in the UK took place in less than a month. The first and third were relatively primitive improvised attacks using vehicles and/or knives. The second was a suicide bombing that probably required relatively sophisticated planning, technological know-how, and the assistance of a terrorist infrastructure. As they were the first such attacks in the UK, the vehicle and knife killings came as a particular shock to the British press, public, and political class, despite the fact that non-explosive and non-firearm terror attacks have become common in Europe and are almost routine in Israel.
The success of all three plots indicates troubling problems in British law-enforcement practice and culture, quite apart from any failings of the parts of the state in charge of intelligence, border control, and the prevention of radicalization. At the time of writing, the British media have been full of encomia to police courage and skill, not least because it took "only" eight minutes for an armed Metropolitan Police team to respond to and confront the bloody mayhem being wrought by the three Islamist terrorists (who had ploughed their rented van into people on London Bridge before jumping out to attack passersby with knives). But the difficult truth is that all three attacks would have been much harder to pull off in Manhattan, not just because all NYPD cops are armed, but also because there are always police officers visibly on patrol at the New York equivalents of London's Borough Market on a Saturday night. By contrast, London's Metropolitan Police is a largely vehicle-borne, reactive force; rather than use a physical presence to deter crime and terrorism, it chooses to monitor closed-circuit street cameras and social-media postings.
Since the attacks in London and Manchester, we have learned that several of the perpetrators were “known” to the police and security agencies that are tasked with monitoring potential terror threats. That these individuals were nevertheless able to carry out their atrocities is evidence that the monitoring regime is insufficient.
It also seems clear that there were failures on the part of those institutions that come under the leadership of the Home Office and are supposed to be in charge of the UK’s border, migration, and asylum systems. Journalists and think tanks like Policy Exchange and Migration Watch have for years pointed out that these systems are “unfit for purpose,” but successive governments have done little to take responsible control of Britain’s borders. When she was home secretary, Prime Minister Theresa May did little more than jazz up the name, logo, and uniforms of what is now called the “Border Force,” and she notably failed to put in place long-promised passport checks for people flying out of the country. This dereliction means that it is impossible for the British authorities to know who has overstayed a visa or whether individuals who have been denied asylum have actually left the country.
It seems astonishing that Youssef Zaghba, one of the three London Bridge attackers, was allowed back into the country. The Moroccan-born Italian citizen (his mother is Italian) had been arrested by Italian police in Bologna, apparently on his way to Syria via Istanbul to join ISIS. When questioned by the Italians about the ISIS decapitation videos on his mobile phone, he declared that he was “going to be a terrorist.” The Italians lacked sufficient evidence to charge him with a crime but put him under 24-hour surveillance, and when he traveled to London, they passed on information about him to MI5. Nevertheless, he was not stopped or questioned on arrival and had not become one of the 3,000 official terrorism “subjects of interest” for MI5 or the police when he carried out his attack. One reason Zaghba was not questioned on arrival may have been that he used one of the new self-service passport machines installed in UK airports in place of human staff after May’s cuts to the border force. Apparently, the machines are not yet linked to any government watch lists, thanks to the general chaos and ineptitude of the Home Office’s efforts to use information technology.
The presence in the country of Zaghba’s accomplice Rachid Redouane is also an indictment of the incompetence and disorganization of the UK’s border and migration authorities. He had been refused asylum in 2009, but as is so often the case, Britain’s Home Office never got around to removing him. Three years later, he married a British woman and was therefore able to stay in the UK.
But it is the failure of the authorities to monitor ringleader Khuram Butt that is the most baffling. He was a known and open associate of Anjem Choudary, Britain's most notorious terrorist supporter, ideologue, and recruiter (he was finally imprisoned in 2016 after 15 years of campaigning on behalf of al-Qaeda and ISIS). Butt even appeared in a 2016 TV documentary about ISIS supporters called The Jihadist Next Door. In the same year, he assaulted a moderate imam at a public festival, after calling him a "murtad" or apostate. The imam reported the incident to the police—who took six months to track him down and then let him off with a caution. It is not clear if Butt was one of the 3,000 "subjects of interest" or the additional 20,000 former subjects of interest who continue to be the subject of limited monitoring. If he was not, it raises the question of what a person has to do to get British security services to take him seriously as a terrorist threat; if he was in fact on the list of "subjects of interest," one has to wonder if being so designated is any barrier at all to carrying out terrorist atrocities. It's worth remembering, as few do here in the UK, that terrorists who carried out previous attacks were also known to the police and security services and nevertheless enjoyed sufficient liberty to go at it again.
But the most important reason for the British state's ineffectiveness in monitoring terror threats, which May addressed immediately after the London Bridge attack, is a deeply rooted institutional refusal to deal with or accept the key role played by Islamist ideology. For more than 15 years, the security services and police have chosen to take note only of people and bodies that explicitly espouse terrorist violence or have contacts with known terrorist groups. The fact that a person, school, imam, or mosque endorses the establishment of a caliphate, the stoning of adulterers, or the murder of apostates has not been considered a reason to monitor them.
This seems to be why Salman Abedi, the Manchester Arena suicide bomber, was not being watched by the authorities as a terror risk, even though he had punched a girl in the face for wearing a short skirt while at university, had attended the Muslim Brotherhood-controlled Didsbury Mosque, was the son of a Libyan man whose militia is banned in the UK, had himself fought against the Qaddafi regime in Libya, had adopted the Islamist clothing style (trousers worn above the ankle, beard but no moustache), was part of a druggy gang subculture that often feeds individuals into Islamist terrorism, and had been banned from a mosque after confronting an imam who had criticized ISIS.
It was telling that the day after the Manchester Arena suicide-bomb attack, you could hear a security official informing the audience of the BBC's flagship morning-radio news show that it's almost impossible to predict and stop such attacks because the perpetrators "don't care who they kill." They just want to kill as many people as possible, he said.
Surely, anyone with even a basic familiarity with Islamist terror attacks over the last 15 or so years and a nodding acquaintance with Islamist ideology could see that the terrorist hadn't just chosen the Ariana Grande concert in Manchester Arena because a lot of random people would be crowded into a conveniently small area. Since the Bali bombings of 2002, nightclubs, discotheques, and pop concerts attended by shameless unveiled women and girls have been routinely targeted by fundamentalist terrorists, including in Britain. Among the worrying things about the opinion offered on the radio show is that it suggests that even in the wake of the horrific Bataclan attack in Paris during a November 2015 concert, British authorities may not have been keeping an appropriately protective eye on music venues and other places where our young people hang out in their decadent Western way. Such dereliction would make perfect sense given the resistance on the part of the British security establishment to examining, confronting, or extrapolating from Islamist ideology.
The same phenomenon may explain why authorities did not follow up on community complaints about Abedi. All too often when people living in Britain’s many and diverse Muslim communities want to report suspicious behavior, they have to do so through offices and organizations set up and paid for by the authorities as part of the overall “Prevent” strategy. Although criticized by the left as “Islamophobic” and inherently stigmatizing, Prevent has often brought the government into cooperative relationships with organizations even further to the Islamic right than the Muslim Brotherhood. This means that if you are a relatively secular Libyan émigré who wants to report an Abedi and you go to your local police station, you are likely to find yourself speaking to a bearded Islamist.
From its outset in 2003, the Prevent strategy was flawed. Its practitioners, in their zeal to find and fund key allies in "the Muslim community" (as if there were just one), routinely made alliances with self-appointed community leaders who represented the most extreme and intolerant tendencies in British Islam. Both the Home Office and MI5 seemed to believe that only radical Muslims were "authentic" and would therefore be able to influence young potential terrorists. Moderate, modern, liberal Muslims who are arguably more representative of British Islam as a whole (not to mention sundry Shiites, Sufis, Ahmadis, and Ismailis) have too often found it hard to get a hearing.
Sunni organizations that openly supported suicide-bomb attacks in Israel and India and that justified attacks on British troops in Iraq and Afghanistan nevertheless received government subsidies as part of Prevent. The hope was that in return, they would alert the authorities if they knew of individuals planning attacks in the UK itself.
It was a gamble reminiscent of British colonial practice in India’s northwest frontier and elsewhere. Not only were there financial inducements in return for grudging cooperation; the British state offered other, symbolically powerful concessions. These included turning a blind eye to certain crimes and antisocial practices such as female genital mutilation (there have been no successful prosecutions relating to the practice, though thousands of cases are reported every year), forced marriage, child marriage, polygamy, the mass removal of girls from school soon after they reach puberty, and the epidemic of racially and religiously motivated “grooming” rapes in cities like Rotherham. (At the same time, foreign jihadists—including men wanted for crimes in Algeria and France—were allowed to remain in the UK as long as their plots did not include British targets.)
This approach, simultaneously cynical and naive, was never as successful as its proponents hoped. Again and again, Muslim chaplains who were approved to work in prisons and other institutions have turned out to be Islamist extremists whose words have inspired inmates to join terrorist organizations.
Much to his credit, former Prime Minister David Cameron fought hard to change this approach, even though it meant difficult confrontations with his home secretary (Theresa May), as well as police and the intelligence agencies. However, Cameron’s efforts had little effect on the permanent personnel carrying out the Prevent strategy, and cooperation with Islamist but currently nonviolent organizations remains the default setting within the institutions on which the United Kingdom depends for security.
The failure to understand the role of ideology is one of imagination as well as education. Very few of those who make government policy or write about home-grown terrorism seem able to escape the limitations of what used to be called “bourgeois” experience. They assume that anyone willing to become an Islamist terrorist must perforce be materially deprived, or traumatized by the experience of prejudice, or provoked to murderous fury by oppression abroad. They have no sense of the emotional and psychic benefits of joining a secret terror outfit: the excitement and glamor of becoming a kind of Islamic James Bond, bravely defying the forces of an entire modern state. They don’t get how satisfying or empowering the vengeful misogyny of ISIS-style fundamentalism might seem for geeky, frustrated young men. Nor can they appreciate the appeal to the adolescent mind of apocalyptic fantasies of power and sacrifice (mainstream British society does not have much room for warrior dreams, given that its tone is set by liberal pacifists). Finally, they have no sense of why the discipline and self-discipline of fundamentalist Islam might appeal so strongly to incarcerated lumpen youth who have never experienced boundaries or real belonging. Their understanding is an understanding only of themselves, not of the people who want to kill them.
Review of 'White Working Class' by Joan C. Williams
Williams is a prominent feminist legal scholar with degrees from Yale, MIT, and Harvard. Unbending Gender, her best-known book, is the sort of tract you’d expect to find at an intersectionality conference or a Portlandia bookstore. This is why her insightful, empathic book comes as such a surprise.
Books and essays on the topic have accumulated into a highly visible genre since Donald Trump came on the American political scene; J.D. Vance's Hillbilly Elegy planted itself at the top of bestseller lists almost a year ago and still isn't budging. As with Vance, Williams's interest in the topic is personal. She fell "madly in love with" and eventually married a Harvard Law School graduate who had grown up in an Italian neighborhood in pre-gentrification Brooklyn. Williams, on the other hand, is a "silver-spoon girl." Her father's family was moneyed, and her maternal grandfather was a prominent Reform rabbi.
The author's affection for her "class-migrant" spouse and respect for his family's hardships—"My father-in-law grew up on blood soup," she announces in her opening sentence—add considerable warmth to what is at bottom a political pamphlet. Williams believes that elite condescension and "cluelessness" played a big role in Trump's unexpected and dreaded victory. Enlightening her fellow elites is essential to the task of returning Trump voters to the progressive fold where, she is sure, they rightfully belong.
Liberals were not always so dense about the working class, Williams observes. WPA murals and movies like On the Waterfront showed genuine fellow feeling for the proletariat. In the 1970s, however, the liberal mood changed. Educated boomers shifted their attention to “issues of peace, equal rights, and environmentalism.” Instead of feeling the pain of Arthur Miller and John Steinbeck characters, they began sneering at the less enlightened. These days, she notes, elite sympathies are limited to the poor, people of color (POC), and the LGBTQ population. Despite clear evidence of suffering—stagnant wages, disappearing manufacturing jobs, declining health and well-being—the working class gets only fly-over snobbery at best and, more often, outright loathing.
Williams divides her chapters into a series of explainers answering questions she has heard from her clueless friends and colleagues: "Why Does the Working Class Resent the Poor?" "Why Does the Working Class Resent Professionals but Admire the Rich?" "Why Doesn't the Working Class Just Move to Where the Jobs Are?" "Is the Working Class Just Racist?" She weaves her answers into a compelling picture of a way of life and worldview foreign to her target readers. Working-class Americans have had to struggle for whatever stability and comfort they have, she explains. Clocking in for midnight shifts year after year, enduring capricious bosses, plant closures, and layoffs, they're reliant on tag-team parenting and stressed-out relatives for child care. The campus go-to word "privileged" seems exactly wrong.
Proud of their own self-sufficiency and success, however modest, they don't begrudge the self-made rich. It's snooty professionals and the dysfunctional poor who get their goat. From their vantage point, subsidizing day care for a welfare mother when they themselves struggle to manage care on their own dime mocks both their hard work and their beliefs. And since, unlike most professors, they shop in the same stores as the dependent poor, they've seen that some of them game the system. Of course that stings.
White Working Class is especially good at evoking the alternate economic and mental universe experienced by Professional and Managerial Elites, or “PMEs.” PMEs see their non-judgment of the poor, especially those who are “POC,” as a mark of their mature understanding that we live in an unjust, racist system whose victims require compassion regardless of whether they have committed any crime. At any rate, their passions lie elsewhere. They define themselves through their jobs and professional achievements, hence their obsession with glass ceilings.
Williams tells the story of her husband’s faux pas at a high-school reunion. Forgetting his roots for a moment, the Ivy League–educated lawyer asked one of his Brooklyn classmates a question that is the go-to opener in elite social settings: “What do you do?” Angered by what must have seemed like deliberate humiliation by this prodigal son, the man hissed: “I sell toilets.”
Instead of stability and backyard barbecues with family and long-time neighbors and maybe the occasional Olive Garden celebration, PMEs are enamored of novelty: new foods, new restaurants, new friends, new experiences. The working class chooses to spend its leisure in comfortable familiarity; for the elite, social life is a lot like networking. Members of the professional class may view themselves as sophisticated or cosmopolitan, but, Williams shows, to the blue-collar worker their glad-handing is closer to phony social climbing and their abstract, knowledge-economy jobs more like self-important pencil-pushing.
White Working Class has a number of proposals for creating the progressive future Williams would like to see. She wants to get rid of college-for-all dogma and improve training for middle-skill jobs. She envisions a working-class coalition of all races and ethnicities bolstered by civics education with a “distinctly celebratory view of American institutions.” In a saner political environment, some of this would make sense; indeed, she echoes some of Marco Rubio’s 2016 campaign themes. It’s little wonder White Working Class has already gotten the stink eye from liberal reviewers for its purported sympathies for racists.
Alas, impressive as Williams’s insights are, they do not always allow her to transcend her own class loyalties. Unsurprisingly, her own PME biases mostly come to light in her chapters on race and gender. She reduces immigration concerns to “fear of brown people,” even as she notes elsewhere that a quarter of Latinos also favor a wall at the southern border. This contrasts startlingly with her succinct observation that “if you don’t want to drive working-class whites to be attracted to the likes of Limbaugh, stop insulting them.” In one particularly obtuse moment, she asserts: “Because I study social inequality, I know that even Malia and Sasha Obama will be disadvantaged by race, advantaged as they are by class.” She relies on dubious gender theories to explain why the majority of white women voted for Trump rather than for his unfairly maligned opponent. That Hillary Clinton epitomized every elite quality Williams has just spent more than a hundred pages explicating escapes her notice. Williams’s own reflexive retreat into identity politics is itself emblematic of our toxic divisions, but it does not invalidate the power of this astute book.
When music could not transcend evil
The story of European classical music under the Third Reich is one of the most squalid chapters in the annals of Western culture, a chronicle of collective complaisance that all but beggars belief. Without exception, all of the well-known musicians who left Germany and Austria in protest when Hitler came to power in 1933 were either Jewish or, like the violinist Adolf Busch, Rudolf Serkin's father-in-law, had close family ties to Jews. Moreover, most of the small number of non-Jewish musicians who emigrated later on, such as Paul Hindemith and Lotte Lehmann, are now known to have done so not out of principle but because they were unable to make satisfactory accommodations with the Nazis. Everyone else—including Karl Böhm, Wilhelm Furtwängler, Walter Gieseking, Herbert von Karajan, and Richard Strauss—stayed behind and served the Reich.
The Berlin and Vienna Philharmonics, then as now Europe’s two greatest orchestras, were just as willing to do business with Hitler and his henchmen, firing their Jewish members and ceasing to perform the music of Jewish composers. Even after the war, the Vienna Philharmonic was notorious for being the most anti-Semitic orchestra in Europe, and it was well known in the music business (though never publicly discussed) that Helmut Wobisch, the orchestra’s principal trumpeter and its executive director from 1953 to 1968, had been both a member of the SS and a Gestapo spy.
The management of the Berlin Philharmonic made no attempt to cover up the orchestra’s close relationship with the Third Reich, no doubt because the Nazi ties of Karajan, who was its music director from 1956 until shortly before his death in 1989, were a matter of public record. Yet it was not until 2007 that a full-length study of its wartime activities, Misha Aster’s The Reich’s Orchestra: The Berlin Philharmonic 1933–1945, was finally published. As for the Vienna Philharmonic, its managers long sought to quash all discussion of the orchestra’s Nazi past, steadfastly refusing to open its institutional archives to scholars until 2008, when Fritz Trümpi, an Austrian scholar, was given access to its records. Five years later, the Viennese, belatedly following the precedent of the Berlin Philharmonic, added a lengthy section to their website called “The Vienna Philharmonic Under National Socialism (1938–1945),” in which the damning findings of Trümpi and two other independent scholars were made available to the public.
Now Trümpi has published The Political Orchestra: The Vienna and Berlin Philharmonics During the Third Reich, in which he tells how they came to terms with Nazism, supplying pre- and postwar historical context for their transgressions.1 Written in a stiff mixture of academic jargon and translatorese, The Political Orchestra is ungratifying to read. Even so, the tale that it tells is both compelling and disturbing, especially to anyone who clings to the belief that high art is ennobling to the spirit.
Unlike the Vienna Philharmonic, which has always doubled as the pit orchestra for the Vienna State Opera, the Berlin Philharmonic started life in 1882 as a fully independent, self-governing entity. Initially unsubsidized by the state, it kept itself afloat by playing a grueling schedule of performances, including "popular" non-subscription concerts for which modest ticket prices were levied. In addition, the orchestra made records and toured internationally at a time when neither was common.
These activities made it possible for the Berlin Philharmonic to develop into an internationally renowned ensemble whose fabled collective virtuosity was widely seen as a symbol of German musical distinction. Furtwängler, the orchestra’s principal conductor, declared in 1932 that the German music in which it specialized was “one of the very few things that actually contribute to elevating [German] prestige.” Hence, he explained, the need for state subsidy, which he saw as “a matter of [national] prestige, that is, to some extent a requirement of national prudence.” By then, though, the orchestra was already heavily subsidized by the city of Berlin, thus paving the way for its takeover by the Nazis.
The Vienna Philharmonic, by contrast, had always been subsidized. Founded in 1842 when the orchestra of what was then the Vienna Court Opera decided to give symphonic concerts on its own, it performed the Austro-German classics for an elite cadre of longtime subscribers. By restricting membership to local players and their pupils, the orchestra cultivated what Furtwängler, who spent as much time conducting in Vienna as in Berlin, described as a “homogeneous and distinct tone quality.” At once dark and sweet, it was as instantly identifiable—and as characteristically Viennese—as the strong, spicy bouquet of a Gewürztraminer wine.
Unlike the Berlin Philharmonic, which played for whoever would pay the tab and programmed new music as a matter of policy, the Vienna Philharmonic chose not to diversify either its haute-bourgeois audience or its conservative repertoire. Instead, it played Beethoven, Brahms, Haydn, Mozart, and Schubert (and, later, Bruckner and Richard Strauss) in Vienna for the Viennese. Starting in the ’20s, the orchestra’s recordings consolidated its reputation as one of the world’s foremost instrumental ensembles, but its internal culture remained proudly insular.
What the two orchestras had in common was a nationalistic ethos, a belief in the superiority of Austro-German musical culture that approached triumphalism. One of the darkest manifestations of this ethos was their shared reluctance to hire Jews. The Berlin Philharmonic employed only four Jewish players in 1933, while the Vienna Philharmonic contained only 11 Jews at the time of the Anschluss, none of whom was hired after 1920. To be sure, such popular Jewish conductors as Otto Klemperer and Bruno Walter continued to work in Vienna for as long as they could. Two months before the Anschluss, Walter led and recorded a performance of the Ninth Symphony of Gustav Mahler, his musical mentor and fellow Jew, who from 1897 to 1907 had been the director of the Vienna Court Opera and one of the Philharmonic’s most admired conductors. But many members of both orchestras were open supporters of fascism, and not a few were anti-Semites who ardently backed Hitler. By 1942, 62 of the 123 active members of the Vienna Philharmonic were Nazi party members.
The admiration that Austro-German classical musicians had for Hitler is not entirely surprising since he was a well-informed music lover who declared in 1938 that “Germany has become the guardian of European culture and civilization.” He made the support of German art, music very much included, a key part of his political program. Accordingly, the Berlin Philharmonic was placed under the direct supervision of Joseph Goebbels, who ensured the cooperation of its members by repeatedly raising their salaries, exempting them from military service, and guaranteeing their old-age pensions. But there had never been any serious question of protest, any more than there would be among the members of the Vienna Philharmonic when the Nazis gobbled up Austria. Save for the Jews and one or two non-Jewish players who were fired for reasons of internal politics, the musicians went along unhesitatingly with Hitler’s desires.
With what did they go along? Above all, they agreed to the scrubbing of Jewish music from their programs and the dismissal of their Jewish colleagues. Some Jewish players managed to escape with their lives, but seven of the Vienna Philharmonic’s 11 Jews were either murdered by the Nazis or died as a direct result of official persecution. In addition, both orchestras performed regularly at official government functions and made tours and other public appearances for propaganda purposes, and both were treated as gems in the diadem of Nazi culture.
As for Furtwängler, the most prominent of the Austro-German orchestral conductors who served the Reich, his relationship to Nazism continues to be debated to this day. He had initially resisted the firing of the Berlin Philharmonic's Jewish members and protected them for as long as he could. But he was also a committed (if woolly-minded) nationalist who believed that German music had "a different meaning for us Germans than for other nations" and notoriously declared in an open letter to Goebbels that "we all welcome with great joy and gratitude . . . the restoration of our national honor." Thereafter he cooperated with the Nazis, by all accounts uncomfortably but—it must be said—willingly. A monster of egotism, he saw himself as the greatest living exponent of German music and believed it to be his duty to stay behind and serve a cause higher than what he took to be mere party politics. "Human beings are free wherever Wagner and Beethoven are played, and if they are not free at first, they are freed while listening to these works," he naively assured a horrified Arturo Toscanini in 1937. "Music transports them to regions where the Gestapo can do them no harm."
Once the war was over, the U.S. occupation forces decided to enlist the Berlin Philharmonic in the service of a democratic, anti-Soviet Germany. Furtwängler and Herbert von Karajan, who succeeded him as principal conductor, were officially "de-Nazified" and their orchestra allowed to function largely undisturbed, though six Nazi Party members were fired. The Vienna Philharmonic received similarly privileged treatment.
Needless to say, there was more to this decision than Cold War politics. No one questioned the unique artistic stature of either orchestra. Moreover, the Vienna Philharmonic, precisely because of its insularity, was now seen as a living museum piece, a priceless repository of 19th-century musical tradition. Still, many musicians and listeners, Jews above all, looked askance at both orchestras for years to come, believing them to be tainted by Nazism.
Indeed they were, so much so that they treated many of their surviving Jewish ex-members in a way that can only be described as vicious. In the most blatant individual case, the violinist Szymon Goldberg, who had served as the Berlin Philharmonic’s concertmaster under Furtwängler, was not allowed to reassume his post in 1945 and was subsequently denied a pension. As for the Vienna Philharmonic, the fact that it made Helmut Wobisch its executive director says everything about its deep-seated unwillingness to face up to its collective sins.
Be that as it may, scarcely any prominent musicians chose to boycott either orchestra. Leonard Bernstein went so far as to affect a flippant attitude toward the morally equivocal conduct of the Austro-German artists whom he encountered in Europe after the war. Upon meeting Herbert von Karajan in 1954, he actually told his wife Felicia that he had become “real good friends with von Karajan, whom you would (and will) adore. My first Nazi.”
At the same time, though, Bernstein understood what he was choosing to overlook. When he conducted the Vienna Philharmonic for the first time in 1966, he wrote to his parents:
I am enjoying Vienna enormously—as much as a Jew can. There are so many sad memories here; one deals with so many ex-Nazis (and maybe still Nazis); and you never know if the public that is screaming bravo for you might contain someone who 25 years ago might have shot me dead. But it’s better to forgive, and if possible, forget. The city is so beautiful, and so full of tradition. Everyone here lives for music, especially opera, and I seem to be the new hero.
Did Bernstein sell his soul for the opportunity to work with so justly renowned an orchestra—and did he get his price by insisting that its members perform the symphonies of Mahler, with which he was by then closely identified? It is a fair question, one that does not lend itself to easy answers.
Even more revealing is the case of Bruno Walter, who never forgave Furtwängler for staying behind in Germany, informing him in an angry letter that “your art was used as a conspicuously effective means of propaganda for the regime of the Devil.” Yet Walter’s righteous anger did not stop him from conducting in Vienna after the war. Born in Berlin, he had come to identify with the Philharmonic so closely that it was impossible for him to seriously consider quitting its podium permanently. “Spiritually, I was a Viennese,” he wrote in Theme and Variations, his 1946 autobiography. In 1952, he made a second recording with the Vienna Philharmonic of Mahler’s Das Lied von der Erde, whose premiere he had conducted in 1911 and which he had recorded in Vienna 15 years earlier. One wonders what Walter, who had converted to Christianity but had been driven out of both his native lands for the crime of being Jewish, made of the text of the last movement: “My friend, / On this earth, fortune has not been kind to me! / Where do I go?”
As for the two great orchestras of the Third Reich, both have finally acknowledged their guilt and been forgiven, at least by those who know little of their past. It would occur to no one to decline on principle to perform with either group today. Such a gesture would surely be condemned as morally ostentatious, an exercise in what we now call virtue-signaling. Yet it is impossible to forget what Samuel Lipman wrote in 1993 in Commentary apropos the wartime conduct of Furtwängler: “The ultimate triumph of totalitarianism, I suppose it can be said, is that under its sway only a martyred death can be truly moral.” For the only martyrs of the Berlin and Vienna Philharmonics were their Jews. The orchestras themselves live on, tainted and beloved.
He knows what to reveal and what to conceal, understands the importance of keeping the semblance of distance between oneself and the story of the day, and comprehends the ins and outs of anonymous sourcing. Within days of his being fired by President Trump on May 9, for example, little green men and women, known only as his “associates,” began appearing in the pages of the New York Times and Washington Post to dispute key points of the president’s account of his dismissal and to promote Comey’s theory of the case.
“In a Private Dinner, Trump Demanded Loyalty,” the New York Times reported on May 11. “Comey Demurred.” The story was a straightforward narrative of events from Comey’s perspective, capped with an obligatory denial from the White House. The next day, the Washington Post reported, “Comey associates dispute Trump’s account of conversations.” The Post did not identify Comey’s associates, other than saying that they were “people who have worked with him.”
Maybe they were the same associates who had gabbed to the Times. Or maybe they were different ones. Who can tell? Regardless, the story these particular associates gave to the Post was readable and gripping. Comey, the Post reported, “was wary of private meetings and discussions with the president and did not offer the assurance, as Trump has claimed, that Trump was not under investigation as part of the probe into Russian interference in last year’s election.”
On May 16, Michael S. Schmidt of the Times published his scoop, “Comey Memo Says Trump Asked Him to End Flynn Investigation.” Schmidt didn’t see the memo for himself. Parts of it were read to him by—you guessed it—“one of Mr. Comey’s associates.” The following day, Robert Mueller was appointed special counsel to oversee the Russia investigation. On May 18, the Times, citing “two people briefed” on a call between Comey and the president, reported, “Comey, Unsettled by Trump, Is Said to Have Wanted Him Kept at a Distance.” And by the end of that week, Comey had agreed to testify before the Senate Intelligence Committee.
As his testimony approached, Comey’s people became more aggressive in their criticisms of the president. “Trump Should Be Scared, Comey Friend Says,” read the headline of a CNN interview with Brookings Institution fellow Benjamin Wittes. This “Comey friend” said he was “very shocked” when he learned that President Trump had asked Comey for loyalty. “I have no doubt that he regarded the group of people around the president as dishonorable,” Wittes said.
Comey, Wittes added, was so uncomfortable at the White House reception in January honoring law enforcement—the one where Comey lumbered across the room and Trump whispered something in his ear—that, as CNN paraphrased it, he “stood in a position so that his blue blazer would blend in with the room’s blue drapes in an effort for Trump to not notice him.” The integrity, the courage—can you feel it?
On June 6, the day before Comey’s prepared testimony was released, more “associates” told ABC that the director would “not corroborate Trump’s claim that on three separate occasions Comey told the president he was not under investigation.” And a “source with knowledge of Comey’s testimony” told CNN the same thing. In addition, ABC reported that, according to “a source familiar with Comey’s thinking,” the former director would say that Trump’s actions stopped short of obstruction of justice.
Maybe those sources weren’t as “familiar with Comey’s thinking” as they thought or hoped? To maximize the press coverage he already dominated, Comey had authorized the Senate Intelligence Committee to release his testimony ahead of his personal interview. That testimony told a different story than what had been reported by CNN and ABC (and by the Post on May 12). Comey had in fact told Trump the president was not under investigation—on January 6, January 27, and March 30. Moreover, the word “obstruction” did not appear at all in his written text. The senators asked Comey if he felt Trump obstructed justice. He declined to answer either way.
My guess is that Comey’s associates lacked Comey’s scalpel-like, almost Jesuitical ability to make distinctions, and therefore misunderstood what he was telling them to say to the press. Because it’s obvious Comey was the one behind the stories of Trump’s dishonesty and bad behavior. He admitted as much in front of the cameras in a remarkable exchange with Senator Susan Collins of Maine.
Comey said that, after Trump tweeted on May 12 that he’d better hope there aren’t “tapes” of their conversations, “I asked a friend of mine to share the content of the memo with a reporter. Didn’t do it myself, for a variety of reasons. But I asked him to, because I thought that might prompt the appointment of a special counsel. And so I asked a close friend of mine to do it.”
Collins asked whether that friend had been Wittes, known to cable news junkies as Comey’s bestie. Comey said no. The source for the New York Times article was “a good friend of mine who’s a professor at Columbia Law School,” Daniel Richman.
Every time I watch or read that exchange, I am amazed. Here is the former director of the FBI just flat-out admitting that, for months, he wrote down every interaction he had with the president of the United States because he wanted a written record in case the president ever fired or lied about him. And when the president did fire and lie about him, that director set in motion a series of public disclosures with the intent of not only embarrassing the president, but also forcing the appointment of a special counsel who might end up investigating the president for who knows what. And none of this would have happened if the president had not fired Comey or tweeted about him. He told the Senate that if Trump hadn’t dismissed him, he most likely would still be on the job.
Rarely, in my view, are high officials so transparent in describing how Washington works. Comey revealed to the world that he was keeping a file on his boss, that he used go-betweens to get his story into the press, that "investigative journalism" is often just powerful people handing documents to reporters to further their careers or agendas or even to get revenge. And if you maintain some distance from the fallout and stick to the absolute letter of the law, you will come out on top, so long as you have a small army of nightingales singing to reporters on your behalf.
“It’s the end of the Comey era,” A.B. Stoddard said on Special Report with Bret Baier the other day. On the contrary: I have a feeling that, as the Russia investigation proceeds, we will be hearing much more from Comey. And from his “associates.” And his “friends.” And persons “familiar with his thinking.”
In April, COMMENTARY asked a wide variety of writers, thinkers, and broadcasters to respond to this question: Is free speech under threat in the United States? We received twenty-seven responses. We publish them here in alphabetical order.
Floyd Abrams:
Free expression threatened? By Donald Trump? I guess you could say so.
When a president engages in daily denigration of the press, when he characterizes it as the enemy of the people, when he repeatedly says that the libel laws should be “loosened” so he can personally commence more litigation, when he says that journalists shouldn’t be allowed to use confidential sources, it is difficult even to suggest that he has not threatened free speech. And when he says to the head of the FBI (as former FBI director James Comey has said that he did) that Comey should consider “putting reporters in jail for publishing classified information,” it is difficult not to take those threats seriously.
The harder question, though, is this: How real are the threats? Or, as Michael Gerson put it in the Washington Post: Will Trump “go beyond mere Twitter abuse and move against institutions that limit his power?” Some of the president’s threats against the institution of the press, wittingly or not, have been simply preposterous. Surely someone has told him by now that neither he nor Congress can “loosen” libel laws; while each state has its own libel law, there is no federal libel law and thus nothing for him to loosen. What he obviously takes issue with is the impact that the Supreme Court’s 1964 First Amendment opinion in New York Times v. Sullivan has had on state libel laws. The case determined that public officials who sue for libel may not prevail unless they demonstrate that the statements made about them were false and were made with actual knowledge or suspicion of that falsity. So his objection to the rules governing libel law is to nothing less than the application of the First Amendment itself.
In other areas, however, the Trump administration has far more power to imperil free speech. We live under an Espionage Act, adopted a century ago, which is both broad in its language and uncommonly vague in its meaning. As such, it remains a half-open door through which an administration that is hostile to free speech might walk. Such an administration could initiate criminal proceedings against journalists who write about defense- or intelligence-related topics on the basis that classified information was leaked to them by present or former government employees. No such action has ever been commenced against a journalist. Press lawyers and civil-liberties advocates have strong arguments that the law may not be read so broadly and still be consistent with the First Amendment. But the scope of the Espionage Act and the impact of the First Amendment upon its interpretation remain unknown.
A related area in which the attitude of an administration toward the press may affect the latter’s ability to function as a check on government relates to the ability of journalists to protect the identity of their confidential sources. The Obama administration prosecuted more Espionage Act cases against sources of information to journalists than all prior administrations combined. After a good deal of deserved press criticism, it agreed to expand the internal guidelines of the Department of Justice designed to limit the circumstances under which such source revelation is demanded. But the guidelines are none too protective and are, after all, simply guidelines. A new administration is free to change or limit them or, in fact, abandon them altogether. In this area, as in so many others, it is too early to judge the ultimate treatment of free expression by the Trump administration. But the threats are real, and there is good reason to be wary.
Floyd Abrams is the author of The Soul of the First Amendment (Yale University Press, 2017).
Ayaan Hirsi Ali:
Freedom of speech is being threatened in the United States by a nascent culture of hostility to different points of view. As political divisions in America have deepened, a conformist mentality of "right thinking" has spread across the country. Increasingly, American universities, where no intellectual doctrine ought to escape critical scrutiny, are some of the most restrictive domains when it comes to asking open-ended questions on subjects such as Islam.
Legally, speech in the United States is protected to a degree unmatched in almost any industrialized country. The U.S. has avoided unpredictable Canadian-style restrictions on speech, for example. I remain optimistic that as long as we have the First Amendment in the U.S., any attempt at formal legal censorship will be vigorously challenged.
Culturally, however, matters are very different in America. The regressive left is at the forefront of the threat to free speech on any issue that is important to progressives. The current pressure coming from those who call themselves "social-justice warriors" is unlikely to lead to successful legislation to curb the First Amendment. Instead, censorship is spreading in the cultural realm, particularly at institutions of higher learning.
The way activists of the regressive left achieve silence or censorship is by creating a taboo, and one of the most pernicious taboos in operation today is the word "Islamophobia." Islamists are similarly motivated to rule any critical scrutiny of Islamic doctrine out of order. There is now a university center (funded by Saudi money) in the U.S. dedicated to monitoring and denouncing incidents of "Islamophobia."
The term “Islamophobia” is used against critics of political Islam, but also against progressive reformers within Islam. The term implies an irrational fear that is tainted by hatred, and it has had a chilling effect on free speech. In fact, “Islamophobia” is a poorly defined term. Islam is not a race, and it is very often perfectly rational to fear some expressions of Islam. No set of ideas should be beyond critical scrutiny.
To push back in this cultural realm—in our universities, in public discourse—those favoring free speech should focus more on the message of dawa, the set of ideas that the Islamists want to promote. If the aims of dawa are sufficiently exposed, ordinary Americans and Muslim Americans will reject them. The Islamist message is a message of divisiveness, misogyny, and hatred. It's anachronistic and wants people to live by tribal norms dating from the seventh century. The best antidote to Islamic extremism is the revelation of what its primary objective is: a society governed by Sharia. This is the opposite of censorship: It is documenting reality. What is life like in Saudi Arabia, Iran, the northern Nigerian states? What is the true nature of Sharia law?
Islamists want to hide the true meaning of Sharia, Jihad, and the implications for women, gays, religious minorities, and infidels under the veil of “Islamophobia.” Islamists use “Islamophobia” to obfuscate their vision and imply that any scrutiny of political Islam is hatred and bigotry. The antidote to this is more exposure and more speech.
As pressure on freedom of speech increases from the regressive left, we must reject the notions that only Muslims can speak about Islam, and that any critical examination of Islamic doctrines is inherently “racist.”
Instead of contorting Western intellectual traditions so as not to offend our Muslim fellow citizens, we need to defend the Muslim dissidents who are risking their lives to promote the human rights we take for granted: equality for women, tolerance of all religions and orientations, our hard-won freedoms of speech and thought.
It is by nurturing and protecting such speech that progressive reforms can emerge within Islam. By accepting the increasingly narrow confines of acceptable discourse on issues such as Islam, we do dissidents and progressive reformers within Islam a grave disservice. For truly progressive reforms within Islam to be possible, full freedom of speech will be required.
Ayaan Hirsi Ali is a research fellow at the Hoover Institution, Stanford University, and the founder of the AHA Foundation.
Lee C. Bollinger
I know it is too much to expect that political discourse mimic the measured, self-questioning, rational, footnoting standards of the academy, but there is a difference between robust political debate and political debate infected with fear or panic. The latter introduces a state of mind that is visceral and irrational. In the realm of fear, we move beyond the reach of reason and a sense of proportionality. When we fear, we lose the capacity to listen and can become insensitive and mean.
Our Constitution is well aware of this fact about the human mind and of its negative political consequences. In the First Amendment jurisprudence established over the past century, we find many expressions of the problematic state of mind that is produced by fear. Among the most famous and potent is that of Justice Brandeis in Whitney v. California in 1927, one of the many cases involving aggravated fears of subversive threats from abroad. “It is the function of [free] speech,” he said, “to free men from the bondage of irrational fears.” “Men feared witches,” Brandeis continued, “and burned women.”
Today, our “witches” are terrorists, and Brandeis’s metaphorical “women” include the refugees (mostly children) and displaced persons, immigrants, and foreigners whose lives have been thrown into suspension and doubt by policies of exclusion.
The same fears of the foreign that take hold of a population inevitably infect our internal interactions and institutions, yielding suppression of unpopular and dissenting voices, victimization of vulnerable groups, attacks on the media, and the rise of demagoguery, with its disdain for facts, reason, expertise, and tolerance.
All of this imposes a very special obligation on those of us within universities. Not only must we make the case in every venue for the values that form the core of who we are and what we do, but we must also live up to our own principles of free inquiry and fearless engagement with all ideas. This is why the recent incidents on a handful of college campuses in which speakers were disrupted and effectively censored are so alarming. Such acts not only betray a basic principle but also inflame a rising prejudice against the academic community, and they feed efforts to delegitimize our work, at the very moment when it’s most needed.
I do not for a second support the view that this generation has an unhealthy aversion to engaging differences of opinion. That is a modern trope of polarization, as is the portrayal of universities as hypocritical about academic freedom and political correctness. But now, in this environment especially, universities must be at the forefront of defending the rights of all students and faculty to listen to controversial voices, to engage disagreeable viewpoints, and to make every effort to demonstrate our commitment to the sort of fearless and spirited debate that we are simultaneously asking of the larger society. Anyone with a voice can shout over a speaker; but being able to listen to and then effectively rebut those with whom we disagree—particularly those who themselves peddle intolerance—is one of the greatest skills our education can bestow. And it is something our democracy desperately needs more of. That is why, I say to you now, if speakers who are being denied access to other campuses come here, I will personally volunteer to introduce them, and listen to them, however much I may disagree with them. But I will also never hesitate to make clear why I disagree with them.
Lee C. Bollinger is the 19th president of Columbia University and the author of Uninhibited, Robust, and Wide-Open: A Free Press for a New Century. This piece has been excerpted from President Bollinger’s May 17 commencement address.
Richard A. Epstein
Today, the greatest threat to the constitutional protection of freedom of speech comes from campus rabble-rousers who invoke this very protection. In their book, the speech of people like Charles Murray and Heather Mac Donald constitutes a form of violence, bordering on genocide, that receives no First Amendment protection. Enlightened protestors are both bound and entitled to shout them down, by force or other disruptive actions, if their universities are so foolish as to extend them an invitation to speak. Any indignant minority may take the law into its own hands to eradicate the intellectual cancer before it spreads on its own campus.
By such tortured logic, a new generation of vigilantes distorts First Amendment doctrine: Speech becomes violence, and violence becomes a heroic act of self-defense. The standard First Amendment interpretation emphatically rejects that view. Of course, the First Amendment doesn’t let you say whatever you want, whenever and wherever you want. Your freedom of speech is subject to the same limitations as your freedom of action. So you have no constitutional license to assault other people, to lie to them, or to form cartels to bilk them in the marketplace. But folks such as Murray, Mac Donald, and even Yiannopoulos do not come close to crossing into that forbidden territory. They are not using, for example, “fighting words,” rightly limited to words or actions calculated to provoke immediate aggression against a known target. Fighting words are worlds apart from speech that provokes a negative reaction in those who find your speech offensive solely because of the content of its message.
This distinction is central to the First Amendment. Fighting words have to be blocked by well-tailored criminal and civil sanctions lest some people gain license to intimidate others from speaking or peaceably assembling. The remedy for mere offense is to speak one’s mind in response. But offense never gives anyone the right to block the speech of others, lest everyone be able to unilaterally increase his sphere of action by getting really angry about the beliefs of others. No one has the right to silence others by working himself into a fit of rage.
Obviously, it is intolerable to let mutual animosity generate factional warfare, whereby everyone can use force to silence rivals. To avoid this war of all against all, each side claims that only its actions are privileged. These selective claims quickly degenerate into a form of viewpoint discrimination, which undermines one of the central protections that traditional First Amendment law erects: a wall against each and every group out to destroy the level playing field on which robust political debate rests. Every group should be at risk for having its message fall flat. The new campus radicals want to upend that understanding by shutting down their adversaries if their universities do not. Their aggression must be met, if necessary, by counterforce. Silence in the face of aggression is not an acceptable alternative.
Richard A. Epstein is the Laurence A. Tisch Professor of Law at the New York University School of Law.
David French
We’re living in the midst of a troubling paradox. At the exact same time that First Amendment jurisprudence has arguably never been stronger and more protective of free expression, millions of Americans feel they simply can’t speak freely. Indeed, talk to Americans living and working in the deep-blue confines of the academy, Hollywood, and the tech sector, and you’ll get a sense of palpable fear. They’ll explain that they can’t say what they think and keep their jobs, their friends, and sometimes even their families.
The government isn’t cracking down or censoring; instead, Americans are using free speech to destroy free speech. For example, a social-media shaming campaign is an act of free speech. So is an economic boycott. So is turning one’s back on a public speaker. So is a private corporation firing a dissenting employee for purely political reasons. Each of these actions is largely protected from government interference, and each one represents an expression of the speaker’s ideas and values.
The problem, however, is obvious. The goal of each of these kinds of actions isn’t to persuade; it’s to intimidate. The goal isn’t to foster dialogue but to coerce conformity. The result is a marketplace of ideas that has been emptied of all but the approved ideological vendors—at least in those communities that are dominated by online thugs and corporate bullies. Indeed, this mindset has become so prevalent that in Portland, Berkeley, Middlebury, and elsewhere, the bullies and thugs have crossed the line from protected—albeit abusive—speech into outright shout-downs and mob violence.
But there’s something else going on, something that’s insidious in its own way. While politically correct shaming still has great power in deep-blue America, its effect in the rest of the country is to trigger a furious backlash, one characterized less by a desire for dialogue and discourse than by its own rage and scorn. So we’re moving toward two Americas—one that ruthlessly (and occasionally illegally) suppresses dissenting speech and the other that is dangerously close to believing that the opposite of political correctness isn’t a fearless expression of truth but rather the fearless expression of ideas best calculated to enrage your opponents.
The result is a partisan feedback loop where right-wing rage spurs left-wing censorship, which spurs even more right-wing rage. For one side, a true free-speech culture is a threat to feelings, sensitivities, and social justice. The other side waves high the banner of “free speech” to sometimes elevate the worst voices to the highest platforms—not so much to protect the First Amendment as to infuriate the hated “snowflakes” and trigger the most hysterical overreactions.
The culturally sustainable argument for free speech is something else entirely. It reminds the cultural left of its own debt to free speech while reminding the political right that a movement allegedly centered around constitutional values can’t abandon the concept of ordered liberty. The culture of free speech thrives when all sides remember their moral responsibilities—to both protect the right of dissent and to engage in ideological combat with a measure of grace and humility.
David French is a senior writer at National Review.
Pamela Geller
The real question isn’t whether free speech is under threat in the United States, but rather whether it’s irretrievably lost. Can we get it back? Not without war, I suspect, as is evidenced by the violence that erupts at colleges on the shamefully rare occasions when a conservative speaker appears on campus.
Free speech is the soul of our nation and the foundation of all our other freedoms. If we can’t speak out against injustice and evil, those forces will prevail. Freedom of speech is the foundation of a free society. Without it, a tyrant can wreak havoc unopposed, while his opponents are silenced.
With that principle in mind, I organized a free-speech event in Garland, Texas. The world had recently been rocked by the murder of the Charlie Hebdo cartoonists. My version of “Je Suis Charlie” was an event here in America to show that we can still speak freely and draw whatever we like in the Land of the Free. Yet even after jihadists attacked our event, I was blamed—by Donald Trump among others—for provoking Muslims. And if I tried to hold a similar event now, no arena in the country would allow me to do so—not just because of the security risk, but because of the moral cowardice of all intellectual appeasers.
Under what law is it wrong to depict Muhammad? Under Islamic law. But I am not a Muslim, and I don’t live under Sharia. America isn’t under Islamic law, yet for standing for free speech, I’ve been:
- Prevented from running our advertisements in every major city in this country. We have won free-speech lawsuits all over the country, which officials circumvent by prohibiting all political ads (while making exceptions for ads from Muslim advocacy groups);
- Shunned by the right, shut out of the Conservative Political Action Conference;
- Shunned by Jewish groups at the behest of terror-linked groups such as the Council on American-Islamic Relations;
- Blacklisted from speaking at universities;
- Prevented from publishing books, for security reasons and because publishers fear shaming from the left;
- Banned from Britain.
A Seattle court accused me of trying to shut down free speech after we merely tried to run an FBI poster on global terrorism: authorities in other cities had banned all political ads to avoid running ours, and Seattle blamed us for those bans, which was like blaming a woman for being raped because she was wearing a short skirt.
This kind of vilification and shunning is key to the left’s plan to shut down all dissent from its agenda—it makes legislation restricting speech unnecessary.
The same refusal to allow our point of view to be heard has manifested itself elsewhere. The foundation of my work is individual rights and equality for all before the law. These are the foundational principles of our constitutional republic. That is now considered controversial. Truth is the new hate speech. Truth is going to be criminalized.
The First Amendment doesn’t only protect ideas that are sanctioned by the cultural and political elites. If “hate speech” laws are enacted, who would decide what’s permissible and what’s forbidden? The government? The gunmen in Garland?
There has been an inversion of the founding premise of this nation. No longer is it the subordination of might to right, but of right to might. History is littered with the bloody consequences of such inversions.
Pamela Geller is the editor in chief of the Geller Report and president of the American Freedom Defense Initiative.
Jonah Goldberg
Of course free speech is under threat in America. Frankly, it’s always under threat in America because it’s always under threat everywhere. Ronald Reagan was right when he said in 1961, “Freedom is never more than one generation away from extinction. We didn’t pass it on to our children in the bloodstream. It must be fought for, protected, and handed on for them to do the same.”
This is more than political boilerplate. Reagan identified the source of the threat: human nature. God may have endowed us with a right to liberty, but he didn’t give us all a taste for it. As with most finer things, we must work to acquire a taste for it. That is what civilization—or at least our civilization—is supposed to do: cultivate attachments to certain ideals. “Cultivate” shares the same Latin root as “culture,” cultus, and properly understood they mean the same thing: to grow, nurture, and sustain through labor.
In the past, threats to free speech have taken many forms—nationalist passion, Comstockery (both good and bad), political suppression, etc.—but the threat to free speech today is different. It is less top-down and more bottom-up. We are cultivating a generation of young people to reject free speech as an important value.
One could mark the beginning of the self-esteem movement with Nathaniel Branden’s 1969 book, The Psychology of Self-Esteem, which claimed that “feelings of self-esteem were the key to success in life.” This understandable idea ran amok in our schools and in our culture. When I was a kid, Saturday-morning cartoons were punctuated with public-service announcements telling kids: “The most important person in the whole wide world is you, and you hardly even know you!”
The self-esteem craze was just part of the cocktail of educational fads. Other ingredients included multiculturalism, the anti-bullying crusade, and, of course, that broad phenomenon known as “political correctness.” Combined, they’ve produced a generation that rejects the old adage “sticks and stones can break my bones but words can never harm me” in favor of the notion that “words hurt.” What we call political correctness has been on college campuses for decades. But it lacked a critical mass of young people who were sufficiently receptive to it to make it a fully successful ideology. The campus commissars welcomed the new “snowflakes” with open arms; truly, these are the ones we’ve been waiting for.
“Words hurt” is a fashionable concept in psychology today. (See Psychology Today: “Why Words Can Hurt at Least as Much as Sticks and Stones.”) But it’s actually a much older idea than the “sticks and stones” aphorism. For most of human history, it was a crime to say insulting or “injurious” things about aristocrats, rulers, the Church, etc. That tendency didn’t evaporate with the Divine Right of Kings. Jonathan Haidt has written at book length about our natural capacity to create zones of sanctity, immune from reason.
And that is the threat free speech faces today. Those who inveigh against “hate speech” are in reality fighting “heresy speech”—ideas that do “violence” to sacred notions of self-esteem, racial or gender equality, climate change, and so on. Put whatever label you want on it, contemporary “social justice” progressivism acts as a religion, and it has no patience for blasphemy.
When Napoleon’s forces converted churches into stables, the clergy did not object on the grounds that regulations regarding the proper care and feeding of animals had been violated. They complained of sacrilege and blasphemy. When Charles Murray or Christina Hoff Sommers visits college campuses, the protestors are behaving like the zealous acolytes of St. Jerome. Appeals to the First Amendment have as much power over the “antifa” fanatics as appeals to Odin did to champions of the New Faith.
That is the real threat to free speech today.
Jonah Goldberg is a senior editor at National Review and a fellow at the American Enterprise Institute.
KC Johnson
In early May, the Washington Post urged universities to make clear that “racist signs, symbols, and speech are off-limits.” Given the extraordinarily broad definition of what constitutes “racist” speech at most institutions of higher education, this demand would single out most right-of-center (and, in some cases, even centrist and liberal) discourse on issues of race or ethnicity. The editorial provided the highest-profile example of how hostility to free speech, once confined to the ideological fringe on campus, has migrated to the liberal mainstream.
The last few years have seen periodic college protests—featuring claims that significant amounts of political speech constitute “violence,” thereby justifying censorship—followed by even more troubling attempts to appease the protesters. After the mob scene that greeted Charles Murray upon his visit to Middlebury College, for instance, the student government opposed any punishment of the protesters, and several student leaders wanted to require that future speakers conform to the college’s “community standard” on issues of race, gender, and ethnicity. In the last few months, similar attempts to stifle the free exchange of ideas in the name of promoting diversity occurred at Wesleyan, Claremont McKenna, and Duke. Offering an extreme interpretation of this point of view, one CUNY professor recently dismissed dialogue as “inherently conservative,” since it reinforced the “relations of power that presently exist.”
It’s easy, of course, to dismiss campus hostility to free speech as affecting only a small segment of American public life—albeit one that trains the next generation of judges, legislators, and voters. But, as Jonathan Chait observed in 2015, denying “the legitimacy of political pluralism on issues of race and gender” has broad appeal on the left. It is only most apparent on campus because “the academy is one of the few bastions of American life where the political left can muster the strength to impose its political hegemony upon others.” During his time in office, Barack Obama generally urged fellow liberals to support open intellectual debate. But the current campus environment previews the position of free speech in a post-Obama Democratic Party, increasingly oriented around identity politics.
Waning support on one end of the ideological spectrum for this bedrock American principle should provide a political opening for the other side. The Trump administration, however, seems poorly suited to make the case. Throughout his public career, Trump has rarely supported free speech, even in the abstract, and has periodically embraced legal changes to facilitate libel lawsuits. Moreover, the right-wing populism that motivates Trump’s base has a long tradition of ideological hostility to civil liberties of all types. Even in campus contexts, conservatives have defended free speech inconsistently, as seen in recent calls that CUNY disinvite anti-Zionist fanatic Linda Sarsour as a commencement speaker.
In a sharply polarized political environment, awash in dubiously sourced information, free speech is all the more important. Yet this same environment has seen both sides, most blatantly elements of the left on campuses, demand restrictions on their ideological foes’ free speech in the name of promoting a greater good.
KC Johnson is a professor of history at Brooklyn College and the CUNY Graduate Center.
Laura Kipnis
I find myself with a strange-bedfellows problem lately. Here I am, a left-wing feminist professor invited onto the pages of Commentary—though I’d be thrilled if it were still 1959—while fielding speaking requests from right-wing think tanks and libertarians who oppose child-labor laws.
Somehow I’ve ended up in the middle of the free-speech-on-campus debate. My initial crime was publishing a somewhat contentious essay about campus sexual paranoia that put me on the receiving end of Title IX complaints. Apparently I’d created a “hostile environment” at my university. I was investigated (for 72 days). Then I wrote up what I’d learned about these campus inquisitions in a second essay. Then I wrote about it all some more, in a book exposing the kangaroo-court elements of the Title IX process—and the extra-legal gag orders imposed on everyone caught in its widening snare.
I can’t really comment on whether more charges have been filed against me over the book. I’ll just say that writing about being a Title IX respondent could easily become a life’s work. I learned, shortly after writing this piece, that my publisher and I were being sued for defamation, among other things.
Is free speech under threat on American campuses? Yes. We know all about student activists who wish to shut down talks by people with opposing views. I got smeared with a bit of that myself, after a speaking invitation at Wellesley—some students made a video protesting my visit before I arrived. The talk went fine, though a group of concerned faculty circulated an open letter afterward also protesting the invitation: My views on sexual politics were too heretical, and might have offended students.
I didn’t take any of this too seriously, even as right-wing pundits crowed, with Wellesley as their latest outrage bait. It was another opportunity to mock student activists, and the fact that I was myself a feminist rather than a Charles Murray or a Milo Yiannopoulos made them positively gleeful.
I do find myself wondering where all my new free-speech pals were when another left-wing professor, Steven Salaita, was fired (or if you prefer euphemism, “his job offer was withdrawn”) from the University of Illinois after he tweeted criticism of Israel’s Gaza policy. Sure, the tweets were hyperbolic, but hyperbole and strong opinions are protected speech, too.
I guess free speech is easy to celebrate until it actually challenges something. Funny, I haven’t seen Milo around lately—so beloved by my new friends when he was bashing minorities and transgender kids. Then he mistakenly said something authentic (who knew he was capable of it!), reminiscing about an experience a lot of gay men have shared: teenage sex with older men. He tried walking it back—no, no, he’d been a victim, not a participant—but his fan base was shrieking about pedophilia and fleeing in droves. Gee, they were all so against “political correctness” a few minutes before.
It’s easy to be a free-speech fan when your feathers aren’t being ruffled. No doubt what makes me palatable to the anti-PC crowd is having thus far failed to ruffle them enough. I’m just going to have to work harder.
Laura Kipnis’s latest book is Unwanted Advances: Sexual Paranoia Comes to Campus.
Eugene Kontorovich
The free and open exchange of views—especially politically conservative or traditionally religious ones—is being challenged. This is taking place not just on college campuses but throughout our public spaces and cultural institutions. James Watson was fired from the lab he had led since 1968 and could not speak at New York University because of petty, censorious students who would not know DNA from LSD. Our nation’s founders and heroes are being “disappeared” from public commemoration, like Trotsky from a photograph of Soviet rulers.
These attacks on “free speech” are not the result of government action. They are not what the First Amendment protects against. The current methods—professional and social shaming, exclusion, and employment termination—are more inchoate, and their effects are multiplied by self-censorship. A young conservative legal scholar might find himself thinking: “If the late Justice Antonin Scalia can posthumously be deemed a ‘bigot’ by many academics, what chance have I?”
Ironically, artists and intellectuals have long prided themselves on being the first defenders of free speech. Today, it is the institutions of both popular and high culture that are the censors. Is there one poet in the country who would speak out for Ann Coulter?
The inhibition of speech at universities is part of a broader social phenomenon of making longstanding, traditional views and practices sinful overnight. Conservatives have not put up much resistance to this. To paraphrase Martin Niemöller’s famous dictum: “First they came for Robert E. Lee, and I said nothing, because Robert E. Lee meant nothing to me.”
The situation with respect to Israel and expressions of support for it deserves separate discussion. Even as university administrators give political power to favored ideologies by letting them create “safe spaces” (safe from opposing views), Jews find themselves and their state at the receiving end of claims of apartheid—modern-day blood libels. It is not surprising if Jewish students react by demanding that they get a safe space of their own. It is even less surprising if their parents, paying $65,000 a year, want their children to have a nicer time of it. One hears Jewish groups frequently express concern about Jewish students feeling increasingly isolated and uncomfortable on campus.
But demanding selective protection from the new ideological commissars is unlikely to bring the desired results. First, this new ideology, even if it can be harnessed momentarily to give respite to harassed Jews on campus, is ultimately illiberal and will be controlled by “progressive” forces. Second, it is not so terrible for Jews in the Diaspora to feel a bit uncomfortable. It has been the common condition of Jews throughout the millennia. The social awkwardness that Jews at liberal arts schools might feel in being associated with Israel is of course one of the primary justifications for the Jewish State. Facing the snowflakes incapable of hearing a dissonant view—but who nonetheless, in the grip of intersectional ecstasy, revile Jewish self-determination—Jewish students should toughen up.
Eugene Kontorovich teaches constitutional law at Northwestern University and heads the international law department of the Kohelet Policy Forum in Jerusalem.
Nicholas Lemann
There’s an old Tom Wolfe essay in which he describes being on a panel discussion at Princeton in 1965 and provoking the other panelists by announcing that America, rather than being in crisis, is in the middle of a “happiness explosion.” He was arguing that the mass effects of 20 years of post–World War II prosperity made for a larger phenomenon than the Vietnam War, the racial crisis, and the other primary concerns of intellectuals at the time.
In the same spirit, I’d say that we are in the middle of a free-speech explosion, because of 20-plus years of the Internet and 10-plus years of social media. If one understands speech as disseminated individual opinion, then surely we live in the free-speech-est society in the history of the world. Anybody with access to the unimpeded World Wide Web can say anything to a global audience, and anybody can hear anything, too. All threats to free speech should be understood in the context of this overwhelming reality.
It is a comforting fantasy that a genuine free-speech regime will empower mainly “good,” but previously repressed, speech. Conversely, repressive regimes that are candid enough to explain their anti-free-speech policies usually say that they’re not against free speech, just “bad” speech. We have to accept that more free speech probably means, in the aggregate, more bad speech, and also a weakening of the power, authority, and economic support for information professionals such as journalists. Welcome to the United States in 2017.
I am lucky enough to live and work on the campus of a university, Columbia, that has been blessedly free of successful attempts to repress free speech. Just in the last few weeks, Charles Murray and Dinesh D’Souza have spoken here without incident. But, yes, the evidently growing popularity of the idea that “hate speech” shouldn’t be permitted on campuses is a problem, especially, it seems, at small private liberal-arts colleges. We should all do our part, and I do, by frequently and publicly endorsing free-speech principles. Opposing the BDS movement falls squarely into that category.
It’s not just on campuses that free-speech vigilance is needed, though. The number-one threat to free speech, to my mind, is that the wide-open Web has been replaced by privately owned platforms such as Facebook and Google as the way most people experience the public life of the Internet. These companies are committed to banning “hate speech,” and they are eager to operate freely in countries, like China, that don’t permit free political speech. That makes for a far more consequential constraint on speech than any campus speech code.
Also, Donald Trump regularly engages in presidentially unprecedented rhetoric demonizing people who disagree with him. He seems to think this is all in good fun, but, as we have already seen at his rallies, not everybody hears it that way. The place where Trumpism will endanger free speech isn’t in the center—the White House press room—but at the periphery, for example in the way that local police handle bumptious protestors and the journalists covering them. This is already happening around the country. If Trump were as disciplined and knowledgeable as Vladimir Putin or Recep Tayyip Erdogan, which so far he seems not to be, then free speech could be in even more serious danger from government, which in most places is its usual main enemy.
Nicholas Lemann is a professor at Columbia Journalism School and a staff writer for the New Yorker.
Michael J. Lewis
Free speech is a right but it is also a habit, and where the habit shrivels so will the right. If free speech today is in headlong retreat—everywhere threatened by regulation, organized harassment, and even violence—it is in part because our political culture allowed the practice of persuasive oratory to atrophy. The process began in 1973, an unforeseen side effect of Roe v. Wade. Legislators were delighted to learn that by relegating this divisive matter of public policy to the Supreme Court and adopting a merely symbolic position, they could sit all the more safely in their safe seats.
Since then, one crucial question of public policy after another has been punted out of the realm of politics and into that of the judiciary. Issues that might have been debated with all the rhetorical agility of a Lincoln and a Douglas, and then subjected to a process of negotiation, compromise, and voting, have instead been settled by decree: e.g., Chevron, Kelo, Obergefell. The consequences for speech have been pernicious. Since the time of Pericles, deliberative democracy has been predicated on the art of persuasion, which demands the forceful clarity of thought and expression without which no one has ever been persuaded. But a legislature that delegates its authority to judges and regulators will awaken to discover its oratorical culture has been stunted. When politicians, rather than seeking to convince and win over, prefer to project a studied and pleasant vagueness, debate withers into tedious defensive performance. It has been decades since any presidential debate has seen any sustained give-and-take over a matter of policy. If there is any suspense at all, it is only the possibility that a fatigued or peeved candidate might blurt out that tactless shard of truth known as a gaffe.
A generation accustomed to hearing platitudes smoothly dispensed from behind a teleprompter will find the speech of a fearless extemporaneous speaker to be startling, even disquieting; unfamiliar ideas always are. Unhappily, they have been taught to interpret that disquiet as an injury done to them, rather than as a premise offered to them to consider. All this would not have happened—certainly not to this extent—had not our deliberative democracy decided a generation ago that it preferred the security of incumbency to the risks of unshackled debate. The compulsory contraction of free speech on college campuses is but the logical extension of the voluntary contraction of free speech in our political culture.
Michael J. Lewis’s new book is City of Refuge: Separatists and Utopian Town Planning (Princeton University Press).
Heather Mac Donald
The answer to the symposium question depends on how powerful the transmission belt is between academia and the rest of the country. On college campuses, violence and brute force are silencing speakers who challenge left-wing campus orthodoxies. These totalitarian outbreaks have been met with listless denunciations by college presidents, followed by . . . virtually nothing. As of mid-May, the only discipline imposed for 2017’s mass attacks on free speech at UC Berkeley, Middlebury, and Claremont McKenna College was a letter of reprimand inserted—sometimes only temporarily—into the files of several dozen Middlebury students, accompanied by a brief period of probation. Previous outbreaks of narcissistic incivility, such as the screaming-girl fit at Yale and the assaults on attendees of Yale’s Buckley program, were discreetly ignored by college administrators.
Meanwhile, the professoriate unapologetically defends censorship and violence. After the February 1 riot in Berkeley to prevent Milo Yiannopoulos from speaking, Déborah Blocker, associate professor of French at UC Berkeley, praised the rioters. They were “very well-organized and very efficient,” Blocker reported admiringly to her fellow professors. “They attacked property but they attacked it very sparingly, destroying just enough University property to obtain the cancellation order for the MY event and making sure no one in the crowd got hurt” (emphasis in original). (In fact, perceived Milo and Donald Trump supporters were sucker-punched and maced; businesses downtown were torched and vandalized.) New York University’s vice provost for faculty, arts, humanities, and diversity, Ulrich Baer, displayed Orwellian logic by claiming in a New York Times op-ed that shutting down speech “should be understood as an attempt to ensure the conditions of free speech for a greater group of people.”
Will non-academic institutions take up this zeal for outright censorship? Other ideological products of the left-wing academy have been fully absorbed and operationalized. Racial victimology, which drives much of the campus censorship, is now standard in government and business. Corporate diversity trainers counsel that bias is responsible for any lack of proportional racial representation in the corporate ranks. Racial disparities in school discipline and incarceration are universally attributed to racism rather than to behavior. Public figures have lost jobs for violating politically correct taboos.
Yet Americans possess an instinctive commitment to the First Amendment. Federal judges, hardly an extension of the Federalist Society, have overwhelmingly struck down campus speech codes. It is hard to imagine that they would be any more tolerant of the hate-speech legislation so prevalent in Europe. So the question becomes: At what point does the pressure to conform to the elite worldview curtail freedom of thought and expression, even without explicit bans on speech?
Social stigma against conservative viewpoints is not the same as actual censorship. But the line can blur. The Obama administration used regulatory power to impose behavioral conformity on public and private entities. School administrators may have technically still possessed the right to dissent from novel theories of gender, but they had to behave as if they were fully on board with the transgender revolution when it came to allowing boys to use girls’ bathrooms and locker rooms.
Had Hillary Clinton been elected president, the federal bureaucracy would have mimicked campus diversocrats with even greater zeal. That threat, at least, has been avoided. Heresies against left-wing dogma may still enter the public arena, if only by the back door. The mainstream media have lurched even further left in the Trump era, but the conservative media, however mocked and marginalized, are expanding (though Twitter and Facebook’s censorship of conservative speakers could be a harbinger of more official silencing).
Outside the academy, free speech is still legally protected, but its exercise requires ever greater determination.
Heather Mac Donald is a fellow at the Manhattan Institute and the author of The War on Cops.
John McWhorter
There is a certain mendacity, as Brick put it in Cat on a Hot Tin Roof, in our discussion of free speech on college campuses. Namely, none of us genuinely wish that absolutely all issues be aired in the name of education and open-mindedness. To insist so is to pretend that civilized humanity makes nothing we could call advancement in philosophical consensus.
I doubt we need “free speech” on issues such as whether slavery and genocide are okay or whether it has been a mistake to view women as men’s equals; we may likewise banish as antique the idea that whites are a master race while other peoples represent a lower rung on the Darwinian scale. With all due reverence for John Stuart Mill’s advocacy of the regular airing of even noxious views in order to reinforce clarity on why they were rejected, we are also human beings with limited time. A commitment to the Enlightenment justifiably will decree that certain views are, indeed, no longer in need of discussion.
However, our modern social-justice warriors are claiming that this no-fly zone of discussion is vaster than any conception of logic or morality justifies. We are being told that questions regarding the modern proposals about cultural appropriation, about whether even passing infelicitous statements constitute racism in the way that formalized segregation and racist disparagement did, or about whether social disparities can be due to cultural legacies rather than structural impediments, are as indisputably egregious, backwards, and abusive as the benighted views of the increasingly distant past.
That is, the new idea is not only that discrimination and inequality still exist, but that to even question the left’s utopian expectations on such matters justifies the same furious, sloganistic, and even physically violent resistance that was once leveled against those designated heretics by a Christian hegemony.
Of course the protesters in question do not recognize themselves in a portrait as opponents of something called heresy. They suppose that Galileo’s opponents were clearly wrong but that they, today, are actually correct in a way that no intellectual or moral argument could coherently deny.
As such, we have students allowed to decree college campuses “racist” when they are the least racist spaces on the planet—because they are, predictably given the imperfection of humans, not perfectly free of passingly unsavory interactions. Thinkers from the right rather than the left, invited to talk for a portion of an hour, have dinner with a few people, and fly home, are treated as if they were reanimated Hitlers. The student of color who hears a few white students venturing polite questions about the leftist orthodoxy is supported in fashioning these questions as “racist” rhetoric.
The people on college campuses who openly and aggressively spout this new version of Christian (or even Islamist) crusading—ironically justifying it as a barricade against “fascist” muzzling of freedom when the term applies ominously well to the regime they are fostering—are a minority. However, the sawmill spinning blade of their rhetoric has succeeded in rendering opposition as risky as espousing pedophilia, such that only those natively open to violent criticism dare speak out. The latter group is small. The campus consensus thereby becomes, if only at moralistic gunpoint à la the ISIS victim video, a strangled hard-leftism.
Hence freedom of speech is indeed threatened on today’s college campuses. I have lost count of how many of my students, despite being liberal Democrats (many of whom sobbed at Hillary Clinton’s loss last November), have told me that they are afraid to express their opinions about issues that matter, despite the fact that their opinions are ones that any liberal or even leftist person circa 1960 would have considered perfectly acceptable.
Something has shifted of late, and not in a direction we can legitimately consider forwards.
John McWhorter teaches linguistics, philosophy, and music history at Columbia University and is the author of The Language Hoax, Words on the Move, and Talking Back, Talking Black.
Kate Bachelder Odell
It’s 2021, and Harvard Square has devolved into riots: Some 120 people are injured in protests, and the carnage includes fire-consumed cop cars and smashed-in windows. The police discharge canisters of tear gas, and, after apprehending dozens of protesters, enforce a 1:45 A.M. curfew. Anyone roaming the streets after hours is subject to arrest. About 2,000 National Guardsmen are prepared to intervene. Such violence and disorder is also roiling Berkeley and other elite and educated areas.
Oh, that’s 1970. The details are from the Harvard Crimson’s account of “anti-war” riots that spring. The episode is instructive in considering whether free speech is under threat in the United States. Almost daily, there’s a new YouTube installment of students melting down over viewpoints of speakers invited to one campus or another. Even amid speech threats from government—for example, the IRS’s targeting of political opponents—nothing has captured the public’s attention like the end of free expression at America’s institutions of higher learning.
Yet disruption, confusion, and even violence are not new campus phenomena. And it’s hard to imagine that young adults who deployed brute force in the 1960s and ’70s were deeply committed to the open and peaceful exchange of ideas.
There may also be reason for optimism. The rough-and-tumble on campus in the 1960s and ’70s produced a more even-tempered ’80s and ’90s, and colleges are probably heading for another course correction. In covering the ruckuses at Yale, Missouri, and elsewhere, I’ve talked to professors and students who are figuring out how to respond to the illiberalism, even if the reaction is delayed. The University of Chicago put out a set of free-speech principles last year, and other schools such as Princeton and Purdue have endorsed them.
The NARPs—Non-Athletic Regular People, as they are sometimes known on campus—still outnumber the social-justice warriors, who appear to be overplaying their hand. Case in point is the University of Missouri, which experienced a precipitous drop in enrollment after instructor Melissa Click and her ilk stoked racial tensions last spring. The college has closed dorms and trimmed budgets. Which brings us to another silver lining: The economic model of higher education (exorbitant tuition to pay ever more administrators) may blow up traditional college before the fascists can.
Note also that the anti-speech movement is run by rich kids. A Brookings Institution analysis from earlier this year discovered that “the average enrollee at a college where students have attempted to restrict free speech comes from a family with an annual income $32,000 higher than that of the average student in America.” Few rank higher in average income than those at Middlebury College, where students evicted scholar Charles Murray in a particularly ugly scene. (The report notes that Murray was received respectfully at Saint Louis University, “where the median income of students’ families is half Middlebury’s.”) The impulses of over-adulated 20-year-olds may soon be tempered by the tyranny of having to show up for work on a daily basis.
None of this is to suggest that free speech is enjoying some renaissance either on campus or in America. But perhaps as the late Wall Street Journal editorial-page editor Robert Bartley put it in his valedictory address: “Things could be worse. Indeed, they have been worse.”
Kate Bachelder Odell is an editorial writer for the Wall Street Journal.
Jonathan Rauch
Is free speech under threat? The one-syllable answer is “yes.” The three-syllable answer is: “Yes, of course.” Free speech is always under threat, because it is not only the single most successful social idea in all of human history, it is also the single most counterintuitive. “You mean to say that speech that is offensive, untruthful, malicious, seditious, antisocial, blasphemous, heretical, misguided, or all of the above deserves government protection?” That seemingly bizarre proposition is defensible only on the grounds that the marketplace of ideas turns out to be the most powerful engine of knowledge, prosperity, liberty, social peace, and moral advancement that our species has had the good fortune to discover.
Every new generation of free-speech advocates will need to get up every morning and re-explain the case for free speech and open inquiry—today, tomorrow, and forever. That is our lot in life, and we just need to be cheerful about it. At discouraging moments, it is helpful to remember that the country has made great strides toward free speech since 1798, when the Adams administration arrested and jailed its political critics; and since the 1920s, when the U.S. government banned and burned James Joyce’s great novel Ulysses; and since 1954, when the government banned ONE, a pioneering gay journal. (The cover article was a critique of the government’s indecency censors, who censored it.) None of those things could happen today.
I suppose, then, the interesting question is: What kind of threat is free speech under today? In the present age, direct censorship by government bodies is rare. Instead, two more subtle challenges hold sway, especially, although not only, on college campuses. The first is a version of what I called, in my book Kindly Inquisitors, the humanitarian challenge: the idea that speech that is hateful or hurtful (in someone’s estimation) causes pain and thus violates others’ rights, much as physical violence does. The other is a version of what I called the egalitarian challenge: the idea that speech that denigrates minorities (again, in someone’s estimation) perpetuates social inequality and oppression and thus also is a rights violation. Both arguments call upon administrators and other bureaucrats to defend human rights by regulating speech rights.
Both doctrines are flawed to the core. Censorship harms minorities by enforcing conformity and entrenching majority power, and it no more ameliorates hatred and injustice than smashing thermometers ameliorates global warming. If unwelcome words are the equivalent of bludgeons or bullets, then the free exchange of criticism—science, in other words—is a crime. I could go on, but suffice it to say that the current challenges are new variations on ancient themes—and they will be followed, in decades and centuries to come, by many, many other variations. Memo to free-speech advocates: Our work is never done, but the really amazing thing, given the proposition we are tasked to defend, is how well we are doing.
Jonathan Rauch is a senior fellow at the Brookings Institution and the author of Kindly Inquisitors: The New Attacks on Free Thought.
Nicholas Quinn Rosenkranz
Speech is under threat on American campuses as never before. Censorship in various forms is on the rise. And this year, the threat to free speech on campus took an even darker turn, toward actual violence. The prospect of Milo Yiannopoulos speaking at Berkeley provoked riots that caused more than $100,000 worth of property damage on the campus. The prospect of Charles Murray speaking at Middlebury led to a riot that put a liberal professor in the hospital with a concussion. Ann Coulter’s speech at Berkeley was cancelled after the university determined that none of the appropriate venues could be protected from “known security threats” on the date in question.
The free-speech crisis on campus is caused, at least in part, by a more insidious campus pathology: the almost complete lack of intellectual diversity on elite university faculties. At Yale, for example, the number of registered Republicans in the economics department is zero; in the psychology department, there is one. Overall, there are 4,410 faculty members at Yale, and the total number of those who donated to a Republican candidate during the 2016 primaries was three.
So when today’s students purport to feel “unsafe” at the mere prospect of a conservative speaker on campus, it may be easy to mock them as “delicate snowflakes,” but in one sense, their reaction is understandable: If students are shocked at the prospect of a Republican behind a university podium, perhaps it is because many of them have never before laid eyes on one.
To see the connection between free speech and intellectual diversity, consider the recent commencement speech of Harvard President Drew Gilpin Faust:
Universities must be places open to the kind of debate that can change ideas. . . . Silencing ideas or basking in intellectual orthodoxy independent of facts and evidence impedes our access to new and better ideas, and it inhibits a full and considered rejection of bad ones. . . . We must work to ensure that universities do not become bubbles isolated from the concerns and discourse of the society that surrounds them. Universities must model a commitment to the notion that truth cannot simply be claimed, but must be established—established through reasoned argument, assessment, and even sometimes uncomfortable challenges that provide the foundation for truth.
Faust is exactly right. But, alas, her commencement audience might be forgiven a certain skepticism. After all, the number of registered Republicans in several departments at Harvard—e.g., history and psychology—is exactly zero. In those departments, the professors themselves may be “basking in intellectual orthodoxy” without ever facing “uncomfortable challenges.” This may help explain why some students will do everything in their power to keep conservative speakers off campus: They notice that faculty hiring committees seem to do exactly the same thing.
In short, it is a promising sign that true liberal academics like Faust have started speaking eloquently about the crucial importance of civil, reasoned disagreement. But they will be more convincing on this point when they hire a few colleagues with whom they actually disagree.
Nicholas Quinn Rosenkranz is a professor of law at Georgetown. He serves on the executive committee of Heterodox Academy, which he co-founded, on the board of directors of the Federalist Society, and on the board of directors of the Foundation for Individual Rights in Education (FIRE).
Ben Shapiro
In February, I spoke at California State University, Los Angeles. Before my arrival, professors informed students that a white supremacist would be descending on the school to preach hate; threats of violence soon prompted the administration to cancel the event. I vowed to show up anyway. One hour before the event, the administration backed down and promised that the event could go forward, but police officers were told not to stop the 300 students, faculty, and outside protesters who blocked and assaulted those who attempted to attend the lecture. We ended up trapped in the auditorium, with the authorities telling students not to leave for fear of physical violence. I was rushed from campus under armed police guard.
Is free speech under assault?
Of course it is.
On campus, free speech is under assault thanks to a perverse ideology of intersectionality that claims victim identity is of primary value and that views are merely a secondary concern. As a corollary, if your views offend someone who outranks you on the intersectional hierarchy, your views are treated as violence—threats to identity itself. On campus, statements that offend an individual’s identity have been treated as “microaggressions”: actual aggressions against another, ostensibly worthy of violence. Words, students have been told, may not break bones, but they will prompt sticks and stones, and rightly so.
Thus, protesters around the country—leftists who see verbiage as violence—have, in turn, used violence in response to ideas they hate. Leftist local authorities then use the threat of violence as an excuse to discriminate ideologically against conservatives. This means public intellectuals like Charles Murray being run off campus and his leftist professorial escort viciously assaulted; it means Ann Coulter being targeted for violence at Berkeley; it means universities preemptively banning me and Ayaan Hirsi Ali and Condoleezza Rice and even Jason Riley.
The campus attacks on free speech are merely the most extreme iteration of an ideology that spans from left to right: the notion that your right to free speech ends where my feelings begin. Even Democrats who say that Ann Coulter should be allowed to speak at Berkeley say that nobody should be allowed to contribute to a super PAC (unless you’re a union member, naturally).
Meanwhile, on the right, the president’s attacks on the press have convinced many Republicans that restrictions on the press wouldn’t be altogether bad. A Vanity Fair/60 Minutes poll in late April found that 36 percent of Americans thought freedom of the press “does more harm than good.” Undoubtedly, some of that is due to the media’s obvious bias. CNN’s Jeff Zucker has targeted the Trump administration for supposedly quashing journalism, but he was silent when the Obama administration’s Department of Justice cracked down on reporters from the Associated Press and Fox News, and when hacks like Deputy National Security Adviser Ben Rhodes openly sold lies regarding Iran. But for some on the right, the response to press falsities hasn’t been to call for truth but instead to echo Trumpian falsehoods in the hopes of damaging the media. Free speech is only important when people seek the truth. Leftists traded truth for tribalism long ago; in response, many on the right seem willing to do the same. Until we return to a common standard under which facts matter, free speech will continue to rest on tenuous grounds.
Ben Shapiro is the editor in chief of The Daily Wire and the host of The Ben Shapiro Show.
Judith Shulevitz:
It’s tempting to blame college and university administrators for the decline of free speech in America, and for years I did just that. If the guardians of higher education won’t inculcate the habits of mind required for serious thinking, I thought, who will? The unfettered but civil exchange of ideas is the basic operation of education, just as addition is the basic operation of arithmetic. And universities have to teach both the unfettered part and the civil part, because arguing in a respectful manner isn’t something anyone does instinctively.
So why change my mind now? Schools still cling to speech codes, and there still aren’t enough deans like the one at the University of Chicago who declared his school a safe-space-free zone. My alma mater just handed out prizes for “enhancing race and/or ethnic relations” to two students caught on video harassing the dean of their residential college, one screaming at him that he’d created “a space for violence to happen,” the other placing his face inches away from the dean’s and demanding, “Look at me.” All this because they deemed a thoughtful if ill-timed letter about Halloween costumes written by the dean’s wife to be an act of racist aggression. Yale should discipline students who behave like that, even if they’re right on the merits (I don’t think they were, but that’s not the point). They certainly don’t deserve awards. I can’t believe I had to write that sentence.
But in abdicating their responsibilities, the universities have enabled something even worse than an attack on free speech. They’ve unleashed an assault on themselves. There’s plenty of free speech around; we know that because so much bad speech—low-minded nonsense—tests our constitutional tolerance daily, and that’s holding up pretty well. (As Nicholas Lemann observes elsewhere in this symposium, Facebook and Google represent bigger threats to free speech than students and administrators.) What’s endangered is good speech.
Universities have set themselves up to be used. Provocateurs exploit the atmosphere on campus to goad overwrought students, then gleefully trash the most important bastion of our crumbling civil society. Higher education and everything it stands for—logical argument, the scientific method, epistemological rigor—start to look illegitimate. Voters perceive tenure and research and higher education itself as hopelessly partisan and unworthy of taxpayers’ money.
The press is a secondary victim of this process of delegitimization. If serious inquiry can be waved off as ideology, then facts won’t be facts and reporting can’t be trusted. All journalism will be equal to all other journalism, and all journalists will be reduced to pests you can slam to the ground with near impunity. Politicians will be able to say anything and do just about anything and there will be no countervailing authority to challenge them. I’m pretty sure that that way lies Putinism and Erdoganism. And when we get to that point, I’m going to start worrying about free speech again.
Judith Shulevitz is a critic in New York.
Harvey Silverglate:
Free speech is, and has always been, threatened. The title of Nat Hentoff’s 1993 book Free Speech for Me—But Not for Thee is no less true today than at any time, even as the Supreme Court has accorded free speech a more absolute degree of protection than in any previous era.
Since the 1980s, the high court has decided most major free-speech cases in favor of speech, with most of the major decisions being unanimous or nearly so.
Women’s-rights advocates were turned back by the high court in 1986 when they sought to ban the sale of printed materials that, because some deemed them pornographic, were alleged to promote violence against women. Censorship in the name of gender-based protection thus failed to gain traction.
Despite the demands of civil-rights activists, the Supreme Court in 1992 declared cross-burning to be a protected form of expression in R.A.V. v. City of St. Paul, a decision later refined to strengthen a narrow exception for when cross-burning occurs primarily as a physical threat rather than merely an expression of hatred.
Other attempts at First Amendment circumvention have been met with equally decisive rebuff. When the Reverend Jerry Falwell sued Hustler magazine publisher Larry Flynt for defamation growing out of a parody depicting Falwell’s first sexual encounter as a drunken tryst with his mother in an outhouse, a unanimous Supreme Court lectured on the history of parody as a constitutionally protected, even if cruel, form of social and political criticism.
When the South Boston Allied War Veterans, sponsor of Boston’s Saint Patrick’s Day parade, sought to exclude a gay veterans’ group from marching under its own banner, the high court unanimously held that as a private entity, even though marching in public streets, the Veterans could exclude any group marching under a banner conflicting with the parade’s socially conservative message, notwithstanding public-accommodations laws. The gay group could have its own parade but could not rain on that of the conservatives.
Despite such legal clarity, today’s most potent attacks on speech are coming, ironically, from liberal-arts colleges. Ubiquitous “speech codes” limit speech that might insult, embarrass, or “harass,” in particular, members of “historically disadvantaged” groups. “Safe spaces” and “trigger warnings” protect purportedly vulnerable students from hearing words and ideas they might find upsetting. Student demonstrators and threats of violence have forced the cancellation of controversial speakers, left and right.
It remains unclear how much campus censorship results from politically correct faculty, control-obsessed student-life administrators, or students socialized and indoctrinated into intolerance. My experience suggests that the bureaucrats are primarily, although not entirely, to blame. When sued, colleges either lose or settle, pay a modest amount, and then return to their censorious ways.
This trend threatens the heart and soul of liberal education. Eventually it could infect the entire society as these students graduate and assume influential positions. Whether a resulting flood of censorship ultimately overcomes legal protections and weakens democracy remains to be seen.
Harvey Silverglate, a Boston-based lawyer and writer, is the co-author of The Shadow University: The Betrayal of Liberty on America’s Campuses (Free Press, 1998). He co-founded the Foundation for Individual Rights in Education in 1999 and is on FIRE’s board of directors. He spent some three decades on the board of the ACLU of Massachusetts, two of those years as chairman. Silverglate taught at Harvard Law School for a semester during a sabbatical he took in the mid-1980s.
Christina Hoff Sommers:
When Heather Mac Donald’s “blue lives matter” talk was shut down by a mob at Claremont McKenna College, the president of neighboring Pomona College sent out an email defending free speech. Twenty-five students shot back a response: “Heather Mac Donald is a fascist, a white supremacist . . . classist, and ignorant of interlocking systems of domination that produce the lethal conditions under which oppressed peoples are forced to live.”
Some blame the new campus intolerance on hypersensitive, over-trophied millennials. But the students who signed that letter don’t appear to be fragile. Nor do those who recently shut down lectures at Berkeley, Middlebury, DePaul, and Cal State LA. What they are is impassioned. And their passion is driven by a theory known as intersectionality.
Intersectionality is the source of the new preoccupation with microaggressions, cultural appropriation, and privilege-checking. It’s the reason more than 200 colleges and universities have set up Bias Response Teams. Students who overhear potentially “otherizing” comments or jokes are encouraged to make anonymous reports to their campus BRTs. A growing number of professors and administrators have built their careers around intersectionality. What is it exactly?
Intersectionality is a neo-Marxist doctrine that views racism, sexism, ableism, heterosexism, and all forms of “oppression” as interconnected and mutually reinforcing. Together these “isms” form a complex arrangement of advantages and burdens. A white woman is disadvantaged by her gender but advantaged by her race. A Latino is burdened by his ethnicity but privileged by his gender. According to intersectionality, American society is a “matrix of domination,” with affluent white males in control. Not only do they enjoy most of the advantages, they also determine what counts as “truth” and “knowledge.”
But marginalized identities are not without resources. According to one of intersectionality’s leading theorists, Patricia Hill Collins (a former president of the American Sociological Association), disadvantaged groups have access to deeper, more liberating truths. To find their voice, and to enlighten others to the true nature of reality, they require a safe space—free of microaggressive put-downs and imperious cultural appropriations. Here they may speak openly about their “lived experience.” Lived experience, according to intersectional theory, is a better guide to the truth than self-serving Western and masculine styles of thinking. So don’t try to refute intersectionality with logic or evidence: That only proves that you are part of the problem it seeks to overcome.
How could comfortably ensconced college students be open to a convoluted theory that describes their world as a matrix of misery? Don’t they flinch when they hear intersectional scholars like bell hooks refer to the U.S. as an “imperialist, white-supremacist, capitalist patriarchy”? Most take it in stride because such views are now commonplace in high-school history and social studies texts. And the idea that knowledge comes from lived experience rather than painstaking study and argument is catnip to many undergrads.
Silencing speech and forbidding debate is not an unfortunate by-product of intersectionality—it is a primary goal. How else do you dismantle a lethal system of oppression? As the protesting students at Claremont McKenna explained in their letter: “Free speech . . . has given those who seek to perpetuate systems of domination a platform to project their bigotry.” To the student activists, thinkers like Heather Mac Donald and Charles Murray are agents of the dominant narrative, and their speech is “a form of violence.”
It is hard to know how our institutions of higher learning will find their way back to academic freedom, open inquiry, and mutual understanding. But as long as intersectional theory goes unchallenged, campus fanaticism will intensify.
Christina Hoff Sommers is a resident scholar at the American Enterprise Institute. She is the author of several books, including Who Stole Feminism? and The War Against Boys. She also hosts The Factual Feminist, a video blog. @Chsommers
John Stossel:
Yes, some college students do insane things. Some called police when they saw “Trump 2016” chalked on sidewalks. The vandals at Berkeley and the thugs who assaulted Charles Murray are disgusting. But they are a minority. And these days people fight back.
Someone usually videotapes the craziness. Yale’s “Halloween costume incident” drove away two sensible instructors, but videos mocking Yale’s snowflakes, like “Silence U,” make such abuse less likely. Groups like Young America’s Foundation (YAF) publicize censorship, and the Foundation for Individual Rights in Education (FIRE) sues schools that restrict speech.
Consciousness has been raised. On campus, the worst is over. But free speech has always been fragile. I once took cameras to Seton Hall Law School right after a professor gave a lecture on free speech. Students seemed to get the concept. Sean, now a lawyer, said, “Protect freedom for thought we hate; otherwise you never have a society where ideas clash, and we come up with the best idea.” So I asked, “Should there be any limits?” Students listed “fighting words,” “shouting fire in a theater,” malicious libel, etc.—reasonable, court-approved exceptions. But then they went further. Several wanted bans on “hate” speech. “No value comes out of hate speech,” said Javier. “It inevitably leads to violence.”
“No, it doesn’t,” I argued. “Also, doesn’t hate speech bring ideas into the open, so you can better argue about them, bringing you to the truth?”
“No,” replied Floyd. “With hate speech, more speech is just violence.”
So I pulled out a big copy of the First Amendment and wrote, “exception: hate speech.”
Two students wanted a ban on flag desecration “to respect those who died to protect it.”
One wanted bans on blasphemy:
“Look at the gravity of the harm versus the value in blasphemy—the harm outweighs the value.”
Several wanted a ban on political speech by corporations because of “the potential for large corporations to improperly influence politicians.”
Finally, Jillian, also now a lawyer, wanted hunting videos banned.
“It encourages harm down the road.”
I asked her, incredulously, “You’re comfortable locking up people who make a hunting film?”
“Oh, yeah,” she said. “It’s unnecessary cruelty to feeling and sentient beings.”
So, I picked up my copy of the Bill of Rights again. After “no law . . . abridging freedom of speech,” I added: “Except hate speech, flag burning, blasphemy, corporate political speech, depictions of hunting . . . ”
That embarrassed them. “We may have gone too far,” said Sean. Others agreed. One said, “Cross out the exceptions.” Free speech survived, but it was a close call. Respect for unpleasant speech will always be thin. Then-Senator Hillary Clinton wanted violent video games banned. John McCain and Russ Feingold tried to ban political speech. Donald Trump wants new libel laws, and if you burn a flag, he tweeted, consequences might be “loss of citizenship or a year in jail!” Courts or popular opinion killed those bad ideas.
Free speech will survive, assuming those of us who appreciate it use it to fight those who would smother it.
John Stossel is a FOX News/FOX Business Network Contributor.
Warren Treadgold:
Even citizens of dictatorships are free to praise the regime and to talk about the weather. The only speech likely to be threatened anywhere is the sort that offends an important and intolerant group. What is new in America today is a leftist ideology that threatens speech precisely because it offends certain important and intolerant groups: feminists and supposedly oppressed minorities.
So far this new ideology is clearly dominant only in colleges and universities, where it has become so strong that most controversies concern outside speakers invited by students, not faculty speakers or speakers invited by administrators. Most academic administrators and professors are either leftists or have learned not to oppose leftism; otherwise they would probably never have been hired. Administrators treat even violent leftist protestors with respect and are ready to prevent conservative and moderate outsiders from speaking rather than provoke protests. Most professors who defend conservative or moderate speakers argue that the speakers’ views are indeed noxious but say that students should be exposed to them to learn how to refute them. This is very different from encouraging a free exchange of ideas.
Although the new ideology began on campuses in the ’60s, it gained authority outside them largely by means of several majority decisions of the Supreme Court, from Roe (1973) to Obergefell (2015). The Supreme Court decisions that endanger free speech are based on a presumed consensus of enlightened opinion that certain rights favored by activists have the same legitimacy as rights explicitly guaranteed by the Constitution—or even more legitimacy, because the rights favored by activists are assumed to be so fundamental that they need no grounding in specific constitutional language. The Court majorities found restricting abortion rights or homosexual marriage, as large numbers of Americans wish to do, to be constitutionally equivalent to restricting black voting rights or interracial marriage. Any denial of such equivalence therefore opposes fundamental constitutional rights and can be considered hate speech, advocating psychological and possibly physical harm to groups like women seeking abortions or homosexuals seeking approval. Such speech may still be constitutionally protected, but acting upon it is not.
This ideology of forbidding allegedly offensive speech has spread to most of the Democratic Party and the progressive movement. Rather than seeing themselves as taking one side in a free debate, progressives increasingly argue (for example) that opposing abortion is offensive to women and supporting the police is offensive to blacks. Some politicians object so strongly to such speech that despite their interest in winning votes, they attack voters who disagree with them as racists or sexists. Expressing views that allegedly discriminate against women, blacks, homosexuals, and various other minorities can now be grounds for a lawsuit.
Speech that supposedly offends women or minorities has already cost some people their careers, their businesses, and their opportunities to deliver or hear speeches. Such intimidation is the intended result of an ideology that threatens free speech.
Warren Treadgold is a professor of history at Saint Louis University.
Matt Welch:
Like a sullen zoo elephant rocking back and forth from leg to leg, there is an oversized paradox we’d prefer not to see standing smack in the sightlines of most of our policy debates. Day by day, even minute by minute, America simultaneously gets less free in the laboratory, but more free in the field. Individuals are constantly expanding the limits and applications of their own autonomy, even as government transcends prior restraints on how far it can reach into our intimate business.
So it is that the Internal Revenue Service can charge foreign banks with collecting taxes on U.S. citizens (thereby causing global financial institutions to shun many of the estimated 6 million-plus Americans who live abroad), even while blockchain virtuosos make illegal transactions wholly undetectable to authorities. It has never been easier for Americans to travel abroad, and it’s never been harder to enter the U.S. without showing passports, fingerprints, retinal scans, and even social-media passwords.
What’s true for banking and tourism is doubly true for free speech. Social media has given everyone not just a platform but a megaphone (as unreadable as our Facebook timelines have all become since last November). At the same time, the federal government during this unhappy 21st century has continuously ratcheted up prosecutorial pressure against leakers, whistleblowers, investigative reporters, and technology companies.
A hopeful bulwark against government encroachment unique to the free-speech field is the Supreme Court’s very strong First Amendment jurisprudence in the past decade or two. Donald Trump, like Hillary Clinton before him, may prattle on about locking up flag-burners, but Antonin Scalia and the rest of SCOTUS protected such expression back in 1990. Barack Obama and John McCain (and Hillary Clinton—she’s as bad as any recent national politician on free speech) may lament the Citizens United decision, but it’s now firmly legal to broadcast unfriendly documentaries about politicians without fear of punishment, no matter the electoral calendar.
But in this very strength lies what might be the First Amendment’s most worrying vulnerability. Barry Friedman, in his 2009 book The Will of the People, made the persuasive argument that the Supreme Court typically ratifies, post facto, where public opinion has already shifted. Today’s culture of free speech could be tomorrow’s legal framework. If so, we’re in trouble.
For evidence of free-speech slippage, just read around you. When both major-party presidential nominees react to terrorist attacks by calling to shut down corners of the Internet, and when their respective supporters are actually debating the propriety of sucker punching protesters they disagree with, it’s hard to escape the conclusion that our increasingly shrill partisan sorting is turning the very foundation of post-1800 global prosperity into just another club to be swung in our national street fight.
In the eternal cat-and-mouse game between private initiative and government control, the former is always advantaged by the latter’s fundamental incompetence. But what if the public willingly hands government the power to muzzle? It may take a counter-cultural reformation to protect this most noble of American experiments.
Matt Welch is the editor at large of Reason.
Adam J. White:
Free speech is indeed under threat on our university campuses, but the threat did not begin there and it will not end there. Rather, the campus free-speech crisis is a particularly visible symptom of a much more fundamental crisis in American culture.
The problem is not that some students, teachers, and administrators reject traditional American values and institutions, or even that they are willing to menace or censor others who defend those values and institutions. Such critics have always existed, and they can be expected to use the tools and weapons at their disposal. The problem is that our country seems to produce too few students, teachers, and administrators who are willing or able to respond to them.
American families produce children who arrive on campus unprepared for, or uninterested in, defending our values and institutions. For our students who are focused primarily on their career prospects (if on anything at all), “[c]ollege is just one step on the continual stairway of advancement,” as David Brooks observed 16 years ago. “They’re not trying to buck the system; they’re trying to climb it, and they are streamlined for ascent. Hence they are not a disputatious group.”
Meanwhile, parents bear incomprehensible financial burdens to get their kids through college, without a clear sense of precisely what their kids will get out of these institutions in terms of character formation or civic virtue. With so much money at stake, few can afford for their kids to pursue more than career prospects.
Those problems are not created on campus, but they are exacerbated there, as too few college professors and administrators see their institutions as cultivators of American culture and republicanism. Confronted with activists’ rage, they offer no competing vision of higher education—let alone a compelling one.
Ironically, we might borrow a solution from the Left. Where progressives would leverage state power in service of their health-care agenda, we could do the same for education. State legislatures and governors, recognizing the present crisis, should begin to reform and renegotiate the fundamental nature of state universities. By making state universities more affordable, more productive, and more reflective of mainstream American values, they will attract students—and create incentives for competing private universities to follow suit.
Let’s hope they do it soon, for what’s at stake is much more than just free speech on campus, or even free speech writ large. In our time, as in Tocqueville’s, “the instruction of the people powerfully contributes to the support of a democratic republic,” especially “where instruction which awakens the understanding is not separated from moral education which amends the heart.” We need our colleges to cultivate—not cut down—civic virtue and our capacity for self-government. “Republican government presupposes the existence of these qualities in a higher degree than any other form,” Madison wrote in Federalist 55. If “there is not sufficient virtue among men for self-government,” then “nothing less than the chains of despotism” can restrain us “from destroying and devouring one another.”
Adam J. White is a research fellow at the Hoover Institution.
Cathy Young:
A writer gets expelled from the World Science Fiction Convention for criticizing the sci-fi community’s preoccupation with racial and gender “inclusivity” while moderating a panel. An assault on free speech, or an exercise of free association? How about when students demand the disinvitation of a speaker—or disrupt the speech? When a critic of feminism gets banned from a social-media platform for unspecified “abuse”?
Such questions are at the heart of many recent free-speech controversies. There is no censorship by government, but how concerned should we be when private actors effectively suppress unpopular speech? Even in the freest society, some speech will—and should—be considered odious and banished to unsavory fringes. No one weeps for ostracized Holocaust deniers or pedophilia apologists.
But shunned speech needs to remain a narrow exception—or acceptable speech will inexorably shrink. As current Federal Communications Commission chairman Ajit Pai cautioned last year, First Amendment protections will be hollowed out unless undergirded by cultural values that support a free marketplace of ideas.
Sometimes, attacks on speech come from the right. In 2003, an Iraq War critic, reporter Chris Hedges, was silenced at Rockford College in Illinois by hecklers who unplugged the microphone and rushed the stage; some conservative pundits defended this as robust protest. Yet the current climate on the left—in universities, on social media, in “progressive” journalism, in intellectual circles—is particularly hostile to free expression. The identity-politics left, fixated on subtle oppressions embedded in everyday attitudes and language, sees speech-policing as the solution.
Is hostility to free-speech values on the rise? New York magazine columnist Jesse Singal argues that support for restrictions on public speech offensive to minorities has remained steady, and fairly high, since the 1970s. Perhaps. But the range of what qualifies as offensive—and which groups are to be shielded—has expanded dramatically. In our time, a leading liberal magazine, the New Republic, can defend calls to destroy a painting of lynching victim Emmett Till because the artist is white and guilty of “cultural appropriation,” and a feminist academic journal can be bullied into apologizing for an article on transgender issues that dares to mention “male genitalia.”
There is also a distinct trend of “bad” speech being squelched by coercion, not just disapproval. That includes the incidents at Middlebury College in Vermont and at Claremont McKenna in California, where mobs not only prevented conservative speakers—Charles Murray and Heather Mac Donald—from addressing audiences but physically threatened them as well. It also includes the use of civil-rights legislation to enforce goodthink in the workplace: Businesses may face stiff fines if they don’t force employees to call a “non-binary” co-worker by the singular “they,” even when talking among themselves.
These trends make a mockery of liberalism and enable the kind of backlash we have seen with Donald Trump’s election. But the backlash can bring its own brand of authoritarianism. It’s time to start rebuilding the culture of free speech across political divisions—a project that demands, above all, genuine openness and intellectual consistency. Otherwise it will remain, as the late, great Nat Hentoff put it, a call for “free speech for me, but not for thee.”
Cathy Young is a contributing editor at Reason.
Robert J. Zimmer:
Free speech is not a natural feature of human society. Many people are comfortable with free expression for views they agree with but would withhold this privilege from those they deem offensive. People justify such restrictions by various means: the appeal to moral certainty, political agendas, demanding change, opposing change, retaining power, resisting authority, or, more recently, not wanting to feel uncomfortable. Moral certainty about one’s views or a willingness to indulge one’s emotions makes it easy to assert that others are doing true damage or creating unacceptable offense simply by presenting a fundamentally different perspective.
The resulting challenges to free expression may come in the form of laws, threats, pressure (whether societal, group, or organizational), or self-censorship in the face of a prevailing consensus. Specific forms of challenge may be more or less pronounced as circumstances vary. But the widespread temptation to consider the silencing of “objectionable” viewpoints as acceptable implies that the challenge to free expression is always present.
The United States today is no exception. We benefit from the First Amendment, which asserts that the government shall make no law abridging the freedom of speech. However, fostering a society supporting free expression involves matters far beyond the law. The ongoing and increasing demonization of one group by another creates a political and social environment conducive to suppressing speech. Even violent acts opposing speech can become acceptable or encouraged. Such behavior is evident at both political rallies and university events. Our greatest current threat to free expression is the emergence of a national culture that accepts the legitimacy of suppression of speech deemed objectionable by a segment of the population.
University and college campuses present a particularly vivid instance of this cultural shift. There have been many well-publicized episodes of speakers being disinvited or prevented from speaking because of their views. However, the problem is much deeper, as there is significant self-censorship on many campuses. Both faculty and students sometimes find themselves silenced by social and institutional pressures to conform to “acceptable” views. Ironically, the very mission of universities and colleges to provide a powerful and deeply enriching education for their students demands that they embrace and protect free expression and open discourse. Failing to do so significantly diminishes the quality of the education they provide.
My own institution, the University of Chicago, through the words and actions of its faculty and leaders since its founding, has asserted the importance of free expression and its essential role in embracing intellectual challenge. We continue to do so today as articulated by the Chicago Principles, which strongly affirm that “the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.” It is only in such an environment that universities can fulfill their own highest aspirations and provide leadership by demonstrating the value of free speech within society more broadly. A number of universities have joined us in reinforcing these values. But it remains to be seen whether the faculty and leaders of many institutions will truly stand up for these values, and in doing so provide a model for society as a whole.
Robert J. Zimmer is the president of the University of Chicago.