We do not yet have a full-scale history of intellectuals in the United States, but when that book comes to be written one of its central themes will surely be that our intellectuals have done their work mostly in isolation. Even the groups we locate in the past—the Transcendentalists encircling Emerson, the writers and critics following Van Wyck Brooks during the Seven Arts period—are groups mainly by courtesy of retrospect. The figures we see within them were not nearly so close to one another in experience nor so allied in opinion as our need for historical reconstruction makes them out to have been. The kind of inner fraternity we associate with literary groups in Paris and London has rarely been characteristic of American intellectual life. It is hardly an accident that one of our most poignant cultural legends concerns the brief friendship between Hawthorne and Melville and then its long sequel of separation. Ours is a culture in which people rattle around.
A seeming exception is the group of writers who have come to be known, these past few decades, as the New York intellectuals. They appear to have a common history, prolonged now for more than thirty years; a common political outlook, even if marked by ceaseless internecine quarrels; a common style of thought and perhaps composition; a common focus of intellectual interests; and once you get past politeness—which becomes, these days, easier and easier—a common ethnic origin. They are, or until recently have been, anti-Communist; they are, or until some time ago were, radicals; they have a fondness for ideological speculation; they write literary criticism with a strong social emphasis; they revel in polemic; they strive self-consciously to be “brilliant”; and by birth or osmosis, they are Jews.
The New York intellectuals are perhaps the only group America has ever had that could be described as an intelligentsia. This term comes awkwardly to our lips, and for good reason: it suggests, as Martin Malia, a historian of Russian culture, writes, “more than intellectuals in the ordinary sense. Whether merely ‘critical-thinking’ or actively oppositional, their name indicates that [in Russia] they thought of themselves as the embodied ‘intelligence’ . . . or ‘consciousness’ of the nation. They clearly felt an exceptional sense of apartness from the society in which they lived.”
Malia's phrase about “consciousness of the nation” seems special to the problems of the Russian intellectuals under Tsarism, but the rest of his description fits the New York intellectuals rather well: the stress upon “critical thinking,” the stance of active opposition, the sense of apartness. Or perhaps more accurately, it is a description which fits the past of the New York intellectuals. And just as the Russian “intelligentsia” was marked by a strongly Westernizing outlook, a wish to bring Russian culture out of its provincial limits and into a close relationship with the culture of Western Europe, so the New York intellectuals have played a role in the internationalization of American culture, serving as a liaison between American readers and Russian politics, French ideas, European writing.
A more complicated approach to the problem of the intelligentsia is provided by Renato Poggioli in his book The Theory of the Avant Garde. He describes the Russian intelligentsia as “an intellectual order from the lower ranks . . . created by those who were rejected by other classes: an intellectual order whose function was not so much cultural as political. . . .” Poggioli remarks that in Russia the term referred to a “cultural proletariat,” but
these intellectuals are not so much proletarian as proletarianizing . . . they may become ideologically and politically bound to the mass of workers and peasants but they are not, at bottom, an order economically bound to the interests of those masses. A member of the intelligentsia is not born but made.
I suspect there may be a contradiction between regarding the intelligentsia as an order “from the lower ranks” and concluding that a member of this order “is not born but made.” But Poggioli's description is valuable insofar as it suggests that the intelligentsia is defined primarily by its position in society rather than by its relation to culture. Poggioli wishes sharply to distinguish the intelligentsia from an “intellectual elite,” which he regards as a self-mobilized group whose raison d'être is a cultural attitude, and in our time, a positive commitment to modernist literature. In respect to late 19th and early 20th-century Russia, this distinction is useful; when we turn to America, we are obviously dealing with loose analogies, yet useful ones too, for Poggioli's distinction should help us, a little later, to see the precise nature and limits of the New York intellectuals as a group.
Reflecting upon the experience of these writers, one begins to wonder whether—apart from a few years during the late 30's—they ever did constitute a coherent and self-defined group. The steady exchange of ideas, the reading of manuscripts, the preliminary discussion of work, all these characteristics of European intellectuals were not often evident in New York. On the contrary. In their work habits the New York intellectuals have mostly been loners, and in their relationships with one another, closer to the vision of life we associate with Hobbes than with Kropotkin. Repeatedly I have been struck by the way writers commonly associated with this group will hotly deny that it exists, or will say that if indeed it does exist they—they!—would not be so docile as to be part of it. Certain New York intellectuals like Harold Rosenberg and Lionel Abel have never been very strong in sentiments of group fraternity, and Rosenberg, in the course of a polemic against other New York writers, once coined the memorable phrase, “a herd of independent minds.” Some, like myself, have seen themselves as only in part and then ambivalently related, since we are also caught up with a separate political milieu.1 After a time, in Europe, it became a source of pride for writers to say they had once been associated with the Bloomsbury group or the Scrutiny critics or the socialists led by Gorky before the Revolution; but for whatever reasons, that point has not been reached among the New York writers. I doubt that it ever will be. Contentious and, by virtue of their origins and history, uncertain as to their relationship with American culture, the New York intellectuals wish, so far as I can tell, to form a loose and unacknowledged tribe.
Yet the mere fact that there does exist a commonly shared perception of a New York intellectual group, even if that perception is held mainly by hostile academics and a parasitic mass media, must be taken as decisive. That people “out there” believe in the reality of the New York group makes it a reality of sorts. And in all candor there is something else: the New York writers dislike being labeled, they can speak bitterly about each other's work and opinions, they may not see one another from year's start to year's end, but they are nervously alert to one another's judgments. Attention is paid—whether from warranted respect or collective vanity or provincial narrowness, it hardly matters.
Such groups approach a fragile state of coherence only at the point where writers are coming together and the point where they are drifting apart. Especially does this seem true at the end, when there comes that tremor of self-awareness which no one would have troubled to feel during the years of energy and confidence. A tradition in process of being lost, a generation facing assault and ridicule from ambitious younger men—the rekindled sense of group solidarity is brought to a half-hour's flame by the hardness of dying. And it is at such moments that the mass media, never more than twenty years late, become aware of the problem: their publicity signals recognition and recognition a certificate of death.
The social roots of the New York writers are not hard to trace. With a few delightful exceptions—a tendril from Yale, a vine from Seattle—they stem from the world of the immigrant Jews, either workers or petty bourgeois.2 They come at a moment in the development of immigrant Jewish culture when there is a strong drive not only to break out of the ghetto but also to leave behind the bonds of Jewishness entirely. Earlier generations had known such feelings, and through many works of fiction, especially those by Henry Roth, Michael Gold, and Daniel Fuchs, one can return to the classic pattern of a fierce attachment to the provincialism of origins as it becomes entangled with a fierce eagerness to plunge into the Gentile world of success, manners, freedom. As early as the 1890's this pattern had already come into view, and with diminishing intensity it has continued to control Jewish life deep into the 20th century; perhaps its last significant expression comes in Philip Roth's stories, where the sense of Jewish tradition is feeble but the urge to escape its suburban ruins extremely strong.
The New York intellectuals were the first group of Jewish writers to come out of the immigrant milieu who did not define themselves through a relationship, nostalgic or hostile, to memories of Jewishness. They were the first generation of Jewish writers for whom the recall of an immigrant childhood does not seem to have been completely overwhelming. (Is that perhaps one reason few of them tried to write fiction?) That this severance from Jewish roots and immigrant sources would later come to seem a little suspect is another matter. All I wish to stress here is that, precisely at the point in the 30's when the New York intellectuals began to form themselves into a loose cultural-political tendency, Jewishness as idea and sentiment played no significant role in their expectations—apart, to be sure, from a bitter awareness that no matter what their political or cultural desires, the sheer fact of their recent emergence had still to be regarded, and not least of all by themselves, as an event within Jewish American life.
For decades the life of the East European Jews, both in the old country and the new, might be compared to a tightly-gathered spring, trembling with unused force, which had been held in check until the climactic moment of settlement in America. Then the energies of generations came bursting out, with an ambition that would range from pure to coarse, disinterested to vulgar, and indeed would mix all these together, but finally—this ambition—would count for more as an absolute release than in any of its local manifestations. What made Sammy run was partly that his father and his father's father had been bound hand and foot. And in all the New York intellectuals there was and had to be a fraction of Sammy. All were driven by a sense of striving, a thrust of will, an unspoken conviction that time had now to be regained.
The youthful experiences described by Alfred Kazin in his autobiography are, apart from his distinctive outcroppings of temperament, more or less typical of the experiences of many New York intellectuals—except, at one or two points, for the handful that involved itself deeply in the radical movement. It is my impression, however, that Kazin's affectionate stress on the Jewish sources of his experience is mainly a feeling of retrospect, mainly a recognition in the 50's and 60's that no matter how you might try to shake off your past, it would still cling to your speech, gestures, skin and nose, it would still shape, with a thousand subtle movements, the way you did your work and raised your children. In the 30's, however, it was precisely the idea of discarding the past, breaking away from families, traditions, and memories which excited intellectuals. They meant to declare themselves citizens of the world and, that succeeding, perhaps consider becoming writers of this country.
The Jewish immigrant world branded upon its sons and daughters marks of separateness even while encouraging them to dreams of universalism. This subculture may have been formed to preserve ethnic continuity, but it was the kind of continuity that would reach its triumph in self-disintegration. It taught its children both to conquer the Gentile world and to be conquered by it, both to leave an intellectual impress and to accept the dominant social norms. By the 20's and 30's the values dominating Jewish immigrant life were mostly secular, radical, and universalist, and if these were often conveyed through a parochial vocabulary, they nonetheless carried some remnants of European culture. Even as they were moving out of a constricted immigrant milieu, the New York intellectuals were being prepared by it for the tasks they would set themselves during the 30's. They were being prepared for the intellectual vocation as one of assertiveness, speculation, and free-wheeling; for the strategic maneuvers of a vanguard, at this point almost a vanguard in the abstract, with no ranks following in the rear; and for the union of politics and culture, with the politics radical and the culture cosmopolitan. What made this goal all the more attractive was that the best living American critic, Edmund Wilson, had triumphantly reached it: he was the author of both The Triple Thinkers and To the Finland Station, he served as a model for emulation, and he gave this view of the intellectual life a special authority in that he seemed to come out of the mainstream of American life.
That the literary avant garde and the political Left were not really comfortable partners would become clear with the passage of time; in Europe it already had. But during the years the New York intellectuals began to appear as writers and critics worthy of some attention, there was a feeling in the air that a union of the advanced—critical consciousness and political conscience—could be forged.
Throughout the 30's the New York intellectuals believed, somewhat naively, that this union was not only a desirable possibility but a tie both natural and appropriate. Except, however, for the Surrealists in Paris, and it is not clear how seriously this instance should be taken, the paths of political radicalism and cultural modernism have seldom met. To use Poggioli's terms, the New York writers were more an “intelligentsia” than an “intellectual elite,” and more inclined to an amorphous “proletarianizing” than to an austere partisanship for modernism.
The history of the West in the last century offers many instances in which Jewish intellectuals played an important role in the development of political radicalism; but almost always this occurred when there were sizable movements, with the intellectuals serving as spokesmen, propagandists, and functionaries of a party. In New York, by contrast, the intellectuals had no choice but to begin with a dissociation from the only significant radical movement in this country, the Communist party. What for European writers like Koestler, Silone, and Malraux would be the end of the road was here a beginning. In a fairly short time, the New York writers found that the meeting of political and cultural ideas which had stirred them to excitement could also leave them stranded and distressed. Radicalism, in both its daily practice and ethical biases, proved inhospitable to certain aspects of modernism—and not always, I now think, mistakenly. Literary modernism often had a way of cavalierly dismissing the world of daily existence, a world that remained intensely absorbing to the New York writers. Literary modernism could sometimes align itself with reactionary movements, a fact that was intensely embarrassing and required either torturous explanations or complex dissociations. The New York writers discovered, as well, that their relationship to modernism as a purely literary phenomenon was less authoritative and more ambiguous than they had wished to feel. The great battles for Joyce and Eliot and Proust had been fought in the 20's and mostly won; and now, while clashes with entrenched philistinism might still take place, they were mostly skirmishes or mopping-up operations (as in the polemics against the transfigured Van Wyck Brooks). The New York writers came at the end of the modernist experience, just as they came at what may yet have to be judged the end of the radical experience, and as they certainly came at the end of the Jewish experience. 
One shorthand way of describing their situation, a cause of both their feverish brilliance and their recurrent instability, is to say that they came late.
During the 30's and 40's their radicalism was anxious, problematic, and beginning to decay at the very moment it was adopted. They had no choice: the crisis of socialism was worldwide, profound, with no end in sight, and the only way to avoid that crisis was to bury oneself, as a few did, in the left-wing sects. Some of the New York writers had gone through the “political school” of Stalinism, a training in coarseness from which not all recovered; some had even spent a short time in the organizational coils of the Communist party. By 1936, when the anti-Stalinist Partisan Review was conceived, the central figures of that moment—Philip Rahv, William Phillips, Sidney Hook—had shed whatever sympathies they once felt for Stalinism, but the hope that they could find another ideological system, some cleansed version of Marxism associated perhaps with Trotsky or Luxemburg, was doomed to failure. Some gravitated for a year or two toward the Trotskyist group, but apart from admiration for Trotsky's personal qualities and dialectical prowess, they found little satisfaction there; no version of orthodox Marxism could retain a hold on intellectuals who had gone through the trauma of abandoning the Leninist Weltanschauung and had experienced the depth to which the politics of this century, most notably the rise of totalitarianism, called into question the once-sacred Marxist categories. From now on, the comforts of system would have to be relinquished.
Though sometimes brilliant in expression and often a stimulus to the kind of cultural speculation at which they excelled, the radicalism of the New York intellectuals during the 30's was not a deeply-grounded experience. It lacked roots in a popular movement which might bring intellectuals into relationship with the complexities of power and stringencies of organization. From a doctrine it became a style, and from a style a memory. It was symptomatic that the Marxist Quarterly, started in 1937 by a spectrum of Left intellectuals and probably the most distinguished Marxist journal ever published in this country, could survive no more than a year. The differences among its founders, some like James Burnham holding to a revolutionary Marxist line and others like Sidney Hook and Lewis Corey moving toward versions of liberalism and social democracy, proved too severe for collaboration. And even the radicalism of the Partisan Review editors and writers during its vivid early years—how deeply did it cut, except as a tool enabling them to break away from Marxism? Which of those writers and readers who now look back nostalgically have troubled to examine the early files of this important magazine and read—with embarrassment? amusement? pleasure?—the political essays it printed?
Yet if the radicalism of the New York intellectuals seems to have been without much political foundation or ideological strength, it certainly played an important role in their own development. For the New York writers, and even I suspect those among them who would later turn sour on the whole idea of radicalism (including the few who in the mid-60's would try to erase the memory of having turned sour), the 30's represented a time of intensity and fervor, a reality or illusion of engagement, a youth tensed with conviction and assurance: so that even Dwight Macdonald, who at each point in his life has made a specialty out of mocking his previous beliefs, could not help displaying tender feelings upon remembering his years, God help us, as a “revolutionist.” The radicalism of the 30's gave the New York intellectuals their distinctive style: a flair for polemic, a taste for the grand generalization, an impatience with what they regarded (often parochially) as parochial scholarship, an internationalist perspective, and a tacit belief in the unity—even if a unity beyond immediate reach—of intellectual work.
By comparison with competing schools of thought, the radicalism of the anti-Stalinist Left, as it was then being advanced in Partisan Review, seemed cogent, fertile, alive: it could stir good minds to argument, it could gain the attention of writers abroad, it seemed to offer a combination of system and independence. With time the anti-Stalinist intellectuals came to enjoy advantages somewhat like those which have enabled old radicals to flourish in the trade unions: they could talk faster than anyone else, they knew their way around better, they were quicker on their feet. Brief and superficial as their engagement with Marxism may have been, it gave the intellectuals the advantage of dialectic, sometimes dialectic as it lapsed into mere double-talk.
Yet in fairness I should add that this radicalism did achieve something of substantial value in the history of American culture. It helped destroy—once and for all, I would have said until recently—Stalinism as a force in our intellectual life, and with Stalinism those varieties of populist sentimentality which the Communist movement of the late 30's exploited with notable skill. If certain sorts of manipulative soft-headedness have been all but banished from serious American writing, and the kinds of rhetoric once associated with Archibald MacLeish and Van Wyck Brooks cast into permanent disrepute, at least some credit for this ought to go to the New York writers.
It has recently become fashionable, especially in the pages of the New York Review of Books, to sneer at the achievements of anti-Stalinism by muttering darkly about “the cold war.” But we ought to have enough respect for the past at least to avoid telescoping several decades. The major battle against Stalinism as a force within intellectual life, and in truth a powerful force, occurred before anyone heard of the cold war; it occurred in the late 30's and early 40's. In our own moment we see “the old crap,” as Marx once called it, rise to the surface with unnerving ease; there is something dizzying in an encounter with Stalin's theory of “social fascism,” particularly when it comes from the lips of young people who may not even be quite sure when Stalin lived. Still, I think there will not and probably cannot be repeated in our intellectual life the ghastly large-scale infatuation with a totalitarian regime which disgraced the 30's. Some achievements, a very few, seem beyond destruction.
A little credit is therefore due. Whatever judgments one may have about Sidney Hook's later political writings, and mine have been very critical, it is a matter of decency to recall the liberating role he played in the 30's as spokesman for a democratic radicalism and a fierce opponent of all the rationalizations for totalitarianism a good many intellectuals allowed themselves. One reason people have recently felt free to look down their noses at “anti-Communism” as if it were a mass voodoo infecting everyone from far Right to democratic Left, is precisely the toughness with which the New York intellectuals fought against Stalinism. Neither they nor anybody else could reestablish socialism as a viable politics in the United States; but for a time they did help to salvage the honor of the socialist idea—which meant primarily to place it in the sharpest opposition to all totalitarian states and systems. What many intellectuals now say they take for granted, had first to be won through bitter and exhausting struggle.
I should not like to give the impression that Stalinism was the beginning and end of whatever was detestable in American intellectual life during the 30's. Like the decades to come, perhaps like all decades, this was a “low dishonest” time. No one who grew up in, or lived through, these years should wish for a replay of their ideological melodramas. Nostalgia for the 30's is a sentiment possible only to the very young or the very old, those who have not known and those who no longer remember. Whatever distinction can be assigned to the New York intellectuals during the 30's lies mainly in their persistence as a small minority, in their readiness to defend unpopular positions against apologists for the Moscow trials and the vigilantism of Popular Front culture. Some historians, with the selectivity of retrospect, have recently begun to place the New York intellectuals at the center of cultural life in the 30's—but this is both a comic misapprehension and a soiling honor. On the contrary; their best hours were spent on the margin, in opposition.
Later, in the 40's and 50's, most of the New York intellectuals would abandon the effort to find a renewed basis for a socialist politics—to their serious discredit, I believe. Some would vulgarize anti-Stalinism into a politics barely distinguishable from reaction. Yet for almost all New York intellectuals the radical years proved a decisive moment in their lives. And for a very few, the decisive moment.
I have been speaking here as if the New York intellectuals were mainly political people, but in reality this was true for only a few of them, writers like Hook, Macdonald, and perhaps Rahv. Most were literary men or journalists with no experience in any political movement; they had come to radical politics through the pressures of conscience and a flair for the dramatic; and even in later years, when they abandoned any direct political involvement, they would in some sense remain “political.” They would maintain an alertness toward the public event. They would respond with eagerness to historical changes, even if these promised renewed favor for the very ideas they had largely discarded. They would continue to structure their cultural responses through a sharp, perhaps excessively sharp, kind of categorization, in itself a sign that political styles and habits persisted. But for the most part, the contributions of the New York intellectuals were not to political thought. Given the brief span of time during which they fancied themselves agents of a renewed Marxism, there was little they could have done. Sidney Hook wrote one or two excellent books on the sources of Marxism, Harold Rosenberg one or two penetrating essays on the dramatics of Marxism; and not much more. The real contribution of the New York writers was toward creating a new, and for this country almost exotic, style of work. They thought of themselves as cultural radicals even after they had begun to wonder whether there was much point in remaining political radicals. But what could this mean? Cultural radicalism was a notion extremely hard to define and perhaps impossible to defend, as Richard Chase would discover in the late 50's, when against the main drift of New York opinion he put forward the idea of a radicalism without immediate political ends but oriented toward criticism of a meretricious culture. 
What Chase did not live long enough to see was that his idea, much derided at the time, would lend itself a decade later to caricature through success.
Chase was seriously trying to preserve a major impetus of New York intellectual life: the exploration and defense of literary modernism.3 He failed to see, however, that this was a task largely fulfilled and, in any case, taking on a far more ambiguous and less militant character in the 50's than it would have had twenty or thirty years earlier. The New York writers had done useful work in behalf of modernist literature. Without fully realizing it, they were continuing a cultural movement that had begun in the United States during the mid-19th century: the return to Europe, not as provincials knocking humbly at the doors of the great, but as equals in an enterprise which by its very nature had to be international. We see this at work in Howells's reception of Ibsen and Tolstoy; in Van Wyck Brooks's use of European models to assault the timidities of American literature; in the responsiveness of The Little Review and The Dial to European experiments; and somewhat paradoxically, in the later fixation of the New Critics, despite an ideology of cultural provincialism, on modernist writing from abroad.
The New York critics, and most notably Partisan Review, helped complete this process of internationalizing American culture (also, by the way, Americanizing international culture). They gave a touch of glamor to that style which the Russians and Poles now call “cosmopolitan.” Partisan Review was the first journal in which it was not merely respectable but a matter of pride to print one of Eliot's Four Quartets side by side with Marxist criticism. And not only did the magazine break down the polar rigidities of the hard-line Marxists and the hard-line nativists; it also sanctioned the idea, perhaps the most powerful cultural idea of the last half century, that there existed an all but incomparable generation of modern masters, some of them still alive, who in this terrible age represented the highest possibilities of the human imagination. On a more restricted scale, Partisan Review helped win attention and respect for a generation of European writers—Silone, Orwell, Malraux, Koestler, Serge—who were not quite of the first rank as novelists but had yielded themselves to and suffered the failure of socialism.
If the Partisan critics came too late for a direct encounter with new work from the modern masters, they did serve the valuable end of placing that work in a cultural context more vital and urgent than could be provided by any other school of American criticism. For many young people up to and through the Second World War, the Partisan critics helped to mold a new sensibility, a mixture of rootless radicalism and a de-sanctified admiration for writers like Joyce, Eliot, and Kafka. I can recall that even in my orthodox Marxist phase I felt that the central literary expression of the time was a now half-forgotten poem by a St. Louis writer called “The Waste Land.”
In truth, however, the New York critics were then performing no more than an auxiliary service. They were following upon the work of earlier, more fortunate critics. And even in the task of cultural consolidation, which soon had the unhappy result of overconsolidating the modern masters in the academy, the New York critics found important allies among their occasional opponents in the New Criticism. As it turned out, the commitment to literary modernism proved insufficient either as a binding literary purpose or as a theme that might inform the writings of the New York critics. By now modernism was entering its period of decline; the old excitements had paled and the old achievements been registered. Modernism had become successful; it was no longer a literature of opposition, and thereby had begun that metamorphosis signifying its ultimate death. The problem was no longer to fight for modernism, the problem was now to consider why the fight had so easily ended in triumph. And as time went on, modernism surfaced an increasing portion of its limitations and ambiguities, so that among some critics earlier passions of advocacy gave way to increasing anxieties of judgment. Yet the moment had certainly not come when a cool and objective reconsideration could be undertaken of works that had formed the sensibility of our time. The New York critics, like many others, were trapped in a dilemma from which no escape could be found but which lent itself to brilliant improvisation: it was too late for unobstructed enthusiasm, it was too soon for unobstructed valuation, and meanwhile the literary work that was being published, though sometimes distinguished, was composed in the heavy shadows of the modernists. At almost every point this work betrayed the marks of having come after.
Except for Harold Rosenberg, who would make “the tradition of the new” a signature of his criticism, the New York writers slowly began to release those sentiments of uneasiness they had been harboring about the modernist poets and novelists. One instance was the notorious Pound case,4 in which literary and moral values, if not jammed into a head-on collision, were certainly entangled beyond easy separation. Essays on writers like D. H. Lawrence—what to make of his call for “blood consciousness,” what one's true response might be to his notions of the leader cult—began to appear. A recent book by John Harrison, The Reactionaries, which contains a full-scale attack on the politics of several modernist writers, is mostly a compilation of views that had been gathering force over the last few decades. And then, as modernism stumbled into its late period, those recent years in which its early energies have evidently reached a point of exhaustion, the New York critics became still more discomfited. There was a notable essay several years ago by Lionel Trilling in which he acknowledged mixed feelings toward the modernist writers he had long praised and taught. There was a cutting attack by Philip Rahv on Jean Genet, a perverse genius in whose fiction the compositional resources of modernism seem all but severed from its moral—one might even say, its human—interests. And more recently there has been an essay by myself ending with the gloomy expectation that no dignified funeral awaits modernism, only noisy prolongation “in publicity and sensation, the kind of savage parody which may indeed be the only fate worse than death.”
For the New York intellectuals in the 30's and 40's there was still another focus of interest, never quite as strong as radical politics or literary modernism but seeming, for a brief time, to promise a valuable new line of discussion. In the essays of writers like Clement Greenberg and Dwight Macdonald, more or less influenced by the German neo-Marxist school of Adorno-Horkheimer, there were beginnings at a theory of “mass culture,” that mass-produced pseudo-art characteristic of industrialized urban society, together with its paralyzed audiences, its inaccessible sources, its parasitic relation to high culture. More insight than system and more intuition than knowledge, this slender body of work, which appeared mostly in Politics and COMMENTARY, was nevertheless a contribution to the study of that hazy area where culture and society meet. It was attacked by writers like Edward Shils as being haughtily elitist, on the ground that it assumed a condescension to the tastes and experiences of the masses. It was attacked by writers like Harold Rosenberg, who charged that only people taking a surreptitious pleasure in dipping their noses into trash would study the “content” (he had no objection to sociological investigations) of mass culture. Even at its most penetrating, the criticism of mass culture was beset by uncertainty and improvisation; but perhaps that was all that could be expected of a beginning.
Then, almost as if by common decision, the whole subject was dropped. For years hardly a word could be found in the advanced journals about what a little earlier had been called a crucial problem of the modern era. One reason was that the theory advanced by Greenberg and Macdonald turned out to be static: it could be stated but apparently not developed. It suffered from weaknesses parallel to those of Hannah Arendt's theory of totalitarianism: by positing a cul-de-sac, a virtual end of days, for 20th-century man and his culture it proposed a suffocating relationship between high or minority culture and the ever-multiplying mass culture. From this relationship there seemed neither relief nor escape, and if one accepted this view, nothing remained but to refine the theory and keep adding grisly instances.
In the absence of more complex speculations, there was little point in continuing to write about mass culture. Besides, hostility toward the commercial pseudo-arts was hard to maintain with unyielding intensity, mostly because it was hard to remain all that interested in them—only in Macdonald's essays did both hostility and interest survive intact. Some felt that the whole matter had been inflated and that writers should stick to their business, which was literature, and intellectuals to theirs, which was ideas. Others felt that the movies and TV were beginning to show more ingenuity and resourcefulness than the severe notions advanced by Greenberg and Macdonald allowed for—though no one could have anticipated that glorious infatuation with trash which Marshall McLuhan would make acceptable. And still others felt that the multiplication of insights, even if pleasing as an exercise, failed to yield significant results: a critic who contributes a nuance to Dostoevsky criticism is working within a structured tradition, while one who throws off a clever observation about Little Orphan Annie is simply showing that he can do what he has done.
There was another and more political reason for the collapse of mass culture criticism. One incentive toward this kind of writing was the feeling that industrial society had reached a point of affluent stasis where major upheavals could now be registered much more vividly in culture than in economics. While aware of the dangers of reductionism here, I think the criticism of mass culture did serve, as some of its critics charged, conveniently to replace the criticism of bourgeois society. If you couldn't stir the proletariat to action, you could denounce Madison Avenue in comfort. Once, however, it began to be felt among intellectuals in the 50's that there was no longer so overwhelming a need for political criticism, and then, at the other pole, once it began to seem in the 60's that there were new openings for political criticism, the appetite for cultural surrogates became less keen.
Greenberg now said little more about mass culture; Macdonald made no serious effort to extend his theory or test it against new events; and in recent years, younger writers have seemed to feel that the whole approach of these men was heavy and humorless. An influential critic like Susan Sontag has proposed a cheerfully eclectic view which undercuts just about everything written from the Greenberg-Macdonald position. Now everyone is to do “his thing,” high, middle, or low; the old puritan habit of interpretation and judgment, so inimical to sensuousness, gives way to a programmed receptivity; and thus we are enlightened by lengthy studies of the ethos of the Beatles.
By the end of the Second World War, the New York writers had reached a point of severe intellectual crisis, though as frequently happens at such moments, they themselves often felt they were entering a phase of enlarged influence and power. Perhaps indeed there was a relation between inner crisis and external influence. Everything that had kept them going—the idea of socialism, the advocacy of literary modernism, the assault on mass culture, a special brand of literary criticism—was judged to be irrelevant to the postwar years. But as a group, just at the time their internal disintegration had seriously begun, the New York writers could be readily identified. The leading critics were Rahv, Phillips, Trilling, Rosenberg, Abel, and Kazin. The main political theorist was Hook. Writers of poetry and fiction related to the New York milieu were Delmore Schwartz, Saul Bellow, Paul Goodman, and Isaac Rosenfeld. And the recognized scholar, as also inspiring moral force, was Meyer Schapiro.
A sharp turn occurs, or is completed, soon after the Second World War. The intellectuals now go racing or stumbling from idea to idea, notion to notion, hope to hope, fashion to fashion. This instability often derives from a genuine eagerness to capture all that seems new—or threatening—in experience, sometimes from a mere desire to capture a bitch-goddess whose first name is Novelty. The abandonment of ideology can be liberating: a number of talents, thrown back on their own resources, begin to grow. The surrender of “commitment” can be damaging: some writers find themselves rattling about in a gray and chilly freedom. The culture opens up, with both temptation and generosity, and together with intellectual anxieties there are public rewards, often deserved. A period of dispersion; extreme oscillations in thought; and a turn in politics toward an increasingly conservative kind of liberalism—reflective, subtle, acquiescent.
The postwar years were marked by a sustained discussion of the new political and intellectual problems raised by the totalitarian state. Nothing in received political systems, whether Marxist or liberal, adequately prepared one for the frightful mixture of terror and ideology, the capacity to sweep along the plebeian masses and organize a warfare state, and above all the readiness to destroy entire peoples, which characterized totalitarianism. Still less was anyone prepared—who had heeded the warning voices of the Russian socialist Martov or the English liberal Russell?—for the transformation of the revolutionary Bolshevik state, either through a “necessary” degeneration or an internal counterrevolution, into one of the major totalitarian powers. Marxist theories of fascism—the “last stage” of capitalism, with the economy statified to organize a permanent war machine and mass terror employed to put down rebellious workers—came to seem, if not entirely mistaken, then certainly insufficient. The quasi- or pseudo-Leninist notion that “bourgeois democracy” was merely a veiled form of capitalist domination, no different in principle from its open dictatorship, proved to be a moral and political disaster. The assumption that socialism was an ordained “next step,” or that nationalization of industry constituted a sufficient basis for working-class rule, was as great a disaster. No wonder intellectual certainties were shattered and these years marked by frenetic improvisation! At every point, with the growth of Communist power in Europe and with the manufacture of the Bomb at home, apocalypse seemed the name of tomorrow.
So much foolishness and malice has been written about the New York intellectuals and their anti-Communism, either by those who have signed a separate peace with the authoritarian idea or those who lack the courage to defend what is defensible in their own past, that I want here to be both blunt and unyielding.
Given the enormous growth of Russian power after the Second World War and the real possibility of a Communist takeover in Europe, the intellectuals—and not they alone—had to reconsider their political responses.5 An old-style Marxist declaration of rectitude, a plague repeated on both their houses? Or the difficult position of making foreign policy proposals for the United States, while maintaining criticism of its social order, so as to block totalitarian expansion without resort to war? Most intellectuals decided they had to choose the second course, and as far as that goes, I think they were right.
Like anti-capitalism, anti-Communism was a tricky politics, all too open to easy distortion. Like anti-capitalism, anti-Communism could be put to the service of ideological racketeering and reaction. Just as ideologues of the fanatic Right insisted that by some ineluctable logic anti-capitalism led to a Stalinist terror, so ideologues of the authoritarian Left, commandeering the same logic, declared that anti-Communism led to the politics of Dulles and Rusk. There is, of course, no “anti-capitalism” or “anti-Communism” in the abstract; these take on political flesh only when linked with a larger body of programs and values, so that it becomes clear what kind of “anti-capitalism” or “anti-Communism” we are dealing with. It is absurd, and indeed disreputable, for intellectuals in the 60's to write as if there were a unified “anti-Communism” which can be used to enclose the views of everyone from William Buckley to Michael Harrington.
There were difficulties. A position could be worked out for conditional support of the West when it defended Berlin or introduced the Marshall Plan or provided economic help to underdeveloped countries; but in the course of daily politics, in the effort to influence the foreign policy of what remained a capitalist power, intellectuals could lose their independence and slip into vulgarities of analysis and speech.
Painful choices had to be faced. When the Hungarian revolution broke out in 1956, most intellectuals sympathized strongly with the rebels yet feared that active intervention by the West might provoke a world war. For a rational and humane mind, anti-Communism could not be the sole motive, it could only be one of several, in political behavior and policy; and even those intellectuals who had by now swung a considerable distance to the Right did not advocate military intervention in Hungary. There was simply no way out—as, just recently, there was none in Czechoslovakia.
It became clear, furthermore, that U.S. military intervention in underdeveloped countries could help local reactionaries in the short run, and the Communists in the long run. These difficulties were inherent in postwar politics, and they ruled out—though for that very reason, also made tempting—a simplistic moralism. These difficulties were also exacerbated by the spread among intellectuals of a crude anti-Communism, often ready to justify whatever the U.S. might do at home and abroad. For a hard-line group within the American Committee for Cultural Freedom, all that seemed to matter in any strongly-felt way was a sour hatred of the Stalinists, historically justifiable but more and more a political liability even in the fight against Stalinism. The dangers in such a politics now seem all too obvious, but I should note, for the sake of the record, that in the early 50's they were already being pointed out by a mostly unheeded minority of intellectuals around Dissent. Yet, with all these qualifications registered, the criticism to be launched against the New York intellectuals in the postwar years is not that they were strongly anti-Communist but rather that many of them, through disorientation or insensibility, allowed their anti-Communism to become something cheap and illiberal.
Nor is the main point of moral criticism that the intellectuals abandoned socialism. We have no reason to suppose that the declaration of a socialist opinion induces a greater humaneness than does acquiescence in liberalism. It could be argued (I would) that in the ease with which ideas of socialism were now brushed aside there was something shabby. It was undignified, at the very least, for people who had made so much of their Marxist credentials now to put to rest so impatiently the radicalism of their youth. Still, it might be said by some of the New York writers that reality itself had forced them to conclude socialism was no longer viable or had become irrelevant to the American scene, and that while this conclusion might be open to political argument, it was not to moral attack.
Let us grant that for a moment. What cannot be granted is that the shift in ideologies required or warranted the surrender of critical independence which was prevalent during the 50's. In the trauma—or relief—of ideological ricochet, all too many intellectuals joined the American celebration. It was possible, to cite but one of many instances, for Mary McCarthy to write: “Class barriers disappear or tend to become porous [in the U.S.]; the factory worker is an economic aristocrat in comparison with the middle-class clerk. . . . The America . . . of vast inequalities and dramatic contrasts is rapidly ceasing to exist”6 (emphasis added). It was because the New York writers all but surrendered their critical perspective on American society that they were now open to attack.7
It was the growth of McCarthyism which brought most sharply into question the role of the intellectuals. Here, presumably, all men of good will could agree; here the interests of the intellectuals were beyond dispute and directly at stake. The record is not glorious. In New York circles it was often said that Bertrand Russell exaggerated wildly in describing the U.S. as “subject to a reign of terror” and that Simone de Beauvoir retained Stalinist clichés in her reportage from America. Yet it should not be forgotten that, if not “a reign of terror,” McCarthyism was frightful and disgusting, and that a number of Communists and fellow-travelers, the distinctions between them not always carefully drawn, suffered serious harm.
A magazine like Partisan Review was of course opposed to McCarthy's campaign, but it failed to take the lead on the issue of freedom which might once again have imbued the intellectuals with fighting spirit. Unlike some of its New York counterparts, it did print sharp attacks on the drift toward conservatism, and it did not try to minimize the badness of the situation in the name of anti-Communism. But the magazine failed to speak out with enough force and persistence, or to break past the hedgings of those intellectuals who led the American Committee for Cultural Freedom.
COMMENTARY, under Elliot Cohen's editorship, was still more inclined to minimize the threat of McCarthyism. In September 1952, at the very moment McCarthy became a central issue in the Presidential campaign, Cohen could write: “McCarthy remains in the popular mind an unreliable, second-string blowhard; his only support as a great national figure is from the fascinated fears of the intelligentsia”—a mode of argument all too close to that of the anti-anti-Communists who kept repeating that Communism was a serious problem only in the minds of anti-Communists.
In the American Committee for Cultural Freedom the increasingly conformist and conservative impulses of the New York intellectuals, or at least of a good number of them, found formal expression. I quote at length from Michael Harrington in a 1955 Dissent, first because it says precisely what needs to be said and second because it has the value of contemporary evidence:
In practice the ACCF has fallen behind Sidney Hook's views on civil liberties. Without implying any “conspiracy” theory of history . . . one may safely say that it is Hook who has molded the decisive ACCF policies. His Heresy Yes, Conspiracy No articles were widely circulated by the Committee, which meant that in effect it endorsed his systematic, explicit efforts to minimize the threat to civil liberties and to attack those European intellectuals who, whatever their own political or intellectual deficiencies, took a dim view of American developments. Under the guidance of Hook and the leadership of Irving Kristol, who supported Hook's general outlook, the American Committee cast its weight not so much in defense of these civil liberties which were steadily being nibbled away, but rather against those few remaining fellow-travelers who tried to exploit the civil-liberties issue.
At times this had an almost comic aspect. When Irving Kristol was executive secretary of the ACCF, one learned to expect from him silence on those issues that were agitating the whole intellectual and academic world, and enraged communiqués on the outrages performed by people like Arthur Miller and Bertrand Russell in exaggerating the dangers to civil liberties in the U.S.
Inevitably this led to more serious problems. In an article by Kristol, which first appeared in COMMENTARY and was later circulated under the ACCF imprimatur, one could read such astonishing and appalling statements as “there is one thing the American people know about Senator McCarthy; he, like them, is unequivocally anti-Communist. About the spokesmen for American liberalism, they feel they know no such thing. And with some justification.” This in the name of defending cultural freedom!
Harrington then proceeded to list several instances in which the ACCF had “acted within the United States in defense of freedom.” But
these activities do not absorb the main attention or interest of the Committee; its leadership is too jaded, too imbued with the sourness of indiscriminate anti-Stalinism to give itself to an active struggle against the dominant trend of contemporary intellectual life in America. What it really cares about is a struggle against fellow-travelers and “neutralists”—that is, against many European intellectuals. . . .
One of the crippling assumptions of the Committee has been that it would not intervene in cases where Stalinists or accused Stalinists were involved. It has rested this position on the academic argument . . . that Stalinists, being enemies of democracy, have no “right” to democratic privileges. . . . But the actual problem is not the metaphysical one of whether enemies of democracy (as the Stalinists clearly are) have a “right” to democratic privileges. What matters is that the drive against cultural freedom and civil liberties takes on the guise of anti-Stalinism.
Years later came the revelations that the Congress for Cultural Freedom, which had its headquarters in Paris and with which the American Committee was for a time affiliated, had received secret funds from the CIA. Some of the people, it turned out, with whom one had sincerely disagreed were not free men at all; they were knowing accomplices of an intelligence service. What a sad denouement! And yet not the heart of the matter, as the malicious Ramparts journalists have tried to make out. Most of the intellectuals who belonged to the ACCF seem not to have had any knowledge of the CIA connection—on this, as on anything else, I would completely accept the word of Dwight Macdonald. It is also true, however, that these intellectuals seem not to have inquired very closely into the Congress's sources of support. That a few, deceiving their closest associates, established connections with the CIA was not nearly so important, however, as that a majority within the Committee acquiesced to a politics of acquiescence. We Americans have a strong taste for conspiracy theories, supposing that if you scratch a trouble you'll find a villain. But history is far more complicated, and squalid as the CIA tie was, it should not be used to smear honest people who had nothing to do with secret services even as they remain open to criticism for what they did say and do.
At the same time, the retrospective defenses offered by some New York intellectuals strike me as decidedly lame. Meetings and magazines sponsored by the Congress, Daniel Bell has said, kept their intellectual freedom and contained criticism of U.S. policy—true but hardly to the point, since the issue at stake is not the opinions the Congress tolerated but the larger problem of good faith in intellectual life. The leadership of the Congress did not give its own supporters the opportunity to choose whether they wished to belong to a CIA-financed group. Another defense, this one offered by Sidney Hook, is that private backing was hard to find during the years it was essential to publish journals like Preuves and Encounter in Europe. Simply as a matter of fact, I do not believe this. For the Congress to have raised its funds openly, from non-governmental sources, would have meant discomfort, scrounging, penny-pinching: all the irksome things editors of little magazines have always had to do. By the postwar years, however, leading figures of both the Congress and the Committee no longer thought or behaved in that tradition.
Dwight Macdonald did. His magazine Politics was the one significant effort during the late 40's to return to radicalism. Enlivened by Macdonald's ingratiating personality and his table-hopping mind, Politics brought together sophisticated muckraking with tortuous revaluations of Marxist ideology. Macdonald could not long keep in balance the competing interests which finally tore apart his magazine: lively commentary on current affairs and unavoidable if depressing retrospects on the failure of the Left. As always with Macdonald, honesty won out (one almost adds, alas) and the “inside” political discussion reached its climax with his essay The Root Is Man, in which he arrived at a kind of anarcho-pacifism based on an absolutist morality. This essay was in many ways the most poignant and authentic expression of the plight of those few intellectuals—Nicola Chiaromonte, Paul Goodman, Macdonald—who wished to dissociate themselves from the postwar turn to Realpolitik but could not find ways of transforming sentiments of rectitude and visions of Utopia into a workable politics. It was also a perfect leftist rationale for a kind of internal emigration of spirit and mind, with some odd shadings of similarity to the Salinger cult of the late 50's.8
The overwhelming intellectual drift, however, was toward the Right. Arthur Schlesinger Jr., with moony glances at Kierkegaard, wrote essays in which he maintained that American society had all but taken care of its economic problems and could now concentrate on raising its cultural level. The “end of ideology” became a favorite shield for intellectuals in retreat, though it was never entirely clear whether this phrase meant the end of “our” ideology (partly true) or that all ideologies were soon to disintegrate (not true) or that the time had come to abandon the nostalgia for ideology (at least debatable). And in the mid 50's, as if to codify things, there appeared in Partisan Review a symposium, “Our Country and Our Culture,” in which all but three or four of the thirty participants clearly moved away from their earlier radical views. The rapprochement with “America the Beautiful,” as Mary McCarthy now called it in a tone not wholly ironic, seemed almost complete.
In these years there also began that series of gyrations in opinion, interest, and outlook—so frenetic, so unserious—which would mark our intellectual life. In place of the avant-garde idea we now had the style of fashion, though to suggest a mere replacement may be too simple, since as Poggioli remarks, fashion has often shadowed the avant-garde as a kind of dandified double. Some intellectuals turned to a weekend of religion, some to a semester of existentialism,9 some to a holiday of Jewishness without faith or knowledge, some to a season of genteel conservatism. Leslie Fiedler, no doubt by design, seemed to go through more of such episodes than anyone else: even his admirers could not always be certain whether he was davening or doing a rain dance.
These twists and turns were lively, and they could all seem harmless if only one could learn to look upon intellectual life as a variety of play, like potsie or king of the hill. What struck one as troubling, however, was not this or that fashion (tomorrow morning would bring another), but the dynamic of fashion itself, the ruthlessness with which, to remain in fashion, fashion had to keep devouring itself.
It would be unfair to give the impression that the fifteen years after the war were without significant growth or achievement among the New York writers. The attempt of recent New Left ideologues to present the 40's and 50's as if they were no more than a time of intellectual sterility and reaction is an oversimplification. Together with the turn toward conservative acquiescence, there were serious and valuable achievements. Hannah Arendt's book on totalitarianism may now seem open to many criticisms, but it certainly must rank as a major piece of work which, at the very least, made impossible—I mean, implausible—those theories of totalitarianism which, before and after she wrote, tended to reduce fascism and Stalinism to a matter of class rule or economic interest. Daniel Bell's writing contributed to the rightward turn of these years, but some of it, such as his excellent little book Work and Its Discontents, constitutes a permanent contribution, and one that is valuable for radicals too. The stress upon complexity of thought which characterized intellectual life during these years could be used as a rationale for conservatism, and perhaps even arose from the turn toward conservatism; but in truth, the lapsed radicalism of earlier years had proved to be simplistic, the world of late capitalism was perplexing, and for serious people complexity is a positive value. Even the few intellectuals who resisted the dominant temper of the 50's underwent during these years significant changes in their political outlooks and styles of thought: e.g., those around Dissent who cut whatever ties of sentiment still held them to the Bolshevik tradition and made the indissoluble connection between democracy and socialism a crux of their thought. Much that happened during these years is to be deplored and dismissed, but not all was waste; the increasing sophistication and complication of mind was a genuine gain, and it would be absurd, at this late date, to forgo it.
In literary criticism there were equivalent achievements. The very instability that might make a shambles out of political thought could have the effect of magnifying the powers required for criticism. Floundering in life and uncertainty in thought could make for an increased responsiveness to art. In the criticism of men like Trilling, Rahv, Chase, and Dupee there was now a more authoritative relation to the literary text and a richer awareness of the cultural past than was likely to be found in their earlier work. And a useful tension was also set up between the New York critics, whose instinctive response to literature was through a social-moral contextualism, and the New Critics, whose formalism may have been too rigid yet proved of great value to those who opposed it.
Meanwhile, the world seemed to be opening up, with all its charms, seductions, and falsities. In the 30's the life of the New York writers had been confined: the little magazine as island, the radical sect as cave. Partly they were recapitulating the pattern of immigrant Jewish experience: an ingathering of the flock in order to break out into the world and taste the Gentile fruits of status and success. Once it became clear that waiting for the revolution might turn out to be steady work and that the United States would neither veer to fascism nor sink into depression, the intellectuals had little choice but to live within (which didn't necessarily mean, become partisans of) the existing society.
There was money to be had from publishers, no great amounts but more than in the past. There were jobs in the universities, even for those without degrees. Some writers began to discover that publishing a story in The New Yorker or Esquire was not a sure ticket to Satan; others to see that the academy, while perhaps less exciting than the Village, wasn't invariably a graveyard for intellect and might even provide the only harbor in which serious people could do their own writing and perform honorable work. This dispersion involved losses, but usually there was nothing sinister about it—unless one clung, past an appropriate age, to the fantasy of being a momentarily unemployed “professional revolutionist.” Writers ought to know something about the world; they ought to test their notions against the reality of the country in which they live. Worldly involvements would, of course, bring risks, and one of these was power, really a very trifling kind of power, but still enough to raise the fear of corruption. That power corrupts everyone knows by now, but we ought also to recognize that powerlessness, if not corrupting, can be damaging—as in the case of Paul Goodman, a very courageous writer who stuck to his anarchist beliefs through years in which he was mocked and all but excluded from the New York journals, yet who could also come to seem, in his very rectitude, an example of asphyxiating righteousness.
What brought about these changes? Partly ideological adaptation, a feeling that capitalist society was here to stay and there wasn't much point in maintaining a radical position or posture. Partly the sly workings of prosperity. But also a loosening of the society itself, the start of that process which only now is in full swing—I mean the remarkable absorptiveness of modern society, its readiness to abandon traditional precepts for a moment of excitement, its growing permissiveness toward social criticism, perhaps out of indifference, or security, or even tolerance.
In the 60's well-placed young professors and radical students would denounce the “success,” sometimes the “sellout” of the New York writers. Their attitude reminds one a little of George Orwell's remark about wartime France: only a Pétain could afford the luxury of asceticism, ordinary people had to live by the necessities of materialism. But really, when you come to think of it, what did this “success” of the intellectuals amount to? A decent or a good job, a chance to earn extra money by working hard, and in the case of a few, like Trilling and Kazin, some fame beyond New York—rewards most European intellectuals would take for granted, so paltry would they seem. For the New York writers who lived through the 30's expecting never to have a job at all, a regular pay check might be remarkable; but in the American scale of things it was very modest indeed. And what the “leftist” prigs of the 60's, sons of psychiatrists and manufacturers, failed to understand—or perhaps understood only too well—was that the “success” with which they kept scaring themselves was simply one of the possibilities of adult life, a possibility, like failure, heavy with moral risks and disappointment. Could they imagine that they too might have to face the common lot?—I mean the whole business: debts, overwork, varicose veins, alimony, drinking, quarrels, hemorrhoids, depletion, the recognition that one might not prove to be another T.S. Eliot, but also some good things, some lessons learned, some “rags of time” salvaged and precious.
Here and there you could find petty greed or huckstering, now and again a drop into opportunism; but to make much of this would be foolish. Common clay, the New York writers had their share of common ambition. What drove them, and sometimes drove them crazy, was not, however, the quest for money, nor even a chance to “mix” with White House residents; it was finally, when all the trivia of existence were brushed aside, a gnawing ambition to write something, even three pages, that might live.
The intellectuals should have regarded their entry into the outer world as utterly commonplace, at least if they kept faith with the warning of Stendhal and Balzac that one must always hold a portion of the self forever beyond the world's reach. Few of the New York intellectuals made much money on books and articles. Few reached audiences beyond the little magazines. Few approached any centers of power, and precisely the buzz of gossip attending the one or two sometimes invited to a party beyond the well-surveyed limits of the West Side showed how confined their life still was. What seems most remarkable in retrospect is the innocence behind the assumption, sometimes held by the New York writers themselves with a nervous mixture of guilt and glee, that whatever recognition they won was cause for either preening or embarrassment. For all their gloss of sophistication, they had not really moved very far into the world. The immigrant milk was still on their lips.
In their published work during these years, the New York intellectuals developed a characteristic style of exposition and polemic. With some admiration and a bit of irony, let us call it the style of brilliance. The kind of essay they wrote was likely to be wide-ranging in reference, melding notions about literature and politics, sometimes announcing itself as a study of a writer or literary group but usually taut with a pressure to “go beyond” its subject, toward some encompassing moral or social observation. It is a kind of writing highly self-conscious in mode, with an unashamed vibration of bravura and display. Nervous, strewn with knotty or flashy phrases, impatient with transitions and other concessions to dullness, willfully calling attention to itself as a form or at least an outcry, fond of rapid twists, taking pleasure in dispute, dialectic, dazzle—such, at its best or most noticeable, was the essay cultivated by the New York writers. Until recently its strategy of exposition was likely to be impersonal (the writer did not speak much as an “I”) but its tone and bearing were likely to be intensely personal (the audience was to be made aware that the aim of the piece was not judiciousness but rather a strong impress of attitude, a blow of novelty, a wrenching of accepted opinion, sometimes a mere indulgence of vanity).
In most of these essays there was a sense of tournament, the writer as gymnast with one eye on other rings, or as skilled infighter juggling knives of dialectic. Polemics were harsh, often rude. And audiences nurtured, or spoiled, on this kind of performance, learned not to form settled judgments about a dispute until all sides had registered their blows: surprise was always a possible reward.
This style may have brought new life to the American essay, but in contemporary audiences it often evoked a strong distaste and even fear. “Ordinary” readers could be left with the fretful sense that they were not “in,” the beauties of polemic racing past their sluggish eye. Old-line academics, quite as if they had just crawled out of The Dunciad, enjoyed dismissing the New York critics as “unsound.” And for some younger souls, the cliffs of dialectic seemed too steep. Seymour Krim has left a poignant account of his disablement before “the overcerebral, Europeanish, sterilely citified, pretentiously alienated” New York intellectuals. Resentful at the fate which drove them to compare themselves with “the over-cerebral etc. etc.,” Krim writes that he and his friends “were often tortured and unappeasably bitter about being the offspring of this unhappily unique-ingrown-screwed-up breed.” Similar complaints could be heard from other writers and would-be writers who felt that New York intellectualism threatened their vital powers.
At its best the style of brilliance reflected a certain view of the intellectual life: free-lance dash, peacock strut, daring hypothesis, knockabout synthesis. For better or worse it was radically different from the accepted modes of scholarly publishing and middle-brow journalism. It celebrated the idea of the intellectual as anti-specialist, or as a writer whose specialty was the lack of a specialty: the writer as dilettante-connoisseur, Luftmensch of the mind, roamer among theories. But it was a style which also lent itself with peculiar ease to a stifling mimicry and decadence. Sometimes it seemed—no doubt mistakenly—as if any sophomore, indeed any parrot, could learn to write one of those scintillating Partisan reviews, so thoroughly could manner consume matter. In the 50's the cult of brilliance became a sign that writers were offering not their work or ideas but themselves, the persona as content; and this was but a step or two away from the exhibitionism of the 60's. Brilliance could become a sign of intellect unmoored: the less assurance, the more pyrotechnics. In making this judgment I ought to be frank enough to register the view that serious writers may prove to be brilliant and take pleasure in the proving, but insofar as they are serious, their overriding aim must be absolute lucidity.
If to the minor genre of the essay the New York writers made a major contribution, to the major genres of fiction and poetry they made only a minor contribution. As a literary group and no more than a literary group, they will seem less important than, say, the New Critics, who did set in motion a whole school of poetry. A few poets—Berryman, Lowell, Jarrell, perhaps Kunitz—have been influenced by the New York intellectuals, though in ways hard to specify and hardly comprising a major pressure on their work: all were finished writers by the time they brushed against the New York milieu. For one or two poets, the influence of New York meant becoming aware of the cultural pathos resident in the idea of the Jew (not always distinguished from the idea of Delmore Schwartz). But the main literary contribution of the New York milieu has been to legitimate a subject and tone we must uneasily call American Jewish writing. The fiction of urban malaise, second-generation complaint, Talmudic dazzle, woeful alienation, and dialectical irony, all found its earliest expression in the pages of COMMENTARY and Partisan Review—fiction in which the Jewish world is not merely regained in memory as a point of beginnings, an archetypal Lower East Side of spirit and place, but is also treated as a portentous metaphor of man's homelessness and wandering.
Such distinguished short fictions as Bellow's “Seize the Day,” Schwartz's “In Dreams Begin Responsibilities,” Mailer's “The Man Who Studied Yoga,” and Malamud's “The Magic Barrel” seem likely to survive the cultural moment in which they were written. And even if one concludes that these and similar pieces are not enough to warrant speaking of a major literary group, they certainly form a notable addition—a new tone, a new sensibility—to American writing. In time, these writers may be regarded as the last “regional” group in American literature, parallel to recent Southern writers in both sophistication of craft and a thematic dissociation from the values of American society. Nor is it important that during the last few decades both of these literary tendencies, the Southern and the Jewish, have been overvalued. The distance of but a few years has already made it clear that except for Faulkner Southern writing consists of a scatter of talented minor poets and novelists; and in a decade or so a similar judgment may be commonly accepted about most of the Jewish writers—though in regard to Bellow and Mailer settled opinions are still risky.
What is clear from both Southern and Jewish writing is that in a society increasingly disturbed about its lack of self-definition, the recall of regional and traditional details can be intensely absorbing in its own right, as well as suggestive of larger themes transcending the region. (For the Jewish writers New York was not merely a place, it was a symbol, a burden, a stamp of history.) Yet the writers of neither school have thus far managed to move from their particular milieu to a grasp of the entire culture; the very strengths of their localism define their limitations; and especially is this true for the Jewish writers, in whose behalf critics have recently overreached themselves. The effort to transform a Jewishness without religious or ethnic content into an emblem of universal dismay can easily lapse into sentimentality.
Whatever the hopeful future of individual writers, the “school” of American Jewish writing is by now in an advanced state of decomposition: how else explain the attention it has lately enjoyed? Or the appearance of a generation of younger Jewish writers who, without authentic experience or memory to draw upon, manufacture fantasies about the lives of their grandfathers? Or the popularity of Isaac Bashevis Singer who, coming to the American literary scene precisely at the moment when writers composing in English had begun to exhaust the Jewish subject, could, by dazzling contrast, extend it endlessly backward in time and deeper in historical imagination?
Just as there appear today young Jewish intellectuals who no longer know what it is that as Jews they do not know, so the fading immigrant world offers a thinner and thinner yield to writers of fiction. It no longer presses on memory; people can now choose whether to care about it. We are almost at the end of a historic experience, and it now seems unlikely that there will have arisen in New York a literary school comparable to the best this country has had. Insofar as the New York intellectual atmosphere has affected writers like Schwartz, Rosenfeld, Bellow, Malamud, Mailer, Goodman, and Roth (some of these would hotly deny that it has), it seems to have been too brittle, too contentious, too insecure for major creative work. What cannot yet be estimated is the extent to which the styles and values of the New York world may have left a mark on the work of American writers who never came directly under its influence or have been staunchly hostile to all of its ways.
Thinking back upon intellectual life in the 40's and 50's, and especially the air of malaise that hung over it, I find myself turning to a theme as difficult to clarify as it is impossible to evade. And here, for a few paragraphs, let me drop the porous shield of impersonality and speak openly in the first person.
We were living directly after the Holocaust of the European Jews. We might scorn our origins; we might crush America with discoveries of ardor; we might change our names. But we knew that but for an accident of geography we might also now be bars of soap. At least some of us could not help feeling that in our earlier claims to have shaken off all ethnic distinctiveness there had been something false, something shaming. Our Jewishness might have no clear religious or national content, it might be helpless before the criticism of believers; but Jews we were, like it or not, and liked or not.
To recognize that we were living after one of the greatest and least explicable catastrophes of human history, and one for which we could not claim to have adequately prepared ourselves either as intellectuals or as human beings, brought a new rush of feelings, mostly unarticulated and hidden behind the scrim of consciousness. It brought a low-charged but nagging guilt, a quiet remorse. Sartre's brilliant essay on authentic and inauthentic Jews left a strong mark. Hannah Arendt's book on totalitarianism had an equally strong impact, mostly because it offered a coherent theory, or at least a coherent picture, of the concentration camp universe. We could no longer escape the conviction that, blessing or curse, Jewishness was an integral part of our life, even if—and perhaps just because—there was nothing we could do or say about it. Despite a few simulated seders and literary raids on Hasidism, we could not turn back to the synagogue; we could only express our irritation with “the community” which kept nagging us like disappointed mothers; and sometimes we tried, through imagination and recall, to put together a few bits and pieces of the world of our fathers. I cannot prove a connection between the Holocaust and the turn to Jewish themes in American fiction, at first urgent and quizzical, later fashionable and manipulative. I cannot prove that my own turn to Yiddish literature during the 50's was due to the shock following the war years. But it would be foolish to scant the possibility.
The violent dispute which broke out among the New York intellectuals when Hannah Arendt published her book on Eichmann had as one of its causes a sense of guilt concerning the Jewish tragedy—a guilt pervasive, unmanageable, yet seldom declared at the surface of speech or act. In the quarrel between those attacking and those defending Eichmann in Jerusalem there were polemical excesses on both sides, insofar as both were acting out of unacknowledged passions. Yet even in the debris of this quarrel there was, I think, something good. At least everyone was acknowledging emotions that had long gone unused. Nowhere else in American academic and intellectual life was there such ferocity of concern with the problems raised by Hannah Arendt. If left to the rest of the American intellectual world, her book would have been praised as “stimulating” and “thoughtful,” and then everyone would have gone back to sleep. Nowhere else in the country could there have been the kind of public forum sponsored on this subject by Dissent: a debate sometimes ugly and outrageous, yet also urgent and afire—evidence that in behalf of ideas we were still ready to risk personal relationships. After all, it had never been dignity that we could claim as our strong point.
Nothing about the New York writers is more remarkable than the sheer fact of their survival. In a country where tastes in culture change more rapidly than lengths of skirts, they have succeeded in maintaining a degree of influence, as well as a distinctive milieu, for more than thirty years. Apart from reasons intrinsic to the intellectual life, let me note a few that are somewhat more worldly in nature.
- There is something, perhaps a quasi-religious dynamism, about an ideology, even a lapsed ideology that everyone says has reached its end, which yields force and coherence to those who have closely experienced it. A lapsed Catholic has tactical advantages in his apostasy which a lifelong skeptic does not have. And just as Christianity kept many 19th-century writers going long after they had discarded religion, so Marxism gave bite and edge to the work of 20th-century writers long after they had turned from socialism.
- The years in which the New York writers gained some prominence were those in which the style at which they had arrived—irony, ambiguity, complexity, the problematic as mode of knowledge—took on a magnified appeal for the American educated classes. After the Second World War the cultivation of private sensibility and personal responsibility were values enormously popular among reflective people, to whom the very thought of public life smacked of betrayal and vulgarity.
- An intelligentsia flourishes in a capital: Paris, St. Petersburg, Berlin. The influence of the New York writers grew at the time New York itself, for better or worse, became the cultural center of the country. And thereby, to return to Poggioli's categories, the New York writers slowly shed the characteristics of an intelligentsia and transformed themselves into—an Establishment?
Perhaps. But what precisely is an Establishment? Vaguely sinister in its overtones, the term is used these days with gay abandon on the American campus; but except as a spread-eagle put-down it has no discernible meaning, and if accepted as a put-down, the problem then becomes to discover who, if anyone, is not in the Establishment. In England the term has had a certain clarity of usage, referring to an intellectual elite which derives from the same upper and middle classes as the men who wield political power and which shares with these men Oxbridge education and Bloomsbury culture. But except in F. R. Leavis's angrier tirades, “Establishment” does not bear the conspiratorial overtones we are inclined to credit in this country. What it does in England is to locate the social-cultural stratum guiding the tastes of the classes in power and thereby crucially affecting the tastes of the country as a whole.
In this sense, neither the New York writers nor any other group can be said to comprise an American Establishment, simply because no one in this country has ever commanded an equivalent amount of cultural power. The New York writers have few, if any, connections with a stable class of upper-rank civil servants or with a significant segment of the rich. They are notably without connections in Washington. They do not shape official or dominant tastes. And they cannot exert the kind of control over cultural opinion that the London Establishment is said to have maintained until recently. Critics like Trilling and Kazin are listened to by people in publishing, Rosenberg and Greenberg by people in the art world; but this hardly constitutes anything so formidable as an Establishment. Indeed, at the very time mutterings have been heard about a New York literary Establishment, there has occurred a rapid disintegration of whatever group ties may still have remained among the New York writers. They lack—and it is just as well—the first requirement for an Establishment: that firm sense of internal discipline which enables it to impose its values and tastes on a large public.
During the last few years the talk about a New York Establishment has taken an extremely unpleasant turn. Whoever does a bit of lecturing about the country is likely to encounter, after a few drinks, literary academics who inquire enviously, sometimes spitefully, about “what's new in New York.” Such people seem to feel that exile in outlying regions means they are missing something remarkable (and so they are: the Balanchine company). The cause of their cultural envy is, I think, a notion that has become prevalent in our English departments that scholarship is somehow unworthy and the “real” literary life is to be found in the periodical journalism of New York. Intrinsically this is a dubious notion and, for the future of American education, a disastrous one; when directed against the New York writers it leads to some painful situations. As polite needling questions are asked about the cultural life of New York, a rise of sweat comes to one's brow, for everyone knows what no one says: New York means Jews.
Whatever the duration or extent of the influence enjoyed by the New York intellectuals, it is now reaching an end. There are signs of internal disarray: unhealed wounds, a dispersal of interests, the damage of time. More important, however, is the appearance these last few years of a new and powerful challenge to the New York writers. And here I shall have to go off on what may appear to be a long digression, since one cannot understand the present situation of the New York writers without taking into detailed account the cultural-political scene of America in the 60's.
There is a rising younger generation of intellectuals: ambitious, self-assured, at ease with prosperity while conspicuously alienated, unmarred by the traumas of the totalitarian age, bored with memories of defeat, and attracted to the idea of power. This generation matters, thus far, not so much for its leading figures and their meager accomplishments, but for the political-cultural style—what I shall call the new sensibility—it thrusts into absolute opposition both to the New York writers and to other groups. It claims not to seek penetration into, or accommodation with, our cultural and academic institutions; it fancies the prospect of a harsh generational fight; and given the premise with which it begins—that everything touched by older men reeks of betrayal—its claims and fancies have a sort of propriety. It proposes a revolution, I would call it a counterrevolution, in sensibility. Though linked to New Left politics, it goes beyond any politics, making itself felt, like a spreading blot of anti-intellectualism, in every area of intellectual life. Not yet fully cohered, this new cultural group cannot yet be fully defined, nor is it possible fully to describe its projected sensibility, since it declares itself through a refusal of both coherence and definition.
There is no need to discuss once more the strengths and weaknesses of the New Left, its moral energies and intellectual muddles. Nor need we be concerned with the tactical issues separating New Left politics from that of older left-wing intellectuals. Were nothing else at stake than, say, “coalition politics,” the differences would be both temporary and tolerable. But in reality a deeper divergence of outlook has begun to show itself. The new intellectual style, insofar as it approximates a politics, mixes sentiments of anarchism with apologies for authoritarianism; bubbling hopes for “participatory democracy” with manipulative elitism; unqualified populist majoritarianism with the reign of the cadres.
A confrontation of intellectual outlooks is unavoidable. And a central issue is certain to be the problem of liberalism, not liberalism as one or another version of current politics, nor even as a theory of power, but liberalism as a cast of mind, a structure of norms by means of which to humanize public life. For those of us who have lived through the age of totalitarianism and experienced the debacle of socialism, this conflict over liberal values is extremely painful. We have paid heavily for the lesson that democracy, even “bourgeois democracy,” is a precious human achievement, one that, far from being simply a mode of mass manipulation, has been wrested through decades of struggle by the labor, socialist, and liberal movements. To protect the values of liberal democracy, often against those who call themselves liberals, is an elementary task for the intellectuals as a social group.
Yet what I have just been saying, axiomatic as it may seem, has in the last few years aroused opposition, skepticism, open contempt among professors, students, and intellectuals. On the very crudest, though by no means unpopular, level, we find a vulgarization of an already vulgar Marxism. The notion that we live in a society that can be described as “liberal fascism” (a theoretic contribution from certain SDS leaders) isn't one that serious people can take seriously; but the fact that it is circulated in the academic community signifies a counterrevolution of the mind: a refusal of nuance and observation, a willed return to the kind of political primitivism which used to declare the distinctions of bourgeois rule—democratic, authoritarian, totalitarian—as slight in importance.
For the talk about “liberal fascism” men like Norman Mailer must bear a heavy responsibility, insofar as they have recklessly employed the term “totalitarian” as a descriptive for present-day American society. Having lived through the ghastliness of the Stalinist theory of “social fascism” (the grand-daddy of “liberal fascism”) I cannot suppose any literate person really accepts this kind of nonsense, yet I know that people can find it politically expedient to pretend that they do. It is, in Ernst Nolte's phrase, “a lie which the intellect sees for what it is but which is [felt to be] at one with the deeper motivations of life.”
There are sophisticated equivalents. One of these points to the failings and crises of democracy, concluding that the content of decision has been increasingly separated from the forms of decision-making. Another emphasizes the manipulation of the masses by communication media and declares them brainwashed victims incapable of rational choice and acquiescing in their own subjugation. A third decries the bureaucratic entanglements of the political process and favors some version, usually more sentiment than scheme, of direct plebiscitary rule. With varying intelligence, all point to acknowledged problems of democratic society; and there could be no urgent objection were these criticisms not linked with the premise that the troubles of democracy can be overcome by undercutting or bypassing representative institutions. Thus, it is quite true that the masses are manipulated, but to make that the crux of a political analysis is to lead into the notion that elections are mere “formalities” and majorities mere tokens of the inauthentic; what is needed, instead, is Marcuse's “educational dictatorship” (in which, I hope, at least some of the New York intellectuals would require the most prolonged reeducation). And in a similar vein, all proposals for obligatory or pressured “participation,” apart from violating the democratic right not to participate, have a way of discounting those representative institutions and limitations upon power which can alone provide a degree of safeguard for liberal norms.
Perhaps the most sophisticated and currently popular of anti-democratic notions is that advanced by Herbert Marcuse: his contempt for tolerance on the ground that it is a veil for subjection, a rationale for maintaining the status quo, and his consequent readiness to suppress “regressive” elements of the population lest they impede social “liberation.” About these theories, which succeed in salvaging the worst of Leninism, Henry David Aiken has neatly remarked: “Whether garden-variety liberties can survive the ministrations of such ‘liberating tolerance’ is not a question that greatly interests Marcuse.” Indeed not.
Such theories are no mere academic indulgence or sectarian irrelevance; they have been put to significant use on the American campus as rationalizations for schemes to break up meetings of political opponents and as the justification for imaginary coups d'état by tiny minorities of enraged intellectuals. How depressing that “men of the Left,” themselves so often victims of repression, should attack the values of tolerance and freedom.
These differences concerning liberal norms run very deep and are certain to affect American intellectual life in the coming years; yet they do not quite get to the core of the matter. In the Kulturkampf now emerging there are issues more consequential than the political ones, issues that have to do with basic views concerning the nature of human life.
One of these has been with us for a long time, and trying now to put it into simple language, I feel a measure of uneasiness, as if it were bad form to violate the tradition of antinomianism in which we have all been raised.
What, for “emancipated” people, is the surviving role of moral imperatives, or at least moral recommendations? Do these retain for us a shred of sanctity or at least of coercive value? The question to which I am moving is not, of course, whether the moral life is desirable or men should try to live it; no, the question has to do with the provenance and defining conditions of the moral life. Do moral principles continue to signify insofar as they do not come into conflict with spontaneous impulses, and, more urgently still, can we conceive of moral principles retaining some validity if they do come into conflict with spontaneous impulses? Are we still to give credit to the idea, one of the few meeting-points between traditional Christianity and modern Freudianism, that there occurs and must occur a deep-seated clash between instinct and civilization, nature and nurture, or can we now, with a great sigh of collective relief, dismiss this as still another hangup, perhaps the supreme hangup, of Western civilization?
For more than 150 years there has been a line of Western thought, as also of sentiment in modern literature, which calls into question not one or another moral commandment or regulation, but the very idea of commandment and regulation; which insists that the ethic of control, like the ethic of work, should be regarded as spurious, a token of a centuries-long heritage of repression. Sometimes this view comes to us as a faint residue of Christian heresy, more recently as the blare of Nietzschean prophecy, and in our own day as a psychoanalytic gift.
Now, even those of us raised on the premise of revolt against received values, against the whole system of bourgeois constriction and anti-pleasure, did not—I suppose it had better be said outright—imagine ourselves to be exempt from the irksome necessity of regulation, even if we had managed to escape the reach of the commandments. Neither primitive Christians nor romantic naïfs, we did not suppose that we could entrust ourselves entirely to the beneficence of nature, or the signals of our bodies, as a sufficient guide to conduct. My very use of the word “conduct,” freighted as it is with normative associations, puts the brand of time on what I am saying.
By contrast, the emerging new sensibility rests on a vision of innocence: an innocence through lapse or will or recovery, an innocence through a refusal of our and perhaps any other culture, an innocence not even to be preceded by the withering away of the state, since in this view of things the state could wither away only if men learned so to be at ease with their desires that all need for regulation would fade. This is a vision of life beyond good and evil, not because these experiences or possibilities of experience have been confronted and transcended, but because the categories by which we try to designate them have been dismissed. There is no need to taste the apple: the apple brings health to those who know how to bite it; and look more closely, there is no apple at all, it exists only in your sickened imagination.
The new sensibility posits a theory that might be called the psychology of unobstructed need: men should satisfy those needs which are theirs, organic to their bodies and psyches, and to do this they now must learn to discard or destroy all those obstructions, mostly the result of cultural neurosis, which keep them from satisfying their needs. This does not mean that the moral life is denied; it only means that in the moral economy costs need not be entered as a significant item. In the current vocabulary, it becomes a matter of everyone doing “his own thing,” and once all of us are allowed to do “his own thing,” a prospect of easing harmony unfolds. Sexuality is the ground of being, and vital sexuality the assurance of the moral life.
Whether this outlook is compatible with a high order of culture or a complex civilization I shall not discuss here; Freud thought they were not compatible, though that does not foreclose the matter. More immediately, and on a less exalted plane, one is troubled by the following problem: what if the needs and impulses of human beings clash, as they seem to do, and what if the transfer of energies from sexuality to sociality does not proceed with the anticipated abundance and smoothness? The new sensibility, as displayed in the writings of Norman Brown and Norman Mailer, falls back upon a curious analogue to laissez faire economics, Adam Smith's invisible hand, by means of which innumerable units in conflict with one another achieve a resultant of cooperation. Is there, however, much reason to suppose that this will prove more satisfactory in the economy of moral conduct than it has in the morality of economic relations?
Suppose that, after reading Mailer's “The White Negro,” my “thing” happens to be that, to “dare the unknown” (as Mailer puts it), I want to beat in the brains of an aging candy-store keeper; or after reading LeRoi Jones, I should like to cut up a few Jews, whether or not they keep stores—how is anyone going to argue against the outpouring of my need? Who will declare himself its barrier? Against me, against my ideas it is possible to argue, but how, according to this new dispensation, can anyone argue against my need? Acting through violence I will at least have realized myself, for I will have entered (to quote Mailer) “a new relation with the police” and introduced “a dangerous element” into my life; thereby, too, I will have escaped the cell-block of regulation which keeps me from the free air of self-determination. And if you now object that this very escape may lead to brutality, you reveal yourself as hopelessly linked to imperfection and original sin. For why should anyone truly heeding his nature wish to kill or wound or do anything but love and make love? That certain spokesmen of the new sensibility seem to be boiling over with fantasies of blood, or at least suppose that a verbal indulgence in such fantasies is a therapy for the boredom in their souls, is a problem for dialecticians. And as for skeptics, what have they to offer but evidence from history, that European contamination?
When it is transposed to a cultural setting, this psychology—in earlier times it would have been called a moral psychology—provokes a series of disputes over “complexity” in literature. Certain older critics find much recent writing distasteful and tiresome because it fails to reach or grasp for that complexity which they regard as intrinsic to the human enterprise. More indulgent critics, not always younger, find the same kind of writing forceful, healthy, untangled. At first this seems like a problem in taste, a pardonable difference between those who like their poems and novels knotty and those who like them smooth; but soon it becomes clear that this clash arises from a meeting of incompatible world-outlooks. For if the psychology of unobstructed need is taken as a sufficient guide to life, it all but eliminates any need for complexity—or rather, the need for complexity comes to be seen as a mode of false consciousness, an evasion of true feelings, a psychic bureaucratism in which to trap the pure and the strong. If good sex signifies good feeling; good feeling, good being; good being, good action; and good action, a healthy polity, then we have come the long way round, past the Reichian way or the Lawrentian way, to an Emersonian romanticism minus Emerson's complicatedness of vision. The world snaps back into a system of burgeoning potentialities, waiting for free spirits to attach themselves to the richness of natural object and symbol—except that now the orgasmic blackout is to replace the Oversoul as the current through which pure transcendent energies will flow.
We are confronting, then, a new phase in our culture, which in motive and spring represents a wish to shake off the bleeding heritage of modernism and reinstate one of those periods of the collective naïf which seem endemic to American experience. The new sensibility is impatient with ideas. It is impatient with literary structures of complexity and coherence, only yesterday the catchwords of our criticism. It wants instead works of literature—though literature may be the wrong word—that will be as absolute as the sun, as unarguable as orgasm, and as delicious as a lollipop. It schemes to throw off the weight of nuance and ambiguity, legacies of high consciousness and tired blood. It is weary of the habit of reflection, the making of distinctions, the squareness of dialectic, the tarnished gold of inherited wisdom. It cares nothing for the haunted memories of old Jews. It has no taste for the ethical nail-biting of those writers of the Left who suffered defeat and could never again accept the narcotic of certainty. It is sick of those magnifications of irony that Mann gave us, sick of those visions of entrapment to which Kafka led us, sick of those shufflings of daily horror and grace that Joyce left us. It breathes contempt for rationality, impatience with mind, and a hostility to the artifices and decorums of high culture. It despises liberal values, liberal cautions, liberal virtues. It is bored with the past: for the past is a fink.
Where Marx and Freud were diggers of intellect, mining deeper and deeper into society and the psyche, and forever determined to strengthen the dominion of reason, today the favored direction of search is not inward but sideways, an “expansion of consciousness” through the kick of drugs. The new sensibility is drawn to images of sickness, but not, as with the modernist masters, out of dialectical canniness or religious blasphemy; it takes their denials literally and does not even know the complex desperations that led them to deny. It seeks to charge itself into dazzling sentience through chemicals and the rhetoric of violence. It gropes for sensations: the innocence of blue, the ejaculations of red. It ordains life's simplicity. It chooses surfaces as against relationships, the skim of texture rather than the weaving of pattern. Haunted by boredom, it transforms art into a sequence of shocks which, steadily magnified, yield fewer and fewer thrills, so that simply to maintain a modest frisson requires mounting exertions. It proposes an art as disposable as a paper dress, to which one need give nothing but a flicker of notice. Especially in the theater it resurrects tattered heresies, trying to collapse aesthetic distance in behalf of touch and frenzy. (But if illusion is now worn out, what remains but staging the realities of rape, fellatio, and murder?) Cutting itself off from a knowledge of what happened before the moment of its birth, it repeats with a delighted innocence most of what did in fact happen: expressionist drama reduced to skit, agit-prop tumbled to farce, Melvillean anguish slackened into black humor. It devalues the word, which is soaked up with too much past history, and favors monochromatic cartoons, companionate grunts, and glimpses of the ineffable in popular ditties. It has some humor, but not much wit. Of the tragic it knows next to nothing. 
Where Dostoevsky made nihilism seem sinister by painting it in jolly colors, the new American sensibility does something no other culture could have aspired to: it makes nihilism seem casual, good-natured, even innocent. No longer burdened by the idea of the problematic, it arms itself with the paraphernalia of post-industrial technique and crash-dives into a Typee of neo-primitivism.
Its high priests are Norman Brown, Herbert Marcuse, and Marshall McLuhan,12 all writers with a deeply conservative bias: all committed to a stasis of the given: the stasis of unmoving instinct, the stasis of unmovable society, the stasis of endlessly moving technology. Classics of the latest thing, these three figures lend the new sensibility an aura of profundity. Their prestige can be employed to suggest an organic link between cultural modernism and the new sensibility, though in reality their relation to modernism is no more than biographical.
Perhaps because it is new, some of the new style has its charms—mainly along the margins of social life, in dress, music, and slang. In that it captures the yearnings of a younger generation, the new style has more than charm: a vibration of moral desire, a desire for goodness of heart. Still, we had better not deceive ourselves. Some of those shiny-cheeked darlings adorned with flowers and tokens of love can also be campus enragés screaming “Up Against the Wall, Motherfuckers, This Is a Stickup” (a slogan that does not strike one as a notable improvement over “Workers of the World, Unite”).
That finally there should appear an impulse to shake off the burdens and entanglements of modernism need come as no surprise. After all the virtuosos of torment and enigma we have known, it would be fine to have a period in Western culture devoted to relaxed pleasures and surface hedonism. But so far this does not seem possible: the century forbids it. What strikes one most forcefully about a great deal of the new writing and theater is its grindingly ideological tone, even if now the claim is for an ideology of pleasure. And what strikes one even more is the air of pulsing ressentiment which pervades this work, an often unearned and seemingly inexplicable hostility. If one went by the cues of a critic like Susan Sontag, one might suppose that the ethical torments of Kamenetz-Podolsk and the moral repressiveness of Salem, Massachusetts, had finally been put to rest, in favor of creamy delights in texture, color, and sensation. But nothing of the sort is true, at least not yet; it is only advertised.
Keen on tactics, the spokesmen for the new sensibility proclaim it to be still another turn in the endless gyrations of modernism, still another revolt in the permanent revolution of 20th-century sensibility. This approach is very shrewd, since it can disarm in advance those older New York (and other) critics who still respond with enthusiasm to the battlecries of modernism. But several objections or qualifications need to be registered:
- Modernism, by its very nature, is uncompromisingly a minority culture, creating and defining itself through opposition to a dominant culture. Today, however, nothing of the sort is true. Floodlights glaring and tills overflowing, the new sensibility is a success from the very start. The middle-class public, eager for thrills and humiliations, welcomes it; so do the mass media, always on the alert for exploitable sensations; and naturally there appear intellectuals with handy theories. The new sensibility is both embodied and celebrated in the actions of Norman Mailer, whose condition as a swinger in America is not quite comparable with that of Joyce in Trieste or Kafka in Prague or Lawrence anywhere; it is reinforced with critical exegesis by Susan Sontag, a publicist able to make brilliant quilts from grandmother's patches; it is housed and braced by Robert Brustein, who has been writing drama reviews as if thumbing one's nose on the stage were a sufficient act of social criticism.13 And on a far lower level, it has even found its Smerdyakov in LeRoi Jones, that parodist of apocalypse who rallies Jewish audiences with calls for Jewish blood. Whatever one may think of this situation, it is surely very different from the classical picture of a besieged modernism.
- By now the search for the “new,” often reduced to a trivializing of form and matter, has become the predictable old. To suppose that we keep moving from cultural breakthrough to breakthrough requires a collective wish to forget what happened yesterday and even the day before: ignorance always being a great spur to claims for originality. Alienation has been transformed from a serious and revolutionary concept into a motif of mass culture, and the content of modernism into the decor of kitsch. As Harold Rosenberg has pungently remarked:
The sentiment of the diminution of personality is an historical hypothesis upon which writers have constructed a set of literary conventions by this time richly equipped with theatrical machinery and symbolic allusions. . . . The individual's emptiness and inability to act have become an irrefrangible cliché, untiringly supported by an immense, voluntary phalanx of latecomers to modernism. In this manifestation, the notion of the void has lost its critical edge and is thoroughly reactionary.
- The effort to assimilate new cultural styles to the modernist tradition brushes aside problems of value, quality, judgment. It rests upon a philistine version of the theory of progress in the arts: all must keep changing, and change signifies a realization of progress. Yet even if an illicit filiation can be shown, there is a vast difference in seriousness and accomplishment between the modernism of some decades ago and what we have now. The great literary modernists (to cite but one instance) put at the center of their work a confrontation and struggle with the demons of nihilism; the literary swingers of the 60's, facing a nihilist violation, cheerfully remove the threat by what Fielding once called “a timely compliance.” Just as in the verse of Swinburne echoes of Romanticism sag through the stanzas, so in much current writing there is indeed a continuity with modernism, but a continuity of grotesque and parody, through the doubles of fashion.
Still, it would be foolish to deny that in the Kulturkampf of the 60's, the New York intellectuals are at a severe disadvantage. Some have simply gone over to the other camp. A critic like Susan Sontag employs the dialectical skills and accumulated knowledge of intellectual life in order to bless the new sensibility as a dispensation of pleasure, beyond the grubby reach of interpretation and thereby, it would seem, beyond the tight voice of judgment. That her theories are skillfully rebuilt versions of aesthetic notions long familiar and discarded; that in her own critical writing she interprets like mad and casts an image anything but hedonistic, relaxed, or sensuous—none of this need bother her admirers, for a highly literate spokesman is very sustaining to those who have discarded or not acquired intellectual literacy. Second only to Miss Sontag in trumpeting the new sensibility is Leslie Fiedler, a critic with an amiable weakness for thrusting himself at the head of parades marching into sight.14 But for those New York (or any other) writers not quite enchanted with the current scene there are serious difficulties.
They cannot be quite sure. Having fought in the later battles for modernism, they must acknowledge to themselves the possibility that, now grown older, they have lost their capacity to appreciate innovation. Why, they ask themselves with some irony, should “their” cultural revolution have been the last one, or the last good one? From the publicists of the new sensibility they hear the very slogans, catchwords, and stirring appeals which a few decades ago they were hurling in behalf of modernism and against such diehards as Van Wyck Brooks and Bernard DeVoto. And given the notorious difficulties in making judgments about contemporary works of art, how can they be certain that Kafka is a master of despair and Burroughs a symptom of disintegration, Pollock a pioneer of innovation and Warhol a triviality of pop? The capacity for self-doubt, the habit of self-irony, which is the reward of decades of experience, renders them susceptible to the simplistic cries of the new.
Well, the answer is that there can be no certainty: we should neither want nor need it. One must speak out of one's taste and conviction, and let history make whatever judgments it will care to. But this is not an easy stand to take, for it means that after all these years one may have to face intellectual isolation and perhaps dismissal, and there are moments when it must seem as if the best course is to be promiscuously “receptive,” swinging along with a grin of resignation.
In the face of this challenge, surely the most serious of the last twenty-five years, the New York intellectuals have not been able to mount a coherent response, certainly not a judgment sufficiently inclusive and severe. There have been a few efforts, some intellectual polemics by Lionel Abel and literary pieces by Philip Rahv; but no more. Yet if ever there was a moment when our culture needed an austere and sharp criticism—the one talent the New York writers supposedly find it death to hide—it is today. One could imagine a journal with the standards, if hopefully not the parochialism, of Scrutiny. One could imagine a journal like Partisan Review stripping the pretensions of the current scene with the vigor it showed in opposing the Popular Front and neo-conservative cultures. But these are fantasies. In its often accomplished pages Partisan Review betrays a hopeless clash between its editors' capacity to apply serious standards and their yearnings to embrace the moment. Predictably, the result satisfies no one.
One example of the failure of the New York writers to engage in criticism is their relation to Norman Mailer. He is not an easy man to come to grips with, for he is “our genius,” probably the only one, and in more than a merely personal way he is a man of enormous charm. Yet Mailer has been the central and certainly most dramatic presence in the new sensibility, even if in reflective moments he makes clear his ability to brush aside its incantations.15 Mailer as thaumaturgist of orgasm; as metaphysician of the gut; as psychic herb-doctor; as advance man for literary violence;16 as dialectician of unreason; and above all, as a novelist who has laid waste his own formidable talent—these masks of brilliant, nutty restlessness, these papery dikes against squalls of boredom—all require sharp analysis and criticism. Were Mailer to read these lines he would surely grin and exclaim that, whatever else, his books have suffered plenty of denunciation. My point, however, is not that he has failed to receive adverse reviews, including some from such New York critics as Norman Podhoretz, Elizabeth Hardwick, and Philip Rahv; perhaps he has even had too many adverse reviews, given the scope and brightness of his talent. My point is that the New York writers have failed to confront Mailer seriously as an intellectual spokesman, a cultural agent, and instead have found it easier to regard him as a hostage to the temper of our times. What has not been forthcoming is a recognition, surely a painful one, that in his major public roles he has come to represent values in deep opposition to liberal humaneness and rational discourse. That the New York critics have refused him this confrontation is both a disservice to Mailer and a sign that, whatever it may once have been, the New York intellectual community no longer exists as a significant force.
An equally telling sign is the recent growth in popularity and influence of the New York Review of Books. Emerging at least in part from the New York intellectual milieu, this journal has steadily moved away from the styles and premises with which it began. Its early dependence on those New York writers who lent their names to it and helped establish it seems all but over. The Jewish imprint has been blotted out; the New York Review, for all its sharp attacks on current political policies, is thoroughly at home in the worlds of American culture, publishing, and society. It features a strong Anglophile slant in its literary pieces, perhaps in accord with the New Statesman formula of blending leftish (and at one time, fellow-traveling) politics with Bloomsbury culture, Kingsley Martin with tips on wine. More precisely, what the New York Review has managed to achieve—I find it quite fascinating as a portent of things to come—is a link between campus “leftism” and East Side stylishness, the worlds of Tom Hayden and George Plimpton. Opposition to Communist politics and ideology is frequently presented in the pages of the New York Review as if it were an obsolete, indeed a pathetic, hangover from a discredited past or worse yet, a dark sign of the CIA. A snappish and crude anti-Americanism has swept over much of its political writing—and to avoid misunderstanding, let me say that by this I do not mean anything so necessary as attacks on the ghastly Vietnam war or on our failures in the cities. And in the hands of writers like Andrew Kopkind (author of the immortal phrase, “morality . . . starts at the barrel of a gun”), liberal values and norms are treated with something very close to contempt.
Though itself too sophisticated to indulge in the more preposterous New Left notions, such as “liberal fascism” and “confrontationism,” the New York Review has done the New Left the considerable service of providing it with a link of intellectual respectability to the academic world. In the materials it has published by Kopkind, Tom Hayden, Philip Rahv, Edgar Z. Friedenberg, Jason Epstein, and others, one finds not an acceptance of the fashionable talk about “revolution” which has become an indoor and outdoor sport on the American campus, but a kind of rhetorical violence, a verbal “radicalism,” which gives moral and intellectual encouragement to precisely such fashionable (self-defeating) talk.
This is by no means the only kind of political material to have appeared in the New York Review; at least in my own experience I have found its editors prepared to print articles of a sharply different kind; and in recent years it has published serious political criticism by George Lichtheim, Theodore Draper, and Walter Laqueur.
And because it is concerned with maintaining a certain level of sophistication and accomplishment, the New York Review has not simply taken over the new sensibility. No, at stake here is the dominant tone of this skillfully edited paper, an editorial keenness in responding to the current academic and intellectual temper—as for instance in that memorable issue with a cover featuring, no doubt for the benefit of its university readers, a diagram explaining how to make a Molotov cocktail. The genius of the New York Review, and it has been a genius of sorts, is not, in either politics or culture, for swimming against the stream.
Perhaps it is all too late. Perhaps there is no longer available among the New York writers enough energy and coherence to make possible a sustained confrontation with the new sensibility. Still, one would imagine that their undimmed sense of the Zeitgeist would prod them to sharp responses, precise statements, polemical assaults. What, after all, would be risked in saying that we have entered a period of overwhelming cultural sleaziness?
Having been formed by, and through opposition to, the New York intellectual experience, I cannot look with joy at the prospect of its ending. But neither with dismay. Such breakups are inevitable, and out of them come new voices and energies. Yet, precisely at this moment of dispersion, might not some of the New York writers achieve renewed strength if they were to struggle once again for whatever has been salvaged from these last few decades? For the values of liberalism, for the politics of a democratic radicalism, for the norms of rationality and intelligence, for the standards of literary seriousness, for the life of the mind as a humane dedication—for all this it should again be worth finding themselves in a minority, even a beleaguered minority, and not with fantasies of martyrdom but with a quiet recognition that for the intellectual this is likely to be his usual condition.
1 Is it “they” or “we”? To speak of the New York intellectuals as “they” might seem coy or disloyal; to speak of “we” self-assertive or cozy. Well, let it be “they,” with the proviso that I do not thereby wish, even if I could, to exempt myself from judgment.
2 In placing this emphasis on the Jewish origins of the New York intellectuals, I am guilty of a certain—perhaps unavoidable—compression of the realities. Were I writing a book rather than an essay, I would have to describe in some detail the relationship between the intellectuals who came on the scene in the 30's and those of earlier periods. There were significant ties between Partisan Review and The Dial, Politics and the Masses. But I choose here to bypass this historical connection because I wish to stress what has been distinctive and perhaps unique.
A similar qualification has to be made concerning those intellectuals who have been associated with this milieu but have not been Jewish. I am working on the premise that in background and style there was something decidedly Jewish about the intellectuals who began to cohere as a group around Partisan Review in the late 30's—and one of the things that was “decidedly Jewish” was that most were of Jewish birth! Perhaps it ought to be said, then, that my use of the phrase “New York intellectuals” is simply a designation of convenience. I don't mean to suggest that there have been or will be no other intellectuals in New York. I am using the phrase as a shorthand for what might awkwardly be spelled out as “the intellectuals of New York who began to appear in the 30's, most of whom were Jewish.”
3 In a lengthy essay printed in this journal, “The Culture of Modernism,” November 1967, I have tried to suggest what this term can signify.
4 In 1948 Ezra Pound, who had spent the war years as a propagandist for Mussolini and whose writings contained strongly anti-Semitic passages, was awarded the prestigious Bollingen Prize. The committee voting for this award contained a number of ranking American poets. After the award was announced, there occurred in the pages of Partisan Review, COMMENTARY, and other journals a harsh dispute as to its appropriateness.
5 Some recent historians, under New Left inspiration, have argued that in countries like France and Italy the possibility of a Communist seizure of power was really quite small. Perhaps; counter-factuals are hard to dispose of. What matters is the political consequences these historians would retrospectively have us draw, if they were at all specific on this point. Was it erroneous, or reactionary, to believe that resistance had to be created in Europe against further Communist expansion? What attitude, for example, would they have had intellectuals, or anyone else, take during the Berlin crisis? Should the city, in the name of peace, have been yielded to the East Germans? Did the possibility of Communist victories in Western Europe require an extraordinary politics? And to what extent are present reconsiderations of Communist power in postwar Europe made possible by the fact that it was, in fact, successfully contained?
6 Fifteen years later, again swept along by the Zeitgeist, Miss McCarthy would write that the Communist societies, because of their concentration of ownership, made economic planning more feasible than did capitalist societies. She is perhaps the last intellectual in the world who seems not to have heard about the disasters of “planning” in totalitarian society (e.g., recent reports from Czechoslovakia).
7 One such attack was an essay by myself, “This Age of Conformity,” Partisan Review, 1954. Looking at it again I believe that, apart from some gratuitous polemical sentences, its main thrust still holds. No close, let alone sympathetic, analysis can be found in this essay as to why intellectuals now felt themselves so much more at home in capitalist society than they had in the 30's or why they felt themselves driven to an intransigent anti-Communism. I wrote as a polemicist, not as a historian or a sociologist of knowledge; and if that limited the scope it did not, I think, blunt the point of my attack.
8 It is not clear whether Macdonald still adheres to The Root Is Man. In a recent BBC broadcast he said about the student uprising at Columbia: “I don't approve of their methods, but Columbia will be a better place afterwards.” Perhaps it will, perhaps it won't; but I don't see how the author of The Root Is Man could say this, since the one thing he kept insisting was that means could not be separated from ends, as the Marxists too readily separated them. He would surely have felt that if the means used by the students were objectionable, then their ends would be contaminated as well—and thereby the consequences of their action. But in the swinging 60's not many people trouble to remember their own lessons.
9 The most lasting contribution this school of thought seems to have made to America is an adjective, as in “existential crisis,” which communicates the sensation of depth without the burden of content.
10 Not quite no one. In an attack on the New York writers (Hudson Review, Autumn 1965) Richard Kostelanetz speaks about “Jewish group-aggrandizement” and “the Jewish American push.” One appreciates the delicacy of his phrasing.
11 That Marcuse chooses not to apply his theories to the area of society in which he himself functions is a tribute to his personal realism, or perhaps merely a sign of a lack of intellectual seriousness. In a recent public discussion, recorded by the New York Times Magazine (May 26, 1968), there occurred the following exchange:
Hentoff: We've been talking about new institutions, new structures, as the only way to get fundamental change. What would that mean to you, Mr. Marcuse, in terms of the university, in terms of Columbia?
Marcuse: I was afraid of that because I now finally reveal myself as a fink. I have never suggested or advocated or supported destroying the established universities and building new anti-institutions instead. I have always said that no matter how radical the demands of the students and no matter how justified, they should be pressed within the existing universities. . . . I believe—and this is where the finkdom comes in—that American universities, at least quite a few of them, today are still enclaves of relatively critical thought and relatively free thought.
12 John Simon has some cogent things to say about Brown and McLuhan, the pop poppas of the new: “. . . like McLuhan, Brown fulfills the four requirements for our prophets: (1) to span and reconcile, however grotesquely, various disciplines to the relief of a multitude of specialists; (2) to affirm something, even if it is something negative, retrogressive, mad; (3) to justify something vulgar or sick or indefensible in us, whether it be television-addiction (McLuhan) or schizophrenia (Brown); (4) to abolish the need for discrimination, difficult choices, balancing mind and appetite, and so reduce the complex orchestration of life to the easy strumming of a monochord. Brown and McLuhan have nicely apportioned the world between them: the inward madness for the one, the outward manias for the other.”
13 Reviewing a theatrical grope-in called Dionysus in 69 (New Republic, August 10, 1968), Brustein pulls back a little from his enthusiasm for the swinging new. He remarks that in Dionysus “the pelvis becomes the actor's primary organ of expression” and that “only about a third of Euripides's play” The Bacchae is used in this “adaptation.” (But why even a third? Who needs words at all?) And then says Brustein: “The off-off-Broadway movement which began so promisingly with America Hurrah, MacBird, and the experimental probes of the Open Theatre, is now cultivating its worst faults, developing an anarchic Philistinism which virtually throws the writer out of the theater.”
But if what the Dean of the Yale Drama School counter-poses to Dionysus in 69 is America Hurrah and MacBird—noisy, coarse, derivative, and third-rate—how can he possibly bring to bear serious critical standards?
14 Fiedler's essay “The New Mutants” (Partisan Review, Fall 1965) is a sympathetic charting of the new sensibility, with discussions of “porno-esthetics,” the effort among young people to abandon habits and symbols of masculinity in favor of a feminized receptiveness, “the aspiration to take the final evolutionary leap and cast off adulthood completely,” and above all, the role of drugs as “the crux of the futurist revolt.”
With uncharacteristic forbearance, Fiedler denies himself any sustained or explicit judgments of this “futurist revolt,” so that the rhetorical thrust of his essay is somewhere between acclaim and resignation. He cannot completely suppress his mind, perhaps because he has been using it too long, and so we find this acute passage concerning the responses of older writers to “the most obscene forays of the young”:
. . . after a while, there will be no more Philip Rahvs and Stanley Edgar Hymans left to shock—anti-language becoming mere language with repeated use and in the face of acceptance; so that all sense of exhilaration will be lost along with the possibility of offense. What to do then except to choose silence, since raising the ante of violence is ultimately self-defeating; and the way of obscenity in any case leads as naturally to silence as to further excess?
About drugs Fiedler betrays no equivalent skepticism, so that it is hard to disagree with Lionel Abel's judgment that, “while I do not want to charge Mr. Fiedler with recommending the taking of drugs, I think his whole essay is a confession that he cannot call upon one value in whose name he could oppose it.”
15 Two examples:
“Tom Hayden began to discuss revolution with Mailer. ‘I'm for Kennedy,’ said Mailer, ‘because I'm not so sure I want a revolution. Some of those kids are awfully dumb.’ Hayden the Revolutionary said a vote for George Wallace would further his objective more than a vote for RFK.” (Village Voice, May 30, 1968—and by the way, some Revolutionary!)
“If he still took a toke of marijuana from time to time for Auld Lang Syne, or in recognition of the probability that good sex had to be awfully good before it was better than on pot, yet, still!—Mailer was not in approval of any drug, he was virtually conservative about it, having demanded of his 18-year-old daughter . . . that she not take marijuana, and never LSD, until she completed her education, a mean promise to extract in these apocalyptic times.” (The Armies of the Night).
16 In this regard the editor of Dissent bears a heavy responsibility. When he first received the manuscript of “The White Negro,” he should have expressed in print his objections to the passage in which Mailer discusses the morality of beating up a fifty-year-old storekeeper. That he could not bring himself to risk losing a scoop is no excuse whatever.
The New York Intellectuals: A Chronicle & A Critique
It can be said that the Book of Samuel launched the American Revolution. Though antagonistic to traditional faith, Thomas Paine understood that it was neither Montesquieu nor Locke who was inscribed on the hearts of his fellow Americans. Paine’s pamphlet Common Sense is a biblical argument against British monarchy, drawing largely on the text of Samuel.
Today, of course, universal biblical literacy no longer exists in America, and sophisticated arguments from Scripture are all too rare. It is therefore all the more distressing when public intellectuals, academics, or religious leaders engage in clumsy acts of exegesis and political argumentation by comparing characters in the Book of Samuel to modern political leaders. The most common victim of this tendency has been the central character in the Book of Samuel: King David.
Most recently, this tendency was made manifest in the writings of Dennis Prager. In a recent defense of his own praise of President Trump, Prager wrote that “as a religious Jew, I learned from the Bible that God himself chose morally compromised individuals to accomplish some greater good. Think of King David, who had a man killed in order to cover up the adultery he committed with the man’s wife.” Prager similarly argued that those who refuse to vote for a politician whose positions are correct but whose personal life is immoral “must think God was pretty flawed in voting for King David.”
Prager’s invocation of King David was presaged on the left two decades ago. The records of the Clinton Presidential Library reveal that at the height of the Lewinsky scandal, an email from Dartmouth professor Susannah Heschel made its way into the inbox of an administration policy adviser with a similar comparison: “From the perspective of Jewish history, we have to ask how Jews can condemn President Clinton’s behavior as immoral, when we exalt King David? King David had Batsheva’s husband, Uriah, murdered. While David was condemned and punished, he was never thrown off the throne of Israel. On the contrary, he is exalted in our Jewish memory as the unifier of Israel.”
One can make the case for supporting politicians who have significant moral flaws. Indeed, America’s political system is founded on an awareness of the profound tendency to sinfulness not only of its citizens but also of its statesmen. “If men were angels, no government would be necessary,” James Madison informs us in the Federalist. At the same time, anyone who compares King David to the flawed leaders of our own age reveals a profound misunderstanding of the essential nature of David’s greatness. David was not chosen by God despite his moral failings; rather, David’s failings are the lens that reveals his true greatness. It is in the wake of his sins that David emerges as the paradigmatic penitent, whose quest for atonement is utterly unlike that of any other character in the Bible, and perhaps in the history of the world.
While the precise nature of David’s sins is debated in the Talmud, there is no question that they are profound. Yet compare David with other faltering figures—in the Bible or in our own day—and the comparison falls flat. This point is stressed by the very Jewish tradition in whose name Prager claimed to speak.
It is the rabbis who note that David’s predecessor, Saul, lost the kingship when he failed to fulfill God’s command to destroy the egregiously evil nation of Amalek, whereas David commits more severe sins and yet remains king. The answer, the rabbis suggest, lies not in the sin itself but in the response. Saul, when confronted by the prophet Samuel, offers obfuscations and defensiveness. David, meanwhile, is similarly confronted by the prophet Nathan: “Thou hast killed Uriah the Hittite with the sword, and hast taken his wife to be thy wife, and hast slain him with the sword of the children of Ammon.” David’s immediate response is clear and complete contrition: “I have sinned against the Lord.” David’s penitence, Jewish tradition suggests, sets him apart from Saul. Soon after, David gave voice to what was in his heart at the moment, and gave the world one of the most stirring of the Psalms:
Have mercy upon me, O God, according to thy lovingkindness: according unto the multitude of thy tender mercies blot out my transgressions.
Wash me thoroughly from mine iniquity, and cleanse me from my sin. For I acknowledge my transgressions: and my sin is ever before me.
. . . Deliver me from bloodguiltiness, O God, thou God of my salvation: and my tongue shall sing aloud of thy righteousness.
O Lord, open thou my lips; and my mouth shall shew forth thy praise.
For thou desirest not sacrifice; else would I give it: thou delightest not in burnt offering.
The sacrifices of God are a broken spirit: a broken and a contrite heart, O God, thou wilt not despise.
The tendency to link David to our current age lies in the fact that we know more about David than any other biblical figure. The author Thomas Cahill has noted that in a certain literary sense, David is the only biblical figure that is like us at all. Prior to the humanist autobiographies of the Renaissance, he notes, “we can count only a few isolated instances of this use of ‘I’ to mean the interior self. But David’s psalms are full of I’s.” In David’s Psalms, Cahill writes, we “find a unique early roadmap to the inner spirit—previously mute—of ancient humanity.”
At the same time, a study of the Book of Samuel and of the Psalms reveals how utterly incomparable David is to anyone alive today. Haym Soloveitchik has noted that even the most observant of Jews today fail to feel the constant intimacy with God that the simplest Jew of the premodern age might have felt, that “while there are always those whose spirituality is one apart from that of their time, nevertheless I think it safe to say that the perception of God as a daily, natural force is no longer present to a significant degree in any sector of modern Jewry, even the most religious.” Yet for David, such intimacy with the divine was central to his existence, and the Book of Samuel and the Psalms are an eternal testament to this fact. This is why simple comparisons between David and ourselves, as tempting as they are, must be resisted. David Wolpe, in his book about David, attempts to make the case as to why King David’s life speaks to us today: “So versatile and enduring is David in our culture that rare is the week that passes without some public allusion to his life. . . . We need to understand David better because we use his life to comprehend our own.”
The truth may be the opposite. We need to understand David better because we can use his life to comprehend what we are missing, and how utterly unlike his own our lives are. For even the most religious among us have lost the profound faith and intimacy with God that David had. It is therefore incorrect to assume that because of David’s flaws it would have been, as Amos Oz has written, “fitting for him to reign in Tel Aviv.” The modern State of Israel was blessed with brilliant leaders, but to which of its modern warriors or statesmen should David be compared? To Ben Gurion, who stripped any explicit invocation of the Divine from Israel’s Declaration of Independence? To Moshe Dayan, who oversaw the reconquest of Jerusalem, and then immediately handed back the Temple Mount, the locus of King David’s dreams and desires, to the administration of the enemies of Israel? David’s complex humanity inspires comparison to modern figures, but his faith, contrition, and repentance—which lie at the heart of his story and success—defy any such engagement.
And so, to those who seek comparisons to modern leaders from the Bible, the best rule may be: Leave King David out of it.
Three attacks in Britain highlight the West’s inability to see the threat clearly
This lack of seriousness manifests itself in several ways. It’s perhaps most obvious in the failure to reform Britain’s chaotic immigration and dysfunctional asylum systems. But it’s also abundantly clear from the grotesque underfunding and under-resourcing of domestic intelligence. In MI5, Britain has an internal security service that is simply too small to do its job effectively, even if it were not handicapped by an institutional culture that can seem willfully blind to the ideological roots of the current terrorism problem.
In 2009, Jonathan Evans, then head of MI5, confessed at a parliamentary hearing about the London bus and subway attacks of 2005 that his organization only had sufficient resources to “hit the crocodiles close to the boat.” It was an extraordinary metaphor to use, not least because of the impression of relative impotence that it conveys. MI5 had by then doubled in size since 2001, but it still boasted a staff of only 3,500. Today it’s said to employ between 4,000 and 5,000, an astonishingly, even laughably, small number given a UK population of 65 million and the scale of the security challenges Britain now faces. (To be fair, the major British police forces all have intelligence units devoted to terrorism, and the UK government’s overall counterterrorism strategy involves a great many people, including social workers and schoolteachers.)
You can also see that unseriousness at work in the abject failure to coerce Britain’s often remarkably sedentary police officers out of their cars and stations and back onto the streets. Most of Britain’s big-city police forces have adopted a reactive model of policing (consciously rejecting both the New York Compstat model and British “bobby on the beat” traditions) that cripples intelligence-gathering and frustrates good community relations.
If that weren’t bad enough, Britain’s judiciary is led by jurists who came of age in the 1960s, and who have been inclined since 2001 to treat terrorism as an ordinary criminal problem being exploited by malign officials and politicians to make assaults on individual rights and to take part in “illegal” foreign wars. It has long been almost impossible to extradite ISIS or al-Qaeda–linked Islamists from the UK. This is partly because today’s English judges believe that few if any foreign countries—apart from perhaps Sweden and Norway—are likely to give terrorist suspects a fair trial, or able to guarantee that such suspects will be spared torture and abuse.
We have a progressive metropolitan media elite whose primary, reflexive response to every terrorist attack, even before the blood on the pavement is dry, is to express worry about an imminent violent anti-Muslim “backlash” on the part of a presumptively bigoted and ignorant indigenous working class. Never mind that no such “backlash” has yet occurred, not even when the young off-duty soldier Lee Rigby was hacked to death in broad daylight on a South London street in 2013.
Another sign of this lack of seriousness is the choice by successive British governments to deal with the problem of internal terrorism with marketing and “branding.” You can see this in the catchy consultant-created acronyms and pseudo-strategies that are deployed in place of considered thought and action. After every atrocity, the prime minister calls a meeting of the COBRA unit—an acronym that merely stands for Cabinet Office Briefing Room A but sounds like a secret organization of government superheroes. The government’s counterterrorism strategy is called CONTEST, which has four “work streams”: “Prevent,” “Pursue,” “Protect,” and “Prepare.”
Perhaps the ultimate sign of unseriousness is the fact that police, politicians, and government officials have all displayed more fear of being seen as “Islamophobic” than of any carnage that actual terror attacks might cause. Few are aware that this short-term, cowardly, and trivial tendency may ultimately foment genuine, dangerous popular Islamophobia, especially if attacks continue.
Recently, three murderous Islamist terror attacks in the UK took place in less than a month. The first and third were relatively primitive improvised attacks using vehicles and/or knives. The second was a suicide bombing that probably required relatively sophisticated planning, technological know-how, and the assistance of a terrorist infrastructure. As they were the first such attacks in the UK, the vehicle and knife killings came as a particular shock to the British press, public, and political class, despite the fact that non-explosive and non-firearm terror attacks have become common in Europe and are almost routine in Israel.
The success of all three plots indicates troubling problems in British law-enforcement practice and culture, quite apart from any other failings on the part of the state agencies in charge of intelligence, border control, and the prevention of radicalization. At the time of writing, the British media have been full of encomia to police courage and skill, not least because it took “only” eight minutes for an armed Metropolitan Police team to respond to and confront the bloody mayhem being wrought by the three Islamist terrorists (who had ploughed their rented van into people on London Bridge before jumping out to attack passersby with knives). But the difficult truth is that all three attacks would be much harder to pull off in Manhattan, not just because all NYPD cops are armed, but also because there are always police officers visibly on patrol at the New York equivalents of London’s Borough Market on a Saturday night. By contrast, London’s Metropolitan Police is a largely vehicle-borne, reactive force; rather than use a physical presence to deter crime and terrorism, it chooses to monitor closed-circuit street cameras and social-media postings.
Since the attacks in London and Manchester, we have learned that several of the perpetrators were “known” to the police and security agencies that are tasked with monitoring potential terror threats. That these individuals were nevertheless able to carry out their atrocities is evidence that the monitoring regime is insufficient.
It also seems clear that there were failures on the part of those institutions that come under the leadership of the Home Office and are supposed to be in charge of the UK’s border, migration, and asylum systems. Journalists and think tanks like Policy Exchange and Migration Watch have for years pointed out that these systems are “unfit for purpose,” but successive governments have done little to take responsible control of Britain’s borders. When she was home secretary, Prime Minister Theresa May did little more than jazz up the name, logo, and uniforms of what is now called the “Border Force,” and she notably failed to put in place long-promised passport checks for people flying out of the country. This dereliction means that it is impossible for the British authorities to know who has overstayed a visa or whether individuals who have been denied asylum have actually left the country.
It seems astonishing that Youssef Zaghba, one of the three London Bridge attackers, was allowed back into the country. The Moroccan-born Italian citizen (his mother is Italian) had been arrested by Italian police in Bologna, apparently on his way to Syria via Istanbul to join ISIS. When questioned by the Italians about the ISIS decapitation videos on his mobile phone, he declared that he was “going to be a terrorist.” The Italians lacked sufficient evidence to charge him with a crime but put him under 24-hour surveillance, and when he traveled to London, they passed on information about him to MI5. Nevertheless, he was not stopped or questioned on arrival and had not become one of the 3,000 official terrorism “subjects of interest” for MI5 or the police when he carried out his attack. One reason Zaghba was not questioned on arrival may have been that he used one of the new self-service passport machines installed in UK airports in place of human staff after May’s cuts to the border force. Apparently, the machines are not yet linked to any government watch lists, thanks to the general chaos and ineptitude of the Home Office’s efforts to use information technology.
The presence in the country of Zaghba’s accomplice Rachid Redouane is also an indictment of the incompetence and disorganization of the UK’s border and migration authorities. He had been refused asylum in 2009, but as is so often the case, Britain’s Home Office never got around to removing him. Three years later, he married a British woman and was therefore able to stay in the UK.
But it is the failure of the authorities to monitor ringleader Khuram Butt that is the most baffling. He was a known and open associate of Anjem Choudary, Britain’s most notorious terrorist supporter, ideologue, and recruiter (he was finally imprisoned in 2016 after 15 years of campaigning on behalf of al-Qaeda and ISIS). Butt even appeared in a 2016 TV documentary about ISIS supporters called The Jihadist Next Door. In the same year, he assaulted a moderate imam at a public festival, after calling him a “murtad” or apostate. The imam reported the incident to the police—who took six months to track him down and then let him off with a caution. It is not clear if Butt was one of the 3,000 “subjects of interest” or the additional 20,000 former subjects of interest who continue to be the subject of limited monitoring. If he was not, it raises the question of what a person has to do to get British security services to take him seriously as a terrorist threat; if he was in fact on the list of “subjects of interest,” one has to wonder if being so designated is any barrier at all to carrying out terrorist atrocities. It’s worth remembering, as few do here in the UK, that terrorists who carried out previous attacks were also known to the police and security services and nevertheless enjoyed sufficient liberty to go at it again.
But the most important reason for the British state’s ineffectiveness in monitoring terror threats, which May addressed immediately after the London Bridge attack, is a deeply rooted institutional refusal to deal with or accept the key role played by Islamist ideology. For more than 15 years, the security services and police have chosen to take note only of people and bodies that explicitly espouse terrorist violence or have contacts with known terrorist groups. The fact that a person, school, imam, or mosque endorses the establishment of a caliphate, the stoning of adulterers, or the murder of apostates has not been considered a reason to monitor them.
This seems to be why Salman Abedi, the Manchester Arena suicide bomber, was not being watched by the authorities as a terror risk, even though he had punched a girl in the face for wearing a short skirt while at university, had attended the Muslim Brotherhood-controlled Didsbury Mosque, was the son of a Libyan man whose militia is banned in the UK, had himself fought against the Qaddafi regime in Libya, had adopted the Islamist clothing style (trousers worn above the ankle, beard but no moustache), was part of a druggy gang subculture that often feeds individuals into Islamist terrorism, and had been banned from a mosque after confronting an imam who had criticized ISIS.
It was telling that the day after the Manchester Arena suicide-bomb attack, you could hear a security official informing the audience of the BBC’s flagship morning-radio news show that it’s almost impossible to predict and stop such attacks because the perpetrators “don’t care who they kill.” They just want to kill as many people as possible, he said.
Surely, anyone with even a basic familiarity with Islamist terror attacks over the last 15 or so years and a nodding acquaintance with Islamist ideology could see that the terrorist hadn’t just chosen the Ariana Grande concert in Manchester Arena because a lot of random people would be crowded into a conveniently small area. Since the Bali bombings of 2002, nightclubs, discotheques, and pop concerts attended by shameless unveiled women and girls have been routinely targeted by fundamentalist terrorists, including in Britain. Among the worrying things about the opinion offered on the radio show is that it suggests that even in the wake of the horrific Bataclan attack in Paris during a November 2015 concert, British authorities may not have been keeping an appropriately protective eye on music venues and other places where our young people hang out in their decadent Western way. Such dereliction would make perfect sense given the resistance on the part of the British security establishment to examining, confronting, or extrapolating from Islamist ideology.
The same phenomenon may explain why authorities did not follow up on community complaints about Abedi. All too often when people living in Britain’s many and diverse Muslim communities want to report suspicious behavior, they have to do so through offices and organizations set up and paid for by the authorities as part of the overall “Prevent” strategy. Although criticized by the left as “Islamophobic” and inherently stigmatizing, Prevent has often brought the government into cooperative relationships with organizations even further to the Islamic right than the Muslim Brotherhood. This means that if you are a relatively secular Libyan émigré who wants to report an Abedi and you go to your local police station, you are likely to find yourself speaking to a bearded Islamist.
From its outset in 2003, the Prevent strategy was flawed. Its practitioners, in their zeal to find and fund key allies in “the Muslim community” (as if there were just one), routinely made alliances with self-appointed community leaders who represented the most extreme and intolerant tendencies in British Islam. Both the Home Office and MI5 seemed to believe that only radical Muslims were “authentic” and would therefore be able to influence young potential terrorists. Moderate, modern, liberal Muslims who are arguably more representative of British Islam as a whole (not to mention sundry Shiites, Sufis, Ahmadis, and Ismailis) have too often found it hard to get a hearing.
Sunni organizations that openly supported suicide-bomb attacks in Israel and India and that justified attacks on British troops in Iraq and Afghanistan nevertheless received government subsidies as part of Prevent. The hope was that in return, they would alert the authorities if they knew of individuals planning attacks in the UK itself.
It was a gamble reminiscent of British colonial practice in India’s northwest frontier and elsewhere. Not only were there financial inducements in return for grudging cooperation; the British state offered other, symbolically powerful concessions. These included turning a blind eye to certain crimes and antisocial practices such as female genital mutilation (there have been no successful prosecutions relating to the practice, though thousands of cases are reported every year), forced marriage, child marriage, polygamy, the mass removal of girls from school soon after they reach puberty, and the epidemic of racially and religiously motivated “grooming” rapes in cities like Rotherham. (At the same time, foreign jihadists—including men wanted for crimes in Algeria and France—were allowed to remain in the UK as long as their plots did not include British targets.)
This approach, simultaneously cynical and naive, was never as successful as its proponents hoped. Again and again, Muslim chaplains who were approved to work in prisons and other institutions have turned out to be Islamist extremists whose words have inspired inmates to join terrorist organizations.
Much to his credit, former Prime Minister David Cameron fought hard to change this approach, even though it meant difficult confrontations with his home secretary (Theresa May), as well as police and the intelligence agencies. However, Cameron’s efforts had little effect on the permanent personnel carrying out the Prevent strategy, and cooperation with Islamist but currently nonviolent organizations remains the default setting within the institutions on which the United Kingdom depends for security.
The failure to understand the role of ideology is one of imagination as well as education. Very few of those who make government policy or write about home-grown terrorism seem able to escape the limitations of what used to be called “bourgeois” experience. They assume that anyone willing to become an Islamist terrorist must perforce be materially deprived, or traumatized by the experience of prejudice, or provoked to murderous fury by oppression abroad. They have no sense of the emotional and psychic benefits of joining a secret terror outfit: the excitement and glamor of becoming a kind of Islamic James Bond, bravely defying the forces of an entire modern state. They don’t get how satisfying or empowering the vengeful misogyny of ISIS-style fundamentalism might seem for geeky, frustrated young men. Nor can they appreciate the appeal to the adolescent mind of apocalyptic fantasies of power and sacrifice (mainstream British society does not have much room for warrior dreams, given that its tone is set by liberal pacifists). Finally, they have no sense of why the discipline and self-discipline of fundamentalist Islam might appeal so strongly to incarcerated lumpen youth who have never experienced boundaries or real belonging. Their understanding is an understanding only of themselves, not of the people who want to kill them.
Review of 'White Working Class' By Joan C. Williams
Williams is a prominent feminist legal scholar with degrees from Yale, MIT, and Harvard. Unbending Gender, her best-known book, is the sort of tract you’d expect to find at an intersectionality conference or a Portlandia bookstore. This is why her insightful, empathic new book comes as such a surprise.
Books and essays on the topic have accumulated into a highly visible genre since Donald Trump came on the American political scene; J.D. Vance’s Hillbilly Elegy planted itself at the top of bestseller lists almost a year ago and still isn’t budging. As with Vance, Williams’s interest in the topic is personal. She fell “madly in love with” and eventually married a Harvard Law School graduate who had grown up in an Italian neighborhood in pre-gentrification Brooklyn. Williams, on the other hand, is a “silver-spoon girl.” Her father’s family was moneyed, and her maternal grandfather was a prominent Reform rabbi.
The author’s affection for her “class-migrant” spouse and respect for his family’s hardships—“My father-in-law grew up on blood soup,” she announces in her opening sentence—add considerable warmth to what is at bottom a political pamphlet. Williams believes that elite condescension and “cluelessness” played a big role in Trump’s unexpected and dreaded victory. Enlightening her fellow elites is essential to the task of returning Trump voters to the progressive fold where, she is sure, they rightfully belong.
Liberals were not always so dense about the working class, Williams observes. WPA murals and movies like On the Waterfront showed genuine fellow feeling for the proletariat. In the 1970s, however, the liberal mood changed. Educated boomers shifted their attention to “issues of peace, equal rights, and environmentalism.” Instead of feeling the pain of Arthur Miller and John Steinbeck characters, they began sneering at the less enlightened. These days, she notes, elite sympathies are limited to the poor, people of color (POC), and the LGBTQ population. Despite clear evidence of suffering—stagnant wages, disappearing manufacturing jobs, declining health and well-being—the working class gets only fly-over snobbery at best and, more often, outright loathing.
Williams divides her chapters into a series of answers to questions she has heard from her clueless friends and colleagues: “Why Does the Working Class Resent the Poor?” “Why Does the Working Class Resent Professionals but Admire the Rich?” “Why Doesn’t the Working Class Just Move to Where the Jobs Are?” “Is the Working Class Just Racist?” She weaves her answers into a compelling picture of a way of life and worldview foreign to her targeted readers. Working-class Americans have had to struggle for whatever stability and comfort they have, she explains. Clocking in for midnight shifts year after year, enduring capricious bosses, plant closures, and layoffs, they’re reliant on tag-team parenting and stressed-out relatives for child care. The campus go-to word “privileged” seems exactly wrong.
Proud of their own self-sufficiency and success, however modest, they don’t begrudge the self-made rich. It’s snooty professionals and the dysfunctional poor who get their goat. From their vantage point, subsidizing the day care for a welfare mother when they themselves struggle to manage care on their own dime mocks both their hard work and their beliefs. And since, unlike most professors, they shop in the same stores as the dependent poor, they’ve seen that some of them game the system. Of course that stings.
White Working Class is especially good at evoking the alternate economic and mental universe experienced by Professional and Managerial Elites, or “PMEs.” PMEs see their non-judgment of the poor, especially those who are “POC,” as a mark of their mature understanding that we live in an unjust, racist system whose victims require compassion regardless of whether they have committed any crime. At any rate, their passions lie elsewhere. They define themselves through their jobs and professional achievements, hence their obsession with glass ceilings.
Williams tells the story of her husband’s faux pas at a high-school reunion. Forgetting his roots for a moment, the Ivy League–educated lawyer asked one of his Brooklyn classmates a question that is the go-to opener in elite social settings: “What do you do?” Angered by what must have seemed like deliberate humiliation by this prodigal son, the man hissed: “I sell toilets.”
Instead of stability and backyard barbecues with family and long-time neighbors and maybe the occasional Olive Garden celebration, PMEs are enamored of novelty: new foods, new restaurants, new friends, new experiences. The working class chooses to spend its leisure in comfortable familiarity; for the elite, social life is a lot like networking. Members of the professional class may view themselves as sophisticated or cosmopolitan, but, Williams shows, to the blue-collar worker their glad-handing is closer to phony social climbing and their abstract, knowledge-economy jobs more like self-important pencil-pushing.
White Working Class has a number of proposals for creating the progressive future Williams would like to see. She wants to get rid of college-for-all dogma and improve training for middle-skill jobs. She envisions a working-class coalition of all races and ethnicities bolstered by civics education with a “distinctly celebratory view of American institutions.” In a saner political environment, some of this would make sense; indeed, she echoes some of Marco Rubio’s 2016 campaign themes. It’s little wonder White Working Class has already gotten the stink eye from liberal reviewers for its purported sympathies for racists.
Alas, impressive as Williams’s insights are, they do not always allow her to transcend her own class loyalties. Unsurprisingly, her own PME biases mostly come to light in her chapters on race and gender. She reduces immigration concerns to “fear of brown people,” even as she notes elsewhere that a quarter of Latinos also favor a wall at the southern border. This contrasts startlingly with her succinct observation that “if you don’t want to drive working-class whites to be attracted to the likes of Limbaugh, stop insulting them.” In one particularly obtuse moment, she asserts: “Because I study social inequality, I know that even Malia and Sasha Obama will be disadvantaged by race, advantaged as they are by class.” She relies on dubious gender theories to explain why the majority of white women voted for Trump rather than for his unfairly maligned opponent. That Hillary Clinton epitomized every elite quality Williams has just spent more than a hundred pages explicating escapes her notice. Williams’s own reflexive retreat into identity politics is itself emblematic of our toxic divisions, but it does not invalidate the power of this astute book.
When music could not transcend evil
The story of European classical music under the Third Reich is one of the most squalid chapters in the annals of Western culture, a chronicle of collective complaisance that all but beggars belief. Without exception, all of the well-known musicians who left Germany and Austria in protest when Hitler came to power in 1933 were either Jewish or, like the violinist Adolf Busch, Rudolf Serkin’s father-in-law, had close family ties to Jews. Moreover, most of the small number of non-Jewish musicians who emigrated later on, such as Paul Hindemith and Lotte Lehmann, are now known to have done so not out of principle but because they were unable to make satisfactory accommodations with the Nazis. Everyone else—including Karl Böhm, Wilhelm Furtwängler, Walter Gieseking, Herbert von Karajan, and Richard Strauss—stayed behind and served the Reich.
The Berlin and Vienna Philharmonics, then as now Europe’s two greatest orchestras, were just as willing to do business with Hitler and his henchmen, firing their Jewish members and ceasing to perform the music of Jewish composers. Even after the war, the Vienna Philharmonic was notorious for being the most anti-Semitic orchestra in Europe, and it was well known in the music business (though never publicly discussed) that Helmut Wobisch, the orchestra’s principal trumpeter and its executive director from 1953 to 1968, had been both a member of the SS and a Gestapo spy.
The management of the Berlin Philharmonic made no attempt to cover up the orchestra’s close relationship with the Third Reich, no doubt because the Nazi ties of Karajan, who was its music director from 1956 until shortly before his death in 1989, were a matter of public record. Yet it was not until 2007 that a full-length study of its wartime activities, Misha Aster’s The Reich’s Orchestra: The Berlin Philharmonic 1933–1945, was finally published. As for the Vienna Philharmonic, its managers long sought to quash all discussion of the orchestra’s Nazi past, steadfastly refusing to open its institutional archives to scholars until 2008, when Fritz Trümpi, an Austrian scholar, was given access to its records. Five years later, the Viennese, belatedly following the precedent of the Berlin Philharmonic, added a lengthy section to their website called “The Vienna Philharmonic Under National Socialism (1938–1945),” in which the damning findings of Trümpi and two other independent scholars were made available to the public.
Now Trümpi has published The Political Orchestra: The Vienna and Berlin Philharmonics During the Third Reich, in which he tells how they came to terms with Nazism, supplying pre- and postwar historical context for their transgressions.1 Written in a stiff mixture of academic jargon and translatorese, The Political Orchestra is ungratifying to read. Even so, the tale that it tells is both compelling and disturbing, especially to anyone who clings to the belief that high art is ennobling to the spirit.
Unlike the Vienna Philharmonic, which has always doubled as the pit orchestra for the Vienna State Opera, the Berlin Philharmonic started life in 1882 as a fully independent, self-governing entity. Initially unsubsidized by the state, it kept itself afloat by playing a grueling schedule of performances, including “popular” non-subscription concerts for which modest ticket prices were levied. In addition, the orchestra made records and toured internationally at a time when neither was common.
These activities made it possible for the Berlin Philharmonic to develop into an internationally renowned ensemble whose fabled collective virtuosity was widely seen as a symbol of German musical distinction. Furtwängler, the orchestra’s principal conductor, declared in 1932 that the German music in which it specialized was “one of the very few things that actually contribute to elevating [German] prestige.” Hence, he explained, the need for state subsidy, which he saw as “a matter of [national] prestige, that is, to some extent a requirement of national prudence.” By then, though, the orchestra was already heavily subsidized by the city of Berlin, thus paving the way for its takeover by the Nazis.
The Vienna Philharmonic, by contrast, had always been subsidized. Founded in 1842 when the orchestra of what was then the Vienna Court Opera decided to give symphonic concerts on its own, it performed the Austro-German classics for an elite cadre of longtime subscribers. By restricting membership to local players and their pupils, the orchestra cultivated what Furtwängler, who spent as much time conducting in Vienna as in Berlin, described as a “homogeneous and distinct tone quality.” At once dark and sweet, it was as instantly identifiable—and as characteristically Viennese—as the strong, spicy bouquet of a Gewürztraminer wine.
Unlike the Berlin Philharmonic, which played for whoever would pay the tab and programmed new music as a matter of policy, the Vienna Philharmonic chose not to diversify either its haute-bourgeois audience or its conservative repertoire. Instead, it played Beethoven, Brahms, Haydn, Mozart, and Schubert (and, later, Bruckner and Richard Strauss) in Vienna for the Viennese. Starting in the ’20s, the orchestra’s recordings consolidated its reputation as one of the world’s foremost instrumental ensembles, but its internal culture remained proudly insular.
What the two orchestras had in common was a nationalistic ethos, a belief in the superiority of Austro-German musical culture that approached triumphalism. One of the darkest manifestations of this ethos was their shared reluctance to hire Jews. The Berlin Philharmonic employed only four Jewish players in 1933, while the Vienna Philharmonic contained only 11 Jews at the time of the Anschluss, none of whom was hired after 1920. To be sure, such popular Jewish conductors as Otto Klemperer and Bruno Walter continued to work in Vienna for as long as they could. Two months before the Anschluss, Walter led and recorded a performance of the Ninth Symphony of Gustav Mahler, his musical mentor and fellow Jew, who from 1897 to 1907 had been the director of the Vienna Court Opera and one of the Philharmonic’s most admired conductors. But many members of both orchestras were open supporters of fascism, and not a few were anti-Semites who ardently backed Hitler. By 1942, 62 of the 123 active members of the Vienna Philharmonic were Nazi party members.
The admiration that Austro-German classical musicians had for Hitler is not entirely surprising since he was a well-informed music lover who declared in 1938 that “Germany has become the guardian of European culture and civilization.” He made the support of German art, music very much included, a key part of his political program. Accordingly, the Berlin Philharmonic was placed under the direct supervision of Joseph Goebbels, who ensured the cooperation of its members by repeatedly raising their salaries, exempting them from military service, and guaranteeing their old-age pensions. But there had never been any serious question of protest, any more than there would be among the members of the Vienna Philharmonic when the Nazis gobbled up Austria. Save for the Jews and one or two non-Jewish players who were fired for reasons of internal politics, the musicians went along unhesitatingly with Hitler’s desires.
With what did they go along? Above all, they agreed to the scrubbing of Jewish music from their programs and the dismissal of their Jewish colleagues. Some Jewish players managed to escape with their lives, but seven of the Vienna Philharmonic’s 11 Jews were either murdered by the Nazis or died as a direct result of official persecution. In addition, both orchestras performed regularly at official government functions and made tours and other public appearances for propaganda purposes, and both were treated as gems in the diadem of Nazi culture.
As for Furtwängler, the most prominent of the Austro-German orchestral conductors who served the Reich, his relationship to Nazism continues to be debated to this day. He had initially resisted the firing of the Berlin Philharmonic’s Jewish members and protected them for as long as he could. But he was also a committed (if woolly-minded) nationalist who believed that German music had “a different meaning for us Germans than for other nations” and notoriously declared in an open letter to Goebbels that “we all welcome with great joy and gratitude . . . the restoration of our national honor.” Thereafter he cooperated with the Nazis, by all accounts uncomfortably but—it must be said—willingly. A monster of egotism, he saw himself as the greatest living exponent of German music and believed it to be his duty to stay behind and serve a cause higher than what he took to be mere party politics. “Human beings are free wherever Wagner and Beethoven are played, and if they are not free at first, they are freed while listening to these works,” he naively assured a horrified Arturo Toscanini in 1937. “Music transports them to regions where the Gestapo can do them no harm.”
Once the war was over, the U.S. occupation forces decided to enlist the Berlin Philharmonic in the service of a democratic, anti-Soviet Germany. Furtwängler and Herbert von Karajan, who succeeded him as principal conductor, were officially “de-Nazified” and their orchestra allowed to function largely undisturbed, though six Nazi Party members were fired. The Vienna Philharmonic received similarly privileged treatment.
Needless to say, there was more to this decision than Cold War politics. No one questioned the unique artistic stature of either orchestra. Moreover, the Vienna Philharmonic, precisely because of its insularity, was now seen as a living museum piece, a priceless repository of 19th-century musical tradition. Still, many musicians and listeners, Jews above all, looked askance at both orchestras for years to come, believing them to be tainted by Nazism.
Indeed they were, so much so that they treated many of their surviving Jewish ex-members in a way that can only be described as vicious. In the most blatant individual case, the violinist Szymon Goldberg, who had served as the Berlin Philharmonic’s concertmaster under Furtwängler, was not allowed to reassume his post in 1945 and was subsequently denied a pension. As for the Vienna Philharmonic, the fact that it made Helmut Wobisch its executive director says everything about its deep-seated unwillingness to face up to its collective sins.
Be that as it may, scarcely any prominent musicians chose to boycott either orchestra. Leonard Bernstein went so far as to affect a flippant attitude toward the morally equivocal conduct of the Austro-German artists whom he encountered in Europe after the war. Upon meeting Herbert von Karajan in 1954, he actually told his wife Felicia that he had become “real good friends with von Karajan, whom you would (and will) adore. My first Nazi.”
At the same time, though, Bernstein understood what he was choosing to overlook. When he conducted the Vienna Philharmonic for the first time in 1966, he wrote to his parents:
I am enjoying Vienna enormously—as much as a Jew can. There are so many sad memories here; one deals with so many ex-Nazis (and maybe still Nazis); and you never know if the public that is screaming bravo for you might contain someone who 25 years ago might have shot me dead. But it’s better to forgive, and if possible, forget. The city is so beautiful, and so full of tradition. Everyone here lives for music, especially opera, and I seem to be the new hero.
Did Bernstein sell his soul for the opportunity to work with so justly renowned an orchestra—and did he get his price by insisting that its members perform the symphonies of Mahler, with which he was by then closely identified? It is a fair question, one that does not lend itself to easy answers.
Even more revealing is the case of Bruno Walter, who never forgave Furtwängler for staying behind in Germany, informing him in an angry letter that “your art was used as a conspicuously effective means of propaganda for the regime of the Devil.” Yet Walter’s righteous anger did not stop him from conducting in Vienna after the war. Born in Berlin, he had come to identify with the Philharmonic so closely that it was impossible for him to seriously consider quitting its podium permanently. “Spiritually, I was a Viennese,” he wrote in Theme and Variations, his 1946 autobiography. In 1952, he made a second recording with the Vienna Philharmonic of Mahler’s Das Lied von der Erde, whose premiere he had conducted in 1911 and which he had recorded in Vienna 15 years earlier. One wonders what Walter, who had converted to Christianity but had been driven out of both his native lands for the crime of being Jewish, made of the text of the last movement: “My friend, / On this earth, fortune has not been kind to me! / Where do I go?”
As for the two great orchestras of the Third Reich, both have finally acknowledged their guilt and been forgiven, at least by those who know little of their past. It would occur to no one to decline on principle to perform with either group today. Such a gesture would surely be condemned as morally ostentatious, an exercise in what we now call virtue-signaling. Yet it is impossible to forget what Samuel Lipman wrote in 1993 in Commentary apropos the wartime conduct of Furtwängler: “The ultimate triumph of totalitarianism, I suppose it can be said, is that under its sway only a martyred death can be truly moral.” For the only martyrs of the Berlin and Vienna Philharmonics were their Jews. The orchestras themselves live on, tainted and beloved.
James Comey knows what to reveal and what to conceal, understands the importance of keeping the semblance of distance between oneself and the story of the day, and comprehends the ins and outs of anonymous sourcing. Within days of his being fired by President Trump on May 9, for example, little green men and women, known only as his “associates,” began appearing in the pages of the New York Times and Washington Post to dispute key points of the president’s account of his dismissal and to promote Comey’s theory of the case.
“In a Private Dinner, Trump Demanded Loyalty,” the New York Times reported on May 11. “Comey Demurred.” The story was a straightforward narrative of events from Comey’s perspective, capped with an obligatory denial from the White House. The next day, the Washington Post reported, “Comey associates dispute Trump’s account of conversations.” The Post did not identify Comey’s associates, other than saying that they were “people who have worked with him.”
Maybe they were the same associates who had gabbed to the Times. Or maybe they were different ones. Who can tell? Regardless, the story these particular associates gave to the Post was readable and gripping. Comey, the Post reported, “was wary of private meetings and discussions with the president and did not offer the assurance, as Trump has claimed, that Trump was not under investigation as part of the probe into Russian interference in last year’s election.”
On May 16, Michael S. Schmidt of the Times published his scoop, “Comey Memo Says Trump Asked Him to End Flynn Investigation.” Schmidt didn’t see the memo for himself. Parts of it were read to him by—you guessed it—“one of Mr. Comey’s associates.” The following day, Robert Mueller was appointed special counsel to oversee the Russia investigation. On May 18, the Times, citing “two people briefed” on a call between Comey and the president, reported, “Comey, Unsettled by Trump, Is Said to Have Wanted Him Kept at a Distance.” And by the end of that week, Comey had agreed to testify before the Senate Intelligence Committee.
As his testimony approached, Comey’s people became more aggressive in their criticisms of the president. “Trump Should Be Scared, Comey Friend Says,” read the headline of a CNN interview with Brookings Institution fellow Benjamin Wittes. This “Comey friend” said he was “very shocked” when he learned that President Trump had asked Comey for loyalty. “I have no doubt that he regarded the group of people around the president as dishonorable,” Wittes said.
Comey, Wittes added, was so uncomfortable at the White House reception in January honoring law enforcement—the one where Comey lumbered across the room and Trump whispered something in his ear—that, as CNN paraphrased it, he “stood in a position so that his blue blazer would blend in with the room’s blue drapes in an effort for Trump to not notice him.” The integrity, the courage—can you feel it?
On June 6, the day before Comey’s prepared testimony was released, more “associates” told ABC that the director would “not corroborate Trump’s claim that on three separate occasions Comey told the president he was not under investigation.” And a “source with knowledge of Comey’s testimony” told CNN the same thing. In addition, ABC reported that, according to “a source familiar with Comey’s thinking,” the former director would say that Trump’s actions stopped short of obstruction of justice.
Maybe those sources weren’t as “familiar with Comey’s thinking” as they thought or hoped? To maximize the press coverage he already dominated, Comey had authorized the Senate Intelligence Committee to release his testimony ahead of his personal interview. That testimony told a different story than what had been reported by CNN and ABC (and by the Post on May 12). Comey had in fact told Trump the president was not under investigation—on January 6, January 27, and March 30. Moreover, the word “obstruction” did not appear at all in his written text. The senators asked Comey if he felt Trump obstructed justice. He declined to answer either way.
My guess is that Comey’s associates lacked Comey’s scalpel-like, almost Jesuitical ability to make distinctions, and therefore misunderstood what he was telling them to say to the press. Because it’s obvious Comey was the one behind the stories of Trump’s dishonesty and bad behavior. He admitted as much in front of the cameras in a remarkable exchange with Senator Susan Collins of Maine.
Comey said that, after Trump tweeted on May 12 that he’d better hope there aren’t “tapes” of their conversations, “I asked a friend of mine to share the content of the memo with a reporter. Didn’t do it myself, for a variety of reasons. But I asked him to, because I thought that might prompt the appointment of a special counsel. And so I asked a close friend of mine to do it.”
Collins asked whether that friend had been Wittes, known to cable news junkies as Comey’s bestie. Comey said no. The source for the New York Times article was “a good friend of mine who’s a professor at Columbia Law School,” Daniel Richman.
Every time I watch or read that exchange, I am amazed. Here is the former director of the FBI just flat-out admitting that, for months, he wrote down every interaction he had with the president of the United States because he wanted a written record in case the president ever fired or lied about him. And when the president did fire and lie about him, that director set in motion a series of public disclosures with the intent of not only embarrassing the president, but also forcing the appointment of a special counsel who might end up investigating the president for who knows what. And none of this would have happened if the president had not fired Comey or tweeted about him. He told the Senate that if Trump hadn’t dismissed him, he most likely would still be on the job.
Rarely, in my view, are high officials so transparent in describing how Washington works. Comey revealed to the world that he was keeping a file on his boss, that he used go-betweens to get his story into the press, that “investigative journalism” is often just powerful people handing documents to reporters to further their careers or agendas or even to get revenge. And as long as you maintain some distance from the fallout, stick to the absolute letter of the law, and keep a small army of nightingales singing to reporters on your behalf, you will come out on top.
“It’s the end of the Comey era,” A.B. Stoddard said on Special Report with Bret Baier the other day. On the contrary: I have a feeling that, as the Russia investigation proceeds, we will be hearing much more from Comey. And from his “associates.” And his “friends.” And persons “familiar with his thinking.”
In April, COMMENTARY asked a wide variety of writers, thinkers, and broadcasters to respond to this question: Is free speech under threat in the United States? We received twenty-seven responses. We publish them here in alphabetical order.
Floyd Abrams
Free expression threatened? By Donald Trump? I guess you could say so.
When a president engages in daily denigration of the press, when he characterizes it as the enemy of the people, when he repeatedly says that the libel laws should be “loosened” so he can personally commence more litigation, when he says that journalists shouldn’t be allowed to use confidential sources, it is difficult even to suggest that he has not threatened free speech. And when he says to the head of the FBI (as former FBI director James Comey has said that he did) that Comey should consider “putting reporters in jail for publishing classified information,” it is difficult not to take those threats seriously.
The harder question, though, is this: How real are the threats? Or, as Michael Gerson put it in the Washington Post: Will Trump “go beyond mere Twitter abuse and move against institutions that limit his power?” Some of the president’s threats against the institution of the press, wittingly or not, have been simply preposterous. Surely someone has told him by now that neither he nor Congress can “loosen” libel laws; while each state has its own libel law, there is no federal libel law and thus nothing for him to loosen. What he obviously takes issue with is the impact that the Supreme Court’s 1964 First Amendment opinion in New York Times v. Sullivan has had on state libel laws. The case determined that public officials who sue for libel may not prevail unless they demonstrate that the statements made about them were false and were made with actual knowledge of their falsity or with reckless disregard for the truth. So his objection to the rules governing libel law is to nothing less than the application of the First Amendment itself.
In other areas, however, the Trump administration has far more power to imperil free speech. We live under an Espionage Act, adopted a century ago, which is both broad in its language and uncommonly vague in its meaning. As such, it remains a half-open door through which an administration that is hostile to free speech might walk. Such an administration could initiate criminal proceedings against journalists who write about defense- or intelligence-related topics on the basis that classified information was leaked to them by present or former government employees. No such action has ever been commenced against a journalist. Press lawyers and civil-liberties advocates have strong arguments that the law may not be read so broadly and still be consistent with the First Amendment. But the scope of the Espionage Act and the impact of the First Amendment upon its interpretation remain unknown.
A related area in which the attitude of an administration toward the press may affect the latter’s ability to function as a check on government relates to the ability of journalists to protect the identity of their confidential sources. The Obama administration prosecuted more Espionage Act cases against sources of information to journalists than all prior administrations combined. After a good deal of deserved press criticism, it agreed to expand the internal guidelines of the Department of Justice designed to limit the circumstances under which such source revelation is demanded. But the guidelines are none too protective and are, after all, simply guidelines. A new administration is free to change or limit them or, in fact, abandon them altogether. In this area, as in so many others, it is too early to judge the ultimate treatment of free expression by the Trump administration. But the threats are real, and there is good reason to be wary.
Floyd Abrams is the author of The Soul of the First Amendment (Yale University Press, 2017).
Ayaan Hirsi Ali
Freedom of speech is being threatened in the United States by a nascent culture of hostility to different points of view. As political divisions in America have deepened, a conformist mentality of “right thinking” has spread across the country. Increasingly, American universities, where no intellectual doctrine ought to escape critical scrutiny, are some of the most restrictive domains when it comes to asking open-ended questions on subjects such as Islam.
Legally, speech in the United States is protected to a degree unmatched in almost any industrialized country. The U.S. has avoided unpredictable Canadian-style restrictions on speech, for example. I remain optimistic that as long as we have the First Amendment in the U.S., any attempt at formal legal censorship will be vigorously challenged.
Culturally, however, matters are very different in America. The regressive left stands at the forefront of threats to free speech on any issue that is important to progressives. The current pressure coming from those who call themselves “social-justice warriors” is unlikely to lead to successful legislation to curb the First Amendment. Instead, censorship is spreading in the cultural realm, particularly at institutions of higher learning.
The way activists of the regressive left achieve silence or censorship is by creating a taboo, and one of the most pernicious taboos in operation today is the word “Islamophobia.” Islamists are similarly motivated to rule any critical scrutiny of Islamic doctrine out of order. There is now a university center (funded by Saudi money) in the U.S. dedicated to monitoring and denouncing incidences of “Islamophobia.”
The term “Islamophobia” is used against critics of political Islam, but also against progressive reformers within Islam. The term implies an irrational fear that is tainted by hatred, and it has had a chilling effect on free speech. In fact, “Islamophobia” is a poorly defined term. Islam is not a race, and it is very often perfectly rational to fear some expressions of Islam. No set of ideas should be beyond critical scrutiny.
To push back in this cultural realm—in our universities, in public discourse—those favoring free speech should focus more on the message of dawa, the set of ideas that the Islamists want to promote. If the aims of dawa are sufficiently exposed, ordinary Americans and Muslim Americans will reject it. The Islamist message is a message of divisiveness, misogyny, and hatred. It’s anachronistic and wants people to live by tribal norms dating from the seventh century. The best antidote to Islamic extremism is the revelation of what its primary objective is: a society governed by Sharia. This is the opposite of censorship: It is documenting reality. What is life like in Saudi Arabia, Iran, the Northern Nigerian States? What is the true nature of Sharia law?
Islamists want to hide the true meaning of Sharia, Jihad, and the implications for women, gays, religious minorities, and infidels under the veil of “Islamophobia.” Islamists use “Islamophobia” to obfuscate their vision and imply that any scrutiny of political Islam is hatred and bigotry. The antidote to this is more exposure and more speech.
As pressure on freedom of speech increases from the regressive left, we must reject the notions that only Muslims can speak about Islam, and that any critical examination of Islamic doctrines is inherently “racist.”
Instead of contorting Western intellectual traditions so as not to offend our Muslim fellow citizens, we need to defend the Muslim dissidents who are risking their lives to promote the human rights we take for granted: equality for women, tolerance of all religions and orientations, our hard-won freedoms of speech and thought.
It is by nurturing and protecting such speech that progressive reforms can emerge within Islam. By accepting the increasingly narrow confines of acceptable discourse on issues such as Islam, we do dissidents and progressive reformers within Islam a grave disservice. For truly progressive reforms within Islam to be possible, full freedom of speech will be required.
Ayaan Hirsi Ali is a research fellow at the Hoover Institution, Stanford University, and the founder of the AHA Foundation.
Lee C. Bollinger
I know it is too much to expect that political discourse mimic the measured, self-questioning, rational, footnoting standards of the academy, but there is a difference between robust political debate and political debate infected with fear or panic. The latter introduces a state of mind that is visceral and irrational. In the realm of fear, we move beyond the reach of reason and a sense of proportionality. When we fear, we lose the capacity to listen and can become insensitive and mean.
Our Constitution is well aware of this fact about the human mind and of its negative political consequences. In the First Amendment jurisprudence established over the past century, we find many expressions of the problematic state of mind that is produced by fear. Among the most famous and potent is that of Justice Brandeis in Whitney v. California in 1927, one of the many cases involving aggravated fears of subversive threats from abroad. “It is the function of (free) speech,” he said, “to free men from the bondage of irrational fears.” “Men feared witches,” Brandeis continued, “and burned women.”
Today, our “witches” are terrorists, and Brandeis’s metaphorical “women” include the refugees (mostly children) and displaced persons, immigrants, and foreigners whose lives have been thrown into suspension and doubt by policies of exclusion.
The same fears of the foreign that take hold of a population inevitably infect our internal interactions and institutions, yielding suppression of unpopular and dissenting voices, victimization of vulnerable groups, attacks on the media, and the rise of demagoguery, with its disdain for facts, reason, expertise, and tolerance.
All of this poses a very special obligation on those of us within universities. Not only must we make the case in every venue for the values that form the core of who we are and what we do, but we must also live up to our own principles of free inquiry and fearless engagement with all ideas. This is why recent incidents on a handful of college campuses, in which speakers were disrupted and effectively censored, are so alarming. Such acts not only betray a basic principle but also inflame a rising prejudice against the academic community, and they feed efforts to delegitimize our work, at the very moment when it’s most needed.
I do not for a second support the view that this generation has an unhealthy aversion to engaging differences of opinion. That is a modern trope of polarization, as is the portrayal of universities as hypocritical about academic freedom and political correctness. But now, in this environment especially, universities must be at the forefront of defending the rights of all students and faculty to listen to controversial voices, to engage disagreeable viewpoints, and to make every effort to demonstrate our commitment to the sort of fearless and spirited debate that we are simultaneously asking of the larger society. Anyone with a voice can shout over a speaker; but being able to listen to and then effectively rebut those with whom we disagree—particularly those who themselves peddle intolerance—is one of the greatest skills our education can bestow. And it is something our democracy desperately needs more of. That is why, I say to you now, if speakers who are being denied access to other campuses come here, I will personally volunteer to introduce them, and listen to them, however much I may disagree with them. But I will also never hesitate to make clear why I disagree with them.
Lee C. Bollinger is the 19th president of Columbia University and the author of Uninhibited, Robust, and Wide-Open: A Free Press for a New Century. This piece has been excerpted from President Bollinger’s May 17 commencement address.
Richard A. Epstein
Today, the greatest threat to the constitutional protection of freedom of speech comes from campus rabble-rousers who invoke this very protection. In their book, the speech of people like Charles Murray and Heather Mac Donald constitutes a form of violence, bordering on genocide, that receives no First Amendment protection. Enlightened protestors are both bound and entitled to shout them down, by force or other disruptive actions, if their universities are so foolish as to extend them an invitation to speak. Any indignant minority may take the law into its own hands to eradicate the intellectual cancer before it spreads on their own campus.
By such tortured logic, a new generation of vigilantes distorts the First Amendment doctrine: Speech becomes violence, and violence becomes heroic acts of self-defense. The standard First Amendment interpretation emphatically rejects that view. Of course, the First Amendment doesn’t let you say what you want when and wherever you want to. Your freedom of speech is subject to the same limitations as your freedom of action. So you have no constitutional license to assault other people, to lie to them, or to form cartels to bilk them in the marketplace. But folks such as Murray, Mac Donald, and even Yiannopoulos do not come close to crossing into that forbidden territory. They are not using, for example, “fighting words,” rightly limited to words or actions calculated to provoke immediate aggression against a known target. Fighting words are worlds apart from speech that provokes a negative reaction in those who find your speech offensive solely because of the content of its message.
This distinction is central to the First Amendment. Fighting words have to be blocked by well-tailored criminal and civil sanctions lest some people gain license to intimidate others from speaking or peaceably assembling. The remedy for mere offense is to speak one’s mind in response. But offense never gives anyone the right to block the speech of others, lest everyone be able to unilaterally increase his sphere of action by getting really angry about the beliefs of others. No one has the right to silence others by working himself into a fit of rage.
Obviously, it is intolerable to let mutual animosity generate factional warfare, whereby everyone can use force to silence rivals. To avoid this war of all against all, each side claims that only its actions are privileged. These selective claims quickly degenerate into a form of viewpoint discrimination, which undermines one of the central protections that traditional First Amendment law erects: a wall against each and every group out to destroy the level playing field on which robust political debate rests. Every group should be at risk for having its message fall flat. The new campus radicals want to upend that understanding by shutting down their adversaries if their universities do not. Their aggression must be met, if necessary, by counterforce. Silence in the face of aggression is not an acceptable alternative.
Richard A. Epstein is the Laurence A. Tisch Professor of Law at the New York University School of Law.
David French
We’re living in the midst of a troubling paradox. At the exact same time that First Amendment jurisprudence has arguably never been stronger and more protective of free expression, millions of Americans feel they simply can’t speak freely. Indeed, talk to Americans living and working in the deep-blue confines of the academy, Hollywood, and the tech sector, and you’ll get a sense of palpable fear. They’ll explain that they can’t say what they think and keep their jobs, their friends, and sometimes even their families.
The government isn’t cracking down or censoring; instead, Americans are using free speech to destroy free speech. For example, a social-media shaming campaign is an act of free speech. So is an economic boycott. So is turning one’s back on a public speaker. So is a private corporation firing a dissenting employee for purely political reasons. Each of these actions is largely protected from government interference, and each one represents an expression of the speaker’s ideas and values.
The problem, however, is obvious. The goal of each of these kinds of actions isn’t to persuade; it’s to intimidate. The goal isn’t to foster dialogue but to coerce conformity. The result is a marketplace of ideas that has been emptied of all but the approved ideological vendors—at least in those communities that are dominated by online thugs and corporate bullies. Indeed, this mindset has become so prevalent that in places such as Portland, Berkeley, Middlebury, and elsewhere, the bullies and thugs have crossed the line from protected—albeit abusive—speech into outright shout-downs and mob violence.
But there’s something else going on, something that’s insidious in its own way. While politically correct shaming still has great power in deep-blue America, its effect in the rest of the country is to trigger a furious backlash, one characterized less by a desire for dialogue and discourse than by its own rage and scorn. So we’re moving toward two Americas—one that ruthlessly (and occasionally illegally) suppresses dissenting speech and the other that is dangerously close to believing that the opposite of political correctness isn’t a fearless expression of truth but rather the fearless expression of ideas best calculated to enrage your opponents.
The result is a partisan feedback loop where right-wing rage spurs left-wing censorship, which spurs even more right-wing rage. For one side, a true free-speech culture is a threat to feelings, sensitivities, and social justice. The other side waves high the banner of “free speech” to sometimes elevate the worst voices to the highest platforms—not so much to protect the First Amendment as to infuriate the hated “snowflakes” and trigger the most hysterical overreactions.
The culturally sustainable argument for free speech is something else entirely. It reminds the cultural left of its own debt to free speech while reminding the political right that a movement allegedly centered around constitutional values can’t abandon the concept of ordered liberty. The culture of free speech thrives when all sides remember their moral responsibilities—to both protect the right of dissent and to engage in ideological combat with a measure of grace and humility.
David French is a senior writer at National Review.
Pamela Geller
The real question isn’t whether free speech is under threat in the United States, but rather, whether it’s irretrievably lost. Can we get it back? Not without war, I suspect, as is evidenced by the violence at colleges whenever there’s the shamefully rare event of a conservative speaker on campus.
Free speech is the soul of our nation and the foundation of all our other freedoms. If we can’t speak out against injustice and evil, those forces will prevail. Freedom of speech is the foundation of a free society. Without it, a tyrant can wreak havoc unopposed, while his opponents are silenced.
With that principle in mind, I organized a free-speech event in Garland, Texas. The world had recently been rocked by the murder of the Charlie Hebdo cartoonists. My version of “Je Suis Charlie” was an event here in America to show that we can still speak freely and draw whatever we like in the Land of the Free. Yet even after jihadists attacked our event, I was blamed—by Donald Trump among others—for provoking Muslims. And if I tried to hold a similar event now, no arena in the country would allow me to do so—not just because of the security risk, but because of the moral cowardice of all intellectual appeasers.
Under what law is it wrong to depict Muhammad? Under Islamic law. But I am not a Muslim, I don’t live under Sharia. America isn’t under Islamic law, yet for standing for free speech, I’ve been:
- Prevented from running our advertisements in every major city in this country. We have won free-speech lawsuits all over the country, which officials circumvent by prohibiting all political ads (while making exceptions for ads from Muslim advocacy groups);
- Shunned by the right, shut out of the Conservative Political Action Conference;
- Shunned by Jewish groups at the behest of terror-linked groups such as the Council on American-Islamic Relations;
- Blacklisted from speaking at universities;
- Prevented from publishing books, for security reasons and because publishers fear shaming from the left;
- Banned from Britain.
A Seattle court accused me of trying to shut down free speech after we merely tried to run an FBI poster on global terrorism, because authorities had banned all political ads in other cities to avoid running ours. Seattle blamed us for that, which was like blaming a woman for being raped because she was wearing a short skirt.
This kind of vilification and shunning is key to the left’s plan to shut down all dissent from its agenda—they make legislation restricting speech unnecessary.
The same refusal to allow our point of view to be heard has manifested itself elsewhere. The foundation of my work is individual rights and equality for all before the law. These are the foundational principles of our constitutional republic. That is now considered controversial. Truth is the new hate speech. Truth is going to be criminalized.
The First Amendment doesn’t only protect ideas that are sanctioned by the cultural and political elites. If “hate speech” laws are enacted, who would decide what’s permissible and what’s forbidden? The government? The gunmen in Garland?
There has been an inversion of the founding premise of this nation. No longer is it the subordination of might to right, but right to might. History is repeatedly deformed with the bloody consequences of this transition.
Pamela Geller is the editor in chief of the Geller Report and president of the American Freedom Defense Initiative.
Jonah Goldberg
Of course free speech is under threat in America. Frankly, it’s always under threat in America because it’s always under threat everywhere. Ronald Reagan was right when he said in 1961, “Freedom is never more than one generation away from extinction. We didn’t pass it on to our children in the bloodstream. It must be fought for, protected, and handed on for them to do the same.”
This is more than political boilerplate. Reagan identified the source of the threat: human nature. God may have endowed us with a right to liberty, but he didn’t give us all a taste for it. As with most finer things, we must work to acquire a taste for it. That is what civilization—or at least our civilization—is supposed to do: cultivate attachments to certain ideals. “Cultivate” shares the same Latin root as “culture,” cultus, and properly understood they mean the same thing: to grow, nurture, and sustain through labor.
In the past, threats to free speech have taken many forms—nationalist passion, Comstockery (both good and bad), political suppression, etc.—but the threat to free speech today is different. It is less top-down and more bottom-up. We are cultivating a generation of young people to reject free speech as an important value.
One could mark the beginning of the self-esteem movement with Nathaniel Branden’s 1969 paper, “The Psychology of Self-Esteem,” which claimed that “feelings of self-esteem were the key to success in life.” This understandable idea ran amok in our schools and in our culture. When I was a kid, Saturday-morning cartoons were punctuated with public-service announcements telling kids: “The most important person in the whole wide world is you, and you hardly even know you!”
The self-esteem craze was just part of the cocktail of educational fads. Other ingredients included multiculturalism, the anti-bullying crusade, and, of course, that broad phenomenon known as “political correctness.” Combined, they’ve produced a generation that rejects the old adage “sticks and stones can break my bones but words can never harm me” in favor of the notion that “words hurt.” What we call political correctness has been on college campuses for decades. But it lacked a critical mass of young people who were sufficiently receptive to it to make it a fully successful ideology. The campus commissars welcomed the new “snowflakes” with open arms; truly, these are the ones we’ve been waiting for.
“Words hurt” is a fashionable concept in psychology today. (See Psychology Today: “Why Words Can Hurt at Least as Much as Sticks and Stones.”) But it’s actually a much older idea than the “sticks and stones” aphorism. For most of human history, it was a crime to say insulting or “injurious” things about aristocrats, rulers, the Church, etc. That tendency didn’t evaporate with the Divine Right of Kings. Jonathan Haidt has written at book length about our natural capacity to create zones of sanctity, immune from reason.
And that is the threat free speech faces today. Those who inveigh against “hate speech” are in reality fighting “heresy speech”—ideas that do “violence” to sacred notions of self-esteem, racial or gender equality, climate change, and so on. Put whatever label you want on it, contemporary “social justice” progressivism acts as a religion, and it has no patience for blasphemy.
When Napoleon’s forces converted churches into stables, the clergy did not object on the grounds that regulations regarding the proper care and feeding of animals had been violated. They complained of sacrilege and blasphemy. When Charles Murray or Christina Hoff Sommers visits college campuses, the protestors are behaving like the zealous acolytes of St. Jerome. Appeals to the First Amendment have as much power over the “antifa” fanatics as appeals to Odin did to champions of the New Faith.
That is the real threat to free speech today.
Jonah Goldberg is a senior editor at National Review and a fellow at the American Enterprise Institute.
KC Johnson
In early May, the Washington Post urged universities to make clear that “racist signs, symbols, and speech are off-limits.” Given the extraordinarily broad definition of what constitutes “racist” speech at most institutions of higher education, this demand would single out most right-of-center (and, in some cases, even centrist and liberal) discourse on issues of race or ethnicity. The editorial provided the highest-profile example of how hostility to free speech, once confined to the ideological fringe on campus, has migrated to the liberal mainstream.
The last few years have seen periodic college protests—featuring claims that significant amounts of political speech constitute “violence,” thereby justifying censorship—followed by even more troubling attempts to appease the protesters. After the mob scene that greeted Charles Murray upon his visit to Middlebury College, for instance, the student government criticized any punishment for the protesters, and several student leaders wanted to require that future speakers conform to the college’s “community standard” on issues of race, gender, and ethnicity. In the last few months, similar attempts to stifle the free exchange of ideas in the name of promoting diversity occurred at Wesleyan, Claremont McKenna, and Duke. Offering an extreme interpretation of this point of view, one CUNY professor recently dismissed dialogue as “inherently conservative,” since it reinforced the “relations of power that presently exist.”
It’s easy, of course, to dismiss campus hostility to free speech as affecting only a small segment of American public life—albeit one that trains the next generation of judges, legislators, and voters. But, as Jonathan Chait observed in 2015, denying “the legitimacy of political pluralism on issues of race and gender” has broad appeal on the left. It is only most apparent on campus because “the academy is one of the few bastions of American life where the political left can muster the strength to impose its political hegemony upon others.” During his time in office, Barack Obama generally urged fellow liberals to support open intellectual debate. But the current campus environment previews the position of free speech in a post-Obama Democratic Party, increasingly oriented around identity politics.
Waning support on one end of the ideological spectrum for this bedrock American principle should provide a political opening for the other side. The Trump administration, however, seems poorly suited to make the case. Throughout his public career, Trump has rarely supported free speech, even in the abstract, and has periodically embraced legal changes to facilitate libel lawsuits. Moreover, the right-wing populism that motivates Trump’s base has a long tradition of ideological hostility to civil liberties of all types. Even in campus contexts, conservatives have defended free speech inconsistently, as seen in recent calls that CUNY disinvite anti-Zionist fanatic Linda Sarsour as a commencement speaker.
In a sharply polarized political environment, awash in dubiously sourced information, free speech is all the more important. Yet this same environment has seen both sides, most blatantly elements of the left on campuses, demand restrictions on their ideological foes’ free speech in the name of promoting a greater good.
KC Johnson is a professor of history at Brooklyn College and the CUNY Graduate Center.
Laura Kipnis
I find myself with a strange-bedfellows problem lately. Here I am, a left-wing feminist professor invited onto the pages of Commentary—though I’d be thrilled if it were still 1959—while fielding speaking requests from right-wing think tanks and libertarians who oppose child-labor laws.
Somehow I’ve ended up in the middle of the free-speech-on-campus debate. My initial crime was publishing a somewhat contentious essay about campus sexual paranoia that put me on the receiving end of Title IX complaints. Apparently I’d created a “hostile environment” at my university. I was investigated (for 72 days). Then I wrote up what I’d learned about these campus inquisitions in a second essay. Then I wrote about it all some more, in a book exposing the kangaroo-court elements of the Title IX process—and the extra-legal gag orders imposed on everyone caught in its widening snare.
I can’t really comment on whether more charges have been filed against me over the book. I’ll just say that writing about being a Title IX respondent could easily become a life’s work. I learned, shortly after writing this piece, that I and my publisher were being sued for defamation, among other things.
Is free speech under threat on American campuses? Yes. We know all about student activists who wish to shut down talks by people with opposing views. I got smeared with a bit of that myself, after a speaking invitation at Wellesley—some students made a video protesting my visit before I arrived. The talk went fine, though a group of concerned faculty circulated an open letter afterward also protesting the invitation: My views on sexual politics were too heretical, and might have offended students.
I didn’t take any of this too seriously, even as right-wing pundits crowed, with Wellesley as their latest outrage bait. It was another opportunity to mock student activists, and the fact that I was myself a feminist, rather than a Charles Murray or a Milo Yiannopoulos, made them positively gleeful.
I do find myself wondering where all my new free-speech pals were when another left-wing professor, Steven Salaita, was fired (or if you prefer euphemism, “his job offer was withdrawn”) from the University of Illinois after he tweeted criticism of Israel’s Gaza policy. Sure the tweets were hyperbolic, but hyperbole and strong opinions are protected speech, too.
I guess free speech is easy to celebrate until it actually challenges something. Funny, I haven’t seen Milo around lately—so beloved by my new friends when he was bashing minorities and transgender kids. Then he mistakenly said something authentic (who knew he was capable of it!), reminiscing about an experience a lot of gay men have shared: teenage sex with older men. He tried walking it back—no, no, he’d been a victim, not a participant—but his fan base was shrieking about pedophilia and fleeing in droves. Gee, they were all so against “political correctness” a few minutes before.
It’s easy to be a free-speech fan when your feathers aren’t being ruffled. No doubt what makes me palatable to the anti-PC crowd is having thus far failed to ruffle them enough. I’m just going to have to work harder.
Laura Kipnis’s latest book is Unwanted Advances: Sexual Paranoia Comes to Campus.
Eugene Kontorovich
The free and open exchange of views—especially politically conservative or traditionally religious ones—is being challenged. This is taking place not just at college campuses but throughout our public spaces and cultural institutions. James Watson was fired from the lab he had led since 1968 and could not speak at New York University because of petty, censorious students who would not know DNA from LSD. Our nation’s founders and heroes are being “disappeared” from public commemoration, like Trotsky from a photograph of Soviet rulers.
These attacks on “free speech” are not the result of government action. They are not what the First Amendment protects against. The current methods—professional and social shaming, exclusion, and employment termination—are more inchoate, and their effects are multiplied by self-censorship. A young conservative legal scholar might find himself thinking: “If the late Justice Antonin Scalia can posthumously be deemed a ‘bigot’ by many academics, what chance have I?”
Ironically, artists and intellectuals have long prided themselves on being the first defenders of free speech. Today, it is the institutions of both popular and high culture that are the censors. Is there one poet in the country who would speak out for Ann Coulter?
The inhibition of speech at universities is part of a broader social phenomenon of making longstanding, traditional views and practices sinful overnight. Conservatives have not put up much resistance to this. To paraphrase Martin Niemöller’s famous dictum: “First they came for Robert E. Lee, and I said nothing, because Robert E. Lee meant nothing to me.”
The situation with respect to Israel and expressions of support for it deserves separate discussion. Even as university administrators give political power to favored ideologies by letting them create “safe spaces” (safe from opposing views), Jews find themselves and their state at the receiving end of claims of apartheid—modern day blood libels. It is not surprising if Jewish students react by demanding that they get a safe space of their own. It is even less surprising if their parents, paying $65,000 a year, want their children to have a nicer time of it. One hears Jewish groups frequently express concern about Jewish students feeling increasingly isolated and uncomfortable on campus.
But demanding selective protection from the new ideological commissars is unlikely to bring the desired results. First, this new ideology, even if it can be harnessed momentarily to give respite to harassed Jews on campus, is ultimately illiberal and will be controlled by “progressive” forces. Second, it is not so terrible for Jews in the Diaspora to feel a bit uncomfortable. It has been the common condition of Jews throughout the millennia. The social awkwardness that Jews at liberal arts schools might feel in being associated with Israel is of course one of the primary justifications for the Jewish State. Facing the snowflakes incapable of hearing a dissonant view—but who nonetheless, in the grip of intersectional ecstasy, revile Jewish self-determination—Jewish students should toughen up.
Eugene Kontorovich teaches constitutional law at Northwestern University and heads the international law department of the Kohelet Policy Forum in Jerusalem.
Nicholas Lemann
There’s an old Tom Wolfe essay in which he describes being on a panel discussion at Princeton in 1965 and provoking the other panelists by announcing that America, rather than being in crisis, was in the middle of a “happiness explosion.” He was arguing that the mass effects of 20 years of post–World War II prosperity made for a larger phenomenon than the Vietnam War, the racial crisis, and the other primary concerns of intellectuals at the time.
In the same spirit, I’d say that we are in the middle of a free-speech explosion, because of 20-plus years of the Internet and 10-plus years of social media. If one understands speech as disseminated individual opinion, then surely we live in the free-speech-est society in the history of the world. Anybody with access to the unimpeded World Wide Web can say anything to a global audience, and anybody can hear anything, too. All threats to free speech should be understood in the context of this overwhelming reality.
It is a comforting fantasy that a genuine free-speech regime will empower mainly “good,” but previously repressed, speech. Conversely, repressive regimes that are candid enough to explain their anti-free-speech policies usually say that they’re not against free speech, just “bad” speech. We have to accept that more free speech probably means, in the aggregate, more bad speech, and also a weakening of the power, authority, and economic support for information professionals such as journalists. Welcome to the United States in 2017.
I am lucky enough to live and work on the campus of a university, Columbia, that has been blessedly free of successful attempts to repress free speech. Just in the last few weeks, Charles Murray and Dinesh D’Souza have spoken here without incident. But, yes, the evidently growing popularity of the idea that “hate speech” shouldn’t be permitted on campuses is a problem, especially, it seems, at small private liberal-arts colleges. We should all do our part, and I do, by frequently and publicly endorsing free-speech principles. Opposing the BDS movement falls squarely into that category.
It’s not just on campuses that free-speech vigilance is needed, though. The number-one threat to free speech, to my mind, is that the wide-open Web has been replaced by privately owned platforms such as Facebook and Google as the way most people experience the public life of the Internet. These companies are committed to banning “hate speech,” and they are eager to operate freely in countries, like China, that don’t permit free political speech. That makes for a far more consequential constrained environment than any campus’s speech code.
Also, Donald Trump regularly engages in presidentially unprecedented rhetoric demonizing people who disagree with him. He seems to think this is all in good fun, but, as we have already seen at his rallies, not everybody hears it that way. The place where Trumpism will endanger free speech isn’t in the center—the White House press room—but at the periphery, for example in the way that local police handle bumptious protestors and the journalists covering them. This is already happening around the country. If Trump were as disciplined and knowledgeable as Vladimir Putin or Recep Tayyip Erdogan, which so far he seems not to be, then free speech could be in even more serious danger from government, which in most places is its usual main enemy.
Nicholas Lemann is a professor at Columbia Journalism School and a staff writer for the New Yorker.
Michael J. Lewis
Free speech is a right but it is also a habit, and where the habit shrivels so will the right. If free speech today is in headlong retreat—everywhere threatened by regulation, organized harassment, and even violence—it is in part because our political culture allowed the practice of persuasive oratory to atrophy. The process began in 1973, an unforeseen side effect of Roe v. Wade. Legislators were delighted to learn that by relegating this divisive matter of public policy to the Supreme Court and adopting a merely symbolic position, they could sit all the more safely in their safe seats.
Since then, one crucial question of public policy after another has been punted out of the realm of politics and into the judicial realm. Issues that might have been debated with all the rhetorical agility of a Lincoln and a Douglas, and then subjected to a process of negotiation, compromise, and voting, have instead been settled by decree: e.g., Chevron, Kelo, Obergefell. The consequences for speech have been pernicious. Since the time of Pericles, deliberative democracy has been predicated on the art of persuasion, which demands the forceful clarity of thought and expression without which no one has ever been persuaded. But a legislature that relegates its authority to judges and regulators will awaken to discover its oratorical culture has been stunted. When politicians, rather than seeking to convince and win over, prefer to project a studied and pleasant vagueness, debate withers into tedious defensive performance. It has been decades since any presidential debate has seen any sustained give and take over a matter of policy. If there is any suspense at all, it is only the possibility that a fatigued or peeved candidate might blurt out that tactless shard of truth known as a gaffe.
A generation accustomed to hearing platitudes smoothly dispensed from behind a teleprompter will find the speech of a fearless extemporaneous speaker to be startling, even disquieting; unfamiliar ideas always are. Unhappily, they have been taught to interpret that disquiet as an injury done to them, rather than as a premise offered to them to consider. All this would not have happened—certainly not to this extent—had not our deliberative democracy decided a generation ago that it preferred the security of incumbency to the risks of unshackled debate. The compulsory contraction of free speech on college campuses is but the logical extension of the voluntary contraction of free speech in our political culture.
Michael J. Lewis’s new book is City of Refuge: Separatists and Utopian Town Planning (Princeton University Press).
Heather Mac Donald
The answer to the symposium question depends on how powerful the transmission belt is between academia and the rest of the country. On college campuses, violence and brute force are silencing speakers who challenge left-wing campus orthodoxies. These totalitarian outbreaks have been met with listless denunciations by college presidents, followed by . . . virtually nothing. As of mid-May, the only discipline imposed for 2017’s mass attacks on free speech at UC Berkeley, Middlebury, and Claremont McKenna College was a letter of reprimand inserted—sometimes only temporarily—into the files of several dozen Middlebury students, accompanied by a brief period of probation. Previous outbreaks of narcissistic incivility, such as the screaming-girl fit at Yale and the assaults on attendees of Yale’s Buckley program, were discreetly ignored by college administrators.
Meanwhile, the professoriate unapologetically defends censorship and violence. After the February 1 riot in Berkeley to prevent Milo Yiannopoulos from speaking, Déborah Blocker, associate professor of French at UC Berkeley, praised the rioters. They were “very well-organized and very efficient,” Blocker reported admiringly to her fellow professors. “They attacked property but they attacked it very sparingly, destroying just enough University property to obtain the cancellation order for the MY event and making sure no one in the crowd got hurt” (emphasis in original). (In fact, perceived Milo and Donald Trump supporters were sucker-punched and maced; businesses downtown were torched and vandalized.) New York University’s vice provost for faculty, arts, humanities, and diversity, Ulrich Baer, displayed Orwellian logic by claiming in a New York Times op-ed that shutting down speech “should be understood as an attempt to ensure the conditions of free speech for a greater group of people.”
Will non-academic institutions take up this zeal for outright censorship? Other ideological products of the left-wing academy have been fully absorbed and operationalized. Racial victimology, which drives much of the campus censorship, is now standard in government and business. Corporate diversity trainers counsel that bias is responsible for any lack of proportional racial representation in the corporate ranks. Racial disparities in school discipline and incarceration are universally attributed to racism rather than to behavior. Public figures have lost jobs for violating politically correct taboos.
Yet Americans possess an instinctive commitment to the First Amendment. Federal judges, hardly an extension of the Federalist Society, have overwhelmingly struck down campus speech codes. It is hard to imagine that they would be any more tolerant of the hate-speech legislation so prevalent in Europe. So the question becomes: At what point does the pressure to conform to the elite worldview curtail freedom of thought and expression, even without explicit bans on speech?
Social stigma against conservative viewpoints is not the same as actual censorship. But the line can blur. The Obama administration used regulatory power to impose a behavioral conformity on public and private entities. School administrators may have technically still possessed the right to dissent from novel theories of gender, but they had to behave as if they were fully on board with the transgender revolution when it came to allowing boys to use girls’ bathrooms and locker rooms.
Had Hillary Clinton been elected president, the federal bureaucracy would have mimicked campus diversocrats with even greater zeal. That threat, at least, has been avoided. Heresies against left-wing dogma may still enter the public arena, if only by the back door. The mainstream media have lurched even further left in the Trump era, but the conservative media, however mocked and marginalized, are expanding (though Twitter and Facebook’s censorship of conservative speakers could be a harbinger of more official silencing).
Outside the academy, free speech is still legally protected, but its exercise requires ever greater determination.
Heather Mac Donald is a fellow at the Manhattan Institute and the author of The War on Cops.
John McWhorter
There is a certain mendacity, as Brick put it in Cat on a Hot Tin Roof, in our discussion of free speech on college campuses. Namely, none of us genuinely wish that absolutely all issues be aired in the name of education and open-mindedness. To insist so is to pretend that civilized humanity makes nothing we could call advancement in philosophical consensus.
I doubt we need “free speech” on issues such as whether slavery and genocide are okay, whether it has been a mistake to view women as men’s equals, or whether to banish as antique the idea that whites are a master race while other peoples represent a lower rung on the Darwinian scale. With all due reverence for John Stuart Mill’s advocacy for the regular airing of even noxious views in order to reinforce clarity on why they were rejected, we are also human beings with limited time. A commitment to the Enlightenment justifiably will decree that certain views are, indeed, no longer in need of discussion.
However, our modern social-justice warriors are claiming that this no-fly zone of discussion is vaster than any conception of logic or morality justifies. We are being told that questions regarding the modern proposals about cultural appropriation, about whether even passing infelicitous statements constitute racism in the way that formalized segregation and racist disparagement did, or about whether social disparities can be due to cultural legacies rather than structural impediments, are as indisputably egregious, backwards, and abusive as the benighted views of the increasingly distant past.
That is, the new idea is not only that discrimination and inequality still exist, but that to even question the left’s utopian expectation on such matters justifies the same furious, sloganistic, and even physically violent resistance that was once leveled against those designated heretics by a Christian hegemony.
Of course the protesters in question do not recognize themselves in a portrait as opponents of something called heresy. They suppose that Galileo’s opponents were clearly wrong but that they, today, are actually correct in a way that no intellectual or moral argument could coherently deny.
As such, we have students permitted to declare college campuses “racist” when they are the least racist spaces on the planet—because they are, predictably given the imperfection of humans, not perfectly free of passingly unsavory interactions. Thinkers invited to talk for a portion of an hour from the right rather than the left and then have dinner with a few people and fly home are treated as if they were reanimated Hitlers. The student of color who hears a few white students venturing polite questions about the leftist orthodoxy is supported in framing these questions as “racist” rhetoric.
The people on college campuses who openly and aggressively spout this new version of Christian (or even Islamist) crusading—ironically justifying it as a barricade against “fascist” muzzling of freedom when the term applies ominously well to the regime they are fostering—are a minority. However, the sawmill spinning blade of their rhetoric has succeeded in rendering opposition as risky as espousing pedophilia, such that only those natively open to violent criticism dare speak out. The latter group is small. The campus consensus thereby becomes, if only at moralistic gunpoint à la the ISIS victim video, a strangled hard-leftism.
Hence freedom of speech is indeed threatened on today’s college campuses. I have lost count of how many of my students, despite being liberal Democrats (many of whom sobbed at Hillary Clinton’s loss last November), have told me that they are afraid to express their opinions about issues that matter, despite the fact that their opinions are ones that any liberal or even leftist person circa 1960 would have considered perfectly acceptable.
Something has shifted of late, and not in a direction we can legitimately consider forwards.
John McWhorter teaches linguistics, philosophy, and music history at Columbia University and is the author of The Language Hoax, Words on the Move, and Talking Back, Talking Black.
Kate Bachelder Odell
It’s 2021, and Harvard Square has devolved into riots: Some 120 people are injured in protests, and the carnage includes fire-consumed cop cars and smashed-in windows. The police discharge canisters of tear gas, and, after apprehending dozens of protesters, enforce a 1:45 A.M. curfew. Anyone roaming the streets after hours is subject to arrest. About 2,000 National Guardsmen are prepared to intervene. Such violence and disorder are also roiling Berkeley and other elite and educated areas.
Oh, that’s 1970. The details are from the Harvard Crimson’s account of “anti-war” riots that spring. The episode is instructive in considering whether free speech is under threat in the United States. Almost daily, there’s a new YouTube installment of students melting down over viewpoints of speakers invited to one campus or another. Even amid speech threats from government—for example, the IRS’s targeting of political opponents—nothing has captured the public’s attention like the end of free expression at America’s institutions of higher learning.
Yet disruption, confusion, and even violence are not new campus phenomena. And it’s hard to imagine that young adults who deployed brute force in the 1960s and ’70s were deeply committed to the open and peaceful exchange of ideas.
There may also be reason for optimism. The rough and tumble on campus in the 1960s and ’70s produced a more even-tempered ’80s and ’90s, and colleges are probably heading for another course correction. In covering the ruckuses at Yale, Missouri, and elsewhere, I’ve talked to professors and students who are figuring out how to respond to the illiberalism, even if the reaction is delayed. The University of Chicago put out a set of free-speech principles last year, and other schools such as Princeton and Purdue have endorsed them.
The NARPs—Non-Athletic Regular People, as they are sometimes known on campus—still outnumber the social-justice warriors, who appear to be overplaying their hand. Case in point is the University of Missouri, which experienced a precipitous drop in enrollment after instructor Melissa Click and her ilk stoked racial tensions last spring. The college has closed dorms and trimmed budgets. Which brings us to another silver lining: The economic model of higher education (exorbitant tuition to pay ever more administrators) may blow up traditional college before the fascists can.
Note also that the anti-speech movement is run by rich kids. A Brookings Institution analysis from earlier this year discovered that “the average enrollee at a college where students have attempted to restrict free speech comes from a family with an annual income $32,000 higher than that of the average student in America.” Few rank higher in average income than those at Middlebury College, where students evicted scholar Charles Murray in a particularly ugly scene. (The report notes that Murray was received respectfully at Saint Louis University, “where the median income of students’ families is half Middlebury’s.”) The impulses of over-adulated 20-year-olds may soon be tempered by the tyranny of having to show up for work on a daily basis.
None of this is to suggest that free speech is enjoying some renaissance either on campus or in America. But perhaps as the late Wall Street Journal editorial-page editor Robert Bartley put it in his valedictory address: “Things could be worse. Indeed, they have been worse.”
Kate Bachelder Odell is an editorial writer for the Wall Street Journal.
Jonathan Rauch
Is free speech under threat? The one-syllable answer is “yes.” The three-syllable answer is: “Yes, of course.” Free speech is always under threat, because it is not only the single most successful social idea in all of human history, it is also the single most counterintuitive. “You mean to say that speech that is offensive, untruthful, malicious, seditious, antisocial, blasphemous, heretical, misguided, or all of the above deserves government protection?” That seemingly bizarre proposition is defensible only on the grounds that the marketplace of ideas turns out to be the most powerful engine of knowledge, prosperity, liberty, social peace, and moral advancement that our species has had the good fortune to discover.
Every new generation of free-speech advocates will need to get up every morning and re-explain the case for free speech and open inquiry—today, tomorrow, and forever. That is our lot in life, and we just need to be cheerful about it. At discouraging moments, it is helpful to remember that the country has made great strides toward free speech since 1798, when the Adams administration arrested and jailed its political critics; and since the 1920s, when the U.S. government banned and burned James Joyce’s great novel Ulysses; and since 1954, when the government banned ONE, a pioneering gay journal. (The cover article was a critique of the government’s indecency censors, who censored it.) None of those things could happen today.
I suppose, then, the interesting question is: What kind of threat is free speech under today? In the present age, direct censorship by government bodies is rare. Instead, two more subtle challenges hold sway, especially, although not only, on college campuses. The first is a version of what I called, in my book Kindly Inquisitors, the humanitarian challenge: the idea that speech that is hateful or hurtful (in someone’s estimation) causes pain and thus violates others’ rights, much as physical violence does. The other is a version of what I called the egalitarian challenge: the idea that speech that denigrates minorities (again, in someone’s estimation) perpetuates social inequality and oppression and thus also is a rights violation. Both arguments call upon administrators and other bureaucrats to defend human rights by regulating speech rights.
Both doctrines are flawed to the core. Censorship harms minorities by enforcing conformity and entrenching majority power, and it no more ameliorates hatred and injustice than smashing thermometers ameliorates global warming. If unwelcome words are the equivalent of bludgeons or bullets, then the free exchange of criticism—science, in other words—is a crime. I could go on, but suffice it to say that the current challenges are new variations on ancient themes—and they will be followed, in decades and centuries to come, by many, many other variations. Memo to free-speech advocates: Our work is never done, but the really amazing thing, given the proposition we are tasked to defend, is how well we are doing.
Jonathan Rauch is a senior fellow at the Brookings Institution and the author of Kindly Inquisitors: The New Attacks on Free Thought.
Nicholas Quinn Rosenkranz
Speech is under threat on American campuses as never before. Censorship in various forms is on the rise. And this year, the threat to free speech on campus took an even darker turn, toward actual violence. The prospect of Milo Yiannopoulos speaking at Berkeley provoked riots that caused more than $100,000 worth of property damage on the campus. The prospect of Charles Murray speaking at Middlebury led to a riot that put a liberal professor in the hospital with a concussion. Ann Coulter’s speech at Berkeley was cancelled after the university determined that none of the appropriate venues could be protected from “known security threats” on the date in question.
The free-speech crisis on campus is caused, at least in part, by a more insidious campus pathology: the almost complete lack of intellectual diversity on elite university faculties. At Yale, for example, the number of registered Republicans in the economics department is zero; in the psychology department, there is one. Overall, there are 4,410 faculty members at Yale, and the total number of those who donated to a Republican candidate during the 2016 primaries was three.
So when today’s students purport to feel “unsafe” at the mere prospect of a conservative speaker on campus, it may be easy to mock them as “delicate snowflakes,” but in one sense, their reaction is understandable: If students are shocked at the prospect of a Republican behind a university podium, perhaps it is because many of them have never before laid eyes on one.
To see the connection between free speech and intellectual diversity, consider the recent commencement speech of Harvard President Drew Gilpin Faust:
Universities must be places open to the kind of debate that can change ideas. . . . Silencing ideas or basking in intellectual orthodoxy independent of facts and evidence impedes our access to new and better ideas, and it inhibits a full and considered rejection of bad ones. . . . We must work to ensure that universities do not become bubbles isolated from the concerns and discourse of the society that surrounds them. Universities must model a commitment to the notion that truth cannot simply be claimed, but must be established—established through reasoned argument, assessment, and even sometimes uncomfortable challenges that provide the foundation for truth.
Faust is exactly right. But, alas, her commencement audience might be forgiven a certain skepticism. After all, the number of registered Republicans in several departments at Harvard—e.g., history and psychology—is exactly zero. In those departments, the professors themselves may be “basking in intellectual orthodoxy” without ever facing “uncomfortable challenges.” This may help explain why some students will do everything in their power to keep conservative speakers off campus: They notice that faculty hiring committees seem to do exactly the same thing.
In short, it is a promising sign that true liberal academics like Faust have started speaking eloquently about the crucial importance of civil, reasoned disagreement. But they will be more convincing on this point when they hire a few colleagues with whom they actually disagree.
Nicholas Quinn Rosenkranz is a professor of law at Georgetown. He serves on the executive committee of Heterodox Academy, which he co-founded, on the board of directors of the Federalist Society, and on the board of directors of the Foundation for Individual Rights in Education (FIRE).
Ben Shapiro
In February, I spoke at California State University in Los Angeles. Before my arrival, professors informed students that a white supremacist would be descending on the school to preach hate; threats of violence soon prompted the administration to cancel the event. I vowed to show up anyway. One hour before the event, the administration backed down and promised to guarantee that the event could go forward, but police officers were told not to stop the 300 students, faculty, and outside protesters who blocked and assaulted those who attempted to attend the lecture. We ended up trapped in the auditorium, with the authorities telling students not to leave for fear of physical violence. I was rushed from campus under armed police guard.
Is free speech under assault?
Of course it is.
On campus, free speech is under assault thanks to a perverse ideology of intersectionality that claims victim identity is of primary value and that views are merely a secondary concern. As a corollary, if your views offend someone who outranks you on the intersectional hierarchy, your views are treated as violence—threats to identity itself. On campus, statements that offend an individual’s identity have been treated as “microaggressions,” actual aggressions against another, ostensibly worthy of violence. Words, students have been told, may not break bones, but they will prompt sticks and stones, and rightly so.
Thus, protesters around the country—leftists who see verbiage as violence—have, in turn, used violence in response to ideas they hate. Leftist local authorities then use the threat of violence as an excuse to ideologically discriminate against conservatives. This means public intellectuals like Charles Murray being run off of campus and his leftist professorial cohort viciously assaulted; it means Ann Coulter being targeted for violence at Berkeley; it means universities preemptively banning me and Ayaan Hirsi Ali and Condoleezza Rice and even Jason Riley.
The campus attacks on free speech are merely the most extreme iteration of an ideology that spans from left to right: the notion that your right to free speech ends where my feelings begin. Even Democrats who say that Ann Coulter should be allowed to speak at Berkeley say that nobody should be allowed to contribute to a super PAC (unless you’re a union member, naturally).
Meanwhile, on the right, the president’s attacks on the press have convinced many Republicans that restrictions on the press wouldn’t be altogether bad. A Vanity Fair/60 Minutes poll in late April found that 36 percent of Americans thought freedom of the press “does more harm than good.” Undoubtedly, some of that is due to the media’s obvious bias. CNN’s Jeff Zucker has targeted the Trump administration for supposedly quashing journalism, but he was silent when the Obama administration’s Department of Justice cracked down on reporters from the Associated Press and Fox News, and when hacks like Deputy National Security Adviser Ben Rhodes openly sold lies regarding Iran. But for some on the right, the response to press falsities hasn’t been to call for truth, but to instead echo Trumpian falsehoods in the hopes of damaging the media. Free speech is only important when people seek the truth. Leftists traded truth for tribalism long ago; in response, many on the right seem willing to do the same. Until we return to a common standard under which facts matter, free speech will continue to rest on tenuous grounds.
Ben Shapiro is the editor in chief of The Daily Wire and the host of The Ben Shapiro Show.
Judith Shulevitz
It’s tempting to blame college and university administrators for the decline of free speech in America, and for years I did just that. If the guardians of higher education won’t inculcate the habits of mind required for serious thinking, I thought, who will? The unfettered but civil exchange of ideas is the basic operation of education, just as addition is the basic operation of arithmetic. And universities have to teach both the unfettered part and the civil part, because arguing in a respectful manner isn’t something anyone does instinctively.
So why change my mind now? Schools still cling to speech codes, and there still aren’t enough deans like the one at the University of Chicago who declared his school a safe-space-free zone. My alma mater just handed out prizes for “enhancing race and/or ethnic relations” to two students caught on video harassing the dean of their residential college, one screaming at him that he’d created “a space for violence to happen,” the other placing his face inches away from the dean’s and demanding, “Look at me.” All this because they deemed a thoughtful if ill-timed letter about Halloween costumes written by the dean’s wife to be an act of racist aggression. Yale should discipline students who behave like that, even if they’re right on the merits (I don’t think they were, but that’s not the point). They certainly don’t deserve awards. I can’t believe I had to write that sentence.
But in abdicating their responsibilities, the universities have enabled something even worse than an attack on free speech. They’ve unleashed an assault on themselves. There’s plenty of free speech around; we know that because so much bad speech—low-minded nonsense—tests our constitutional tolerance daily, and that’s holding up pretty well. (As Nicholas Lemann observes elsewhere in this symposium, Facebook and Google represent bigger threats to free speech than students and administrators.) What’s endangered is good speech.
Universities were setting themselves up to be used. Provocateurs exploit the atmosphere on campus to goad overwrought students, then gleefully trash the most important bastion of our crumbling civil society. Higher education and everything it stands for—logical argument, the scientific method, epistemological rigor—start to look illegitimate. Voters perceive tenure and research and higher education itself as hopelessly partisan and unworthy of taxpayers’ money.
The press is a secondary victim of this process of delegitimization. If serious inquiry can be waved off as ideology, then facts won’t be facts and reporting can’t be trusted. All journalism will be equal to all other journalism, and all journalists will be reduced to pests you can slam to the ground with near impunity. Politicians will be able to say anything and do just about anything and there will be no countervailing authority to challenge them. I’m pretty sure that that way lies Putinism and Erdoganism. And when we get to that point, I’m going to start worrying about free speech again.
Judith Shulevitz is a critic in New York.
Harvey Silverglate
Free speech is, and has always been, threatened. The title of Nat Hentoff’s 1993 book Free Speech for Me—But Not for Thee is no less true today than at any time, even as the Supreme Court has accorded free speech a more absolute degree of protection than in any previous era.
Since the 1980s, the high court has decided most major free-speech cases in favor of speech, with most of the major decisions being unanimous or nearly so.
Women’s-rights advocates were turned back by the high court in 1986 when they sought to ban the sale of printed materials that, because they were deemed pornographic by some, were alleged to promote violence against women. Censorship in the name of gender-based protection thus failed to gain traction.
Despite the demands of civil-rights activists, the Supreme Court in 1992 declared cross-burning to be a protected form of expression in R.A.V. v. City of St. Paul, a decision later refined to strengthen a narrow exception for when cross-burning occurs primarily as a physical threat rather than merely an expression of hatred.
Other attempts at First Amendment circumvention have been met with equally decisive rebuff. When the Reverend Jerry Falwell sued Hustler magazine publisher Larry Flynt for defamation growing out of a parody depicting Falwell’s first sexual encounter as a drunken tryst with his mother in an outhouse, a unanimous Supreme Court lectured on the history of parody as a constitutionally protected, even if cruel, form of social and political criticism.
When the South Boston Allied War Veterans, sponsor of Boston’s Saint Patrick’s Day parade, sought to exclude a gay veterans’ group from marching under its own banner, the high court unanimously held that as a private entity, even though marching in public streets, the Veterans could exclude any group marching under a banner conflicting with the parade’s socially conservative message, notwithstanding public-accommodations laws. The gay group could have its own parade but could not rain on that of the conservatives.
Despite such legal clarity, today’s most potent attacks on speech are coming, ironically, from liberal-arts colleges. Ubiquitous “speech codes” limit speech that might insult, embarrass, or “harass,” in particular, members of “historically disadvantaged” groups. “Safe spaces” and “trigger warnings” protect purportedly vulnerable students from hearing words and ideas they might find upsetting. Student demonstrators and threats of violence have forced the cancellation of controversial speakers, left and right.
It remains unclear how much campus censorship results from politically correct faculty, control-obsessed student-life administrators, or students socialized and indoctrinated into intolerance. My experience suggests that the bureaucrats are primarily, although not entirely, to blame. When sued, colleges either lose or settle, pay a modest amount, and then return to their censorious ways.
This trend threatens the heart and soul of liberal education. Eventually it could infect the entire society as these students graduate and assume influential positions. Whether a resulting flood of censorship ultimately overcomes legal protections and weakens democracy remains to be seen.
Harvey Silverglate, a Boston-based lawyer and writer, is the co-author of The Shadow University: The Betrayal of Liberty on America’s Campuses (Free Press, 1998). He co-founded the Foundation for Individual Rights in Education in 1999 and is on FIRE’s board of directors. He spent some three decades on the board of the ACLU of Massachusetts, two of those years as chairman. Silverglate taught at Harvard Law School for a semester during a sabbatical he took in the mid-1980s.
Christina Hoff Sommers
When Heather Mac Donald’s “blue lives matter” talk was shut down by a mob at Claremont McKenna College, the president of neighboring Pomona College sent out an email defending free speech. Twenty-five students shot back a response: “Heather Mac Donald is a fascist, a white supremacist . . . classist, and ignorant of interlocking systems of domination that produce the lethal conditions under which oppressed peoples are forced to live.”
Some blame the new campus intolerance on hypersensitive, over-trophied millennials. But the students who signed that letter don’t appear to be fragile. Nor do those who recently shut down lectures at Berkeley, Middlebury, DePaul, and Cal State LA. What they are is impassioned. And their passion is driven by a theory known as intersectionality.
Intersectionality is the source of the new preoccupation with microaggressions, cultural appropriation, and privilege-checking. It’s the reason more than 200 colleges and universities have set up Bias Response Teams. Students who overhear potentially “otherizing” comments or jokes are encouraged to make anonymous reports to their campus BRTs. A growing number of professors and administrators have built their careers around intersectionality. What is it exactly?
Intersectionality is a neo-Marxist doctrine that views racism, sexism, ableism, heterosexism, and all forms of “oppression” as interconnected and mutually reinforcing. Together these “isms” form a complex arrangement of advantages and burdens. A white woman is disadvantaged by her gender but advantaged by her race. A Latino is burdened by his ethnicity but privileged by his gender. According to intersectionality, American society is a “matrix of domination,” with affluent white males in control. Not only do they enjoy most of the advantages, they also determine what counts as “truth” and “knowledge.”
But marginalized identities are not without resources. According to one of intersectionality’s leading theorists, Patricia Hill Collins (former president of the American Sociological Association), disadvantaged groups have access to deeper, more liberating truths. To find their voice, and to enlighten others to the true nature of reality, they require a safe space—free of microaggressive put-downs and imperious cultural appropriations. Here they may speak openly about their “lived experience.” Lived experience, according to intersectional theory, is a better guide to the truth than self-serving Western and masculine styles of thinking. So don’t try to refute intersectionality with logic or evidence: That only proves that you are part of the problem it seeks to overcome.
How could comfortably ensconced college students be open to a convoluted theory that describes their world as a matrix of misery? Don’t they flinch when they hear intersectional scholars like bell hooks refer to the U.S. as an “imperialist, white-supremacist, capitalist patriarchy”? Most take it in stride because such views are now commonplace in high-school history and social studies texts. And the idea that knowledge comes from lived experience rather than painstaking study and argument is catnip to many undergrads.
Silencing speech and forbidding debate is not an unfortunate by-product of intersectionality—it is a primary goal. How else do you dismantle a lethal system of oppression? As the protesting students at Claremont McKenna explained in their letter: “Free speech . . . has given those who seek to perpetuate systems of domination a platform to project their bigotry.” To the student activists, thinkers like Heather Mac Donald and Charles Murray are agents of the dominant narrative, and their speech is “a form of violence.”
It is hard to know how our institutions of higher learning will find their way back to academic freedom, open inquiry, and mutual understanding. But as long as intersectional theory goes unchallenged, campus fanaticism will intensify.
Christina Hoff Sommers is a resident scholar at the American Enterprise Institute. She is the author of several books, including Who Stole Feminism? and The War Against Boys. She also hosts The Factual Feminist, a video blog. @Chsommers
John Stossel

Yes, some college students do insane things. Some called police when they saw “Trump 2016” chalked on sidewalks. The vandals at Berkeley and the thugs who assaulted Charles Murray are disgusting. But they are a minority. And these days people fight back.
Someone usually videotapes the craziness. Yale’s “Halloween costume incident” drove away two sensible instructors, but videos mocking Yale’s snowflakes, like “Silence U,” make such abuse less likely. Groups like Young America’s Foundation (YAF) publicize censorship, and the Foundation for Individual Rights in Education (FIRE) sues schools that restrict speech.
Consciousness has been raised. On campus, the worst is over. Free speech has always been fragile. I once took cameras to Seton Hall law school right after a professor gave a lecture on free speech. Students seemed to get the concept. Sean, now a lawyer, said, “Protect freedom for thought we hate; otherwise you never have a society where ideas clash, and we come up with the best idea.” So I asked, “Should there be any limits?” Students listed “fighting words,” “shouting fire in a theater,” malicious libel, etc.: reasonable, court-approved exceptions. But then they went further. Several wanted bans on “hate” speech. “No value comes out of hate speech,” said Javier. “It inevitably leads to violence.”
“No, it doesn’t,” I argued. “Also, doesn’t hate speech bring ideas into the open, so you can better argue about them, bringing you to the truth?”
“No,” replied Floyd. “With hate speech, more speech is just violence.”
So I pulled out a big copy of the First Amendment and wrote, “exception: hate speech.”
Two students wanted a ban on flag desecration “to respect those who died to protect it.”
One wanted bans on blasphemy:
“Look at the gravity of the harm versus the value in blasphemy—the harm outweighs the value.”
Several wanted a ban on political speech by corporations because of “the potential for large corporations to improperly influence politicians.”
Finally, Jillian, also now a lawyer, wanted hunting videos banned.
“It encourages harm down the road.”
I asked her, incredulously, “You’re comfortable locking up people who make a hunting film?”
“Oh, yeah,” she said. “It’s unnecessary cruelty to feeling and sentient beings.”
So, I picked up my copy of the Bill of Rights again. After “no law . . . abridging freedom of speech,” I added: “Except hate speech, flag burning, blasphemy, corporate political speech, depictions of hunting . . . ”
That embarrassed them. “We may have gone too far,” said Sean. Others agreed. One said, “Cross out the exceptions.” Free speech survived, but it was a close call. Respect for unpleasant speech will always be thin. Then-Senator Hillary Clinton wanted violent video games banned. John McCain and Russ Feingold tried to ban political speech. Donald Trump wants new libel laws, and if you burn a flag, he tweeted, consequences might be “loss of citizenship or a year in jail!” Courts or popular opinion killed those bad ideas.
Free speech will survive, assuming those of us who appreciate it use it to fight those who would smother it.
John Stossel is a FOX News/FOX Business Network Contributor.
Warren Treadgold

Even citizens of dictatorships are free to praise the regime and to talk about the weather. The only speech likely to be threatened anywhere is the sort that offends an important and intolerant group. What is new in America today is a leftist ideology that threatens speech precisely because it offends certain important and intolerant groups: feminists and supposedly oppressed minorities.
So far this new ideology is clearly dominant only in colleges and universities, where it has become so strong that most controversies concern outside speakers invited by students, not faculty speakers or speakers invited by administrators. Most academic administrators and professors are either leftists or have learned not to oppose leftism; otherwise they would probably never have been hired. Administrators treat even violent leftist protestors with respect and are ready to prevent conservative and moderate outsiders from speaking rather than provoke protests. Most professors who defend conservative or moderate speakers argue that the speakers’ views are indeed noxious but say that students should be exposed to them to learn how to refute them. This is very different from encouraging a free exchange of ideas.
Although the new ideology began on campuses in the ’60s, it gained authority outside them largely by means of several majority decisions of the Supreme Court, from Roe (1973) to Obergefell (2015). The Supreme Court decisions that endanger free speech are based on a presumed consensus of enlightened opinion that certain rights favored by activists have the same legitimacy as rights explicitly guaranteed by the Constitution—or even more legitimacy, because the rights favored by activists are assumed to be so fundamental that they need no grounding in specific constitutional language. The Court majorities found restricting abortion rights or homosexual marriage, as large numbers of Americans wish to do, to be constitutionally equivalent to restricting black voting rights or interracial marriage. Any denial of such equivalence therefore opposes fundamental constitutional rights and can be considered hate speech, advocating psychological and possibly physical harm to groups like women seeking abortions or homosexuals seeking approval. Such speech may still be constitutionally protected, but acting upon it is not.
This ideology of forbidding allegedly offensive speech has spread to most of the Democratic Party and the progressive movement. Rather than seeing themselves as taking one side in a free debate, progressives increasingly argue (for example) that opposing abortion is offensive to women and supporting the police is offensive to blacks. Some politicians object so strongly to such speech that despite their interest in winning votes, they attack voters who disagree with them as racists or sexists. Expressing views that allegedly discriminate against women, blacks, homosexuals, and various other minorities can now be grounds for a lawsuit.
Speech that supposedly offends women or minorities has already cost some people their careers, their businesses, and their opportunities to deliver or hear speeches. Such intimidation is the intended result of an ideology that threatens free speech.
Warren Treadgold is a professor of history at Saint Louis University.
Matt Welch

Like a sullen zoo elephant rocking back and forth from leg to leg, there is an oversized paradox we’d prefer not to see standing smack in the sightlines of most of our policy debates. Day by day, even minute by minute, America simultaneously gets less free in the laboratory, but more free in the field. Individuals are constantly expanding the limits and applications of their own autonomy, even as government transcends prior restraints on how far it can reach into our intimate business.
So it is that the Internal Revenue Service can charge foreign banks with collecting taxes on U.S. citizens (therefore causing global financial institutions to shun many of the estimated 6 million-plus Americans who live abroad), even while block-chain virtuosos make illegal transactions wholly undetectable to authorities. It has never been easier for Americans to travel abroad, and it’s never been harder to enter the U.S. without showing passports, fingerprints, retinal scans, and even social-media passwords.
What’s true for banking and tourism is doubly true for free speech. Social media has given everyone not just a platform but a megaphone (as unreadable as our Facebook timelines have all become since last November). At the same time, the federal government during this unhappy 21st century has continuously ratcheted up prosecutorial pressure against leakers, whistleblowers, investigative reporters, and technology companies.
A hopeful bulwark against government encroachment unique to the free-speech field is the Supreme Court’s very strong First Amendment jurisprudence in the past decade or two. Donald Trump, like Hillary Clinton before him, may prattle on about locking up flag-burners, but Antonin Scalia and the rest of SCOTUS protected such expression back in 1990. Barack Obama and John McCain (and Hillary Clinton—she’s as bad as any recent national politician on free speech) may lament the Citizens United decision, but it’s now firmly legal to broadcast unfriendly documentaries about politicians without fear of punishment, no matter the electoral calendar.
But in this very strength lies what might be the First Amendment’s most worrying vulnerability. Barry Friedman, in his 2009 book The Will of the People, made the persuasive argument that the Supreme Court typically ratifies, post facto, where public opinion has already shifted. Today’s culture of free speech could be tomorrow’s legal framework. If so, we’re in trouble.
For evidence of free-speech slippage, just read around you. When both major-party presidential nominees react to terrorist attacks by calling to shut down corners of the Internet, and when their respective supporters are actually debating the propriety of sucker-punching protesters they disagree with, it’s hard to escape the conclusion that our increasingly shrill partisan sorting is turning the very foundation of post-1800 global prosperity into just another club to be swung in our national street fight.
In the eternal cat-and-mouse game between private initiative and government control, the former is always advantaged by the latter’s fundamental incompetence. But what if the public willingly hands government the power to muzzle? It may take a counter-cultural reformation to protect this most noble of American experiments.
Matt Welch is the editor at large of Reason.
Adam J. White

Free speech is indeed under threat on our university campuses, but the threat did not begin there and it will not end there. Rather, the campus free-speech crisis is a particularly visible symptom of a much more fundamental crisis in American culture.
The problem is not that some students, teachers, and administrators reject traditional American values and institutions, or even that they are willing to menace or censor others who defend those values and institutions. Such critics have always existed, and they can be expected to use the tools and weapons at their disposal. The problem is that our country seems to produce too few students, teachers, and administrators who are willing or able to respond to them.
American families produce children who arrive on campus unprepared for, or uninterested in, defending our values and institutions. For our students who are focused primarily on their career prospects (if on anything at all), “[c]ollege is just one step on the continual stairway of advancement,” as David Brooks observed 16 years ago. “They’re not trying to buck the system; they’re trying to climb it, and they are streamlined for ascent. Hence they are not a disputatious group.”
Meanwhile, parents bear incomprehensible financial burdens to get their kids through college, without a clear sense of precisely what their kids will get out of these institutions in terms of character formation or civic virtue. With so much money at stake, few can afford for their kids to pursue more than career prospects.
Those problems are not created on campus, but they are exacerbated there, as too few college professors and administrators see their institutions as cultivators of American culture and republicanism. Confronted with activists’ rage, they offer no competing vision of higher education—let alone a compelling one.
Ironically, we might borrow a solution from the Left. Where progressives would leverage state power in service of their health-care agenda, we could do the same for education. State legislatures and governors, recognizing the present crisis, should begin to reform and renegotiate the fundamental nature of state universities. By making state universities more affordable, more productive, and more reflective of mainstream American values, they will attract students—and create incentives for competing private universities to follow suit.
Let’s hope they do it soon, for what’s at stake is much more than just free speech on campus, or even free speech writ large. In our time, as in Tocqueville’s, “the instruction of the people powerfully contributes to the support of a democratic republic,” especially “where instruction which awakens the understanding is not separated from moral education which amends the heart.” We need our colleges to cultivate—not cut down—civic virtue and our capacity for self-government. “Republican government presupposes the existence of these qualities in a higher degree than any other form,” Madison wrote in Federalist 55. If “there is not sufficient virtue among men for self-government,” then “nothing less than the chains of despotism” can restrain us “from destroying and devouring one another.”
Adam J. White is a research fellow at the Hoover Institution.
Cathy Young

A writer gets expelled from the World Science Fiction Convention for criticizing the sci-fi community’s preoccupation with racial and gender “inclusivity” while moderating a panel. An assault on free speech, or an exercise of free association? How about when students demand the disinvitation of a speaker—or disrupt the speech? When a critic of feminism gets banned from a social-media platform for unspecified “abuse”?
Such questions are at the heart of many recent free-speech controversies. There is no censorship by government; but how concerned should we be when private actors effectively suppress unpopular speech? Even in the freest society, some speech will—and should—be considered odious and banished to unsavory fringes. No one weeps for ostracized Holocaust deniers or pedophilia apologists.
But shunned speech needs to remain a narrow exception—or acceptable speech will inexorably shrink. As current Federal Communications Commission chairman Ajit Pai cautioned last year, First Amendment protections will be hollowed out unless undergirded by cultural values that support a free marketplace of ideas.
Sometimes, attacks on speech come from the right. In 2003, an Iraq War critic, reporter Chris Hedges, was silenced at Rockford College in Illinois by hecklers who unplugged the microphone and rushed the stage; some conservative pundits defended this as robust protest. Yet the current climate on the left—in universities, on social media, in “progressive” journalism, in intellectual circles—is particularly hostile to free expression. The identity-politics left, fixated on subtle oppressions embedded in everyday attitudes and language, sees speech-policing as the solution.
Is hostility to free-speech values on the rise? New York magazine columnist Jesse Singal argues that support for restrictions on public speech offensive to minorities has remained steady, and fairly high, since the 1970s. Perhaps. But the range of what qualifies as offensive—and which groups are to be shielded—has expanded dramatically. In our time, a leading liberal magazine, the New Republic, can defend calls to destroy a painting of lynching victim Emmett Till because the artist is white and guilty of “cultural appropriation,” and a feminist academic journal can be bullied into apologizing for an article on transgender issues that dares to mention “male genitalia.”
There is also a distinct trend of “bad” speech being squelched by coercion, not just disapproval. That includes the incidents at Middlebury College in Vermont and at Claremont McKenna in California, where mobs not only prevented conservative speakers—Charles Murray and Heather Mac Donald—from addressing audiences but physically threatened them as well. It also includes the use of civil-rights legislation to enforce goodthink in the workplace: Businesses may face stiff fines if they don’t force employees to call a “non-binary” co-worker by the singular “they,” even when talking among themselves.
These trends make a mockery of liberalism and enable the kind of backlash we have seen with Donald Trump’s election. But the backlash can bring its own brand of authoritarianism. It’s time to start rebuilding the culture of free speech across political divisions—a project that demands, above all, genuine openness and intellectual consistency. Otherwise it will remain, as the late, great Nat Hentoff put it, a call for “free speech for me, but not for thee.”
Cathy Young is a contributing editor at Reason.
Robert J. Zimmer

Free speech is not a natural feature of human society. Many people are comfortable with free expression for views they agree with but would withhold this privilege for those they deem offensive. People justify such restrictions by various means: the appeal to moral certainty, political agendas, demand for change, opposing change, retaining power, resisting authority, or, more recently, not wanting to feel uncomfortable. Moral certainty about one’s views or a willingness to indulge one’s emotions makes it easy to assert that others are doing true damage or creating unacceptable offense simply by presenting a fundamentally different perspective.
The resulting challenges to free expression may come in the form of laws, threats, pressure (whether societal, group, or organizational), or self-censorship in the face of a prevailing consensus. Specific forms of challenge may be more or less pronounced as circumstances vary. But the widespread temptation to consider the silencing of “objectionable” viewpoints as acceptable implies that the challenge to free expression is always present.
The United States today is no exception. We benefit from the First Amendment, which asserts that the government shall make no law abridging the freedom of speech. However, fostering a society supporting free expression involves matters far beyond the law. The ongoing and increasing demonization of one group by another creates a political and social environment conducive to suppressing speech. Even violent acts opposing speech can become acceptable or encouraged. Such behavior is evident at both political rallies and university events. Our greatest current threat to free expression is the emergence of a national culture that accepts the legitimacy of suppression of speech deemed objectionable by a segment of the population.
University and college campuses present a particularly vivid instance of this cultural shift. There have been many well-publicized episodes of speakers being disinvited or prevented from speaking because of their views. However, the problem is much deeper, as there is significant self-censorship on many campuses. Both faculty and students sometimes find themselves silenced by social and institutional pressures to conform to “acceptable” views. Ironically, the very mission of universities and colleges to provide a powerful and deeply enriching education for their students demands that they embrace and protect free expression and open discourse. Failing to do so significantly diminishes the quality of the education they provide.
My own institution, the University of Chicago, through the words and actions of its faculty and leaders since its founding, has asserted the importance of free expression and its essential role in embracing intellectual challenge. We continue to do so today as articulated by the Chicago Principles, which strongly affirm that “the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed.” It is only in such an environment that universities can fulfill their own highest aspirations and provide leadership by demonstrating the value of free speech within society more broadly. A number of universities have joined us in reinforcing these values. But it remains to be seen whether the faculty and leaders of many institutions will truly stand up for these values, and in doing so provide a model for society as a whole.
Robert J. Zimmer is the president of the University of Chicago.