Mea culpa. In my morning mail I find a magazine with yet one more review of a book that I predicted was born to blush unseen, Patrick McCarthy’s Camus.1 This one is by Alfred Kazin, no less. So there seems to be something of a Camus revival after all. The editors of COMMENTARY had suggested that I pronounce a few heartfelt words; they were aware that I had known Camus and shared his turf, so to speak, in Algiers and Paris during and after World War II. Actually, our acquaintance was slight. In his famous battle with Sartre, Merleau-Ponty, Beauvoir & Co., I was on his side and said so, but it was they who had been my friends. There was something off-putting about Camus; he always had his dukes up, as someone said. He had fathers, brothers, lovers, but very few friends. Anyway, I demurred at first on the grounds that it was all covered with cobwebs and no one would care. Aside from a handful of archeo-Tolstoyans, neo-Gandhians, and hapless college students, who reads Camus nowadays?
This was like standing in a gale and wetting one’s finger to see if there’s any wind. McCarthy’s book had already appeared in London, and some of the resultant dithyrambs are proudly reproduced (together with a handsome photograph of Albert Camus by Cartier-Bresson) on the Random House jacket. Thus I learned that the prestigious Professor Stanley Hoffmann of Harvard, in a review published in England, had admired “McCarthy’s mix of sympathy and critical distance,” and judged his book to be “a balanced and probing examination of Camus’s life and works”; and that the Guardian reviewer thought “McCarthy’s special achievement [was] to have found the persuasive way back to Camus’s writings from his charismatic career. . . . He gives us the myth and the writer as superstar, but also an array of sharp critical tools with which to keep that myth under review.” Here I confess to some bemusement with “myth” and “critical tools,” but more to my purpose at the moment is another quotation from the book jacket, a grand and sweeping one from Conor Cruise O’Brien (who committed a Camus of his own about twelve years ago) to the effect that McCarthy’s book is “the best comprehensive study of Camus in English—and probably in any language,” and that “if it is translated into French [it] might prove to be the turning of the tide of fashion.”
This struck me as improbable: McCarthy in French, and commanding the tide. But why not, after all? Foreign historians and film-makers have played a conspicuous part in the reexamination of the French war record that has been going on so intensely since Marcel Ophuls’s The Sorrow and the Pity was banned (as offensive to the national sensibility) from French television. Michael Marrus and Robert Paxton on Vichy, David Pryce-Jones on the Occupation, Herbert Lottman on the Left Bank2—these people have badly shredded the legends and pieties of the Resistance; and since this, precisely, was the period from which Camus emerged as Voice and Conscience of the new France, it would be interesting to know more about what he was actually doing at the time. Unfortunately, McCarthy tells us little. Even Herbert Lottman (Albert Camus, 1979), our richest source of fact about Camus, is relatively terse on the Occupation period. But this, in any event, is not the sort of revisionism that O’Brien had in mind. He is suggesting that if France could only see Camus with the help of Patrick McCarthy, then the fallen hero might return to public favor.
Stranger things have happened. For all I know some ingenious linguist may at this very moment be bent to his task, laboring to translate this muscular prose, non sequiturs and all, into a language whose genius, the French grammarians keep telling us, is clarity. A dubious proposition, perhaps, that famous clarté française, now that we are struggling with Roland Barthes and Jacques Lacan; but to put McCarthy into any other language, a translator would have to know why he has written this book and what he is trying to say. I submit, pace O’Brien, that this would be no easy task.
The story McCarthy tells of Camus’s life is a foreshortening of what we already have in Lottman; the abbreviation leads to troublesome gaps and leaps, but the general outlines are clear; and anyway we would be suspicious of a biographer who found nothing to mystify him—and us—in the life of a man like Camus. It is when McCarthy summarizes Camus’s thought, or offers his own, that the trouble begins. Not because he challenges or alters the conventional view of Camus as a religious temperament lost in a godless world, a moral absolutist and a leftist disaffected with the Left, but because this view remains curiously unrelated to Camus’s “existence,” to use the period word; because he gives the shortest possible shrift to Camus’s ideas (not merely as they were entertained but as they were lived, which is what matters for the literary artist) and lacks all “feel” for the role they played in shaping the work he left us.
The analysis of The Rebel, for example, is mind-boggling in its simplism. It reads like the marginal comments of not one but a whole committee of professors on an exceptionally incompetent term paper. The comments may be pertinent or not, but they are invariably peremptory, almost stenographic, in form; and McCarthy hardly bothers to tie them together. So we learn that Camus, having set out “to show how Western culture since the French Revolution inspires men to murder,” worked hard and read widely but finally failed because “the gaps in his reading are apparent and the reading itself blurs his intuitive understanding.” We are never told what McCarthy means by Camus’s intuitive understanding, or how it was blurred; or how Camus arrived at his pessimistic conclusions—not about Western culture in general but rather about its nihilistic and “rebellious” tendencies. “André Breton quarrelled with his interpretation of Lautréamont, Sartre told him he knew nothing about Hegel, and even friends disputed what he said about Bakunin.” So the disaster is irremediable. McCarthy explains it with one of his characteristically rapid formulations: “Much of Camus’s art is caught up with concision and the lamentations of L’Homme révolté [The Rebel] do not suit him.” “Caught up” here seems to mean “requires” or “depends on.” In other words, Camus should write short, not long.
McCarthy himself is so fond of writing short that he often escapes us. In his concluding chapter, for example, he tells us that Camus believed that “heroic declarations were not enough because man is fundamentally evil or empty.” And what had taught him this profound truth? “The age of Dachau.” For this all those people had to die? The passage is so typical that it deserves to be quoted at greater length: “[The] pseudo-heroic rhetoric [of terrorists who do nothing but gossip about their ever so sensitive souls] plagues Camus’s plays and spoils whole chapters of La Peste [The Plague]. Yet the age of Dachau also taught him that heroic declarations were not enough because man is fundamentally evil or empty.” But whence that “because”? If man were not fundamentally evil or empty, would heroic declarations have been enough? For what? And did Camus really hold those views about man’s nature? Or did he come to feel, as in La Peste, precisely, that evil and emptiness were a function of man’s situation since he had lost his faith in God, and that they could be alleviated, at least, by moral action? These points may be arguable, but my point is that McCarthy never argues. He fires away. “Although Camus wrote a great deal,” he tells us in his introduction, “only a small part remains alive. He was a bad philosopher and has little to tell us about politics. His plays are wooden even if his novels are superb.” Well, the French word for superb is superbe; the translator should have no trouble with that, even if McCarthy’s “even if” hangs a sort of question mark around it; and even if he never once in the course of this book gives us the impression that he is moved by these novels, or tries to tell us why they deserve to be called superb.
So I had some problems making sense of McCarthy and I think the French would, too. But the key issue for a publisher would be whether there is a chance of “turning the tide,” as O’Brien put it. At the moment, from this distance, the prospects look poor. Gallimard has enshrined Camus in a Pléiade edition, the ultimate in decent burials, and these books are bought, I am told, at a moderate pace, despite their considerable cost. The plays are rarely if ever performed. But statistics prove nothing, one way or the other. The obvious fact is that neither Camus nor his intellectual adversaries of the 40’s and 50’s are what the French call actuels. Wages, prices, and trade balances are actuels, and so are structuralism, language, and psychoanalysis. The terrain has shifted, the jargon and the rules of engagement have changed, the intelligentsia have been largely if not definitively freed from the postwar nightmares, including the totalitarian temptation, at least in its Communist form; so that it is not merely McCarthy who would need to be translated but Camus himself. This can happen, of course; it is probably bound to happen; but not as an effect of O’Brien’s tide of fashion. Camus now belongs to History and Literature. It remains to be seen on what terms.
Meanwhile, McCarthy’s Camus has been widely if not uncritically greeted in this country. Aside from the learned journals and Sunday book sections, we’ve had a solid and intelligent essay by Frederick Brown in the New York Review of Books (November 18, 1982), a brilliant analysis by Norman Podhoretz in the New Criterion (November 1982) of Camus’s work and the political issues underlying both the work and the reactions of critics like O’Brien and McCarthy—a shrewd and penetrating reading that sheds light and gives pleasure—and the aforementioned Kazin in the New Republic (November 29, 1982), “tender[ly] remember[ing] Camus for his opposition to capital punishment, for his opposition to every form of totalitarianism and every pretense behind it,” and commending McCarthy for “bringing out the bare, hard facts of Camus’s life and the many reasons . . . why, in his struggles with fashionable literary opinion, Camus increasingly felt himself to be a failure.” And V.S. Pritchett, briefly, in the New Yorker. E tutti quanti. There will be others, but these are the big guns, blazing away from different angles.
So Patrick McCarthy’s Camus—the work of a professor at Haverford College, Pennsylvania, according to the book jacket—is no mere academic exercise. It has become an event. “Are we in for a Camus revival?” asks Podhoretz. “Indications are that we may be.” I can only bow my head.
The fact remains that reading about Camus is one thing and reading (or performing) his work is another. The revival that Podhoretz is talking about may very well begin with critics and historians and a renewal of interest in a writer’s life and times; but then a broader public must become involved, as happened in the case of Kipling, for example, a few years ago. Failing that, the fire goes out. For the moment we do seem to have a renewed interest (in this country, at least) in Camus’s itinerary, but not really in his work. There is no consensus, as we shall see, on what deserves to survive; and Camus’s biographers have given us a man, a Modern Instance—one that stirs us rather more, as we come to know him, than anything Camus wrote.
To be sure, McCarthy himself makes no such distinction: “The revival of interest in his work” (my emphasis), he tells us in the concluding paragraphs of his book, “has come in the late 70’s. . . . When the [French] Communists and Socialists split in September 1977, Jean Daniel, who had hardly spoken to Camus in the years before his death, began mentioning him in the Nouvel Observateur. Camus’s rejection of left-wing Utopias suddenly seemed fruitful; his suspicion of the Communists was more correct than many had thought.” Does this mean that McCarthy has changed his own mind about flunking Camus out in philosophy and politics? Not really. He goes on to suggest that Camus’s new admirers are merely using him for their own purposes (as if posterity ever did anything else), and he particularly deplores the fact that they are “employing L’Homme révolté [The Rebel] which still seems his worst book.”
For Norman Podhoretz, be it said in passing, The Rebel is the best, not the worst, of Camus—and he gives us some solid and persuasive reasons for his opinion, which McCarthy never does. In fact, says Podhoretz, “the Camus who should be revived is the one to whom justice is finally being done in France . . . not the travesty urged upon us first by O’Brien and now by McCarthy in the name of art but in the actual service of their anti-anti-Communist passions.” This would be a consummation devoutly to be wished—if justice were indeed being done to Camus in France.
I cannot see that it is. For one thing, the passions that Podhoretz is talking about, whether anti, or anti-anti, or even anti-anti-anti, do not seem to me to be raging on the banks of the Seine. Communism (that is, the ideology of Lenin and Stalin and their epigones) has long ceased to be an intellectual problem in France, as opposed to a practical political one. The Communist party itself, after the events of May ’68, the general adoption of the term goulag to designate the Soviet political system, and finally the defection of just about everyone who knows how to read and write, seems to have given up on the intellectuals as a bad job. For another, although Paris is still filled with contemporaries of Camus like Daniel, or former youth who remember reading and being troubled by The Rebel back in the 50’s, when non-Communism was barely tolerated among the intellectuals and anti-Communism taboo, the focus and even the language of the political debate seem (ungratefully) to have left Camus behind.
It is not just the old story of having been right too soon, or for the wrong reasons, although I do believe there is something in that. The trouble, rather, is that Camus worried apocalyptically about the End of History and the Master-Slave Dialectic and the Legitimacy of Terror. He was haunted by Shestov, Dostoevsky, Nietzsche. He wrote The Rebel, perforce, without first taking the precaution of assimilating Lévi-Strauss, Barthes, Althusser, Foucault, and Lacan. Nor, perforce, has he anything to say about what Raymond Aron derisively calls archeo-socialism, the ideology of M. Mitterrand & Co. All this makes Camus rather remote from Paris at the moment; and, as the French say, the absent are always wrong.
Even in his lifetime, after the Nobel Prize in 1957, there was a notable decline of interest in Camus’s work, although The Fall (published in early 1956) had sold very well. According to the schedule he had inscribed in his notebooks, he had now completed the preliminary business of putting his ideas in order, and his real work could begin. Only, it couldn’t. The war in Algeria weighed heavily on him. He was harassed by personal problems, blocked. Ironically, when things finally began to look up and he was writing again, it was too late.
When he died in an automobile crash early in 1960, in a car driven by his friend and publisher Michel Gallimard, Camus was looking forward not only to his magnum opus but to a new home in the south of France and to a whole new life, a program that would at last redeem him and free him from . . . what? I believe (not only on the basis of The Fall but from what he was saying to his friends and in his notebooks at the time) that it was from a deep sense of fraud. The word—for a less upright character—would seem excessive; not for Camus. It was not a fraud that he had deliberately perpetrated but one for which he felt responsible, especially since it had brought him fame and fortune. Call it a misunderstanding, after the title of his play (which Stuart Gilbert translated as Cross Purpose): mistaken identity, aggravated by a peculiar out-of-jointness and sense of doom; it had nagged at him even as he basked in his glory as the exemplar and spokesman of France’s postwar renascence, as a hero of the Resistance and editor of Combat; even as his plays were being produced by the likes of Marcel Herrand and Jean-Louis Barrault, not to speak of his mistress, Maria Casarès; his novel (The Plague) breaking all sales records; his essays (The Myth of Sisyphus and The Rebel) figuring at the center of the controversies roiling the atmosphere of a Paris that had once again, and in part thanks to him, become the literary-intellectual capital of the world; and finally there was the bolt from Stockholm, the Nobel Prize, which struck him as carrying the joke a bit too far.
As always, he pulled himself together. He went to Stockholm and made a fine speech. But he kept protesting, mostly sotto voce, that a mistake had been made, it was all premature, his real work had not really begun. And why not Malraux? Camus felt that he had not so much chosen all those roles; he had been cast in them and played them as well as he could: now Byron, now Nietzsche, now the young Gandhi; in Algiers he had even had some success as a professional on the stage; but his big role, the one that counted most, was precisely the one he had not mastered. And this, surely, was the major reason for the profound depression that settled over him during the last years of his life.
McCarthy, who takes so dim a view of Camus as a thinker, does not hesitate to refer to the “greatness” of the novels—but, once again, ours not to know the reason why. “They are exceptional, if bleak, insights into the modern condition,” he tells us; and their characters are “depicted with irony, ambiguity, and understatement.” That seems to satisfy McCarthy, as motivation for his judgment of the novels, for he does not deign to give us more—unless it be his brief discussion of the narrative techniques of The Plague, which are alleged to constitute an innovation. Because Camus did not believe in God, he invented the non-omniscient narrator. Q.E.D. Anyway, McCarthy’s case is stated, if not made. But Camus himself, alas, knew better. He knew that as a literary artist he had accomplished little or nothing of enduring value, whatever the critics and the sales figures said. Not yet.
Camus grew up, it should be remembered, in a grimly professional tradition; his elders were monsters of productivity like Gide, Proust, Martin du Gard, Montherlant, Bernanos, Romains, Giraudoux, and too many others to mention. In the background were Balzac and Zola and Papa Hugo. Even a maniacal watchmaker like Flaubert ended up with a sizable oeuvre. But as a writer of fiction, Camus had not only produced very little; he had not yet found a distinctive voice. Each of his novels seemed to represent a new departure in style and tone, so that his problem (like that of any young novelist) still lay ahead: to settle down, find his voice and form, and tell his story. But circumstances had always intervened (illness, poverty, the war, politics, journalism), so that in 1954, when he wrote a preface for a volume of early pieces republished (in 1958) to capitalize on his fame, he wistfully confessed that he was still looking forward to “the day when a balance will be established between what I am and what I say.” A curious passage, this, and very moving. It is quite free of the posturing and drama of The Fall, the novella in which (a year or so later) Camus indulges himself in a sort of Dostoevskyan self-abasement, but it is deeply revealing. “On that day,” he continues, “and I scarcely dare write these words, I will be able at last to build the work I have dreamed of.”
Camus’s new life was to begin with an autobiographical novel which he called Le Premier Homme (The First Man). This would take him back to his Algerian origins and give him that longed-for chance to explain himself and his people to the metropolitan French who, having at last retreated from Indochina and relinquished control of Morocco and Tunisia, were beginning to think the unthinkable about Algeria. Was not the French presence there also doomed?
We now know, of course, that it was. But this was far from clear in the 50’s. What made Algeria different was that it had been, since 1830, a land of French settlement. The pieds noirs, as the settlers were called, were deeply rooted and a million strong. Only a very small number of them were colons, or landed proprietors, on the “colonialist” model propagated by the Left. By Camus’s time, most of the Europeans in Algeria were urban and many—including Camus’s family—were very poor. Nor were they all of metropolitan French origin; at least as many had come from Spain, Corsica, Malta, and Italy. If one adds the Jews (who were given easy access to French citizenship by the Crémieux Decree, late in the 19th century), it becomes obvious that the non-Muslim population of Algeria rather resembled the American melting pot. Although doomed (by a lower birthrate) to remain a minority, it was a people, a folk, with its dialects, traditions, and values, almost as difficult for the mainland French to understand as the Arabs.
Presumably, Le Premier Homme would have been bathed in that Mediterranean sunlight that Camus celebrates in his early lyrical essays, the sights, sounds, and smells of Noces and Été to which he returns so nostalgically even in his philosophical work; and in that peculiar moral atmosphere, sensuous, silent, a little humiliated by the intensities of desire and pleasure, that Camus called pudeur. But now there was a new and terrible urgency about his project; at stake as the war news worsened was something more than his own career. The pied noir, Camus felt, was an invisible man, somewhat in the meaning that Ralph Ellison gave that term. He had to be made visible if he were to survive.
McCarthy works hard and to good effect—and so does Lottman—at conveying a sense of what it was like for Camus to grow up in the Algiers of the 1920’s and 1930’s, between his home in proletarian Belcourt and his schooling in the middle-class lycée and, later, at the university. Between Brooklyn and Manhattan, so to speak—deeply attached to his people and above all to his mother, but propelled by his brilliance and by the excellent French public-school system to leave his origins behind; so that it should not surprise us that Norman Podhoretz, the author of Making It and Breaking Ranks, shows an instinctive grasp of the tensions that racked Camus during his celebrated polemic with Sartre and the Temps modernes group, when he was torn between his natural feelings on one hand and the dogmas of the intellectual Left on the other.
Although the pieds noirs ostensibly reflected the political divisions of metropolitan France, and voted as Radicals, Socialists, Communists, Republicans, or whatnot, they inevitably gave these various tendencies what we would call a “hard-hat” orientation: conservative, national, almost tribal. The result of combining French ideas and pied-noir sensibility could be disconcerting and even (as in Edmond Brua’s famous parodies of Corneille in the dialect of Bab-el-Oued) very funny. But this was one form of the Absurd to which Camus remained immune. Briefly a Communist in the 30’s, and then (out of his deepening pacifism) a reluctant supporter of Munich, he gradually came to admit the necessity of resisting the German Occupation. But he saw little point in, and had no stomach for, armed struggle until Leclerc’s troops were approaching Paris; and when he was catapulted into his role as a Resistance hero, in circumstances almost as accidental as those that put Charlie Chaplin at the head of a workers’ demonstration in Modern Times, he felt (as so often in his public life) that he was both the victim and the beneficiary of a misunderstanding.
This must have tried his modesty, such as it was, and his delicate sense of probity; and we know that he resented the demands that were made on his time. But his public persona was far too controlled, dignified, “Spanish,” for humor. Life could and should be dealt with ironically, but it was no laughing matter. In this respect he was like André Malraux: a naturally portentous man. He could be playful and amusing with his friends, but in public his brow was always furrowed, his demeanor always grave. “After thirty, one is responsible for one’s face.” This is a typical Camus aphorism, not necessarily original but memorable and neat. I can’t remember whether he wrote it or said it or both, but it was well before thirty that he acquired his worried look.
He acquired it, one supposes, worrying about his “existential” situation, as it came to be called. Camus’s name for it was the Absurd, the subject of his first important philosophical essay, The Myth of Sisyphus.
In one sense, Camus’s preoccupation with the Absurd simply reflected the current intellectual fashion: God was dead, the universe was bereft of purpose and moral law, and man was adrift in an adventure which is bound to have an unhappy ending, as he put it. These themes were more or less assigned, so to speak, if you were a bright student of philosophy and letters, a protégé of Jean Grenier and an avid reader of La Nouvelle Revue Française. In another sense, however, they spoke very personally to Camus, as we shall see, first because of his poor health and the situation of his family; and then because as he moved out into the world, the roles of Sisyphus—the seducer, the actor, and the conqueror—suggested themselves naturally to him, as did the “happy” Sisyphus of the final line of the book. He was shaping his life as he could.
McCarthy scolds Camus for the inadequacy of The Myth of Sisyphus as a work of philosophy, and indeed it is not difficult to fault it on that score. But reading Sisyphus as if it were a formal treatise is no more rewarding than reading The Rebel as if it were “political science.” These are literary essays in the tradition of Montaigne, Pascal, Chamfort, and—closer to Camus—of Jean Grenier and Simone Weil. They are written with an easy authority and verbal felicity, but also with passion; and they speak directly to Camus’s own concerns. This is why they were so effective and had a palpable influence on their time.
The Rebel, especially, must be counted as one of the crucial events in the long, slow process of decontaminating the French intelligentsia after the war. Camus’s analysis of the history and spirit of revolutionary thinking seems quirky, incomplete, and abstract today, as indeed it did to many—even in France—at the time. The disciplines of the social sciences, economics, anthropology, sociology, are foreign to Camus; nor does it occur to him to examine the psychology of his rebels and revolutionists (among whom, following the literary fashion, he gives prominent place to such sports of nature as Lautréamont and the Marquis de Sade) in anything but moral and philosophical terms. Still, within its own limitations, this book was an act of lucidity and courage. It is not clear to me that the anti-Communism of The Rebel is what brings McCarthy’s wrath down upon it, as Podhoretz alleges, but Podhoretz is certainly right in arguing that this is the Camus who deserves to be revived.
If any. The question remains. The more I remember Camus and read of him, the more avid I become for details, insights, anything that will help me retrace and understand his journey from Belcourt to Paris and Lourmarin and finally to that stretch of wet highway near Villeblevin where he died. Nothing much came of the few times we met, but I keep recalling bits and pieces of talk and the way he had of squinting to keep the cigarette smoke out of his eyes, a sort of tough-guy pose. Clearly, it’s not the writer I’ve been looking for, but the man.
Not that the one excludes the other, but the fact is that we are often embarrassed or exasperated or bored by the writer, never by the man. It so happens that I’ve been reading myself back into that period, for reasons of my own, turning hundreds of pages in books that have been sitting untouched on my shelves for all these years. The pages flake and crumble—the French used such poor paper during and right after the war—but I am surprised and pleased at how much remains readable, even lively and exciting, today. But, alas, not much of Camus. It pains me to say this. Moreover, I find myself turning away from the philosophical essays (his best work) to his most mediocre writings, the short pieces of L’Envers et l’Endroit, for example, simply because the writer is more in command in the former and Brother Camus more visible in the latter.
In any event, it’s hard going. The mind is always vigorous, the posture noble, the prose harmonious and lucid in the traditional French way. Plenty to admire, in short, but so little to savor! So lifeless and cold! This must be the effect of what Barthes called “zero-degree writing.” You keep your eyes open as long as you can, propping your temples with your fingertips, and then reach for The Fall—an embarrassingly melodramatic book but filled with Camus’s person—or you pick up a Queneau, a Ponge, a Michaux, or even (my secret life!) a Marcel Aymé, to stave off sleep or despair.
The trouble is, we’ve been through these arguments so many times. So much of what he says seems, today, to go without saying. But let us grant that he says it with nobility and power, and at a time when it clearly did not go without saying. Before and after The Rebel, a book that Richard Wollheim in the Cambridge Review did not hesitate to compare to Rousseau and Hobbes, Camus took his role as a secular saint very seriously, at the risk (and finally at the cost) of isolating himself from all the ideological gangs whose aim, he said, was not to convince but to kill.
One of the stories from his childhood that stuck in his mind was about his father coming home and vomiting, after having witnessed a capital execution. In his arguments with D’Astier, Jeanson, and Sartre he insisted that his rejection of violence as a means did not lessen his commitment to justice as an end. Indeed, it was a weakness of his position that, in line with the tradition of the French Left, he conceded that the revolutionaries did in fact have justice on their side, regardless of the means they used. But the core of his position, the leitmotif of his political attitude from beginning to end, was an integral pacifism, from which he departed only once or twice—in great anguish—but never for long.
His biographers, McCarthy especially, seem to feel that in his battle with the Temps modernes crowd Camus got much the worst of it, and that this was a major cause of the depression that so darkened those years. But how does one tot up the score? At the time of the break over The Rebel, Sartre and his gang were veering sharply toward Stalinism, precisely when the moribund old monster was hanging the Rajks, Slanskys, and Petkovs of the Popular Democracies, so-called, inventing the fantastic “doctors’ plot” and unleashing a policy of open anti-Semitism in the socialist Fatherland itself. Solzhenitsyn and the “dissidents” were still frozen and voiceless, as far as the West was concerned, but the 50’s were the period of the first worker revolts in the East: Berlin, Poznan, Budapest. Yet issue after issue of Les Temps modernes appeared with an interminable essay by Merleau-Ponty or Sartre or one of their acolytes chopping logic to prove that the Communists were all the more necessarily right because they were so obviously wrong. If this be victory, it was a pyrrhic one; and as the internal and external menace receded and the European recovery took shape, the sophistication and natural skepticism of the intellectuals reasserted themselves. It took time. Debates of this kind are never entirely resolved. But who can deny that it is Camus’s position, on the whole, that now prevails?
Unfortunately, France was beginning to “decolonize,” and what we now call the North-South issue was waiting in the wings to offer Camus’s homme révolté another incarnation, for a time at least. All these political preoccupations, together with his personal problems and who knows what secret suspicion that it was too late, that the die was cast, kept him from settling down to what he stubbornly insisted was his true vocation. If Camus felt defeated, and we know from his notebooks and correspondence that he often did, it was not because Sartre had questioned his competence as a philosopher, or because he had failed to turn the Parisian intellectuals around with a single book. It was because he sat on innumerable committees, held down a job and an office at the Nouvelle Revue Française, wrote editorials for Combat and L’Express—not to speak of his incessant involvement in the theater, his need for sociability and erotic adventure and, increasingly, his intervention on behalf of the victims of the Algerian war. He was a prime example of what his friend Bloch-Michel called la civilisation de l’agenda; and it is a wonder under the circumstances that he managed to write any fiction at all.
What he did write is troubled and oddly obscure. Here and there the lucidity and easy authority of the essayist reappear, in a reflection by Dr. Rieux (The Plague) or in one of Jean-Baptiste Clamence’s tirades (The Fall); but on the whole the novels take us into a world where Camus is still feeling his way, as it were, and where his intentions have given rise to endless interpretation. But this is not merely to say that, like any novelist, Camus created characters who are mysterious and problematical, as was the case with Meursault, the hero of The Stranger, the book that not only brought Camus to the attention of the French public but remains his most convincing performance as a literary artist. However we interpret the behavior of Meursault, we see him as a meaningful instance; he is part of a privatized, affectless, and disquieting world that we recognize as real. Nothing of the sort happens, however, in the two novels that Camus managed to finish in the 50’s. In The Stranger, it was the fictional hero and not the author who was tragically unable to make himself understood. In The Plague and The Fall, it is the other way around. The characters explain themselves interminably, but what the author is trying to tell us remains unclear.
Of the two, The Fall is the more engaging book, a dramatic monologue by a lawyer who has made a career out of defending the weak and the poor, and who now proclaims his vaunted virtue to have been hypocritical and false, although the true cause of his guilt and the sincerity of his repentance remain so well hidden in the convolutions of his confession that Camus’s critics (most of whom agree that the lawyer, Clamence, represents Camus himself) have never ceased to propose new and different explications de texte. The latest of these, by Norman Podhoretz, suggests that Camus felt guilty about failing to oppose Sartre’s pro-Soviet position with a forthright pro-American one, for fear of breaking ranks with the Left and sacrificing his reputation as a secular saint.
Podhoretz’s reading of Camus’s novels is so incisive and brilliant that his idea about The Fall almost sweeps one away, until one remembers that Camus’s deepest and most consistent political instinct was to reject the dichotomy that seems—more than a quarter of a century later—so logical and inescapable to Norman Podhoretz. He did not believe that an anti-Soviet position necessarily implied a pro-American one, as his adversaries kept insisting. His whole point, in fact, after Korea, Berlin, and the onset of German rearmament, was precisely the opposite: that it was urgent to oppose the polarization of East and West—and if this was utopian it is not to be confused with the sort of Utopia that Camus denounced (as leading inevitably to totalitarianism) in The Rebel.
Still, there is a sense in which I believe Podhoretz is right, and indeed that he is “on” to something very deeply buried in Camus’s feelings. “What I am suggesting,” he says, “is that Camus believed in his heart [my emphasis] . . . that the cowardice and hypocrisy of which he accuses himself in The Fall, are the cowardice and hypocrisy involved in his failure to side as clearly with the democracies as Sartre was siding with the Communists.” But to be pro-American meant a good deal more, at the time, than being on the side of the democracies, which Camus certainly was. It meant accepting a policy, a “NATO” outlook, the world according to John Foster Dulles: an entirely different matter, and quite foreign to Camus’s way of thinking about politics. Besides, if we are reduced to surmising what is in Camus’s heart, why limit ourselves to the East-West option? Why dismiss so easily the notion that he was satisfied and at peace with himself on the issue of Algeria? He had had the courage—if it required any—to withhold support from the FLN. But few French intellectuals, even among those who declared themselves for Algerian independence, were enthusiastic about the FLN. He had denounced FLN terrorism, which struck down innocent civilians and might, he said, bring harm to his mother. “I believe in justice but I will defend my mother against justice.” Such a statement managed to shock everyone at once. It made the usual ritualistic bow to insurgency. But who is to say, with the war becoming increasingly tribal, that he really believed that justice was entirely on the FLN side?
Whatever he believed, he found it impossible to do or say anything to help his people. It was a hard thing to be Albert Camus in the 1950’s. If he had any talent or predisposition to guilt—and who hasn’t?—there was no lack of reasons. “One is always a bit at fault,” as Meursault had said in The Stranger.
All his life Camus was haunted by the idea that he was the victim of some sort of mistake, a fatal misunderstanding. In Sisyphus he extends this idea to humanity as a whole, but it was a very personal thing with Camus, as if the universe—even as it delighted him with the beauty of the little Algerian fishing village of Tipasa, then awarded him the Nobel Prize and made him rich—had it in for him. Precisely when and how this notion took possession of him we do not know, but his childhood and youth are so filled with intimations of it that only the most obvious need to be mentioned here.
Item. He had a sun-drenched sensual impoverished happy miserable Mediterranean childhood in the home of Grandmother Sintès in Belcourt, Algiers, an area of docks, warehouses, workshops. Camus never knew his father, who was killed early in World War I. His mother, jealously kept from all suitors by Uncle Etienne, the crippled old barrel-maker described in The Exile and the Kingdom, never remarried. She had to work as a servant and could not properly protect Camus and his older brother against her shrewish, slightly demented, and occasionally brutal old mother. Nor could they, of course, protect her.
Item. Camus did well in school, and the world opened up to him. One of his public-school teachers saw to it that he went on to the lycée. Then the excellent essayist, Jean Grenier, who happened to be in Algiers at that time, took the boy under his protection. Camus had enormous talent, appetite, charm. Everything seemed possible. But then, at the age of seventeen, he learned that he had tuberculosis and might die of it. With luck and care he might keep the disease at bay, but there was practically no prospect of a definitive cure.
Item. In pursuing his education, Camus went to live with an uncle in town, leaving the Belcourt of his boyhood behind. The distance between him and his mother seemed to grow—far greater than the streetcar ride from town. Even as a child Camus had had difficulty communicating with his mother, who was a little deaf. But she was never far from his thoughts. Catherine Camus was a woman of Spanish origin, sweet-tempered, inarticulate, practically illiterate, but with a natural beauty and bearing that seem to have survived the laborious and rather humiliating life she had to accept, after losing her husband.
The idea of malentendu, misunderstanding, first turns up in Camus’s notebooks and then, in The Stranger, in a newspaper clipping that Meursault finds under his mattress in the prison cell where he has been waiting to be tried, ostensibly for the murder of an Arab but really (as the prosecutor develops his case) for failing to show the proper grief and piety on the occasion of the death and burial of his mother. The newspaper clipping is only briefly mentioned in The Stranger; it is one of those disconnected clues that one finds in Camus’s fiction; but after various permutations it turns up again in his work as a play, Le Malentendu, the first of Camus’s plays to be presented (in June 1944) to the Parisian public. It is the story of a son who, after many years abroad, returns to his native land and stays at a hotel run by his mother and sister, who make a practice of murdering their guests in order to raise enough money to escape their dreary life and go abroad. Since he chooses not to reveal his identity immediately, the son is murdered; and when they learn who their victim was, the women commit suicide.
The play was received very badly—practically hissed off the stage—despite the fact that it was produced by Marcel Herrand and starred Maria Casarès. The audience may or may not have been aware that Camus was trying to make an “existentialist” point, namely, that some sort of misunderstanding is built into the nature of things. But Jan, the hero, could easily have revealed his identity to his mother and sister; and if he failed to do so (for perfectly fatuous reasons), what had that to do with the nature of things? It was an accident, pure and simple, as far as the audience could see. In other words, the play was not only about a misunderstanding. It was one. Camus had mobilized Marcel Herrand, Maria Casarès, and the prestigious theater of Les Mathurins, and built his whole enterprise on the basis of a dramatic flaw so obvious that any third-rate critic in Paris could (and did) point it out.
Why do I make a point of all this? Not merely to establish that Camus’s torment over his mother (and his motherland) ran deep, and can be related to his self-castigation in The Fall. He had other reasons to fuel his sense of guilt, as we have seen. But why did Camus, a highly controlled and rational writer in his essays, become so illogical, not to say incoherent, when he undertook a work of imagination?
Practically every major piece of fiction or drama that Camus wrote involves some sort of malentendu. The Plague, which is based on the bizarre idea that an epidemic can serve as an adequate allegory for a political situation like the German occupation of France, is an especially striking example. Camus labored for years on a book that simply refused to “work,” for the obvious reason that political drama disappears in a situation where people are united against a natural disaster and their problems are seen as technological or administrative in their nature. And similar oddities can be pointed out in The Fall, in Caligula, and even in The Stranger, a carefully elaborated apologia by a narrator who staked his life, as it were, on the belief that in the universe of the Absurd you never explained and never complained; in short, made no apologia.
It would be presumptuous, no doubt, to suggest that the question I have raised can be answered simply, once and for all, but it seems to me worth noting that Camus the artist operated in a manner radically different from that of the essayist, or the man of action. As an artist, quite visibly, he became a sleepwalker. To demonstrate this in detail would require another full-length study, but for present purposes there is no need. Not only did Camus’s dreams and obsessions take over in his imaginative works, but they did so with his full approval, so to speak, unhindered by concern about what Jean Grenier or Simone de Beauvoir or Jean-Paul Sartre might think, because art was by definition for Camus the domain of the instinctual, where one came home to one’s deepest self.
Free at last! But coming home meant confronting his feelings for his mother and for a human community—his own—which was about to be destroyed in the name of justice. For a hero of the French Left and a secular saint it was an onerous freedom.
So he longed for it and feared it; and when he finally resolved to undertake The First Man, the beginning of an autobiographical work that would force him to explore his origins and think things through, he did so in utter dread. Fear and Trembling! Because he knew that it would tell him and the entire world (and his mother waiting for him in Algeria) who he really was. The boy from Belcourt, Brother Camus.
1 Random House, 359 pp., $17.95.
2 Vichy France and the Jews, by Michael R. Marrus and Robert O. Paxton (Basic Books); Paris in the Third Reich, by David Pryce-Jones (Holt, Rinehart & Winston); The Left Bank, by Herbert R. Lottman (Houghton Mifflin). To these should be added, from inside the German establishment, the memoirs of Gerhard Heller, chief of the literature section of the Nazi propaganda ministry in Paris. These have recently been published in German; Heller figures prominently as well in the Pryce-Jones and Lottman volumes.
Must-Reads from Magazine
Sex and Work in an Age Without Norms
In the Beginning Was the ‘Hostile Work Environment’
In 1979, the feminist legal thinker Catharine MacKinnon published a book called Sexual Harassment of Working Women. Her goal was to convince the public (especially the courts) that harassment was a serious problem affecting all women whether or not they had been harassed, and that it was discriminatory. “The factors that explain and comprise the experience of sexual harassment characterize all women’s situation in one way or another, not only that of direct victims of the practice,” MacKinnon wrote. “It is this level of commonality that makes sexual harassment a women’s experience, not merely an experience of a series of individuals who happen to be of the female sex.” MacKinnon was not only making a case against clear-cut instances of harassment, but also arguing that the ordinary social dynamic between men and women itself created what she called “hostile work environments.”
The culture was ripe for such arguments. Bourgeois norms of sexual behavior had been eroding for at least a decade, a fact many on the left hailed as evidence of the dawn of a new age of sexual and social freedom. At the same time, however, a Redbook magazine survey published a few years before MacKinnon’s book found that nearly 90 percent of the female respondents had experienced some form of harassment on the job.
MacKinnon’s views might have been radical—she argued for a Marxist feminist jurisprudence reflecting her belief that sexual relations are hopelessly mired in male dominance and female submission—but she wasn’t entirely wrong. The postwar America in which women like MacKinnon came of age offered few opportunities for female agency, and the popular culture of the day reinforced the idea that women were all but incapable of it.
It wasn’t just the perfect housewives in the midcentury mold of Donna Reed and June Cleaver who “donned their domestic harness,” as the historian Elaine Tyler May wrote in her social history Homeward Bound. Popular magazines such as Good Housekeeping, McCall’s, and Redbook reinforced the message; so did their advertisers. A 1955 issue of Family Circle featured an advertisement for Tide detergent that depicted a woman with a rapturous expression on her face actually hugging a box of Tide under the line: “No wonder you women buy more Tide than any other washday product! Tide’s got what women want!” Other advertisements infantilized women by suggesting they were incapable of making basic decisions. “You mean a -woman can open it?” ran one for Alcoa aluminum bottle caps. It is almost impossible to read the articles or view the ads without thinking they were some kind of put-on.
The competing view of women in the postwar era was equally pernicious: the objectified pinup or sexpot. Marilyn Monroe’s hypersexualized character in The Seven Year Itch from 1955 doesn’t even have a name—she’s simply called The Girl. The 1956 film introducing the pulchritudinous Jayne Mansfield to the world was called The Girl Can’t Help It. The behavior of Rat Pack–era men has now been so airbrushed and glamorized that we’ve forgotten just how thoroughly debased their treatment of women was. Even as we thrill to Frank Sinatra’s “nice ’n’ easy” style, we overlook the classic Sinatra movie character’s enjoying an endless stream of showgirls and (barely disguised) prostitutes until forced to settle down with a killjoy ball-and-chain girlfriend. The depiction of women either as childish wives living under the protection of their husbands or brainless sirens sexually available to the first taker was undoubtedly vulgar, but it reflected a reality about the domestic arrangements of Americans after 1945 that was due for a profound revision when the 1960s came along.
And change they did, with a vengeance. The sexual revolution broke down the barriers between the sexes as the women’s-liberation movement insisted that bourgeois domesticity was a prison. The rules melted away, but attitudes don’t melt so readily; Sinatra’s ball-and-chain may have disappeared by common consent, but for a long time it seemed that the kooky sexpot of the most chauvinistic fantasy had simply become the ideal American woman. The distinction between the workplaces of the upper middle class and the singles bars where they sought companionship was pretty blurred.
Which is where MacKinnon came in—although if we look back at it, her objection seems not Marxist in orientation but almost Victorian. She described a workplace in which women were unprotected by old-fashioned social norms against adultery and general caddishness and found themselves mired in a “hostile environment.” She named the problem; it fell to the feminist movement as a whole to enshrine protections against it. They had some success. In 1986, the U.S. Supreme Court embraced elements of MacKinnon’s reasoning when it ruled unanimously in Meritor Savings Bank v. Vinson that harassment that was “sufficiently severe or pervasive” enough to create “a hostile or abusive work environment” was a violation of Title VII of the Civil Rights Act of 1964. The U.S. Equal Employment Opportunity Commission issued rules advising employers to create procedures to combat harassment, and employers followed suit by establishing sexual-harassment policies. Human-resource departments spent countless hours and many millions of dollars on sexual-harassment-awareness training for employees.
With new regulations and enforcement mechanisms, the argument went, the final, fusty traces of patriarchal, protective norms and bad behavior would be swept away in favor of rational legal rules that would ensure equal protection for women in the workplace. The culture might still objectify women, but our legal and employment systems would, in fits and starts, erect scaffolding upon which women who were harassed could seek justice.
But as the growing list of present-day harassers and predators attests—Harvey Weinstein, Louis C.K., Charlie Rose, Michael Oreskes, Glenn Thrush, Mark Halperin, John Conyers, Al Franken, Roy Moore, Matt Lauer, Garrison Keillor, et al.—the system appears to have failed the people it was meant to protect. There were searing moments that raised popular awareness about sexual harassment: (Anita Hill’s testimony about U.S. Supreme Court nominee Clarence Thomas in 1991; Senator Bob Packwood’s ouster for serial groping in 1995). There was, however, still plenty of space for men who harassed and assaulted women (and, in Kevin Spacey’s case, men) to shelter in place.
This wasn’t supposed to happen. Why did it?
Sex and Training
What makes sexual harassment so unnerving is not the harassment. It’s the sex—a subject, even a half-century into our so-called sexual revolution, about which we remain deeply confused.
The challenge going forward, now that the Hollywood honcho Weinstein and other notoriously lascivious beneficiaries of the liberation era have been removed, is how to negotiate the rules of attraction and punish predators in a culture that no longer embraces accepted norms for sexual behavior. Who sets the rules, and how do we enforce them? The self-appointed guardians of that galaxy used to be the feminist movement, but it is in no position to play that role today as it reckons not only with the gropers in its midst (Franken) but the ghosts of gropers past (Bill Clinton).
The feminist movement long ago traded MacKinnon’s radical feminism for political expedience. In 1992 and 1998, when her husband was a presidential candidate and then president, Hillary Clinton covered for Bill, enthusiastically slut-shaming his accusers. Her sin was and is at least understandable, if not excusable, given that the two are married. But what about America’s most glamorous early feminist, Gloria Steinem? In 1998, Steinem wrote of Clinton accuser Kathleen Willey: “The truth is that even if the allegations are true, the President is not guilty of sexual harassment. He is accused of having made a gross, dumb and reckless pass at a supporter during a low point in her life. She pushed him away, she said, and it never happened again. In other words, President Clinton took ‘no’ for an answer.” As for Monica Lewinsky, Steinem didn’t even consider the president’s behavior with a young intern to be harassment: “Welcome sexual behavior is about as relevant to sexual harassment as borrowing a car is to stealing one.”
The consequences of applying to Clinton what Steinem herself called the “one-free-grope” rule are only now becoming fully visible. Even in the case of a predator as malevolent as Weinstein, it’s clear that feminists no longer have a shared moral language or the credibility with which to condemn such behavior. Having tied their movement’s fortunes to political power, especially the Democratic Party, it is difficult to take seriously their injunctions about male behavior on either side of the aisle now (just as it was difficult to take seriously partisans on the right who defended the Alabama Senate candidate and credibly accused child sexual predator Roy Moore). Democrat Nancy Pelosi’s initial hemming and hawing about denouncing accused sexual harasser Representative John Conyers was disappointing but not surprising. As for Steinem, she’s gone from posing undercover as a Playboy bunny in order to expose male vice to sitting on the board of Playboy’s true heir, VICE Media, an organization whose bro-culture has spawned many sexual-harassment complaints. She’s been honored by Rutgers University, which created the Gloria Steinem Chair in Media, Culture, and Feminist Studies. One of the chair’s major endowers? Harvey Weinstein.
In place of older accepted norms or trusted moral arbiters, we have weaponized gossip. “S—-y Media Men” is a Google spreadsheet created by a woman who works in media and who, in the wake of the Weinstein revelations, wanted to encourage other women to name the gropers among us. At first a well-intentioned effort to warn women informally about men who had behaved badly, it quickly devolved into an anonymous unverified online litany of horribles devoid of context. The men named on the list were accused of everything from sending clumsy text messages to rape; Jia Tolentino of the New Yorker confessed that she didn’t believe the charges lodged against a male friend of hers who appeared on the list.
Others have found sisterhood and catharsis on social media, where, on Twitter, the phrase #MeToo quickly became the symbol for women’s shared experiences of harassment or assault. Like the consciousness-raising sessions of earlier eras, the hashtag supposedly demonstrated the strength of women supporting other women. But unlike in earlier eras, it led not to group hugs over readings of The Feminine Mystique, but to a brutally efficient form of insta-justice meted out on an almost daily basis against the accused. Writing in the Guardian, Jessica Valenti praised #MeToo for encouraging women to tell their stories but added, “Why have a list of victims when a list of perpetrators could be so much more useful?” Valenti encouraged women to start using the hashtag as a way to out predators, not merely to bond with one another. Even the New York Times has gone all-in on the assumption that the reckoning will continue: The newspaper’s “gender editor,” Jessica Bennett, launched a newsletter, The #MeToo Moment, described as “the latest news and insights on the sexual harassment and misconduct scandals roiling our society.”
As the also-popular hashtag #OpenSecret suggests, this #MeToo moment has brought with it troubling questions about who knew what and when—and a great deal of anger at gatekeepers and institutions that might have turned a blind eye to predators. The backlash against the Metropolitan Opera in New York is only the most recent example. Reports of conductor James Levine’s molestation of teenagers have evidently been widespread in the classical-music world for decades. And, as many social-media users hinted with their use of the hashtag #itscoming, Levine is not the only one who will face a reckoning.
To be sure, questioning and catharsis are welcome if they spark reforms such as crackdowns on the court-approved payoffs and nondisclosure agreements that allowed sexual predators like Weinstein to roam free for so long. And they have also brought a long-overdue recognition of the ineffectiveness of so much of what passes for sexual-harassment-prevention training in the workplace. As the law professor Lauren Edelman noted in the Washington Post, “There have been only a handful of empirical studies of sexual-harassment training, and the research has not established that such training is effective. Some studies suggest that training may in fact backfire, reinforcing gendered stereotypes that place women at a disadvantage.” One survey at a university found that “men who participated in the training were less likely to view coercion of a subordinate as sexual harassment, less willing to report harassment and more inclined to blame the victim than were women or men who had not gone through the training.”
Realistic Change vs. Impossible Revolution
Because harassment lies at the intersection of law, politics, ideology, and culture, attempts to re-regulate behavior, either by returning to older, more traditional norms, or by weaponizing women’s potential victimhood via Twitter, won’t work. America is throwing the book at foul old violators like Weinstein and Levine, but aside from warning future violators that they may be subject to horrible public humiliation and ruination, how is all this going to fix the problem?
We are a long way from Phyllis Schlafly’s ridiculous remark, made years ago during a U.S. Senate committee hearing, that “virtuous women are seldom accosted,” but Vice President Mike Pence’s rule about avoiding one-on-one social interactions with women who aren’t his wife doesn’t scale up into effective workplace policy, either. The Pence Rule, like corporate H.R. policies about sexual harassment, really exists to protect Pence from liability, not to protect women.
Indeed, the possibility of realistic change is made almost moot by the hysterical ambitions of those who believe they are on the verge of bringing down the edifice of American masculinity the way the Germans brought down the Berlin Wall. Bennett of the Times spoke for many when she wrote in her description of the #MeToo newsletter: “The new conversation goes way beyond the workplace to sweep in street harassment, rape culture, and ‘toxic masculinity’—terminology that would have been confined to gender studies classes, not found in mainstream newspapers, not so long ago.”
Do women need protection? Since the rise of the feminist movement, it has been considered unacceptable to declare that women are weaker than men (even physically), yet, as many of these recent assault cases make clear, this is a plain fact. Men are, on average, physically larger and more aggressive than women; this is why for centuries social codes existed to protect women who were, by and large, less powerful, more vulnerable members of society.
MacKinnon’s definition of harassment at first seemed to acknowledge such differences; she described harassment as “dominance eroticized.” But like all good feminist theorists, she claimed this dominance was socially constructed rather than biological—“the legally relevant content of the term sex, understood as gender difference, should focus upon its social meaning more than upon any biological givens,” she wrote. As such, the reasoning went, men’s socially constructed dominance could be socially deconstructed through reeducation, training, and the like.
Culturally, this is the view that now prevails, which is why we pinball between arguing that women can do anything men can do and worrying that women are all the potential victims of predatory, toxic men. So which is it? Girl Power or the Fainting Couch?
Regardless, when harassment or assault claims arise, the cultural assumptions that feminism has successfully cultivated demand we accept that women are right and men are wrong (hence the insistence that we must believe every woman’s claim about harassment and assault, and the calling out of those who question a woman’s accusation). This gives women—who are, after all, flawed human beings just like men—too much accusatory power in situations where context is often crucial for understanding what transpired. Feminists with a historical memory should recall how they embraced this view after mandatory-arrest laws for partner violence that were passed in the 1990s netted many women for physically assaulting their partners. Many feminist legal scholars at the time argued that such laws were unfair to women precisely because they neglected context. (“By following the letter of the law… law enforcement officers often disregard the context in which victims of violence resort to using violence themselves,” wrote Susan L. Miller in the Violence Against Women journal in 2001.)
Worse, the unquestioned valorization of women’s claims leaves men in the position of being presumed guilty unless proven innocent. Consider a recent tweet by Washington Post reporter and young-adult author Monica Hesse in response to New York Times reporter Farhad Manjoo’s self-indulgent lament. Manjoo: “I am at the point where i seriously, sincerely wonder how all women don’t regard all men as monsters to be constantly feared. the real world turns out to be a legit horror movie that I inhabited and knew nothing about.”
Hesse’s answer: “Surprise! The answer is that we do, and we must, regard all men as potential monsters to be feared. That’s why we cross to the other side of the street at night, and why we sometimes obey when men say ‘Smile, honey!’ We are always aware the alternative could be death.” This isn’t hyperbole in her case; Hesse has so thoroughly internalized the message that men are to be feared, not trusted, that she thinks one might kill her on the street if she doesn’t smile at him. Such illogic makes the Victorian neurasthenics look like the Valkyrie.
But while most reasonable people agree that women and men both need to take responsibility for themselves and exercise good judgment, what this looks like in practice is not going to be perfectly fair, given the differences between men and women when it comes to sexual behavior. In her book, MacKinnon observed of sexual harassment, “Tacitly, it has been both acceptable and taboo; acceptable for men to do, taboo for women to confront, even to themselves.”
That’s one thing we can say for certain is no longer true. Nevertheless, if you begin with the assumption that every sexual invitation is a power play or the prelude to an assault, you are likely to find enemies lurking everywhere. As Hesse wrote in the Washington Post about male behavior: “It’s about the rot that we didn’t want to see, that we shoveled into the garbage disposal of America for years. Some of the rot might have once been a carrot and some of it might have once been a moldy piece of rape-steak, but it’s all fetid and horrific now, and it’s all coming up at once. How do we deal with it? Prison for everyone? Firing for some? …We’re only asking for the entire universe to change. That’s all.”
But women are part of that “entire universe,” too, and it is incumbent on them to make it clear when someone has crossed the line. Both women and men would be better served if they adopted the same rule—“If you see something, say something”—when it comes to harassment. Among the many details that emerged from the recent exposé at Vox about New York Times reporter Glenn Thrush was the setting for the supposedly egregious behavior: It was always after work and after several drinks at a bar. In all of the interactions described, one or usually both of the parties was tipsy or drunk; the women always agreed to go with Thrush to another location. The women also stayed on good terms with Thrush after he made his often-sloppy passes at them, in one case sending friendly text messages and assuring him he didn’t need to apologize for his behavior. The Vox writer, who herself claims to have been victimized by Thrush, argues, “Thrush, just by his stature, put women in a position of feeling they had to suck up and move on from an uncomfortable encounter.” Perhaps. But he didn’t put them in the position of getting drunk after work with him. They put themselves in that position.
Also, as the Thrush story reveals, women sometimes use sexual appeal and banter for their own benefit in the workplace. If we want to clarify the blurred lines that exist around workplace relationships, then we will have to reckon with the women who have successfully exploited them for their own advantage.
None of this means women should be held responsible when men behave badly or illegally. But it puts male behavior in the proper context. Sometimes, things really are just about sex, not power. As New York Times columnist Ross Douthat bluntly noted in a recent debate in New York magazine with feminist Rebecca Traister, “I think women shouldn’t underestimate the extent to which male sexual desire is distinctive and strange and (to women) irrational-seeming. Saying ‘It’s power, not sex’ excludes too much.”
Social-Media Justice or Restorative Justice?
What do we want to happen? Do we want social-media justice or restorative justice for harassers and predators? The first is immediate, cathartic, and brutal, with little consideration for nuance or presumed innocence for the accused. The second is more painstaking because it requires reaching some kind of consensus about the allegations, but it is also ultimately less destructive of the community and culture as a whole.
Social-media justice deploys the powerful force of shame at the mere whiff of transgression, so as to create a regime of prevention. The thing is, Americans don’t really like shame (the sexual revolution taught us that). Our therapeutic age doesn’t think that suppressing emotions and inhibiting feelings—especially about sex—is “healthy.” So either we will have to embrace the instant and unreflective emotiveness of #MeToo culture and accept that its rough justice is better than no justice at all, or we will have to stop overreacting every time a man does something that is untoward but not actually illegal: sending a single, creepy text message is not assault or constant harassment.
After all, it’s not all bad news from the land of masculinity. Rates of sexual violence have fallen 63 percent since 1993, according to statistics from the Rape, Abuse, and Incest National Network, and as scholar Steven Pinker recently observed: “Despite recent attention, workplace sexual harassment has declined over time: from 6.1 percent of GSS [General Social Survey] respondents in 2002 to 3.6 percent in 2014. Too high, but there’s been progress, which can continue.”
Still, many men have taken this cultural moment as an opportunity to reflect on their own understanding of masculinity. In the New York Times, essayist Stephen Marche fretted about the “unexamined brutality of the male libido” and echoed Catharine MacKinnon when he asked, “How can healthy sexuality ever occur in conditions in which men and women are not equal?” He would have done better to ask how we can raise boys who will become men who behave honorably toward women. And how do we even raise boys to become honorable men in a culture that no longer recognizes and rewards honor?
The answers to those questions aren’t immediately clear. But one thing that will make answering them even harder is the promotion of the idea of “toxic masculinity.” New York Times columnist Charles Blow recently argued that “we have to re-examine our toxic, privileged, encroaching masculinity itself. And yes, that also means on some level reimagining the rules of attraction.” But the whole point of the phrase “rules of attraction” is to highlight that there aren’t any and never have been (if you have any doubts, read the 1987 Bret Easton Ellis novel that popularized the phrase). Blow’s lectures about “toxic masculinity” are meant to sow self-doubt in men and thus encourage some enlightened form of masculinity, but that won’t end sexual harassment any more than Lysistrata-style refusal by women to have sex will end war.
Parents should teach their sons about personal boundaries and consent from a young age, just as they teach their daughters, and should unequivocally condemn raunchy and threatening remarks about women, whether they are uttered by a talk-radio host or by the president of the United States. The phrase “that isn’t how decent men behave” should be something every parent utters.
But such efforts are made more difficult by a liberal culture that has decided to equate caddish behavior with assault precisely because it has rejected the strict norms that used to hold sway—the old conservative norms that regarded any transgression against them as a serious violation and punished it accordingly. Instead, in an effort to be a kinder, gentler, more “woke” society that’s understanding of everyone’s differences, we’ve ended up arbitrarily picking and choosing among the various forms of questionable behavior for which we will have no tolerance, all the while failing to come to terms with the costs of living in such a society. A culture that hangs the accused first and asks questions later might have its virtues, but psychological understanding is not one of them.
And so we come back to sex and our muddled understanding of its place in society. Is it a meaningless pleasure you’re supposed to enjoy with as many people as possible before settling down and marrying? Or is it something more important than that? Is it something that you feel empowered to handle in Riot Grrrl fashion, or is getting groped once by a pervy co-worker something that prompts decades of nightmares and declarations that you will “never be the same”? How can we condemn people like Senator Al Franken, whose implicit self-defense is that it’s no big deal to cop a feel every so often, when our culture constantly offers up women like comedian Amy Schumer or Abbi and Ilana of the sketch show Broad City, who argue that women can and should be as filthy and degenerate as the most degenerate guy?
Perhaps it’s progress that the downfall of powerful men who engage in inappropriate sexual behavior is no longer called a “bimbo eruption,” as it was in the days of Bill Clinton, and that the men who harassed or assaulted women are facing the end of their careers and, in some cases, prison. But this is not the great awakening that so many observers have claimed it is. Awakenings need tent preachers to inspire and eager audiences to participate; our #MeToo moment has plenty of those. What it doesn’t have, unless we can agree on new norms for sexual behavior both inside and outside the workplace, is a functional theology that might cultivate believers who will actually practice what they preach.
That functional theology is out of our reach. Which means this moment is just that—a moment. It will die down, impossible though it seems at present. And every 10 or 15 years a new harassment scandal will spark widespread outrage, and we will declare that a new moment of reckoning and realization has emerged. After which the stories will again die down and very little will have changed.
No one wants to admit this. It’s much more satisfying to see the felling of so many powerful men as a tectonic cultural shift, another great leap forward toward equality between the sexes. But it isn’t, because the kind of asexual equality between the genders imagined by those most eager to celebrate our #MeToo moment has never been an ideal that most people embrace. It’s an ideal that willfully overlooks significant differences between the sexes and assumes that thoughtful people can still agree on norms of sexual behavior.
They can’t. And they won’t.
The U.S. will endanger itself if it accedes to Russian and Chinese efforts to change the international system to their liking
A “sphere of influence” is traditionally understood as a geographical zone within which the most powerful actor can impose its will. And nearly three decades after the close of the superpower struggle that Churchill’s speech heralded, spheres of influence are back. At both ends of the Eurasian landmass, the authoritarian regimes in China and Russia are carving out areas of privileged influence—geographic buffer zones in which they exercise diplomatic, economic, and military primacy. China and Russia are seeking to coerce and overawe their neighbors. They are endeavoring to weaken the international rules and norms—and the influence of opposing powers—that stand athwart their ambitions in their respective “near abroads.” Chinese island-building and maritime expansionism in the South China Sea and Russian aggression in Ukraine and intimidation of the Baltic states are part and parcel of the quasi-imperial projects these revisionist regional powers are now pursuing.
Historically speaking, a world made up of rival spheres is more the norm than the exception. Yet such a world is in sharp tension with many of the key tenets of the American foreign-policy tradition—and with the international order that the United States has labored to construct and maintain since the end of World War II.
To be sure, Washington carved out its own spheres of influence in the Western Hemisphere beginning in the 19th century, and America’s myriad alliance blocs in key overseas regions are effectively spheres by another name. And today, some international-relations observers have welcomed the return of what the foreign-policy analyst Michael Lind has recently called “blocpolitik,” hoping that it might lead to a more peaceful age of multilateral equilibrium.
But for more than two centuries, American leaders have generally opposed the idea of a world divided into rival spheres of influence and have worked hard to deny other powers their own. And a reversion to a world dominated by great powers and their spheres of influence would thus undo some of the strongest traditions in American foreign policy and take the international system back to a darker, more dangerous era.

In an extreme form, a sphere of influence can take the shape of direct imperial or colonial control. Yet there are also versions in which a leading power forgoes direct military or administrative domination of its neighbors but nonetheless exerts geopolitical, economic, and ideological influence. Whatever their form, spheres of influence reflect two dominant imperatives of great-power politics in an anarchic world: the need for security vis-à-vis rival powers and the desire to shape a nation’s immediate environment to its benefit. Indeed, great powers have throughout history pursued spheres of influence to provide a buffer against the encroachment of other hostile actors and to foster the conditions conducive to their own security and well-being.
The Persian Empire, Athens and Sparta, and Rome all carved out domains of dominance. The Chinese tribute system—which combined geopolitical control with the spread of Chinese norms and ideas—profoundly shaped the trajectory of East Asia for hundreds of years. The 19th and 20th centuries saw the British Empire, Japan’s Greater East Asia Co-Prosperity Sphere, and the Soviet bloc.
America, too, has played the spheres-of-influence game. From the early 19th century onward, American officials strove for preeminence in the Western Hemisphere—first by running other European powers off much of the North American continent and then by pushing them out of Latin America. With the Monroe Doctrine, first enunciated in 1823, America staked its claim to geopolitical primacy from Canada to the Southern Cone. Over the succeeding generations, Washington worked to achieve military dominance in that area, to tie the countries of the Western Hemisphere to America geopolitically and economically, and even to help pick the rulers of countries from Mexico to Brazil.
If this wasn’t a sphere of influence, nothing was. In 1895, Secretary of State Richard Olney declared that “the United States is practically sovereign on this continent and its fiat is law upon the subjects to which it confines its interposition.” After World War II, moreover, a globally predominant United States steadily expanded its influence into Europe through NATO, into East Asia through various military alliances, and into the Middle East through a web of defense, diplomatic, and political arrangements. The story of global politics over the past 200 years has, in large part, been the story of expanding U.S. influence.
Nonetheless, there has always been something ambivalent—critics would say hypocritical—about American views of this matter. For all the energy Washington has devoted to constructing its geopolitical domain, a “spheres-of-influence world” is in perpetual tension with four strong intellectual traditions in U.S. strategy. These are hegemony, liberty, openness, and exceptionalism.
First, hegemony. The myth of America as an innocent isolationist country during its first 170 years is powerful and enduring; it’s also wrong. From the outset, American statesmen understood that the country’s favorable geography, expanding population, and enviable resource endowments gave it the potential to rival, and ultimately overtake, the European states that dominated world politics. America might be a fledgling republic, George Washington said, but it would one day attain “the strength of a giant.” From the revolution onward, American officials worried, with good reason, that France, Spain, and the United Kingdom would use their North American territories to strangle or contain the young republic. Much of early American diplomacy was therefore geared toward depriving the European powers of their North American possessions, using measures from coercive diplomacy to outright wars of conquest. “The world shall have to be familiarized with the idea of considering our proper dominion to be the continent of North America,” wrote John Quincy Adams in 1819. The only regional sphere of influence that Americans would accept as legitimate was their own.
By the late 19th century, the same considerations were pushing Americans to target spheres of influence farther abroad. As the industrial revolution progressed, it became clear that geography alone might not protect the nation. Aggressive powers could now generate sufficient military strength to dominate large swaths of Europe or East Asia and then harness the accumulated resources to threaten the United States. Moreover, as America itself became an increasingly mighty country that sought to project its influence overseas, its leaders naturally objected to its rivals’ efforts to establish their own preserves from which Washington would be excluded. If much of America’s 19th-century diplomacy was dedicated to denying other powers spheres of influence in the Western Hemisphere, much of the country’s 20th-century diplomacy was an effort to break up or deny rival spheres of influence in Europe and East Asia.
From the Open Door policy, which sought to prevent imperial powers from carving up China, to U.S. intervention in the world wars, to the confrontation with the Soviet Empire in the Cold War, the United States repeatedly acted on the belief that it could be neither as secure nor influential as it desired in a world divided up and dominated by rival nations. The American geopolitical tradition, in other words, has long contained a built-in hostility to other countries’ spheres of influence.
The American ideological tradition shares this sense of preeminence, as reflected in the second key tenet: liberty. America’s founding generation did not see the revolution merely as the birth of a future superpower; they saw it as a catalyst for spreading political liberty far and wide. Thomas Paine proclaimed in 1775 that Americans could “begin the world anew”; John Quincy Adams predicted, several decades later, that America’s liberal ideology was “destined to cover the surface of the globe.” Here, too, the new nation was not cursed with excessive modesty—and here, too, the existence of rival spheres of influence threatened this ambition.
Rival spheres of influence—particularly within the Western Hemisphere—imperiled the survival of liberty at home. If the United States were merely one great power among many on the North American continent, the founding generation worried, it would be forced to maintain a large standing military establishment and erect a sort of 18th-century “garrison state.” Living in perpetual conflict and vigilance, in turn, would corrode the very freedoms for which the revolution had been fought. “No nation,” wrote James Madison, “can preserve its freedom in the midst of continual warfare.” Just as Madison argued, in Federalist No. 10, that “extending the sphere”—expanding the republic—was a way of safeguarding republicanism at home, expanding America’s geopolitical domain was essential to providing the external security that a liberal polity required to survive.
Rival spheres of influence also constrained the prospects for liberty abroad. Although the question of whether the United States should actively support democratic revolutions overseas has been a source of unending controversy, virtually all American strategists have agreed that the country would be more secure and influential in a world where democracy was widespread. Given this mindset, Americans could hardly be desirous of foreign powers—particularly authoritarian powers—establishing formidable spheres of influence that would allow them to dominate the international system or suppress liberal ideals. The Monroe Doctrine was a response to the geopolitical dangers inherent in renewed imperial control of South America; it was also a response to the ideological danger posed by European nations that would “extend the political system to any portion” of the Western Hemisphere. Similar concerns have been at the heart of American opposition to the British Empire and the Soviet bloc.
Economic openness, the third core dynamic of American policy, has long served as a commercial counterpart to America’s ideological proselytism. Influenced as much by Adam Smith as by Alexander Hamilton, early American statecraft promoted free trade, neutral rights, and open markets, both to safeguard liberty and enrich a growing nation. This mission has depended on access to the world’s seas and markets. When that access was circumscribed—by the British in 1812 and by the Germans in 1917—Americans went to war to preserve it. It is unsurprising, then, that Americans also looked askance at efforts by other powers to establish areas that might be walled off from U.S. trade and investment—and from the spread of America’s capitalist ideology.
A brief list of robust policy endeavors underscores the persistent U.S. hostility to an economically closed, spheres-of-influence world: the Model Treaty of 1776, designed to promote free and reciprocal trade; John Hay’s Open Door policy of 1899, designed to prevent any outside power from dominating trade with China; Woodrow Wilson’s advocacy in his “Fourteen Points” speech of 1918 for the removal “of all economic barriers and the establishment of an equality of trade conditions among all nations”; and the focus of the 1941 Atlantic Charter on reducing trade restrictions while promoting international economic cooperation (assuming the allies would emerge triumphant from World War II).
Fourth and finally, there’s exceptionalism. Americans have long believed that their nation was created not simply to replicate the practices of the Old World, but to revolutionize how states and peoples interact with one another. The United States, in this view, was not merely another great power out for its own self-interest. It was a country that, by virtue of its republican ideals, stood for the advancement of universal rights, and one that rejected the back-alley methods of monarchical diplomacy in favor of a more principled statecraft. When Abraham Lincoln said America represented “the last best hope of earth,” or when Woodrow Wilson scorned secret agreements in favor of “open covenants arrived at openly,” they demonstrated this exceptionalist strain in American thinking. There is some hypocrisy here, of course, for the United States has often acted in precisely the self-interested, cutthroat manner its statesmen deplored. Nonetheless, American exceptionalism has had a pronounced effect on American conduct.
Compare how Washington led its Western European allies during the Cold War—the extent to which NATO rested on the authentic consent of its members, the way the United States consistently sought to empower rather than dominate its partners—with how Moscow managed its empire in Eastern Europe. In the same way, Americans have often recoiled from arrangements that reeked of the old diplomacy. Franklin Roosevelt might have tolerated a Soviet-dominated Eastern Europe after World War II, for instance, but he knew he could not admit this publicly. Likewise, the Helsinki Accords of 1975, which required Washington to acknowledge the diplomatic legitimacy of the Soviet sphere, proved controversial inside the United States because they seemed to represent just the sort of cynical, old-school geopolitics that American exceptionalism abhors.
To be clear, U.S. hostility to a spheres-of-influence world has always been leavened with a dose of pragmatism; American leaders have pursued that hostility only so far as power and prudence allowed. The Monroe Doctrine warned European powers to stay out of the Americas, but the quid pro quo was that a young and relatively weak United States would accept, for a time, a sphere of monarchical dominance within Europe. Even during the Cold War, U.S. policymakers generally accepted that Washington could not break up the Soviet bloc in Eastern Europe without risking nuclear war.
But these were concessions to expediency. As America gained greater global power, it more actively resisted the acquisition or preservation of spheres by others. From gradually pushing the Old World out of the New, to helping vanquish the German and Japanese Empires by force of arms, to assisting the liquidation of the British Empire after World War II, to containing and ultimately defeating the Soviet bloc, the United States was present at the destruction of spheres of influence possessed by adversaries and allies alike.
The acme of this project came in the quarter-century that followed the Cold War. With the collapse of the Warsaw Pact and the Soviet Union itself, it was possible to envision a world in which what Thomas Jefferson called America’s “empire of liberty” could attain global dimensions, and traditional spheres of influence would be consigned to history. The goal, as George W. Bush’s 2002 National Security Strategy proclaimed, was to “create a balance of power that favors human freedom.” This meant an international environment in which the United States and its values were dominant and there was no balance of power whatsoever.
Under presidents from George H.W. Bush to Barack Obama, this project entailed working to spread democracy and economic liberalism farther than ever before. It involved pushing American influence and U.S.-led institutions into regions—such as Eastern Europe—that were previously dominated by other powers. It meant maintaining the military primacy necessary to stop regional powers from establishing new spheres of influence, as Washington did by rolling back Saddam Hussein’s conquest of Kuwait in 1990 and by deterring China from coercing Taiwan in 1995–96. Not least, this American project involved seeking to integrate potential rivals—foremost Russia and China—into the post–Cold War order, in hopes of depriving them of even the desire to challenge it. This multifaceted effort reflected the optimism of the post–Cold War era, as well as the influence of tendencies with deep roots in the American past. Yet try as Washington might to permanently leave behind a spheres-of-influence world, that prospect is once again upon us.

Begin with China’s actions in the Asia-Pacific region. The sources of Chinese conduct are diverse, ranging from domestic insecurity to the country’s confidence as a rising power to its sense of historical destiny as “the Middle Kingdom.” All these influences animate China’s bid to establish regional mastery. China is working, first, to create a power vacuum by driving the United States out of the Western Pacific, and second, to fill that vacuum with its own influence. A Chinese admiral made this ambition clear when he remarked—supposedly in jest—to an American counterpart that, in the future, the two powers should simply split the Pacific with Hawaii as the dividing line. Yang Jiechi, then China’s foreign minister, echoed this sentiment in a moment of frustration by lecturing the nations of Southeast Asia. “China is a big country,” he said, “and other countries are small countries, and that’s just a fact.”
Policy has followed rhetoric. To undercut America’s position, Beijing has harassed American ships and planes operating in international waters and airspace. The Chinese have warned U.S. allies they may be caught in the crossfire of a Sino-American war unless Washington accommodates China or the allies cut loose from the United States. China has simultaneously worked to undermine the credibility of U.S. alliance guarantees by using strategies designed to shift the regional status quo in ways even the mighty U.S. Navy finds difficult to counter. Through a mixture of economic aid and diplomatic coercion, Beijing has also successfully divided international bodies, such as the Association of Southeast Asian Nations, through which the United States has sought to rally opposition to Chinese assertiveness. And in the background, China has been steadily building, over the course of more than two decades, formidable military tools designed to keep the United States out of the region and give Beijing a free hand in dealing with its weaker neighbors. As America’s sun sets in the Asia-Pacific, Chinese leaders calculate, the shadow China casts over the region will only grow longer.
To that end, China has claimed, dubiously, nearly all of the South China Sea as its own and constructed artificial islands as staging points for the projection of military power. Military and paramilitary forces have teased, confronted, and violated the sovereignty of countries from Vietnam to the Philippines; China is likewise intensifying the pressure on Japan in the East China Sea. Economically, Beijing uses its muscle to reward those who comply with China’s policies and punish those not willing to bow to its demands. It is simultaneously advancing geoeconomic projects, such as the Belt and Road Initiative, the Asian Infrastructure Investment Bank, and the Regional Comprehensive Economic Partnership (RCEP), that are designed to bring the region into its orbit.
Strikingly, China has also moved away from its long-professed principle of noninterference in other countries’ domestic politics by extending the reach of Chinese propaganda organs and using investment and even bribery to co-opt regional elites. Payoffs to Australian politicians are as critical to China’s regional project as development of “carrier-killer” missiles. Finally, far from subscribing to liberal concepts of democracy and human rights, Beijing emphasizes its rejection of these values and its desire to create “Asia for Asians.” In sum, China is pursuing a classic spheres-of-influence project. By blending intimidation with inducement, Beijing aims to sunder its neighbors’ bonds with America and force them to accept a Sino-centric order—a new Chinese tribute system for the 21st century.

At the other end of Eurasia, Russia is playing geopolitical hardball of a different sort. The idea that Moscow should dominate its “near abroad” is as natural to many Russians as American regional primacy is to Americans. The loss of the Kremlin’s traditional buffer zone was, therefore, one of the most painful legacies of the Cold War’s end. And so it is hardly surprising that, as Russia has regained a degree of strength in recent years, it has sought to reassert its supremacy.
It has done so, in fact, through more overtly aggressive means than those employed by China. Moscow has twice seized opportunities to humiliate and dismember former Soviet republics that committed the sin of tilting toward the West or throwing out pro-Russian leaders, first in Georgia in 2008 and then in Ukraine in 2014. It has regularly reminded its neighbors that they live on Russia’s doorstep, through coercive activities such as conducting cyberattacks on Estonia in 2007 and holding aggressive military exercises on the frontiers of the Baltic states. In the same vein, the Kremlin has essentially claimed a veto over the geopolitical alignments of neighbors from the Caucasus to Scandinavia, whether by creating frozen conflicts on their territory or threatening to target them militarily—perhaps with nuclear weapons—should they join NATO.
Military muscle is not Moscow’s only tool. Russia has simultaneously used energy exports to keep the states on its periphery economically dependent, and it has exported corruption and illiberalism to non-aligned states in the former Warsaw Pact area to prevent further encroachment of liberal values. Not least, the Kremlin has worked to undermine NATO and the European Union through political subversion and intervention in Western electoral processes. And while Russia’s activities are most concentrated in Eastern Europe and Central Asia, it’s also projecting its influence farther afield. Russian forces intervened successfully in Syria in 2015 to prop up Bashar al-Assad, preserve access to warm-water ports on the Mediterranean, and demonstrate the improved accuracy and lethality of Russian arms. Moscow continues to make inroads in the Middle East, often in cooperation with another American adversary: Iran.
To be sure, the projects that China and Russia are pursuing today are vastly different from each other, but the core logic is indisputably the same. Authoritarian powers are re-staking their claim to privileged influence in key geostrategic areas.

So what does this mean for American interests? Some observers have argued that the United States should make a virtue of necessity and accept the return of such arrangements. By this logic, spheres of influence create buffer zones between contending great powers; they diffuse responsibility for enforcing order in key areas. Indeed, for those who think that U.S. policy has left the country exhausted and overextended, a return to a world in which America no longer has the burden of being the dominant power in every region may seem attractive. The great sin of American policy after the Cold War, many realist scholars argue, was the failure to recognize that even a weakened Russia would demand privileged influence along its frontiers and thus be unalterably opposed to NATO expansion. Similarly, they lament the failure to understand that China would not forever tolerate U.S. dominance along its own periphery. It is not surprising, then, to hear analysts such as Australia’s Hugh White or America’s John Mearsheimer argue that the United States should learn to “share power” with China in the Pacific, or that it must yield ground in Eastern Europe in order to avoid war with Russia.
Such claims are not meritless; there are instances in which spheres of influence led to a degree of stability. The division of Europe into rival blocs fostered an ugly sort of stasis during the Cold War; closer to home, America’s dominance in the Western Hemisphere has long muted geopolitical competition in our own neighborhood. For all the problems associated with European empires, they often partially succeeded in limiting scourges such as communal violence.
And yet the allure of a spheres-of-influence world is largely an illusion, for such a world would threaten U.S. interests, traditions, and values in several ways.
First, basic human rights and democratic values would be less respected. China and Russia are not liberal democracies; they are illiberal autocracies that see the spread of democratic values as profoundly corrosive to their own authority and security. Just as the United States has long sought to create a world congenial to its own ideological predilections, Beijing and Moscow would certainly do likewise within their spheres of dominance.
They would, presumably, bring their influence to bear in support of friendly authoritarian regimes. And they would surely undermine democratic governments seen to pose a threat of ideological contagion or insubordination to Russian or Chinese prerogatives. Russia has taken steps to prevent the emergence of a Western-facing democracy in Ukraine and to undermine liberal democracies in Europe and elsewhere; China is snuffing out political freedoms in Hong Kong. Such actions offer a preview of what we will see when these countries are indisputably dominant along their peripheries. Further aggressions, in turn, would not simply be offensive to America’s ideological sensibilities. For given that the spread of democracy has been central to the absence of major interstate war in recent decades, and that the spread of American values has made the U.S. more secure and influential, a less democratic world will also be a more dangerous world.
Second, a spheres-of-influence world would be less open to American commerce and investment. After all, the United States itself saw geoeconomic dominance in Latin America as the necessary counterpart to geopolitical dominance. Why would China take a less self-interested approach? China already reaps the advantages of an open global economy even as it embraces protectionism and mercantilism. In a Chinese-dominated East Asia, all economic roads will surely lead to Beijing, as Chinese officials will be able to use their leverage to ensure that trade and investment flows are oriented toward China and geopolitical competitors like the United States are left on the outside. Beijing’s current geoeconomic projects—namely, RCEP and the Belt and Road Initiative—offer insight into a regional economic future in which flows of commerce and investment are subject to heavy Chinese influence.
Third, as spheres of influence reemerge, the United States will be less able to shape critical geopolitical events in crucial regions. The reason Washington has long taken an interest in events in faraway places is that East Asia, Europe, and the Middle East are the areas from which major security challenges have emerged in the past. Since World War II, America’s forward military presence has been intended to suppress incipient threats and instability; that presence has gone hand in glove with energetic diplomacy that amplifies America’s voice and protects U.S. interests. In a spheres-of-influence world, Washington would no longer enjoy the ability to act with decisive effect in these regions; it would find itself reacting to global events rather than molding them.
This leads to a final, and crucial, issue. America would be more likely to find its core security interests challenged because world orders based on rival spheres of influence have rarely been as peaceful and settled as one might imagine.
To see this, just work backward from the present. During the Cold War, a bipolar balance did help avert actual war between Moscow and Washington. But even in Europe—where the spheres of influence were best defined—there were continual tensions and crises as Moscow tested the Western bloc. And outside Europe, violence and proxy wars were common as the superpowers competed to extend their reach into the Third World. In the 1930s, the emergence of German and Japanese spheres of influence led to the most catastrophic war in global history. The empires of the 19th century—spheres of influence in their own right—continually jostled one another, leading to wars and near-wars over the course of decades; the Peace of Amiens between England and Napoleonic France lasted a mere 14 months. And looking back to the ancient world, there were not one, but three Punic Wars fought between Rome and Carthage as two expanding empires came into conflict. A world defined by spheres of influence is often a world characterized by tensions, wars, and competition.
The reasons for this are simple. As the political scientist William Wohlforth observed, unipolar systems—such as the U.S.-dominated post–Cold War order—are anchored by a hegemonic power that can act decisively to maintain the peace. In a unipolar system, Wohlforth writes, there are few incentives for revisionist powers to incur the “focused enmity” of the leading state. Truly multipolar systems, by contrast, have often been volatile. When the major powers are more evenly matched, there is a greater temptation to aggression by those who seek to change the existing order of things. And seek to change things they undoubtedly will.
The idea that spheres of influence are stabilizing holds only if one assumes that the major powers are motivated only by insecurity and that concessions to the revisionists will therefore lead to peace. Churchill described this as the idea that if one “feeds the crocodile enough, the crocodile will eat him last.”
Unfortunately, today’s rising or resurgent powers are also motivated—as is America—by honor, ambition, and the timeless desire to make their international habitats reflect their own interests and ideals. It is a risky gamble indeed, then, to think that ceding Russia or China an uncontested sphere of influence would turn a revisionist authoritarian regime into a satisfied power. The result, as Robert Kagan has noted, might be to embolden those actors all the more, by giving them freer rein to bring their near-abroads under control, greater latitude and resources to pursue their ambitions, and enhanced confidence that the U.S.-led order is fracturing at its foundations. For China, dominance over the first island chain might simply intensify desires to achieve primacy in the second island chain and beyond; for Russia, renewed mastery in the former Soviet space could lead to desires to bring parts of the former Warsaw Pact to heel, as well. To observe how China is developing ever longer-range anti-access/area denial capabilities, or how Russia has been projecting military power ever farther afield, is to see this process in action.

The reemergence of a spheres-of-influence world would thus undercut one of the great historical achievements of U.S. foreign policy: the creation of a system in which America is the dominant power in each major geopolitical region and can act decisively to shape events and protect its interests. It would foster an environment in which democratic values are less prominent, authoritarian models are ascendant, and mercantilism advances as economic openness recedes. And rather than leading to multipolar stability, this change could simply encourage greater revisionism on the part of powers whose appetite grows with the eating. This would lead the world away from the relative stability of the post–Cold War era and back into the darker environment it seemed to have relegated to history a quarter-century ago. The phrase “spheres of influence” may sound vaguely theoretical and benign, but its real-world effects are likely to be tangible and pernicious.
Fortunately, the return of a spheres-of-influence world is not yet inevitable. Even as some nations will accept incorporation into a Chinese or Russian sphere of influence as the price of avoiding conflict, or maintaining access to critical markets and resources, others will resist because they see their own well-being as dependent on the preservation of the world order that Washington has long worked to create. The Philippines and Cambodia seem increasingly to fall into the former group; Poland and Japan, among many others, make up the latter. The willingness of even this latter group to take actions that risk incurring Beijing and Moscow’s wrath, however, will be constantly calibrated against an assessment of America’s own ability to continue leading the resistance to a spheres-of-influence world. Averting that outcome is becoming steadily harder, as the relative power and ambition of America’s authoritarian rivals rise and U.S. leadership seems to falter.
Harder, but not impossible. The United States and its allies still command a significant preponderance of global wealth and power. And the political, economic, and military weaknesses of its challengers are legion. It is far from fated, then, that the Western Pacific and Eastern Europe will slip into China’s and Russia’s respective orbits. With sufficient creativity and determination, Washington and its partners might still be able to resist the return of a dangerous global system. Doing so will require difficult policy work in the military, economic, and diplomatic realms. But ideas precede policy, and so simply rediscovering the venerable tradition of American hostility to spheres of influence—and no less, the powerful logic on which that tradition is based—would be a good start.
What does the man with the baton actually do?
Why, then, are virtually all modern professional orchestras led by well-paid conductors instead of performing on their own? It’s an interesting question. After all, while many celebrity conductors are highly trained and knowledgeable, there have been others, some of them legendary, whose musical abilities were and are far more limited. It was no secret in the world of classical music that Serge Koussevitzky, the music director of the Boston Symphony from 1924 to 1949, found it difficult to read full orchestral scores and sometimes learned how to lead them in public by first practicing with a pair of rehearsal pianists whom he “conducted” in private.
Yet recordings show that Koussevitzky’s interpretations of such complicated pieces of music as Aaron Copland’s El Salón México and Maurice Ravel’s orchestral transcription of Mussorgsky’s Pictures at an Exhibition (both of which he premiered and championed) were immensely characterful and distinctive. What made them so? Was it the virtuosic playing of the Boston Symphony alone? Or did Koussevitzky also bring something special to these performances—and if so, what was it?
Part of what makes this question so tricky to answer is that scarcely any well-known conductors have spoken or written in detail about what they do. Only two conductors of the first rank, Thomas Beecham and Bruno Walter, have left behind full-length autobiographies, and neither one features a discussion of its author’s technical methods. For this reason, the publication of John Mauceri’s Maestros and Their Music: The Art and Alchemy of Conducting will be of special interest to those who, like my friend, wonder exactly what it is that conductors contribute to the performances that they lead.1
An impeccable musical journeyman best known for his lively performances of film music with the Hollywood Bowl Orchestra, Mauceri has led most of the world’s top orchestras. He writes illuminatingly about his work in Maestros and Their Music, leavening his discussions of such matters as the foibles of opera directors and music critics with sharply pointed, sometimes gossipy anecdotes. Most interesting of all, though, are the chapters in which he talks about what conductors do on the podium. To read Maestros and Their Music is to come away with a much clearer understanding of what its author calls the “strange and lawless world” of conducting—and to understand how conductors whose technique is deficient to the point of seeming incompetence can still give exciting performances.

Prior to the 19th century, conductors of the modern kind did not exist. Orchestras were smaller then—most of the ensembles that performed Mozart’s symphonies and operas contained anywhere from two to three dozen players—and their concerts were “conducted” either by the leader of the first violins or by the orchestra’s keyboard player.
As orchestras grew larger in response to the increasing complexity of 19th-century music, however, it became necessary for a full-time conductor both to rehearse them and to control their public performances, normally by standing on a podium placed in front of the musicians and beating time in the air with a baton. Most of the first men to do so were composers, including Hector Berlioz, Felix Mendelssohn, and Richard Wagner. By the end of the century, however, it was becoming increasingly common for musicians to specialize in conducting, and some of them, notably Arthur Nikisch and Arturo Toscanini, came to be regarded as virtuosos in their own right. Since then, only three important composers—Benjamin Britten, Leonard Bernstein, and Pierre Boulez—have also pursued parallel careers as world-class conductors. Every other major conductor of the 20th century was a specialist.
What did these men do in front of an orchestra? Mauceri’s description of the basic physical process of conducting is admirably straightforward:
The right hand beats time; that is, it sets the tempo or pulse of the music. It can hold a baton. The left hand turns pages [in the orchestral score], cues instrumentalists with an invitational or pointing gesture, and generally indicates the quality of the notes (percussive, smoothly linked, sustained, etc.).
Beyond these elements, though, all bets are off. Most of the major conductors of the 20th century were filmed in performance, and what one sees in these films is so widely varied that it is impossible to generalize about what constitutes a good conducting technique.2 Most of them used batons, but several, including Boulez and Leopold Stokowski, conducted with their bare hands. Bernstein and Beecham gestured extravagantly, even wildly, while others, most famously Fritz Reiner, restricted themselves to tightly controlled hand movements. Toscanini beat time in a flowing, beautifully expressive way that made his musical intentions self-evident, but Wilhelm Furtwängler and Herbert von Karajan often conducted so unclearly that it is hard to see how the orchestras they led were able to follow them. (One exasperated member of the London Philharmonic claimed, partly in jest, that Furtwängler’s baton signaled the start of a piece “only after the thirteenth preliminary wiggle.”) Conductors of the Furtwängler sort tend to be at their best in front of orchestras with which they have worked for many years and whose members have learned from experience to “speak” their gestural language fluently.
Nevertheless, all of these men were pursuing the same musical goals. Beyond stopping and starting a given piece, it is the job of a conductor to decide how it will be interpreted. How loud should the middle section of the first movement be—and ought the violins to be playing a bit softer so as not to drown out the flutes? Someone must answer questions such as these if a performance is not to sound indecisive or chaotic, and it is far easier for one person to do so than for 100 people to vote on each decision.
Above all, a conductor controls the tempo of a performance, varying it from moment to moment as he sees fit. It is impossible for a full-sized symphony orchestra to play a piece with any degree of rhythmic flexibility unless a conductor is controlling the performance from the podium. Bernstein put it well when he observed in a 1955 TV special that “the conductor is a kind of sculptor whose element is time instead of marble.” These “sculptural” decisions are subjective, since traditional musical notation cannot fix tempo with exactitude. As Mauceri reminds us, Toscanini and Beecham both recorded La Bohème, having previously discussed their interpretations with Giacomo Puccini, the opera’s composer, and Toscanini conducted its 1896 premiere. Yet Beecham’s performance is 14 minutes longer than Toscanini’s. Who is “right”? It is purely a matter of individual taste, since both interpretations are powerfully persuasive.
Beyond the not-so-basic task of setting, maintaining, and varying tempos, it is the job of a conductor to inspire an orchestra—to make its members play with a charged precision that transcends mere unanimity. The first step in doing so is to persuade the players of his musical competence. If he cannot run a rehearsal efficiently, they will soon grow bored and lose interest; if he does not know the score in detail, they will not take him seriously. This requires extensive preparation on the part of the conductor, and an orchestra can tell within seconds of the downbeat whether he is adequately prepared—a fact that every conductor knows. “I’m extremely humble about whatever gifts I may have, but I am not modest about the work I do,” Bernstein once told an interviewer. “I work extremely hard and all the time.”
All things being equal, it is better for a conductor to have a clear technique, if only because it simplifies and streamlines the process of rehearsing an orchestra. Fritz Reiner, who taught Bernstein among others, did not exaggerate when he claimed that he and his pupils could “stand up [in front of] an orchestra they have never seen before and conduct correctly a new piece at first sight without verbal explanation and by means only of manual technique.”
While orchestra players prefer this kind of conducting, a conductor need not have a technique as fully developed as that of a Reiner or Bernstein if he knows how to rehearse effectively. Given sufficient rehearsal time, decisive and unambiguous verbal instructions will produce the same results as a virtuoso stick technique. This was how Willem Mengelberg and George Szell distinguished themselves on the podium. Their techniques were no better than adequate, but they rehearsed so meticulously that their performances were always brilliant and exact.
It also helps to supply the players with carefully marked parts. Beecham’s manual technique was notoriously messy, but he marked his musical intentions into each player’s part so clearly and precisely that simply reading the music on the stand would produce most of the effects that he desired.
What players do not like is to be lectured. They want to be told what to do and, if absolutely necessary, how to do it, at which point the wise conductor will stop talking and start conducting. Mauceri recalls the advice given to a group of student conductors by Joseph Silverstein, the concertmaster of the Boston Symphony: “Don’t talk to us about blue skies. Just tell us ‘longer-shorter,’ ‘faster-slower,’ ‘higher-lower.’” Professional musicians cannot abide flowery speeches about the inner meaning of a piece of music, though they will readily respond to a well-turned metaphor. Mauceri makes this point with a Toscanini anecdote:
One of Toscanini’s musicians told me of a moment in a rehearsal when the sound the NBC Symphony was giving him was too heavy. … In this case, without saying a word, he reached into his pocket and took out his silk handkerchief, tossed it into the air, and everyone watched it slowly glide to earth. After seeing that, the orchestra played the same passage exactly as Toscanini wanted.
Conducting, like all acts of leadership, is in large part a function of character. The violinist Carl Flesch went so far as to call it “the only musical activity in which a dash of charlatanism is not only harmless, but positively necessary.” While that is putting it too cynically, Flesch was on to something. I did a fair amount of conducting in college, but even though I practiced endlessly in front of a mirror and spent hours poring over my scores, I lacked the personal magnetism without which no conductor can hope to be more than merely competent.
On the other hand, a talented musician with a sufficiently compelling personality can turn himself into a conductor more or less overnight. Toscanini had never conducted an orchestra before making his unrehearsed debut in a performance of Verdi’s Aida at the age of 19, yet the players hastened to do his musical bidding. I once saw the modern-dance choreographer Mark Morris, whose knowledge of classical music is profound, lead a chorus and orchestra in the score to Gloria, a dance he had made in 1981 to a piece by Vivaldi. It was no stunt: Morris used a baton and a score and controlled the performance with the assurance of a seasoned pro. Not only did he have a strong personality, but he had also done his musical homework, and he knew that one was as important as the other.
The reverse, however, is no less true: The success of conductors like Serge Koussevitzky is at least as much a function of their personalities as of their preparation. To be sure, Koussevitzky had been an instrumental virtuoso (he played the double bass) before taking up conducting, but everyone who worked with him in later years was aware of his musical limitations. Yet he was still capable of imposing his larger-than-life personality on players who might well have responded indifferently to his conducting had he been less charismatic. Leopold Stokowski functioned in much the same way. He was widely thought by his peers to have been far more a showman than an artist, to the point that Toscanini contemptuously dismissed him as a “clown.” But he had, like Koussevitzky, a richly romantic musical imagination coupled with the showmanship of a stage actor, and so the orchestras that he led, however skeptical they might have been about his musical seriousness, did whatever he wanted.
All great conductors share this same ability to impose their will on an orchestra—and that, after all, is the heart of the matter. A conductor can be effective only if the orchestra does what he wants. An orchestra is not like a piano, whose notes automatically sound when the keys are pressed; it is a living organism with a will of its own. Conducting, then, is first and foremost an act of persuasion, as Mauceri acknowledges:
The person who stands before a symphony orchestra is charged with something both impossible and improbable. The impossible part is herding a hundred musicians to agree on something, and the improbable part is that one does it by waving one’s hands in the air.
This is why so many famous conductors have claimed that the art of conducting cannot be taught. In the deepest sense, they are right. To be sure, it is perfectly possible, as Reiner did, to teach the rudiments of clear stick technique and effective rehearsal practice. But the mystery at the heart of conducting is, indeed, unteachable: One cannot tell a budding young conductor how to cultivate a magnetic personality, any more than an actor can be taught how to have star quality. What sets the Bernsteins and Bogarts of the world apart from the rest of us is very much like what James M. Barrie said of feminine charm in What Every Woman Knows: “If you have it, you don’t need to have anything else; and if you don’t have it, it doesn’t much matter what else you have.”
2 Excerpts from many of these films were woven together into a two-part BBC documentary, The Art of Conducting, which is available on home video and can also be viewed in its entirety on YouTube.
Not that he tries. What was remarkable about the condescension in this instance was that Franken directed it at women who accused him of behaving “inappropriately” toward them. (In an era of strictly enforced relativism, we struggle to find our footing in judging misbehavior, so we borrow words from the prissy language of etiquette. The mildest and most common rebuke is unfortunate, followed by the slightly more serious inappropriate, followed by the ultimate reproach: unacceptable, which, depending on the context, can include both attempted rape and blowing your nose into your dinner napkin.) Franken’s inappropriateness entailed, so to speak, squeezing the bottoms of complete strangers, and cupping the occasional breast.
Franken himself did not use the word “inappropriate.” By his account, he had done nothing to earn the title. His earlier vague denials of the allegations, he told his fellow senators, “gave some people the false impression that I was admitting to doing things that, in fact, I haven’t done.” How could he have confused people about such an important matter? Doggone it, it’s that damn sensitivity of his. The nation was beginning a conversation about sexual harassment—squeezing strangers’ bottoms, stuff like that—and “I wanted to be respectful of that broader conversation because all women deserve to be heard and their experiences taken seriously.”
Well, not all women. The women with those bottoms and breasts he supposedly manhandled, for example—their experiences don’t deserve to be taken seriously. We’ve got Al’s word on it. “Some of the allegations against me are not true,” he said. “Others, I remember very differently.” His accusers, in other words, fall into one of two camps: the liars and the befuddled. You know how women can be sometimes. It might be a hormonal thing.
But enough about them, Al seemed to be saying: Let’s get back to Al. “I know the work I’ve been able to do has improved people’s lives,” Franken said, but he didn’t want to get into any specifics. “I have used my power to be a champion of women.” He has faith in his “proud legacy of progressive advocacy.” He’s been passionate and worked hard—not for himself, mind you, but for his home state of Minnesota, by which he’s “blown away.” And yes, he would get tired or discouraged or frustrated once in a while. But then that big heart of his would well up: “I would think about the people I was doing this for, and it would get me back on my feet.” Franken recently published a book about himself: Giant of the Senate. I had assumed the title was ironic. Now I’m not sure.
Yet even in his flights of self-love, the problem that has ever attended Senator Franken was still there. You can’t take him seriously. He looks as though God made him to be a figure of fun. Try as he might, his aspect is that of a man who is going to try to make you laugh, and who is built for that purpose and no other—a close cousin to Bert Lahr or Chris Farley. And for years, of course, that’s the part he played in public life, as a writer and performer on Saturday Night Live. When he announced nine years ago that he would return to Minnesota and run for the Senate—when he came out of the closet and tried to present himself as a man of substance—the effect was so disorienting that I, and probably many others, never quite recovered. As a comedian-turned-politician, he was no longer the one and could never quite become the other.
The chubby cheeks and the perpetual pucker, the slightly crossed eyes behind Coke-bottle glasses, the rounded, diminutive torso straining to stay upright under the weight of an enormous head—he was the very picture of Comedy Boy, and suddenly he wanted to be something else: Politics Boy. I have never seen the famously tasteless tearjerker The Day the Clown Cried, in which Jerry Lewis stars as a circus clown imprisoned in a Nazi death camp, but I’m sure watching it would be a lot like watching the ex-funnyman Franken deliver a speech about farm price supports.
Then he came to Washington and slipped right into place. His career is testament to a dreary fact of life here: Taken in the mass, senators are pretty much interchangeable. Party discipline determines nearly every vote they cast. Only at the margins is one Democrat or Republican different in a practical sense from another Democrat or Republican. Some of us held out hope, despite the premonitory evidence, that Franken might use his professional gifts in service of his new job. Yet so desperate was he to be taken seriously that he quickly passed serious and swung straight into obnoxious. It was a natural fit. In no time at all, he mastered the senatorial art of asking pointless or showy questions in committee hearings, looming from his riser over fumbling witnesses and hollering “Answer the question!” when they didn’t respond properly.
It’s not hard to be a good senator, if you have the kind of personality that frees you to simulate chumminess with people you scarcely know or have never met and will probably never see again. There’s not much to it. A senator has a huge staff to satisfy his every need. There are experts to give him brief, personal tutorials on any subject he will be asked about, writers to write his questions for his committee hearings and an occasional op-ed if an idea strikes him, staffers to arrange his travel and drive him here or there, political aides to guard his reputation with the folks back home, press aides to regulate his dealings with reporters, and legislative aides to write the bills should he ever want to introduce any. The rest is show biz.
Oddly, Franken was at his worst precisely when he was handling the show-biz aspects of his job. While his inquisitions in committee hearings often showed the obligatory ferocity and indignation, he could also appear baffled and aimless. His speeches weren’t much good, and he didn’t deliver them well. As if to prove the point, he published a collection of them earlier this year, Speaking Franken. Until Pearl Harbor Day, the day he announced his resignation, he’d been showing signs of wanting to run for president. Liberal pundits were talking him up as a national candidate. Speaking Franken was likely intended to do for him what Profiles in Courage did for John Kennedy, another middling senator with presidential longings. Unfortunately for Franken, Ted Sorensen is still dead.
The final question raised by Franken’s resignation is why so many fellow Democrats urged him to give up his seat so suddenly, once his last accuser came forward. The consensus view involved Roy Moore, in those dark days when he was favored to win Alabama’s special election. With the impending arrival of an accused pedophile on the Republican side of the aisle, Democrats didn’t want an accused sexual harasser in their own ranks to deflect what promised to be a relentless focus on the GOP’s newest senator. This is bad news for any legacy Franken once hoped for himself. None of his work as a senator will commend him to history. He will be remembered instead for two things: as a minor TV star, and as Roy Moore’s oldest victim.
Review of Lioness, by Francine Klagsbrun
Golda Meir, Israel’s fourth prime minister, moved to Palestine from America in 1921, at the age of 22, to pursue Socialist Zionism. She was instrumental in transforming the Jewish people into a state; signed that state’s Declaration of Independence; served as its first ambassador to the Soviet Union, as labor minister for seven years, and as foreign minister for a decade. In 1969, she became the first female head of government in the Western world, serving from the aftermath of the 1967 Six-Day War through the nearly catastrophic but ultimately victorious 1973 Yom Kippur War. She resigned in 1974 at the age of 76, after five years as prime minister. Her involvement at the forefront of Zionism and the leadership of Israel thus spanned more than half a century.
This is the second major biography of Golda Meir in the last decade, after Elinor Burkett’s excellent Golda in 2008. Klagsbrun’s portrait is even grander in scope. Her epigraph comes from Ezekiel’s lamentation for Israel: What a lioness was your mother / Among the lions! / Crouching among the great beasts / She reared her cubs. The “mother” was Israel; the “cubs,” her many ancient kings; the “great beasts,” the hostile nations surrounding her. One finishes Klagsbrun’s monumental volume, which is both a biography of Golda and a biography of Israel in her time, with a deepened sense that the story of modern Israel, its prime ministers, and its survival is one of biblical proportions.

Golda Meir’s story spans three countries—Russia, America, and Israel. Before she was Golda Meir, she was Golda Meyerson; and before that, she was Golda Mabovitch, born in 1898 in Kiev in the Russian Empire. Her father left for America after the horrific Kishinev pogrom in 1903, found work in Milwaukee as a carpenter, and in 1906 sent for his wife and three daughters, who escaped using false identities and border bribes. Golda said later that what she took from Russia was “fear, hunger and fear.” It was an existential fear that she never forgot.
In Milwaukee, Golda found socialism in the air: The city had both a socialist mayor and a socialist congressman, and she was enthralled by news from Palestine, where Jews were living out socialist ideals in kibbutzim. She immersed herself in Poalei Zion (Workers of Zion), a movement synthesizing Zionism and socialism, and in 1917 married a fellow socialist, Morris Meyerson. As soon as conditions permitted, they moved to Palestine, where the marriage ultimately failed—a casualty of the extended periods she spent away from home working for Socialist Zionism and her admission that the cause was more important to her than her husband and children. Klagsbrun writes that Meir might appear to be the consummate feminist: She asserted her independence from her husband, traveled continually and extensively on her own, left her husband and children for months to pursue her work, and demanded respect as an individual rather than special treatment based on her gender. But she never considered herself a feminist; indeed, she denigrated women’s organizations for reducing issues to women’s interests only, and she gave minimal assistance to other women. Klagsbrun concludes that questions about Meir as a feminist figure ultimately “hang in the air.”
Her American connection and her unaccented American English became strategic assets for Zionism. She understood American Jews, spoke their language, and conducted many fundraising trips to the United States, tirelessly raising tens of millions of dollars of critically needed funds. David Ben-Gurion called her the “woman who got the money which made the state possible.” Klagsbrun provides the schedule of her 1932 trip as an example of her efforts: Over the course of a single month, the 34-year-old Zionist pioneer traveled to Kansas City, Tulsa, Dallas, San Antonio, Los Angeles, San Francisco, Seattle, and three cities in Canada. She became the face of Zionism in America—“The First Lady,” in the words of a huge banner at a later Chicago event, “of the Jewish People.” She connected with American Jews in a way no other Zionist leader had done before her.
In her own straightforward way, she mobilized the English language and sent it into battle for Zionism. While Abba Eban denigrated her poor Hebrew—“She has a vocabulary of two thousand words, okay, but why doesn’t she use them?”—she had a way of crystallizing issues in plainspoken English. Of British attempts to prevent the growth of the Jewish community in Palestine, she said Britain “should remember that Jews were here 2,000 years before the British came.” Of expressions of sympathy for Israel: “There is only one thing I hope to see before I die, and that is that my people should not need expressions of sympathy anymore.” And perhaps her most famous saying: “Peace will come when the Arabs love their children more than they hate us.”
Once she moved to the Israeli foreign ministry, she changed her name from Meyerson to Meir, in response to Ben-Gurion’s insistence that ministers assume Israeli names. She began a decade-long tenure there as the voice and face of Israel in the world. At a Madison Square Garden rally after the 1967 Six-Day War, she observed sardonically that the world called Israelis “a wonderful people,” complimented them for having prevailed “against such odds,” and yet wanted Israel to give up what it needed for its self-defense:
“Now that they have won this battle, let them go back where they came from, so that the hills of Syria will again be open for Syrian guns; so that Jordanian Legionnaires, who shoot and shell at will, can again stand on the towers of the Old City of Jerusalem; so that the Gaza Strip will again become a place from which infiltrators are sent to kill and ambush.” … Is there anybody who has the boldness to say to the Israelis: “Go home! Begin preparing your nine- and ten-year-olds for the next war, perhaps in ten years”?
The next war would come not in ten years, but in six, and while Meir was prime minister.
Klagsbrun’s extended discussion of Meir’s leadership before, during, and after the 1973 Yom Kippur War is one of the most valuable parts of her book, enabling readers to make informed judgments about that war and assess Meir’s ultimate place in Israeli history. The book makes a convincing case that there was no pre-war “peace option” that could have prevented the conflict. Egypt’s leader, Anwar Sadat, was insisting on a complete Israeli withdrawal before negotiations could even begin, and Meir’s view was, “We had no peace with the old boundaries. How can we have peace by returning to them?” She considered the demand part of a plan to push Israel back to the ’67 lines “and then bring the Palestinians back, which means no more Israel.”
A half-century later, after three Israeli offers of a Palestinian state on substantially all the disputed territories—with the Palestinians rejecting each offer, insisting instead on an Israeli retreat to indefensible lines and recognition of an alleged Palestinian “right of return”—Meir’s view looks prescient.
Klagsbrun’s day-by-day description of the ensuing war is largely favorable to Meir, who relied on assurances from her defense minister, Moshe Dayan, that the Arabs would not attack, and assurances from her intelligence community that, even if they did, Israel would have 48 hours’ notice—enough time to mobilize the reserves that constituted more than 75 percent of its military force. Both sets of assurances proved false, and the joint Egyptian-Syrian attack took virtually everyone in Israel by surprise. Dayan had something close to a mental breakdown, but Meir remained calm and in control after the initial shock, making key military decisions. She was able to rely on the excellent personal relationships she had established with President Nixon and his national security adviser, Henry Kissinger, and on the critical resupply of American arms that enabled Israel—once its reserves were called into action—to take the war into Egyptian and Syrian territory, with Israeli forces camped in both countries by its end.
Meir had resisted the option of a preemptive strike against Egypt and Syria when it suddenly became clear, 12 hours before the war started, that coordinated Egyptian and Syrian attacks were coming. On the second day of the war, she told her war cabinet that she regretted not having authorized the IDF to act, and she sent a message to Kissinger that Israel’s “failure to take such action is the reason for our situation now.” After the war, however, she testified that, had Israel begun the war, the U.S. would not have sent the crucial assistance that Israel needed (a point on which Kissinger agreed), and that she therefore believed she had done the right thing. A preemptive response, however, or a massive call-up of the reserves in the days before the attacks, might have avoided a war in which Israel lost 2,600 soldiers—the demographic equivalent of all the American losses in the Vietnam War.
It is hard to fault Meir’s decision, given the erroneous information and advice she was uniformly receiving from all her defense and intelligence subordinates, but it is a reminder that for Israeli prime ministers (such as Levi Eshkol in the Six-Day War, Menachem Begin with the Iraq nuclear reactor in 1981, and Ehud Olmert with the Syrian one in 2007), the potential necessity of taking preemptive action always hangs in the air. Klagsbrun’s extensive discussion of the Yom Kippur War is a case study of that question, and an Israeli prime minister may yet again face that situation.
The Meir story is also a tale of the limits of socialism as an organizing principle for the modern state. Klagsbrun writes about “Golda’s persistent—and hopelessly utopian—vision of how a socialist society should be conducted,” exemplified by her dream of instituting commune-like living arrangements for urban families, comparable to those in the kibbutzim, where all adults would share common kitchens and all the children would eat at school. She also tried to institute a family wage system, in which people would be paid according to their needs rather than their talents, a battle she lost when the unionized nurses insisted on being paid as professionals, based on their education and experience, and not the sizes of their families.
Socialism foundered not only on the laws of economics and human nature but also on the realities of foreign relations. In 1973, enraged that the socialist governments and leaders in Europe had refused to come to Israel’s aid during the Yom Kippur War, Meir convened a special London conference of the Socialist International, attended by eight heads of state and a dozen other socialist-party leaders. Before the conference, she told Willy Brandt, Germany’s socialist chancellor, that she wanted “to hear for myself, with my own ears, what it was that kept the heads of these socialist governments from helping us.”
In her speech at the conference, she criticized the Europeans for not even permitting “refueling the [American] planes that saved us from destruction.” Then she told them, “I just want to understand … what socialism is really about today”:
We are all old comrades, long-standing friends. … Believe me, I am the last person to belittle the fact that we are only one tiny Jewish state and that there are over twenty Arab states with vast territories, endless oil, and billions of dollars. But what I want to know from you today is whether these things are the decisive factors in Socialist thinking, too?
After she concluded her speech, the chairman asked whether anyone wanted to reply. No one did, and she thus effectively received her answer.
One wonders what Meir would think of the Socialist International today. On the centenary of the Balfour Declaration last year, the World Socialist website called it “a sordid deal” that launched “a nakedly colonial project.” Socialism was part of the cause for which she went to Palestine in 1921, and it has not fared well in history’s judgment. But the other half—Zionism—became one of the great successes of the 20th century, in significant part because of the lifelong efforts of individuals such as she.
Golda Meir has long been a popular figure in the American imagination, particularly among American Jews. Her ghostwritten autobiography was a bestseller; Ingrid Bergman played her in a well-received TV film; Anne Bancroft played her on the Broadway stage. But her image as the “71-year-old grandmother,” as the press frequently referred to her when she became prime minister, has always obscured the historic leader beneath that façade. She was a woman with strengths and weaknesses who willed herself into half a century of history. Francine Klagsbrun has given us a magisterial portrait of a lioness in full.
Back in 2016, then–deputy national-security adviser Ben Rhodes gave an extraordinary interview to the New York Times Magazine in which he revealed how President Obama exploited a clueless and deracinated press to steamroll opposition to the Iranian nuclear deal. “We created an echo chamber,” Rhodes told journalist David Samuels. “They”—writers and bloggers and pundits—“were saying things that validated what we had given them to say.”
Rhodes went on to explain that his job was made easier by structural changes in the media, such as the closing of foreign bureaus, the retirement of experienced editors and correspondents, and the shift from investigative reporting to aggregation. “The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns,” he said. “That’s a sea change. They literally know nothing.”
And they haven’t learned much. It was dispiriting to watch in December as journalists repeated arguments against the Jerusalem decision presented by Rhodes on Twitter. On December 5, quoting Mahmoud Abbas’s threat that moving the U.S. Embassy to Jerusalem would have “dangerous consequences,” Rhodes tweeted, “Trump seems to view all foreign policy as an extension of a patchwork of domestic policy positions, with no regard for the consequences of his actions.” He seemed blissfully unaware that the same could be said of his old boss.
The following day, Rhodes tweeted, “In addition to making goal of peace even less possible, Trump is risking huge blowback against the U.S. and Americans. For no reason other than a political promise he doesn’t even understand.” On December 8, quoting from a report that the construction of a new American Embassy would take some time, Rhodes asked, “Then why cause an international crisis by announcing it?”
Rhodes made clear his talking points for the millions of people inclined to criticize President Trump: Acknowledging Israel’s right to name its own capital is unnecessary and self-destructive. Rhodes’s former assistant, Ned Price, condensed the potential lines of attack into a single tweet on December 5. “In order to cater to his political base,” Price wrote, “Trump appears willing to: put U.S. personnel at great risk; risk C-ISIL [counter-ISIL] momentum; destabilize a regional ally; strain global alliances; put Israeli-Palestinian peace farther out of reach.”
Prominent media figures happily reprised their roles in the echo chamber. Susan Glasser of Politico: “Just got this in my in box from Ayman Odeh, leading Arab Israeli member of parliament: ‘Trump is a pyromaniac who could set the entire region on fire with his madness.’” BBC reporter Julia Macfarlane: “Whether related or not, everything that happens from now on in Israel and the Pal territories will be examined in the context of Trump signaling to move the embassy to Jerusalem.” Neither Rhodes nor Price could have asked for more.
Network news broadcasts described the president’s decision as “controversial” but only reported on the views of one side in the controversy. Guess which one. “There have already been some demonstrations,” reported NBC’s Richard Engel. “They are expected to intensify, with Palestinians calling for three days of rage if President Trump goes through with it.” Left unmentioned was the fact that Hamas calls for days of rage like you and I call for pizza.
Throughout Engel’s segment, the chyron read: “Controversial decision could lead to upheaval.” On ABC, George Stephanopoulos said, “World leaders call the decision dangerous.” On CBS, Gayle King chimed in: “U.S. allies and leaders around the world say it’s a big mistake that will torpedo any chance of Middle East peace.” Oh? What were the chances of Middle East peace prior to Trump’s speech?
On CNN, longtime peace processor Aaron David Miller likened recognizing Jerusalem to hitting “somebody over the head with a hammer.” On MSNBC, Chris Matthews fumed: “Deaths are coming.” That same network featured foreign-policy gadfly Steven Clemons of the Atlantic, who said Trump “stuck a knife in the back of the two-state process.” Price and former Obama official Joel Rubin also appeared on the network to denounce Trump. “American credibility is shot, and in diplomacy, credibility relies on your word, and our word is, at this moment, not to be trusted from a peace-process perspective, certainly,” Rubin said. This from the administration that gave new meaning to the words “red line.”
Some journalists were so devoted to Rhodes’s tendentious narrative of Trump’s selfishness and heedlessness that they mangled the actual story. “He had promised this day would come, but to hear these words from the White House was jaw-dropping,” said Martha Raddatz of ABC. “Not only signing a proclamation reversing nearly 70 years of U.S. policy, but starting plans to move the embassy to Jerusalem. No one else on earth has an embassy there!” How dare America take a brave stand for a small and threatened democracy!
In fact, Trump was following U.S. policy as legislated by Congress in 1995, reaffirmed in the Senate by a 90–0 vote just last June, and supported (in word if not in deed) by his three most recent predecessors as well as the last four Democratic party platforms. Most remarkably, the debate surrounding the Jerusalem policy ignored a crucial section of the president’s address. “We are not taking a position on any final-status issues,” he said, “including the specific boundaries of Israeli sovereignty in Jerusalem, or the resolution of contested borders. Those questions are up to the parties involved.” What the U.S. did, then, was simply accept the reality that the city that houses the Knesset, and where the head of government receives foreign dignitaries, is the capital of Israel.
However, just as had happened during the debate over the Iran deal, the facts were far less important to Rhodes than the overarching strategic goal. In this case, the objective was to discredit and undermine President Trump’s policy while isolating the conservative government of Israel. Yet there were plenty of reasons to be skeptical of the disingenuous duo of Rhodes and Price. Trump’s announcement was bold, for sure, but the tepid protests from Arab capitals, which worry more about the rise of Iran (a rise that Rhodes and Price facilitated) than about the Palestinian issue, suggested that the “Arab street” would sit this one out.
Which is what happened. Moreover, verbal disagreement aside, there is no evidence that the Atlantic alliance is in jeopardy. Nor has the war on ISIS lost momentum. As for putting “Israeli–Palestinian peace farther out of reach,” if third-party recognition of Jerusalem as Israel’s capital forecloses a deal, perhaps no deal was ever possible. Rhodes and Price would like us to overlook the fact that the two sides weren’t even negotiating during the Obama administration—an administration that did as much as possible to harm relations between Israel and the United States.
This most recent episode of the Trump show was a reminder that some things never change. Jerusalem was, is, and will be the capital of the Jewish state. President Trump routinely ignores conventional wisdom and expert opinion. And whatever nonsense President Obama and his allies say today, the press will echo tomorrow.