On the day that the enemies of the Jews had expected to rule over them, the situation was reversed, and it was the Jews themselves who ruled over those who hated them. The Jews gathered themselves in their cities in all the provinces of Ahashverosh the king, to lay their hands on those who sought to harm them, and no man could stand before them, for fear of them had fallen on all the peoples. The king said to Esther the queen: “In Susa the capital the Jews have killed and destroyed five hundred men, and the ten sons of Haman.”…The rest of the Jews who were in the king’s provinces gathered together and fought for their lives, gaining respite from their enemies and killing 75,000 of those who hated them. But the spoils they did not touch.
from Chapter 9 of the Book of Esther
As the book of Esther reaches its climax, the Jews of Persia have turned political defeat into political triumph. Esther, the young Jewish queen of Persia, has taken on the Hitler-like vizier, Haman, deploying political and sexual innuendo to drive a wedge between him and the king, Ahashverosh (whom the Greeks called Xerxes). It was Haman who had persuaded the king to order the extermination of every Jew in the Persian Empire, and this royal edict remains in force. But Esther’s attack on the vizier has brought about a dramatic shift in power. Esther succeeds in having Haman deposed and hanged, and she positions her cousin Mordecai as the new vizier in his place. And after two months of nail-biting tension, she is able to get the king to issue a second decree, which permits the Jews of Persia to raise up a military force with which to defend themselves.
When the day Haman had appointed for the massacre comes eight months later, the Jews use this strength to deal a deathblow to anti-Semitic power in the empire. They kill more than 75,000 men, lay waste to its leadership, and establish a deterrent against future threats to the Jewish communities of Persia. Perhaps most significant, the crushing of the anti-Semitic nemesis establishes the position of the new Jewish vizier with the king, ensuring that Ahashverosh’s authority will be wielded in such a way as to protect the Jewish interest for years to come.
The narrative in the Book of Esther only touches on these final stages of Mordecai and Esther’s efforts, but there is enough for us to understand what happened. After months of feverish diplomatic work, Mordecai had succeeded in parlaying the fact of the new decree’s existence and the feeble mumblings of Ahashverosh into a widespread belief that he in fact had the influence, authority, and power to make good on it: “The man Mordecai grew greater and greater,” “his reputation had gone out to all the provinces,” and with that “the fear of Mordecai had fallen upon them.” By the opening of the actual war, the influence of the Jews in the empire had become overwhelming. The hard core of anti-Semitic power had been isolated, as “all the princes of the provinces, and the satraps, and the governors and those that conduct the king’s affairs supported the Jews.” Many of those who had been willing to support the anti-Semites had switched sides or disappeared into the woodwork, and it appeared that the Jews and their allies would score a terrifying victory. Thus the “fear of them had fallen on all the peoples,” and “none could stand before them.”
Some have suggested that Mordecai now had the option of restraining the fury of the promised Jewish onslaught: There was no longer much question of a real anti-Semitic assault, and if he feared there would be an anti-Semitic resurgence should he relent, he could have opted just to arrest or execute a few hundred gang leaders across the empire. Would this not have sufficed? Mordecai obviously did not believe such a minimalist response would have been enough, and his decisions are straight out of Machiavelli’s textbook of power politics:
For it must be noted, that men must either be caressed or annihilated; they will revenge themselves for small injuries, but cannot do so for great ones; the injury therefore that we do to a man must be such that we need not fear his vengeance.
Moreover, a ruler or a prince
must not mind incurring the charge of cruelty for the purpose of keeping his subjects united and faithful; for, with a very few examples, he will be more merciful than those who, from excess of tenderness, allow disorders to arise, from whence spring bloodshed and rapine….And of all princes, it is impossible for a new prince to escape the reputation of cruelty.
In other words, a minimalist response to a genuine threat all but ensures two undesired consequences, both of them deadly. First, the defeated enemy will nurture the hope of revenge and continue to be an active threat as he seeks an opportunity to reassert his challenge. Second, the mildness of the response encourages others to take advantage of what can be perceived as hesitancy or weakness on the part of the ruler. The only hope to avoid future outrages is thus the assertion of overwhelming power in the first instance.
And this is what Mordecai chooses to do: “The Jews struck at all their enemies with the sword, killing and destroying, and they did as they pleased to those who hated them. And in Susa the capital the Jews killed and destroyed five hundred….The rest of the Jews who were in the king’s provinces gathered together and fought for their lives, gaining respite from their enemies and killing seventy-five thousand of those who hated them.”
Throughout the empire, the Jews enter into battle with the anti-Semitic forces, in most cases with the active assistance of the allies who have rallied to support Mordecai’s position. Abandoned by most of their former supporters among the people and in the government, the anti-Semitic inciters and those who had actively advanced their cause are brought low in province after province. The carnage is so great that (i) the anti-Semitic basilisk is in fact beheaded, its life and leadership burned out of the body politic; and (ii) the lesson is learned by all future challengers to the safety of the Jews and the power of Mordecai in the king’s court. Nowhere in all the vast reaches of the Persian Empire does there any longer exist a leader capable of inspiring the peoples to rise and harm the Jewish communities, nor can anyone imagine becoming one while Mordecai’s influence persists.
But there is a third reason for the decision to go to war that perhaps even surpasses the others in importance: the position of the king himself toward the Jews, and toward Mordecai as their leader.
When, earlier in the story, Ahashverosh had expressed an opinion on the subject of saving the lives of the Jews of Persia, he had managed to leave no question as to how little the subject concerned him. Pressed by the queen, the best he had been able to do was to reply that he had already done something good for the Jews once (“Behold, I have given Haman’s house to Esther, and he himself has been hanged on the gallows”), and that although there was nothing else to be done (“an edict written in the king’s name and sealed with the king’s ring cannot be revoked”), Mordecai and Esther were free to try if they so chose (“And you, write concerning the Jews as seems good in your eyes”).
Ahashverosh is a scoundrel. Not only is there no reason to believe his present professions of sympathy for the Jews, but his original complicity in the plan to destroy them, especially when combined with his subsequent statements on the subject, suggests that there is every reason to fear for the future. Who is to say that some turn of events will not take the Jews out of favor and return Ahashverosh to the original course he and Haman had set? With these facts constantly before him, Mordecai has another, crucial reason to bring the war against the enemies of the Jews to its spectacular conclusion: Strength attracts strength, and power attracts power. Thus the weak, to the degree they can make themselves seem strong, can attract the support of the strong, thereby becoming strong in reality.
Ahashverosh has made it clear that he is not the slightest bit inclined to become the protector of the Jews so long as they are a diffuse and contrary minority, and therefore “it is of no benefit for the king to tolerate them.” If Mordecai is to make the reversal of the Jews’ fortunes complete—and if he is to lend this reversal a measure of stability and permanence—he has no choice but to make it perfectly obvious to Ahashverosh that the Jews are strong, and that it is in fact of very real benefit to tolerate them, to ally himself with them, and to protect them against future threats that may arise.
This, the final transformation in the king’s relationship with the Jews, is depicted in an exchange between Ahashverosh and his Jewish queen at the height of the tension on the day of the war between the Jews and their enemies. Reports from the provinces have not yet begun to arrive, but the dimensions of the catastrophe that has befallen the anti-Semites in Susa have already become known. For the first time in the narrative, we see Ahashverosh initiating a conversation with Esther, and telling her: “In Susa the capital the Jews have killed and destroyed five hundred men, and the ten sons of Haman. What have they done in the rest of the king’s provinces?” Immediately after this, there tumbles from the king’s lips a statement that one is tempted to mistake for a non sequitur. He says to Esther: “And whatever your wish, it will be given you, and whatever more you may request, it will be done.”
The change in the man is obvious when one considers what he has said on the three previous occasions on which he has used this expression in speaking to Esther: upon her first forbidden approach to him in the throne room, and at the two banquets of wine that she prepared for him. On all three previous occasions, the king responded to the queen’s approach with a variation of the formula: “Whatever your wish, it will be given you. Whatever your request, up to half the kingdom, it will be done.”
Ahashverosh’s largesse on this occasion differs from these earlier gestures. The king now seeks out Esther to find out what she wants in the absence of any initiative on her part. With visions of blood dancing before his eyes, and fearful of what may happen next, Ahashverosh undergoes a final, dramatic revision in his relationship with his Jewish queen. It is Esther who now embodies power in the king’s eyes, and it is he who offers his favors—his service—in an effort to gain favor with her. Their relationship is finally and completely reversed: Esther, who had come into Ahashverosh’s bedchamber five years earlier in search of a way of winning him over so as to avoid the life of a discarded harem girl, now finds the king anxiously seeking to win her pleasure.
In this context, Ahashverosh demonstratively (or perhaps unconsciously) drops the hedge setting an upper bound on her request to “only” half the kingdom, the implication being that she can now ask for the entire kingdom if she so wishes. In practice, once it is the king who is seeking her favor, he does not even need to make this offer explicit, for it has already been granted. In fact, Esther asks for much less: “If it please the king, let the Jews in Susa do tomorrow according to the law for today, and let Haman’s ten sons be hanged on the gallows.”
Like the king, Esther has no way of knowing what has happened in the rest of the empire, and news from the farthest provinces will not be available for weeks. In the best case, the war in the provinces will have come to an end that night, with the anti-Semitic menace eradicated. In the worst, the war will have to be extended by a decree from the palace. Her request is that the Jewish reign of arms in Susa be allowed to continue until there is news of what has happened at least in those neighboring cities from which reports can arrive after a day’s ride. The point is that in the capital, the initiative should remain in Mordecai’s hands until he is able to determine what should happen next. The Jews and their allies are therefore permitted to continue holding the streets of Susa at sword point for another 24 hours, flushing out of hiding another 300 of their enemies. Moreover, the bodies of Haman’s sons, in life the very symbol of continued anti-Semitic power and the possibility of revenge for Haman’s death, are transformed into a symbol of Jewish effectiveness when these grisly relics are put on display for potential opponents to consider.
By the time the streets of Susa have grown quiet after the second day’s battle, reports have begun to arrive from other cities and towns. Everywhere, the victory of the Jews has become a rout. Men have been hounded out and struck down, the specter of the massacre of which Haman had dreamed is dead, and the decree of death that has hung over the heads of the Jews for so long has been lifted. The day on which the enemies of the Jews had hoped to rule over them has been transformed, miraculously, into a day of honor and glory, with the Jews themselves achieving rule over those who hated them.
All this is considered a triumph by the narrative itself and by later Jewish tradition. But contemporary readers who gather on the holiday of Purim to read the Book of Esther aloud, as Jews have for 2,000 years, often find it difficult to look upon the account of the Jews’ war against their enemies in this way. They tend to lose interest in the story after the death of Haman. Indeed, many synagogues in the United States and elsewhere end the reading of the Megillah right there. This is despite the fact that Haman is hanged well before the actual turning point in Mordecai and Esther’s struggle to save the Jews, and long before the actual war itself, which is the event that in fact brings the Jews redemption.
There is good reason why the account of the Jews’ bloody and overwhelming victory, which in other societies would be remembered and savored with pleasure, is often underemphasized, passed over in discussion, and even, in some cases, avoided as if it were an object of shame. The liberal societies of our time are founded on the principle of nonviolent resolution of disputes. The doctrines of the social compact, the rule of law, the voluntary division of labor, and the mutual benefits of contractual exchange—all these are the basis not only for our political order, but also for a prevailing consciousness, whose hold is all the stronger as one approaches the more educated populations within Western society. Individuals who have grown up in this culture have few life experiences to suggest to them that there is any real need for force, violence, and war; and their educators strain to inculcate in them the belief that it is a virtue to “outgrow” the use of force. On such a view, reason and appetite are the only familiar and appropriate springs of human action. And all that is sought by reason and appetite—food, possessions, sex, and knowledge—can be obtained in quantity by most members of an industrialized and free society without recourse to force, and even, it is thought, without the subjugation of any individual by any other.
For those who see the world this way, the functioning of the human spirit, which, for lack of understanding, they refer to using pejoratives such as the “lust for power,” is a mystery. They tend to deny the existence of a real need for power and control within themselves, and they sincerely profess incomprehension when such needs manifest themselves in others. Thus, a great many individuals, recognizing no need for power, force, and war in themselves, come to consider these things to be objectively undesirable. Then evil, to the extent that it continues to exist as a concept at all, comes to be associated with power, force, and war and with those who have recourse to them.
Among Jews, such disregard for power and force has always been strongly present. It was the prophets of Israel who introduced into the world the ideal of an end to violence among nations, with Isaiah calling for swords and spears to be beaten into agricultural implements, and Jeremiah going so far as to call for a “new covenant” to be instilled in every breast at birth, so that men should no longer desire iniquity.
Jews have always been exposed to these ideas, and the history of the last centuries, in which they were largely cut off from the experience of armed conflict and high politics—and driven into ever-deeper familiarity with the realm of ideas—did much to refashion the Jews as a caste of dream-thinkers and idealists, for whom every step toward the establishment of societies based on the principle of nonviolent resolution of disputes has served to reconfirm the idea that power and violence are simply unnecessary.
For such readers, the story of Esther up until Haman’s demise seems quaint and harmless. Unable to understand the terrifying contest of spirit and rule that leads to Haman’s execution, they find in it nothing more than a dead coincidence: As it happens, the king’s wife turns out to be a Jew, and so Haman’s plot is foiled. The fact that Mordecai and Esther then go on to orchestrate a rampage that soaks the empire to its farthest reaches in blood is for them an embarrassment and a mystery. What need was there for this? What rejoicing and holiday could there be in this? What moral teaching could there be in this?
Yet the narrative itself is unambiguous in making the power and control that the Jews consolidated in the fighting a cause for celebration—and one of the book’s central moral themes. The summary that immediately precedes the account of the war therefore touts the fact that “on the day that the enemies of the Jews had expected to rule over them, the situation was reversed, and it was the Jews themselves who ruled over those who hated them.” The passage that caps the account of the fighting speaks of the relief gained in “killing seventy-five thousand of those who hated them,” making the morrow “a day of feasting and gladness.” And the summary that accompanies Mordecai’s official interpretation of events underscores the fact that Haman’s “evil plan, which he had intended against the Jews, should be turned on its head, and they hanged him and his sons on the gallows”—where “turning the evil plan on its head” means the death of all those who had planned to perpetrate the massacre against the Jews.
The trouble with this account for the contemporary reader is that today we are not supposed to permit ourselves any kind of pride or satisfaction over a victory that involves wholesale bloodshed, even if we do recognize it as having been necessary. In our time and place, being good is thought to be closely allied with the revulsion we have learned toward killing, not only of noncombatants but even of those participating in the fighting against us. Our own moral sensibilities are in this sense “higher” than those that drove the wars of liquidation in the books of Joshua and Samuel, and even the ending of the book of Esther. On the other hand, one need only think of the foolishness of certain pacifists, ecologists, vegetarians, and abstentionist sectarians who insist that the use of force, the expansion of industry, the killing of animals, or sexual intimacy are inherently immoral in order to recognize the possibility of wandering lost in endless, false, and dangerous “higher” moralities—that is, moralities that are supposedly higher than our own—whose pursuit bears no fruit other than destruction. As the saying of Ecclesiastes has it: “Be not overly righteous, and strive not to be too clever, for why should you destroy yourself?”
But stop and ask yourself this: Is there not some terrific hypocrisy in taking such pride in the moral heights we believe ourselves to have attained in comparison with the past—and yet finding ourselves appalled and annoyed at the demands of so many of those who, in their sanctimoniousness, their naiveté, their utter irrelevance to the world and its doings, hawk their ever-more-suffocating formulas for what we must and must not do to remain decent members of society? Should we not, after all, be grateful when we come across someone who is willing and able to apply moral principle more seriously, more thoroughly, and more consistently than we are willing to do?
Our society presses relentlessly for us to believe this—for us to admit that our biblical forefathers really knew little about justice and goodness, and that we ourselves are no great shakes either. For us to admit that some future morality that is just coming into being holds the key to being truly good. But in fact, people who are constantly raising the moral bar in this way, insisting on an ever-steeper hierarchy of value systems extending from the “highest” moralities of our time down to the supposed non-moralities of previous eras, are gravely mistaken. We are repelled by certain standards of behavior that still pertained in the time of biblical Israel—standards with respect to warfare, for example, or polygamy or slavery. But at the same time, there is something equally repellent about the idea that because of such hard-won improvements in the moral standards according to which we live, we must now, as a logical consequence, agree to be judged by a framework that claims to represent the still more enlightened view of the future. The fact that each of these two poles repels suggests that there is not one ideal at work in circumscribing and prescribing our behaviors as human beings, but two—and that the truth lies in the balance between them.
Man’s consciousness is challenged by objective conditions that prevent him from living in inert introspection and pull him, often against his will, toward action in the world. These conditions are essentially two: first, the needs and urges of his body; and second, the needs of others, of his family, his people, and his world. It takes little experience to discover that these two influences are fundamentally and irreconcilably contrary to each other, producing antithetical impulses within philosophy and religion. Each vies against the other, and they must be kept distinct for religion and ethics to be able to speak coherently. These are the ideals of purity and morality.
Purity. The needs and urges of one’s body and spirit have always been seen as demanding that man pull away from ideas and truths to occupy himself with eating, digestion, infatuations and sex, clothing and shelter, natural and chemical intoxications, sleep, discomfort and illness of various kinds, honor and anger, phobias, depressions and other impairments of the spirit, and death. They are a bottomless pit, into which all life’s energies and abilities easily disappear without a trace. After a lifetime preoccupied with the pursuit of them, man finds that he has nothing to show for his efforts other than having worn out a body that had started its career fresh. It has therefore been considered a virtue to minimize one’s concern for the needs and urges of the body and of the spirit to whatever degree possible so as to free the mind for its confrontation with higher things. This virtue, when found in men, has been called purity or holiness—the Hebrew word for holiness being kedusha, meaning “separation” from the body and from the concerns of men and the world. And its most basic ethical form is the command of the books of Moses: “Holy shall you be.”
Morality. The needs of the world, on the other hand—the protection of innocent life, the dissemination of truth and the establishment of justice, the alleviation of suffering, the development of productive talents and capacities, the facilitation of happiness, and the attainment of peace—all these have been held to be the noblest of efforts, and the pursuit of them has been held to be a virtue. Yet if they are to be pursued to any worthy effect, they demand the greatest possible concentration of the individual’s worldly resources, the maximal use of his body and his spirit to attain high levels of experience and skill, reputation, respect and wealth, allies and power, in order to have a hope of achieving whatever betterment of the world can possibly be achieved. And this virtue has been called morality or justice—the Hebrew word for justice being tzedek, meaning “that which is right,” its purport being one of involvement with the concerns of men and of the world. And here, too, the books of Moses speak in the language of a command: “Justice, justice, shall you pursue.”
The crux of the contradiction between these two ideals is man’s relationship to power. Purity requires that man renounce power; but morality requires that man have power in order to pursue right. This is true on the individual level, in which one can give to others only if one has something to give. But it is even truer when one considers moralities of scale, which require vast amounts of political power, economic power, and military power. Without power, there is no police force capable of defending the innocent, no court capable of doing justice, no army capable of wresting peace from the aggressor, no surplus capable of feeding and clothing the poor, or of paying to teach truth to the young. Morality requires power, and morality on a vast scale requires power on a vast scale.
For a saint, a man of perfect study and prayer, power is essentially exorcised as a motive, and so the entire world of spiritual blemishes—the obsession with honor and wealth, tantrums and rages, depressions, competitiveness, cruelty—is not found in him. But power is lost to him as a tool: He may give charity from what he has, but the good he can do is of necessity circumscribed; he may wish to do right in the world, but he has few resources and does not really know how. For the hero, the man of great deeds, the endless game of accumulating power and the preoccupation with wielding it, of learning the rules and building alliances, of consolidating the wins and recovering from the losses, of gradually growing to the point where one is in fact able to move an immovable world—all this leaves him relatively little time for contemplation, for study and thought, for prayer. He may include such activities in his daily routine, but he finds it difficult to concentrate as the world presses in on him, demanding that he return to it. The saint and the hero may be religious men both. Yet the saint makes a token effort toward power and leaves the rest to God, while the hero leaves nothing to God until he himself reaches exhaustion.
Purity and morality are untranslatable ideals, a vertical axis against which man measures how inwardly removed he is from the world, and a horizontal one measuring the degree to which he outwardly affects it—so that it is forever difficult to advance in one direction without doing damage in the other, a dilemma that appears in Jewish tradition again and again. Thus David, Israel’s greatest king, was responsible for the moral achievement of uniting the fragmented Jewish tribes and leading them to victory against the enemies that had caused them such suffering. Yet the Bible held that these very acts disqualified him from building the Temple in Jerusalem because he had “shed much blood upon the earth”—so that the construction of the sanctuary was left to his son Solomon.
Even more difficult is the rabbinic story told of the arrest of the Rabbis Eleazar ben Parata and Hanina ben Teriadon, who are to be brought to trial for their activities by the Roman authorities during the persecutions of the Emperor Hadrian in Judea in the 130s CE. Hanina was a well-known exemplar of saintliness, of whom it is said that his only sin was that he once allocated Purim alms as though they were regular charity, a mistake that he then corrected by replacing the misapportioned alms with money from his own pocket. Yet according to the Talmud, Hanina tells his cellmate: “Happy are you who have been arrested on five charges but will be delivered. Woe is unto me, who have been arrested on one charge but will be condemned. For you have occupied yourself with study of Torah as well as deeds of kindness, whereas I have occupied myself with Torah alone.” When Eleazar is brought before the tribunal and accused, this worldlier rabbi refuses to give a straight answer, dodging and maneuvering until he succeeds in confounding the court, eventually winning his release. But when Hanina is asked by the court why he occupies himself with Torah though it is against the law, he replies with words of purity: “Because I was commanded to do so by the Lord, my God.” Having confessed his guilt, he and his wife are put to death, and his daughter is consigned to serve the Romans in a brothel.
Lest the point somehow be missed, the Talmudic account refers in this context to the opinion of Rabbi Huna: “He who only occupies himself with the study of Torah is as if he had no God.” That is, even divine assistance depends on making sacrifices in purity in order to gather ability in the ways of tribunals and occupying armies.
From the earliest times, the response of the Jews to the conflict between the demands of purity and the moral need to achieve power in the world was for each individual to strike a balance between them. Jews were to seek power in the world for six days of the week and seek purity by withdrawing on the seventh; to seek worldly power through sexual relations and raising up heirs, yet maintain purity through the institution of marriage and periods of monthly abstinence; to seek power through the consumption of meat, yet strive for purity in the choice of the livestock and the manner of their slaughter, as well as through periodic fasts; and so forth. Yet to maintain an entire nation on course along this middle path, it was thought necessary to appoint individuals whose work would be the embodiment of each ideal, the possibility of embodying both at once apparently being intolerably remote. Thus, from the time of Moses, the Jews instituted what amounted to a division of labor between the judge and the priest, between the man of morality and the man of purity—and even between Judah and Joseph, the tribes responsible for the pursuit of justice and national well-being, and Levi, the tribe of purity. Similarly, when the Jews entered the land, rabbinic tradition suggests that three strategic necessities became incumbent upon them as a nation: that they appoint a king, build the Temple, and destroy the evil Amalek tribe—these representing the establishment of justice (the king, Amalek) and purity (the Temple) in Israel.
The idea that the political and military leader is essential to morality and religion—that he is, in other words, an important moral and religious figure in his own right—sits uneasily with our tendency to believe that politics is “immoral” or amoral, an opinion that has been handed down to us from antiquity. The last centuries have seen endless confusion on this score, as moral thought, which in the time of the prophets had been inseparable from human exploration of the political realm, has become the preserve of men who have removed themselves from the affairs of the world, the better to pursue “pure reason” and similar projects. Contrary to their own protestations, such individuals are not particularly adept at formulating moral systems, precisely because they have so little experience with what is required to achieve anything in practice. Kant, in particular, insisted on the equation of morality with the eradication of self-interest—that is, he insisted that morality was identical with purity—and thus was forced by his own reasoning to conclude that worldly actions are perhaps never actually moral. But this has not prevented generations of his disciples from applying this misguided standard to the actions of governments and politicians (all of which actions are “self-interested” in that they are taken to enhance the power, interests, and cause of that government or politician) and determining them all to be, for this reason, tainted and “immoral.”
Lending plausibility to an argument that would otherwise have little to recommend it is this: Politics is “dirty.” That is, people who are not known to lie, cheat, deceive, break agreements, blackmail and blacklist, to engage in dual loyalties and false loyalties, in campaigns of espionage and character assassination, in bribery and incitement to violence, suddenly find themselves doing so and more when political power is at stake. There is an objective sense in which what is accepted and even necessary in the political arena is neither necessary nor acceptable among family or friends, or even among rival businesses. Politics is dirty because it is in fact impure, relative to the world of family, house of prayer, school, and business in which most individuals spend most of their lives.
But this does not make it immoral. There are many activities that are impure in this sense: The modes of behavior accepted in the bedroom, the graveyard, the operating room, the slaughterhouse, the lavatory, and the battlefield are none of them suited to the relations of family, synagogue, school, and business. This is not because any of them are inherently immoral, but because they are impure. They are activities that focus attention on the body, its various organs, their functioning and malfunctioning, their decay and mortality—whereas family, school, and business are relatively pure, allowing us to focus on the minds of those who are with us and the unique human relations in which we are engaged with them. The house of prayer is held to be purer still, and here even much of what takes place in the realm of business, school, or family life is held in abeyance for a time so that we may attend more carefully to God and to his teaching.
This ability to distinguish spheres of greater purity from other, lesser ones is what allows mankind to step into civilization, leaving behind the physical organs and bodily fluids, decay and illness, the corpse, and death itself, in order to enter into a “safe space,” a bubble that is “separated” from the world, in which it is possible to concentrate on things that are at once more essential and more personal. When we enter into such times and such places, the appearance of things associated with impurity breaks the spell of the higher being that we strive to be, and so such intrusions are proscribed and even deplored.
The same may be said concerning the accepted behaviors of politics. Here, too, many of the activities are brutal, and in fact they have often been referred to as “naked power”: verbal actions whose meaning is pure force, acts of the spirit that are in their essence violence, whether accompanied by physical blows or not. Yet in order to achieve power to do good, one must be experienced, talented, and expert in the ways in which power is in fact allocated and applied. One must know war as it is waged by others, and be able to wage such wars more effectively than they. One must know finance as it is practiced by others, and be able to build an economy more effectively than they. One must be able to gain influence and wield it as it is wielded by others, yet use this influence more effectively than they. Power is a matter of beating one’s opponents at their own game and using the results for good. Participating in the ways of the political world as one finds it is not inherently immoral, any more than the activities of the lavatory are immoral. In neither case does a habitually pious person desire to behave in such ways. He does it neither for pleasure nor for some kind of personal gain, but because there is presumably no choice. And it would be absurd if each time an individual took it upon himself to achieve right and justice, even at the expense of his own personal purity, he would also have to be castigated for being immoral besides.
Indeed, the biblical narrative and subsequent rabbinic tradition reserved the appellation of tzadik, meaning “the righteous,” for precisely those figures whose lives are spent in outward political and moral action, immersed in power and evil, but who nonetheless manage to maintain a level of relative purity in these circumstances. This is the meaning of the oft-repeated Talmudic appellation “Joseph the righteous”: It is not that Joseph, while ruling amid the despotism and brutality of ancient Egypt, somehow manages to maintain an unparalleled standard of purity in the way he leads his life. Rather, the significance of his great act of self-discipline—the refusal of the Egyptian temptress that results in his being falsely accused of rape and thrown in a dungeon—is in the fact that he is able to maintain any standard of purity at all in the polluted realm in which he flourishes, and despite the ways in which he must accommodate this realm to attain success. Others referred to as “righteous” are of this sort as well. Noah saves mankind from the flood and is referred to as righteous in the books of Moses despite the fact that in his time “the evil of man was great upon the earth, and the whole nature of the thoughts of his mind was only to evil all day long.” Lot risks his life and that of his family to save perfect strangers from the mob and is referred to as righteous in the books of Moses despite living amid the depravity of Sodom. Jacob, who wrests control of his father’s inheritance from his powerful brother and makes a fortune among the Mesopotamian idolaters, is called righteous by the rabbis despite spending the best years of his life serving his father-in-law surrounded by immorality and idolatry. And Mordecai, who saves the Jews of Persia, is called righteous by the rabbis despite likewise living amid the iniquity of the Persian court.
Of course, the fact that the political world is a sphere of lesser purity does not legitimize every means to any political end. One cannot make great sacrifices in one’s purity and humanity where the ends being pursued are immoral or unimportant. The political struggles of municipal zoning boards, for example, or the notorious politics internal to academia, cannot justify relinquishing civilized behavior. In high politics, on the other hand, it is, as Joseph says in Genesis, “the preservation of the multitude of men alive” that is in fact at stake, so that it can always be reasonably argued that departures from our accustomed standards of purity are justifiable, and even obligatory.
Left to the hands of others who would use the power of the state for their own gain, the law would serve the few, the country would engage in oppression and unjust wars, and thousands would die for nothing. Indeed, it is the political world, with all its impurity, that makes it possible for the civil world of daily life to exist as it does. It is politics that musters the ugly power necessary for higher society to live oblivious of the muck, just as the body marshals the resources needed for the mind to do its work, although most of the time the mind is unaware of what is taking place beneath it. If the political world should one day fail in its impure task when faced by malevolent challenges from outside society or within, the bubble of civilization in which we spend most of our adult lives would come crashing down into the lava of impurity below.
There is no point in attempting to count the strata of impurity upon which our world floats, and upon which it depends for its existence. But it bears emphasizing that the impure sphere of politics floats like a bubble on top of other, yet impurer worlds: The realm of wars, both foreign and domestic, is one such, in which even today nations use the most gruesome means in order to survive—means that would be unthinkable even in an arena such as that of domestic political life. And beneath this lies another, even fouler world, which exists now only in the farthest reaches: that of the idolaters, in which murderous violence was acceptable even within the family, and in which no safety truly existed anywhere. It was this realm in which the genocides of antiquity took place: in which Rome put all of Carthage to the sword and sowed the soil of its lands with salt so that no human being should ever be able to persist there again. And it was in this world, according to the hideous exigencies of its wars, that Joshua entered the land of Canaan, after 40 years in the desert, with an imperative to secure a stable Jewish nation and faith:
You will beat them and you will utterly destroy them, you will sign no treaties with them, nor will you show them any mercy. And you will not make marriages with them: You will not give your daughter to his son, nor will you take his daughter for your son. For they will turn your son away from following me, and they will serve other gods….You will destroy their altars, and break down their images, and cut down their asherim, and burn their idols with fire….The idols of their gods will you burn with fire. Do not desire the silver and gold that is on them nor take it for yourself, lest you become ensnared by it, for it is a horror to the Lord, your God. You will not bring a horror into your house, lest you become accursed as it is, but you will loathe it and abhor it, for it is an accursed thing.
The point is all too clear: “They will turn your son away,” “lest you become ensnared,” “lest you become accursed as it is.” Without an end to the murderous Canaanite presence in the land, the moral life of the Jews, so we are told, cannot come into existence, for the Jews would rapidly become Canaanites themselves: perverse, murderous, idolatrous. Indeed, the subsequent narrative tells precisely this story. It tells of how the Jews, having failed to live up to the imperative of driving the Canaanites from the land, sank into a thousand years of assimilation following the ways of the idolaters.
The modern world is quick to decry this war against the Canaanites. But if we are honest, we have to admit that contemporary warfare has resorted to the categorical destruction of innocents for less. The “counter-value” warfare waged against Hiroshima and Dresden was not aimed at saving the United States or even Western civilization. The war had long since turned in favor of the Allies. Yet these enormities were deemed necessary to bring the war to a speedier conclusion. Japanese and German children were considered worthy of destruction to save the lives of American servicemen. The premise of the war against the Canaanites is, if anything, less cynical, since it assumes what was clearly not the case when the decision was taken to use the first nuclear weapons: that the possibility of Jewish civilization—indeed, of moral civilization itself—could not persist without the banishment of the worst level of impurity from the land.
That there may be a place in moral argumentation for such acts is not easily assimilated. Within the confines of our own world, the rules are different: One may not take the life of an innocent person to save one’s own life under any circumstance. When the individual violates this principle, it is rightly understood as the greatest of crimes. Yet most of us can glimpse our own descent into the realm in which our accepted norms of behavior dissolve when we consider contemporary scenarios in which the free world faces annihilation, or in which the State of Israel stands to be destroyed in a war with the Arabs or Iran. Would we refuse to order a nuclear strike in such a case? The harsh truth is that the immorality of such a strike, killing countless innocents to save a civilization, cannot be deduced from the immorality of murdering an innocent individual to save another.
From this, it is evident that the political arena is not merely “dirty.” In certain cases, it leads rapidly into a pollution in which man is transformed into a beast of the lowest grade: not merely killing individuals for his own survival, but destroying cities and bringing nations to ruin. This, at any rate, is what we find in the most horrifying of the biblical accounts of ancient warfare, which assume that there can be an imperative to wage war of this kind if the world has fallen into otherwise irreparable evil. It is the curse of politics that in certain cases such monstrous acts of impurity may be considered the most moral option given the paucity of alternatives. But, of course, it is always possible instead to preserve one’s own purity—and in doing so to allow the world to fall ever further.
IN THE BOOK OF ESTHER, Mordecai’s war is fought in the world of his time and place. It is fought by its rules because any other choice in that time and place would have been folly. Thus if one were to ask why so many men had to die on the day of the fighting if the results were by then practically assured, the answer is just that which we have read in Machiavelli’s politics. Without decisive action against an enemy that had been preparing to murder all the Jews, Mordecai would have guaranteed himself a reputation of hesitancy and mildness—a reputation that would have breathed new life into the anti-Semitism of the empire and left the king doubting the Jewish vizier’s abilities.
And if one were to ask why Haman’s 10 sons had to die, it is wishful thinking to argue that every one of them was active as a leader in the camp of the anti-Semites, although some of them were exactly that. Rather, their deaths are sought, as was accepted in the course of warfare and politics in antiquity, to prevent Haman’s enmity from leaving heirs, as well as to degrade his memory and emphasize the enormity of his defeat. The book of Daniel tells of Darius issuing a parallel order for the destruction of those who persecuted Daniel, along with their families: “And the king commanded, and they brought those men who had accused Daniel, and they cast them into the lion’s den, they, their children and their wives.” In Herodotus, Artabanus, uncle to Xerxes, similarly offers to wager his life and those of his children on the outcome of a war he opposes: “If things go well for the king, as you say they will, let me and my children be put to death. But if they fall out as I prophesy, let your children suffer, and you too, if you come back alive.”
And if one insists that Mordecai should have conducted the war without resorting to the impure norms of Persian politics, even though such a nod to purity might have jeopardized the endeavor, the first answer must be that of Ecclesiastes: “There are righteous men who perish through their righteousness, and there are the wicked who flourish by their wickedness. Be not overly righteous.”
Yet harsh as Mordecai’s onslaught is, he nevertheless demands that the Jews carry on their war on a purer level than that of the war they had expected to be waged against them. Thus Mordecai’s decree, copied more or less verbatim from Haman’s, speaks of the death of children and women, as well as the appropriation of all the property of their enemies, all with the intention of inspiring counterterror in the enemy camp. When the day itself is described in the narrative, however, there is no suggestion that the Jews followed through with these threats. No casualties are mentioned among the dependents, and indeed, the text repeatedly emphasizes that the Jews did not even touch their enemies’ property.
The issue of respecting the property rights of one’s enemies and their families is one that has its roots in the earlier stages of the plot, when Haman first approaches Ahashverosh with the hope of convincing him to destroy the Jews. In making his case, Haman seeks to engage every interest of the king’s to which he can appeal, including a possible financial interest: “If it please the king, let it be written that they be destroyed, and I will weigh out ten thousand talents of silver into the hands of those who have charge of the business, to bring it into the king’s treasuries.” This is an outrageously large sum, in the range of what Herodotus reports to have been the income in silver of the entire Persian Empire for a year, and the only imaginable source for such a fortune would have been the plunder of the Jews’ property. Yet Ahashverosh, ever eager to demonstrate his power by wasting state moneys, assures Haman that “the silver is given to you, and the people, to do with them as you see fit,” thus clearing the way for the vizier to offer the Jews’ property as an incentive to the murderers. In so doing, he greatly expands the circle of those who will potentially be willing to do the work of annihilation, including not only those who hate Jews, but those who want their property.
All of this is in contrast to the wars against the Canaanites and Amalek, in which plundering was proscribed. In the case of the Canaanites, the fear was principally that in claiming the property of the idolaters, the Jews would end up with idols in their homes to which they would be inevitably drawn. But in the case of Saul’s effort to destroy Amalek, there is no mention of idols, and the issue, once the plundering takes place, seems to be completely different: “But Saul and the people took pity on Agag, and on the best of the sheep, and the oxen, and the fatlings, and the lambs, and all that was good, and did not destroy them, but everything that was of little value and poor, they destroyed.” When confronted by the prophet Samuel, Saul explains: “I have transgressed the instruction of the Lord and your words, for I feared the people and listened to their words.” For this crime, of giving in to the desire of the people for plunder, Saul is stripped of his kingdom.
At stake in the argument over the right to plunder is the motive for destroying Amalek. In Samuel’s eyes, Amalek’s history of unlimited terror, bloodshed, and evildoing justifies what is otherwise a horrendous act. But if the Jews begin claiming Amalekite cattle for themselves, the war will turn out to have been fought, in fact, for another reason altogether. Far from engaging in an act whose purpose is to make the world safe from Amalek’s predations, they are in that case just engaging in an act of murder for the sake of stealing, itself a very great evil. Samuel instructs Saul to kill, horribly, so that a better life may become possible for mankind, but his fighting men want to kill for plunder. In Samuel’s eyes, the choice is between right and evil, and Saul chooses the latter.
The distinction between just war and murder is today referred to in Israel as the “purity of arms.” And this is what is at issue, too, in the story of Esther, in which Mordecai’s war against the Persian anti-Semites is recounted as a revisiting of the Amalekite war in the book of Samuel. Here, the emphasis on not touching the property of the anti-Semites is intended to indicate the purity of the cause. Men are killed because they had been planning to murder the Jews, and as a preemption against future threats. The fact that this is understood by the Jews to be the sole motive raises their warfare to a level of purity much higher than that of Haman, and higher too than that which had been practiced by their forebears in the time of Saul.
It is for this reason that rabbinic tradition refers to Mordecai as “the righteous”: because in successfully raising Jewish military action to a higher level of purity relative to the fearsome acts required by the politics and warfare of his place and time, he provided the kind of political leadership for which the Jews should hope in every generation.
Mordecai’s Challenge: An Essay on War, Leadership, and Purim
Must-Reads from Magazine
Exactly one week later, a Star Wars cantina of the American extremist right featuring everyone from David Duke to a white-nationalist Twitter personality named “Baked Alaska” gathered in Charlottesville, Virginia, to protest the removal of a statue honoring the Confederate general Robert E. Lee. A video promoting the gathering railed against “the international Jewish system, the capitalist system, and the forces of globalism.” Amid sporadic street battles between far-right and “antifa” (anti-fascist) activists, a neo-Nazi drove a car into a crowd of peaceful counterprotestors, killing a 32-year-old woman.
Here, in the time span of just seven days, was the dual nature of contemporary American anti-Semitism laid bare. The most glaring difference between these two displays of hate lies not so much in their substance—both adhere to similar conspiracy theories articulating nefarious, world-altering Jewish power—but rather their self-characterization. The animosity expressed toward Jews in Charlottesville was open and unambiguous, with demonstrators proudly confessing their hatred in the familiar language of Nazis and European fascists.
The socialists in Chicago, meanwhile, though calling for a literal second Holocaust on the shores of the Mediterranean, would fervently and indignantly deny they are anti-Semitic. On the contrary, they claim the mantle of “anti-fascism” and insist that this identity naturally makes them allies of the Jewish people. As for those Jews who might oppose their often violent tactics, they are at best bystanders to fascism, at worst collaborators in “white supremacy.”
So, whereas white nationalists explicitly embrace a tribalism that excludes Jews regardless of their skin color, the progressives of the DSA and the broader “woke” community conceive of themselves as universalists—though their universalism is one that conspicuously excludes the national longings of Jews and Jews alone. And whereas the extreme right-wingers are sincere in their anti-Semitism, the socialists who called for the elimination of Israel are just as sincere in their belief that they are not anti-Semitic, even though anti-Semitism is the inevitable consequence of their rhetoric and worldview.
The sheer bluntness of far-right anti-Semitism makes it easier to identify and stigmatize as beyond the pale; individuals like David Duke and the hosts of the “Daily Shoah” podcast make no pretense of residing within the mainstream of American political debate. But the humanist appeals of the far left, whose every libel against the Jewish state is paired with a righteous invocation of “justice” for the Palestinian people, invariably trigger repetitive and esoteric debates over whether this or that article, allusion, allegory, statement, policy, or political initiative is anti-Semitic or just critical of Israel. What this difference in self-definition means is that there is rarely, if ever, any argument about the substantive nature of right-wing anti-Semitism (despicable, reprehensible, wicked, choose your adjective), while the very existence of left-wing anti-Semitism is widely doubted and almost always indignantly denied by those accused of practicing it.T o be sure, these recent manifestations of anti-Semitism occur on the left and right extremes. And statistics tell a rather comforting story about the state of anti-Semitism in America. Since the Anti-Defamation League began tracking it in 1979, anti-Jewish hate crime is at an historic low; indeed, it has been declining since a recent peak of 1,554 incidents in 2006. America, for the most part, remains a very philo-Semitic country, one of the safest, most welcoming countries for Jews on earth. A recent Pew poll found Jews to be the most admired religious group in the United States.1 If American Jews have anything to dread, it’s less anti-Semitism than the loss of Jewish peoplehood through assimilation, that is being “loved to death” by Gentiles.2 Few American Jews can say that anti-Semitism has a seriously deleterious impact on their life, that it has denied them educational or employment opportunities, or that they fear for the physical safety of themselves or their families because of their Jewish identity.
The question is whether the extremes are beginning to move in on the center. In the past year alone, the DSA’s rolls tripled from 8,000 to 25,000 dues-paying members, who have established a conspicuous presence on social media reaching far beyond what their relatively miniscule numbers attest. The DSA has been the subject of widespread media coverage, ranging from the curious to the adulatory. The white supremacists, meanwhile, found themselves understandably heartened by the strange difficulty President Donald Trump had in disavowing them. He claimed, in fact, that there had been some “very fine people” among their ranks. “Thank you President Trump for your honesty & courage to tell the truth about #Charlottesville,” tweeted David Duke, while the white-nationalist Richard Spencer said, “I’m proud of him for speaking the truth.”
Indeed, among the more troubling aspects of our highly troubling political predicament—and one that, from a Jewish perspective, provokes not a small amount of angst—is that so many ideas, individuals, and movements that could once reliably be categorized as “extreme,” in the literal sense of articulating the views of a very small minority, are no longer so easily dismissed. The DSA is part of a much broader revival of the socialist idea in America, as exemplified by the growing readership of journals like Jacobin and Current Affairs, the popularity of the leftist Chapo Trap House podcast, and the insurgent presidential campaign of self-described democratic socialist Bernie Sanders—who, according to a Harvard-Harris poll, is now the most popular politician in the United States. Since 2015, the average age of a DSA member dropped from 64 to 30, and a 2016 Harvard poll found a majority of Millennials do not support capitalism.
Meanwhile, the Republican Party of Donald Trump offers “nativism and culture war wedges without the Reaganomics,” according to Nicholas Grossman, a lecturer in political science at the University of Illinois. A party that was once reliably internationalist and assertive against Russian aggression now supports a president who often preaches isolationism and never has even a mildly critical thing to say about the KGB thug ruling over the world’s largest nuclear arsenal.
Like ripping the bandage off an ugly and oozing wound, Trump’s presidential campaign unleashed a bevy of unpleasant social forces that at the very least have an indirect bearing on Jewish welfare. The most unpleasant of those forces has been the so-called alternative right, or “alt-right,” a highly race-conscious political movement whose adherents are divided on the “JQ” (Jewish Question). Throughout last year’s campaign, Jewish journalists (this author included) were hit with a barrage of luridly anti-Semitic Twitter messages from self-described members of the alt-right. The tamer missives instructed us to leave America for Israel, others superimposed our faces onto the bodies of concentration camp victims.3
I do not believe Donald Trump is himself an anti-Semite, if only because anti-Semitism is mainly a preoccupation—as distinct from a prejudice—and Trump is too narcissistic to indulge any preoccupation other than himself. And there is no evidence to suggest that he subscribes to the anti-Semitic conspiracy theories favored by his alt-right supporters. But his casual resort to populism, nativism, and conspiracy theory creates a narrative environment highly favorable to anti-Semites.
Nativism, of which Trump was an early and active practitioner, is never good for the Jews, no matter how affluent or comfortable they may be and notwithstanding whether they are even the target of its particular wrath. Racial divisions, which by any measure have grown significantly worse in the year since Trump was elected, hurt all Americans, obviously, but they have a distinct impact on Jews, who are left in a precarious position as racial identities calcify. Not only are the newly emboldened white supremacists of the alt-right invariably anti-Semites, but in the increasingly racialist taxonomy of the progressive left—which more and more mainstream liberals are beginning to parrot—Jews are considered possessors of “white privilege” and, thus, members of the class to be divested of its “power” once the revolution comes. In the racially stratified society that both extremes envision, Jews lose out, simultaneously perceived (by the far right) as wily allies and manipulators of ethnic minorities in a plot to mongrelize America and (by the far left) as opportunistic “Zionists” ingratiating themselves with a racist and exploitative “white” establishment that keeps minorities down.T his politics is bad for all Americans, and Jewish Americans in particular. More and more, one sees the racialized language of the American left being applied to the Middle East conflict, wherein Israel (which is, in point of fact, one of the most racially diverse countries in the world) is referred to as a “white supremacist” state no different from that of apartheid South Africa. 
In a book just published by MIT Press, ornamented with a forward by Cornel West and entitled “Whites, Jews, and Us,” a French-Algerian political activist named Houria Bouteldja asks, “What can we offer white people in exchange for their decline and for the wars that will ensue?” Drawing the Jews into her race war, Bouteldja, according to the book’s jacket copy, “challenges widespread assumptions among the left in the United States and Europe—that anti-Semitism plays any role in Arab–Israeli conflicts, for example, or that philo-Semitism doesn’t in itself embody an oppressive position.” Jew-hatred is virtuous, and appreciation of the Jews is racism.
Few political activists of late have done more to racialize the Arab–Israeli conflict—and, through insidious extension of the American racial hierarchy, designate American Jews as oppressors—than the Brooklyn-born Arab activist Linda Sarsour. An organizer of the Women’s March, Sarsour has seamlessly insinuated herself into a variety of high-profile progressive campaigns, a somewhat incongruent position given her reactionary views on topics like women’s rights in Saudi Arabia. (“10 weeks of PAID maternity leave in Saudi Arabia,” she tweets. “Yes PAID. And ur worrying about women driving. Puts us to shame.”) Sarsour, who is of Palestinian descent, claims that one cannot simultaneously be a feminist and a Zionist, when it is the exact opposite that is true: No genuine believer in female equality can deny the right of Israel to exist. The Jewish state respects the rights of women more than do any of its neighbors. In an April 2017 interview, Sarsour said that she had become a high-school teacher for the purpose of “inspiring young people of color like me.” Just three months earlier, however, in a video for Vox, Sarsour confessed, “When I wasn’t wearing hijab I was just some ordinary white girl from New York City.” The donning of Muslim garb, then, confers a racial caste of “color,” which in turn confers virtue, which in turn confers a claim on political power.
This attempt to describe the Israeli–Arab conflict in American racial vernacular marks Jews as white (a perverse mirror of Nazi biological racism) and thus implicates them as beneficiaries of “structural racism,” “white privilege,” and the whole litany of benefits afforded to white people at birth in the form of—to use Ta-Nehisi Coates’s abstruse phrase—the “glowing amulet” of “whiteness.” “It’s time to admit that Arthur Balfour was a white supremacist and an anti-Semite,” reads the headline of a recent piece in—where else? —the Forward, incriminating Jewish nationalism as uniquely perfidious by dint of the fact that, like most men of his time, a (non-Jewish) British official who endorsed the Zionist idea a century ago held views that would today be considered racist. Reading figures like Bouteldja and Sarsour brings to mind the French philosopher Pascal Bruckner’s observation that “the racialization of the world has to be the most unexpected result of the antidiscrimination battle of the last half-century; it has ensured that the battle continuously re-creates the curse from which it is trying to break free.”
If Jews are white, and if white people—as a group—enjoy tangible and enduring advantages over everyone else, then this racially essentialist rhetoric ends up with Jews accused of abetting white supremacy, if not being white supremacists themselves. This is one of the overlooked ways in which the term “white supremacy” has become devoid of meaning in the age of Donald Trump, with everyone and everything from David Duke to James Comey to the American Civil Liberties Union accused of upholding it. Take the case of Ben Shapiro, the Jewish conservative polemicist. At the start of the school year, Shapiro was scheduled to give a talk at UC Berkeley, his alma mater. In advance, various left-wing groups put out a call for protest in which they labeled Shapiro—an Orthodox Jew—a “fascist thug” and “white supremacist.” An inconvenient fact ignored by Shapiro’s detractors is that, according to the ADL, he was the top target of online abuse from actual white supremacists during the 2016 presidential election. (Berkeley ultimately had to spend $600,000 protecting the event from leftist rioters.)
A more pernicious form of this discourse is practiced by left-wing writers who, insincerely claiming to have the interests of Jews at heart, scold them and their communal organizations for not doing enough in the fight against anti-Semitism. Criticizing Jews for not fully signing up with the “Resistance” (which in form and function is fast becoming the 21st-century version of the interwar Popular Front), they then slyly indict Jews for being complicit in not only their own victimization but that of the entire country at the hands of Donald Trump. The first and foremost practitioner of this bullying and rather artful form of anti-Semitism is Jeet Heer, a Canadian comic-book critic who has achieved some repute on the American left due to his frenetic Twitter activity and availability when the New Republic needed to replace its staff that had quit en masse in 2014. Last year, when Heer came across a video of a Donald Trump supporter chanting “JEW-S-A” at a rally, he declared on Twitter: “We really need to see more comment from official Jewish groups like ADL on way Trump campaign has energized anti-Semitism.”
But of course “Jewish groups” have had plenty to say about the anti-Semitism expressed by some Trump supporters—too much, in the view of their critics. Just two weeks earlier, the ADL had released a report analyzing over 2 million anti-Semitic tweets targeting Jewish journalists over the previous year. This wasn’t the first time the ADL raised its voice against Trump and the alt-right movement he emboldened, nor would it be the last. Indeed, two minutes’ worth of Googling would have shown Heer that his pronouncements about organizational Jewish apathy were wholly without foundation.4
It’s tempting to dismiss Heer’s observation as mere “concern trolling,” a form of Internet discourse characterized by insincere expressions of worry. But what he did was nastier. Immediately presented with evidence for the inaccuracy of his claims, he sneered back with a bit of wisdom from the Jewish sage Hillel the Elder, yet cast as a mild threat: “If I am not for myself, who will be for me?” In other words: How can you Jews expect anyone to care about your kind if you don’t sufficiently oppose—as arbitrarily judged by moi, Jeet Heer—Donald Trump?
If this sort of critique were coming from a Jewish donor upset that his preferred organization wasn’t doing enough to combat anti-Semitism, or a Gentile with a proven record of concern for Jewish causes, it wouldn’t have turned the stomach. What made Heer’s interjection revolting is that, to put it mildly, he’s not exactly known for being sympathetic toward the Jewish plight. In 2015, Heer put his name to a petition calling upon an international comic-book festival to drop the Israeli company SodaStream as a co-sponsor because the Jewish state is “built on the mass ethnic cleansing of Palestinian communities and sustained through racism and discrimination.” Heer’s name appeared alongside that of Carlos Latuff, a Brazilian cartoonist who won second place in the Iranian government’s 2006 International Holocaust Cartoon Competition. For his writings on Israel, Heer has been praised as being “very good on the conflict” by none other than Philip Weiss, proprietor of the anti-Semitic hate site Mondoweiss.
In light of this track record, Heer’s newfound concern about anti-Semitism appeared rather dubious. Indeed, the bizarre way in which he expressed this concern—as, ultimately, a critique not of anti-Semitism per se but of the country’s foremost Jewish civil-rights organization—suggests he cares about anti-Semitism insofar as its existence can be used as a weapon to beat his political adversaries. And since the incorrigibly Zionist American Jewish establishment ranks high on that list (just below that of Donald Trump and his supporters), Heer found a way to blame it for anti-Semitism. And what does that tell you? It tells you that—presented with a 16-second video of a man chanting “JEW-S-A” at a Donald Trump rally—Heer’s first impulse was to condemn not the anti-Semite but the Jews.
Heer isn’t the only leftist (or New Republic writer) to assume this rhetorical cudgel. In a piece entitled “The Dismal Failure of Jewish Groups to Confront Trump,” one Stephen Lurie attacked the ADL for advising its members to stay away from the Charlottesville “Unite the Right Rally” and let police handle any provocations from neo-Nazis. “We do not have a Jewish organizational home for the fight against fascism,” he quotes a far-left Jewish activist, who apparently thinks that we live in the Weimar Republic and not a stable democracy in which law-enforcement officers and not the balaclava-wearing thugs of antifa maintain the peace. Like Jewish Communists of yore, Lurie wants to bully Jews into abandoning liberalism for the extreme left, under the pretext that mainstream organizations just won’t cut it in the fight against “white supremacy.” Indeed, Lurie writes, some “Jewish institutions and power players…have defended and enabled white supremacy.” The main group he fingers with this outrageous slander is the Republican Jewish Coalition, the implication being that this explicitly partisan Republican organization’s discreet support for the Republican president “enables white supremacy.”
It is impossible to imagine Heer, Lurie, or other progressive writers similarly taking the NAACP to task for its perceived lack of concern about racism, or castigating the Human Rights Campaign for insufficiently combating homophobia. No, it is only the cowardice of Jews that is condemned—condemned for supposedly ignoring a form of bigotry that, when expressed on the left, these writers themselves ignore or even defend. The logical gymnastics of these two New Republic writers is what happens when, at base, one fundamentally resents Jews: You end up blaming them for anti-Semitism. Blaming Jews for not caring enough about anti-Semitism is emotionally the same as claiming that Jews are to blame for anti-Semitism. Both signal an envy and resentment of Jews predicated upon a belief that they have some kind of authority that the claimant doesn’t and therefore needs to undermine.

This past election, one could not help but notice how the media seemingly discovered anti-Semitism when it emanated from the right, and then only when its targets were Jews on the left. It was enough to make one ask where they had been when left-wing anti-Semitism had been a more serious and pervasive problem. From at least 1996 (the year Pat Buchanan made his last serious attempt at securing the GOP presidential nomination) to 2016 (when the Republican presidential nominee did more to earn the support of white supremacists and neo-Nazis than any of his predecessors), anti-Semitism was primarily a preserve of the American left. In that two-decade period—spanning the collapse of the Oslo Accords and rise of the Second Intifada to the rancorous debate over the Iraq War and obsession with “neocons” to the presidency of Barack Obama and the 2015 Iran nuclear deal—anti-Israel attitudes and anti-Semitic conspiracy made unprecedented inroads into respectable precincts of the American academy, the liberal intelligentsia, and the Democratic Party.
The main form that left-wing anti-Semitism takes in the United States today is unhinged obsession with the wrongs, real or perceived, of the state of Israel, and the belief that its Jewish supporters in the United States exercise a nefarious control over the levers of American foreign policy. In this respect, contemporary left-wing anti-Semitism is not altogether different from that of the far right, though it usually lacks the biological component deeming Jews a distinct and inferior race. (Consider the left-wing anti-Semite’s eagerness to identify and promote Jewish “dissidents” who can attest to their co-religionists’ craftiness and deceit.) The unholy synergy of left and right anti-Semitism was recently epitomized by former CIA agent and liberal stalwart Valerie Plame’s hearty endorsement, on Twitter, of an article written for an extreme right-wing website by a fellow former CIA officer named Philip Giraldi: “America’s Jews Are Driving America’s Wars.” Plame eventually apologized for sharing the article with her 50,000 followers, but not before insisting that “many neocon hawks are Jewish” and that “just FYI, I am of Jewish descent.”
The main forum in which left-wing anti-Semitism appears is academia. According to the ADL, anti-Semitic incidents on college campuses doubled from 2014 to 2015, the most recent year for which data are available. Writing in National Affairs, Ruth Wisse observes that “not since the war in Vietnam has there been a campus crusade as dynamic as the movement of Boycott, Divestment, and Sanctions against Israel.” Every academic year, a seeming surfeit of controversies erupts on campuses across the country involving the harassment of pro-Israel students and organizations, the disruption of events involving Israeli speakers (even ones who identify as left-wing), and blatantly anti-Semitic outbursts by professors and student activists. There was the Oberlin professor of rhetoric, Joy Karega, who posted statements on social media claiming that Israel had created ISIS and had orchestrated the murderous attack on Charlie Hebdo in Paris. There is the Rutgers associate professor of women’s and gender studies, Jasbir Puar, who popularized the ludicrous term “pinkwashing” to defame Israel’s LGBT acceptance as a massive conspiracy to obscure its oppression of Palestinians. Her latest book, The Right to Maim, academically peer-reviewed and published by Duke University Press, attacks Israel for sparing the lives of Palestinian civilians, accusing its military of “shooting to maim rather than to kill” so that it may keep “Palestinian populations as perpetually debilitated, and yet alive, in order to control them.”
One could go on and on about such affronts not only to Jews and supporters of Israel but to common sense, basic justice, and anyone who believes in the prudent use of taxpayer dollars. That several organizations exist solely for the purpose of monitoring anti-Israel and anti-Semitic agitation on American campuses attests to the pervasiveness of the problem. But it’s unclear just how representative of the college experience these isolated examples really are. A 2017 Stanford study purporting to examine the issue interviewed 66 Jewish students at five California campuses noted for “being particularly fertile for anti-Semitism and for having an active presence of student groups critical of Israel and Zionism.” It concluded that “contrary to widely shared impressions, we found a picture of campus life that is neither threatening nor alarmist…students reported feeling comfortable on their campuses, and, more specifically, comfortable as Jews on their campuses.” To the extent that Jewish students do feel pressured, the report attempted to spread the blame around, indicting pro-Israel activists alongside those agitating against it. “[Survey respondents] fear that entering political debate, especially when they feel the social pressures of both Jewish and non-Jewish activist communities, will carry social costs that they are unwilling to bear.”
Yet by its own admission, the report “only engaged students who were either unengaged or minimally engaged in organized Jewish life on their campuses.” Researchers made a study of anti-Semitism, then, by interviewing the Jews least likely to experience it. “Most people don’t really think I’m Jewish because I look very Latina…it doesn’t come up in conversation,” one such student said in an interview. Ultimately, the report revealed more about the attitudes of unengaged (and, thus, uninformed) Jews than about the state of anti-Semitism on college campuses. That may certainly be useful in its own right as a means of understanding how unaffiliated Jews view debates over Israel, but it is not an accurate marker of developments on college campuses more broadly.
A more extensive 2016 Brandeis study of Jewish students at 50 schools found 34 percent agreed at least “somewhat” that their campus has a hostile environment toward Israel. Yet the variation was wide; at some schools, only 3 percent agreed, while at others, 70 percent did. Only 15 percent reported a hostile environment toward Jews. Anti-Semitism was found to be more prevalent at public universities than private ones, with the determinative factor being the presence of a Students for Justice in Palestine chapter on campus. Important context often lost in conversations about campus anti-Semitism, and reassuring to those concerned about it, is that it is simply not the most important issue roiling higher education. “At most schools,” the report found, “fewer than 10 percent of Jewish students listed issues pertaining to either Jews or Israel as among the most pressing on campus.”

For generations, American Jews have depended on anti-Semitism’s remaining within a moral quarantine, a cordon sanitaire, and America has reliably kept this societal virus contained. While there are no major signs that this barricade is breaking down in the immediate future, there are worrying indications on the political horizon.
Surveying the situation at the international level, the declining global position of the United States—both in terms of its hard military and economic power relative to rising challengers and its position as a credible beacon of liberal democratic values—does not portend well for Jews, American or otherwise. American leadership of the free world has, in addition to ensuring Israel’s security, underwritten the postwar liberal world order. And it is the constituent members of that order, the liberal democratic states, that have served as the best guarantor of the Jews’ life and safety over their long history. Were America’s global leadership role to diminish or evaporate, it would not only facilitate the rise of authoritarian states like Iran and terrorist movements such as al-Qaeda, committed to the destruction of Israel and the murder of Jews, but inexorably lead to a worldwide rollback of liberal democracy, an outcome that would inevitably redound to the detriment of Jews.
Domestically, political polarization and the collapse of public trust in every American institution save the military are demolishing what little confidence Americans have left in their system and governing elites, not to mention preparing the ground for some ominous political scenarios. Widely cited survey data reveal that the percentage of American Millennials who believe it “essential” to live in a liberal democracy hovers at just over 25 percent. If Trump is impeached or loses the next election, a good 40 percent of the country will be outraged and susceptible to belief in a stab-in-the-back theory accounting for his defeat. Whom will they blame? Perhaps the “neoconservatives,” who disproportionately make up the ranks of Trump’s harshest critics on the right?
Ultimately, the degree to which anti-Semitism becomes a problem in America hinges on the strength of the antibodies within the country’s communal DNA to protect its pluralistic and liberal values. But even if this resistance to tribalism and the cult of personality is strong, it may not be enough to stem the rise of an intellectual and societal disease that, throughout history, thrives upon economic distress, xenophobia, political uncertainty, ethnic chauvinism, conspiracy theory, and weakening democratic norms.
1 Somewhat paradoxically, according to FBI crime statistics, the majority of religiously based hate crimes target Jews, more than double the number targeting Muslims. This reflects the commitment of the country’s relatively small number of hard-core anti-Semites more than it does pervasive anti-Semitism.
4 The ADL has had to maintain a delicate balancing act in the age of Trump, coming under fire from many conservative Jews for a perceived partisan tilt against the right. This makes Heer’s complaint all the more ignorant — and unhelpful.
Review of 'The Once and Future Liberal' by Mark Lilla
Lilla, a professor at Columbia University, tells us that “the story of how a successful liberal politics of solidarity became a failed pseudo-politics of identity is not a simple one.” And about this, he’s right. Lilla quotes from the feminist authors of the 1977 Combahee River Collective Manifesto: “The most profound and potentially most radical politics come directly out of our own identity, as opposed to working to end somebody else’s oppression.” Feminists looked to instantiate the “radical” and electrifying phrase which insisted that “the personal is political.” The phrase, argues Lilla, was generally seen in “a somewhat Marxist fashion to mean that everything that seems personal is in fact political.”
The upshot was fragmentation. White feminists were deemed racist by black feminists—and both were found wanting by lesbians, who also had black and white contingents. “What all these groups wanted,” explains Lilla, “was more than social justice and an end to the [Vietnam] war. They also wanted there to be no space between what they felt inside and what they saw and did in the world.” He goes on: “The more obsessed with personal identity liberals become, the less willing they become to engage in reasoned political debate.” In the end, those on the left came to a realization: “You can win a debate by claiming the greatest degree of victimization and thus the greatest outrage at being subjected to questioning.”
But Lilla’s insights into the emotional underpinnings of political correctness are undercut by an inadequate, almost bizarre sense of history. He appears to be referring to the 1970s when, zigzagging through history, he writes that “no recognition of personal or group identity was coming from the Democratic Party, which at the time was dominated by racist Dixiecrats and white union officials of questionable rectitude.”
What is he talking about? Is Lilla referring to the Democratic Party of Lyndon Johnson, Hubert Humphrey, and George McGovern? Is he referring obliquely to George Wallace? If so, why is Wallace never mentioned? Lilla seems not to know that it was the 1972 McGovern Democratic Convention that introduced convention seats set aside for blacks and women.
At only 140 pages, this is a short book. But even so, Lilla could have devoted a few pages to Frankfurt ideologist Herbert Marcuse and his influence on the left. In the 1960s, Marcuse argued that leftists and liberals were entitled to restrain centrist and conservative speech on the grounds that the universities had to act as a counterweight to society at large. But this was not just rhetoric; in the campus disruption of the early 1970s at schools such as Yale, Cornell, and Amherst, Marcuse’s ideals were pushed to the fore.
If Lilla’s argument comes off as flaccid, perhaps that’s because the aim of The Once and Future Liberal is more practical than principled. “The only way” to protect our rights, he tells the reader, “is to elect liberal Democratic governors and state legislators who’ll appoint liberal state attorneys.” According to Lilla, “the paradox of identity liberalism” is that it undercuts “the things it professes to want,” namely political power. He insists, rightly, that politics has to be about persuasion but then contradicts himself in arguing that “politics is about seizing power to defend the truth.” In other words, Lilla wants a better path to total victory.
Given what Lilla, descending into hysteria, describes as “the Republican rage for destruction,” liberals and Democrats have to win elections lest the civil rights of blacks, women, and gays be rolled back. As proof of the ever-looming danger, he notes that when the “crisis of the mid-1970s threatened…the country turned not against corporations and banks, but against liberalism.” Yet he gives no hint of the trail of liberal failures that led to the crisis of the mid-’70s. You’d never know reading Lilla, for example, that the Black Power movement intensified racial hostilities that were then further exacerbated by affirmative action and busing. And you’d have no idea that, at considerable cost, the poverty programs of the Great Society failed to bring poorer African Americans into the economic mainstream. Nor does Lilla deal with the devotion to Keynesianism that produced inflation without economic growth during the Carter presidency.
Despite his discursive ambling through the recent history of American political life, Lilla has a one-word explanation for identity politics: Reaganism. “Identity,” he writes, is “Reaganism for lefties.” What’s crucial in combating Reaganism, he argues, is to concentrate on our “shared political” status as citizens. “Citizenship is a crucial weapon in the battle against Reaganite dogma because it brings home the fact that we are part of a legitimate common enterprise.” But then he asserts that the “American right uses the term citizenship today as a means of exclusion.” The passage might lead the reader to think that Lilla would take up the question of immigration and borders. But he doesn’t, and the closing passages of the book dribble off into characteristic zigzags. Lilla tells us that “Black Lives Matter is a textbook example of how not to build solidarity” but then goes on, without evidence, to assert the accuracy of the Black Lives Matter claim that African-Americans have been singled out for police mistreatment.
It would be nice to argue that The Once and Future Liberal is a near miss, a book that might have had enduring importance if only it went that extra step. But Lilla’s passing insights on the perils of a politically correct identity politics drown in the rhetoric of conventional bromides that fill most of the pages of this disappointing book.
In Athens several years ago, I had dinner with a man running for the national parliament. I asked him whether he thought he had a shot at winning. He was sure of victory, he told me. “I have hired a very famous political consultant from Washington,” he said. “He is the man who elected Reagan. Expensive. But the best.”
The political genius he then described was a minor political flunky I had met in Washington long ago, a more-or-less anonymous member of the Republican National Committee before he faded from view at the end of Ronald Reagan’s second term. Mutual acquaintances told me he still lived in a nice neighborhood in Northern Virginia, but they never could figure out what the hell he did to earn his money. (This is a recurring mystery throughout the capital.) I had to come to Greece to find the answer.
It is one of the dark arts of Washington, this practice of American political hacks traveling to faraway lands and suckering foreign politicians into paying vast sums for splashy, state-of-the-art, essentially worthless “services.” And it’s perfectly legal. Paul Manafort, who briefly managed Donald Trump’s campaign last summer, was known as a pioneer of the globe-trotting racket. If he hadn’t, as it were, veered out of his gutter into the slightly higher lane of U.S. presidential politics, he likely could have hoovered cash from the patch pockets of clueless clients from Ouagadougou to Zagreb for the rest of his natural life and nobody in Washington would have noticed.
But he veered, and now he and a colleague find themselves indicted by Robert Mueller, the Inspector Javert of the Russian-collusion scandal. When those indictments landed, they instantly set in motion the familiar scramble. Trump fans announced that the indictments were proof that there was no collusion between the Trump campaign and the Russians—or, in the crisp, emphatic phrasing of a tweet by the world’s Number One Trump Fan, Donald Trump: “NO COLLUSION!!!!” The Russian-scandal fetishists in the press corps replied in chorus: It’s still early! Javert required more time, and so will Mueller, and so will they.
A good Washington scandal requires a few essential elements. One is a superabundance of information. From these data points, conspiracy-minded reporters can begin to trace associations, warranted or not, and from the associations, they can infer motives and objectives with which, stretched together, they can limn a full-blown conspiracy theory. The Manafort indictment released a flood of new information, and at once reporters were pawing for nuggets that might eventually form a compelling case for collusion.
They failed to find any because Manafort’s indictment, in essence, involved his efforts to launder his profits from his international political work, not his work for the Trump campaign. Fortunately for the obsessives, another element is required for a good scandal: a colorful cast. The various Clinton scandals brought us Asian money-launderers and ChiCom bankers, along with an entire Faulkner novel’s worth of bumpkins, sharpies, and backwoods swindlers, plus that intern in the thong. Watergate, the mother lode of Washington scandals, featured a host of implausible characters, from the central-casting villain G. Gordon Liddy to Sam Ervin, a lifelong segregationist and racist who became a hero to liberals everywhere.
Here, at last, is one area where the Russian scandal has begun to show promise. Manafort and his business partner seem too banal to hold the interest of anyone but a scandal obsessive. Beneath the pile of paper Mueller dumped on them, however, another creature could be seen peeking out shyly. This would be the diminutive figure of George Papadopoulos. An unpaid campaign adviser to Trump, Papadopoulos pled guilty to lying to the FBI about the timing of his conversations with Russian agents. He is quickly becoming the stuff of legend.
Papadopoulos is an exemplar of a type long known to American politics. He is the nebbish bedazzled by the big time—achingly ambitious, though lacking the skill, or the cunning, to climb the greasy pole. So he remains at the periphery of the action, ever eager to serve. Papadopoulos’s résumé, for a man under 30, is impressively padded. He said he served as the U.S. representative to the Model United Nations in 2012, though nobody recalls seeing him there. He boasted of a four-year career at the Hudson Institute, though in fact he spent one year there as an unpaid intern and three doing contract research for one of Hudson’s scholars. On his LinkedIn page, he listed himself as a keynote speaker at a Greek American conference in 2008, but in fact he participated only in a panel discussion. The real keynoter was Michael Dukakis.
With this hunger for achievement, real or imagined, Papadopoulos could not let a presidential campaign go by without climbing aboard. In late 2015, he somehow attached himself to Ben Carson’s campaign. He was never paid and lasted four months. His presence went largely unnoticed. “If there was any work product, I never saw it,” Carson’s campaign manager told Time. The deputy campaign manager couldn’t even recall his name. Then suddenly, in April 2016, Papadopoulos appeared on a list of “foreign-policy advisers” to Donald Trump—and, according to Mueller’s court filings, resolved to make his mark by acting as a liaison between Trump’s campaign and the Russian government.
While Mueller tells the story of Papadopoulos’s adventures in the dry, Joe Friday prose of a legal document, it could easily be the script for a Peter Sellers movie from the Cold War era. The young man’s résumé is enough to impress the campaign’s impressionable officials as they scavenge for foreign-policy advisers: “Hey, Corey! This dude was in the Model United Nations!”
Papadopoulos (played by Sellers) sets about his mission. A few weeks after signing on to the campaign, he travels to Europe, where he meets a mysterious “Professor” (Peter Ustinov). “Initially the Professor seemed uninterested in Papadopoulos,” says Mueller’s indictment. A likely story! Yet when Papadopoulos lets drop that he’s an adviser to Trump, the Professor suddenly “appeared to take great interest” in him. They arrange a meeting in London to which the Professor invites a “female Russian national” (Elke Sommer). Without much effort, the femme fatale convinces Papadopoulos that she is Vladimir Putin’s niece. (“I weel tell z’American I em niece of Great Leader! Zat idjut belief ennytink!”) Over the next several months our hero sends many emails to campaign officials and to the Professor, trying to arrange a meeting between them. As far as we know from the indictment, nothing came of his mighty efforts.
And there matters lay until January 2017, when the FBI came calling. Agents asked Papadopoulos about his interactions with the Russians. Even though he must have known that hundreds of his emails on the subject would soon be available to the FBI, he lied and told the agents that the contacts had occurred many months before he joined the campaign. History will record Papadopoulos as the man who forgot that emails carry dates on them. After the FBI interview, according to the indictment, he tried to destroy evidence with the same competence he has brought to his other endeavors. He closed his Facebook account, on which several communications with the Russians had taken place. He threw out his old cellphone. (That should do it!) After that, he began wearing a blindfold, on the theory that if he couldn’t see the FBI, the FBI couldn’t see him.
I made that last one up, obviously. For now, the great hope of scandal hobbyists is that Papadopoulos was wearing a wire between the time he secretly pled guilty and the time his plea was made public. This would have allowed him to gather all kinds of incriminating dirt in conversations with former colleagues. And the dirt is there, all right, as the Manafort indictment proves. Unfortunately for our scandal fetishists, so far none of it shows what their hearts most desire: active collusion between Russia and the Trump campaign.
An Affair to Remember
All this changed with the release in 1967 of Arthur Penn’s Bonnie and Clyde and Mike Nichols’s The Graduate. These two films, made in nouveau European style, treated familiar subjects—a pair of Depression-era bank robbers and a college graduate in search of a place in the adult world—in an unmistakably modern manner. Both films were commercial successes that catapulted their makers and stars into the top echelon of what came to be known as “the new Hollywood.”
Bonnie and Clyde inaugurated a new era in which violence on screen simultaneously became bloodier and more aestheticized, and it has had enduring impact as a result. But it was The Graduate that altered the direction of American moviemaking with its specific appeal to younger and hipper moviegoers who had turned their backs on more traditional cinematic fare. When it opened in New York in December, the movie critic Hollis Alpert reported with bemusement that young people were lining up in below-freezing weather to see it, and that they showed no signs of being dismayed by the cold: “It was as though they all knew they were going to see something good, something made for them.”
The Graduate, whose aimless post-collegiate title character is seduced by the glamorous but neurotic wife of his father’s business partner, is part of the common stock of American reference. Now, a half-century later, it has become the subject of a book-length study, Beverly Gray’s Seduced by Mrs. Robinson: How The Graduate Became the Touchstone of a Generation.1 As is so often the case with pop-culture books, Seduced by Mrs. Robinson is almost as much about its self-absorbed Baby Boomer author (“The Graduate taught me to dance to the beat of my own drums”) as its subject. It has the further disadvantage of following in the footsteps of Mark Harris’s magisterial Pictures at a Revolution: Five Movies and the Birth of the New Hollywood (2008), in which the film is placed in the context of Hollywood’s mid-’60s cultural flux. But Gray’s book offers us a chance to revisit this seminal motion picture and consider just why it was that The Graduate spoke to Baby Boomers in a distinctively personal way.

The Graduate began life in 1963 as a novella of the same name by Charles Webb, a California-born writer who saw his book not as a comic novel but as a serious artistic statement about America’s increasingly disaffected youth. It found its way into the hands of a producer named Lawrence Turman who saw The Graduate as an opportunity to make the cinematic equivalent of Salinger’s The Catcher in the Rye. Turman optioned the book, then sent it to Mike Nichols, who in 1963 was still best known for his comic partnership with Elaine May but had just made his directorial debut with the original Broadway production of Barefoot in the Park.
Both men saw that The Graduate posed a problem to anyone seeking to put it on the screen. In Turman’s words, “In the book the character of Benjamin Braddock is sort of a whiny pain in the fanny [whom] you want to shake or spank.” To solve this problem, they turned to Buck Henry, who had co-created the popular TV comedy Get Smart with Mel Brooks, to write a screenplay that would retain much of Webb’s dryly witty dialogue (“I think you’re the most attractive of all my parents’ friends”) while making Benjamin less priggish.
Nichols’s first major act was casting Dustin Hoffman, an obscure New York stage actor pushing 30, for the title role. No one but Nichols seems to have thought him suitable in any way. Not only was Hoffman short and nondescript-looking, but he was unmistakably Jewish, whereas Benjamin is supposedly the scion of a newly monied WASP family from southern California. Nevertheless, Nichols decided he wanted “a short, dark, Jewish, anomalous presence, which is how I experience myself,” in order to underline Benjamin’s alienation from the world of his parents.
Nichols filled the other roles in equally unexpected ways. He hired the Oscar winner Anne Bancroft, only six years Hoffman’s senior, to play the unbalanced temptress who lures Benjamin into her bed, then responds with volcanic rage when he falls in love with her beautiful daughter Elaine. He and Henry also steered clear of on-screen references to the campus protests that had only recently started to convulse America. Instead, he set The Graduate in a timeless upper-middle-class milieu inhabited by people more interested in social climbing than self-actualization—the same milieu from which Benjamin is so alienated that he is reduced to near-speechlessness whenever his family and their friends ask him what he plans to do now that he has graduated.
The film’s only explicit allusion to its cultural moment is the use on the soundtrack of Simon & Garfunkel’s “The Sound of Silence,” the painfully earnest anthem of youthful angst that is for all intents and purposes the theme song of The Graduate. Nevertheless, Henry’s screenplay leaves little doubt that the film was in every way a work of its time and place. As he later explained to Mark Harris, it is a study of “the disaffection of young people for an environment that they don’t seem to be in sync with.…Nobody had made a film specifically about that.”
This aspect of The Graduate is made explicit in a speech by Benjamin that has no direct counterpart in the novel: “It’s like I was playing some kind of game, but the rules don’t make any sense to me. They’re being made up by all the wrong people. I mean, no one makes them up. They seem to make themselves up.”
The Graduate was Nichols’s second film, following his wildly successful movie version of Edward Albee’s Who’s Afraid of Virginia Woolf?. Albee’s play was a snarling critique of the American dream, which he believed to be a snare and a delusion. The Graduate had the same skeptical view of postwar America, but its pessimism was played for laughs. When Benjamin is assured by a businessman in the opening scene that the secret to success in America is “plastics,” we are meant to laugh contemptuously at the smugness of so blinkered a view of life. Moreover, the contempt is as real as the laughter: The Graduate has it both ways. For the same reason, the farcical climactic scene (in which Benjamin breaks up Elaine’s wedding to a handsome young WASP and carts her off to an unknown fate) is played without musical underscoring, a signal that what Benjamin is doing is really no laughing matter.
The youth-oriented message of The Graduate came through loud and clear to its intended audience, which paid no heed to the mixed reviews from middle-aged reviewers unable to grasp what Nichols and Henry were up to. Not so Roger Ebert, the newly appointed 25-year-old movie critic of the Chicago Sun-Times, who called The Graduate “the funniest American comedy of the year…because it has a point of view. That is to say, it is against something.”
Even more revealing was the response of David Brinkley, then the co-anchor of NBC’s nightly newscast, who dismissed The Graduate as “frantic nonsense” but added that his college-age son and his classmates “liked it because it said about the parents and others what they would have said about us if they had made the movie—that we are self-centered and materialistic, that we are licentious and deeply hypocritical about it, that we try to make them into walking advertisements for our own affluence.”
A year after the release of The Graduate, a film-industry report cited in Pictures at a Revolution revealed that “48 percent of all movie tickets in America were now being sold to filmgoers under the age of 24.” A very high percentage of those tickets were for The Graduate and Bonnie and Clyde. At long last, Hollywood had figured out what the Baby Boomers wanted to see.

And how does The Graduate look a half-century later? To begin with, it now appears to have been Mike Nichols’s creative “road not taken.” In later years, Nichols became less an auteur than a Hollywood director who thought like a Broadway director, choosing vehicles of solid middlebrow-liberal appeal and serving them faithfully without imposing a strong creative vision of his own. In The Graduate, by contrast, he revealed himself to be powerfully aware of the same European filmmaking trends that shaped Bonnie and Clyde. Within a naturalistic framework, he deployed non-naturalistic “new wave” cinematographic techniques with prodigious assurance—and he was willing to end The Graduate on an ambiguous note instead of wrapping it up neatly and pleasingly, letting the camera linger on the unsure faces of Hoffman and Ross as they ride off into an unsettling future.
It is this ambiguity, coupled with Nichols’s prescient decision not to allow The Graduate to become a literal portrayal of American campus life in the troubled mid-’60s, that has kept the film fresh. But The Graduate is fresh in a very particular way: It is a young person’s movie, the tale of a boy-man terrified by the prospect of growing up to be like his parents. Therein lay the source of its appeal to young audiences. The Graduate showed them what they, too, feared most, and hinted at a possible escape route.
In the words of Beverly Gray, who saw The Graduate when it first came out in 1967: “The Graduate appeared in movie houses just as we young Americans were discovering how badly we wanted to distance ourselves from the world of our parents….That polite young high achiever, those loving but smothering parents, those comfortable but slightly bland surroundings: They combined to form an only slightly exaggerated version of my own cozy West L.A. world.”
Yet to watch The Graduate today—especially if you first saw it when much younger—is also to be struck by the extreme unattractiveness of its central character. Hoffman plays Benjamin not as the comically ineffectual nebbish of Jewish tradition but as a near-catatonic robot who speaks by turns in a flat monotone and a frightened nasal whine. It is impossible to understand why Mrs. Robinson would want to go to bed with such a mousy creature, much less why Elaine would run off with him—an impression that has lately acquired an overlay of retrospective irony in the wake of accusations that Hoffman has sexually harassed female colleagues on more than one occasion. Precisely because Benjamin is so unlikable, it is harder for modern-day viewers to identify with him in the same way as did Gray and her fellow Boomers. To watch a Graduate-influenced film like Noah Baumbach’s Kicking and Screaming (1995), a poignant romantic comedy about a group of Gen-X college graduates who deliberately choose not to get on with their lives, is to see a closely similar dilemma dramatized in an infinitely more “relatable” way, one in which the crippling anxiety of the principal characters is presented as both understandable and pitiable, thus making it funnier.
Be that as it may, The Graduate is a still-vivid snapshot of a turning point in American cultural history. Before Benjamin Braddock, American films typically portrayed men who were not overgrown, smooth-faced children but full-grown adults, sometimes misguided but incontestably mature. After him, permanent immaturity became the default position of Hollywood-style masculinity.
For this reason, it will be interesting to see what the Millennials, so many of whom demand to be shielded from the “triggering” realities of adult life, make of The Graduate if and when they come to view it. I have a feeling that it will speak to a fair number of them far more persuasively than it did to those of us who—unlike Benjamin Braddock—longed when young to climb the high hill of adulthood and see for ourselves what awaited us on the far side.
1 Algonquin, 278 pages
“I think that’s best left to states and locales to decide,” DeVos replied. “If the underlying question is . . .”
Murphy interrupted. “You can’t say definitively today that guns shouldn’t be in schools?”
“Well, I will refer back to Senator Enzi and the school that he was talking about in Wapiti, Wyoming, I think probably there, I would imagine that there’s probably a gun in the school to protect from potential grizzlies.”
Murphy continued his line of questioning unfazed. “If President Trump moves forward with his plan to ban gun-free school zones, will you support that proposal?”
“I will support what the president-elect does,” DeVos replied. “But, senator, if the question is around gun violence and the results of that, please know that my heart bleeds and is broken for those families that have lost any individual due to gun violence.”
Because all this happened several million outrage cycles ago, you may have forgotten what happened next. Rather than mention DeVos’s sympathy for the victims of gun violence, or her support for federalism, or even her deference to the president, the media elite fixated on her hypothetical aside about grizzly bears.
“Betsy DeVos Cites Grizzly Bears During Guns-in-Schools Debate,” read the NBC News headline. “Citing grizzlies, education nominee says states should determine school gun policies,” reported CNN. “Sorry, Betsy DeVos,” read a headline at the Atlantic, “Guns Aren’t a Bear Necessity in Schools.”
DeVos never said that they were, of course. Nor did she “cite” the bear threat in any definitive way. What she did was decline the opportunity to make a blanket judgment about guns and schools because, in a continent-spanning nation of more than 300 million people, one standard might not apply to every circumstance.
After all, there might be—there are—cases when guns are necessary for security. Earlier this year, Virginia Governor Terry McAuliffe signed into law a bill authorizing some retired police officers to carry firearms while working as school guards. McAuliffe is a Democrat.
In her answer to Murphy, DeVos referred to a private meeting with Senator Enzi, who had told her of a school in Wyoming that has a fence to keep away grizzly bears. And maybe, she reasoned aloud, the school might have a gun on the premises in case the fence doesn’t work.
As it turns out, the school in Wapiti is gun-free. But we know that only because the Washington Post treated DeVos’s offhand remark as though it were the equivalent of Alexander Butterfield’s revealing the existence of the secret White House tapes. “Betsy DeVos said there’s probably a gun at a Wyoming school to ward off grizzlies,” read the Post headline. “There isn’t.” Oh, snap!
The article, like the one by NBC News, ended with a snarky tweet. The Post quoted user “Adam B.,” who wrote, “‘We need guns in schools because of grizzly bears.’ You know what else stops bears? Doors.” Clever.
And telling. It becomes more difficult every day to distinguish between once-storied journalistic institutions and the jabbering of anonymous egg-avatar Twitter accounts. The eagerness with which the press misinterprets and misconstrues Trump officials is something to behold. The “context” the best and brightest in media are always eager to provide us suddenly goes poof when the opportunity arises to mock, impugn, or castigate the president and his crew. This tendency is especially pronounced when the alleged gaffe fits neatly into a prefabricated media stereotype: that DeVos is unqualified, say, or that Rick Perry is, well, Rick Perry.
On November 2, the secretary of energy appeared at an event sponsored by Axios.com and NBC News. He described a recent trip to Africa:
It’s going to take fossil fuels to push power out to those villages in Africa, where a young girl told me to my face, “One of the reasons that electricity is so important to me is not only because I won’t have to try to read by the light of a fire, and have those fumes literally killing people, but also from the standpoint of sexual assault.” When the lights are on, when you have light, it shines the righteousness, if you will, on those types of acts. So from the standpoint of how you really affect people’s lives, fossil fuels is going to play a role in that.
This heartfelt story of the impact of electrification on rural communities was immediately distorted into a metaphor for Republican ignorance and cruelty.
“Energy Secretary Rick Perry Just Made a Bizarre Claim About Sexual Assault and Fossil Fuels,” read the Buzzfeed headline. “Energy Secretary Rick Perry Says Fossil Fuels Can Prevent Sexual Assault,” read the headline from NBC News. “Rick Perry Says the Best Way to Prevent Rape Is Oil, Glorious Oil,” said the Daily Beast.
“Oh, that Rick Perry,” wrote Gail Collins in a New York Times column. “Whenever the word ‘oil’ is mentioned, Perry responds like a dog on the scent of a hamburger.” You will note that the word “oil” is not mentioned at all in Perry’s remarks.
You will note, too, that what Perry said was entirely commonsensical. While the precise relation between public lighting and public safety is unknown, who can doubt that brightly lit areas feel safer than dark ones—and that, as things stand today, cities and towns are most likely to be powered by fossil fuels? “The value of bright street lights for dispirited gray areas rises from the reassurance they offer to some people who need to go out on the sidewalk, or would like to, but lacking the good light would not do so,” wrote Jane Jacobs in The Death and Life of Great American Cities. “Thus the lights induce these people to contribute their own eyes to the upkeep of the street.” But c’mon, what did Jane Jacobs know?
No member of the Trump administration so rankles the press as the president himself. On the November morning I began this column, I awoke to outrage that President Trump had supposedly violated diplomatic protocol while visiting Japan and its prime minister, Shinzo Abe. “President Trump feeds fish, winds up pouring entire box of food into koi pond,” read the CNN headline. An article on CBSNews.com headlined “Trump empties box of fish food into Japanese koi pond” began: “President Donald Trump’s visit to Japan briefly took a turn from formal to fishy.” A Bloomberg reporter traveling with the president tweeted, “Trump and Abe spooning fish food into a pond. (Toward the end, @potus decided to just dump the whole box in for the fish).”
Except that’s not what Trump “decided.” In fact, Trump had done exactly what Abe had done a few seconds before. That fact was buried in write-ups of the viral video of Trump and the fish. “President Trump was criticized for throwing an entire box of fish food into a koi pond during his visit to Japan,” read a tweet from the New York Daily News, linking to a report on phony criticism Trump received because of erroneous reporting from outlets like the News.
There’s an endless, circular, Möbius-strip-like quality to all this nonsense. Journalists are so eager to catch the president and his subordinates doing wrong that they routinely traduce the very canons of journalism they are supposed to hold dear. Partisan and personal animus, laziness, cynicism, and the oversharing culture of social media are a toxic mix. The press in 2017 is a lot like those Japanese koi fish: frenzied, overstimulated, and utterly mindless.
Review of Lessons in Hope, by George Weigel
Standing before the eternal flame, a frail John Paul shed silent tears for 6 million victims, including some of his own childhood friends from Krakow. Then, after reciting verses from Psalm 31, he began: “In this place of memories, the mind and heart and soul feel an extreme need for silence. … Silence, because there are no words strong enough to deplore the terrible tragedy of the Shoah.” Parkinson’s disease strained his voice, but it was clear that the pope’s irrepressible humanity and spiritual strength had once more stood him in good stead.
George Weigel watched the address from NBC’s Jerusalem studios, where he was providing live analysis for the network. As he recalls in Lessons in Hope, his touching and insightful memoir of his time as the pope’s biographer, “Our newsroom felt the impact of those words, spoken with the weight of history bearing down on John Paul and all who heard him: normally a place of bedlam, the newsroom fell completely silent.” The pope, he writes, had “invited the world to look, hard, at the stuff of its redemption.”
Weigel, a senior fellow at the Ethics and Public Policy Center, published his biography of John Paul in two volumes, Witness to Hope (1999) and The End and the Beginning (2010). His new book completes a John Paul triptych, and it paints a more informal, behind-the-scenes portrait. Readers, Catholic and otherwise, will finish the book feeling almost as though they knew the 264th successor of Peter. Lessons in Hope is also full of clerical gossip. Yet Weigel never loses sight of his main purpose: to illuminate the character and mind of the “emblematic figure of the second half of the twentieth century.”
The book’s most important contribution comes in its restatement of John Paul’s profound political thought at a time when it is sorely needed. Throughout, Weigel reminds us of the pope’s defense of the freedom of conscience; his emphasis on culture as the primary engine of history; and his strong support for democracy and the free economy.
When the Soviet Union collapsed, the pope continued to promote these ideas in such encyclicals as Centesimus Annus. The 1991 document reiterated the Church’s opposition to socialist regimes that reduce man to “a molecule within the social organism” and trample his right to earn “a living through his own initiative.” Centesimus Annus also took aim at welfare states for usurping the role of civil society and draining “human energies.” The pope went on to explain the benefits, material and moral, of free enterprise within a democratic, rule-of-law framework.
Yet a libertarian manifesto Centesimus Annus was not. It took note of free societies’ tendency to breed spiritual poverty, materialism, and social incohesion, which in turn could lead to soft totalitarianism. John Paul called on state, civil society, and people of God to supply the “robust public moral culture” (in Weigel’s words) that would curb these excesses and ensure that free-market democracies are ordered to the common good.
When Weigel emerged as America’s preeminent interpreter of John Paul, in the 1980s and ’90s, these ideas were ascendant among Catholic thinkers. In addition to Weigel, proponents included the philosopher Michael Novak and Father Richard John Neuhaus of First Things magazine (both now dead). These were faithful Catholics (in Neuhaus’s case, a relatively late convert) nevertheless at peace with the free society, especially the American model. They had many qualms with secular modernity, to be sure. But with them, there was no question that free societies and markets are preferable to unfree ones.
How things have changed. Today all the energy in those Catholic intellectual circles is generated by writers and thinkers who see modernity as beyond redemption and freedom itself as the problem. For them, the main question is no longer how to correct the free society’s course (by shoring up moral foundations, through evangelization, etc.). That ship has sailed or perhaps sunk, according to this view. The challenges now are to protect the Church against progressivism’s blows and to see beyond the free society as a political horizon.
Certainly the trends that worried John Paul in Centesimus Annus have accelerated since the encyclical was issued. “The claim that agnosticism and skeptical relativism are the philosophy and the basic attitude which correspond to democratic forms of political life” has become even more hegemonic than it was in 1991. “Those who are convinced that they know the truth and firmly adhere to it” increasingly get treated as ideological lepers. And with the weakening of transcendent truths, ideas are “easily manipulated for reasons of power.”
Thus a once-orthodox believer finds himself or herself compelled to proclaim that there is no biological basis to gender; that men can menstruate and become pregnant; that there are dozens of family forms, all as valuable and deserving of recognition as the conjugal union of a man and a woman; and that speaking of the West’s Judeo-Christian patrimony is tantamount to espousing white supremacy. John Paul’s warnings read like a description of the present.
The new illiberal Catholics—a label many of these thinkers embrace—argue that these developments aren’t a distortion of the idea of the free society but represent its very essence. This is a mistake. Basic to the free society is the freedom of conscience, a principle enshrined in democratic constitutions across the West and, I might add, in the Catholic Church’s post–Vatican II magisterium. Under John Paul, religious liberty became Rome’s watchword in the fight against Communist totalitarianism, and today it is the Church’s best weapon against the encroachments of secular progressivism. The battle is far from lost, moreover. There is pushback in the courts, at the ballot box, and online. Sometimes it takes demagogic forms that should discomfit people of faith. Then again, there is a reason such pushback is called “reaction.”
A bigger challenge for Catholics prepared to part ways with the free society as an ideal is this: What should Christian politics stand for in the 21st century? Setting aside dreams of reuniting throne and altar and similar nostalgia, the most cogent answer offered by Catholic illiberalism is that the Church should be agnostic with respect to regimes. As Harvard’s Adrian Vermeule has recently written, Christians should be ready to jettison all “ultimate allegiances,” including to the Constitution, while allying with any party or regime when necessary.
What at first glance looks like an uncompromising Christian politics—cunning, tactical, and committed to nothing but the interests of the Church—is actually a rather passive vision. For a Christianity that is “radically flexible” in politics is one that doesn’t transform modernity from within. In practice, it could easily look like the Vatican Ostpolitik diplomacy that sought to appease Moscow before John Paul was elected.
Karol Wojtyla discarded Ostpolitik as soon as he took the Petrine office. Instead, he preached freedom and democracy—and meant it. Already as archbishop of Krakow under Communism, he had created free spaces where religious and nonreligious dissidents could engage in dialogue. As pope, he expressed genuine admiration for the classically liberal and decidedly secular Vaclav Havel. He hailed the U.S. Constitution as the source of “ordered freedom.” And when, in 1987, the Chilean dictator Augusto Pinochet asked him why he kept fussing about democracy, seeing as “one system of government is as good as another,” the pope responded: No, “the people have a right to their liberties, even if they make mistakes in exercising them.”
The most heroic and politically effective Christian figure of the 20th century, in other words, didn’t follow the path of radical flexibility. His Polish experience had taught him that there are differences between regimes—that some are bound to uphold conscience and human dignity, even if they sometimes fall short of these commitments, while others trample rights by design. The very worst of the latter kind could even whisk one’s boyhood friends away to extermination camps. There could be no radical Christian flexibility after the Holocaust.