I. The Beginning of the Bible and Its Greek Counterparts
All the hopes that we entertain in the midst of the confusion and dangers of the present are founded, positively or negatively, directly or indirectly, on the experiences of the past. Of these experiences, the broadest and deepest—so far as Western man is concerned—are indicated by the names of two cities: Jerusalem and Athens. Western man became what he is, and is what he is, through the coming together of biblical faith and Greek thought. In order to understand ourselves and to illuminate our trackless way into the future, we must understand Jerusalem and Athens. It goes without saying that this is a task whose proper performance goes much beyond my power; but we cannot define our tasks by our powers, for our powers become known to us through the performance of our tasks, and it is better to fail nobly than to succeed basely.
The objects to which we refer when we speak of Jerusalem and Athens are understood today, by the science devoted to such objects, as cultures; “culture” is meant to be a scientific concept. According to this concept there is an indefinitely large number of cultures: n cultures. The scientist who studies them beholds them as objects; as scientist, he stands outside all of them; he has no preference for any of them; he is not only impartial but objective; he is anxious not to distort any of them; in speaking about them he avoids any “culture-bound” concepts—i.e., concepts bound to any particular culture or kind of culture. In many cases the objects studied by the scientist of culture do not or did not know that they are or were cultures. This causes no difficulty for him: electrons also do not know that they are electrons; even dogs do not know that they are dogs. By the mere fact that he speaks of his objects as cultures, the scientific student takes it for granted that he understands the people whom he studies better than they understood or understand themselves.
This whole approach has been questioned for some time, but the questioning does not seem to have had any effect on the scientists. The man who started the questioning was Nietzsche. We have said that according to the prevailing view there were or are n cultures. Let us say there were or are 1,001 cultures, thus reminding ourselves of the 1,001 Arabian Nights; the account of the cultures, if it is well done, will be a series of exciting stories, perhaps of tragedies. Accordingly, Nietzsche speaks of our subject in a speech by his Zarathustra that is entitled “Of 1,000 Goals and One.” The Hebrews and the Greeks appear in this speech as two among a number of nations, not superior to the two others that are mentioned or to the 997 that are not. The peculiarity of the Greeks, according to Nietzsche, is the full dedication of the individual to the contest for excellence, distinction, supremacy. The peculiarity of the Hebrews is the utmost honoring of father and mother. Nietzsche’s reverence for the sacred tables of the Hebrews, as well as for those of the other nations in question, is deeper than that of any other beholder. Yet since he too is only a beholder of these tables, since what one table commends or commands is incompatible with what the other tables command, he himself is not subject to the commandments of any. This is true also and especially of the tables, or “values,” of modern Western culture. But according to him, all scientific concepts, and hence in particular the concept of culture, are culture-bound; the concept of culture is an outgrowth of 19th-century Western culture; its application to the “cultures” of other ages and climates is an act stemming from the spiritual imperialism of that particular culture. There is, then, for Nietzsche, a glaring contradiction between the claimed objectivity of the science of cultures and the subjectivity of that science. To state the case differently, one cannot behold—i.e., truly understand—any culture unless one is firmly rooted in one’s own culture or unless one belongs, in one’s capacity as a beholder, to some culture. But if the universality of the beholding of all cultures is to be preserved, the culture to which the beholder of all cultures belongs must be the universal culture, the culture of mankind, the world culture; the universality of beholding presupposes, if only by anticipating it, the universal culture which is no longer one culture among many. Nietzsche therefore sought a culture that would no longer be particular and hence in the last analysis arbitrary. The single goal of mankind is conceived by him as in a sense super-human: he speaks of the super-man of the future. The super-man is meant to unite in himself, on the highest level, both Jerusalem and Athens.
However much the science of all cultures may protest its innocence of all preferences or evaluations, it fosters a specific moral posture. Since it requires openness to all cultures, it fosters universal tolerance and the exhilaration which derives from the beholding of diversity; it necessarily affects all cultures that it can still affect by contributing to their transformation in one and the same direction; it willy-nilly brings about a shift of emphasis from the particular to the universal. By asserting, if only implicitly, the rightness of pluralism, it asserts that pluralism is the right way; it asserts the monism of universal tolerance and respect for diversity; for by virtue of being an “-ism,” pluralism is a monism.
One remains somewhat closer to the science of culture as it is commonly practiced if one limits oneself to saying that every attempt to understand the phenomena in question remains dependent upon a conceptual framework that is alien to most of these phenomena and therefore necessarily distorts them. “Objectivity” can be expected only if one attempts to understand the various cultures or peoples exactly as they understand or understood themselves. Men of ages and climates other than our own did not understand themselves in terms of cultures because they were not concerned with culture in the present-day meaning of the term. What we now call culture is the accidental result of concerns that were not concerns with culture but with other things—above all with the Truth.
Yet our intention to speak of Jerusalem and Athens seems to compel us to go beyond the self-understanding of either. Or is there a notion, a word that points to the highest that both the Bible and the greatest works of the Greeks claim to convey? There is such a word: wisdom. Not only the Greek philosophers but the Greek poets as well were considered to be wise men, and the Torah is said, in the Torah, to be “your wisdom in the eyes of the nations.” We, then, must try to understand the difference between biblical wisdom and Greek wisdom. We see at once that each of the two claims to be the true wisdom, thus denying to the other its claim to be wisdom in the strict and highest sense. According to the Bible, the beginning of wisdom is fear of the Lord; according to the Greek philosophers, the beginning of wisdom is wonder. We are thus compelled from the very beginning to make a choice, to take a stand. Where then do we stand? Confronted by the incompatible claims of Jerusalem and Athens, we are open to both and willing to listen to each. We ourselves are not wise but we wish to become wise. We are seekers for wisdom, “philo-sophoi.” Yet since we say that we wish to hear first and then to act or to decide, we have already decided in favor of Athens against Jerusalem.
This, indeed, seems to be the necessary position for all of us who cannot be Orthodox and therefore must accept the principle of the historical-critical study of the Bible. The Bible was traditionally understood to be the true and authentic account of the deeds of God and men from the beginning till the restoration after the Babylonian exile. The deeds of God include His legislation as well as His inspirations to the prophets, and the deeds of men include their praises of God and their prayers to Him as well as their God-inspired admonitions. Biblical criticism starts from the observation that the biblical account is in important respects not authentic but derivative or consists not of “histories” but of “memories of ancient histories,” to borrow a Machiavellian expression. Biblical criticism reached its first climax in Spinoza’s Theological-Political Treatise, which is frankly anti-theological; Spinoza read the Bible as he read the Talmud and the Koran. The result of his criticism can be summarized as follows: the Bible consists to a considerable extent of self-contradictory assertions, of remnants of ancient prejudices or superstitions, and of the outpourings of an uncontrolled imagination; in addition, it is poorly compiled and poorly preserved. He arrived at this conclusion by presupposing the impossibility of miracles. The considerable differences between 19th- and 20th-century biblical criticism and that of Spinoza can be traced to their difference in regard to the evaluation of imagination: whereas for Spinoza imagination is simply sub-rational, it was assigned a much higher rank in later times when it was understood as the vehicle of religious or spiritual experience, which necessarily expresses itself in symbols and the like. The historical-critical study of the Bible is the attempt to understand the various layers of the Bible as they were understood by their immediate addressees, i.e., the contemporaries of its authors. Of course, the Bible speaks of many things—for instance, the creation of the world—that for the biblical authors themselves belong to the remote past. But there is undoubtedly much history in the Bible—accounts of events written by contemporaries or near-contemporaries. One is thus led to say that the Bible contains both “myth” and “history.” Yet this distinction is alien to the Bible; it is a special form of the Greek distinction between mythos and logos. From the point of view of the Bible, the “myths” are as true as the “histories”: what Israel “in fact” did or suffered cannot be understood except in the light of the “facts” of Creation and Election. What is now called “historical” are those deeds and speeches that are equally accessible to the believer and to the unbeliever. But from the point of view of the Bible, the unbeliever is the fool who has said in his heart “there is no God”; the Bible narrates everything as it is credible to the wise in the biblical sense of wisdom. Let us never forget that there is no biblical word for doubt. The biblical signs and wonders convince men who have little faith or who believe in other gods; they are not addressed to “the fools who say in their hearts ‘there is no God.’”
It is true that we cannot ascribe to the Bible the theological concept of miracles, for that concept presupposes the concept of nature, and the concept of nature is foreign to the Bible. One is, however, tempted to ascribe to the Bible what one may call the poetic concept of miracles as illustrated by Psalm 114:
When Israel went out of Egypt, the house of Jacob from a people of strange tongue, Judah became his sanctuary and Israel his dominion. The sea saw and fled; the Jordan turned back. The mountains skipped like rams, the hills like lambs. What ails thee, sea, that thou fleest, thou Jordan that thou turnest back? Ye mountains that ye skip like rams, ye hills like lambs? From the presence of the Lord tremble thou earth, from the presence of the God of Jacob who turns the rock into a pond of water, the flint into a fountain of waters.
The presence of God calls forth from His creatures a conduct that differs strikingly from their ordinary conduct: it enlivens the lifeless; it makes fluid the fixed. It is not easy to say whether the author of the psalm did not mean his utterance to be simply or literally true. It is easy to say that the concept of poetry—as distinguished from that of song—is foreign to the Bible. It is perhaps simpler to say that owing to the victory of science over natural theology the impossibility of miracles can no longer be said to be established but has degenerated to the status of an undemonstrable hypothesis. One may trace to the hypothetical character of this fundamental premise the hypothetical character of many, not to say all, results of biblical criticism. Certain it is that biblical criticism in all its forms makes use of terms having no biblical equivalents and is to this extent unhistorical.
How then must we proceed? We shall not take issue with the findings or even the premises of biblical criticism. Let us grant that the Bible and in particular the Torah consists to a considerable extent of “memories of ancient histories,” even of memories of memories. But memories of memories are not necessarily distorted or pale reflections of the original; they may be recollections of recollections, deepenings through meditation of the primary experience. We shall therefore take the latest and uppermost layer as seriously as the earlier ones. We shall start from the uppermost layer—from what is first for us, even though it may not be simply the first. We shall start, that is, where both the traditional and the historical study of the Bible necessarily start. In thus proceeding we avoid the compulsion to make an advance decision in favor of Athens against Jerusalem. For the Bible does not require us to believe in the miraculous character of events that the Bible does not present as miraculous. God’s speaking to men may be described as miraculous, but the Bible does not claim that the putting-together of those speeches was done miraculously. We begin at the beginning, at the beginning of the beginning. The beginning of the beginning happens to deal with the beginning: the creation of heaven and earth. The Bible begins reasonably.
“In the beginning God created heaven and earth.” Who says this? We are not told; hence we do not know. We have no right to assume that God said it, for the Bible introduces God’s sayings by expressions like “God said.” We shall then assume that the words were spoken by a nameless man. Yet no man can have been an eyewitness of God’s creating heaven and earth; the only eyewitness was God. Since “There did not arise in Israel a prophet like Moses whom the Lord knew face to face,” it is understandable that tradition ascribed to Moses the sentence quoted and its whole sequel. But what is understandable or plausible is not as such certain. The narrator does not claim to have heard the account from God; perhaps he heard it from some man or men; perhaps he is retelling a tale. The Bible continues: “And the earth was unformed and void. . . .” It is not clear whether the earth thus described was created by God or antedated His creation. But it is quite clear that while speaking about how the earth looked at first, the Bible is silent about how heaven looked at first. The earth, i.e., that which is not heaven, seems to be more important than heaven. This impression is confirmed by the sequel.
God created everything in six days. On the first day He created light; on the second, heaven; on the third, the earth, the seas, and vegetation; on the fourth, the sun, the moon, and the stars; on the fifth, the water animals and the birds; and on the sixth, the land animals and man. The most striking difficulties are these: light and hence day (and night) are presented as preceding the sun, and vegetation is presented as preceding the sun. The first difficulty is disposed of by the observation that creation-days are not sun-days. One must add at once, however, that there is a connection between the two kinds of days, for there is a connection, a correspondence between light and sun. The account of creation manifestly consists of two parts, the first part dealing with the first three creation-days and the second part dealing with the last three. The first part begins with the creation of light and the second with the creation of the heavenly light-givers. Correspondingly, the first part ends with the creation of vegetation and the second with the creation of man. All creatures dealt with in the first part lack local motion; all creatures dealt with in the second part possess local motion.1 Vegetation precedes the sun because vegetation lacks local motion and the sun possesses it. Vegetation belongs to the earth; it is rooted in the earth; it is the fixed covering of the fixed earth.2 Vegetation was brought forth by the earth at God’s command; the Bible does not speak of God’s “making” vegetation; but as regards the living beings in question, God commanded the earth to bring them forth and yet God “made” them. Vegetation was created at the end of the first half of the creation-days; at the end of the last half, the living beings that spend their whole lives on the firm earth were created. The living beings—beings that possess life in addition to local motion—were created on the fifth and sixth days, on the days following the day on which the heavenly light-givers were created. The Bible presents the creatures in an ascending order. Heaven is lower than earth. The heavenly light-givers lack life; they are lower than the lowliest living beast; they serve the living creatures, which are to be found only beneath heaven; they have been created in order to rule over day and night: they have not been made in order to rule over the earth, let alone over man.
The most striking characteristic of the biblical account of creation is its demoting or degrading of heaven and the heavenly lights. Sun, moon, and stars precede the living things because they are lifeless: they are not gods. What the heavenly lights lose, man gains; man is the peak of creation. The creatures of the first three days cannot change their places; the heavenly bodies change their places but not their courses; the living beings change their courses but not their “ways”; men alone can change their “ways.” Man is the only being created in God’s image. Only in the case of man’s creation does the biblical account of creation repeatedly speak of God’s “creating” him; in the case of the creation of heaven and the heavenly bodies, that account speaks of God’s “making” them. Similarly, only in the case of man’s creation does the Bible intimate that there is a multiplicity in God: “Let us make man in our image, after our likeness. . . . So God created man in His image, in the image of God He created him; male and female He created them.” Bisexuality is not a preserve of man, but only man’s bisexuality could give rise to the view that there are gods and goddesses: there is no biblical word for “goddess.” Hence creation is not begetting. The biblical account of creation teaches silently what the Bible teaches elsewhere explicitly: there is only one God, the God whose name is written as the Tetragrammaton, the living God Who lives from ever to ever, Who alone has created heaven and earth and all their hosts; He has not created any gods and hence there are no gods besides Him. The many gods whom men worship are either nothings that owe such being as they possess to man’s making them, or if they are something (like sun, moon, and stars), they surely are not gods.3 All non-polemical references to “other gods” occurring in the Bible are fossils whose preservation indeed poses a question but only a rather unimportant one. Not only did the biblical God not create any gods; on the basis of the biblical account of creation, one could doubt whether He created any beings one would be compelled to call “mythical”: heaven and earth and all their hosts are always accessible to man as man. One would have to start from this fact in order to understand why the Bible contains so many sections that, on the basis of the distinction between mythical (or legendary) and historical, would have to be described as historical.
According to the Bible, creation was completed by, and culminated in, the creation of man. Only after the creation of man did God “see all that He had made, and behold, it was very good.” What then is the origin of the evil or the bad? The biblical answer seems to be that since everything of divine origin is good, evil is of human origin. Yet if God’s creation as a whole is very good, it does not follow that all its parts are good or that creation as a whole contains no evil whatsoever: God did not find all parts of His creation to be good. Perhaps creation as a whole cannot be “very good” if it does not contain some evils. There cannot be light if there is not darkness, and the darkness is as much created as is the light: God creates evil as well as He makes peace (Isaiah 45:7). However this may be, the evils whose origin the Bible lays bare, after it has spoken of creation, are a particular kind of evils: the evils that beset man. Those evils are not due to creation or implicit in it, as the Bible shows by setting forth man’s original condition. In order to set forth that condition, the Bible must retell man’s creation by making man’s creation as much as possible the sole theme. This second account answers the question, not of how heaven and earth and all their hosts have come into being but of how human life as we know it—beset with evils with which it was not beset originally—has come into being. This second account may only supplement the first account but it may also correct it and thus contradict it. After all, the Bible never teaches that one can speak about creation without contradicting oneself. In post-biblical parlance, the mysteries of the Torah (sithre torah) are the contradictions of the Torah; the mysteries of God are the contradictions regarding God.
The first account of creation ended with man; the second account begins with man. According to the first account, God created man and only man in His image; according to the second account, God formed man from the dust of the earth and He blew into his nostrils the breath of life. The second account makes clear that man consists of two profoundly different ingredients, a high one and a low one. According to the first account, it would seem that man and woman were created simultaneously; according to the second account, man was created first. The life of man as we know it, the life of most men, is that of tillers of the soil; their life is needy and harsh. If human life had been needy and harsh from the very beginning, man would have been compelled or at least almost irresistibly tempted to be harsh, uncharitable, unjust; he would not have been fully responsible for his lack of charity or justice. But man is to be fully responsible. Hence the harshness of human life must be due to man’s fault. His original condition must have been one of ease: he was not in need of rain nor of hard work; he was put by God into a well-watered garden that was rich in trees that were good for food. Yet while man was created for a life of ease, he was not created for a life of luxury: there was no gold or precious stones in the garden of Eden. Man was created for a simple life. Accordingly, God permitted him to eat of every tree of the garden except the tree of knowledge of good and evil, “for in the day that you eat of it, you shall surely die.” Man was not denied knowledge; without knowledge he could not have known the tree of knowledge, nor the woman, nor the brutes; nor could he have understood the prohibition. Man was denied knowledge of good and evil, i.e., the knowledge sufficient for guiding himself, his life. Though not being a child, he was to live in childlike simplicity and obedience to God. We are free to surmise that there is a connection between the demotion of heaven in the first account and the prohibition against eating of the tree of knowledge in the second. While man was forbidden to eat of the tree of knowledge, he was not forbidden to eat of the tree of life.
Man, lacking knowledge of good and evil, was content with his condition and in particular with his loneliness. But God, possessing knowledge of good and evil, found that “it is not good for man to be alone, so I will make him a helper as his counterpart.” So God formed the brutes and brought them to man, but they proved not to be the desired helpers. Thereupon God formed the woman out of a rib of the man. The man welcomed her as bone of his bones and flesh of his flesh but, lacking knowledge of good and evil, he did not call her good. The narrator adds that “therefore [namely because the woman is bone of man’s bone and flesh of his flesh] a man leaves his father and his mother, and cleaves to his wife, and they become one flesh.” Both were naked but, lacking knowledge of good and evil, they were not ashamed.
Thus the stage was set for the fall of our first parents. The first move came from the serpent, the most cunning of all the beasts of the field; it seduced the woman into disobedience and then the woman seduced the man. The seduction moves from the lowest to the highest. The Bible does not tell what induced the serpent to seduce the woman into disobeying the divine prohibition. It is reasonable to assume that the serpent acted as it did because it was cunning, i.e., possessed a low kind of wisdom, a congenital malice; everything that God had created would not be very good if it did not include something congenitally bent on mischief. The serpent begins its seduction by suggesting that God might have forbidden man and woman to eat of any tree in the garden, i.e., that God’s prohibition might be malicious or impossible to comply with. The woman corrects the serpent and in so doing makes the prohibition more stringent than it was: “We may eat of the fruit of the other trees of the garden; it is only about the tree in the middle of the garden that God said: you shall not eat of it or touch it, lest you die.”
Now, God did not forbid the man to touch the fruit of the tree of knowledge of good and evil. Besides, the woman does not explicitly speak of the tree of knowledge; she may have had in mind the tree of life. Moreover, God had issued the prohibition only to the man, whereas the woman claims that God had spoken to her as well; she surely knew the divine prohibition only through human tradition. The serpent assures her that they will not die, “for God knows that when you eat of it, your eyes will be opened and you will be like God, knowing good and evil.” The serpent tacitly questions God’s veracity. At the same time it glosses over the fact that eating of the tree involves disobedience to God. In this it is followed by the woman. According to the serpent’s assertion, knowledge of good and evil makes man immune to death (although we cannot know whether the serpent believes this). But the woman, having forgotten the divine prohibition, having therefore in a manner tasted of the tree of knowledge, is no longer wholly unaware of good and evil: she “saw that the tree was good for eating and a delight to the eyes and that the tree was to be desired to make one wise”; therefore she took of its fruit and ate. She thus made the fall of the man almost inevitable, for he was cleaving to her: she gave some of the fruit of the tree to the man, and he ate. The man drifts into disobedience by following the woman. After they had eaten of the tree, their eyes were opened and they knew that they were naked, and they sewed fig leaves together and made themselves aprons: through the fall they became ashamed of their nakedness; eating of the tree of knowledge of good and evil made them realize that nakedness is evil.
The Bible says nothing to the effect that our first parents fell because they were prompted by the desire to be like God; they did not rebel highhandedly against God; rather, they forgot to obey God; they drifted into disobedience. Nevertheless, God punished them severely. But the punishment did not do away with the fact that, as God Himself said, as a consequence of his disobedience “man has become like one of us, knowing good and evil.” There was now the danger that man might eat of the tree of life and live forever. Therefore God expelled him from the garden and made it impossible for him to return to it. One may wonder why man, while he was still in the garden of Eden, had not eaten of the tree of life of which he had not been forbidden to eat. Perhaps he did not think of it because, lacking knowledge of good and evil, he did not fear to die and, besides, the divine prohibition drew his attention away from the tree of life to the tree of knowledge.
The Bible intends to teach that man was meant to live in simplicity, without knowledge of good and evil. But the narrator seems to be aware of the fact that a being which can be forbidden to strive for knowledge of good and evil, i.e., that can understand to some degree that knowledge of good and evil is evil for it, necessarily possesses such knowledge. Human suffering from evil presupposes human knowledge of good and evil and vice versa. Man wishes to live without evil. The Bible tells us that he was given the opportunity to live without evil and that he cannot blame God for the evils from which he suffers. By giving man that opportunity, God convinces him that his deepest wish cannot be fulfilled. The story of the fall is the first part of the story of God’s education of man.
Man has to live with knowledge of good and evil and with the sufferings inflicted on him because of that knowledge or its acquisition. Human goodness or badness presupposes that knowledge and its concomitants. The Bible gives us the first inkling of human goodness and badness in the story of the first brothers. The older brother, Cain, was a tiller of the soil; the younger brother, Abel, a keeper of sheep. God preferred the offering of the keeper of sheep, who brought the choicest of the firstlings of his flock, to that of the tiller of the soil. There were many reasons for this preference but one of them seems to be that the pastoral life is closer to original simplicity than the life of the tillers of the soil. Cain, however, was vexed, and despite his having been warned by God against sinning in general, killed his brother. After a futile attempt to deny his guilt—an attempt that increased that guilt (“Am I my brother’s keeper?”)—he was cursed by God as the serpent and the soil had been after the Fall, in contradistinction to Adam and Eve who were not cursed. He was punished by God, but not with death: anyone slaying Cain would be punished much more severely than Cain himself. The relatively mild punishment of Cain cannot be explained by the fact that murder had not been expressly forbidden: Cain possessed some knowledge of good and evil, and he knew that Abel was his brother, even assuming that he did not know that man was created in the image of God. It is better to explain Cain’s punishment by assuming that punishments were milder in the beginning than later on. Cain—like his fellow fratricide, Romulus—founded a city, and some of his descendants were the ancestors of men practicing various arts: the city and the arts, so alien to man’s original simplicity, owe their origin to Cain and his race rather than to Seth, the substitute for Abel, and his race. It goes without saying that this is not the last word of the Bible on the city and the arts but it is its first word, just as the prohibition against eating of the tree of knowledge is, one may say, its first word simply, and the revelation of the Torah—i.e., the highest kind of knowledge of good and evil that is vouchsafed to men—is its last word. The account of the race of Cain culminates in the song of Lamech who boasted to his wives of his slaying of men, of his being superior to God as an avenger. The (antediluvian) race of Seth cannot boast of a single inventor; its only distinguished members were Enoch, who walked with God, and Noah, who was a righteous man and walked with God: civilization and piety are two very different things.
By the time of Noah the wickedness of man had become so great that God repented of His creation of man and all other earthly creatures, Noah alone excepted; so He brought on the flood. Generally speaking, prior to the flood, man’s lifespan was much longer than after it. Man’s antediluvian longevity was a relic of his original condition. Man originally lived in the garden of Eden where he could have eaten of the tree of life and thus become immortal. The longevity of antediluvian man reflects this lost chance. To this extent the transition from antediluvian to postdiluvian man is a decline. This impression is confirmed by the fact that before the flood rather than after it the sons of God consorted with the daughters of man and thus generated the mighty men of old, the men of renown. On the other hand, the fall of our first parents made possible or necessary in due time God’s revelation of His Torah, and this was decisively prepared, as we shall see, by the flood. In this respect, the transition from antediluvian to postdiluvian mankind is a progress. The ambiguity regarding the Fall—the fact that it was a sin and hence avoidable and that it was at the same time inevitable—is reflected in the ambiguity regarding the status of antediluvian mankind.
The link between antediluvian mankind and the revelation of the Torah is supplied by the first covenant between God and men, the covenant following the flood. The flood was the proper punishment for the extreme and well-nigh universal wickedness of antediluvian men. Prior to the flood, mankind lived, so to speak, without restraint, without law. While our first parents were still in the garden of Eden, they were not forbidden anything except to eat of the tree of knowledge. The vegetarianism of antediluvian men was not due to an explicit prohibition (Gen. 1:29); rather, their abstention from meat belongs together with their abstention from wine (cf. 9:20); both were relics of man’s original simplicity. After the expulsion from the garden of Eden, God did not punish men, apart from the relatively mild punishment which He inflicted on Cain. Nor did He establish human judges. God experimented, as it were, for the instruction of mankind, with the possibility of mankind’s living free of the law. The experiment, just like the experiment of having men remain like innocent children, ended in failure. Fallen or awake man needs restraint, must live under law. But this law must not be simply imposed. It must form part of a covenant in which God and man are equally, though not equal, partners. Such a partnership was established only after the flood; it did not exist in antediluvian times either before or after the fall.
The inequality regarding the covenant is shown especially by the fact that when God undertook never again to destroy almost all life on earth as long as the earth lasts, He did not do so on the condition that all or almost all men obey the laws promulgated by God after the flood: God makes His promise despite, or because of, His knowing that the devisings of man’s heart are evil from his youth. Noah is the ancestor of all later men just as Adam was; the purgation of the earth through the flood is to some extent a restoration of mankind to its original state; it is a kind of second creation. Within the limits indicated, the condition of postdiluvian men is superior to that of antediluvian men. One point requires special emphasis: in the legislation following the flood, murder is expressly forbidden and made punishable by death on the ground that man was created in the image of God (9:6). The first covenant brought an increase in hope and at the same time an increase in punishment. Not until after the flood was man’s rule over the beasts, ordained or established from the beginning, to be accompanied by the beasts’ fear and dread of man (cf. 9:2 with 1:26-30 and 2:15).
The covenant following the flood prepares the covenant with Abraham. The Bible singles out three events that took place between the covenant after the flood and God’s calling of Abraham: Noah’s curse of Canaan, a son of Ham; the achievement of excellence by Nimrod, a grandson of Ham; and men’s attempt to prevent their dispersal over the earth by building a city which had a tower that reached to the heavens. Canaan, whose land came to be the promised land, was cursed because Ham saw the nakedness of his father, Noah—because Ham transgressed a most sacred, if unpromulgated, law; the curse of Canaan was accompanied by the blessing of Shem and Japheth who turned their eyes away from the nakedness of their father. Here we have the first and the most fundamental division of mankind, at any rate of postdiluvian mankind, the division into “cursed” and “blessed.” Nimrod was the first to be a mighty man on earth—a mighty hunter before the Lord; his kingdom included Babel (big kingdoms are attempts to overcome by force the division of mankind, conquest and hunting being akin to each other). The city that men built in order to remain together and thus to make a name for themselves was Babel; God scattered them by confounding their speech, by bringing about the division of mankind into groups that could not understand one another: into nations, i.e., groups united not only by descent but also by language. The division of mankind into nations may be described as a milder alternative to the flood.
The three events that took place between God’s covenant with mankind after the flood and His calling of Abraham point to God’s way of dealing with men who know good and evil and devise evil from their youth. Well-nigh universal wickedness will no longer be punished with well-nigh universal destruction, but will be prevented through the division of mankind into nations. Mankind will be divided, not into the cursed and the blessed (the curses and blessings were Noah’s, not God’s), but into a chosen nation and into nations that are not chosen. The emergence of nations made it possible to replace Noah’s Ark—which floated alone on the waters covering the entire earth—by a whole, numerous nation living in the midst of the nations covering the earth. The election of the holy nation begins with the election of Abraham. Noah was distinguished from his contemporaries by his righteousness; Abraham separates himself from his contemporaries and in particular from his country and kindred at God’s command—a command accompanied by God’s promise to make of him a great nation. The Bible does not say that this primary election of Abraham was preceded by the fact of Abraham’s righteousness. However this may be, Abraham shows his righteousness by obeying God’s command at once, by trusting in God’s promise whose fulfillment he could not possibly live to see, given the short lifespan of postdiluvian man: only after Abraham’s offspring would have become a great nation would the land of Canaan be given to them forever.
The fulfillment of the promise required that Abraham not remain childless, and he was already quite old. Accordingly, God promised him that he would have issue. It was Abraham’s trust in God’s promise that, above everything else, made him righteous in the eyes of the Lord. It was God’s intention that His promise be fulfilled through the offspring of Abraham and his wife Sarah. But this promise seemed laughable to Abraham, to say nothing of Sarah: Abraham was one hundred years old and Sarah, ninety. Yet nothing is too wondrous for the Lord. The laughable announcement became a joyous one. It was followed immediately by God’s announcement to Abraham of His concern with the wickedness of the people of Sodom and Gomorrah. God did not yet know whether those people were as wicked as they were said to be. But they might be; they might deserve total destruction as much as did the generation of the flood. Noah had accepted the destruction of his generation without any questioning. Abraham, however, who had a deeper trust in God, in God’s righteousness, and a deeper awareness of his being only dust and ashes, presumed in fear and trembling to appeal to God’s righteousness lest He, the judge of the whole earth, destroy the righteous along with the wicked. In response to Abraham’s insistent pleading, God as it were promised to Abraham that He would not destroy Sodom if ten righteous men could be found in the city: He would save the city for the sake of the ten righteous men within it. Abraham acted as the mortal partner in God’s righteousness; he acted as if he had some share in the responsibility for God’s acting righteously. No wonder God’s covenant with Abraham was incomparably more incisive than His covenant immediately following the flood.
Abraham’s trust in God thus appears to be the trust that God in His righteousness will not do anything incompatible with His righteousness and that while, or because, nothing is too wondrous for the Lord, there are firm boundaries set to Him by His own righteousness, by Himself. This awareness is deepened and therewith modified by the last and severest test of Abraham’s trust: God’s command to him to sacrifice Isaac, his only son by Sarah. Abraham’s supreme test presupposes the wondrous character of Isaac’s birth: the very son who was to be the sole link between Abraham and the chosen people and who was born against all reasonable expectations, was to be sacrificed by his father. This command contradicted not only the divine promise, but also the divine prohibition against the shedding of innocent blood. Yet Abraham did not argue with God as he had done in the case of Sodom’s destruction. In the case of Sodom, Abraham was not confronted with a divine command to do a certain thing and more particularly he was not confronted with a command to surrender to God what was dearest to him: Abraham did not argue with God for the preservation of Isaac because he loved God—not himself or his most cherished hope—with all his heart, with all his soul, and with all his might. The same concern with God’s righteousness that had induced him to plead with God for the preservation of Sodom if ten just men could be found in that city, induced him not to plead for the preservation of Isaac, for God rightfully demands that He alone be loved unqualifiedly. The fact that the command to sacrifice Isaac contradicted the prohibition against the shedding of innocent blood must be understood in the light of the difference between human justice and divine justice: God alone is unqualifiedly, if unfathomably, just. God promised Abraham that He would spare Sodom if ten righteous men could be found in it, and Abraham was satisfied with this promise; He did not promise that He would spare the city if nine righteous men were found in it; would those nine be destroyed together with the wicked? And even if all Sodomites were wicked and hence justly destroyed, did their infants who were destroyed with them deserve their destruction? The apparent contradiction between the command to sacrifice Isaac and the divine promise to the descendants of Isaac is disposed of by the consideration that nothing is too wondrous for the Lord. Abraham’s supreme trust in God, his simple, single-minded, childlike faith was rewarded although, or because, it presupposed his entire unconcern with any reward, for Abraham was willing to forgo, to destroy, to kill the only reward with which he was concerned: God prevented the sacrifice of Isaac. Abraham’s intended action needed a reward although he was not concerned with a reward because his intended action cannot be said to have been intrinsically rewarding. The preservation of Isaac is as wondrous as his birth. These two wonders illustrate more clearly than anything else the origin of the holy nation.
The God Who created heaven and earth, Who is the only God, Whose only image is man, Who forbade man to eat of the tree of knowledge of good and evil, Who made a covenant with mankind after the flood and thereafter a covenant with Abraham which became His covenant with Abraham, Isaac, and Jacob—what kind of God is He? Or, to speak more reverently and more adequately, what is His name? This question was addressed to God Himself by Moses when he was sent by Him to the sons of Israel. God replied: “Ehyeh-Asher-Ehyeh,” which is most often translated: “I am That (Who) I am.” I believe, however, that we ought to render this statement, “I shall be What I shall be,” thus preserving the connection between God’s name and the fact that He makes covenants with men, i.e., that He reveals Himself to men above all by His commandments and by His promises and His fulfillment of those promises. “I shall be What I shall be” is, as it were, explained in the verse (Ex. 33:19), “I shall be gracious to whom I shall be gracious and I shall show mercy to whom I shall show mercy.” God’s actions cannot be predicted, unless He Himself has predicted them, i.e., promised them. But as is shown precisely by the account of Abraham’s binding of Isaac, the way in which He fulfills His promises cannot be known in advance. The biblical God is a mysterious God: He comes in a thick cloud (Ex. 19:9); He cannot be seen; His presence can be sensed but not always and everywhere; what is known of Him is only what He chose to communicate by His word through His chosen servants. The rest of the chosen people knows His word—apart from the Ten Commandments (Deut. 4:12 and 5:4-5)—only mediately and does not wish to know it immediately (Ex. 20:19 and 21; 24:1-2; Deut. 10:15-18; Amos 3:7). For almost all purposes the word of God as revealed to His prophets and especially to Moses became the source of knowledge of good and evil, the true tree of knowledge which is at the same time the tree of life.
Having said this much about the beginning of the Bible and what it entails, let us now cast a glance at some Greek counterparts to the beginning of the Bible—to begin with, at Hesiod’s Theogony as well as the remains of Parmenides’s and Empedocles’s works. They are all the works of known authors. This does not mean that they are, or present themselves as being, merely human. Hesiod sings what the Muses, the daughters of Zeus who is the father of gods and men, taught him or commanded him to sing. One could say that the Muses vouch for the truth of Hesiod’s song, were it not for the fact that they sometimes speak lies which resemble what is true. Parmenides transmits the teaching of a goddess, and so does Empedocles. Yet these men composed their books; their songs or speeches are books. The Bible, on the other hand, is not a book. The most one could say is that it is a collection of books. The author of a book, in the strict sense of the term, excludes everything that is not necessary, that does not fulfill a function necessary to the purpose his book is meant to fulfill. The compilers of the Bible as a whole and of the Torah in particular seem to have followed an entirely different rule. Confronted with a variety of preexisting holy speeches, which as such had to be treated with the utmost respect, they excluded only what could not by any stretch of the imagination be rendered compatible with the fundamental and authoritative teaching; their very piety, aroused and fostered by the pre-existing holy speeches, led them to make such changes in those holy speeches as they did make. Their work may then abound in contradictions and repetitions that no one ever intended as such, whereas in a book in the strict sense there is nothing that is not intended by the author.
Hesiod’s Theogony sings of the generation or begetting of the gods; the gods were not “made” by anybody. Far from having been created by a god, earth and heaven are the ancestors of the immortal gods. More precisely, according to Hesiod everything that is has come to be. First there arose Chaos, Gaia (Earth), and Eros. Gaia gave birth first to Ouranos (Heaven) and then, mating with Ouranos, she brought forth Kronos and his brothers and sisters. Ouranos hated his children and did not wish them to come to life. At the wish and advice of Gaia, Kronos deprived his father of his generative power and thus unintentionally brought about the emergence of Aphrodite; Kronos became the king of the gods. Kronos’s evil deed was avenged by his son Zeus whom he had generated by mating with Rheia and whom he had planned to destroy; Zeus dethroned his father and thus became the king of the gods, the father of gods and men, the mightiest of all gods. Given his ancestors it is not surprising that while he is the father of men and belongs to the gods who are the givers of good things, he is far from being kind to men. Mating with Mnemosyne, the daughter of Gaia and Ouranos, Zeus generated the nine Muses. The Muses give sweet and gentle eloquence and understanding to the kings whom they wish to honor. Through the Muses there are singers on earth, just as through Zeus there are kings. While kingship and song may go together, there is a profound difference between the two—a difference that, guided by Hesiod, one may compare to that between the hawk and the nightingale. Surely Metis (Wisdom), while being Zeus’s first spouse and having become inseparable from him, is not identical with him; the relation of Zeus and Metis may remind one of the relation of God and wisdom in the Bible.
Hesiod speaks of the creation or making of men not in the Theogony but in his Works and Days, i.e., in the context of his speeches regarding how man should live, regarding man’s right life, which includes the teaching regarding the right seasons (the “days”); the question of the right life does not arise regarding the gods. The right life for man is the just life, the life devoted to working, especially to tilling the soil. Work thus understood is a blessing ordained by Zeus who blesses the just and crushes the proud: often even a whole city is destroyed for the deeds of a single bad man. Yet Zeus takes cognizance of men’s justice and injustice only if he so wills. Accordingly, work appears to be not a blessing but a curse: men must work because the gods keep hidden from them the means of life and they do this in order to punish them for Prometheus’s theft of fire—a theft inspired by philanthropy. But was not Prometheus’s action itself prompted by the fact that men were not properly provided for by the gods and in particular by Zeus? Be this as it may, Zeus did not deprive men of the fire that Prometheus had stolen for them; he punished them by sending them Pandora and her box, which was filled with countless evils like hard labor. The evils with which human life is beset cannot be traced to human sin. Hesiod conveys the same message by his story of the five races of men which came into being successively. The first of these, the golden race, was made by the gods while Kronos was still ruling in heaven. These men lived without toil or grief; they had all good things in abundance because the earth by itself gave them abundant fruit. Yet the men made by father Zeus lack this bliss. Hesiod does not make clear whether this is due to Zeus’s ill-will or to his lack of power; he gives us no reason to think that it is due to man’s sin. He creates the impression that human life becomes ever more miserable as one race of men succeeds another: there is no divine promise, supported by the fulfillment of earlier divine promises, that permits one to trust and to hope.
The most striking difference between the poet Hesiod and the philosophers Parmenides and Empedocles is that according to the philosophers, not everything has come into being: that which truly is, has not come into being and does not perish. This does not necessarily mean that what exists always is a god or gods. For if Empedocles calls one of the four eternal elements Zeus, this Zeus has hardly anything in common with what Hesiod, or the people generally, understood by Zeus. At any rate, according to both philosophers, the gods as ordinarily understood have come into being, just like heaven and earth, and will therefore perish again.
At the time when the opposition between Jerusalem and Athens reached the level of what one may call its classical struggle, in the 12th and 13th centuries, philosophy was represented by Aristotle. The Aristotelian god, like the biblical God, is a thinking being, but in opposition to the biblical God he is only a thinking being, pure thought: pure thought that thinks itself and only itself. Only by thinking himself and nothing but himself does he rule the world. He surely does not rule by giving orders and laws. Hence he is not a creator-god: the world is as eternal as god. Man is not his image: man is much lower in rank than other parts of the world. For Aristotle it is almost a blasphemy to ascribe justice to his god; he is above justice as well as injustice.
It has often been said that the philosopher who comes closest to the Bible is Plato. This was said not least during the classical struggle between Jerusalem and Athens in the Middle Ages. Both Platonic philosophy and biblical piety are animated by the concern with purity and purification: “pure reason” in Plato’s sense is closer to the Bible than “pure reason” in Kant’s sense or for that matter in Anaxagoras’s and Aristotle’s sense. Plato teaches, just as the Bible does, that heaven and earth were created or made by an invisible God whom he calls the Father, who is eternal, who is good, and hence whose creation is good. The coming-into-being and the preservation of the world that he has created depend on the will of its maker. What Plato himself calls theology consists of two teachings: (1) God is good and hence in no way the cause of evil; (2) God is simple and hence unchangeable. On the question of divine concern with men’s justice and injustice, Platonic teaching is in fundamental agreement with biblical teaching; it even culminates in a statement that agrees almost literally with biblical statements.4 Yet the differences between the Platonic and biblical teachings are no less striking than the similarities. The Platonic teaching on creation does not claim to be more than a likely tale. The Platonic God is a creator also of gods, of visible living beings, i.e., of the stars; the created gods rather than the creator God create the mortal living beings and in particular man; heaven is a blessed god. The Platonic God does not create the world by his word; he creates it after having looked to the eternal ideas which therefore are higher than he. In accordance with this, Plato’s explicit theology is presented within the context of the first discussion of education in the Republic, within the context of what one may call the discussion of elementary education; in the second and final discussion of education—the education of philosophers—theology is replaced by the doctrine of ideas. As for the thematic discussion of providence in the Laws, it may suffice here to say that it occurs within the context of the discussion of penal law.
In his likely tale of how God created the visible whole, Plato makes a distinction between two kinds of gods, the visible cosmic gods and the traditional gods—between the gods who revolve manifestly, i.e., who manifest themselves regularly, and the gods who manifest themselves so far as they will. The least one would have to say is that according to Plato the cosmic gods are of much higher rank than the traditional gods, the Greek gods. Inasmuch as the cosmic gods are accessible to man as man—to his observations and calculations—whereas the Greek gods are accessible only to the Greeks through Greek tradition, one may, in comic exaggeration, ascribe the worship of the cosmic gods to barbarians. This ascription is made in a manner and with an intention altogether non-comic in the Bible: Israel is forbidden to worship the sun and the moon and the stars which the Lord has allotted to the other peoples everywhere under heaven. This implies that the worship of the cosmic gods by other peoples, the barbarians, is not due to a natural or rational cause, to the fact that those gods are accessible to man as man, but to an act of God’s will. It goes without saying that according to the Bible the God Who manifests Himself as far as He wills, Who is not universally worshipped as such, is the only true God. The Platonic statement taken in conjunction with the biblical statement brings out the fundamental opposition of Athens at its peak to Jerusalem: the opposition of the God or gods of the philosophers to the God of Abraham, Isaac, and Jacob, the opposition of reason and revelation.
II. On Socrates and the Prophets
Fifty years ago, in the middle of World War I, Hermann Cohen, the greatest representative of, and spokesman for, German Jewry, the most powerful figure among the German professors of philosophy of his time, stated his view on Jerusalem and Athens in a lecture entitled “The Social Ideal in Plato and the Prophets.” He repeated that lecture shortly before his death, and we may regard it as stating his final view on Jerusalem and Athens and therewith on the truth. For, as Cohen says right at the beginning, “Plato and the prophets are the two most important sources of modern culture.” Being concerned with “the social ideal,” he does not say a single word about Christianity in the whole lecture.
Cohen’s view may be restated as follows. The truth is the synthesis of the teachings of Plato and the prophets. What we owe to Plato is the insight that the truth is in the first place the truth of science but that science must be supplemented, overarched, by the idea of the good which to Cohen means, not God, but rational, scientific ethics. The ethical truth must not only be compatible with the scientific truth; the ethical truth needs the scientific truth. The prophets are very much concerned with knowledge: with the knowledge of God. But this knowledge, as the prophets understood it, has no connection whatever with scientific knowledge; it is knowledge only in a metaphorical sense. It is perhaps with a view to this fact that Cohen speaks once of the divine Plato but never of the divine prophets. Why then can he not leave matters at Platonic philosophy? What is the fundamental defect of Platonic philosophy that is remedied by the prophets and only by the prophets? According to Plato, the cessation of evil requires the rule of the philosophers, of the men who possess the highest kind of human knowledge, i.e., of science in the broadest sense of the term. But this kind of knowledge, like, to some extent, all scientific knowledge, is, according to Plato, the preserve of a small minority: of the men who possess a certain nature and certain gifts that most men lack. Plato presupposes that there is an unchangeable human nature and, as a consequence, a fundamental structure of the good human society which is unchangeable. This leads him to assert or to assume that there will be wars as long as there will be human beings, that there ought to be a class of warriors and that the class ought to be higher in rank and honor than the class of producers and exchangers. These defects in Plato’s system are remedied by the prophets precisely because they lack the idea of science and hence the idea of nature, and therefore they can believe that men’s conduct toward one another can undergo a change much more radical than any change ever dreamed of by Plato.
Cohen brought out very well the antagonism between Plato and the prophets. Nevertheless we cannot leave matters at his view of that antagonism. Cohen’s thought belongs to the world preceding World War I, and accordingly reflects a greater faith in the power of modern Western culture to mold the fate of mankind than seems to be warranted now. The worst things experienced by Cohen were the Dreyfus scandal and the pogroms instigated by Tsarist Russia: he did not experience Communist Russia and Hitler Germany. More disillusioned than he regarding modern culture, we wonder whether the two separate ingredients of modern culture, of the modern synthesis, are not more solid than the synthesis itself. Catastrophes and horrors of a magnitude hitherto unknown, which we have seen and through which we have lived, were better provided for, or made intelligible, by both Plato and the prophets than by the modern belief in progress. Since we are less certain than Cohen was that the modern synthesis is superior to its pre-modern ingredients, and since the two ingredients are in fundamental opposition to each other, we are ultimately confronted by a problem rather than by a solution.
More particularly, Cohen understood Plato in the light of the opposition between Plato and Aristotle—an opposition that he understood in turn in the light of the opposition between Kant and Hegel. We, however, are more impressed than Cohen was by the kinship between Plato and Aristotle on the one hand and the kinship between Kant and Hegel on the other. In other words, the quarrel between the ancients and the moderns seems to us to be more fundamental than either the quarrel between Plato and Aristotle or that between Kant and Hegel.
We, moreover, prefer to speak of Socrates and the prophets rather than of Plato and the prophets, and for the following reasons. We are no longer as sure as Cohen was that we can draw a clear line between Socrates and Plato. There is traditional support for drawing such a clear line, above all in Aristotle; but Aristotle’s statements on this kind of subject no longer possess for us the authority that they formerly possessed, and this is due partly to Cohen himself. The clear distinction between Socrates and Plato is based not only on tradition, but on the results of modern historical criticism; yet these results are in the decisive respect hypothetical. The decisive fact for us is that Plato points, as it were, away from himself to Socrates. If we wish to understand Plato, we must take him seriously; we must take seriously in particular his deference to Socrates. Plato points not only to Socrates’s speeches but to his whole life, and to his fate as well. Hence Plato’s life and fate do not have the symbolic character of Socrates’s life and fate. Socrates, as presented by Plato, had a mission; Plato did not claim to have a mission. It is in the first place this fact—the fact that Socrates had a mission—that induces us to consider, not Plato and the prophets, but Socrates and the prophets.
I cannot speak in my own words of the mission of the prophets. Let me, however, remind the reader of some prophetic utterances of singular force and grandeur. Isaiah 6:
In the year that King Uzziah died I saw also the Lord sitting upon a throne, high and lifted up, and his train filled the temple. Above it stood the seraphim: each one had six wings; with twain he covered his face, and with twain he covered his feet, and with twain he did fly. And one cried unto another, and said, Holy, holy, holy is the Lord of hosts: the whole earth is full of his glory. . . . Then I said, Woe is me! for I am undone; because I am a man of unclean lips, and I dwell in the midst of a people of unclean lips. . . . Then flew one of the seraphim unto me, having a live coal in his hand, which he had taken with the tongs from off the altar: And he laid it upon my mouth, and said, Lo, this hath touched thy lips; and thine iniquity is taken away, and thy sin purged. Also I heard the voice of the Lord, saying, Whom shall I send, and who will go for us? Then said I, Here am I; send me.
Isaiah, it seems, volunteered for his mission. Could he not have remained silent? Could he have refused to volunteer? When the word of the Lord came unto Jonah, “Arise, go to Nineveh, that great city, and cry against it; for their wickedness is come up before me,” “Jonah rose up to flee unto Tarshish from the presence of the Lord”; Jonah ran away from his mission; but God did not allow him to run away; He compelled him to fulfill it. Of this compulsion we hear in different ways from Amos and Jeremiah. Amos 3:7-8: “Surely the Lord God will do nothing, but he revealeth his secret unto his servants the prophets. The lion hath roared, who will not fear? The Lord God hath spoken; who will not prophesy?” The prophets, overpowered by the majesty of the Lord, bring the message of His wrath and His mercy. Jeremiah 1:4-10:
Then the word of the Lord came unto me, saying, Before I formed thee in the belly I knew thee and before thou camest out of the womb I sanctified thee, and I ordained thee a prophet unto the nations. Then said I, Ah, Lord God! behold, I cannot speak; for I am a child. But the Lord said unto me, Say not, I am a child; for thou shalt go to all that I shall send thee, and whatsoever I command thee thou shalt speak. . . . Then the Lord put forth his hand, and touched my mouth. And the Lord said unto me, Behold I have put my words in thy mouth. See, I have this day set thee over the nations and over the kingdoms, to root out, and to pull down, and to destroy, and to throw down, to build, and to plant.
To be sure, the claim to have been sent by God was raised also by men who were not truly prophets but prophets of falsehood, false prophets. Many or most hearers were therefore uncertain as to which kinds of claimants to prophecy were to be trusted or believed. According to the Bible, the false prophets simply lied in saying that they were sent by God. The false prophets tell the people what the people like to hear; hence they are much more popular than the true prophets. The false prophets are “prophets of the deceit of their own heart” (ibid. 26); they tell the people what they themselves imagined (consciously or unconsciously) because they wished it or their hearers wished it. But: “Is not my word like as a fire? saith the Lord; and like a hammer that breaketh the rock in pieces?” (ibid. 29). Or, as Jeremiah put it when opposing the false prophet Hananiah: “The prophets that have been before me and before thee of old prophesied both against many countries, and against great kingdoms, of war, and of evil, and of pestilence” (28:8). This does not mean that a prophet is true only if he is a prophet of doom; the true prophets are also prophets of ultimate salvation. We understand the difference between the true and the false prophets if we listen to and meditate on these words of Jeremiah: “Thus saith the Lord; Cursed be the man that trusteth in man, and maketh flesh his arm, and whose heart departeth from the Lord. . . . Blessed is the man that trusteth in the Lord, and whose hope the Lord is.” The false prophets trust in flesh, even if that flesh is the temple in Jerusalem, the promised land, the chosen people itself, or even God’s promise to the chosen people (if that promise is taken to be an unconditional promise and not as part of a covenant). The true prophets, regardless of whether they predict doom or salvation, predict the unexpected, the humanly unforeseeable—what would not occur to men, left to themselves, to fear or to hope. The true prophets speak and act by the spirit and in the spirit of Ehyeh-asher-ehyeh. For the false prophets, on the other hand, there cannot be the wholly unexpected, whether bad or good.
Of Socrates’s mission we know only through Plato’s Apology of Socrates, which presents itself as the speech delivered by Socrates when he defended himself against the charge that he did not believe in the existence of the gods worshipped by the city of Athens and that he corrupted the young. In that speech he denies possessing any more than human wisdom. This denial was understood by Judah Halevi among others as follows: “Socrates said to the people: ‘I do not deny your divine wisdom, but I say that I do not understand it; I am wise only in human wisdom.’”5 While this interpretation points in the right direction, it goes somewhat too far. Socrates, at least, immediately after having denied possessing anything more than human wisdom, refers to the speech that originated his mission; of this speech he says that it is not his, yet he seems to ascribe to it a divine origin. He does trace what he says to a speaker who is worthy of the Athenians’ credence. But it is probable that he means by that speaker his companion, Chairephon, who is more worthy of credence than Socrates because he was attached to the democratic regime. This Chairephon, having once come to Delphi, asked Apollo’s oracle whether there was anyone wiser than Socrates. The Pythia replied that no one was wiser. This reply originated Socrates’s mission. We see at once that Socrates’s mission originated in human initiative, in the initiative of one of Socrates’s companions. Socrates, on the other hand, takes it for granted that the reply given by the Pythia was given by the god Apollo himself. Yet this does not induce him to take it for granted that the god’s reply is true. He does take it for granted that it is not meet for the god to lie. Yet this does not make the god’s reply convincing to him. In fact he tries to refute that reply by discovering men who are wiser than he. Engaging in this quest, he finds out that the god spoke the truth: Socrates is wiser than other men because he knows nothing, i.e., nothing about the most important things, whereas the others believe that they know the truth about the most important things. Thus his attempt to refute the oracle turns into a vindication of the oracle. Without intending it, he comes to the assistance of the god; he serves the god; he obeys the god’s command. Although no god had ever spoken to him, he is satisfied that the god had commanded him to examine himself and the others, i.e., to philosophize, or to exhort everyone he meets to the practice of virtue: he has been given by the god to the city of Athens as a gadfly.
While Socrates does not claim to have heard the speech of a god, he claims that a voice—something divine and demonic—speaks to him from time to time, his daimonion. This daimonion, however, has no connection with Socrates’s mission, for it never urges him forward but only keeps him back. While the Delphic oracle urged him forward toward philosophizing, toward examining his fellow men, and thus made him generally hated and thus brought him into mortal danger, his daimonion kept him back from political activity and thus saved him from mortal danger.
The fact that both Socrates and the prophets have a divine mission means, or at any rate implies, that both Socrates and the prophets are concerned with justice or righteousness, with the perfectly just society which, as such, would be free of all evils. To this extent Socrates’s figuring out of the best social order and the prophets’ vision of the messianic age are in agreement. Yet whereas the prophets predict the coming of the messianic age, Socrates merely holds that the perfect society is possible: whether it will ever be actual depends on an unlikely, although not impossible, coincidence, the coincidence of philosophy and political power. For, according to Socrates, the coming-into-being of the best political order is not due to divine intervention; human nature will remain as it always has been; the decisive difference between the best political order and all other societies is that in the former the philosophers will be kings or the natural potentiality of the philosophers will reach its utmost perfection. In the most perfect social order, as Socrates sees it, knowledge of the most important things will remain, as it always was, the preserve of the philosophers, i.e., of a very small part of the population. According to the prophets, however, in the messianic age “the earth shall be full of the knowledge of the Lord, as the waters cover the sea” (Isaiah 11:9), and this will be brought about by God Himself. As a consequence, the messianic age will be the age of universal peace: all nations shall come to the mountain of the Lord, to the house of the God of Jacob, “and they shall beat their swords into plowshares, and their spears into pruning hooks: nation shall not lift up sword against nation, neither shall they learn war any more” (Isaiah 2:2-4). The best regime, however, as Socrates envisages it, will animate a single city which, as a matter of course, will become embroiled in wars with other cities. The cessation of evils that Socrates expects from the establishment of the best regime will not include the cessation of war.
Finally, the perfectly just man, the man who is as just as is humanly possible, is, according to Socrates, the philosopher; according to the prophets, he is the faithful servant of the Lord. The philosopher is the man who dedicates his life to the quest for knowledge of the good, of the idea of the good; what we would call moral virtue is only the condition or by-product of that quest. According to the prophets, however, there is no need for the quest for knowledge of the good: God “hath shewed thee, O man, what is good; and what doth the Lord require of thee, but to do justly, and to love mercy, and to walk humbly with thy God” (Micah 6:8).
1 Cf. U. Cassuto, A Commentary on the Book of Genesis, Part I, Jerusalem, 1961, p. 42.
2 Cf. the characterization of the plants as ἔγγεια (“in or of the earth”) in Plato’s Republic, 491 d 1. Cf. Empedocles A 70.
3 Cf. the distinction between the two kinds of “other gods” in Deut. 4:15-19, between the idols on the one hand and sun, moon, and stars on the other.
4 Compare Plato’s Laws 905 a 4-b 2 with Amos 9:1-3 and Psalm 139:7-10.
5 Kuzari IV, 13 and V, 14.
Must-Reads from Magazine
Can it be reversed?
Writing in these pages last year (“Illiberalism: The Worldwide Crisis,” July/August 2016), I described this surge of intemperate politics as a global phenomenon, a crisis of illiberalism stretching from France to the Philippines and from South Africa to Greece. Donald Trump and Bernie Sanders, I argued, were articulating American versions of this growing challenge to liberalism. By “liberalism,” I was referring not to the left or center-left but to the philosophy of individual rights, free enterprise, checks and balances, and cultural pluralism that forms the common ground of politics across the West.
Less a systematic ideology than a posture or sensibility, the new illiberalism nevertheless has certain core planks. Chief among these are a conspiratorial account of world events; hostility to free trade and finance capital; opposition to immigration that goes beyond reasonable restrictions and bleeds into virulent nativism; impatience with norms and procedural niceties; a tendency toward populist leader-worship; and skepticism toward international treaties and institutions, such as NATO, that provide the scaffolding for the U.S.-led postwar order.
The new illiberals, I pointed out, all tend to admire established authoritarians to varying degrees. Trump, along with France’s Marine Le Pen and many others, looks to Vladimir Putin. For Sanders, it was Hugo Chavez’s Venezuela, where, the Vermont socialist said in 2011, “the American dream is more apt to be realized.” Even so, I argued, the crisis of illiberalism traces mainly to discontents internal to liberal democracies.
Trump’s election and his first eight months in office have confirmed the thrust of my predictions, if not all of the policy details. On the policy front, the new president has proved too undisciplined, his efforts too wild and haphazard, to reorient the U.S. government away from postwar liberal order.
The courts blunted the “Muslim ban.” The Trump administration has reaffirmed Washington’s commitment to defend treaty partners in Europe and East Asia. Trumpian grumbling about allies not paying their fair share—a fair point in Europe’s case, by the way—has amounted to just that. The president did pull the U.S. out of the Trans-Pacific Partnership, but even the ultra-establishmentarian Hillary Clinton went from supporting to opposing the pact once she figured out which way the Democratic winds were blowing. The North American Free Trade Agreement, which came into being nearly a quarter-century ago, does look shaky at the moment, but there is no reason to think that it won’t survive in some modified form.
Yet on the cultural front, the crisis of illiberalism continues to rage. If anything, it has intensified, as attested by the events surrounding the protest over a Robert E. Lee statue in Charlottesville, Virginia. The president refused to condemn unequivocally white nationalists who marched with swastikas and chanted “Jews will not replace us.” Trump even suggested there were “very fine people” among them, thus winking at the so-called alt-right as he had during the campaign. In the days that followed, much of the left rallied behind so-called antifa (“anti-fascist”) militants who make no secret of their allegiance to violent totalitarian ideologies at the other end of the political spectrum.
Disorder is the new American normal, then. Questions that appeared to have been settled—about the connection between economic and political liberty, the perils of conspiracism and romantic politics, America’s unique role on the world stage, and so on—are unsettled once more. Serious people wonder out loud whether liberal democracy is worth maintaining at all, with many of them concluding that it is not. The return of ideas that for good reason were buried in the last century threatens the decent political order that has made the U.S. an exceptionally free and prosperous civilization.
For many leftists, America’s commitment to liberty and equality before the law has always masked despotism and exploitation. This view long predated Trump’s rise, and if they didn’t subscribe to it themselves, too often mainstream Democrats and progressives treated its proponents—the likes of Noam Chomsky and Howard Zinn—as beloved and respectable, if slightly eccentric, relatives.
This cynical vision of the free society (as a conspiracy against the dispossessed) was a mainstay of Cold War–era debates about the relative merits of Western democracy and Communism. Soviet apologists insisted that Communist states couldn’t be expected to uphold “merely” formal rights when they had set out to shape a whole new kind of man. That required “breaking a few eggs,” in the words of the Stalinist interrogators in Arthur Koestler’s Darkness at Noon. Anyway, what good were free speech and due process to the coal miner, when under capitalism the whole social structure was rigged against him?
That line worked for a time, until the scale of Soviet tyranny became impossible to justify by anyone but its most abject apologists. It became obvious that “bourgeois justice,” however imperfect, was infinitely preferable to the Marxist alternative. With the Communist experiment discredited, and Western workers uninterested in staging world revolution, the illiberal left began shifting instead to questions of identity. In race-gender-sexuality theory and the identitarian “subaltern,” it found potent substitutes for dialectical materialism and the proletariat. We are still living with the consequences of this shift.
Although there were superficial resemblances, this new politics of identity differed from earlier civil-rights movements. Those earlier movements had sought a place at the American table for hitherto entirely or somewhat excluded groups: blacks, women, gays, the disabled, and so on. In doing so, they didn’t seek to overturn or radically reorganize the table. Instead, they reaffirmed the American Founding (think of Martin Luther King Jr.’s constant references to the Declaration of Independence). And these movements succeeded, owing to America’s tremendous capacity for absorbing social change.
Yet for the new identitarians, as for the Marxists before them, liberal-democratic order was systematically rigged against the downtrodden—now redefined along lines of race, gender, and sexuality, with social class quietly swept under the rug. America’s strides toward racial progress, not least the election and re-election of an African-American president, were dismissed. The U.S. still deserved condemnation because it fell short of perfect inclusion, limitless autonomy, and complete equality—conditions that no free society can achieve given the root fact of human nature. The accidentals had changed from the Marxist days, in other words, but the essentials remained the same.
In one sense, though, the identitarians went further. The old Marxists still claimed to stand on objectively accessible truth. Not so their successors. Following intellectual lodestars such as the gender theorist Judith Butler, the identity left came to reject objective truth—and with it, biological sex differences, aesthetic standards in art, the possibility of universal moral precepts, and much else of the kind. All of these things, the left identitarians said, were products of repressive institutions, hierarchies, and power.
Today’s “social-justice warriors” are heirs to this sordid intellectual legacy. They claim to seek justice. But, unmoored from any moral foundations, SJW justice operates like mob justice and revolutionary terror, usually carried out online. SJWs claim to protect individual autonomy, but the obsession with group identity and power dynamics means that SJW autonomy claims must destroy the autonomy of others. Self-righteousness married to total relativism is a terrifying thing.
It isn’t enough to have legalized same-sex marriage in the U.S. via judicial fiat; the evangelical baker must be forced to bake cakes for gay weddings. It isn’t enough to have won legal protection and social acceptance for the transgendered; the Orthodox rabbi must use preferred trans pronouns on pain of criminal prosecution. Likewise, since there is no objective truth to be gained from the open exchange of ideas, any speech that causes subjective discomfort among members of marginalized groups must be suppressed, if necessary through physical violence. Campus censorship that began with speech codes and mobs that prevented conservative and pro-Israel figures from speaking has now evolved into a general right to beat anyone designated as a “fascist,” on- or off-campus.
For the illiberal left, the election of Donald Trump was indisputable proof that behind America’s liberal pieties lurks, forever, the beast of bigotry. Trump, in this view, wasn’t just an unqualified vulgarian who nevertheless won the decisive backing of voters dissatisfied with the alternative or alienated from mainstream politics. Rather, a vote for Trump constituted a declaration of war against women, immigrants, and other victims of American “structures of oppression.” There would be no attempt to persuade Trump supporters; war would be answered by war.
This isn’t liberalism. Since it can sometimes appear as an extension of traditional civil-rights activism, however, identity leftism has glommed itself onto liberalism. It is frequently impossible to tell where traditional autonomy- and equality-seeking liberalism ends and repressive identity leftism begins. Whether out of faulty thinking or a sense of weakness before an angry and energetic movement, liberals have too often embraced the identity left as their own. They haven’t noticed how the identitarians seek to undermine, not rectify, liberal order.
Some on the left, notably Columbia University’s Mark Lilla, are sounding the alarm and calling on Democrats to stress the common good over tribalism. Yet these are a few voices in the wilderness. Identitarians of various stripes still lord over the broad left, where it is fashionable to believe that the U.S. project is predatory and oppressive by design. If there is a viable left alternative to identity on the horizon, it is the one offered by Sanders and his “Bernie Bros”—which is to say, a reversion to the socialism and class struggle of the previous century.
Americans, it seems, will have to wait a while for reason and responsibility to return to the left.
Then there is the illiberal fever gripping American conservatives. Liberal democracy has always had its critics on the right, particularly in Continental Europe, where statist, authoritarian, and blood-and-soil accounts of conservatism predominate. Mainstream Anglo-American conservatism took a different course. It has championed individual rights, free enterprise, and pluralism while insisting that liberty depends on public virtue and moral order, and that sometimes the claims of liberty and autonomy must give way to those of tradition, state authority, and the common good.
The whole beauty of American order lies in keeping in tension these rival forces that are nevertheless fundamentally at peace. The Founders didn’t adopt wholesale Enlightenment liberalism; rather, they tempered its precepts about universal rights with the teachings of biblical religion as well as Roman political theory. The Constitution drew from all three wellsprings. The product was a whole, and it is a pointless and ahistorical exercise to elevate any one source above the others.
American conservatism and liberalism, then, are in fact branches of each other, the one (conservatism) invoking tradition and virtue to defend and, when necessary, discipline the regime of liberty; the other (liberalism) guaranteeing the open space in which churches, volunteer organizations, philanthropic activity, and other sources of tradition and civic virtue flourish, in freedom, rather than through state establishment or patronage.
One result has been long-term political stability, a blessing that Americans take for granted. Another has been the transformation of liberalism into the lingua franca of all politics, not just at home but across a world that, since 1945, has increasingly reflected U.S. preferences. The great French classical liberal Raymond Aron noted in 1955 that the “essentials of liberalism—the respect for individual liberty and moderate government—are no longer the property of a single party: they have become the property of all.” As Aron archly pointed out, even liberalism’s enemies tend to frame their objections using the rights-based talk associated with liberalism.
Under Trump, however, some in the party of the right have abdicated their responsibility to liberal democracy as a whole. They have reduced themselves to the lowest sophistry in defense of the New Yorker’s inanities and daily assaults on presidential norms. Beginning when Trump clinched the GOP nomination last year, a great deal of conservative “thinking” has amounted to: You did X to us, now enjoy it as we dish it back to you and then some. Entire websites and some of the biggest stars in right-wing punditry are singularly devoted to making this rather base point. If Trump is undermining this or that aspect of liberal order that was once cherished by conservatives, so be it; that 63 million Americans supported him and that the president “drives the left crazy”—these are good enough reasons to go along.
Some of this is partisan jousting that occurs with every administration. But when it comes to Trump’s most egregious statements and conduct—such as his repeated assertions that the U.S. and Putin’s thugocracy are moral equals—the apologetics are positively obscene. Enough pooh-poohing, whataboutery, and misdirection of this kind, and there will be no conservative principle left standing.
More perniciously, as once-defeated illiberal philosophies have returned with a vengeance to the left, so have their reactionary analogues to the right. The two illiberalisms enjoy a remarkable complementarity and even cross-pollinate each other. This has developed to the point where it is sometimes hard to distinguish Tucker Carlson from Chomsky, Laura Ingraham from Julian Assange, the Claremont Review from New Left Review, and so on.
Two slanders against liberalism in particular seem to be gathering strength on the thinking right. The first is the tendency to frame elements of liberal democracy, especially free trade, as a conspiracy hatched by capitalists, the managerial class, and others with soft hands against American workers. One needn’t renounce liberal democracy as a whole to believe this, though believers often go the whole hog. The second idea is that liberalism itself was another form of totalitarianism all along and, therefore, that no amount of conservative course correction can set right what is wrong with the system.
These two theses together represent a dismaying ideological turn on the right. The first—the account of global capitalism as an imposition of power over the powerless—has gained currency in the pages of American Affairs, the new journal of Trumpian thought, where class struggle is a constant theme. Other conservatives, who were always skeptical of free enterprise and U.S.-led world order, such as the Weekly Standard’s Christopher Caldwell, are also publishing similar ideas to a wider reception than perhaps greeted them in the past.
In a March 2017 essay in the Claremont Review of Books, for example, Caldwell flatly described globalization as a “con game.” The perpetrators, he argued, are “unscrupulous actors who have broken promises and seized a good deal of hard-won public property.” These included administrations of both parties that pursued trade liberalization over decades, people who live in cities and therefore benefit from the knowledge-based economy, American firms, and really anyone who has ever thought to capitalize on global supply chains to boost competitiveness—globalists, in a word.
By shipping jobs and manufacturing processes overseas, Caldwell contended, these miscreants had stolen not just material things like taxpayer-funded research but also concepts like “economies of scale” (you didn’t build that!). Thus, globalization in the West differed “in degree but not in kind from the contemporaneous Eastern Bloc looting of state assets.”
That comparison with predatory post-Communist privatization is a sure sign of ideological overheating. It is somewhat like saying that a consumer bank’s lending to home buyers differs in degree but not in kind from a loan shark’s racket in a housing project. Well, yes, in the sense that the underlying activity—moneylending, the purchase of assets—is the same in both cases. But the context makes all the difference: The globalization that began after World War II and accelerated in the ’90s took place within a rules-based system, which duly elected or appointed policymakers in Western democracies designed in good faith and for a whole host of legitimate strategic and economic reasons.
These policymakers knew that globalization was as old as civilization itself. It would take place anyway, and the only question was whether it would be rules-based and efficient or the kind of globalization that would be driven by great-power rivalry and therefore prone to protectionist trade wars. And they were right. What today’s anti-trade types won’t admit is that defeating the Trans-Pacific Partnership and a proposed U.S.-European trade pact known as TTIP won’t end globalization as such; instead, it will cede the game to other powers that are less concerned about rules and fair play.
The postwar globalizers may have gone too far (or not far enough!). They certainly didn’t give sufficient thought to the losers in the system, or how to deal with the de-industrialization that would follow when information became supremely mobile and wages in the West remained too high relative to skills and productivity gains in the developing world. They muddled and compromised their way through these questions, as all policymakers in the real world do.
The point is that these leaders—the likes of FDR, Churchill, JFK, Ronald Reagan, Margaret Thatcher, and, yes, Bill Clinton—acted neither with malice aforethought nor anti-democratically. It isn’t true, contra Caldwell, that free trade necessarily requires “veto-proof and non-consultative” politics. The U.S., Britain, and other members of what used to be called the Free World have respected popular sovereignty (as understood at the time) for as long as they have been trading nations. Put another way, you were far more likely to enjoy political freedom if you were a citizen of one of these states than of countries that opposed economic liberalism in the 20th century. That remains true today. These distinctions matter.
Caldwell and like-minded writers of the right, who tend to dwell on liberal democracies’ crimes, are prepared to tolerate far worse if it is committed in the name of defeating “globalism.” Hence the speech on Putin that Caldwell delivered this spring at a Hillsdale College gathering in Phoenix. Promising not to “talk about what to think about Putin,” he proceeded to praise the Russian strongman as the “preeminent statesman of our time” (alongside Turkish strongman Recep Tayyip Erdogan). Putin, Caldwell said, “has become a symbol of national self-determination.”
Then Caldwell made a remark that illuminates the link between the illiberalisms of yesterday and today. Putin is to “populist conservatives,” he declared, what Castro once was to progressives. “You didn’t have to be a Communist to appreciate the way Castro, whatever his excesses, was carving out a space of autonomy for his country.”
Whatever his excesses, indeed.
The other big idea is that today’s liberal crises aren’t a bug but a core feature of liberalism. This line of thinking is particularly prevalent among some Catholic traditionalists and other orthodox Christians (both small- and capital-“o”). The common denominator, it seems to me, is having grown up as a serious believer at a time when many liberals—to their shame—have declared war on faith generally and social conservatism in particular.
The argument essentially is this:
We (social conservatives, traditionalists) saw the threat from liberalism coming. With its claims about abstract rights and universal reason, classical liberalism had always posed a danger to the Church and to people of God. We remembered what those fired up by the new ideas did to our nuns and altars in France. Still we made peace with American liberal order, because we were told that the Founders had “built on low but solid ground,” to borrow Leo Strauss’s famous formulation, or that they had “built better than they knew,” as American Catholic hierarchs in the 19th century put it.
Maybe these promises held good for a couple of centuries, the argument continues, but they no longer do. Witness the second sexual revolution under way today. The revolutionaries are plainly telling us that we must either conform our beliefs to Herod’s ways or be driven from the democratic public square. Can it still be said that the Founding rested on solid ground? Did the Founders really build better than they knew? Or is what is passing now precisely what they intended, the rotten fruit of the Enlightenment universalism that they planted in the Constitution? We don’t love Trump (or Putin, Hungary’s Viktor Orbán, etc.), but perhaps he can counter the pincer movement of sexual and economic liberalism, and restore a measure of solidarity and commitment to the Western project.
The most pessimistic of these illiberal critics go so far as to argue that liberalism isn’t all that different from Communism, that both are totalitarian children of the Enlightenment. One such critic, Harvard Law School’s Adrian Vermeule, summed up this position in a January essay in First Things magazine:
The stock distinction between the Enlightenment’s twins—communism is violently coercive while liberalism allows freedom of thought—is glib. Illiberal citizens, trapped [under liberalism] without exit papers, suffer a narrowing sphere of permitted action and speech, shrinking prospects, and increasing pressure from regulators, employers, and acquaintances, and even from friends and family. Liberal society celebrates toleration, diversity, and free inquiry, but in practice it features a spreading social, cultural, and ideological conformism.1
I share Vermeule’s despair and that of many other conservative-Christian friends, because there have been genuinely alarming encroachments against conscience, religious freedom, and the dignity of life in Western liberal democracies in recent years. Even so, despair is an unhelpful companion to sober political thought, and the case for plunging into political illiberalism is weak, even on social-conservative grounds.
Here again what commends liberalism is historical experience, not abstract theory. Simply put, in the real-world experience of the 20th century, the Church, tradition, and religious minorities fared far better under liberal-democratic regimes than they did under illiberal alternatives. Are coercion and conformity targeting people of faith under liberalism? To be sure. But these don’t take the form of the gulag or the concentration camp or the soccer stadium–cum-killing field. Catholic political practice knows well how to draw such moral distinctions between regimes: Pope John Paul II befriended Reagan. If liberal democracy and Communism were indeed “twins” whose distinctions are “glib,” why did he do so?
And as Pascal Bruckner wrote in his essay “The Tyranny of Guilt,” if liberal democracy does trap or jail you (politically speaking), it also invariably slips the key under your cell door. The Swedish midwives driven out of the profession over their pro-life views can take their story to the media. The Down syndrome advocacy outfit whose anti-eugenic advertising was censored in France can sue in national and then international courts. The Little Sisters of the Poor can appeal to the Supreme Court for a conscience exemption to Obamacare’s contraceptives mandate. And so on.
Conversely, once you go illiberal, you don’t just rid yourself of the NGOs and doctrinaire bureaucrats bent on forcing priests to perform gay marriages; you also lose the legal guarantees that protect the Church, however imperfectly, against capricious rulers and popular majorities. And if public opinion in the West is turning increasingly secular, indeed anti-Christian, as social conservatives complain and surveys seem to confirm, is it really a good idea to militate in favor of a more illiberal order rather than defend tooth and nail liberal principles of freedom of conscience? For tomorrow, the state might fall into Elizabeth Warren’s hands.
Nor, finally, is political liberalism alone to blame for the Church’s retreating on various fronts. There have been plenty of wounds inflicted by churchmen and laypeople, who believed that they could best serve the faith by conforming its liturgy, moral teaching, and public presence to liberal order. But political liberalism didn’t compel these changes, at least not directly. In the space opened up by liberalism, and amid the kaleidoscopic lifestyles that left millions of people feeling empty and confused, it was perfectly possible to propose tradition as an alternative. It is still possible to do so.
None of this is to excuse the failures of liberals. Liberals and mainstream conservatives must go back to the drawing board, to figure out why it is that thoughtful people have come to conclude that their system is incompatible with democracy, nationalism, and religious faith. Traditionalists and others who see Russia’s mafia state as a defender of Christian civilization and national sovereignty have been duped, but liberals bear some blame for driving large numbers of people in the West to that conclusion.
This is a generational challenge for the liberal project. So be it. Liberal societies like America’s by nature invite such questioning. But before we abandon the 200-and-some-year-old liberal adventure, it is worth examining the ways in which today’s left-wing and right-wing critiques of it mirror bad ideas that were overcome in the previous century. The ideological ferment of the moment, after all, doesn’t relieve the illiberals of the responsibility to reckon with the lessons of the past.
1 Vermeule was reviewing The Demon in Democracy, a 2015 book by the Polish political theorist and parliamentarian Ryszard Legutko that makes the same case. Fred Siegel’s review of the English edition appeared in our June 2016 issue.
How the courts are intervening to block some of the most unjust punishments of our time
Barrett’s decision marked the 59th judicial setback for a college or university since 2013 in a due-process lawsuit brought by a student accused of sexual assault. (In four additional cases, the school settled a lawsuit before any judicial decision occurred.) This body of law serves as a towering rebuke to the Obama administration’s reinterpretation of Title IX, the 1972 law barring sex discrimination in schools that receive federal funding.
Beginning in 2011, the Education Department’s Office for Civil Rights (OCR) issued a series of “guidance” documents pressuring colleges and universities to change how they adjudicated sexual-assault cases in ways that increased the likelihood of guilty findings. Amid pressure from student and faculty activists, virtually all elite colleges and universities have gone far beyond federal mandates and have even further weakened the rights of students accused of sexual assault.
Like all extreme victims’-rights approaches, the new policies had the greatest impact on the wrongly accused. A 2016 study from UCLA public-policy professor John Villasenor used just one of the changes—schools employing the lowest standard of proof, a preponderance of the evidence—to predict that as often as 33 percent of the time, campus Title IX tribunals would return guilty findings in cases involving innocent students. Villasenor’s study could not measure the impact of other Obama-era policy demands—such as allowing accusers to appeal not-guilty findings, discouraging cross-examination of accusers, and urging schools to adjudicate claims even when a criminal inquiry found no wrongdoing.
In a September 7 address at George Mason University, Education Secretary Betsy DeVos stated that “no student should be forced to sue their way to due process.” But once enmeshed in the campus Title IX process, a wrongfully accused student’s best chance for justice may well be a lawsuit filed after his college has incorrectly found him guilty. (According to data from United Educators, a higher-education insurance firm, 99 percent of students accused of campus sexual assault are male.) The Foundation for Individual Rights in Education has identified more than 180 such lawsuits filed since the 2011 policy changes. That figure, obviously, excludes students with equally strong claims whose families cannot afford to go to court. These students face life-altering consequences. As Judge T.S. Ellis III noted in a 2016 decision, it is “so clear as to be almost a truism” that a student will lose future educational and employment opportunities if his college wrongly brands him a rapist.
“It is not the role of the federal courts to set aside decisions of school administrators which the court may view as lacking in wisdom or compassion.” So wrote the Supreme Court in a 1975 case, Wood v. Strickland. While the Supreme Court has made clear that colleges must provide accused students with some rights, especially when dealing with nonacademic disciplinary questions, courts generally have not been eager to intervene in such matters.
This is what makes the developments of the last four years all the more remarkable. The process began in May 2013, in a ruling against St. Joseph’s University, and has lately accelerated (15 rulings in 2016 and 21 thus far in 2017). Of the 40 setbacks for colleges in federal court, 14 came from judges nominated by Barack Obama, 11 from Clinton nominees, and nine from selections of George W. Bush. Brown University has been on the losing side of three decisions; Duke, Cornell, and Penn State, two each.
Court decisions since the expansion of Title IX activism have not all gone in one direction. In 36 of the due-process lawsuits, courts have permitted the university to maintain its guilty finding. (In four other cases, the university settled despite prevailing at a preliminary stage.) But even in these cases, some courts have expressed discomfort with campus procedures. One federal judge was “greatly troubled” that Georgia Tech veered “very far from an ideal representation of due process” when its investigator “did not pursue any line of investigation that may have cast doubt on [the accuser’s] account of the incident.” Another went out of his way to say that he considered it plausible that a former Case Western Reserve University student was actually “innocent of the charges levied against him.” And one state appellate judge opened oral argument by bluntly informing the University of California’s lawyer, “When I . . . finished reading all the briefs in this case, my comment was, ‘Where’s the kangaroo?’”
Judges have, obviously, raised more questions in cases where the college has found itself on the losing side. Those lawsuits have featured three common areas of concern: bias in the investigation, resulting in a college decision based on incomplete evidence; procedures that prevented the accused student from challenging his accuser’s credibility, chiefly through cross-examination; and schools utilizing a process that seemed designed to produce a predetermined result, in response to real or perceived pressure from the federal government.
Colleges and universities have proven remarkably willing to act on incomplete information when adjudicating sexual-assault cases. In December 2013, for example, Amherst College expelled a student for sexual assault despite text messages (which the college investigator failed to discover) indicating that the accuser had consented to sexual contact. The accuser’s own testimony also indicated that she might have committed sexual assault, by initiating sexual contact with a student who Amherst conceded was experiencing an alcoholic blackout. When the accused student sued Amherst, the college said its failure to uncover the text messages had been irrelevant because its investigator had only sought texts that portrayed the incident as nonconsensual. In February, Judge Mark Mastroianni allowed the accused student’s lawsuit to proceed, commenting that the texts could raise “additional questions about the credibility of the version of events [the accuser] gave during the disciplinary proceeding.” The two sides settled in late July.
Amherst was hardly alone in its eagerness to avoid evidence that might undermine the accuser’s version of events; the same happened at Penn State, St. Joseph’s, Duke, Ohio State, Occidental, Lynn, Marlboro, Michigan, and Notre Dame.
Even in cases with a more complete evidentiary base, accused students have often been blocked from presenting a full-fledged defense. As part of its reinterpretation of Title IX, the Obama administration sought to shield campus accusers from cross-examination. OCR’s 2011 guidance “strongly” discouraged direct cross-examination of accusers by the accused student—a critical restriction, since most university procedures require the accused student, rather than his lawyer, to defend himself in the hearing. OCR’s 2014 guidance suggested that this type of cross-examination in and of itself could create a hostile environment. The Obama administration even spoke favorably about the growing trend among schools to abolish hearings altogether and allow a single official to serve as investigator, prosecutor, judge, and jury in sexual-assault cases.
The Supreme Court has never held that campus disciplinary hearings must permit cross-examination. Nonetheless, the recent attack on the practice has left schools struggling to explain why they would not want to utilize what the Court has described as the “greatest legal engine ever invented for the discovery of truth.” In June 2016, the University of Cincinnati found a student guilty of sexual assault after a hearing at which neither his accuser nor the university’s Title IX investigator appeared. In an unintentionally comical line, the hearing chair noted the absent witnesses before asking the accused student if he had “any questions of the Title IX report.” The student, befuddled, replied, “Well, since she’s not here, I can’t really ask anything of the report.” (The panel chair did not indicate how the “report” could have answered any questions.) Cincinnati found the student guilty anyway.1
Limitations on full cross-examination also played a role in judicial setbacks for Middlebury, George Mason, James Madison, Ohio State, Occidental, Penn State, Brandeis, Amherst, Notre Dame, and Skidmore.
Finally, since 2011, more than 300 students have filed Title IX complaints with the Office for Civil Rights, alleging mishandling of their sexual-assault allegations by their colleges. OCR’s leadership seemed to welcome the complaints, which allowed Obama officials to inspect not only the individual case but all sexual-assault claims at the school in question over a three-year period. Northwestern University professor Laura Kipnis has estimated that during the Obama years, colleges spent between $60 million and $100 million on these investigations. A finding of a Title IX violation can cost a school its federal funding. As Harvard Law professors Jeannie Suk Gersen, Janet Halley, Elizabeth Bartholet, and Nancy Gertner observed in a white paper submitted to OCR, universities therefore have “strong incentives to ensure the school stays in OCR’s good graces.”
One of the earliest lawsuits after the Obama administration’s policy shift, involving former Xavier University basketball player Dez Wells, demonstrated how an OCR investigation can affect the fairness of a university inquiry. The accuser’s complaint had been referred both to Xavier’s Title IX office and the Cincinnati police. The police concluded that the allegation was meritless; Hamilton County Prosecuting Attorney Joseph Deters later said he considered charging the accuser with filing a false police report.
Deters asked Xavier to delay its proceedings until his office completed its investigation. School officials refused. Instead, three weeks after the initial allegation, the university expelled Wells. He sued and speculated that Xavier’s haste came not from a quest for justice but instead from a desire to avoid difficulties in finalizing an agreement with OCR to resolve an unrelated complaint filed by two female Xavier students. (In recent years, OCR has entered into dozens of similar resolution agreements, which bind universities to policy changes in exchange for removing the threat of losing federal funds.) In a July 2014 ruling, Judge Arthur Spiegel observed that Xavier’s disciplinary tribunal, however “well-equipped to adjudicate questions of cheating, may have been in over its head with relation to an alleged false accusation of sexual assault.” Soon thereafter, the two sides settled; Wells transferred to the University of Maryland.
Ohio State, Occidental, Cornell, Middlebury, Appalachian State, USC, and Columbia have all found themselves on the losing side of court decisions arising from cases that originated during a time in which OCR was investigating or threatening to investigate the school. (In the Ohio State case, one university staffer testified that she didn’t know whether she had an obligation to correct a false statement by an accuser to a disciplinary panel.) Pressure from OCR can be indirect, as well. The Obama administration interpreted federal law as requiring all universities to have at least one Title IX coordinator; larger universities now employ dozens of Title IX personnel who, as the Harvard Law professors explained, “have reason to fear for their jobs if they hold a student not responsible or if they assign a rehabilitative or restorative rather than a harshly punitive sanction.”
Amid the wave of judicial setbacks for universities, two decisions in particular stand out. Easily the most powerful opinion in a campus due-process case came in March 2016 from Judge F. Dennis Saylor. While the stereotypical campus sexual-assault allegation results from an alcohol-filled, one-night encounter between a male and a female student, a case at Brandeis University involved a long-term monogamous relationship between two male students. A bad breakup led to the accusing student’s filing the following complaint, against which his former boyfriend was expected to provide a defense: “Starting in the month of September, 2011, the Alleged violator of Policy had numerous inappropriate, nonconsensual sexual interactions with me. These interactions continued to occur until around May 2013.”
To adjudicate, Brandeis hired a former OCR staffer, who interviewed the two students and a few of their friends. Since the university did not hold a hearing, the investigator decided guilt or innocence on her own. She treated each incident as if the two men were strangers to each other, which allowed her to determine that sexual “violence” had occurred in the relationship. The accused student, she found, sometimes looked at his boyfriend in the nude without permission and sometimes awakened his boyfriend with kisses when the boyfriend wanted to stay asleep. The university’s procedures prevented the student from seeing the investigator’s report, with its absurdly broad definition of sexual misconduct, in preparing his appeal. “In the context of American legal culture,” Boston Globe columnist Dante Ramos later argued, denying this type of information “is crazy.” “Standard rules of evidence and other protections for the accused keep things like false accusations or mistakes by authorities from hurting innocent people.” When the university appeal was denied, the student sued.
At an October 2015 hearing to consider the university’s motion to dismiss, Saylor seemed flabbergasted at the unfairness of the school’s approach. “I don’t understand,” he observed, “how a university, much less one named after Louis Brandeis, could possibly think that that was a fair procedure to not allow the accused to see the accusation.” Brandeis’s lawyer cited pressure to conform to OCR guidance, but the judge deemed the university’s procedures “closer to Salem 1692 than Boston, 2015.”
The following March, Saylor issued an 89-page opinion that has been cited in virtually every lawsuit subsequently filed by an accused student. “Whether someone is a ‘victim’ is a conclusion to be reached at the end of a fair process, not an assumption to be made at the beginning,” Saylor wrote. “If a college student is to be marked for life as a sexual predator, it is reasonable to require that he be provided a fair opportunity to defend himself and an impartial arbiter to make that decision.” Saylor concluded that Brandeis forced the accused student “to defend himself in what was essentially an inquisitorial proceeding that plausibly failed to provide him with a fair and reasonable opportunity to be informed of the charges and to present an adequate defense.”
The student, vindicated by the ruling’s sweeping nature, then withdrew his lawsuit. He currently is pursuing a Title IX complaint against Brandeis with OCR.
Four months later, a three-judge panel of the Second Circuit Court of Appeals produced an opinion that lacked both Saylor’s rhetorical flourish and his understanding of the basic unfairness of the campus Title IX process. But by creating a more relaxed standard for accused students to make federal Title IX claims, the Second Circuit’s decision in Doe v. Columbia carried considerable weight.
Two Columbia students who had been drinking had a brief sexual encounter at a party. More than four months later, the accuser claimed she was too intoxicated to have consented. Her allegation came in an atmosphere of campus outrage about the university’s allegedly insufficient toughness on sexual assault. In this setting, the accused student found Columbia’s Title IX investigator uninterested in hearing his side of the story. He cited witnesses who would corroborate his belief that the accuser wasn’t intoxicated; the investigator declined to speak with them. The student was found guilty, although for reasons differing from the initial claim; the Columbia panel ruled that he had “directed unreasonable pressure for sexual activity toward the [accuser] over a period of weeks,” leaving her unable to consent on the night in question. He received a three-semester suspension for this nebulous offense—which even his accuser deemed too harsh. He sued, and the case was assigned to Judge Jesse Furman.
Furman’s opinion provided a ringing victory for Columbia and the Obama-backed policies it used. As Title IX litigator Patricia Hamill later observed, Furman’s “almost impossible standard” required accused students to have inside information about the institution’s handling of other sexual-assault claims—information they could plausibly obtain only through the legal process known as discovery, which happens at a later stage of litigation—in order to survive a university’s initial motion to dismiss. Furman suggested that, to prevail, an accused student would need to show that his school treated a female student accused of sexual assault more favorably, or at least provide details about how cases against other accused students showed a pattern of bias. But federal privacy law keeps campus disciplinary hearings private, leaving most accused students with little opportunity to uncover the information before their case is dismissed.
At the same time, the opinion excused virtually any degree of unfairness by the institution. Furman reasoned that taking “allegations of rape on campus seriously and . . . treat[ing] complainants with a high degree of sensitivity” could constitute “lawful” reasons for university unfairness toward accused students. Samantha Harris of the Foundation for Individual Rights in Education detected the decision’s “immediate and nationwide impact” in several rulings against accused students. It also played the same role in university briefs that Saylor’s Brandeis opinion did in filings by accused students.
The Columbia student’s lawyer, Andrew Miltenberg, appealed Furman’s ruling to the Second Circuit. The stakes were high, since a ruling affirming the lower court’s reasoning would have all but foreclosed Title IX lawsuits by accused students in New York, Connecticut, and Vermont. But a panel of three judges, all nominated by Democratic presidents, overturned Furman’s decision. In the opinion’s crucial passage, Judge Pierre Leval held that a university “is not excused from liability for discrimination because the discriminatory motivation does not result from a discriminatory heart, but rather from a desire to avoid practical disadvantages that might result from unbiased action. A covered university that adopts, even temporarily, a policy of bias favoring one sex over the other in a disciplinary dispute, doing so in order to avoid liability or bad publicity, has practiced sex discrimination, notwithstanding that the motive for the discrimination did not come from ingrained or permanent bias against that particular sex.” Before the Columbia decision, courts almost always had rebuffed Title IX pleadings from accused students. More recently, judges have allowed Title IX claims to proceed against Amherst, Cornell, California–Santa Barbara, Drake, and Rollins.
After the Second Circuit’s decision, Columbia settled with the accused student, sparing its Title IX decision-makers from having to testify at a trial. James Madison was one of the few universities to take a different course, with disastrous results. A lawsuit from an accused student survived a motion to dismiss, but the university refused to settle, allowing the student’s lawyer to depose the three school employees who had decided his client’s fate. One unintentionally revealed that he had misapplied the university’s own definition of consent. Another cited the accuser’s slurred words on a voicemail as proof of her extreme intoxication on the night of the alleged assault. It was left to the accused student’s lawyer, at a deposition months after the decision had been made, to note that the voicemail in question actually was received on a different night. In December 2016, Judge Elizabeth Dillon, an Obama nominee, granted summary judgment to the accused student, concluding that “significant anomalies in the appeal process” violated his due-process rights under the Constitution.
Universities were on the losing side of 36 due-process rulings while Obama appointee Catherine Lhamon presided over the Office for Civil Rights between 2013 and 2016; no record exists of her publicly acknowledging any of them. In June 2017, however, Lhamon suddenly rejoiced that “yet another federal court” had found that students disciplined for sexual misconduct “were not denied due process.” That Fifth Circuit decision, involving two former students at the University of Houston, was an odd case for her to celebrate. The majority cabined its findings to the “unique facts” of the case—that the accused students likely would have been found guilty even under the fairest possible process. And the dissent, from Judge Edith Jones, denounced the procedures championed by Lhamon and other Obama officials as “heavily weighted in favor of finding guilt,” predicting “worse to come if appellate courts do not step in to protect students’ procedural due process right where allegations of quasi-criminal sexual misconduct arise.”
At this stage, Lhamon, who now chairs the U.S. Commission on Civil Rights, cannot be taken seriously when it comes to questions of campus due process. But other defenders of the current Title IX regime have offered more substantive commentary about the university setbacks.
Legal scholar Michelle Anderson was one of the few to even discuss the due-process decisions. “Colleges and universities do not always adjudicate allegations of sexual assault well,” she noted in a 2016 law review article defending the Obama-era policies. Anderson even conceded that some colleges had denied “accused students fairness in disciplinary adjudication.” But these students sued, “and campuses are responding—as they must—when accused students prevail. So campuses face powerful legal incentives on both sides to address campus sexual assault, and to do so fairly and impartially.”
This may be true, but Anderson does not explain why wrongly accused students should bear the financial and emotional burden of inducing their colleges to implement fair procedures. More important, scant evidence exists that colleges have responded to the court victories of wrongly accused students by creating fairer procedures. Some have even made it more difficult for wrongly accused students to sue. After losing a lawsuit in December 2014, Brown eliminated the right of students accused of sexual assault to have “every opportunity” to present evidence. That same year, an accused student showed how Swarthmore had deviated from its own procedures in his case. The college quickly settled the lawsuit—and then added a clause to its procedures immunizing it from similar claims in the future. Swarthmore currently informs accused students that “rules of evidence ordinarily found in legal proceedings shall not be applied, nor shall any deviations from any of these prescribed procedures alone invalidate a decision.”
Many lawsuits are still working their way through the judicial system; three cases are pending at federal appellate courts. In the two that address substantive matters, oral arguments suggested skepticism of the universities’ positions. On July 26, a three-judge panel of the First Circuit considered a case at Boston College, where the accused student plausibly argued that someone else had committed the sexual assault (which occurred on a poorly lit dance floor). Judges Bruce Selya and William Kayatta seemed troubled that a Boston College dean had improperly intruded on the hearing board’s deliberations. At the Sixth Circuit a few days later, Judges Richard Griffin and Amul Thapar both expressed concerns about the University of Cincinnati’s downplaying the importance of cross-examination in campus-sex adjudications. Judge Eric Clay was quieter, but he wondered about the tension between the university’s Title IX and truth-seeking obligations.
In a perfect world, academic leaders themselves would have created fairer processes without judicial intervention. But in the current campus environment, such an approach is impossible. So, at least for the short term, the courts remain the best, albeit imperfect, option for students wrongly accused of sexual assault. Meanwhile, every year, young men entrust themselves and their family’s money to institutions of higher learning that are indifferent to their rights and unconcerned with the injustices to which these students might be subjected.
1 After a district court placed that finding on hold, the university appealed to the Sixth Circuit.
Review of 'Terror in France' by Gilles Kepel
Kepel is particularly knowledgeable about the history and process of radicalization that takes place in his nation’s heavily Muslim banlieues (the depressed housing projects ringing Paris and other major cities), and Terror in France is informed by decades of fieldwork in these volatile locales. What we have been witnessing for more than a decade, Kepel argues, is the “third wave” of global jihadism, which is not so much a top-down, doctrinally inspired campaign (as were the 9/11 attacks, directed from afar by the oracular figure of Osama bin Laden) as a bottom-up insurgency with an “enclave-based ethnic-racial logic of violence” to it. Kepel traces the phenomenon back to 2005, a convulsive year that saw the second-generation descendants of France’s postcolonial Muslim immigrants confront a changing socio-political landscape.
That was the year of the greatest riots in modern French history, involving mostly young Muslim men. It was also the year that Abu Musab al-Suri, the Syrian-born Islamist then serving as al-Qaeda’s operations chief in Europe, published The Global Islamic Resistance Call. This 1,600-page manifesto combined pious imprecations against the West with do-it-yourself ingenuity, an Anarchist’s Cookbook for the Islamist set. In Kepel’s words, the manifesto preached a “jihadism of proximity,” the brand of civil war later adopted by the Islamic State. It called for ceaseless, mass-casualty attacks in Western cities—attacks which increase suspicion and regulation of Muslims and, in turn, drive those Muslims into the arms of violent extremists.
The third-generation jihad has been assisted by two phenomena: social-networking sites that easily and widely disseminate Islamist propaganda (thus increasing the rate of self-radicalization) and the so-called Arab Spring, which led to state collapse in Syria and Libya, providing “an exceptional site for military training and propaganda only a few hours’ flight from Europe, and at a very low cost.”
Kepel’s book is not just a study of the ideology and tactics of Islamists but a sociopolitical overview of how this disturbing phenomenon fits within a country on the brink. For example, Kepel finds that jihadism is emerging in conjunction with developments such as the “end of industrial society.” A downturn in work has led to an ominous situation in which a “right-wing ethnic nationalism” preying on the economically anxious has risen alongside Islamism as “parallel conduits for expressing grievances.” Filling a space left by the French Communist Party (which once brought the ethnic French working class and Arab immigrants together), these two extremes leer at each other from opposite sides of a societal chasm, signaling the potentially cataclysmic future that awaits France if both mass unemployment and Islamist terror continue undiminished.
The French economy has also had a more direct inciting effect on jihadism. Overregulated labor markets make it difficult for young Muslims to get jobs, thus exacerbating the conditions of social deprivation and exclusion that make individuals susceptible to radicalization. The inability to tackle chronic unemployment has led to widespread Muslim disillusionment with the left (a disillusionment aggravated by another, often glossed over, factor: widespread Muslim opposition to the Socialist Party’s championing of same-sex marriage). Essentially, one left-wing constituency (unions) has made the unemployment of another constituency (Muslim youth) the mechanism for maintaining its privileges.
Kepel does not, however, cite deprivation as the sole or even main contributing factor to Islamist radicalization. One Parisian banlieue that has sent more than 80 residents to fight in Syria, he notes, has “attractive new apartment buildings” built by the state and features a mosque “constructed with the backing of the Socialist mayor.” It is also the birthplace of well-known French movie stars of Arab descent, and thus hardly a place where ambition goes to die. “The Islamophobia mantra and the victim mentality it reinforces makes it possible to rationalize a total rejection of France and a commitment to jihad by making a connection between unemployment, discrimination, and French republican values,” Kepel writes. Indeed, Kepel is refreshingly derisive of the term “Islamophobia” throughout the book, excoriating Islamists and their fellow travelers for “substituting it for anti-Semitism as the West’s cardinal sin.” These are meaningful words coming from Kepel, a deeply learned scholar of Islam who harbors great respect for the faith and its adherents.
Kepel also weaves the saga of jihadism into the ongoing “kulturkampf within the French left.” Arguments about Islamist terrorism demonstrate a “divorce between a secular progressive tradition” and the children of the Muslim immigrants this tradition fought to defend. The most ironically perverse manifestation of this divorce was ISIS’s kidnapping of Didier François, co-founder of the civil-rights organization SOS Racisme. Kepel recognizes the origins of this divorce in the “red-green” alliance formed decades ago between Islamists and elements of the French intellectual left, such as Michel Foucault, a cheerleader of the Iranian revolution.
Though he offers a rigorous history and analysis of the jihadist problem, Kepel is generally at a loss for solutions. He decries a complacent French elite, with its disregard for genuine expertise (evidenced by the decline in institutional academic support for Islamicists and Arabists) and the narrow, relatively impenetrable way in which it perpetuates itself, chiefly through a single school (the École normale supérieure) that practically every French politician must attend. Despite France’s admirable republican values, this insularity has made the process of assimilation rather difficult. But other than wishing that the public education system become more effective and inclusive at instilling republican values, Kepel provides little in the way of suggestions as to how France might emerge from this mess. That a scholar of such erudition and humanity can do little but throw up his hands and issue a sigh of despair cannot bode well. The third-generation jihad owes as much to the political breakdown in France as it does to the meltdown in the Middle East. Defeating this two-headed beast requires a new and comprehensive playbook: the West’s answer to The Global Islamic Resistance Call. That book has yet to be written.
President Trump, in case you haven’t noticed, has a tendency to exaggerate. Nothing is “just right” or “meh” for him. Buildings, crowds, election results, and military campaigns are always outsized, gargantuan, larger, and more significant than you might otherwise assume. “People want to believe that something is the biggest and the greatest and the most spectacular,” he wrote 30 years ago in The Art of the Deal. “I call it truthful hyperbole. It’s an innocent form of exaggeration—and a very effective form of promotion.”
So effective, in fact, that the press has picked up the habit. Reporters and editors agree with the president that nothing he does is ordinary. After covering Trump for more than two years, they still can’t accept him as a run-of-the-mill politician. And while there are aspects of Donald Trump and his presidency that are, to say the least, unusual, the media seem unable to distinguish between the abnormal and significant—firing the FBI director in the midst of an investigation into one’s presidential campaign, for example—and the commonplace.
Consider the fiscal deal President Trump struck with Democratic leaders in early September.
On September 6, the president held an Oval Office meeting with Vice President Pence, Treasury Secretary Mnuchin, and congressional leaders of both parties. He had to find a way to (a) raise the debt ceiling, (b) fund the federal government, and (c) spend money on hurricane relief. The problem was that a bloc of House Republicans wouldn’t vote for (a) unless the increase was accompanied by significant budget cuts, which interfered with (b) and (c). Raising the debt ceiling, then, required Democratic votes. And the debt ceiling had to be raised. “There is zero chance—no chance—we will not raise the debt ceiling,” Senate Majority Leader Mitch McConnell said in August.
The meeting went like this. First, House Speaker Paul Ryan asked for an 18-month increase in the debt ceiling so Republicans wouldn’t have to vote again on the matter until after the midterm elections. Democrats refused. The bargaining continued until Ryan asked for a six-month increase. The Democrats remained stubborn. So Trump, always willing to kick a can down the road, interrupted Mnuchin to offer a three-month increase, a continuing resolution that would keep the government open through December, and about $8 billion in hurricane money. The Democrats said yes.
That, anyway, is what happened. But the media are not satisfied to report what happened. They want—they need—to tell you what it means. And what does it mean? Well, they aren’t really sure. But it’s something big. It’s something spectacular. For example:
1. “Trump Bypasses Republicans to Strike Deal on Debt Limit and Harvey Aid” was the headline of a story for the New York Times by Peter Baker, Thomas Kaplan, and Michael D. Shear. “The deal to keep the government open and paying its debts until Dec. 15 represented an extraordinary public turn for the president, who has for much of his term set himself up on the right flank of the Republican Party,” their article began. Fair enough. But look at how they import speculation and opinion into the following sentence: “But it remained unclear whether Mr. Trump’s collaboration with Democrats foreshadowed a more sustained shift in strategy by a president who has presented himself as a master dealmaker or amounted to just a one-time instinctual reaction of a mercurial leader momentarily eager to poke his estranged allies.”
2. “The decision was one of the most fascinating and mysterious moves he’s made with Congress during eight months in office,” reported Jeff Zeleny, Dana Bash, Deirdre Walsh, and Jeremy Diamond for CNN. Thanks for sharing!
3. “Trump budget deal gives GOP full-blown Stockholm Syndrome,” read the headline of Tina Nguyen’s piece for Vanity Fair. “Donald Trump’s unexpected capitulation to new best buds ‘Chuck and Nancy’ has thrown the Grand Old Party into a frenzy as Republicans search for explanations—and scapegoats.”
4. “For Conservatives, Trump’s Deal with Democrats Is Nightmare Come True,” read the headline for a New York Times article by Jeremy W. Peters and Maggie Haberman. “It is the scenario that President Trump’s most conservative followers considered their worst nightmare, and on Wednesday it seemed to come true: The deal-making political novice, whose ideology and loyalty were always fungible, cut a deal with Democrats.”
5. “Trump sides with Democrats on fiscal issues, throwing Republican plans into chaos,” read the Washington Post headline the day after the deal was announced. “The president’s surprise stance upended sensitive negotiations over the debt ceiling and other crucial policy issues this fall and further imperiled his already tenuous relationships with Senate Majority Leader Mitch McConnell and House Speaker Paul Ryan.” Yes, the negotiations were upended. Then they made a deal.
6. “Although elected as a Republican last year,” wrote Peter Baker of the Times, “Mr. Trump has shown in the nearly eight months in office that he is, in many ways, the first independent to hold the presidency since the advent of the two-party system around the time of the Civil War.” The title of Baker’s news analysis: “Bound to No Party, Trump Upends 150 Years of Two-Party Rule.” One hundred and fifty years? Why not 200?
The journalistic rule of thumb used to be that an article describing a political, social, or cultural trend requires at least three examples. Not while covering Trump. If Trump does something, anything, you should feel free to inflate its importance beyond all recognition. And stuff your “reporting” with all sorts of dramatic adjectives and frightening nouns: fascinating, mysterious, unexpected, extraordinary, nightmare, chaos, frenzy, and scapegoats. It’s like a Vince Flynn thriller come to life.
The case for the significance of the budget deal would be stronger if there were a consensus about whom it helped. There isn’t one. At first the press assumed Democrats had won. “Republicans left the Oval Office Wednesday stunned,” reported Rachael Bade, Burgess Everett, and Josh Dawsey of Politico. Another trio of Politico reporters wrote, “In the aftermath, Republicans seethed privately and distanced themselves publicly from the deal.” Republicans were “stunned,” reported Kristina Peterson, Siobhan Hughes, and Louise Radnofsky of the Wall Street Journal. “Meet the swamp: Donald Trump punts September agenda to December after meeting with Congress,” read the headline of Charlie Spiering’s Breitbart story.
By the following week, though, these very outlets had decided the GOP was looking pretty good. “Trump’s deal with Democrats bolsters Ryan—for now,” read the Politico headline on September 11. “McConnell: No New Debt Ceiling Vote until ‘Well into 2018,’” reported the Washington Post. “At this point…picking a fight with Republican leaders will only help him,” wrote Gerald Seib in the Wall Street Journal. “Trump has long warned that he would work with Democrats, if necessary, to fulfill his campaign promises. And Wednesday’s deal is a sign that he intends to follow through on that threat,” wrote Breitbart’s Joel Pollak.
The sensationalism, the conflicting interpretations, and the visceral language are dizzying. We have so many reporters chasing the same story that each feels compelled to gussy up a quotidian budget negotiation until it resembles the Molotov–Ribbentrop pact, and none feels it necessary to apply to their own reporting the scrutiny and incredulity they apply to Trump. The truth is that no one knows what this agreement portends. Nor is it the job of a reporter to divine the meaning of current events like an augur of Rome. Sometimes a cigar is just a cigar. And a deal is just a deal.
Remembering something wonderful
Not surprisingly, many well-established performers were left in the lurch by the rise of the new media. Moreover, some vaudevillians who, like Fred Allen, had successfully reinvented themselves for radio were unable to make the transition to TV. But a handful of exceptionally talented performers managed to move from vaudeville to radio to TV, and none did it with more success than Jack Benny, whose feigned stinginess, scratchy violin playing, slightly effeminate demeanor, and preternaturally exact comic timing made him one of the world’s most beloved performers. After establishing himself in vaudeville, he became the star of a comedy series, The Jack Benny Program, that aired continuously, first on radio and then TV, from 1932 until 1965. Save for Bob Hope, no other comedian of his time was so popular.
With the demise of nighttime network radio as an entertainment medium, the 931 weekly episodes of The Jack Benny Program became the province of comedy obsessives—and because Benny’s TV series was filmed in black-and-white, it is no longer shown in syndication with any regularity. And while he also made Hollywood films, some of which were box-office hits, only one, Ernst Lubitsch’s To Be or Not to Be (1942), is today seen on TV other than sporadically.
Nevertheless, connoisseurs of comedy still regard Benny, who died in 1974, as a giant, and numerous books, memoirs, and articles have been published about his life and art. Most recently, Kathryn H. Fuller-Seeley, a professor at the University of Texas at Austin, has brought out Jack Benny and the Golden Age of Radio Comedy, the first book-length primary-source academic study of The Jack Benny Program and its star.1 Fuller-Seeley’s genuine appreciation for Benny’s work redeems her anachronistic insistence on viewing it through the fashionable prism of gender- and race-based theory, and her book, though sober-sided to the point of occasional starchiness, is often quite illuminating.
Most important of all, off-the-air recordings of 749 episodes of the radio version of The Jack Benny Program survive in whole or part and can easily be downloaded from the Web. As a result, it is possible for people not yet born when Benny was alive to hear for themselves why he is still remembered with admiration and affection—and why one specific aspect of his performing persona continues to fascinate close observers of the American scene.
Born Benjamin Kubelsky in Chicago in 1894, Benny was the son of Eastern European émigrés (his father was from Poland, his mother from Lithuania). He started studying violin at six and had enough talent to pursue a career in music, but his interests lay elsewhere, and by the time he was a teenager, he was working in vaudeville as a comedian who played the violin as part of his act. Over time he developed into a “monologist,” the period term for what we now call a stand-up comedian, and he began appearing in films in 1929 and on network radio three years after that.
Radio comedy, like silent film, is now an obsolete art form, but the program formats that it fostered in the ’20s and ’30s all survived into the era of TV, and some of them flourish to this day. One, episodic situation comedy, was developed in large part by Jack Benny and his collaborators. Benny and Harry Conn, his first full-time writer, turned his weekly series, which started out as a variety show, into a weekly half-hour playlet featuring a regular cast of characters augmented by guest stars. Such playlets, relying as they did on a setting that was repeated from week to week, were easier to write than the free-standing sketches favored by Allen, Hope, and other ex-vaudevillians, and by the late ’30s, the sitcom had become a staple of radio comedy.
The process, as documented by Fuller-Seeley, was a gradual one. The Jack Benny Program never broke entirely with the variety format, continuing to feature both guest stars (some of whom, like Ronald Colman, ultimately became semi-regular members of the show’s rotating ensemble of players) and songs sung by Dennis Day, a tenor who joined the cast in 1939. Nor was it the first radio situation comedy: Amos ’n’ Andy, launched in 1928, was a soap-opera-style daily serial that also featured regular characters. Nevertheless, it was Benny who perfected the form, and his own character would become the prototype for countless later sitcom stars.
The show’s pivotal innovation was to turn Benny and the other cast members into fictionalized versions of themselves—they were the stars of a radio show called “The Jack Benny Program.” Sadye Marks, Benny’s wife, played Mary Livingstone, his sharp-tongued secretary, with three other characters added as the self-reflexive concept took shape. Don Wilson, the stout, genial announcer, came on board in 1934. He was followed in 1936 by Phil Harris, Benny’s roguish bandleader, and, in 1939, by Day, Harris’s simple-minded vocalist. To this team was added a completely fictional character, Rochester Van Jones, Benny’s raspy-voiced, outrageously impertinent black valet, played by Eddie Anderson, who joined the cast in 1938.
As these five talented performers coalesced into a tight-knit ensemble, the jokey, vaudeville-style sketch comedy of the early episodes metamorphosed into sitcom-style scripts that portrayed their offstage lives, as well as the making of the show itself. Scarcely any conventional jokes were told, nor did Benny’s writers employ the topical and political references in which Allen and Hope specialized. Instead, the show’s humor arose almost entirely from the close interplay of character and situation.
Benny was not solely responsible for the creation of this format, which was forged by Conn and perfected by his successors. Instead, he doubled as the star and producer—or, to use the modern term, show runner—closely supervising the writing of the scripts and directing the performances of the other cast members. In addition, he and Conn turned the character of Jack Benny from a sophisticated vaudeville monologist into the hapless butt of the show’s humor, a vain, sexually inept skinflint whose character flaws were ceaselessly twitted by his colleagues, who in turn were given most of the biggest laugh lines.
This latter innovation was a direct reflection of Benny’s real-life personality. Legendary for his voluble appreciation of other comedians, he was content to respond to the wisecracking of his fellow cast members with exquisitely well-timed interjections like “Well!” and “Now, cut that out,” knowing that the comic spotlight would remain focused on the man of whom they were making fun and secure in the knowledge that his own comic personality was strong enough to let them shine without eclipsing him in the process.
And with each passing season, the fictional personalities of Benny and his colleagues became ever more firmly implanted in the minds of their listeners, thus allowing the writers to get laughs merely by alluding to their now-familiar traits. At the same time, Benny and his writers never stooped to coasting on their familiarity. Even the funniest of the “cheap jokes” that were their stock-in-trade were invariably embedded in carefully honed dramatic situations that heightened their effectiveness.
A celebrated case in point is the best-remembered laugh line in the history of The Jack Benny Program, heard in a 1948 episode in which a burglar holds Benny up on the street. “Your money or your life,” the burglar says—to which Jack replies, after a very long pause, “I’m thinking it over!” What makes this line so funny is, of course, our awareness of Benny’s stinginess, reinforced by a decade and a half of constant yet subtly varied repetition. What is not so well remembered is that the line is heard toward the end of an episode that aired shortly after Ronald Colman won an Oscar for his performance in A Double Life. Inspired by this real-life event, the writers concocted an elaborately plotted script in which Benny talks Colman (who played his next-door neighbor on the show) into letting him borrow the Oscar to show to Rochester. It is on his way home from this errand that Benny is held up, and the burglar not only robs him of his money but also steals the statuette, a situation that was resolved to equally explosive comic effect in the course of two subsequent episodes.
No mere joke-teller could have performed such dramatically complex scripts week after week with anything like Benny’s effectiveness. The secret of The Jack Benny Program was that its star, fully aware that he was not “being himself” but playing a part, did so with an actor’s skill. This was what led Ernst Lubitsch to cast him in To Be or Not to Be, in which he plays a mediocre Shakespearean tragedian, a character broadly related to but still quite different from the one who appeared on his own radio show. As Lubitsch explained to Benny, who was skeptical about his ability to carry off the part:
A clown—he is a performer what is doing funny things. A comedian—he is a performer what is saying funny things. But you, Jack, you are an actor, you are an actor playing the part of a comedian and this you are doing very well.
To Be or Not to Be also stands out from the rest of Benny’s work because he plays an identifiably Jewish character. The Jack Benny character that he played on radio and TV, by contrast, was never referred to or explicitly portrayed as Jewish. To be sure, most listeners were in no doubt of his Jewishness, and not merely because Benny made no attempt in real life to conceal his ethnicity, of which he was by all accounts proud. The Jack Benny Program was written by Jews, and the ego-puncturing insults with which their scripts were packed, as well as the schlemiel-like aspect of Benny’s “fall guy” character, were quintessentially Jewish in style.
As Benny explained in a 1948 interview cited by Fuller-Seeley:
The humor of my program is this: I’m a big shot, see? I’m fast-talking. I’m a smart guy. I’m boasting about how marvelous I am. I’m a marvelous lover. I’m a marvelous fiddle player. Then, five minutes after I start shooting off my mouth, my cast makes a shmo out of me.
Even so, his avoidance of specific Jewish identification on the air is noteworthy precisely because his character was a miser. At a time when overt anti-Semitism was still common in America, it is remarkable that Benny’s comic persona was based in large part on an anti-Semitic stereotype—yet one that seems not to have inspired any anti-Semitic attacks on Benny himself. When, in 1945, his writers came up with the idea of an “I Can’t Stand Jack Benny Because . . . ” write-in campaign, they received 270,000 entries. Only three made mention of his Jewishness.
As for the winning entry, submitted by a California lawyer, it says much about what insulated Benny from such attacks: “He fills the air with boasts and brags / And obsolete, obnoxious gags / The way he plays his violin / Is music’s most obnoxious sin / His cowardice alone, indeed, / Is matched by his obnoxious greed / And all the things that he portrays / Show up MY OWN obnoxious ways.” It is clear that Benny’s foibles were seen by his listeners not as particular but universal, just as there was no harshness in the razzing of his fellow cast members, who very clearly loved the Benny character in spite of his myriad flaws. So, too, did the American people. Several years after his TV series was canceled, a corporation that was considering using him as a spokesman commissioned a national poll to find out how popular he was. It learned that only 3 percent of the respondents disliked him.
Therein lay Benny’s triumph: He won total acceptance from the American public and did so by embodying a Jewish stereotype from which the sting of prejudice had been leached. Far from being a self-hating whipping boy for anti-Semites, he turned himself into WASP America’s Jewish uncle, preposterous yet lovable.
When the bottom fell out of network radio, Benny negotiated the move to TV without a hitch, debuting on the small screen in 1950 and bringing the radio version of The Jack Benny Program to a close five years later, making it one of the very last radio comedy series to shut up shop. Even after his weekly TV series was finally canceled by CBS in 1965, he continued to star in well-received one-shot specials on NBC.
But Benny’s TV appearances, for all their charm, were never quite equal in quality to his radio work, which is why he clung to the radio version of The Jack Benny Program until network radio itself went under: Better than anyone else, he knew how good the show had been. For the rest of his life, he lived off the accumulated comic capital built up by 21 years of weekly radio broadcasts.
Now, at long last, he belongs to the ages, and The Jack Benny Program is a museum piece. Yet it remains hugely influential, albeit at one or more removes from the original. From The Dick Van Dyke Show and The Danny Thomas Show to Seinfeld, Everybody Loves Raymond, and The Larry Sanders Show, every ensemble-cast sitcom whose central character is a fictionalized version of its star is based on Benny’s example. And now that the ubiquity of the Web has made the radio version of his series readily accessible for the first time, anyone willing to make the modest effort necessary to seek it out is in a position to discover that The Jack Benny Program, six decades after it left the air, is still as wonderfully, benignly funny as it ever was, a monument to the talent of the man who, more than anyone else, made it so.
Review of 'The Transferred Life of George Eliot' by Philip Davis
Not that there’s any danger these theoretically protesting students would have read George Eliot’s works—not even the short one, Silas Marner (1861), which in an earlier day was assigned to high schoolers. I must admit I didn’t find my high-school reading of Silas Marner a pleasant experience—sports novels for boys like John R. Tunis’s The Kid from Tomkinsville were inadequate preparation. I must confess, too, that when I was in graduate school, determined to study 17th-century English verse, my reaction to the suggestion that I should also read Middlemarch (1871–72) was “What?! An 800-page novel by the guy who wrote Silas Marner?” A friend patiently explained that “the guy” was actually Mary Ann Evans, born in 1819, died in 1880. Partly because she was living in sin with the literary jack-of-all-trades George Henry Lewes (legally and irrevocably bound to his estranged wife), she adopted “George Eliot” as a protective pseudonym when, in her 1857 debut, she published Scenes of Clerical Life.
I did, many times over and with awe and delight, go on to read Middlemarch and the seven other novels, often in order to teach them to college students. Students have become less and less receptive over the years. Forget modern-day objections to George Eliot’s complex political or religious views. Adam Bede (1859) and The Mill on the Floss (1860) were too hefty, and the triple-decked Middlemarch and Daniel Deronda, even if I set aside three weeks for them, rarely got finished.
The middle 20th century was perhaps a more propitious time for appreciating George Eliot, Henry James, and other 19th-century English and American novelists. Influential teachers like F.R. Leavis at Cambridge and Lionel Trilling at Columbia were then working hard to persuade students that the study of literature, not just poetry and drama but also fiction, matters both to their personal lives—the development of their sensibility or character—and to their wider society. The “moral imagination” that created Middlemarch enriches our minds by dramatizing the complications—the frequent blurring of good and evil—in our lives. Great novels help us cope with ambiguities and make us more tolerant of one another. Many of Leavis’s and Trilling’s students became teachers themselves, and for several decades the feeling of cultural urgency was sustained. In the 1970s, though, between the leftist emphasis on literature as “politics by other means” and the deconstructionist denial of the possibility of any knowledge, literary or otherwise, independent of political power, the high seriousness of Leavis and Trilling began to fade.
The study of George Eliot and her life has gone through many stages. Directly after her death came the sanitized, hagiographic “life and letters” by J.W. Cross, the much younger man she married after Lewes’s death. Gladstone called it “a Reticence in three volumes.” The three volumes helped spark, if they didn’t cause, the long reaction against the Victorian sages generally that culminated in the dismissively satirical work of the Bloomsbury biographer and critic Lytton Strachey in his immensely influential Eminent Victorians (1918). Strachey’s mistreatment of his forebears was, with regard to George Eliot at least, tempered almost immediately by Virginia Woolf. It was Woolf who in 1919 provocatively pronounced Middlemarch “one of the few English novels written for grown-up people.” Eventually, the critical tide against George Eliot was decisively reversed in the ’40s by Joan Bennett and Leavis, who made the inarguable case for her genuine and lasting achievement. That period of correction culminated in the 1960s with Gordon S. Haight’s biography and with interpretive studies by Barbara Hardy and W.J. Harvey.
The same is true, only more so, of the books written, with George Eliot as the ostensible subject, to promote deconstructionist or feminist agendas. Biographies have done a better job appealing to the common reader, not least because the woman’s own story is inherently compelling. The question right now is whether a book combining biographical and interpretive insight—one “pitched,” as publishers like to say, not just at experts but at the common reader—is past praying for.
Philip Davis, a Victorian scholar and an editor at Oxford University Press, hopes not. His The Transferred Life of George Eliot—transferred, that is, from her own experience into her letters, journals, essays, and novels, and beyond them into us—deserves serious attention. Davis is conscious that George Eliot called biographies of writers “a disease of English literature,” both overeager to discover scandals and too inclined to substitute day-to-day travels, relationships, dealings with publishers and so on, for critical attention to the books those writers wrote. Davis therefore devotes himself to George Eliot’s writing. Alas, he presumes rather too much knowledge on the reader’s part of the day-to-day as charted in Haight’s marvelous life. (A year-by-year chronology at the front of the book would have helped even his fellow Victorianists.)
As for George Eliot’s writing, Davis is determined to refute “what has been more or less said . . . in the schools of theory for the last 40 years—that 19th-century realism is conservatively bland and unimaginative, bourgeois and parochial, not truly art at all.” His argument for the richness, breadth, and art of George Eliot’s realism—her factual and sympathetic depiction of poor and middling people, without omitting a candid representation of the rich—is most convincing. What looms largest, though, is the realist, the woman herself—the Mary Ann Evans who, from the letters to the novels, became first Marian Evans the translator and essayist and then later “her own greatest character”: George Eliot the novelist. Davis insists that “the meaning of that person”—not merely the voice of her omniscient narrators but the omnipresent imagination that created the whole show—“has not yet exhausted its influence nor the larger future life she should have had, and may still have, in the world.”
The transference of George Eliot’s experience into her fiction is unquestionable: In The Mill on the Floss, for example, Mary Ann is Maggie, and her brother Isaac is Tom Tulliver. Davis knows that a better word might be transmutation, as George Eliot had, in Henry James’s words, “a mind possessed,” for “the creations which brought her renown were of the incalculable kind, shaped themselves in mystery, in some intellectual back-shop or secret crucible, and were as little as possible implied in the aspect of her life.” No data-accumulating biographer, even the most exhaustive, can account for that “incalculable . . . mystery.”
Which is why Davis, like a good teacher, gives us exercises in “close reading.” He pauses to consider how a George Eliot sentence balances or turns on an easy-to-skip-over word or phrase—the balance or turn often representing a moment when the novelist looks at what’s on the underside of the cards.
George Eliot’s style is subtle because her theme is subtle. Take D.H. Lawrence’s favorite heroine, the adolescent Maggie Tulliver. The external event in The Mill on the Floss may be the girl’s impulsively cutting off her unruly hair to spite her nagging aunts, or the young woman’s drifting down the river with a superficially attractive but truly impossible boyfriend. But the real “action” is Maggie’s internal self-blame and self-assertion. No Victorian novelist was better than George Eliot at tracing the psychological development of, say, a husband and wife who realize they married each other for shallow reasons, are unhappy, and now must deal with the ordinary necessities of balancing the domestic budget—Lydgate and Rosamond in Middlemarch—or, in the same novel, the religiously inclined Dorothea’s mistaken marriage to the old scholar Casaubon. That mistake precipitates not merely disenchantment and an unconscious longing for love with someone else, but (very finely) a quest for a religious explanation of and guide through her quandary.
It’s the religio-philosophical side of George Eliot about which Davis is strongest—and weakest. Her central theological idea, if one may simplify, was that the God of the Bible didn’t exist “out there” but was a projection of the imagination of the people who wrote it. Jesus wasn’t, in Davis’s characterization of her view, “the impervious divine, but [a man who] shed tears and suffered,” and died feeling forsaken. “This deep acceptance of so-called weakness was what most moved Marian Evans in her Christian inheritance. It was what God was for.” That is, the character of Jesus, and the dramatic play between him and his Father, expressed the human emotions we and George Eliot are all too familiar with. The story helps reconcile us to what is, finally, inescapable suffering.
George Eliot came to this demythologized understanding not only of Judaism and Christianity but of all religions through her contact first with a group of intellectuals who lived near Coventry, then with two Germans she translated: David Friedrich Strauss, whose 1,500-page Life of Jesus Critically Examined (1835–36) was for her a slog, and Ludwig Feuerbach, whose Essence of Christianity (1841) was for her a joy. Also, in the search for the universal morality that Strauss and Feuerbach believed Judaism and Christianity expressed mythically, there was Spinoza’s utterly non-mythical Ethics (1677). It was seminal for her—offering, as Davis says, “the intellectual origin for freethinking criticism of the Bible and for the replacement of religious superstition and dogmatic theology by pure philosophic reason.” She translated it into English, though her version did not appear until 1981.
I wish Davis had left it there, but he takes it too far. He devotes more than 40 pages—a tenth of the whole book—to her three translations, taking them as a mother lode of ideational gold whose tailings glitter throughout her fiction. These 40 pages are followed by 21 devoted to Herbert Spencer, the Victorian hawker of theories-of-everything (his 10-volume System of Synthetic Philosophy addresses biology, psychology, sociology, and ethics). She threw herself at the feet of this intellectual huckster, and though he rebuffed her painfully amorous entreaties, she never ceased revering him. Alas, Spencer was a stick—the kind of philosopher who was incapable of emotion. And she was his intellectual superior in every way. The chapter is largely unnecessary.
The book comes back to life when Davis turns to George Henry Lewes, the man who gave Mary Ann Evans the confidence to become George Eliot—perhaps the greatest act of loving mentorship in all of literature. Like many prominent Victorians, Lewes dabbled in all the arts and sciences, publishing highly readable accounts of them for a general audience. His range was as wide as Spencer’s, but his personality and writing had an irrepressible verve that Spencer could only have envied. Lewes was a sort of Stephen Jay Gould yoked to Daniel Boorstin, popularizing other people’s findings and concepts, and coming up with a few of his own. He regarded his Sea-Side Studies (1860) as “the book . . . which was to me the most unalloyed delight,” not least because Marian, whom he called Polly, had helped gather the data. She told a friend, “There is so much happiness condensed in it! Such scrambles over rocks, and peeping into clear pool [sic], and strolls along the pure sands, and fresh air mingling with fresh thoughts.” In his remarkably intelligent 1864 biography of Goethe, Lewes remarks that the poet “knew little of the companionship of two souls striving in emulous spirit of loving rivalry to become better, to become wiser, teaching each other to soar.” Such a companionship Lewes and George Eliot had in spades, and some of Davis’s best passages describe it.
Regrettably, Davis also offers many passages well below the standard of his best—needlessly repeating an already established point or obfuscating the obvious. Still, The Transferred Life is the most formidably instructive, and certainly the most complete, life-and-works treatment of George Eliot we have.