I had the singular honor of attending an early private screening of Gandhi with an audience of invited guests from the National Council of Churches. At the end of the three-hour movie there was hardly, as they say, a dry eye in the house. When the lights came up I fell into conversation with a young woman who observed, reverently, that Gandhi’s last words were “Oh, God,” causing me to remark regretfully that the real Gandhi had not spoken in English, but had cried, Hai Rama! (“Oh, Rama”). Well, Rama was just Indian for God, she replied, at which I felt compelled to explain that, alas, Rama, collectively with his three half-brothers, represented the seventh incarnation of Vishnu. The young woman, who seemed to have been under the impression that Hinduism was Christianity under another name, sensed somehow that she had fallen on an uncongenial spirit, and the conversation ended.
At a dinner party shortly afterward, a friend of mine, who had visited India many times and even gone to the trouble of learning Hindi, objected strenuously that the picture of Gandhi that emerges in the movie is grossly inaccurate, omitting, as one of many examples, that when Gandhi’s wife lay dying of pneumonia and British doctors insisted that a shot of penicillin would save her, Gandhi refused to have this alien medicine injected in her body and simply let her die. (It must be noted that when Gandhi contracted malaria shortly afterward he accepted for himself the alien medicine quinine, and that when he had appendicitis he allowed British doctors to perform on him the alien outrage of an appendectomy.) All of this produced a wistful mooing from an editor of a major newspaper and a recalcitrant, “But still. . . .” I would prefer to explicate things more substantial than a wistful mooing, but there is little doubt it meant the editor in question felt that even if the real Mohandas K. Gandhi had been different from the Gandhi of the movie it would have been nice if he had been like the movie-Gandhi, and that presenting him in this admittedly false manner was beautiful, stirring, and perhaps socially beneficial.
An important step in the canonization of this movie-Gandhi was taken by the New York Film Critics Circle, which not only awarded the picture its prize as best film of 1982, but awarded Ben Kingsley, who played Gandhi (a remarkably good performance), its prize as best actor of the year. But I cannot believe for one second that these awards were made independently of the film’s content—which, not to put too fine a point on it, is an all-out appeal for pacifism—or in anything but the most shameful ignorance of the historical Gandhi.
Now it does not bother me that Shakespeare omitted from his King John the signing of the Magna Charta—by far the most important event in John’s reign. All Shakespeare’s “histories” are strewn with errors and inventions. Shifting to the cinema and to more recent times, it is hard for me to work up much indignation over the fact that neither Eisenstein’s Battleship Potemkin nor his October recounts historical episodes in anything like the manner in which they actually occurred (the famous march of the Tsarist soldiers down the steps at Odessa—artistically one of the greatest sequences in film history—simply did not take place). As we draw closer to the present, however, the problem becomes much more difficult. If the Soviet Union were to make an artistically wondrous film about the entry of Russian tanks into Prague in 1968 (an event I happened to witness), and show them being greeted with flowers by a grateful populace, the Czechs dancing in the streets with joy, I do not guarantee that I would maintain my serene aloofness. A great deal depends on whether the historical events represented in a movie are intended to be taken as substantially true, and also on whether—separated from us by some decades or occurring yesterday—they are seen as having a direct bearing on courses of action now open to us.
On my second viewing of Gandhi, this time at a public showing at the end of the Christmas season, I happened to leave the theater behind three teenage girls, apparently from one of Manhattan’s fashionable private schools. “Gandhi was pretty much an FDR,” one opined, astonishing me almost as much by her breezy use of initials to invoke a President who died almost a quarter-century before her birth as by the stupefying nature of the comparison. “But he was a religious figure, too,” corrected one of her friends, adding somewhat smugly, “It’s not in our historical tradition to honor spiritual leaders.” Since her schoolteachers had clearly not led her to consider Jonathan Edwards and Roger Williams as spiritual leaders, let alone Joseph Smith and William Jennings Bryan, the intimation seemed to be that we are a society with poorer spiritual values than, let’s say, India. There can be no question, in any event, that the girls felt they had just been shown the historical Gandhi—an attitude shared by Ralph Nader, who at last account had seen the film three times. Nader has conceived the most extraordinary notion that Gandhi’s symbolic flouting of the British salt tax was a “consumer issue” which he later expanded into the wider one of Indian independence. A modern parallel to Gandhi’s program of home-spinning and home-weaving, another “consumer issue,” says Nader, might be the use of solar energy to free us from the “giant multinational oil corporations.”
As it happens, the government of India openly admits to having provided one-third of the financing of Gandhi out of state funds, straight out of the national treasury—and after close study of the finished product I would not be a bit surprised to hear that it was 100 percent. If Pandit Nehru is portrayed flatteringly in the film, one must remember that Nehru himself took part in the initial story conferences (he originally wanted Gandhi to be played by Alec Guinness) and that his daughter Indira Gandhi is, after all, Prime Minister of India (though no relation to Mohandas Gandhi). The screenplay was checked and rechecked by Indian officials at every stage, often by the Prime Minister herself, with close consultations on plot and even casting. If the movie contains a particularly poisonous portrait of Mohammed Ali Jinnah, the founder of Pakistan, the Indian reply, I suppose, would be that if the Pakistanis want an attractive portrayal of Jinnah let them pay for their own movie. A friend of mine, highly sophisticated in political matters but innocent about film-making, declared that Gandhi should be preceded by the legend: The following film is a paid political advertisement by the government of India.
Gandhi, then, is a large, pious, historical morality tale centered on a saintly, sanitized Mahatma Gandhi cleansed of anything too embarrassingly Hindu (the word “caste” is not mentioned from one end of the film to the other) and, indeed, of most of the rest of Gandhi’s life, much of which would drastically diminish his saintliness in Western eyes. There is little to indicate that the India of today has followed Gandhi’s precepts in almost anything. There is little, in fact, to indicate that India is even India. The spectator realizes the scene is the Indian subcontinent because there are thousands of extras dressed in dhotis and saris. The characters go about talking in these quaint Peter Sellers accents. We have occasional shots of India’s holy poverty, holy hovels, some landscapes, many of them photographed quite beautifully, for those who like travelogues. We have a character called Lord Mountbatten (India’s last Viceroy); a composite American journalist (assembled from Vincent Sheean, William L. Shirer, Louis Fischer, and straight fiction); a character called simply “Viceroy” (presumably another composite); an assemblage of Gandhi’s Indian followers under the name of one of them (Patel); and of course Nehru.
I sorely missed the fabulous Annie Besant, that English clergyman’s wife, turned atheist, turned Theosophist, turned Indian nationalist, who actually became president of the Indian National Congress and had a terrific falling out with Gandhi, becoming his fierce opponent. And if the producers felt they had to work in a cameo role for an American star to add to the film’s appeal in the United States, it is positively embarrassing that they should have brought in the photographer Margaret Bourke-White, a person of no importance whatever in Gandhi’s life and a role Candice Bergen plays with a repellent unctuousness. If the film-makers had been interested in drama and not hagiography, it is hard to see how they could have resisted the awesome confrontation between Gandhi and, yes, Margaret Sanger. For the two did meet. Now there was a meeting of East and West, and may the better person win! (She did. Margaret Sanger argued her views on birth control with such vigor that Gandhi had a nervous breakdown.)
I cannot honestly say I had any reasonable expectation that the film would show scenes of Gandhi’s pretty teenage girl followers fighting “hysterically” (the word was used) for the honor of sleeping naked with the Mahatma and cuddling the nude septuagenarian in their arms. (Gandhi was “testing” his vow of chastity in order to gain moral strength for his mighty struggle with Jinnah.) When told there was a man named Freud who said that, despite his declared intention, Gandhi might actually be enjoying the caresses of the naked girls, Gandhi continued, unperturbed. Nor, frankly, did I expect to see Gandhi giving daily enemas to all the young girls in his ashrams (his daily greeting was, “Have you had a good bowel movement this morning, sisters?”), nor see the girls giving him his daily enema. Although Gandhi seems to have written less about home rule for India than he did about enemas, and excrement, and latrine cleaning (“The bathroom is a temple. It should be so clean and inviting that anyone would enjoy eating there”), I confess such scenes might pose problems for a Western director.
Gandhi, therefore, the film, this paid political advertisement for the government of India, is organized around three axes: (1) Anti-racism—all men are equal regardless of race, color, creed, etc.; (2) anti-colonialism, which in present terms translates as support for the Third World, including, most eminently, India; (3) nonviolence, presented as an absolutist pacifism. There are other, secondary precepts and subheadings. Gandhi is portrayed as the quintessence of tolerance (“I am a Hindu and a Muslim and a Christian and a Jew”), of basic friendliness to Britain (“The British have been with us for a long time and when they leave we want them to leave as friends”), of devotion to his wife and family. His vow of chastity is represented as something selfless and holy, rather like the celibacy of the Catholic clergy. But, above all, Gandhi’s life and teachings are presented as having great import for us today. We must learn from Gandhi.
I propose to demonstrate that the film grotesquely distorts both Gandhi’s life and character to the point that it is nothing more than a pious fraud, and a fraud of the most egregious kind. Hackneyed Indian falsehoods such as that “the British keep trying to break India up” (as if Britain didn’t give India a unity it had never enjoyed in history), or that the British created Indian poverty (a poverty which had not only existed since time immemorial but had been considered holy), almost pass unnoticed in the tide of adulation for our fictional saint. Gandhi, admittedly, being a devout Hindu, was far more self-contradictory than most public men. Sanskrit scholars tell me that flat self-contradiction is even considered an element of “Sanskrit rhetoric.” Perhaps it is thought to show profundity.
Gandhi rose early, usually at three-thirty, and before his first bowel movement (during which he received visitors, although possibly not Margaret Bourke-White) he spent two hours in meditation, listening to his “inner voice.” Now Gandhi was an extremely vocal individual, and in addition to spending an hour each day in vigorous walking, another hour spinning at his primitive spinning wheel, another hour at further prayers, another hour being massaged nude by teenage girls, and many hours deciding such things as affairs of state, he produced a quite unconscionable number of articles and speeches and wrote an average of sixty letters a day. All considered, it is not really surprising that his inner voice said different things to him at different times. Despising consistency and never checking his earlier statements, and yet inhumanly obstinate about his position at any given moment, Gandhi is thought by some Indians today (according to V.S. Naipaul) to have been so erratic and unpredictable that he may have delayed Indian independence for twenty-five years.
For Gandhi was an extremely difficult man to work with. He had no partners, only disciples. For members of his ashrams, he dictated every minute of their days, and not only every morsel of food they should eat but when they should eat it. Without ever having heard of a protein or a vitamin, he considered himself an expert on diet, as on most things, and was constantly experimenting. Once when he fell ill, he was found to have been living on a diet of ground-nut butter and lemon juice; British doctors called it malnutrition. And Gandhi had even greater confidence in his abilities as a “nature doctor,” prescribing obligatory cures for his ashramites, such as dried cow-dung powder and various concoctions containing cow dung (the cow, of course, being sacred to the Hindu). And to those he really loved he gave enemas—but again, alas, not to Margaret Bourke-White. Which is too bad, really. For admiring Candice Bergen’s work as I do, I would have been most interested in seeing how she would have experienced this beatitude. The scene might have lived in film history.
There are 400 biographies of Gandhi, and his writings run to 80 volumes, and since he lived to be seventy-nine, and rarely fell silent, there are, as I have indicated, quite a few inconsistencies. The authors of the present movie even acknowledge in a little-noticed opening title that they have made a film only true to Gandhi’s “spirit.” For my part, I do not intend to pick through Gandhi’s writings to make him look like Attila the Hun (although the thought is tempting), but to give a fair, weighted balance of his views, laying stress above all on his actions, and on what he told other men to do when the time for action had come.
Anti-racism: the reader will have noticed that in the present-day community of nations South Africa is a pariah. So it is an absolutely amazing piece of good fortune that Gandhi, born the son of the Prime Minister of a tiny Indian principality and received as an attorney at the bar of the Inner Temple in London, should have begun his climb to greatness as a member of the small Indian community in, precisely, South Africa. Natal, then a separate colony, wanted to limit Indian immigration, and the Transvaal, as part of a similar program, ordered Indians to carry identity papers (an action not without similarities to measures under consideration in the U.S. today to control illegal immigration). The film’s lengthy opening sequences are devoted to Gandhi’s leadership in the fight against Indians carrying their identity papers (burning their registration cards), with for good measure Gandhi being expelled from the first-class section of a railway train, and Gandhi being asked by whites to step off the sidewalk. This inspired young Indian leader calls, in the film, for interracial harmony, for people to “live together.”
Now the time is 1893, and Gandhi is a “caste” Hindu, and from one of the higher castes. Although, later, he was to call for improving the lot of India’s Untouchables, he was not to have any serious misgivings about the fundamentals of the caste system for about another thirty years, and even then his doubts, to my way of thinking, were rather minor. In the India in which Gandhi grew up, and had only recently left, some castes could enter the courtyards of certain Hindu temples, while others could not. Some castes were forbidden to use the village well. Others were compelled to live outside the village, still others to leave the road at the approach of a person of higher caste and perpetually to call out, giving warning, so that no one would be polluted by their proximity. The endless intricacies of Hindu caste by-laws varied somewhat region by region, but in Madras, where most South African Indians were from, while a Nayar could pollute a man of higher caste only by touching him, Kammalans polluted at a distance of 24 feet, toddy drawers at 36 feet, Pulayans and Cherumans at 48 feet, and beef-eating Paraiyans at 64 feet. All castes and the thousands of sub-castes were forbidden, needless to say, to marry, eat, or engage in social activity with any but members of their own group. In Gandhi’s native Gujarat a caste Hindu who had been polluted by touch had to perform extensive ritual ablutions or purify himself by drinking a holy beverage composed of milk, whey, and (what else?) cow dung.
Low-caste Hindus, in short, suffered humiliations in their native India compared to which the carrying of identity cards in South Africa was almost trivial. In fact, Gandhi, to his credit, was to campaign strenuously in his later life for the reduction of caste barriers in India—a campaign almost invisible in the movie, of course, conveyed in only two glancing references, leaving the audience with the officially sponsored if historically astonishing notion that racism was introduced into India by the British. To present the Gandhi of 1893, a conventional caste Hindu, fresh from caste-ridden India where a Paraiyan could pollute at 64 feet, as the champion of interracial egalitarianism is one of the most brazen hypocrisies I have ever encountered in a serious movie.
The film, moreover, does not give the slightest hint as to Gandhi’s attitude toward blacks, and the viewers of Gandhi would naturally suppose that, since the future Great Soul opposed South African discrimination against Indians, he would also oppose South African discrimination against black people. But this is not so. While Gandhi, in South Africa, fought furiously to have Indians recognized as loyal subjects of the British empire, and to have them enjoy the full rights of Englishmen, he had no concern for blacks whatever. In fact, during one of the “Kaffir Wars” he volunteered to organize a brigade of Indians to put down a Zulu rising, and was decorated himself for valor under fire.
For, yes, Gandhi (Sergeant-Major Gandhi) was awarded Victoria’s coveted War Medal. Throughout most of his life Gandhi had the most inordinate admiration for British soldiers, their sense of duty, their discipline and stoicism in defeat (a trait he emulated himself). He marveled that they retreated with heads high, like victors. There was even a time in his life when Gandhi, hardly to be distinguished from Kipling’s Gunga Din, wanted nothing so much as to be a Soldier of the Queen. Since this is not in keeping with the “spirit” of Gandhi, as decided by Pandit Nehru and Indira Gandhi, it is naturally omitted from the movie.
Anti-colonialism: as almost always with historical films, even those more honest than Gandhi, the historical personage on whom the movie is based is not only more complex but more interesting than the character shown on the screen. During his entire South African period, and for some time after, until he was about fifty, Gandhi was nothing more or less than an imperial loyalist, claiming for Indians the rights of Englishmen but unshakably loyal to the crown. He supported the empire ardently in no fewer than three wars: the Boer War, the “Kaffir War,” and, with the most extreme zeal, World War I. If Gandhi’s mind were of the modern European sort, this would seem to suggest that his later attitude toward Britain was the product of unrequited love: he had wanted to be an Englishman; Britain had rejected him and his people; very well then, they would have their own country. But this would imply a point of “agonizing reappraisal,” a moment when Gandhi’s most fundamental political beliefs were reexamined and, after the most bitter soul-searching, repudiated. But I have studied the literature and cannot find this moment of bitter soul-searching. Instead, listening to his “inner voice” (which in the case of divines of all countries often speaks in the tones of holy opportunism), Gandhi simply, tranquilly, without announcing any sharp break, set off in a new direction.
It should be understood that it is unlikely Gandhi ever truly conceived of “becoming” an Englishman, first, because he was a Hindu to the marrow of his bones, and also, perhaps, because his democratic instincts were really quite weak. He was a man of the most extreme, autocratic temperament, tyrannical, unyielding even regarding things he knew nothing about, totally intolerant of all opinions but his own. He was, furthermore, in the highest degree reactionary, permitting in India no change in the relationship between the feudal lord and his peasants or servants, the rich and the poor. In his The Life and Death of Mahatma Gandhi, the best and least hagiographic of the full-length studies, Robert Payne, although admiring Gandhi greatly, explains Gandhi’s “new direction” on his return to India from South Africa as follows:
He spoke in generalities, but he was searching for a single cause, a single hard-edged task to which he would devote the remaining years of his life. He wanted to repeat his triumph in South Africa on Indian soil. He dreamed of assembling a small army of dedicated men around him, issuing stern commands and leading them to some almost unobtainable goal.
Gandhi, in short, was a leader looking for a cause. He found it, of course, in home rule for India and, ultimately, in independence.
We are therefore presented with the seeming anomaly of a Gandhi who, in Britain when war broke out in August 1914, instantly contacted the War Office, swore that he would stand by England in its hour of need, and created the Indian Volunteer Corps, which he might have commanded if he hadn’t fallen ill with pleurisy. In 1915, back in India, he made a memorable speech in Madras in which he proclaimed, “I discovered that the British empire had certain ideals with which I have fallen in love. . . .” In early 1918, as the war in Europe entered its final crisis, he wrote to the Viceroy of India, “I have an idea that if I become your recruiting agent-in-chief, I might rain men upon you,” and he proclaimed in a speech in Kheda that the British “love justice; they have shielded men against oppression.” Again, he wrote to the Viceroy, “I would make India offer all her able-bodied sons as a sacrifice to the empire at this critical moment. . . .” To some of his pacifist friends, who were horrified, Gandhi replied by appealing to the Bhagavad Gita and to the endless wars recounted in the Hindu epics, the Ramayana and the Mahabharata, adding further to the pacifists’ horror by declaring that Indians “have always been warlike, and the finest hymn composed by Tulsidas in praise of Rama gives the first place to his ability to strike down the enemy.”
This was in contradiction to the interpretation of sacred Hindu scriptures Gandhi had offered on earlier occasions (and would offer later), which was that they did not recount military struggles but spiritual struggles; but, unusual for him, he strove to find some kind of synthesis. “I do not say, ‘Let us go and kill the Germans,’” Gandhi explained. “I say, ‘Let us go and die for the sake of India and the empire.’” And yet within two years, the time having come for swaraj (home rule), Gandhi’s inner voice spoke again, and, the leader having found his cause, Gandhi proclaimed resoundingly: “The British empire today represents Satanism, and they who love God can afford to have no love for Satan.”
The idea of swaraj, originated by others, crept into Gandhi’s mind gradually. With a fair amount of winding about, Gandhi, roughly, passed through three phases. First, he was entirely pro-British, and merely wanted for Indians the rights of Englishmen (as he understood them). Second, he was still pro-British, but with the belief that, having proved their loyalty to the empire, Indians would be granted some degree of swaraj. Third, as the home-rule movement gathered momentum, it was the swaraj, the whole swaraj, and nothing but the swaraj, and he turned relentlessly against the crown. The movie to the contrary, he caused the British no end of trouble in their struggles during World War II.
But it should not be thought for one second that Gandhi’s finally full-blown desire to detach India from the British empire gave him the slightest sympathy with other colonial peoples pursuing similar objectives. Throughout his entire life Gandhi displayed the most spectacular inability to understand or even really take in people unlike himself—a trait which V.S. Naipaul considers specifically Hindu, and I am inclined to agree. Just as Gandhi had been totally unconcerned with the situation of South Africa’s blacks (he hardly noticed they were there until they rebelled), so now he was totally unconcerned with other Asians or Africans. In fact, he was adamantly opposed to certain Arab movements within the Ottoman empire for reasons of internal Indian politics.
At the close of World War I, the Muslims of India were deeply absorbed in what they called the “Khilafat” movement—“Khilafat” being their corruption of “Caliphate,” the Caliph in question being the Ottoman Sultan. In addition to his temporal powers, the Sultan of the Ottoman empire held the spiritual position of Caliph, supreme leader of the world’s Muslims and successor to the Prophet Muhammad. At the defeat of the Central Powers (Germany, Austria, Turkey), the Sultan was a prisoner in his palace in Constantinople, shorn of his religious as well as his political authority, and the Muslims of India were incensed. It so happened that the former subject peoples of the Ottoman empire, principally Arabs, were perfectly happy to be rid of this Caliph, and even the Turks were glad to be rid of him, but this made no impression at all on the Muslims of India, for whom the issue was essentially a club with which to beat the British. Until this odd historical moment, Indian Muslims had felt little real allegiance to the Ottoman Sultan either, but now that he had fallen, the British had done it! The British had taken away their Khilafat! And one of the most ardent supporters of this Indian Muslim movement was the new Hindu leader, Gandhi.
No one questions that the formative period for Gandhi as a political leader was his time in South Africa. Throughout history Indians, divided into 1,500 language and dialect groups (India today has 15 official languages), had little sense of themselves as a nation. Muslim Indians and Hindu Indians felt about as close as Christians and Moors during their 700 years of cohabitation in Spain. In addition to which, the Hindus were divided into thousands of castes and sub-castes, and there were also Parsees, Sikhs, Jains. But in South Africa officials had thrown them all in together, and in the mind of Gandhi (another one of those examples of nationalism being born in exile) grew the idea of India as a nation, and Muslim-Hindu friendship became one of the few positions on which he never really reversed himself. So Gandhi—ignoring Arabs and Turks—became an adamant supporter of the Khilafat movement out of strident Indian nationalism. He had become a national figure in India for having unified 13,000 Indians of all faiths in South Africa, and now he was determined to reach new heights by unifying hundreds of millions of Indians of all faiths in India itself. But this nationalism did not please everyone, particularly Tolstoy, who in his last years carried on a curious correspondence with the new Indian leader. For Tolstoy, Gandhi’s Indian nationalism “spoils everything.”
As for the “anti-colonialism” of the nationalist Indian state since independence, Indira Gandhi, India’s present Prime Minister, hears an inner voice of her own, it would appear, and this inner voice told her to justify the Soviet invasion of Afghanistan as produced by provocative maneuvers on the part of the U.S. and China, as well as to be the first country outside the Soviet bloc to recognize the Hanoi puppet regime in Cambodia. So everything plainly depends on who is colonizing whom, and Mrs. Gandhi’s voice perhaps tells her that the subjection of Afghanistan and Cambodia to foreign rule is “defensive” colonialism. And the movie’s message that Mahatma Gandhi, and by plain implication India (the country for which he plays the role of Joan of Arc), have taken a holy, unchanging stance against the colonization of nation by nation is just another of its hypocrisies. For India, when it comes to colonialism or anti-colonialism, it has been Realpolitik all the way.
Nonviolence: but the real center and raison d’être of Gandhi is ahimsa, nonviolence, which principle, when incorporated into vast campaigns of noncooperation with British rule, the Mahatma called by an odd name he made up himself: satyagraha, which means something like “truth-striving.” During the key part of his life, Gandhi devoted a great deal of time to explaining the moral and philosophical meanings of both ahimsa and satyagraha. But much as the film sanitizes Gandhi to the point where one would mistake him for a Christian saint, and sanitizes India to the point where one would take it for Shangri-la, it quite sweeps away Gandhi’s ethical and religious ponderings, his complexities, his qualifications, and certainly his vacillations, which simplifying process leaves us with our old European friend: pacifism. It is true that Gandhi was much impressed by the Sermon on the Mount, his favorite passage in the Bible, which he read over and over again. But for all the Sermon’s inspirational value, and its service as an ideal in relations among individual human beings, no Christian state which survived has ever based its policies on the Sermon on the Mount since Christianity became the official religion of the Roman empire. And no modern Western state which survives can ever base its policies on pacifism. And no Hindu state will ever base its policies on ahimsa. Gandhi himself—although the film dishonestly conceals this from us—many times conceded that in dire circumstances “war may have to be resorted to as a necessary evil.”
It is something of an anomaly that Gandhi, held in popular myth to be a pure pacifist (a myth which governments of India have always been at great pains to sustain in the belief that it will reflect credit on India itself, and to which the present movie adheres slavishly), was until fifty not ill-disposed to war at all. As I have already noted, in three wars, no sooner had the bugles sounded than Gandhi not only gave his support, but was clamoring for arms. To form new regiments! To fight! To destroy the enemies of the empire! Regular Indian army units fought in both the Boer War and World War I, but this was not enough for Gandhi. He wanted to raise new troops, even, in the case of the Boer and Kaffir Wars, from the tiny Indian colony in South Africa. British military authorities thought it not really worth the trouble to train such a small body of Indians as soldiers, and were even resistant to training them as an auxiliary medical corps (“stretcher bearers”), but finally yielded to Gandhi’s relentless importuning. As first instructed, the Indian Volunteer Corps was not supposed actually to go into combat, but Gandhi, adamant, led his Indian volunteers into the thick of battle. When the British commanding officer was mortally wounded during an engagement in the Kaffir War, Gandhi—though his corps’ deputy commander—carried the officer’s stretcher himself from the battlefield and for miles over the sun-baked veldt. The British empire’s War Medal did not have its name for nothing, and it was generally earned.
Anyone who wants to wade through Gandhi’s endless ruminations about himsa and ahimsa (violence and nonviolence) is welcome to do so, but it is impossible for the skeptical reader to avoid the conclusion—let us say in 1920, when swaraj (home rule) was all the rage and Gandhi’s inner voice started telling him that ahimsa was the thing—that this inner voice knew what it was talking about. By this I mean that, though Gandhi talked with the tongue of Hindu gods and sacred scriptures, his inner voice had a strong sense of expediency. Britain, if only comparatively speaking, was a moral nation, and nonviolent civil disobedience was plainly the best and most effective way of achieving Indian independence. Skeptics might also not be surprised to learn that as independence approached, Gandhi’s inner voice began to change its tune. It has been reported that Gandhi “half-welcomed” the civil war that broke out in the last days. Even a fratricidal “bloodbath” (Gandhi’s word) would be preferable to continued British rule.
And suddenly Gandhi began endorsing violence left, right, and center. During the fearsome rioting in Calcutta he gave his approval to men “using violence in a moral cause.” How could he tell them that violence was wrong, he asked, “unless I demonstrate that nonviolence is more effective?” He blessed the Nawab of Maler Kotla when he gave orders to shoot ten Muslims for every Hindu killed in his state. He sang the praises of Subhas Chandra Bose, who, sponsored first by the Nazis and then by the Japanese, organized in Singapore an Indian National Army with which he hoped to conquer India, establishing a totalitarian dictatorship. Meanwhile, after independence in 1947, the armies of the India that Gandhi had created immediately marched into battle, incorporating the state of Hyderabad by force and making war in Kashmir on secessionist Pakistan. When Gandhi was assassinated by a Hindu extremist in January 1948 he was honored by the new state with a vast military funeral—in my view by no means inapposite.
But it is not widely realized (nor will this film tell you) how much violence was associated with Gandhi’s so-called “nonviolent” movement from the very beginning. India’s Nobel Prize-winning poet, Rabindranath Tagore, had sensed a strong current of nihilism in Gandhi almost from his first days, and as early as 1920 wrote of Gandhi’s “fierce joy of annihilation,” which Tagore feared would lead India into hideous orgies of devastation—which ultimately proved to be the case. Robert Payne has said that there was unquestionably an “unhealthy atmosphere” among many of Gandhi’s fanatic followers, and that Gandhi’s habit of going to the edge of violence and then suddenly retreating was fraught with danger. “In matters of conscience I am uncompromising,” proclaimed Gandhi proudly. “Nobody can make me yield.” The judgment of Tagore was categorical. Much as he might revere Gandhi as a holy man, he quite detested him as a politician and considered that his campaigns were almost always so close to violence that it was utterly disingenuous to call them nonviolent.
For every satyagraha true believer, moreover, sworn not to harm the adversary or even to lift a finger in his own defense, there were sometimes thousands of incensed freebooters and skirmishers bound by no such vow. Gandhi, to be fair, was aware of this, and nominally deplored it—but with nothing like the consistency shown in the movie. The film leads the audience to believe that Gandhi’s first “fast unto death,” for example, was in protest against an act of barbarous violence, the slaughter by an Indian crowd of a detachment of police constables. But in actual fact Gandhi reserved this “ultimate weapon” of his to interdict a 1931 British proposal to grant Untouchables a “separate electorate” in the Indian national legislature—in effect a kind of affirmative-action program for Untouchables. For reasons I have not been able to decrypt, Gandhi was dead set against the project, but I confess it is another scene I would like to have seen in the movie: Gandhi almost starving himself to death to block affirmative action for Untouchables.
From what I have been able to decipher, Gandhi’s main preoccupation in this particular struggle was not even the British. Benefiting from the immense publicity, he wanted to induce Hindus, overnight, ecstatically, and without any of these British legalisms, to “open their hearts” to Untouchables. For a whole week Hindu India was caught up in a joyous delirium. No more would the Untouchables be scavengers and sweepers! No more would they be banned from Hindu temples! No more would they pollute at 64 feet! It lasted just a week. Then the temple doors swung shut again, and all was as before. Meanwhile, on the passionate subject of swaraj, Gandhi was crying, “I would not flinch from sacrificing a million lives for India’s liberty!” The million Indian lives were indeed sacrificed, and in full. They fell, however, not to the bullets of British soldiers but to the knives and clubs of their fellow Indians in savage butcheries when the British finally withdrew.
Although the movie sneers at this reasoning as being the flimsiest of pretexts, I cannot imagine an impartial person studying the subject without concluding that concern for Indian religious minorities was one of the principal reasons Britain stayed in India as long as it did. When it finally withdrew, blood-maddened mobs surged through the streets from one end of India to the other, the majority group in each area, Hindu or Muslim, slaughtering the defenseless minority without mercy in one of the most hideous periods of carnage of modern history.
A comparison is in order. At the famous Amritsar massacre of 1919, shot in elaborate and loving detail in the present movie and treated by post-independence Indian historians as if it were Auschwitz, Gurkha troops under the command of a British officer, General Dyer, fired into an unarmed crowd of Indians defying a ban and demonstrating for Indian independence. The crowd contained women and children; 379 persons died; it was all quite horrible. Dyer was censured and removed from his command, but the incident lay heavily on British consciences for the next three decades, producing a severe inhibiting effect. Never again would the British empire commit another Amritsar, anywhere.
As soon as the oppressive British were gone, however, the Indians—gentle, tolerant people that they are—gave themselves over to an orgy of bloodletting. Trained troops did not pick off targets at a distance with Enfield rifles. Blood-crazed Hindus, or Muslims, ran through the streets with knives, beheading babies, stabbing women and old people. Interestingly, our movie shows none of this on camera (the oldest way of stacking the deck in Hollywood). All we see is the aged Gandhi, grieving, and of course fasting, at these terrible reports of riots. And, naturally, the film doesn’t whisper a clue as to the total number of dead, which might spoil the mood somehow. The fact is that we will never know how many Indians were murdered by other Indians during the country’s Independence Massacres, but almost all serious studies place the figure over a million, and some, such as Payne’s sources, go to 4 million. So, for those who like round numbers: the British killed some 400 seditious colonials at Amritsar, and the name Amritsar lives in infamy, while Indians may have killed some 4 million of their own countrymen for no other reason than that they were of a different religious faith. And yet people think their great leader would make an inspirational subject for a movie. Ahimsa, as can be seen, had an absolutely tremendous moral effect when used against Britain, but not only would it not have worked against Nazi Germany (the most obvious reproach, and of course quite true), but, the crowning irony, it had virtually no effect whatever when Gandhi tried to bring it into play against violent Indians.
Despite this at best patchy record, the film-makers have gone to great lengths to imply that this same principle of ahimsa—presented in the movie as the purest form of pacifism—is universally effective, yesterday, today, here, there, everywhere. We hear no talk from Gandhi of war sometimes being a “necessary evil,” but only him announcing—and more than once—“An eye for an eye makes the whole world blind.” In a scene very near the end of the movie, we hear Gandhi say, as if after deep reflection: “Tyrants and murderers can seem invincible at the time, but in the end they always fall. Think of it. Always.” During the last scene of the movie, following the assassination, Margaret Bourke-White is keening over the death of the Great Soul with an English admiral’s daughter named Madeleine Slade, in whose bowel movements Gandhi took the deepest interest (see their correspondence), and Miss Slade remarks incredulously that Gandhi felt that he had failed. They are then both incredulous for a moment, after which Miss Slade observes mournfully, “When we most needed it [presumably meaning during World War II], he offered the world a way out of madness. But the world didn’t see it.” Then we hear once again the assassin’s shots, Gandhi’s “Oh, God,” and last, in case we missed them the first time, Gandhi’s words (over the shimmering waters of the Ganges?): “Tyrants and murderers can seem invincible at the time, but in the end they always fall. Think of it. Always.” This is the end of the picture.
Now, as it happens, I have been thinking about tyrants and murderers for some time. But the fact that in the end they always fall has never given me much comfort, partly because, not being a Hindu and not expecting reincarnation after reincarnation, I am simply not prepared to wait them out. It always occurs to me that, while I am waiting around for them to fall, they might do something mean to me, like fling me into a gas oven or send me off to a Gulag. Also not being a Hindu, and hence not worshipping stasis, I am given to wondering who is to bring these murderers and tyrants down, it being all too risky a process to wait for them and the regimes they establish simply to die of old age. The fact that a few reincarnations from now they will all have turned to dust somehow does not seem to suggest a rational strategy for dealing with the problem.
Since the movie’s Madeleine Slade specifically invites us to revere the “way out of madness” that Gandhi offered the world at the time of World War II, I am under the embarrassing obligation of recording exactly what courses of action the Great Soul recommended to the various parties involved in that crisis. For Gandhi was never stinting in his advice. Indeed, the less he knew about a subject, the less he stinted.
I am aware that for many not privileged to have visited the former British Raj, the names Gujarat, Rajasthan, and Deccan are simply words. But other names, such as Germany, Poland, Czechoslovakia, somehow have a harder profile. The term “Jew,” also, has a reasonably hard profile, and I feel all Jews sitting emotionally at the movie Gandhi should be apprised of the advice that the Mahatma offered their coreligionists when faced with the Nazi peril: they should commit collective suicide. If only the Jews of Germany had had the good sense to offer their throats willingly to the Nazi butchers’ knives and to throw themselves into the sea from cliffs, Gandhi was convinced, they would have aroused world public opinion, and their moral triumph would be remembered for “ages to come.” If they would only pray for Hitler (as their throats were cut, presumably), they would leave a “rich heritage to mankind.” Although Gandhi had known Jews from his earliest days in South Africa—where his three staunchest white supporters were Jews, every one—he disapproved of how rarely they loved their enemies. And he never repented of his recommendation of collective suicide. Even after the war, when the full extent of the Holocaust was revealed, Gandhi told Louis Fischer, one of his biographers, that the Jews died anyway, didn’t they? They might as well have died significantly.
Gandhi’s views on the European crisis were not entirely consistent. He vigorously opposed Munich, distrusting Chamberlain. “Europe has sold her soul for the sake of a seven days’ earthly existence,” he declared. “The peace that Europe gained at Munich is a triumph of violence.” But when the Germans moved into the Bohemian heartland, he was back to urging nonviolent resistance, exhorting the Czechs to go forth, unarmed, against the Wehrmacht, perishing gloriously—collective suicide again. He had Madeleine Slade draw up two letters to President Edvard Beneš of Czechoslovakia, instructing him on the proper conduct of Czechoslovak satyagrahi when facing the Nazis.
When Hitler attacked Poland, however, Gandhi suddenly endorsed the Polish army’s military resistance, calling it “almost nonviolent.” (If this sounds like double-talk, I can only urge readers to read Gandhi.) He seemed at this point to have a rather low opinion of Hitler, but when Germany’s panzer divisions turned west, Allied armies collapsed under the ferocious onslaught, and British ships were streaming across the Straits of Dover from Dunkirk, he wrote furiously to the Viceroy of India: “This manslaughter must be stopped. You are losing; if you persist, it will only result in greater bloodshed. Hitler is not a bad man. . . .”
Gandhi also wrote an open letter to the British people, passionately urging them to surrender and accept whatever fate Hitler had prepared for them. “Let them take possession of your beautiful island with your many beautiful buildings. You will give all these, but neither your souls, nor your minds.” Since none of this had the intended effect, Gandhi, the following year, addressed an open letter to the prince of darkness himself, Adolf Hitler.
The scene must be pictured. In late December 1941, Hitler stood at the pinnacle of his might. His armies, undefeated—anywhere—ruled Europe from the English Channel to the Volga. Rommel had entered Egypt. The Japanese had reached Singapore. The U.S. Pacific Fleet lay at the bottom of Pearl Harbor. At this superbly chosen moment, Mahatma Gandhi attempted to convert Adolf Hitler to the ways of nonviolence. “Dear Friend,” the letter begins, and proceeds to a heartfelt appeal to the Führer to embrace all mankind “irrespective of race, color, or creed.” Every admirer of the film Gandhi should be compelled to read this letter. Surprisingly, it is not known to have had any deep impact on Hitler. Gandhi was no doubt disappointed. He moped about, really quite depressed, but still knew he was right. When the Japanese, having cut their way through Burma, threatened India, Gandhi’s strategy was to let them occupy as much of India as they liked and then to “make them feel unwanted.” His way of helping his British “friends” was, at one of the worst points of the war, to launch massive civil-disobedience campaigns against them, paralyzing some of their efforts to defend India from the Japanese.
Here, then, is your leader, O followers of Gandhi: a man who thought Hitler’s heart would be melted by an appeal to forget race, color, and creed, and who was sure the feelings of the Japanese would be hurt if they sensed themselves unwanted. As world-class statesmen go, it is not a very good record. Madeleine Slade was right, I suppose. The world certainly didn’t listen to Gandhi. Nor, for that matter, has the modern government of India listened to Gandhi. Although all Indian politicians of all political parties claim to be Gandhians, India has blithely fought three wars against Pakistan, one against China, and even invaded and seized tiny, helpless Goa, and all without a whisper of a shadow of a thought of ahimsa. And of course India now has atomic weapons, a satyagraha technique if ever there was one.
I am sure that almost everyone who sees the movie Gandhi is aware that, from a religious point of view, the Mahatma was something called a “Hindu”—but I do not think one in a thousand has the dimmest notion of the fundamental beliefs of the Hindu religion. The simplest example is Gandhi’s use of the word “God,” which, for members of the great Western religions—Christianity, Judaism, and Islam, all interrelated—means a personal god, a godhead. But when Gandhi said “God” in speaking English, he was merely translating from Gujarati or Hindi, and from the Hindu culture. Gandhi, in fact, simply did not believe in a personal God, and wrote in so many words, “God is not a person . . . but a force; the undefinable mysterious Power that pervades everything; a living Power that is Love. . . .” And Gandhi’s very favorite definition of God, repeated many thousands of times, was, “God is Truth,” which reduces God to some kind of abstract principle.
Like all Hindus, Gandhi also believed in the “Great Oneness,” according to which everything is part of God, meaning not just you and me and everyone else, but every living creature, every dead creature, every plant, the pitcher of milk, the milk in the pitcher, the tumbler into which the milk is poured. . . . After all of which, he could suddenly pop up with a declaration that God is “the Maker, the Law-Giver, a jealous Lord,” phrases he had probably picked up in the Bible and, with Hindu fluidity, felt he could throw in so as to embrace even more of the Great Oneness. So when Gandhi said, “I am a Hindu and a Muslim and a Christian and a Jew,” it was (from a Western standpoint) Hindu double-talk. Hindu holy men, some of them reformers like Gandhi, have actually even “converted” to Islam, then Christianity, or whatever, to worship different “aspects” of the Great Oneness, before reconverting to Hinduism. Now for Christians, fastidious in matters of doctrine, a man who converts to Islam is an apostate (or vice versa), but a Hindu is a Hindu is a Hindu. The better to experience the Great Oneness, many Hindu holy men feel they should be women as well as men, and one quite famous one even claimed he could menstruate (I will spare the reader the details).
In this ecumenical age, it is extremely hard to shake Westerners loose from the notion that the devout of all religions, after all, worship “the one God.” But Gandhi did not worship the one God. He did not worship the God of mercy. He did not worship the God of forgiveness. And this for the simple reason that the concepts of mercy and forgiveness are absent from Hinduism. In Hinduism, men do not pray to God for forgiveness, and a man’s sins are never forgiven—indeed, there is no one out there to do the forgiving. In your next life you may be born someone higher up the caste scale, but in this life there is no hope. For Gandhi, a true Hindu, did not believe in man’s immortal soul. He believed with every ounce of his being in karma, a series, perhaps a long series, of reincarnations, and at the end, with great good fortune: mukti, liberation from suffering and the necessity of rebirth, nothingness. Gandhi once wrote to Tolstoy (of all people) that reincarnation explained “reasonably the many mysteries of life.” So if Hindus today still treat an Untouchable as barely human, this is thought to be perfectly right and fitting because of his actions in earlier lives. As can be seen, Hinduism, by its very theology, with its sacred triad of karma, reincarnation, and caste (with caste an absolutely indispensable part of the system) offers the most complacent justification of inhumanity of any of the world’s great religious faiths.
Gandhi, needless to say, was a Hindu reformer, one of many. Until well into his fifties, however, he accepted the caste system in toto as the “natural order of society,” promoting control and discipline and sanctioned by his religion. Later, in bursts of zeal, he favored moderating it in a number of ways. But he stuck by the basic varna system (the four main caste groupings plus the Untouchables) until the end of his days, insisting that a man’s position and occupation should be determined essentially by birth. Gandhi favored milder treatment of Untouchables, renaming them Harijans, “children of God,” but a Harijan was still a Harijan. Perhaps because his frenzies of compassion were so extreme (no, no, he would clean the Harijan’s latrine), Hindu reverence for him as a holy man became immense, but his prescriptions were rarely followed. Industrialization and modernization have introduced new occupations and sizable social and political changes in India, but the caste system has dexterously adapted and remains largely intact today. The Sudras still labor. The sweepers still sweep. Max Weber, in his The Religion of India, after quoting the last line of the Communist Manifesto, suggests somewhat sardonically that low-caste Hindus, too, have “nothing to lose but their chains,” that they, too, have “a world to win”—the only problem being that they have to die first and get born again, higher, it is to be hoped, in the immutable system of caste. Hinduism in general, wrote Weber, “is characterized by a dread of the magical evil of innovation.” Its very essence is to guarantee stasis.
In addition to its literally thousands of castes and sub-castes, Hinduism has countless sects, with discordant rites and beliefs. It has no clear ecclesiastical organization and no universal body of doctrine. What I have described above is your standard, no-frills Hindu, of which in many ways Gandhi was an excellent example. With the reader’s permission I will skip over the Upanishads, Vedanta, Yoga, the Puranas, Tantra, Bhakti, the Bhagavad-Gita (which contains theistic elements), Brahma, Vishnu, Shiva, and the terrible Kali or Durga, to concentrate on those central beliefs that most motivated Gandhi’s behavior as a public figure.
It should be plain by now that there is much in the Hindu culture that is distasteful to the Western mind, and consequently is largely unknown in the West—not because Hindus do not go on and on about these subjects, but because a Western squeamishness usually prevents these preoccupations from reaching print (not to mention film). When Gandhi attended his first Indian National Congress he was most distressed at seeing the Hindus—not laborers but high-caste Hindus, civic leaders—defecating all over the place, as if to pay attention to where the feces fell was somehow unclean. (For, as V.S. Naipaul puts it, in a twisted Hindu way it is unclean to clean. It is unclean even to notice. “It was the business of the sweepers to remove excrement, and until the sweepers came, people were content to live in the midst of their own excrement.”) Gandhi exhorted Indians endlessly on the subject, saying that sanitation was the first need of India, but he retained an obvious obsession with excreta, gleefully designing latrines and latrine drills for all hands at the ashram, and, all in all, what with giving and taking enemas, and his public bowel movements, and his deep concern with everyone else’s bowel movements (much correspondence), and endless dietary experiments as a function of bowel movements, he devoted a rather large share of his life to the matter. Despite his constant campaigning for sanitation, it is hard to believe that Gandhi was not permanently marked by what Arthur Koestler terms the Hindu “morbid infatuation with filth,” and what V.S. Naipaul goes as far as to call the Indian “deification of filth.” (Decades later, Krishna Menon, a Gandhian and one-time Indian Defense Minister, was still fortifying his sanctity by drinking a daily glass of urine.)
But even more important, because it is dealt with in the movie directly—if of course dishonestly—is Gandhi’s parallel obsession with brahmacharya, or sexual chastity. There is a scene late in the film in which Margaret Bourke-White (again!) asks Gandhi’s wife if he has ever broken his vow of chastity, taken, at that time, about forty years before. Gandhi’s wife, by now a sweet old lady, answers wistfully, with a pathetic little note of hope, “Not yet.” What lies behind this adorable scene is the following: Gandhi held as one of his most profound beliefs (a fundamental doctrine of Hindu medicine) that a man, as a matter of the utmost importance, must conserve his bindu, or seminal fluid. Koestler (in The Lotus and the Robot) gives a succinct account of this belief, widespread among orthodox Hindus: “A man’s vital energy is concentrated in his seminal fluid, and this is stored in a cavity in the skull. It is the most precious substance in the body . . . an elixir of life both in the physical and mystical sense, distilled from the blood. . . . A large store of bindu of pure quality guarantees health, longevity, and supernatural powers. . . . Conversely, every loss of it is a physical and spiritual impoverishment.” Gandhi himself said in so many words, “A man who is unchaste loses stamina, becomes emasculated and cowardly, while in the chaste man secretions [semen] are sublimated into a vital force pervading his whole being.” And again, still Gandhi: “Ability to retain and assimilate the vital liquid is a matter of long training. When properly conserved it is transmuted into matchless energy and strength.” Most male Hindus go ahead and have sexual relations anyway, of course, but the belief in the value of bindu leaves the whole culture in what many observers have called a permanent state of “semen anxiety.” When Gandhi once had a nocturnal emission he almost had a nervous breakdown.
Gandhi was a truly fanatical opponent of sex for pleasure, and worked it out carefully that a married couple should be allowed to have sex three or four times in a lifetime, merely to have children, and favored embodying this restriction in the law of the land. The sexual-gratification wing of the present-day feminist movement would find little to attract it in Gandhi’s doctrine, since in all his seventy-eight years it never crossed his mind once that there could be anything enjoyable in sex for women, and he was constantly enjoining Indian women to deny themselves to men, to refuse to let their husbands “abuse” them. Gandhi had been married at thirteen, and when he took his vow of chastity, after twenty-four years of sexual activity, he ordered his two oldest sons, both young men, to be totally chaste as well.
But Gandhi’s monstrous behavior to his own family is notorious. He denied his sons an education, to which he was bitterly hostile. His wife remained illiterate. Once when she was very sick, hemorrhaging badly, and seemed to be dying, he wrote to her icily from jail: “My struggle is not merely political. It is religious and therefore quite pure. It does not matter much whether one dies in it or lives. I hope and expect that you will also think likewise and not be unhappy.” To die, that is. On another occasion he wrote of her: “I simply cannot bear to look at Ba’s face. The expression is often like that on the face of a meek cow and gives one the feeling, as a cow occasionally does, that in her own dumb manner she is saying something. I see, too, that there is selfishness in this suffering of hers. . . .” And in the end he let her die, as I have said, rather than allow British doctors to give her a shot of penicillin (while his inner voice told him that it would be all right for him to take quinine). He disowned his oldest son, Harilal, for wishing to marry. He banished his second son for giving his struggling older brother a small sum of money. Harilal grew quite wild with rage against his father, attacked him in print, converted to Islam, took to women and drink, and died an alcoholic in 1948. The Mahatma attacked him right back in his pious way, proclaiming modestly in an open letter in Young India, “Men may be good, not necessarily their children.”
If the reader thinks I have delivered unduly harsh judgments on India and Hindu civilization, I can refer him to An Area of Darkness and India: A Wounded Civilization, two quite brilliant books on India by V.S. Naipaul, a Hindu and a Brahmin born in Trinidad. In the second, the more discursive, Naipaul writes that India “has little to offer the world except its Gandhian concept of holy poverty and the recurring crooked comedy of its holy men, and . . . is now dependent in every practical way on other, imperfectly understood civilizations.”
Hinduism, Naipaul writes, “has given men no idea of a contract with other men, no idea of the state. It has enslaved one quarter of the population [the Untouchables] and always has left the whole fragmented and vulnerable. Its philosophy of withdrawal has diminished men intellectually and not equipped them to respond to challenge; it has stifled growth. So that again and again in India history has repeated itself: vulnerability, defeat, withdrawal.” Indians, Naipaul says, have no historical notion of the past. “Through centuries of conquest the civilization declined into an apparatus for survival, turning away from the mind . . . and creativity . . . stripping itself down, like all decaying civilizations, to its magical practices and imprisoning social forms.” He adds later, “No government can survive on Gandhian fantasy; and the spirituality, the solace of a conquered people, which Gandhi turned into a form of national assertion, has soured more obviously into the nihilism that it always was.” Naipaul condemns India again and again for its “intellectual parasitism,” its “intellectual vacuum,” its “emptiness,” the “blankness of its decayed civilization.” “Indian poverty is more dehumanizing than any machine; and, more than in any machine civilization, men in India are units, locked up in the straitest obedience by their idea of their dharma. . . . The blight of caste is not only untouchability and the consequent deification in India of filth; the blight, in an India that tries to grow, is also the overall obedience it imposes, . . . the diminishing of adventurousness, the pushing away from men of individuality and the possibility of excellence.”
Although Naipaul blames Gandhi as well as India itself for the country’s failure to develop an “ideology” adequate for the modern world, he grants him one or two magnificent moments—always, it should be noted, when responding to “other civilizations.” For Gandhi, Naipaul remarks pointedly, had matured in alien societies: Britain and South Africa. With age, back in India, he seemed from his autobiography to be headed for “lunacy,” says Naipaul, and was only rescued by external events, his reactions to which were determined in part by “his experience of the democratic ways of South Africa” [my emphasis]. For it is one of the enduring ironies of Gandhi’s story that it was in South Africa—South Africa—a country in which he became far more deeply involved than he had been in Britain, that Gandhi caught a warped glimmer of that strange institution of which he would never have seen even a reflection within Hindu society: democracy.
Another of Gandhi’s most powerful obsessions (to which the movie alludes in such a syrupy and misleading manner that it would be quite impossible for the audience to understand it) was his visceral hatred of the modern, industrial world. He even said, more than once, that he actually wouldn’t mind if the British remained in India, to police it, conduct foreign policy, and such trivia, if they would only take away their factories and railways. And Gandhi hated, not just factories and railways, but also the telegraph, the telephone, the radio, the airplane. He happened to be in England when Louis Blériot, the great French aviation pioneer, first flew the English Channel—an event which at the time stirred as much excitement as Lindbergh’s later flight across the Atlantic—and Gandhi was in a positive fury that giant crowds were acclaiming such an insignificant event. He used the telegraph extensively himself, of course, and later would broadcast daily over All-India Radio during his highly publicized fasts, but consistency was never Gandhi’s strong suit.
Gandhi’s view of the good society, about which he wrote ad nauseam, was an Arcadian vision set far in India’s past. It was the pristine Indian village, where, with all diabolical machinery and technology abolished—and with them all unhappiness—contented villagers would hand-spin their own yarn, hand-weave their own cloth, serenely follow their bullocks in the fields, tranquilly prodding them in the anus in the time-hallowed Hindu way. This was why Gandhi taught himself to spin, and why all the devout Gandhians, like monkeys, spun also. This was Gandhi’s program. Since he said it several thousand times, we have no choice but to believe that he sincerely desired the destruction of modern technology and industry and the return of India to the way of life of an idyllic (and quite likely nonexistent) past. And yet this same Mahatma Gandhi hand-picked as the first Prime Minister of an independent India Pandit Nehru, who was committed to a policy of industrialization and for whom the last word in the politico-economic organization of the state was (and remained) Beatrice Webb.
What are we to make of this Gandhi? We are dealing with two strangenesses here, Indians and Gandhi himself. The plain fact is that both Indian leaders and the Indian people ignored Gandhi’s precepts almost as thoroughly as did Hitler. They ignored him on sexual abstinence. They ignored his modifications of the caste system. They ignored him on the evils of modern industry, the radio, the telephone. They ignored him on education. They ignored his appeals for national union, the former British Raj splitting into a Muslim Pakistan and a Hindu India. No one sought a return to the Arcadian Indian village of antiquity. They ignored him, above all, on ahimsa, nonviolence. There was always a small number of exalted satyagrahi who, courting martyrdom, would march into the constables’ truncheons, but one of the things that alarmed the British—as Tagore indicated—was the explosions of violence that accompanied all this alleged nonviolence. Naipaul writes that with independence India discovered again that it was “cruel and horribly violent.” Jaya Prakash Narayan, the late opposition leader, once admitted, “We often behave like animals. . . . We are more likely than not to become aggressive, wild, violent. We kill and burn and loot. . . .”
Why, then, did the Hindu masses so honor this Mahatma, almost all of whose most cherished beliefs they so pointedly ignored, even during his lifetime? For Hindus, the question is not really so puzzling. Gandhi, for them, after all, was a Mahatma, a holy man. He was a symbol of sanctity, not a guide to conduct. Hinduism has a long history of holy men who, traditionally, do not offer themselves up to the public as models of general behavior but withdraw from the world, often into an ashram, to pursue their sanctity in private, a practice which all Hindus honor, if few emulate. The true oddity is that Gandhi, this holy man, having drawn from British sources his notions of nationalism and democracy, also absorbed from the British his model of virtue in public life. He was a historical original, a Hindu holy man whom a British model of public service and dazzling advances in mass communications thrust out into the world, to become a great moral leader and the “father of his country.”
Some Indians feel that after the early 1930’s, Gandhi, although by now world-famous, was in fact in sharp decline. Did he at least “get the British out of India”? Some say no. India, in the last days of the British Raj, was already largely governed by Indians (a fact one would never suspect from this movie), and it is a common view that without this irrational, wildly erratic holy man the transition to full independence might have gone both more smoothly and more swiftly. There is much evidence that in his last years Gandhi was in a kind of spiritual retreat and, with all his endless praying and fasting, was no longer pursuing (the very words seem strange in a Hindu context) “the public good.” What he was pursuing, in a strict reversion to Hindu tradition, was his personal holiness. In earlier days he had scoffed at the title accorded him, Mahatma (literally “great soul”). But toward the end, during the hideous paroxysms that accompanied independence, with some of the most unspeakable massacres taking place in Calcutta, he declared, “And if . . . the whole of Calcutta swims in blood, it will not dismay me. For it will be a willing offering of innocent blood.” And in his last days, after there had already been one attempt on his life, he was heard to say, “I am a true Mahatma.”
We can only wonder, furthermore, at a public figure who lectures half his life about the necessity of abolishing modern industry and returning India to its ancient primitiveness, and then picks a Fabian socialist, already drawing up Five-Year Plans, as the country’s first Prime Minister. Audacious as it may seem to contest the views of such heavy thinkers as Margaret Bourke-White, Ralph Nader, and J.K. Galbraith (who found the film’s Gandhi “true to the original” and endorsed the movie wholeheartedly), we have a right to reservations about such a figure as a public man.
I should not be surprised if Gandhi’s greatest real humanitarian achievement was an improvement in the treatment of Untouchables—an area where his efforts were not only assiduous, but actually bore fruit. In this, of course, he ranks well behind the British, who abolished suttee—over ferocious Hindu opposition—in 1829. The ritual immolation by fire of widows on their husbands’ funeral pyres, suttee had the full sanction of the Hindu religion, although it might perhaps be wrong to overrate its importance. Scholars remind us that it was never universal, only “usual.” And there was, after all, a rather extensive range of choice. In southern India the widow was flung into her husband’s fire-pit. In the valley of the Ganges she was placed on the pyre when it was already aflame. In western India, she supported the head of the corpse with her right hand, while, torch in her left, she was allowed the honor of setting the whole thing on fire herself. In the north, where perhaps women were more impious, the widow’s body was constrained on the burning pyre by long poles pressed down by her relatives, just in case, screaming in terror and choking and burning to death, she might forget her dharma. So, yes, ladies, members of the National Council of Churches, believers in the one God, mourners for that holy India before it was despoiled by those brutish British, remember suttee, that interesting, exotic practice in which Hindus, over the centuries, burned to death countless millions of helpless women in a spirit of pious devotion, crying for all I know, Hai Rama! Hai Rama!
I would like to conclude with some observations on two English figures, Madeleine Slade, the daughter of a British admiral, and Sir Richard Attenborough, the producer, director, and spiritual godfather of the film, Gandhi. Miss Slade was a jewel in Gandhi’s crown—a member of the British ruling class, as she was, turned fervent disciple of this Indian Mahatma. She is played in the film by Geraldine James with nobility, dignity, and a beatific manner quite up to the level of Candice Bergen, and perhaps even the Virgin Mary. I learn from Ved Mehta’s Mahatma Gandhi and His Apostles, however, that Miss Slade had another master before Gandhi. In about 1908, when she was fifteen, she made contact with the spirit of Beethoven by listening to his sonatas on a player piano. “I threw myself down on my knees in the seclusion of my room,” she wrote in her autobiography, “and prayed, really prayed to God for the first time in my life: ‘Why have I been born over a century too late? Why hast Thou given me realization of him and yet put all these years in between?’”
After World War I, still seeking how best to serve Beethoven, Miss Slade felt an “infinite longing” when she visited his birthplace and grave, and, finally, at the age of thirty-two, caught up with Romain Rolland, who had partly based his renowned Jean Christophe on the composer. But Rolland had written a new book now, about a man called Gandhi, “another Christ,” and before long Miss Slade was quite literally falling on her knees before the Mahatma in India, “conscious of nothing but a sense of light.” Although one would never guess this from the film, she soon (to quote Mehta’s impression) began “to get on Gandhi’s nerves,” and he took every pretext to keep her away from him, in other ashrams, and working in schools and villages in other parts of India. She complained to Gandhi in letters about discrimination against her by orthodox Hindus, who expected her to live in rags and vile quarters during menstruation, considering her unclean and virtually untouchable. Gandhi wrote back, agreeing that women should not be treated like that, but adding that she should accept it all with grace and cheerfulness, “without thinking that the orthodox party is in any way unreasonable.” (This is as good an example as any of Gandhi’s coherence, even in his prime. Women should not be treated like that, but the people who treated them that way were in no way unreasonable.)
Some years after Gandhi’s death, Miss Slade rediscovered Beethoven, becoming conscious again “of the realization of my true self. For a while I remained lost in the world of the spirit. . . .” She soon returned to Europe and to the service of Beethoven, her “true calling.” When Mehta finally found her in Vienna, she told him, “Please don’t ask me any more about Bapu [Gandhi]. I now belong to van Beethoven. In matters of the spirit, there is always a call.” A polite description of Madeleine Slade is that she was an extreme eccentric. In the vernacular, she was slightly cracked.
Sir Richard Attenborough, however, isn’t cracked at all. The only puzzle is how he suddenly got to be a pacifist, a fact which his press releases now proclaim to the world. Attenborough trained as a pilot in the RAF in World War II, and was released briefly to the cinema, where he had already begun his career in Noël Coward’s super-patriotic In Which We Serve. He then returned to active service, flying combat missions with the RAF. Richard Attenborough, in short—when Gandhi was pleading with the British to surrender to the Nazis, assuring them that “Hitler is not a bad man”—was fighting for his country. The Viceroy of India warned Gandhi grimly that “We are engaged in a struggle,” and Attenborough played his part in that great struggle, and proudly, too, as far as I can tell. To my knowledge he has never had a crise de conscience on the matter, or announced that he was carried away by the war fever and that Britain really should have capitulated to the Nazis—which Gandhi would have had it do.
Although the present film is handsomely done in its way, no one has ever accused Attenborough of being excessively endowed with either acting or directing talent. In the 50’s he was a popular young British entertainer, but his most singular gift appeared to be his entrepreneurial talent as a businessman, using his movie fees to launch successful London restaurants (at one time four), and other business ventures. At the present moment he is Chairman of the Board of Capital Radio (Britain’s most successful commercial station), of Goldcrest Films, and of the British Film Institute, and Deputy Chairman of Britain’s new Channel 4 television network. Like most members of the nouveaux riches on the rise, he has also reached out for symbols of respectability and public service, and has assembled quite a collection. He is a Trustee of the Tate Gallery, Pro-Chancellor of Sussex University, President of Britain’s Muscular Dystrophy Group, Chairman of the Actors’ Charitable Trust and, of course, Chairman of the Royal Academy of Dramatic Art. There may be even more, but this is a fair sampling. In 1976, quite fittingly, he was knighted, by a Labor government, but his friends say he still insists on being called “Dickie.”
It is quite general today for members of the professional classes, even when not artistic types, to despise commerce and feel that the state, the economy, and almost everything else would be better and more idealistically run by themselves rather than by these loutish businessmen. Sir Dickie, however, being a highly successful businessman himself, would hardly entertain such an antipathy. But as he scrambled his way to the heights perhaps he found himself among high-minded idealists, utopians, equalitarians, and lovers of the oppressed. Now there are those who think Sir Dickie converted to pacifism when Indira Gandhi handed him a check for several million dollars. But I do not believe this. I think Sir Dickie converted to pacifism out of idealism.
His pacifism, I confess, has been more than usually muddled. In 1969, after twenty-six years in the profession, he made his directorial debut with Oh! What a Lovely War, with its superb parody of Britain’s jingoistic music-hall songs of the “Great War,” World War I. Since I had the good fortune to see Joan Littlewood’s original London stage production, which gave the work its entire style, I cannot think that Sir Dickie’s contribution was unduly large. Like most commercially successful parodies—from Sandy Wilson’s The Boy Friend to Broadway’s Superman, Dracula, and The Crucifer of Blood—Oh! What a Lovely War depended on the audience’s (if not Miss Littlewood’s) retaining a substantial affection for the subject being parodied: in this case, a swaggering hyper-patriotism, which recalled days when the empire was great. In any event, since Miss Littlewood identified herself as a Communist and since Communists, as far as I know, are never pacifists, Sir Dickie’s case for the production’s “pacifism” seems stymied from the other angle as well.
Sir Dickie’s next blow for pacifism was Young Winston (1972), which, the new publicity manual says, “explored how Churchill’s childhood traumas and lack of parental affection became the spurs which goaded him to . . . a position of great power.” One would think that a man who once flew combat missions under the orders of the great war leader—and who seemingly wanted his country to win—would thank God for childhood traumas and lack of parental affection if such were needed to provide a Churchill in the hour of peril. But on Sir Dickie pressed, in the year of his knighthood, with A Bridge Too Far, the story of the futile World War II assault on Arnhem, described by Sir Dickie—now, at least—as “a further plea for pacifism.”
But does Sir Richard Attenborough seriously think that, rather than go through what we did at Arnhem, we should have given in, let the Nazis be, and even—true pacifists—let them occupy Britain, Canada, the United States, contenting ourselves only with “making them feel unwanted”? At the level of idiocy to which discussions of war and peace have sunk in the West, every harebrained idealist who discovers that war is not a day at the beach seems to think he has found an irresistible argument for pacifism. Is Pearl Harbor an argument for pacifism? Bataan? Dunkirk? Dieppe? The Ardennes? Roland fell at Roncesvalles. Is the Song of Roland a pacifist epic? If so, why did William the Conqueror have it chanted to his men as they marched into battle at Hastings? Men prove their valor in defeat as well as in victory. Even Sergeant-Major Gandhi knew that. Up in the moral never-never land which Sir Dickie now inhabits, perhaps they think the Alamo led to a great wave of pacifism in Texas.
In a feat of sheer imbecility, Attenborough has dedicated Gandhi to Lord Mountbatten, who commanded the Southeast Asian Theater during World War II. Mountbatten, you might object, was hardly a pacifist—but then again he was murdered by Irish terrorists, which proves how frightful all that sort of thing is, Sir Dickie says, and how we must end it all by imitating Gandhi. Not the Gandhi who called for seas of innocent blood, you understand, but the movie-Gandhi, the nice one.
The historical Gandhi’s favorite mantra, strange to tell, was Do or Die (he called it literally that, a “mantra”). I think Sir Dickie should reflect on this, because it means, dixit Gandhi, that a man must be prepared to die for what he believes in, for, himsa or ahimsa, death is always there, and in an ultimate test men who are not prepared to face it lose. Gandhi was erratic, irrational, tyrannical, obstinate. He sometimes verged on lunacy. He believed in a religion whose ideas I find somewhat repugnant. He worshipped cows. But I still say this: he was brave. He feared no one.
On a lower level of being, I have consequently given some thought to the proper mantra for spectators of the movie Gandhi. After much reflection, in homage to Ralph Nader, I have decided on Caveat Emptor, “let the buyer beware.” Repeated many thousand times in a seat in the cinema, it might with luck lead to Om, the Hindu dream of nothingness, the Ultimate Void.
Can it be reversed?
Writing in these pages last year (“Illiberalism: The Worldwide Crisis,” July/August 2016), I described this surge of intemperate politics as a global phenomenon, a crisis of illiberalism stretching from France to the Philippines and from South Africa to Greece. Donald Trump and Bernie Sanders, I argued, were articulating American versions of this growing challenge to liberalism. By “liberalism,” I was referring not to the left or center-left but to the philosophy of individual rights, free enterprise, checks and balances, and cultural pluralism that forms the common ground of politics across the West.
Less a systematic ideology than a posture or sensibility, the new illiberalism nevertheless has certain core planks. Chief among these are a conspiratorial account of world events; hostility to free trade and finance capital; opposition to immigration that goes beyond reasonable restrictions and bleeds into virulent nativism; impatience with norms and procedural niceties; a tendency toward populist leader-worship; and skepticism toward international treaties and institutions, such as NATO, that provide the scaffolding for the U.S.-led postwar order.
The new illiberals, I pointed out, all tend to admire established authoritarians to varying degrees. Trump, along with France’s Marine Le Pen and many others, looks to Vladimir Putin. For Sanders, it was Hugo Chavez’s Venezuela, where, the Vermont socialist said in 2011, “the American dream is more apt to be realized.” Even so, I argued, the crisis of illiberalism traces mainly to discontents internal to liberal democracies.
Trump’s election and his first eight months in office have confirmed the thrust of my predictions, if not all of the policy details. On the policy front, the new president has proved too undisciplined, his efforts too wild and haphazard, to reorient the U.S. government away from postwar liberal order.
The courts blunted the “Muslim ban.” The Trump administration has reaffirmed Washington’s commitment to defend treaty partners in Europe and East Asia. Trumpian grumbling about allies not paying their fair share—a fair point in Europe’s case, by the way—has amounted to just that. The president did pull the U.S. out of the Trans-Pacific Partnership, but even the ultra-establishmentarian Hillary Clinton went from supporting to opposing the pact once she figured out which way the Democratic winds were blowing. The North American Free Trade Agreement, which came into being nearly a quarter-century ago, does look shaky at the moment, but there is no reason to think that it won’t survive in some modified form.
Yet on the cultural front, the crisis of illiberalism continues to rage. If anything, it has intensified, as attested by the events surrounding the protest over a Robert E. Lee statue in Charlottesville, Virginia. The president refused to condemn unequivocally white nationalists who marched with swastikas and chanted “Jews will not replace us.” Trump even suggested there were “very fine people” among them, thus winking at the so-called alt-right as he had during the campaign. In the days that followed, much of the left rallied behind so-called antifa (“anti-fascist”) militants who make no secret of their allegiance to violent totalitarian ideologies at the other end of the political spectrum.
Disorder is the new American normal, then. Questions that appeared to have been settled—about the connection between economic and political liberty, the perils of conspiracism and romantic politics, America’s unique role on the world stage, and so on—are unsettled once more. Serious people wonder out loud whether liberal democracy is worth maintaining at all, with many of them concluding that it is not. The return of ideas that for good reason were buried in the last century threatens the decent political order that has made the U.S. an exceptionally free and prosperous civilization.F or many leftists, America’s commitment to liberty and equality before the law has always masked despotism and exploitation. This view long predated Trump’s rise, and if they didn’t subscribe to it themselves, too often mainstream Democrats and progressives treated its proponents—the likes of Noam Chomsky and Howard Zinn—as beloved and respectable, if slightly eccentric, relatives.
This cynical vision of the free society (as a conspiracy against the dispossessed) was a mainstay of Cold War–era debates about the relative merits of Western democracy and Communism. Soviet apologists insisted that Communist states couldn’t be expected to uphold “merely” formal rights when they had set out to shape a whole new kind of man. That required “breaking a few eggs,” in the words of the Stalinist interrogators in Arthur Koestler’s Darkness at Noon. Anyway, what good were free speech and due process to the coal miner, when under capitalism the whole social structure was rigged against him?
That line worked for a time, until the scale of Soviet tyranny became impossible to justify by anyone but its most abject apologists. It became obvious that “bourgeois justice,” however imperfect, was infinitely preferable to the Marxist alternative. With the Communist experiment discredited, and Western workers uninterested in staging world revolution, the illiberal left began shifting instead to questions of identity. In race-gender-sexuality theory and the identitarian “subaltern,” it found potent substitutes for dialectical materialism and the proletariat. We are still living with the consequences of this shift.
Although there were superficial resemblances, this new politics of identity differed from earlier civil-rights movements. Those earlier movements had sought a place at the American table for hitherto entirely or somewhat excluded groups: blacks, women, gays, the disabled, and so on. In doing so, they didn’t seek to overturn or radically reorganize the table. Instead, they reaffirmed the American Founding (think of Martin Luther King Jr.’s constant references to the Declaration of Independence). And these movements succeeded, owing to America’s tremendous capacity for absorbing social change.
Yet for the new identitarians, as for the Marxists before them, liberal-democratic order was systematically rigged against the downtrodden—now redefined along lines of race, gender, and sexuality, with social class quietly swept under the rug. America’s strides toward racial progress, not least the election and re-election of an African-American president, were dismissed. The U.S. still deserved condemnation because it fell short of perfect inclusion, limitless autonomy, and complete equality—conditions that no free society can achieve given the root fact of human nature. The accidentals had changed from the Marxist days, in other words, but the essentials remained the same.
In one sense, though, the identitarians went further. The old Marxists still claimed to stand on objectively accessible truth. Not so their successors. Following intellectual lodestars such as the gender theorist Judith Butler, the identity left came to reject objective truth—and with it, biological sex differences, aesthetic standards in art, the possibility of universal moral precepts, and much else of the kind. All of these things, the left identitarians said, were products of repressive institutions, hierarchies, and power.
Today’s “social-justice warriors” are heirs to this sordid intellectual legacy. They claim to seek justice. But, unmoored from any moral foundations, SJW justice operates like mob justice and revolutionary terror, usually carried out online. SJWs claim to protect individual autonomy, but the obsession with group identity and power dynamics means that SJW autonomy claims must destroy the autonomy of others. Self-righteousness married to total relativism is a terrifying thing.
It isn’t enough to have legalized same-sex marriage in the U.S. via judicial fiat; the evangelical baker must be forced to bake cakes for gay weddings. It isn’t enough to have won legal protection and social acceptance for the transgendered; the Orthodox rabbi must use preferred trans pronouns on pain of criminal prosecution. Likewise, since there is no objective truth to be gained from the open exchange of ideas, any speech that causes subjective discomfort among members of marginalized groups must be suppressed, if necessary through physical violence. Campus censorship that began with speech codes and mobs that prevented conservative and pro-Israel figures from speaking has now evolved into a general right to beat anyone designated as a “fascist,” on- or off-campus.
For the illiberal left, the election of Donald Trump was indisputable proof that behind America’s liberal pieties lurks, forever, the beast of bigotry. Trump, in this view, wasn’t just an unqualified vulgarian who nevertheless won the decisive backing of voters dissatisfied with the alternative or alienated from mainstream politics. Rather, a vote for Trump constituted a declaration of war against women, immigrants, and other victims of American “structures of oppression.” There would be no attempt to persuade Trump supporters; war would be answered by war.
This isn’t liberalism. Since it can sometimes appear as an extension of traditional civil-rights activism, however, identity leftism has glommed itself onto liberalism. It is frequently impossible to tell where traditional autonomy- and equality-seeking liberalism ends and repressive identity leftism begins. Whether based on faulty thinking or out of a sense of weakness before an angry and energetic movement, liberals have too often embraced the identity left as their own. They haven’t noticed how the identitarians seek to undermine, not rectify, liberal order.
Some on the left, notably Columbia University’s Mark Lilla, are sounding the alarm and calling on Democrats to stress the common good over tribalism. Yet these are a few voices in the wilderness. Identitarians of various stripes still lord over the broad left, where it is fashionable to believe that the U.S. project is predatory and oppressive by design. If there is a viable left alternative to identity on the horizon, it is the one offered by Sanders and his “Bernie Bros”—which is to say, a reversion to the socialism and class struggle of the previous century.
Americans, it seems, will have to wait a while for reason and responsibility to return to the left.T
hen there is the illiberal fever gripping American conservatives. Liberal democracy has always had its critics on the right, particularly in Continental Europe, where statist, authoritarian, and blood-and-soil accounts of conservatism predominate. Mainstream Anglo-American conservatism took a different course. It has championed individual rights, free enterprise, and pluralism while insisting that liberty depends on public virtue and moral order, and that sometimes the claims of liberty and autonomy must give way to those of tradition, state authority, and the common good.
The whole beauty of American order lies in keeping in tension these rival forces that are nevertheless fundamentally at peace. The Founders didn’t adopt wholesale Enlightenment liberalism; rather, they tempered its precepts about universal rights with the teachings of biblical religion as well as Roman political theory. The Constitution drew from all three wellsprings. The product was a whole, and it is a pointless and ahistorical exercise to elevate any one source above the others.
American conservatism and liberalism, then, are in fact branches of each other, the one (conservatism) invoking tradition and virtue to defend and, when necessary, discipline the regime of liberty; the other (liberalism) guaranteeing the open space in which churches, volunteer organizations, philanthropic activity, and other sources of tradition and civic virtue flourish, in freedom, rather than through state establishment or patronage.
One result has been long-term political stability, a blessing that Americans take for granted. Another has been the transformation of liberalism into the lingua franca of all politics, not just at home but across a world that, since 1945, has increasingly reflected U.S. preferences. The great French classical liberal Raymond Aron noted in 1955 that the “essentials of liberalism—the respect for individual liberty and moderate government—are no longer the property of a single party: they have become the property of all.” As Aron archly pointed out, even liberalism’s enemies tend to frame their objections using the rights-based talk associated with liberalism.
Under Trump, however, some in the party of the right have abdicated their responsibility to liberal democracy as a whole. They have reduced themselves to the lowest sophistry in defense of the New Yorker’s inanities and daily assaults on presidential norms. Beginning when Trump clinched the GOP nomination last year, a great deal of conservative “thinking” has amounted to: You did X to us, now enjoy it as we dish it back to you and then some. Entire websites and some of the biggest stars in right-wing punditry are singularly devoted to making this rather base point. If Trump is undermining this or that aspect of liberal order that was once cherished by conservatives, so be it; that 63 million Americans supported him and that the president “drives the left crazy”—these are good enough reasons to go along.
Some of this is partisan jousting that occurs with every administration. But when it comes to Trump’s most egregious statements and conduct—such as his repeated assertions that the U.S. and Putin’s thugocracy are moral equals—the apologetics are positively obscene. Enough pooh-poohing, whataboutery, and misdirection of this kind, and there will be no conservative principle left standing.
More perniciously, as once-defeated illiberal philosophies have returned with a vengeance to the left, so have their reactionary analogues to the right. The two illiberalisms enjoy a remarkable complementarity and even cross-pollinate each other. This has developed to the point where it is sometimes hard to distinguish Tucker Carlson from Chomsky, Laura Ingraham from Julian Assange, the Claremont Review from New Left Review, and so on.
Two slanders against liberalism in particular seem to be gathering strength on the thinking right. The first is the tendency to frame elements of liberal democracy, especially free trade, as a conspiracy hatched by capitalists, the managerial class, and others with soft hands against American workers. One needn’t renounce liberal democracy as a whole to believe this, though believers often go the whole hog. The second idea is that liberalism itself was another form of totalitarianism all along and, therefore, that no amount of conservative course correction can set right what is wrong with the system.
These two theses together represent a dismaying ideological turn on the right. The first—the account of global capitalism as an imposition of power over the powerless—has gained currency in the pages of American Affairs, the new journal of Trumpian thought, where class struggle is a constant theme. Other conservatives, who were always skeptical of free enterprise and U.S.-led world order, such as the Weekly Standard’s Christopher Caldwell, are also publishing similar ideas to a wider reception than perhaps greeted them in the past.
In a March 2017 essay in the Claremont Review of Books, for example, Caldwell flatly described globalization as a “con game.” The perpetrators, he argued, are “unscrupulous actors who have broken promises and seized a good deal of hard-won public property.” These included administrations of both parties that pursued trade liberalization over decades, people who live in cities and therefore benefit from the knowledge-based economy, American firms, and really anyone who has ever thought to capitalize on global supply chains to boost competitiveness—globalists, in a word.
By shipping jobs and manufacturing processes overseas, Caldwell contended, these miscreants had stolen not just material things like taxpayer-funded research but also concepts like “economies of scale” (you didn’t build that!). Thus, globalization in the West differed “in degree but not in kind from the contemporaneous Eastern Bloc looting of state assets.”
That comparison with predatory post-Communist privatization is a sure sign of ideological overheating. It is somewhat like saying that a consumer bank’s lending to home buyers differs in degree but not in kind from a loan shark’s racket in a housing project. Well, yes, in the sense that the underlying activity—moneylending, the purchase of assets—is the same in both cases. But the context makes all the difference: The globalization that began after World War II and accelerated in the ’90s took place within a rules-based system, which duly elected or appointed policymakers in Western democracies designed in good faith and for a whole host of legitimate strategic and economic reasons.
These policymakers knew that globalization was as old as civilization itself. It would take place anyway, and the only question was whether it would be rules-based and efficient or the kind of globalization that would be driven by great-power rivalry and therefore prone to protectionist trade wars. And they were right. What today’s anti-trade types won’t admit is that defeating the Trans-Pacific Partnership and a proposed U.S.-European trade pact known as TTIP won’t end globalization as such; instead, it will cede the game to other powers that are less concerned about rules and fair play.
The postwar globalizers may have gone too far (or not far enough!). They certainly didn’t give sufficient thought to the losers in the system, or how to deal with the de-industrialization that would follow when information became supremely mobile and wages in the West remained too high relative to skills and productivity gains in the developing world. They muddled and compromised their way through these questions, as all policymakers in the real world do.
The point is that these leaders—the likes of FDR, Churchill, JFK, Ronald Reagan, Margaret Thatcher, and, yes, Bill Clinton—acted neither with malice aforethought nor anti-democratically. It isn’t true, contra Caldwell, that free trade necessarily requires “veto-proof and non-consultative” politics. The U.S., Britain, and other members of what used to be called the Free World have respected popular sovereignty (as understood at the time) for as long as they have been trading nations. Put another way, you were far more likely to enjoy political freedom if you were a citizen of one of these states than of countries that opposed economic liberalism in the 20th century. That remains true today. These distinctions matter.
Caldwell and like-minded writers of the right, who tend to dwell on liberal democracies’ crimes, are prepared to tolerate far worse if it is committed in the name of defeating “globalism.” Hence the speech on Putin that Caldwell delivered this spring at a Hillsdale College gathering in Phoenix. Promising not to “talk about what to think about Putin,” he proceeded to praise the Russian strongman as the “preeminent statesman of our time” (alongside Turkish strongman Recep Tayyip Erdogan). Putin, Caldwell said, “has become a symbol of national self-determination.”
Then Caldwell made a remark that illuminates the link between the illiberalisms of yesterday and today. Putin is to “populist conservatives,” he declared, what Castro once was to progressives. “You didn’t have to be a Communist to appreciate the way Castro, whatever his excesses, was carving out a space of autonomy for his country.”
Whatever his excesses, indeed.T
he other big idea is that today’s liberal crises aren’t a bug but a core feature of liberalism. This line of thinking is particularly prevalent among some Catholic traditionalists and other orthodox Christians (both small- and capital-“o”). The common denominator, it seems to me, is having grown up as a serious believer at a time when many liberals—to their shame—have declared war on faith generally and social conservatism in particular.
The argument essentially is this:
We (social conservatives, traditionalists) saw the threat from liberalism coming. With its claims about abstract rights and universal reason, classical liberalism had always posed a danger to the Church and to people of God. We remembered what those fired up by the new ideas did to our nuns and altars in France. Still we made peace with American liberal order, because we were told that the Founders had “built on low but solid ground,” to borrow Leo Strauss’s famous formulation, or that they had “built better than they knew,” as American Catholic hierarchs in the 19th century put it.
Maybe these promises held good for a couple of centuries, the argument continues, but they no longer do. Witness the second sexual revolution under way today. The revolutionaries are plainly telling us that we must either conform our beliefs to Herod’s ways or be driven from the democratic public square. Can it still be said that the Founding rested on solid ground? Did the Founders really build better than they knew? Or is what is passing now precisely what they intended, the rotten fruit of the Enlightenment universalism that they planted in the Constitution? We don’t love Trump (or Putin, Hungary’s Viktor Orbán, etc.), but perhaps he can counter the pincer movement of sexual and economic liberalism, and restore a measure of solidarity and commitment to the Western project.
The most pessimistic of these illiberal critics go so far as to argue that liberalism isn’t all that different from Communism, that both are totalitarian children of the Enlightenment. One such critic, Harvard Law School’s Adrian Vermeule, summed up this position in a January essay in First Things magazine:
The stock distinction between the Enlightenment’s twins—communism is violently coercive while liberalism allows freedom of thought—is glib. Illiberal citizens, trapped [under liberalism] without exit papers, suffer a narrowing sphere of permitted action and speech, shrinking prospects, and increasing pressure from regulators, employers, and acquaintances, and even from friends and family. Liberal society celebrates toleration, diversity, and free inquiry, but in practice it features a spreading social, cultural, and ideological conformism.1
I share Vermeule’s despair and that of many other conservative-Christian friends, because there have been genuinely alarming encroachments against conscience, religious freedom, and the dignity of life in Western liberal democracies in recent years. Even so, despair is an unhelpful companion to sober political thought, and the case for plunging into political illiberalism is weak, even on social-conservative grounds.
Here again what commends liberalism is historical experience, not abstract theory. Simply put, in the real-world experience of the 20th century, the Church, tradition, and religious minorities fared far better under liberal-democratic regimes than they did under illiberal alternatives. Are coercion and conformity targeting people of faith under liberalism? To be sure. But these don’t take the form of the gulag or the concentration camp or the soccer stadium–cum-killing field. Catholic political practice knows well how to draw such moral distinctions between regimes: Pope John Paul II befriended Reagan. If liberal democracy and Communism were indeed “twins” whose distinctions are “glib,” why did he do so?
And as Pascal Bruckner wrote in his essay “The Tyranny of Guilt,” if liberal democracy does trap or jail you (politically speaking), it also invariably slips the key under your cell door. The Swedish midwives driven out of the profession over their pro-life views can take their story to the media. The Down syndrome advocacy outfit whose anti-eugenic advertising was censored in France can sue in national and then international courts. The Little Sisters of the Poor can appeal to the Supreme Court for a conscience exemption to Obamacare’s contraceptives mandate. And so on.
Conversely, once you go illiberal, you don’t just rid yourself of the NGOs and doctrinaire bureaucrats bent on forcing priests to perform gay marriages; you also lose the legal guarantees that protect the Church, however imperfectly, against capricious rulers and popular majorities. And if public opinion in the West is turning increasingly secular, indeed anti-Christian, as social conservatives complain and surveys seem to confirm, is it really a good idea to militate in favor of a more illiberal order rather than defend tooth and nail liberal principles of freedom of conscience? For tomorrow, the state might fall into Elizabeth Warren’s hands.
Nor, finally, is political liberalism alone to blame for the Church’s retreating on various fronts. There have been plenty of wounds inflicted by churchmen and laypeople, who believed that they could best serve the faith by conforming its liturgy, moral teaching, and public presence to liberal order. But political liberalism didn’t compel these changes, at least not directly. In the space opened up by liberalism, and amid the kaleidoscopic lifestyles that left millions of people feeling empty and confused, it was perfectly possible to propose tradition as an alternative. It is still possible to do so.N one of this is to excuse the failures of liberals. Liberals and mainstream conservatives must go back to the drawing board, to figure out why it is that thoughtful people have come to conclude that their system is incompatible with democracy, nationalism, and religious faith. Traditionalists and others who see Russia’s mafia state as a defender of Christian civilization and national sovereignty have been duped, but liberals bear some blame for driving large numbers of people in the West to that conclusion.
This is a generational challenge for the liberal project. So be it. Liberal societies like America’s by nature invite such questioning. But before we abandon the 200-and-some-year-old liberal adventure, it is worth examining the ways in which today’s left-wing and right-wing critiques of it mirror bad ideas that were overcome in the previous century. The ideological ferment of the moment, after all, doesn’t relieve the illiberals of the responsibility to reckon with the lessons of the past.
1 Vermeule was reviewing The Demon in Democracy, a 2015 book by the Polish political theorist and parliamentarian Ryszard Legutko that makes the same case. Fred Siegel’s review of the English edition appeared in our June 2016 issue.
How the courts are intervening to block some of the most unjust punishments of our time
Barrett’s decision marked the 59th judicial setback for a college or university since 2013 in a due-process lawsuit brought by a student accused of sexual assault. (In four additional cases, the school settled a lawsuit before any judicial decision occurred.) This body of law serves as a towering rebuke to the Obama administration’s reinterpretation of Title IX, the 1972 law barring sex discrimination in schools that receive federal funding.
Beginning in 2011, the Education Department’s Office for Civil Rights (OCR) issued a series of “guidance” documents pressuring colleges and universities to change how they adjudicated sexual-assault cases in ways that increased the likelihood of guilty findings. Amid pressure from student and faculty activists, virtually all elite colleges and universities have gone far beyond federal mandates and have even further weakened the rights of students accused of sexual assault.
Like all extreme victims’-rights approaches, the new policies had the greatest impact on the wrongly accused. A 2016 study from UCLA public-policy professor John Villasenor used just one of the changes—schools employing the lowest standard of proof, a preponderance of the evidence—to predict that as often as 33 percent of the time, campus Title IX tribunals would return guilty findings in cases involving innocent students. Villasenor’s study could not measure the impact of other Obama-era policy demands—such as allowing accusers to appeal not-guilty findings, discouraging cross-examination of accusers, and urging schools to adjudicate claims even when a criminal inquiry found no wrongdoing.
In a September 7 address at George Mason University, Education Secretary Betsy DeVos stated that “no student should be forced to sue their way to due process.” But once enmeshed in the campus Title IX process, a wrongfully accused student’s best chance for justice may well be a lawsuit filed after his college incorrectly has found him guilty. (According to data from United Educators, a higher-education insurance firm, 99 percent of students accused of campus sexual assault are male.) The Foundation for Individual Rights has identified more than 180 such lawsuits filed since the 2011 policy changes. That figure, obviously, excludes students with equally strong claims whose families cannot afford to go to court. These students face life-altering consequences. As Judge T.S. Ellis III noted in a 2016 decision, it is “so clear as to be almost a truism” that a student will lose future educational and employment opportunities if his college wrongly brands him a rapist.
“It is not the role of the federal courts to set aside decisions of school administrators which the court may view as lacking in wisdom or compassion.” So wrote the Supreme Court in a 1975 case, Wood v. Strickland. While the Supreme Court has made clear that colleges must provide accused students with some rights, especially when dealing with nonacademic disciplinary questions, courts generally have not been eager to intervene in such matters.
This is what makes the developments of the last four years all the more remarkable. The process began in May 2013, in a ruling against St. Joseph’s University, and has lately accelerated (15 rulings in 2016 and 21 thus far in 2017). Of the 40 setbacks for colleges in federal court, 14 came from judges nominated by Barack Obama, 11 from Clinton nominees, and nine from selections of George W. Bush. Brown University has been on the losing side of three decisions; Duke, Cornell, and Penn State, two each.
Court decisions since the expansion of Title IX activism have not all gone in one direction. In 36 of the due-process lawsuits, courts have permitted the university to maintain its guilty finding. (In four other cases, the university settled despite prevailing at a preliminary stage.) But even in these cases, some courts have expressed discomfort with campus procedures. One federal judge was “greatly troubled” that Georgia Tech veered “very far from an ideal representation of due process” when its investigator “did not pursue any line of investigation that may have cast doubt on [the accuser’s] account of the incident.” Another went out of his way to say that he considered it plausible that a former Case Western Reserve University student was actually “innocent of the charges levied against him.” And one state appellate judge opened oral argument by bluntly informing the University of California’s lawyer, “When I . . . finished reading all the briefs in this case, my comment was, ‘Where’s the kangaroo?’”
Judges have, obviously, raised more questions in cases where the college has found itself on the losing side. Those lawsuits have featured three common areas of concern: bias in the investigation, resulting in a college decision based on incomplete evidence; procedures that prevented the accused student from challenging his accuser’s credibility, chiefly through cross-examination; and schools utilizing a process that seemed designed to produce a predetermined result, in response to real or perceived pressure from the federal government.C olleges and universities have proven remarkably willing to act on incomplete information when adjudicating sexual-assault cases. In December 2013, for example, Amherst College expelled a student for sexual assault despite text messages (which the college investigator failed to discover) indicating that the accuser had consented to sexual contact. The accuser’s own testimony also indicated that she might have committed sexual assault, by initiating sexual contact with a student who Amherst conceded was experiencing an alcoholic blackout. When the accused student sued Amherst, the college said its failure to uncover the text messages had been irrelevant because its investigator had only sought texts that portrayed the incident as nonconsensual. In February, Judge Mark Mastroianni allowed the accused student’s lawsuit to proceed, commenting that the texts could raise “additional questions about the credibility of the version of events [the accuser] gave during the disciplinary proceeding.” The two sides settled in late July.
Amherst was hardly alone in its eagerness to avoid evidence that might undermine the accuser’s version of events; the same happened at Penn State, St. Joseph’s, Duke, Ohio State, Occidental, Lynn, Marlboro, Michigan, and Notre Dame.
Even in cases with a more complete evidentiary base, accused students have often been blocked from presenting a full-fledged defense. As part of its reinterpretation of Title IX, the Obama administration sought to shield campus accusers from cross-examination. OCR’s 2011 guidance “strongly” discouraged direct cross-examination of accusers by the accused student—a critical restriction, since most university procedures require the accused student, rather than his lawyer, to defend himself in the hearing. OCR’s 2014 guidance suggested that this type of cross-examination in and of itself could create a hostile environment. The Obama administration even spoke favorably about the growing trend among schools to abolish hearings altogether and allow a single official to serve as investigator, prosecutor, judge, and jury in sexual-assault cases.
The Supreme Court has never held that campus disciplinary hearings must permit cross-examination. Nonetheless, the recent attack on the practice has left schools struggling to explain why they would not want to utilize what the Court has described as the “greatest legal engine ever invented for the discovery of truth.” In June 2016, the University of Cincinnati found a student guilty of sexual assault after a hearing at which neither his accuser nor the university’s Title IX investigator appeared. In an unintentionally comical line, the hearing chair noted the absent witnesses before asking the accused student if he had “any questions of the Title IX report.” The student, befuddled, replied, “Well, since she’s not here, I can’t really ask anything of the report.” (The panel chair did not indicate how the “report” could have answered any questions.) Cincinnati found the student guilty anyway.1
Limitations on full cross-examination also played a role in judicial setbacks for Middlebury, George Mason, James Madison, Ohio State, Occidental, Penn State, Brandeis, Amherst, Notre Dame, and Skidmore.
Finally, since 2011, more than 300 students have filed Title IX complaints with the Office for Civil Rights, alleging that their colleges mishandled their sexual-assault allegations. OCR’s leadership seemed to welcome the complaints, which allowed Obama officials to inspect not only the individual case but all sexual-assault claims at the school in question over a three-year period. Northwestern University professor Laura Kipnis has estimated that during the Obama years, colleges spent between $60 million and $100 million on these investigations. An OCR finding of a Title IX violation can cost a school its federal funding. This threat has led Harvard Law professors Jeannie Suk Gersen, Janet Halley, Elizabeth Bartholet, and Nancy Gertner to observe, in a white paper submitted to OCR, that universities have “strong incentives to ensure the school stays in OCR’s good graces.”
One of the earliest lawsuits after the Obama administration’s policy shift, involving former Xavier University basketball player Dez Wells, demonstrated how an OCR investigation can affect the fairness of a university inquiry. The accuser’s complaint had been referred both to Xavier’s Title IX office and the Cincinnati police. The police concluded that the allegation was meritless; Hamilton County Prosecuting Attorney Joseph Deters later said he considered charging the accuser with filing a false police report.
Deters asked Xavier to delay its proceedings until his office completed its investigation. School officials refused. Instead, three weeks after the initial allegation, the university expelled Wells. He sued, speculating that Xavier’s haste came not from a quest for justice but from a desire to avoid difficulties in finalizing an agreement with OCR to resolve an unrelated complaint filed by two female Xavier students. (In recent years, OCR has entered into dozens of similar resolution agreements, which bind universities to policy changes in exchange for removing the threat of losing federal funds.) In a July 2014 ruling, Judge Arthur Spiegel observed that Xavier’s disciplinary tribunal, however “well-equipped to adjudicate questions of cheating, may have been in over its head with relation to an alleged false accusation of sexual assault.” Soon thereafter, the two sides settled; Wells transferred to the University of Maryland.
Ohio State, Occidental, Cornell, Middlebury, Appalachian State, USC, and Columbia have all found themselves on the losing side of court decisions arising from cases that originated during a time in which OCR was investigating or threatening to investigate the school. (In the Ohio State case, one university staffer testified that she didn’t know whether she had an obligation to correct a false statement by an accuser to a disciplinary panel.) Pressure from OCR can be indirect, as well. The Obama administration interpreted federal law as requiring all universities to have at least one Title IX coordinator; larger universities now employ dozens of Title IX personnel who, as the Harvard Law professors explained, “have reason to fear for their jobs if they hold a student not responsible or if they assign a rehabilitative or restorative rather than a harshly punitive sanction.”

Amid the wave of judicial setbacks for universities, two decisions in particular stand out. Easily the most powerful opinion in a campus due-process case came in March 2016 from Judge F. Dennis Saylor. While the stereotypical campus sexual-assault allegation results from an alcohol-filled, one-night encounter between a male and a female student, a case at Brandeis University involved a long-term monogamous relationship between two male students. A bad breakup led to the accusing student’s filing the following complaint, against which his former boyfriend was expected to provide a defense: “Starting in the month of September, 2011, the Alleged violator of Policy had numerous inappropriate, nonconsensual sexual interactions with me. These interactions continued to occur until around May 2013.”
To adjudicate, Brandeis hired a former OCR staffer, who interviewed the two students and a few of their friends. Since the university did not hold a hearing, the investigator decided guilt or innocence on her own. She treated each incident as if the two men were strangers to each other, which allowed her to determine that sexual “violence” had occurred in the relationship. The accused student, she found, sometimes looked at his boyfriend in the nude without permission and sometimes awakened his boyfriend with kisses when the boyfriend wanted to stay asleep. The university’s procedures prevented the student from seeing the investigator’s report, with its absurdly broad definition of sexual misconduct, in preparing his appeal. “In the context of American legal culture,” Boston Globe columnist Dante Ramos later argued, denying this type of information “is crazy.” “Standard rules of evidence and other protections for the accused keep things like false accusations or mistakes by authorities from hurting innocent people.” When the university appeal was denied, the student sued.
At an October 2015 hearing to consider the university’s motion to dismiss, Saylor seemed flabbergasted at the unfairness of the school’s approach. “I don’t understand,” he observed, “how a university, much less one named after Louis Brandeis, could possibly think that that was a fair procedure to not allow the accused to see the accusation.” Brandeis’s lawyer cited pressure to conform to OCR guidance, but the judge deemed the university’s procedures “closer to Salem 1692 than Boston, 2015.”
The following March, Saylor issued an 89-page opinion that has been cited in virtually every lawsuit subsequently filed by an accused student. “Whether someone is a ‘victim’ is a conclusion to be reached at the end of a fair process, not an assumption to be made at the beginning,” Saylor wrote. “If a college student is to be marked for life as a sexual predator, it is reasonable to require that he be provided a fair opportunity to defend himself and an impartial arbiter to make that decision.” Saylor concluded that Brandeis forced the accused student “to defend himself in what was essentially an inquisitorial proceeding that plausibly failed to provide him with a fair and reasonable opportunity to be informed of the charges and to present an adequate defense.”
The student, vindicated by the ruling’s sweeping nature, then withdrew his lawsuit. He currently is pursuing a Title IX complaint against Brandeis with OCR.
Four months later, a three-judge panel of the Second Circuit Court of Appeals produced an opinion that lacked both Saylor’s rhetorical flourish and his understanding of the basic unfairness of the campus Title IX process. But by creating a more relaxed standard for accused students to make federal Title IX claims, the Second Circuit’s decision in Doe v. Columbia carried considerable weight.
Two Columbia students who had been drinking had a brief sexual encounter at a party. More than four months later, the accuser claimed she was too intoxicated to have consented. Her allegation came in an atmosphere of campus outrage about the university’s allegedly insufficient toughness on sexual assault. In this setting, the accused student found Columbia’s Title IX investigator uninterested in hearing his side of the story. He cited witnesses who would corroborate his belief that the accuser wasn’t intoxicated; the investigator declined to speak with them. The student was found guilty, although for reasons differing from the initial claim; the Columbia panel ruled that he had “directed unreasonable pressure for sexual activity toward the [accuser] over a period of weeks,” leaving her unable to consent on the night in question. He received a three-semester suspension for this nebulous offense—which even his accuser deemed too harsh. He sued, and the case was assigned to Judge Jesse Furman.
Furman’s opinion provided a ringing victory for Columbia and the Obama-backed policies it used. As Title IX litigator Patricia Hamill later observed, Furman’s “almost impossible standard” required accused students to have inside information about the institution’s handling of other sexual-assault claims—information they could plausibly obtain only through the legal process known as discovery, which happens at a later stage of litigation—in order to survive a university’s initial motion to dismiss. Furman suggested that, to prevail, an accused student would need to show that his school treated a female student accused of sexual assault more favorably, or at least provide details about how cases against other accused students showed a pattern of bias. But federal privacy law keeps campus disciplinary hearings private, leaving most accused students with little opportunity to uncover the information before their case is dismissed.
At the same time, the opinion excused virtually any degree of unfairness by the institution. Furman reasoned that taking “allegations of rape on campus seriously and . . . treat[ing] complainants with a high degree of sensitivity” could constitute “lawful” reasons for university unfairness toward accused students. Samantha Harris of the Foundation for Individual Rights in Education detected the decision’s “immediate and nationwide impact” in several rulings against accused students. It also played the same role in university briefs that Saylor’s Brandeis opinion did in filings by accused students.
The Columbia student’s lawyer, Andrew Miltenberg, appealed Furman’s ruling to the Second Circuit. The stakes were high, since a ruling affirming the lower court’s reasoning would have all but foreclosed Title IX lawsuits by accused students in New York, Connecticut, and Vermont. But a panel of three judges, all nominated by Democratic presidents, overturned Furman’s decision. In the opinion’s crucial passage, Judge Pierre Leval held that a university “is not excused from liability for discrimination because the discriminatory motivation does not result from a discriminatory heart, but rather from a desire to avoid practical disadvantages that might result from unbiased action. A covered university that adopts, even temporarily, a policy of bias favoring one sex over the other in a disciplinary dispute, doing so in order to avoid liability or bad publicity, has practiced sex discrimination, notwithstanding that the motive for the discrimination did not come from ingrained or permanent bias against that particular sex.” Before the Columbia decision, courts almost always had rebuffed Title IX pleadings from accused students. More recently, judges have allowed Title IX claims to proceed against Amherst, Cornell, California–Santa Barbara, Drake, and Rollins.
After the Second Circuit’s decision, Columbia settled with the accused student, sparing its Title IX decision-makers from having to testify at a trial. James Madison was one of the few universities to take a different course, with disastrous results. A lawsuit from an accused student survived a motion to dismiss, but the university refused to settle, allowing the student’s lawyer to depose the three school employees who had decided his client’s fate. One unintentionally revealed that he had misapplied the university’s own definition of consent. Another cited the accuser’s slurred words on a voicemail as proof of her extreme intoxication on the night of the alleged assault. It was left to the accused student’s lawyer, at a deposition months after the decision had been made, to note that the voicemail in question actually was received on a different night. In December 2016, Judge Elizabeth Dillon, an Obama nominee, granted summary judgment to the accused student, concluding that “significant anomalies in the appeal process” violated his due-process rights under the Constitution.

Universities were on the losing side of 36 due-process rulings while Obama appointee Catherine Lhamon presided over the Office for Civil Rights between 2013 and 2016; no record exists of her publicly acknowledging any of them. In June 2017, however, Lhamon suddenly rejoiced that “yet another federal court” had found that students disciplined for sexual misconduct “were not denied due process.” That Fifth Circuit decision, involving two former students at the University of Houston, was an odd case for her to celebrate. The majority cabined its findings to the “unique facts” of the case—that the accused students likely would have been found guilty even under the fairest possible process. And the dissent, from Judge Edith Jones, denounced the procedures championed by Lhamon and other Obama officials as “heavily weighted in favor of finding guilt,” predicting “worse to come if appellate courts do not step in to protect students’ procedural due process right where allegations of quasi-criminal sexual misconduct arise.”
At this stage, Lhamon, who now chairs the U.S. Commission on Civil Rights, cannot be taken seriously when it comes to questions of campus due process. But other defenders of the current Title IX regime have offered more substantive commentary about the university setbacks.
Legal scholar Michelle Anderson was one of the few to even discuss the due-process decisions. “Colleges and universities do not always adjudicate allegations of sexual assault well,” she noted in a 2016 law review article defending the Obama-era policies. Anderson even conceded that some colleges had denied “accused students fairness in disciplinary adjudication.” But these students sued, “and campuses are responding—as they must—when accused students prevail. So campuses face powerful legal incentives on both sides to address campus sexual assault, and to do so fairly and impartially.”
This may be true, but Anderson does not explain why wrongly accused students should bear the financial and emotional burden of inducing their colleges to implement fair procedures. More important, scant evidence exists that colleges have responded to the court victories of wrongly accused students by creating fairer procedures. Some have even made it more difficult for wrongly accused students to sue. After losing a lawsuit in December 2014, Brown eliminated the right of students accused of sexual assault to have “every opportunity” to present evidence. That same year, an accused student showed how Swarthmore had deviated from its own procedures in his case. The college quickly settled the lawsuit—and then added a clause to its procedures immunizing it from similar claims in the future. Swarthmore currently informs accused students that “rules of evidence ordinarily found in legal proceedings shall not be applied, nor shall any deviations from any of these prescribed procedures alone invalidate a decision.”
Many lawsuits are still working their way through the judicial system; three cases are pending at federal appellate courts. In the two that address substantive matters, oral arguments seemed to reveal skepticism about the universities’ positions. On July 26, a three-judge panel of the First Circuit considered a case at Boston College, where the accused student plausibly argued that someone else had committed the sexual assault (which occurred on a poorly lit dance floor). Judges Bruce Selya and William Kayatta seemed troubled that a Boston College dean had improperly intruded on the hearing board’s deliberations. At the Sixth Circuit a few days later, Judges Richard Griffin and Amul Thapar both expressed concerns about the University of Cincinnati’s downplaying the importance of cross-examination in campus-sex adjudications. Judge Eric Clay was quieter, but he wondered about the tension between the university’s Title IX and truth-seeking obligations.
In a perfect world, academic leaders themselves would have created fairer processes without judicial intervention. But in the current campus environment, such an approach is impossible. So, at least for the short term, the courts remain the best, albeit imperfect, option for students wrongly accused of sexual assault. Meanwhile, every year, young men entrust themselves and their family’s money to institutions of higher learning that are indifferent to their rights and unconcerned with the injustices to which these students might be subjected.
1 After a district court placed that finding on hold, the university appealed to the Sixth Circuit.
Review of 'Terror in France' By Gilles Kepel
Kepel is particularly knowledgeable about the history and process of radicalization that takes place in his nation’s heavily Muslim banlieues (the depressed housing projects ringing Paris and other major cities), and Terror in France is informed by decades of fieldwork in these volatile locales. What we have been witnessing for more than a decade, Kepel argues, is the “third wave” of global jihadism, which is not so much a top-down, doctrinally inspired campaign (as were the 9/11 attacks, directed from afar by the oracular figure of Osama bin Laden) as a bottom-up insurgency with an “enclave-based ethnic-racial logic of violence” to it. Kepel traces the phenomenon back to 2005, a convulsive year that saw the second-generation descendants of France’s postcolonial Muslim immigrants confront a changing socio-political landscape.
That was the year of the greatest riots in modern French history, involving mostly young Muslim men. It was also the year that Abu Musab al-Suri, the Syrian-born Islamist then serving as al-Qaeda’s operations chief in Europe, published The Global Islamic Resistance Call. This 1,600-page manifesto combined pious imprecations against the West with do-it-yourself ingenuity, an Anarchist’s Cookbook for the Islamist set. In Kepel’s words, the manifesto preached a “jihadism of proximity,” the brand of civil war later adopted by the Islamic State. It called for ceaseless, mass-casualty attacks in Western cities—attacks which increase suspicion and regulation of Muslims and, in turn, drive those Muslims into the arms of violent extremists.
The third-generation jihad has been assisted by two phenomena: social-networking sites that easily and widely disseminate Islamist propaganda (thus increasing the rate of self-radicalization) and the so-called Arab Spring, which led to state collapse in Syria and Libya, providing “an exceptional site for military training and propaganda only a few hours’ flight from Europe, and at a very low cost.”
Kepel’s book is not just a study of the ideology and tactics of Islamists but a sociopolitical overview of how this disturbing phenomenon fits within a country on the brink. For example, Kepel finds that jihadism is emerging in conjunction with developments such as the “end of industrial society.” A downturn in work has led to an ominous situation in which a “right-wing ethnic nationalism” preying on the economically anxious has risen alongside Islamism as “parallel conduits for expressing grievances.” Filling a space left by the French Communist Party (which once brought the ethnic French working class and Arab immigrants together), these two extremes leer at each other from opposite sides of a societal chasm, signaling the potentially cataclysmic future that awaits France if both mass unemployment and Islamist terror continue undiminished.
The French economy has also had a more direct inciting effect on jihadism. Overregulated labor markets make it difficult for young Muslims to get jobs, thus exacerbating the conditions of social deprivation and exclusion that make individuals susceptible to radicalization. The inability to tackle chronic unemployment has led to widespread Muslim disillusionment with the left (a disillusionment aggravated by another, often glossed over, factor: widespread Muslim opposition to the Socialist Party’s championing of same-sex marriage). Essentially, one left-wing constituency (unions) has made the unemployment of another constituency (Muslim youth) the mechanism for maintaining its privileges.
Kepel does not, however, cite deprivation as the sole or even main contributing factor to Islamist radicalization. One Parisian banlieue that has sent more than 80 residents to fight in Syria, he notes, has “attractive new apartment buildings” built by the state and features a mosque “constructed with the backing of the Socialist mayor.” It is also the birthplace of well-known French movie stars of Arab descent, and thus hardly a place where ambition goes to die. “The Islamophobia mantra and the victim mentality it reinforces makes it possible to rationalize a total rejection of France and a commitment to jihad by making a connection between unemployment, discrimination, and French republican values,” Kepel writes. Indeed, Kepel is refreshingly derisive of the term “Islamophobia” throughout the book, excoriating Islamists and their fellow travelers for “substituting it for anti-Semitism as the West’s cardinal sin.” These are meaningful words coming from Kepel, a deeply learned scholar of Islam who harbors great respect for the faith and its adherents.
Kepel also weaves the saga of jihadism into the ongoing “kulturkampf within the French left.” Arguments about Islamist terrorism demonstrate a “divorce between a secular progressive tradition” and the children of the Muslim immigrants this tradition fought to defend. The most ironically perverse manifestation of this divorce was ISIS’s kidnapping of Didier François, co-founder of the civil-rights organization SOS Racisme. Kepel recognizes the origins of this divorce in the “red-green” alliance formed decades ago between Islamists and elements of the French intellectual left, such as Michel Foucault, a cheerleader of the Iranian revolution.
Though he offers a rigorous history and analysis of the jihadist problem, Kepel is generally at a loss for solutions. He decries a complacent French elite, with its disregard for genuine expertise (evidenced by the decline in institutional academic support for Islamicists and Arabists) and the narrow, relatively impenetrable way in which it perpetuates itself, chiefly with a single school (the École nationale d’administration) that practically every French politician must attend. Despite France’s admirable republican values, this insularity has made the process of assimilation rather difficult. But other than wishing that the public education system become more effective and inclusive at instilling republican values, Kepel provides little in the way of suggestions as to how France might emerge from this mess. That a scholar of such erudition and humanity can do little but throw up his hands and issue a sigh of despair cannot bode well. The third-generation jihad owes as much to the political breakdown in France as it does to the meltdown in the Middle East. Defeating this two-headed beast requires a new and comprehensive playbook: the West’s answer to The Global Islamic Resistance Call. That book has yet to be written.
President Trump, in case you haven’t noticed, has a tendency to exaggerate. Nothing is “just right” or “meh” for him. Buildings, crowds, election results, and military campaigns are always outsized, gargantuan, larger, and more significant than you might otherwise assume. “People want to believe that something is the biggest and the greatest and the most spectacular,” he wrote 30 years ago in The Art of the Deal. “I call it truthful hyperbole. It’s an innocent form of exaggeration—and a very effective form of promotion.”
So effective, in fact, that the press has picked up the habit. Reporters and editors agree with the president that nothing he does is ordinary. After covering Trump for more than two years, they still can’t accept him as a run-of-the-mill politician. And while there are aspects of Donald Trump and his presidency that are, to say the least, unusual, the media seem unable to distinguish between the abnormal and significant—firing the FBI director in the midst of an investigation into one’s presidential campaign, for example—and the commonplace.
Consider the fiscal deal President Trump struck with Democratic leaders in early September.
On September 6, the president held an Oval Office meeting with Vice President Pence, Treasury Secretary Mnuchin, and congressional leaders of both parties. He had to find a way to (a) raise the debt ceiling, (b) fund the federal government, and (c) spend money on hurricane relief. The problem is that a bloc of House Republicans won’t vote for (a) unless the increase is accompanied by significant budget cuts, which interferes with (b) and (c). To raise the debt ceiling, then, requires Democratic votes. And the debt ceiling must be raised. “There is zero chance—no chance—we will not raise the debt ceiling,” Senate Majority Leader Mitch McConnell said in August.
The meeting went like this. First, House Speaker Paul Ryan asked for an 18-month increase in the debt ceiling so Republicans wouldn’t have to vote again on the matter until after the midterm elections. Democrats refused. The bargaining continued until Ryan asked for a six-month increase. The Democrats remained stubborn. So Trump, always willing to kick a can down the road, interrupted Mnuchin to offer a three-month increase, a continuing resolution that would keep the government open through December, and about $8 billion in hurricane money. The Democrats said yes.
That, anyway, is what happened. But the media are not satisfied to report what happened. They want—they need—to tell you what it means. And what does it mean? Well, they aren’t really sure. But it’s something big. It’s something spectacular. For example:
1. “Trump Bypasses Republicans to Strike Deal on Debt Limit and Harvey Aid” was the headline of a story for the New York Times by Peter Baker, Thomas Kaplan, and Michael D. Shear. “The deal to keep the government open and paying its debts until Dec. 15 represented an extraordinary public turn for the president, who has for much of his term set himself up on the right flank of the Republican Party,” their article began. Fair enough. But look at how they import speculation and opinion into the following sentence: “But it remained unclear whether Mr. Trump’s collaboration with Democrats foreshadowed a more sustained shift in strategy by a president who has presented himself as a master dealmaker or amounted to just a one-time instinctual reaction of a mercurial leader momentarily eager to poke his estranged allies.”
2. “The decision was one of the most fascinating and mysterious moves he’s made with Congress during eight months in office,” reported Jeff Zeleny, Dana Bash, Deirdre Walsh, and Jeremy Diamond for CNN. Thanks for sharing!
3. “Trump budget deal gives GOP full-blown Stockholm Syndrome,” read the headline of Tina Nguyen’s piece for Vanity Fair. “Donald Trump’s unexpected capitulation to new best buds ‘Chuck and Nancy’ has thrown the Grand Old Party into a frenzy as Republicans search for explanations—and scapegoats.”
4. “For Conservatives, Trump’s Deal with Democrats Is Nightmare Come True,” read the headline for a New York Times article by Jeremy W. Peters and Maggie Haberman. “It is the scenario that President Trump’s most conservative followers considered their worst nightmare, and on Wednesday it seemed to come true: The deal-making political novice, whose ideology and loyalty were always fungible, cut a deal with Democrats.”
5. “Trump sides with Democrats on fiscal issues, throwing Republican plans into chaos,” read the Washington Post headline the day after the deal was announced. “The president’s surprise stance upended sensitive negotiations over the debt ceiling and other crucial policy issues this fall and further imperiled his already tenuous relationships with Senate Majority Leader Mitch McConnell and House Speaker Paul Ryan.” Yes, the negotiations were upended. Then they made a deal.
6. “Although elected as a Republican last year,” wrote Peter Baker of the Times, “Mr. Trump has shown in the nearly eight months in office that he is, in many ways, the first independent to hold the presidency since the advent of the two-party system around the time of the Civil War.” The title of Baker’s news analysis: “Bound to No Party, Trump Upends 150 Years of Two-Party Rule.” One hundred and fifty years? Why not 200?
The journalistic rule of thumb used to be that an article describing a political, social, or cultural trend requires at least three examples. Not while covering Trump. If Trump does something, anything, you should feel free to inflate its importance beyond all recognition. And stuff your “reporting” with all sorts of dramatic adjectives and frightening nouns: fascinating, mysterious, unexpected, extraordinary, nightmare, chaos, frenzy, and scapegoats. It’s like a Vince Flynn thriller come to life.
The case for the significance of the budget deal would be stronger if there were a consensus about whom it helped. There isn’t one. At first the press assumed Democrats had won. “Republicans left the Oval Office Wednesday stunned,” reported Rachael Bade, Burgess Everett, and Josh Dawsey of Politico. Another trio of Politico reporters wrote, “In the aftermath, Republicans seethed privately and distanced themselves publicly from the deal.” Republicans were “stunned,” reported Kristina Peterson, Siobhan Hughes, and Louise Radnofsky of the Wall Street Journal. “Meet the swamp: Donald Trump punts September agenda to December after meeting with Congress,” read the headline of Charlie Spiering’s Breitbart story.
By the following week, though, these very outlets had decided the GOP was looking pretty good. “Trump’s deal with Democrats bolsters Ryan—for now,” read the Politico headline on September 11. “McConnell: No New Debt Ceiling Vote until ‘Well into 2018,’” reported the Washington Post. “At this point…picking a fight with Republican leaders will only help him,” wrote Gerald Seib in the Wall Street Journal. “Trump has long warned that he would work with Democrats, if necessary, to fulfill his campaign promises. And Wednesday’s deal is a sign that he intends to follow through on that threat,” wrote Breitbart’s Joel Pollak.
The sensationalism, the conflicting interpretations, the visceral language are dizzying. We have so many reporters chasing the same story that each feels compelled to gussy up a quotidian budget negotiation until it resembles the Ribbentrop–Molotov pact, and none feel it necessary to apply to their own reporting the scrutiny and incredulity they apply to Trump. The truth is that no one knows what this agreement portends. Nor is it the job of a reporter to divine the meaning of current events like an augur of Rome. Sometimes a cigar is just a cigar. And a deal is just a deal.
Remembering something wonderful
Not surprisingly, many well-established performers were left in the lurch by the rise of the new media. Moreover, some vaudevillians who, like Fred Allen, had successfully reinvented themselves for radio were unable to make the transition to TV. But a handful of exceptionally talented performers managed to move from vaudeville to radio to TV, and none did it with more success than Jack Benny, whose feigned stinginess, scratchy violin playing, slightly effeminate demeanor, and preternaturally exact comic timing made him one of the world’s most beloved performers. After establishing himself in vaudeville, he became the star of a comedy series, The Jack Benny Program, that aired continuously, first on radio and then TV, from 1932 until 1965. Save for Bob Hope, no other comedian of his time was so popular.
With the demise of nighttime network radio as an entertainment medium, the 931 weekly episodes of The Jack Benny Program became the province of comedy obsessives—and because Benny’s TV series was filmed in black-and-white, it is no longer shown in syndication with any regularity. And while he also made Hollywood films, some of which were box-office hits, only one, Ernst Lubitsch’s To Be or Not to Be (1942), is today seen on TV other than sporadically.
Nevertheless, connoisseurs of comedy still regard Benny, who died in 1974, as a giant, and numerous books, memoirs, and articles have been published about his life and art. Most recently, Kathryn H. Fuller-Seeley, a professor at the University of Texas at Austin, has brought out Jack Benny and the Golden Age of Radio Comedy, the first book-length primary-source academic study of The Jack Benny Program and its star.1 Fuller-Seeley’s genuine appreciation for Benny’s work redeems her anachronistic insistence on viewing it through the fashionable prism of gender- and race-based theory, and her book, though sober-sided to the point of occasional starchiness, is often quite illuminating.
Most important of all, off-the-air recordings of 749 episodes of the radio version of The Jack Benny Program survive in whole or part and can easily be downloaded from the Web. As a result, it is possible for people not yet born when Benny was alive to hear for themselves why he is still remembered with admiration and affection—and why one specific aspect of his performing persona continues to fascinate close observers of the American scene.

Born Benjamin Kubelsky in Chicago in 1894, Benny was the son of Eastern European émigrés (his father was from Poland, his mother from Lithuania). He started studying violin at six and had enough talent to pursue a career in music, but his interests lay elsewhere, and by the time he was a teenager, he was working in vaudeville as a comedian who played the violin as part of his act. Over time he developed into a “monologist,” the period term for what we now call a stand-up comedian, and he began appearing in films in 1929 and on network radio three years after that.
Radio comedy, like silent film, is now an obsolete art form, but the program formats that it fostered in the ’20s and ’30s all survived into the era of TV, and some of them flourish to this day. One, episodic situation comedy, was developed in large part by Jack Benny and his collaborators. Benny and Harry Conn, his first full-time writer, turned his weekly series, which started out as a variety show, into a weekly half-hour playlet featuring a regular cast of characters augmented by guest stars. Such playlets, relying as they did on a setting that was repeated from week to week, were easier to write than the free-standing sketches favored by Allen, Hope, and other ex-vaudevillians, and by the late ’30s, the sitcom had become a staple of radio comedy.
The process, as documented by Fuller-Seeley, was a gradual one. The Jack Benny Program never broke entirely with the variety format, continuing to feature both guest stars (some of whom, like Ronald Colman, ultimately became semi-regular members of the show’s rotating ensemble of players) and songs sung by Dennis Day, a tenor who joined the cast in 1939. Nor was it the first radio situation comedy: Amos ’n’ Andy, launched in 1928, was a soap-opera-style daily serial that also featured regular characters. Nevertheless, it was Benny who perfected the form, and his own character would become the prototype for countless later sitcom stars.
The show’s pivotal innovation was to turn Benny and the other cast members into fictionalized versions of themselves—they were the stars of a radio show called “The Jack Benny Program.” Sadye Marks, Benny’s wife, played Mary Livingstone, his sharp-tongued secretary, with three other characters added as the self-reflexive concept took shape. Don Wilson, the stout, genial announcer, came on board in 1934. He was followed in 1936 by Phil Harris, Benny’s roguish bandleader, and, in 1939, by Day, Harris’s simple-minded vocalist. To this team was added a completely fictional character, Rochester Van Jones, Benny’s raspy-voiced, outrageously impertinent black valet, played by Eddie Anderson, who joined the cast in 1938.
As these five talented performers coalesced into a tight-knit ensemble, the jokey, vaudeville-style sketch comedy of the early episodes metamorphosed into sitcom-style scripts that portrayed their offstage lives, as well as the making of the show itself. Scarcely any conventional jokes were told, nor did Benny’s writers employ the topical and political references in which Allen and Hope specialized. Instead, the show’s humor arose almost entirely from the close interplay of character and situation.
Benny was not solely responsible for the creation of this format, which was forged by Conn and perfected by his successors. Instead, he doubled as the star and producer—or, to use the modern term, show runner—closely supervising the writing of the scripts and directing the performances of the other cast members. In addition, he and Conn turned the character of Jack Benny from a sophisticated vaudeville monologist into the hapless butt of the show’s humor, a vain, sexually inept skinflint whose character flaws were ceaselessly twitted by his colleagues, who in turn were given most of the biggest laugh lines.
This latter innovation was a direct reflection of Benny’s real-life personality. Legendary for his voluble appreciation of other comedians, he was content to respond to the wisecracking of his fellow cast members with exquisitely well-timed interjections like “Well!” and “Now, cut that out,” knowing that the comic spotlight would remain focused on the man of whom they were making fun and secure in the knowledge that his own comic personality was strong enough to let them shine without eclipsing him in the process.
And with each passing season, the fictional personalities of Benny and his colleagues became ever more firmly implanted in the minds of their listeners, thus allowing the writers to get laughs merely by alluding to their now-familiar traits. At the same time, Benny and his writers never stooped to coasting on their familiarity. Even the funniest of the “cheap jokes” that were their stock-in-trade were invariably embedded in carefully honed dramatic situations that heightened their effectiveness.
A celebrated case in point is the best-remembered laugh line in the history of The Jack Benny Program, heard in a 1948 episode in which a burglar holds Benny up on the street. “Your money or your life,” the burglar says—to which Jack replies, after a very long pause, “I’m thinking it over!” What makes this line so funny is, of course, our awareness of Benny’s stinginess, reinforced by a decade and a half of constant yet subtly varied repetition. What is not so well remembered is that the line is heard toward the end of an episode that aired shortly after Ronald Colman won an Oscar for his performance in A Double Life. Inspired by this real-life event, the writers concocted an elaborately plotted script in which Benny talks Colman (who played his next-door neighbor on the show) into letting him borrow the Oscar to show to Rochester. It is on his way home from this errand that Benny is held up, and the burglar not only robs him of his money but also steals the statuette, a situation that was resolved to equally explosive comic effect in the course of two subsequent episodes.
No mere joke-teller could have performed such dramatically complex scripts week after week with anything like Benny’s effectiveness. The secret of The Jack Benny Program was that its star, fully aware that he was not “being himself” but playing a part, did so with an actor’s skill. This was what led Ernst Lubitsch to cast him in To Be or Not to Be, in which he plays a mediocre Shakespearean tragedian, a character broadly related to but still quite different from the one who appeared on his own radio show. As Lubitsch explained to Benny, who was skeptical about his ability to carry off the part:
A clown—he is a performer what is doing funny things. A comedian—he is a performer what is saying funny things. But you, Jack, you are an actor, you are an actor playing the part of a comedian and this you are doing very well.
To Be or Not to Be also stands out from the rest of Benny’s work because he plays an identifiably Jewish character. The Jack Benny character that he played on radio and TV, by contrast, was never referred to or explicitly portrayed as Jewish. To be sure, most listeners were in no doubt of his Jewishness, and not merely because Benny made no attempt in real life to conceal his ethnicity, of which he was by all accounts proud. The Jack Benny Program was written by Jews, and the ego-puncturing insults with which their scripts were packed, as well as the schlemiel-like aspect of Benny’s “fall guy” character, were quintessentially Jewish in style.
As Benny explained in a 1948 interview cited by Fuller-Seeley:
The humor of my program is this: I’m a big shot, see? I’m fast-talking. I’m a smart guy. I’m boasting about how marvelous I am. I’m a marvelous lover. I’m a marvelous fiddle player. Then, five minutes after I start shooting off my mouth, my cast makes a shmo out of me.
Even so, his avoidance of specific Jewish identification on the air is noteworthy precisely because his character was a miser. At a time when overt anti-Semitism was still common in America, it is remarkable that Benny’s comic persona was based in large part on an anti-Semitic stereotype—yet one that seems not to have inspired any anti-Semitic attacks on Benny himself. When, in 1945, his writers came up with the idea of an “I Can’t Stand Jack Benny Because . . . ” write-in campaign, they received 270,000 entries. Only three made mention of his Jewishness.
As for the winning entry, submitted by a California lawyer, it says much about what insulated Benny from such attacks: “He fills the air with boasts and brags / And obsolete, obnoxious gags / The way he plays his violin / Is music’s most obnoxious sin / His cowardice alone, indeed, / Is matched by his obnoxious greed / And all the things that he portrays / Show up MY OWN obnoxious ways.” It is clear that Benny’s foibles were seen by his listeners not as particular but universal, just as there was no harshness in the razzing of his fellow cast members, who very clearly loved the Benny character in spite of his myriad flaws. So, too, did the American people. Several years after his TV series was canceled, a corporation that was considering using him as a spokesman commissioned a national poll to find out how popular he was. It learned that only 3 percent of the respondents disliked him.
Therein lay Benny’s triumph: He won total acceptance from the American public and did so by embodying a Jewish stereotype from which the sting of prejudice had been leached. Far from being a self-hating whipping boy for anti-Semites, he turned himself into WASP America’s Jewish uncle, preposterous yet lovable.

When the bottom fell out of network radio, Benny negotiated the move to TV without a hitch, debuting on the small screen in 1950 and bringing the radio version of The Jack Benny Program to a close five years later, making it one of the very last radio comedy series to shut up shop. Even after his weekly TV series was finally canceled by CBS in 1965, he continued to star in well-received one-shot specials on NBC.
But Benny’s TV appearances, for all their charm, were never quite equal in quality to his radio work, which is why he clung to the radio version of The Jack Benny Program until network radio itself went under: Better than anyone else, he knew how good the show had been. For the rest of his life, he lived off the accumulated comic capital built up by 21 years of weekly radio broadcasts.
Now, at long last, he belongs to the ages, and The Jack Benny Program is a museum piece. Yet it remains hugely influential, albeit at one or more removes from the original. From The Dick Van Dyke Show and The Danny Thomas Show to Seinfeld, Everybody Loves Raymond, and The Larry Sanders Show, every ensemble-cast sitcom whose central character is a fictionalized version of its star is based on Benny’s example. And now that the ubiquity of the Web has made the radio version of his series readily accessible for the first time, anyone willing to make the modest effort necessary to seek it out is in a position to discover that The Jack Benny Program, six decades after it left the air, is still as wonderfully, benignly funny as it ever was, a monument to the talent of the man who, more than anyone else, made it so.
Review of 'The Transferred Life of George Eliot' By Philip Davis
Not that there’s any danger these theoretically protesting students would have read George Eliot’s works—not even the short one, Silas Marner (1861), which in an earlier day was assigned to high schoolers. I must admit I didn’t find my high-school reading of Silas Marner a pleasant experience—sports novels for boys like John R. Tunis’s The Kid from Tomkinsville were inadequate preparation. I must confess, too, that when I was in graduate school, determined to study 17th-century English verse, my reaction to the suggestion that I should also read Middlemarch (1871–72) was “What?! An 800-page novel by the guy who wrote Silas Marner?” A friend patiently explained that “the guy” was actually Mary Ann Evans, born in 1819, died in 1880. Partly because she was living in sin with the literary jack-of-all-trades George Henry Lewes (legally and irrevocably bound to his estranged wife), she adopted “George Eliot” as a protective pseudonym when, in her 1857 debut, she published Scenes of Clerical Life.
I did, many times over and with awe and delight, go on to read Middlemarch and the seven other novels, often in order to teach them to college students. Students have become less and less receptive over the years. Forget modern-day objections to George Eliot’s complex political or religious views; the obstacle was sheer length. Adam Bede (1859) and The Mill on the Floss (1860) were too hefty, and the triple-decked Middlemarch and Daniel Deronda (1876), even if I set aside three weeks for them, rarely got finished.
The middle 20th century was perhaps a more propitious time for appreciating George Eliot, Henry James, and other 19th-century English and American novelists. Influential teachers like F.R. Leavis at Cambridge and Lionel Trilling at Columbia were then working hard to persuade students that the study of literature, not just poetry and drama but also fiction, matters both to their personal lives—the development of their sensibility or character—and to their wider society. The “moral imagination” that created Middlemarch enriches our minds by dramatizing the complications—the frequent blurring of good and evil—in our lives. Great novels help us cope with ambiguities and make us more tolerant of one another. Many of Leavis’s and Trilling’s students became teachers themselves, and for several decades the feeling of cultural urgency was sustained. In the 1970s, though, between the leftist emphasis on literature as “politics by other means” and the deconstructionist denial of the possibility of any knowledge, literary or otherwise, independent of political power, the high seriousness of Leavis and Trilling began to fade.
The study of George Eliot and her life has gone through many stages. Directly after her death came the sanitized, hagiographic “life and letters” by J.W. Cross, the much younger man she married after Lewes’s death. Gladstone called it “a Reticence in three volumes.” The three volumes helped spark, if they didn’t cause, the long reaction against the Victorian sages generally that culminated in the dismissively satirical work of the Bloomsbury biographer and critic Lytton Strachey in his immensely influential Eminent Victorians (1918). Strachey’s mistreatment of his forebears was, with regard to George Eliot at least, tempered almost immediately by Virginia Woolf. It was Woolf who in 1919 provocatively described Middlemarch as “one of the few English novels written for grown-up people.” Eventually, the critical tide against George Eliot was decisively reversed in the ’40s by Joan Bennett and Leavis, who made the inarguable case for her genuine and lasting achievement. That period of correction culminated in the 1960s with Gordon S. Haight’s biography and with interpretive studies by Barbara Hardy and W.J. Harvey. Books on George Eliot over the last four decades have largely been written by specialists for specialists—on her manuscripts or working notes, and on her affiliations with the scientists, social historians, and competing novelists of her day.
The same is true, only more so, of the books written, with George Eliot as the ostensible subject, to promote deconstructionist or feminist agendas. Biographies have done a better job appealing to the common reader, not least because the woman’s own story is inherently compelling. The question right now is whether a book combining biographical and interpretive insight—one “pitched,” as publishers like to say, not just at experts but at the common reader—is past praying for.
Philip Davis, a Victorian scholar and an editor at Oxford University Press, hopes not. His The Transferred Life of George Eliot—transferred, that is, from her own experience into her letters, journals, essays, and novels, and beyond them into us—deserves serious attention. Davis is conscious that George Eliot called biographies of writers “a disease of English literature,” both overeager to discover scandals and too inclined to substitute day-to-day travels, relationships, dealings with publishers and so on, for critical attention to the books those writers wrote. Davis therefore devotes himself to George Eliot’s writing. Alas, he presumes rather too much knowledge on the reader’s part of the day-to-day as charted in Haight’s marvelous life. (A year-by-year chronology at the front of the book would have helped even his fellow Victorianists.)
As for George Eliot’s writing, Davis is determined to refute “what has been more or less said . . . in the schools of theory for the last 40 years—that 19th-century realism is conservatively bland and unimaginative, bourgeois and parochial, not truly art at all.” His argument for the richness, breadth, and art of George Eliot’s realism—her factual and sympathetic depiction of poor and middling people, without omitting a candid representation of the rich—is most convincing. What looms largest, though, is the realist, the woman herself—the Mary Ann Evans who, from the letters to the novels, became first Marian Evans the translator and essayist and then later “her own greatest character”: George Eliot the novelist. Davis insists that “the meaning of that person”—not merely the voice of her omniscient narrators but the omnipresent imagination that created the whole show—“has not yet exhausted its influence nor the larger future life she should have had, and may still have, in the world.”
The transference of George Eliot’s experience into her fiction is unquestionable: In The Mill on the Floss, for example, Mary Ann is Maggie, and her brother Isaac is Tom Tulliver. Davis knows that a better word might be transmutation, as George Eliot had, in Henry James’s words, “a mind possessed,” for “the creations which brought her renown were of the incalculable kind, shaped themselves in mystery, in some intellectual back-shop or secret crucible, and were as little as possible implied in the aspect of her life.” No data-accumulating biographer, even the most exhaustive, can account for that “incalculable . . . mystery.”
Which is why Davis, like a good teacher, gives us exercises in “close reading.” He pauses to consider how a George Eliot sentence balances or turns on an easy-to-skip-over word or phrase—the balance or turn often representing a moment when the novelist looks at what’s on the underside of the cards.
George Eliot’s style is subtle because her theme is subtle. Take D.H. Lawrence’s favorite heroine, the adolescent Maggie Tulliver. The external event in The Mill on the Floss may be the girl’s impulsive cutting off her unruly hair to spite her nagging aunts, or the young woman’s drifting down the river with a superficially attractive but truly impossible boyfriend. But the real “action” is Maggie’s internal self-blame and self-assertion. No Victorian novelist was better than George Eliot at tracing the psychological development of, say, a husband and wife who realize they married each other for shallow reasons, are unhappy, and now must deal with the ordinary necessities of balancing the domestic budget—Lydgate and Rosamond in Middlemarch—or, in the same novel, the religiously inclined Dorothea’s mistaken marriage to the old scholar Casaubon. That mistake precipitates not merely disenchantment and an unconscious longing for love with someone else, but (very finely) a quest for a religious explanation of and guide through her quandary.
It’s the religio-philosophical side of George Eliot about which Davis is strongest—and weakest. Her central theological idea, if one may simplify, was that the God of the Bible didn’t exist “out there” but was a projection of the imagination of the people who wrote it. Jesus wasn’t, in Davis’s characterization of her view, “the impervious divine, but [a man who] shed tears and suffered,” and died feeling forsaken. “This deep acceptance of so-called weakness was what most moved Marian Evans in her Christian inheritance. It was what God was for.” That is, the character of Jesus, and the dramatic play between him and his Father, expressed the human emotions we and George Eliot are all too familiar with. The story helps reconcile us to what is, finally, inescapable suffering.
George Eliot came to this demythologized understanding not only of Judaism and Christianity but of all religions through her contact first with a group of intellectuals who lived near Coventry, then with two Germans she translated: David Friedrich Strauss, whose 1,500-page Life of Jesus Critically Examined (1835–36) was for her a slog, and Ludwig Feuerbach, whose Essence of Christianity (1841) was for her a joy. Also, in the search for the universal morality that Strauss and Feuerbach believed Judaism and Christianity expressed mythically, there was Spinoza’s utterly non-mythical Ethics (1677). It was seminal for her—offering, as Davis says, “the intellectual origin for freethinking criticism of the Bible and for the replacement of religious superstition and dogmatic theology by pure philosophic reason.” She translated it into English, though her version did not appear until 1981.
I wish Davis had left it there, but he takes it too far. He devotes more than 40 pages—a tenth of the whole book—to her three translations, taking them as a mother lode of ideational gold whose tailings glitter throughout her fiction. These 40 pages are followed by 21 devoted to Herbert Spencer, the Victorian hawker of theories-of-everything (his 10-volume System of Synthetic Philosophy addresses biology, psychology, sociology, and ethics). She threw herself at the feet of this intellectual huckster, and though he rebuffed her painfully amorous entreaties, she never ceased revering him. Alas, Spencer was a stick—the kind of philosopher who was incapable of emotion. And she was his intellectual superior in every way. The chapter is largely unnecessary.
The book comes back to life when Davis turns to George Henry Lewes, the man who gave Mary Ann Evans the confidence to become George Eliot—perhaps the greatest act of loving mentorship in all of literature. Like many prominent Victorians, Lewes dabbled in all the arts and sciences, publishing highly readable accounts of them for a general audience. His range was as wide as Spencer’s, but his personality and writing had an irrepressible verve that Spencer could only have envied. Lewes was a sort of Stephen Jay Gould yoked to Daniel Boorstin, popularizing other people’s findings and concepts, and coming up with a few of his own. He regarded his Sea-Side Studies (1860) as “the book . . . which was to me the most unalloyed delight,” not least because Marian, whom he called Polly, had helped gather the data. She told a friend, “There is so much happiness condensed in it! Such scrambles over rocks, and peeping into clear pool [sic], and strolls along the pure sands, and fresh air mingling with fresh thoughts.” In his remarkably intelligent 1864 biography of Goethe, Lewes remarks that the poet “knew little of the companionship of two souls striving in emulous spirit of loving rivalry to become better, to become wiser, teaching each other to soar.” Such a companionship Lewes and George Eliot had in spades, and some of Davis’s best passages describe it.
Regrettably, Davis also offers many passages well below the standard of his best—needlessly repeating an already established point or obfuscating the obvious. Still, The Transferred Life is the most formidably instructive, and certainly the most complete, life-and-works treatment of George Eliot we have.